\section{Introduction}
Let $B, n \in \mathbb{N}_{>1}$ be such that
\begin{eqnarray}
\label{nosplit}
\varphi^{*}(B) := \varphi({\rm rad } \ (B)) \quad \hbox{ and } \quad (n,
\varphi^*(B)) = 1.
\end{eqnarray}
Here ${\rm rad } \ (B)$ is the radical of $B$ and the condition implies that
$B$ has no prime factors $t \equiv 1 \bmod n$. In particular, none of
its prime factors splits completely in the \nth{n} cyclotomic field.
More generally, for a fixed $B \in \mathbb{N}_{> 1}$ we let
\begin{eqnarray}
\label{expos}
\id{N}(B) = \{ n \in \mathbb{N}_{ > 1} \ | \ \exists \ k > 0 \hbox{ such that } n | {\varphi^*(B)}^k \}.
\end{eqnarray}
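For orientation, consider the small example $B = 17$ (a worked illustration, not needed in the sequel): here ${\rm rad } \ (17) = 17$ and $\varphi^{*}(17) = \varphi(17) = 16$, so condition \rf{nosplit} holds precisely for the odd exponents $n > 1$, while $\id{N}(17) = \{ n > 1 \ : \ n \ | \ 16^k \hbox{ for some } k \} = \{ 2, 4, 8, 16, \ldots \}$ consists of the powers of $2$.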
If $p$ is an odd prime, we shall denote by CF the combined condition
requiring that
\begin{enumerate}[I]
\item \label{i} The Vandiver Conjecture holds for $p$, so the class
number $h_p^+$ of the maximal real subfield of the cyclotomic field
$\mathbb{Q}[\zeta_p]$ is not divisible by $p$.
\item \label{ii} The index of irregularity of $p$ is small, namely $i_r(p)
< \sqrt{p} -1$; that is, there are $i_r(p)$ even integers $k$, with $2 \leq k \leq p-3$, such that
the Bernoulli number $B_k \equiv 0 \bmod p$.
\end{enumerate}
The second condition was introduced by Eichler as a sufficient
condition for the first case of Fermat's Last Theorem to hold. It is known from the computations of
Buhler and Harvey \cite{BH} that the condition CF is satisfied by
all primes up to $163 \cdot 10^6$.
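For orientation, a standard example: the smallest irregular prime is $p = 37$, which divides the numerator of the Bernoulli number $B_{32}$ and of no other $B_k$ with $k \leq p - 3$, so $i_r(37) = 1 < \sqrt{37} - 1$; since Vandiver's conjecture has also been verified numerically for this prime, $37$ satisfies the condition CF.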
We consider the binary Thue equation
\begin{eqnarray}
\label{bin}
X^n - 1 & = & B \cdot Z^n,
\end{eqnarray}
where solutions with $Z \in \{ -1, 0, 1\}$ are considered to be
trivial. The assertion that equation \rf{bin} has finitely many solutions
other than the trivial ones is a special case of the general Pillai conjecture
(Conjecture 13.17 of \cite{Bilu}). This equation is encountered as a
particular case of binary Thue equations of the type
\begin{eqnarray}
\label{mike}
a X^n - b Y^n = c,
\end{eqnarray}
see \cite{BGMP}. In a seminal paper \cite{Be}, Michael Bennett proves
that in the case $c = \pm 1$ there is at most one solution for
fixed $(a, b; n)$ and deduces that the parametric family $(a+1, a; n)$
has the only solution $(1,1)$ for all $n$. Equation \rf{bin}
fits naturally into the family of equations \rf{mike}, with $a = c =
\pm 1$.
A conjecture related directly to \rf{bin} states that
\begin{conjecture}
\label{bt}
Under \rf{nosplit}, Equation \rf{bin} has no other non-trivial solution than $(X, Y; B, n) =
(18, 7; 17, 3)$.
\end{conjecture}
Current results on \rf{bin} are restricted to values of $B$ which are
built up from small primes $p \leq 13$ \cite{G}. Since one expects that the equation
has no solutions -- possibly with the exception of some isolated
examples -- it is natural to consider the case when the exponent $n$
is a prime. Of course, the existence of a solution $(X, Z)$ for
composite $n$ implies the existence of solutions with $n$ prime, by
raising $X, Z$ to a power.
The main contribution of this paper will be to relate \rf{bin}
in the case when $n$ is a prime and \rf{nosplit} holds, to the
diagonal Nagell -- Ljunggren equation,
\begin{eqnarray}
\label{dn}
\frac{X^n-1}{X-1} = n^e Y^n, \quad e = \begin{cases} 0 & \hbox{ if } X
\not \equiv 1 \bmod n, \\
1 & \hbox{ otherwise.} \end{cases}
\end{eqnarray}
This way, we can apply results from \cite{Mi1} and prove the following:
\begin{theorem}
\label{tbin}
Let $n$ be a prime and $B > 1$ an integer with $(\varphi^{*}(B), n ) =
1$. Suppose that the equation \rf{bin} has a non-trivial integer
solution other than $n = 3$ and $(X, Z; B) = (18, 7; 17)$. Let $X
\equiv u \bmod n$ with $0 \leq u < n$, and set $e = 1$ if $u = 1$ and $e = 0$
otherwise. Then:
\begin{enumerate}[A]
\item $n > 163 \cdot 10^6$. \label{A}
\item $X - 1 = \pm B/n^e$ and $B < n^n$. \label{B}
\item If $u \not \in \{ -1, 0, 1\}$, then condition CF \rf{ii} fails
for $n$ and \label{C}
\begin{eqnarray*}
\begin{array}{r c r r l}
2^{n-1} & \equiv & 3^{n-1} \equiv 1 & \bmod \ n^2, & \hbox{ and } \\
r^{n-1} & \equiv & 1 & \bmod \ n^2 & \hbox{ for all $ r | X(X^2-1)$}.
\end{array}
\end{eqnarray*}
If $u \in \{ -1, 0, 1\}$, then Condition CF \rf{i} fails for $n$.
\end{enumerate}
\end{theorem}
The particular solution $n = 3$ and $(X, Z; B) = (18, 7; 17)$ is
reminiscent of a solution of the diagonal Nagell -- Ljunggren equation; it is
commonly accepted that the existence of non-trivial solutions tends to
render Diophantine equations more difficult to solve. Based on Theorem
\ref{tbin} we nevertheless prove the following
\begin{theorem}
\label{genexpo}
If equation \rf{bin} has a solution for a fixed $B$ verifying the
conditions \rf{nosplit}, then either $n \in \id{N}(B)$ or there is a
prime $p$ coprime to $\varphi^*(B)$ and an $m \in \id{N}(B)$ such that
$n = p \cdot m$. Moreover, $X^m, Z^m$ form a solution of \rf{bin} for
the prime exponent $p$ and thus verify the conditions in Theorem
\ref{tbin}.
\end{theorem}
This is a strong improvement of the currently known results.
\begin{remark}
\label{remark}
Theorem \ref{tbin} uses criteria from the diagonal case of the
Nagell-Ljunggren equation, the relation being established by point
\rf{B} of the theorem. The criteria were proved in \cite{Mi1} and
are in part reminiscent of classical cyclotomic results on
Fermat's Last Theorem. Thus, the criteria for the First Case, which
are stated in point \rf{C}, are the Eichler criterion CF \rf{ii}
and the criteria of Wieferich and Furtw\"angler (cf. Theorem 2 in \cite{Mi1}). For the Second Case
of the Diagonal Nagell-Ljunggren equation, in point \rf{C}, it was possible to
restrict the two conditions proved by Kummer for the second case of FLT to the
single condition CF \rf{i}, namely Vandiver's conjecture (cf. Theorem 4 in \cite{Mi1}). This is a
consequence of the fact that unlike FLT, Nagell-Ljunggren is a
binary equation, a fact which also allowed us to prove upper bounds for
the solutions, given in Theorem \ref{ubounds} below. The
fact that the Nagell-Ljunggren equation is not homogeneous in $X$
makes it difficult to prove lower bounds, thus leaving a gap on the
way to a complete proof of Conjecture \ref{bt}.
The proofs in \cite{Mi1} use methods that generalize the ones that
helped prove the Catalan Conjecture \cite{Mi2}. A variant of these
methods will be applied for proving point \rf{B}. We
took the occasion of writing this paper to give in the
Appendix an extensive exposition of the computations on which the singular case
in the proof of the respective estimate from \cite{Mi1} relies: some colleagues had
pointed out that they could not verify the computation on the basis of the arguments
in \cite{Mi1}, so this difficulty is dealt with in the Appendix.
\end{remark}
The plan of the paper is as follows: in Section 2 we establish the
connection between equations \rf{bin} and \rf{dn}, review some basic
properties of Stickelberger ideals and prove auxiliary technical
lemmata concerning the coefficients of binomial series developments.
With these prerequisites, we complete the proof of Theorem \ref{tbin}
in Section 3. Given the reduction to the Nagell-Ljunggren Diagonal
Case, the proof focuses on point \rf{B} of Theorem \ref{tbin}. In Section
4 we drop the condition that $n$ be a prime and use the proven facts
in order to deduce the results on \rf{bin} for arbitrary exponents $n$
which are stated in Theorem \ref{genexpo}. Finally, the Appendix
provides the details for an estimate used in \cite{Mi1}, as mentioned
in Remark \ref{remark}.
\section{Preliminary results}
The proof of Theorem \ref{tbin} emerges by relating the equation \rf{bin}
to the Diagonal Case of the Nagell -- Ljunggren conjecture. In this section
we shall recall this conjecture and several technical tools used for
reducing one conjecture to the other. The reduction is performed in the
next section.
\subsection{Link of \rf{bin} with the diagonal Nagell -- Ljunggren
equation}
We note that $\delta = \left(\frac{X^n-1}{X-1}, X-1\right)$ divides $ n$
and $\delta = n$ exactly when $X \equiv 1 \bmod n$. Indeed,
from the expansion
\[ \frac{X^n-1}{X-1} = \frac{((X-1)+1)^n - 1}{X - 1} = n + k (X-1), \]
with $k \in \mathbb{Z}$, one deduces the claim $\delta \big \vert n$. If $ \delta
\neq 1$, then $\delta=n$ and thus $n | (X - 1)$ must hold. Conversely,
inserting $X \equiv 1 \bmod n$ in the previous expression shows that
in this case $\delta = n$.
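As a small numerical check of this observation (using the known solution of \rf{bin}): for $X = 18$ and $n = 3$ one has $\frac{X^3 - 1}{X - 1} = 343$ and $X - 1 = 17$, so $\delta = (343, 17) = 1$, consistent with $18 \equiv 0 \not\equiv 1 \bmod 3$.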
We first show that any solution of \rf{bin} leads to a solution of
\rf{dn}. For this, let $\prod_{i=1}^k
p_i$ be the radical of $\frac{X^n-1}{n^e(X-1)}$. Obviously, ${\rm rad } \ (\frac{X^n-1}{n^e(X-1)}) \ | \ {\rm rad } \ (X^n -
1)$. Let $\zeta \in \mathbb{C}$ be a primitive \nth{n} root of unity. Then the
numbers $\alpha_c = \frac{X - \zeta^c}{(1-\zeta^c)^e} \in \mathbb{Z}[\zeta]$
by definition of $e$, and $(\alpha_c, n) = 1$. Since for distinct $c, d
\not \equiv 0 \bmod n$ we have $( 1-\zeta^d)^e \cdot \alpha_d - (
1-\zeta^c)^e \cdot \alpha_c = \zeta^c - \zeta^d$, it follows that
$(\alpha_c, \alpha_d) \ | \ (1-\zeta)$ and in view of $(\alpha_c, n) =
1$, it follows that the $\alpha_c$ are pairwise coprime.
Let $F = \prod_{c=1}^{n-1} \alpha_c = \frac{X^n-1}{n^e(X-1)}$ and $q \ | \ F$ be a rational prime. In the
field $\mathbb{Q}[ \zeta ]$, it splits completely in the prime ideals $\eu{Q}_c
= (q, \alpha_c), \ c = 1, 2, \ldots, n-1$: these ideals are coprime,
as a consequence of the coprimality of the $\alpha_c$. Therefore $q
\equiv 1 \bmod n$ and it follows from \rf{nosplit} that $(q, B) = 1$,
so $q | Z$. Furthermore, \rf{bin} implies that there exists $j_q >0$
such that $q^{j_q n} || Z^n$ and thus $q^{j_q n} || F$. This holds
for all primes $q \ | \ {\rm rad } \ (F)$. It follows that \rf{dn} is verified
for $Y = \prod_{q | F} q^{j_q}$ and $Y \ | \ Z$. We have thus proved
that if $(X, Z)$ is a solution of \rf{bin} for the prime $n$, then
there exists $C \in \mathbb{Z}$ such that $Z = C \cdot Y$ with $Y$ as above,
and:
\begin{eqnarray}
\label{x}
\frac{X^n-1}{n^e (X-1)} & = & Y^n \quad \hbox{ and } \quad \\
\label{y} X-1 & = & B \cdot C^n / n^e.
\end{eqnarray}
We shall write from now on $D = X-1$.
From the above, we conclude that any integer solution of \rf{bin}
induces one of \rf{dn}. Conversely, if $(X, Y)$ is a solution of
\rf{dn}, then $\left(X, Y; n^e(X-1)\right)$ is a solution of
\rf{bin}. For instance, the particular solution $(X, Y; B) = (18, 7;
17)$ of \rf{bin} stems from
\begin{eqnarray*}
\label{part}
\frac{18^3-1}{18-1} = 7^3,
\end{eqnarray*}
which is conjectured to be the only non-trivial solution of \rf{dn}.
\begin{remark}
\label{minus}
Note that if $(X, Z)$ verifies \rf{bin}, then $(-X, -Z)$ is a solution of
$X^n + 1 = B Z^n$, so the results apply also to the
equation:
\begin{eqnarray*}
\label{tbins}
X^n + 1 = B Z^n.
\end{eqnarray*}
\end{remark}
\subsection{Bounds to the solutions of Equation \rf{dn}}
We shall use the following Theorem from \cite{Mi1}:
\begin{theorem}
\label{ubounds}
Suppose that $X, Y$ are integers verifying \rf{dn} with $n \geq 17$
being a prime. Let $u = (X \bmod n)$. Then there is an $E \in \mathbb{R}_+$
such that $| X | < E$. The values of $E$ in the various cases of the
equation are the following:
\begin{eqnarray}
\label{Bvals}
E = \begin{cases} \ 4 \cdot \left(\frac{n-3}{2}\right)^{\frac{n+2}{2}}
& \hbox{if $u \not \in \{-1, 0, 1\}$} \\ \ (4 n)^{\frac{n-1}{2}} &
\hbox{if $ u = 0$ }, \\ \ 4 \cdot
\left(n-2\right)^{n} & \hbox{otherwise}.
\end{cases}
\end{eqnarray}
\end{theorem}
By comparing the bounds \rf{Bvals} with \rf{y}, it follows that $| C |
< 2n-1$. In particular, the primes dividing $C$ do not split
completely in $\mathbb{Q}[\zeta_n]$ -- since a prime splitting in this field
has the form $r = 2 k n + 1 > 2n$.
\begin{remark}
\label{4}
Note that $| C | < 2 n-1$ implies a fortiori that for all primes $r |
C$, $r^2 \not \equiv 1 \bmod n$. If $d(r) \subset \mbox{Gal}(\mathbb{Q}[ \zeta ]/\mathbb{Q})$
is the decomposition group of the unramified prime $r$, it follows
that $| d(r) | \geq 3$; moreover, either $d(r)$ contains a cyclic subgroup
$d' \subset d(r)$ of odd order $| d' | \geq 3$ or it is a cyclic $2$-group
with at least $4$ elements.
\end{remark}
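For illustration, take $n = 7$ and $r = 2$: then $r^2 = 4 \not\equiv 1 \bmod 7$ and $2^3 = 8 \equiv 1 \bmod 7$, so $d(2)$ is cyclic of order $3$ and is itself a cycle of odd order $\geq 3$.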
\subsection{A combinatorial lemma}
\begin{lemma}
\label{sm}
Let $p$ be an odd prime, $k \in \mathbb{N}$ with $1 < k < \log_2(p)$ and $P =
\{ 1, 2, \ldots, p-1\}$. Let $S = \{ a_1, a_2, \ldots, a_k \} \subset
P$ be a set of numbers coprime to $p$ and such that $a_i \not \equiv
\pm a_j \bmod p$ for $i \neq j$. We set the bound $A = 2 \lceil
p^{1/k} \rceil$; then there are $k$ numbers $b_i \in \mathbb{Z}, \ i = 1, 2,
\ldots, k$, not all zero, with $0 \leq | b_i | \leq A$ and such that
\[ \sum_{i=1}^k a_i b_i \equiv 0 \bmod p. \]
For $k = 2$, we can choose the $b_i$ such that the additional condition
\[ \sum_{i=1}^2 b_i/a_i \not \equiv 0 \bmod p \]
holds.
\end{lemma}
\begin{proof}
Let $T = \{ 1, 2, \ldots, A\} \subset P$. Consider the functional
$f : T^k \rightarrow \ZM{p}$ given by
\[ f(\vec{t}) \equiv \sum_{i=1}^k t_i a_i \ \ \bmod p, \quad \hbox{
with} \quad \vec{t} = (t_1, t_2, \ldots, t_k) \in T^k.\] Since $|
T^k | > p$, by the pigeon hole principle there are two vectors
$\vec{t} \neq \vec{t'}$ such that $f(\vec{t}) \equiv f(\vec{t'}) \bmod
p$. Let $b_i = t_i - t'_i$; by construction, $0 \leq | b_i | \leq A$
and not all $b_i$ are zero, since $\vec{t} \neq \vec{t'}$. The choice
of these vectors implies $\sum_{i=1}^k a_i b_i \equiv 0 \bmod p$, as
claimed.
We now turn to the second claim. If the claim were false, then
\[ a_1 b_1 + a_2 b_2 = 0 \hbox{ and } b_1/a_1 + b_2 / a_2 = 0, \]
a homogeneous linear system $S$ with determinant
$\det(S) = \frac{a_1^2-a_2^2}{a_1 a_2}$, which is non-vanishing
under the premise of the lemma. This would imply that
the solution $b_1, b_2$ is trivial, in contradiction with our construction.
This completes the proof.
\end{proof}
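A small numerical illustration of the lemma (the values are chosen for exposition only): take $p = 11$, $k = 2$ and $a_1 = 2, \ a_2 = 3$, so that $a_1 \not\equiv \pm a_2 \bmod 11$ and $A = 2 \lceil 11^{1/2} \rceil = 8$. The choice $b_1 = 4, \ b_2 = 1$ gives $a_1 b_1 + a_2 b_2 = 11 \equiv 0 \bmod 11$, while $b_1/a_1 + b_2/a_2 \equiv 2 + 4 \equiv 6 \not\equiv 0 \bmod 11$, since $3^{-1} \equiv 4 \bmod 11$.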
\subsection{Some notation}
We assume that $n$ is prime and let $\zeta$ be a primitive \nth{n}
root of unity, $\mathbb{K} = \mathbb{Q}(\zeta)$ the \nth{n} cyclotomic field and $G =
\mbox{Gal}(\mathbb{K}/\mathbb{Q})$ the Galois group. The automorphisms $\sigma_a \in G$ are
given by $\zeta \mapsto \zeta^a, \ a = 1, 2, \ldots, n-1$; complex
conjugation is denoted by $\jmath \in \mathbb{Z}[ G ]$. In the ring of
integers $\mathbb{Z} [\zeta]$, one has finite \textit{$\lambda$-adic} expansions:
for any $\alpha \in \mathbb{Z}[ \zeta ]$ and some $N > 0$ there are
$a_j \in \{ -(n-1)/2, \cdots, 0, 1, \cdots, (n-3)/2 \}, \ j = 0, 1, \ldots, N$, such that:
\begin{equation}
\label{lambdaadic}
\alpha = \sum_{j=0}^{N} a_j (1-\zeta)^{j}.
\end{equation}
We shall use the algebraic $O( \cdot )$-notation, to suggest the
remainder of a power series. This occurs explicitly in the following
four contexts
\begin{itemize}
\item[ (i) ] In a $\lambda$-adic development of the type
\rf{lambdaadic}, we write $\alpha = x+ O(\lambda^m)$ to mean that
there is some $y \in \mathbb{Z}[ \zeta ]$ such that $\alpha - x = \lambda^m
y$. Since $(n) = (\lambda^{n-1})$, powers of $n$ can occur as well
as powers of $\lambda$ in this notation.
\item[ (ii) ] We also use formal power series, often written $f =
f(D) \in \mathbb{K}[[ D ]]$. For $f = \sum_{k=0}^{\infty} f_k D^k$ with
partial sum $S_m(f) = \sum_{k=0}^{m} f_k D^k$ we may also use the
$O( \cdot )$-notation and denote the remainder by $f(D) = S_m(D) +
O( D^{m+1} )$.
\item[ (iii) ] Suppose that $D$ is an integer and the formal power
series converges in the completion $\mathbb{K}_{\eu{P}}$ at some prime
$\eu{P} \subset \id{O}(\mathbb{K})$ dividing $D$. Suppose also that in this
case all coefficients of $f$ are integral: then the remainder $f(D) -
S_m(D)$ is by definition divisible by $\eu{P}^{m+1}$, so $O( D^{m+1}
)$ means in this context that the remainder is divisible by
$\eu{P}^{m+1}$.
\item[ (iv) ] If $f(D)$ converges at all the prime ideals dividing
some integer $a | D $, then $O( D^{m+1} )$ will denote a number
divisible by $a^{m+1}$. In this paper we shall use this fact in the
context in which $a = p$ is an integer prime dividing $D$ and such
that $f(D)$ converges at all prime ideals of $\mathbb{K}$ above $p$.
\end{itemize}
\subsection{Auxiliary facts on the Stickelberger module}
\label{SecStick}
The following results are deduced in \cite{Mi1}, Section 4, but see also
\cite{Mi2}, \S 2.1-2.3 and \S 4.1. They are only
stated here, without proof.
The Stickelberger module is $I = (\vartheta \cdot \mathbb{Z}[ G ])
\cap \mathbb{Z}[ G ]$, where $\vartheta = \frac{1}{n} \sum_{c=1}^{n-1} c \cdot
\sigma_c^{-1}$ is the Stickelberger element. For $\theta = \sum_c
n_c \sigma_c \in I$ we have the relation $\theta + \jmath \theta =
\varsigma(\theta) \cdot \mbox{\bf N}$, where $\varsigma(\theta) \in \mathbb{Z}$ is
called the \textit{relative weight} of $\theta$. The augmentation of
$\theta$ is then
\[ | \theta | = \sum_c n_c = \varsigma(\theta) \cdot \frac{n-1}{2}.\]
The Fueter elements are
\[ \psi_k = (1 + \sigma_{k} - \sigma_{k+1}) \cdot \vartheta =
\sum_{c=1}^{n-1} \left( \left[ \frac{(k+1)c}{n} \right] - \left[
\frac{kc}{n} \right] \right) \cdot \sigma_c^{-1}, \quad 1 \leq k
\leq (n-1)/2.\] Together with the norm, they generate $I$ as a $\mathbb{Z}$ -
module (of rank $(n+1)/2$) and $\varsigma(\psi_k) = 1$ for all $k$.
The Fuchsian elements are \[ \Theta_k = (k - \sigma_k) \cdot \vartheta
= \sum_{c=1}^{n-1} \left[ \frac{kc}{n} \right] \cdot \sigma_c^{-1},
\quad 2 \leq k \leq n.\] They also generate $I$ as a $\mathbb{Z}$ -
module. Note that $\Theta_n$ is the norm, and that we have the
following relationship between the Fueter and the Fuchsian elements:
\begin{equation*}
\psi_1 = \Theta_2 \text{ and }
\psi_k = \Theta_{k+1} - \Theta_k,\ k \geq 2
\end{equation*}
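To fix ideas, here is the smallest relevant case, $n = 5$ (a worked illustration only): since $\left[ \frac{2c}{5} \right] = 0, 0, 1, 1$ for $c = 1, 2, 3, 4$, and $\sigma_3^{-1} = \sigma_2$, $\sigma_4^{-1} = \sigma_4$, we obtain
\[ \psi_1 = \Theta_2 = \sigma_2 + \sigma_4, \qquad \Theta_2 + \jmath \, \Theta_2 = \sigma_1 + \sigma_2 + \sigma_3 + \sigma_4 = \mbox{\bf N}, \]
so that indeed $\varsigma(\Theta_2) = 1$.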
An element $\Theta = \sum_c n_c \sigma_c$ is \textit{positive}
if $n_c \geq 0$ for all $c \in \{ 1, 2, \ldots, n-1\}$. We write
$I^+ \subset I$ for the set of all positive elements. They
form a multiplicative and an additive semigroup.
The Fermat quotient map $I \rightarrow \ZM{n}$, given by $$\varphi :
\theta = \sum_{c=1}^{n-1} n_c \sigma_c \mapsto \sum_{c=1}^{n-1}
c n_c \bmod n,$$ is a linear functional, with kernel $I_f=\{ \theta
\in I : \zeta^{\theta} = 1\}$ (the \textit{Fermat module}), and enjoys
the properties:
\begin{eqnarray*}
\label{fc}
\zeta^{\theta} & = & \zeta^{\varphi(\theta)}, \nonumber \\
(1+\zeta)^{\theta} & = & \zeta^{\varphi(\theta)/2}, \\
(1-\zeta)^{\theta} & = & \zeta^{\varphi(\theta)/2} \cdot \left(
\lchooses{-1}{n} n\right)^{\varsigma(\theta)/2}, \nonumber
\end{eqnarray*}
where $\lchooses{-1}{n} $ is the Legendre symbol.
The last relation holds up to a sign which depends on the embedding of
$\zeta$. For a fixed embedding, we let $\nu = \sqrt{\lchooses{-1}{n}
n}$ be the generator of the quadratic subfield $\mathbb{Q}(\nu) \subset \mathbb{Q}(\zeta)$.
A short computation shows that $(1-\zeta)^{\theta} = \zeta^{\varphi(\theta)/2}
\nu$. Note that for $\theta \in I$ with $\varsigma(\theta) = 2$ we
have $(1-\zeta)^{2 \theta} = \zeta^{\varphi(\theta)} \cdot n^2$ for
any embedding.
We shall want to consider the action of elements of $\theta \in \mathbb{F}_n[G]$ on explicit algebraic
numbers $\beta \in \mathbb{K}$. Unless otherwise specified, an element $\theta = \sum_{c=1}^{n-1} m_c \sigma_c \in \mathbb{F}_n[G]$ is lifted to $\sum_{c=1}^{n-1} n_c \sigma_c$, where $n_c \in \mathbb{Z}$ are the unique integers with $0 \leq n_c < n$ and $n_c \equiv m_c \bmod n$. In particular, lifts are always positive, of bounded weight $w(\theta) \leq (n - 1)^2$. Rather than introducing an additional notation for the lift defined herewith, we shall always assume, unless otherwise specified, that $\theta \in \mathbb{F}_n[G]$ acts upon $\beta \in \mathbb{K}$ via this lift.
Using this lift, we define the following additive maps:
\begin{equation*}
\rho_0 : \mathbb{F}_n [G] \rightarrow \mathbb{Q}(\zeta) \quad
\theta = \sum_{c=1}^{n-1} n_c \sigma_c \mapsto \sum_{c \in P} \frac{n_c}{1- \zeta^c},
\end{equation*}
and
\begin{equation*}
\rho : \mathbb{F}_n [G] \rightarrow \mathbb{Z}[\zeta] \quad
\theta \mapsto (1-\zeta) \cdot \rho_0[\theta].
\end{equation*}
The \nth{i} moment of an element $\theta = \sum_{c=1}^{n-1} n_c
\sigma_c$ of $\mathbb{Z}[G]$ is defined as:
\begin{equation*}
\phi^{(i)} (\theta) = \sum_{c=1}^{n-1} n_c c^i \mod n.
\end{equation*}
Note that $\phi^{(1)}$ is the {\em Fermat quotient map}: $\phi^{(1)} =
\varphi$. The moments are linear maps of $\mathbb{F}_n$-vector spaces and
homomorphisms of algebras, verifying:
\begin{eqnarray}
\label{phiprop}
\begin{array}{l c l l c l}
\qquad \phi^{(i)}(a \theta_1 + b \theta_2) & = & a \phi^{(i)}(\theta_1) + b \phi^{(i)}(\theta_2), & \hbox{and} & \\
\qquad \phi^{(i)}(\theta_1 \theta_2) & = & \phi^{(i)}(\theta_1) \phi^{(i)}(\theta_2), & \hbox{with } & \theta_j \in \mathbb{F}_n[ G ]; a, b \in \mathbb{F}_n.
\end{array}
\end{eqnarray}
The linearity in the first identity is a straightforward verification
from the definition. For the second, note that for $\theta = \sum_c
n_c \sigma_c$ we have
\[ \phi^{(i)}(\sigma_a \theta) = \phi^{(i)}\left(\sum_c n_c \sigma_{a c}\right) =
\sum n_c \cdot (a c)^i = a^i \cdot \phi^{(i)}(\theta). \]
Using the already established linearity, one deduces the multiplicativity of
$\phi^{(i)}$ as a ring homomorphism.
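For instance, in the small case $n = 5$ with $\theta = \sigma_2 + \sigma_4$ (an illustration only), one finds $\phi^{(1)}(\theta) = 2 + 4 \equiv 1 \bmod 5$ and $\phi^{(-1)}(\theta) = 2^{-1} + 4^{-1} \equiv 3 + 4 \equiv 2 \bmod 5$.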
Let $\alpha = \frac{X-\zeta}{(1-\zeta)^e} \in \mathbb{Z}[ \zeta]$, as before,
and define $c_X \equiv 1/(X-1) \bmod n$ if $e = 0$ and $c_X = 0$ if $e
= 1$. For any $\theta \in I^+$, there is a \textit{Jacobi integer}
$\beta[\theta] \in \mathbb{Z}[ \zeta ]$ such that $\beta[\theta]^n =
(\zeta^{c_X} \alpha)^{\theta}$, normalized by $\beta[\theta] \equiv 1
\bmod (1-\zeta)^2$ (Lemma 2 of \cite{Mi1}). The definition of $\varsigma(\theta)$
implies that
\begin{eqnarray}
\label{norm}
\beta[\theta] \cdot \overline{\beta[\theta]} =
\mbox{\bf N}_{\mathbb{K}/\mathbb{Q}}(\alpha)^{\varsigma(\theta)} = Y^{\varsigma(\theta)}.
\end{eqnarray}
We have for any $\theta \in I^+$,
\begin{eqnarray}
\label{beta1}
\beta[\theta]^n = (\zeta^{c_X} \alpha)^{\theta} = (\zeta^{c_X}
(1-\zeta)^{1-e})^{\theta} \cdot \left(1 + \frac{X-1}{1-\zeta}\right)^{\theta}
\end{eqnarray}
\begin{lemma}
\label{theta}
Recall that $D=X-1$. For any $\theta \in 2 \cdot I_f^+$ and any prime ideal $\eu{P} \ | \
D$, there is a $\kappa = \kappa_{\eu{P}}(\theta) \in \ZM{n}$ such that
\begin{equation*}
\label{base1}
\beta[ \theta ] \equiv \zeta^{\kappa} \cdot Y^{\frac{\varsigma(\theta)}{2}} \bmod \eu{P}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\theta_0$ be an element of $I^+_f$, and let $\theta = 2
\theta_0$. Note that from \rf{norm} we have $Y^{\varsigma(\theta_0)
n} = \beta[\theta_0]^n \cdot \overline \beta[ \theta_0 ]^n$. Thus
$ \beta[\theta]^n = \beta[\theta_0]^{2 n} = Y^{\varsigma(\theta_0)
n} \cdot \left ( \beta[\theta_0]/\overline \beta[\theta_0]
\right)^{n}$. Using \rf{beta1} and the previous observations, we
find:
\begin{eqnarray}
\beta[\theta]^n & = & Y^{\varsigma(\theta_0)n} \cdot \left(\zeta^{c_X} \cdot
(1-\zeta)^{1-e}\right)^{ (\theta_0 - \jmath \theta_0)} \cdot \left(1 +
\frac{X-1}{1-\zeta}\right)^{(\theta_0 - \jmath \theta_0)} \nonumber \\
& = & Y^{\varsigma(\theta_0) n} \cdot \zeta^{(2c_X + 1) \varphi(\theta_0)} \cdot \left(1 + D/(1-\zeta)\right)^{ (\theta_0 - \jmath
\theta_0)} \nonumber \\
\beta[\theta]^n & = & Y^{\varsigma(\theta_0) n} \cdot \left(1 + \frac{D}{1-\zeta}\right)^{(\theta_0 - \jmath
\theta_0)}. \label{pow}
\end{eqnarray}
Since $\eu{P} \ | \ D$, the right hand side of \rf{pow} is congruent to $Y^{\varsigma(\theta_0) n}$ modulo $\eu{P}$. Thus for any prime ideal $\eu{P} \ | \ D$ there is a $\kappa =
\kappa_{\eu{P}}(\theta) \in \ZM{n}$ such that
\begin{equation}
\label{kappa}
\beta[ \theta ] \equiv \zeta^{\kappa} \cdot Y^{\varsigma(\theta_0)} \bmod \eu{P}.
\end{equation}
\end{proof}
In the sequel, we indicate how to choose $\theta$ such that
$\kappa=0$. In this case, the relation \rf{beta1} leads to a
$\eu{P}$-adic binomial series expansion for $\beta[\theta]$.
We will use the Voronoi identities -- see Lemma 1.0 in \cite{Jha} --
which we recall here for convenience:
\begin{lemma}
\label{Jha}
Let $m$ be an even integer such that $2 \leq m \leq n-1$. Let $a$ be
an integer, coprime to $n$. Then
\begin{equation}
\label{voronoi}
a^m \sum_{j=1}^{n-1} \left[ \frac{aj}{n} \right] j^{m-1} \equiv \frac{(a^{m+1}-a)B_m}{m} \mod n,
\end{equation}
where $B_m$ is the $m$-th Bernoulli number. In particular, for
$m=n-1$, we get
\begin{equation*}
\sum_{j=1}^{n-1} \left[ \frac{aj}{n} \right] j^{n-2} \equiv \frac{a^{n}-a}{n} \mod n,
\end{equation*}
which is the Fermat quotient map of the $a$-th Fuchsian element,
$\varphi(\Theta_a)$.
\end{lemma}
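As a quick numerical check of \rf{voronoi} (for illustration only), take $n = 7$, $a = 2$ and $m = 2$: the left hand side is $4 \sum_{j=1}^{6} \left[ \frac{2j}{7} \right] j = 4 (4 + 5 + 6) = 60 \equiv 4 \bmod 7$, while the right hand side is $\frac{(2^3 - 2) B_2}{2} = \frac{1}{2} \equiv 4 \bmod 7$.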
\begin{lemma}
\label{simple}
Let $\psi_k$ denote the $k$-th Fueter element. Then, there exists a
linear combination $\theta = \sigma \psi_k + \tau \psi_l \in I$ with
$\sigma, \tau \in G$ and $1 \leq k , l < n$ such that $\phi^{(1)}
(\theta) = 0$ and $\phi^{(-1)} (\theta) \neq 0$.
\end{lemma}
The proof of this Lemma is elementary, using the Voronoi relations
\rf{voronoi}; since the details are rather lengthy, they will
be given in the Appendix.
The following two lemmata contain computational information on the binomial
series developments that we shall use below. First, we recall that $\rho_0$ is the following additive map:
\begin{equation*}
\rho_0 : \mathbb{F}_n [G] \rightarrow \mathbb{Q}(\zeta) \quad
\theta = \sum_{c=1}^{n-1} n_c \sigma_c \mapsto \sum_{c \in P} \frac{n_c}{1- \zeta^c}
\end{equation*}
\begin{lemma}
\label{dvpt}
Let $D$ be an indeterminate. Let $\theta = \sum_{c=1}^{n-1} n_c \sigma_c \in \mathbb{Z}[G]$ and $f[\theta]
= \left( 1 + \frac{D}{1-\zeta} \right)^{\theta/n} \in \mathbb{K}[[ D ]]$.
Let $0 < N < n$ be a fixed integer. Then,
\begin{equation*}
f[\theta] = 1 + \sum_{k=1}^N \frac{a_k[\theta]}{k! n^k} D^k + O(D^{N+1}),
\end{equation*}
where, for $1 \leq k \leq N$, we have
\begin{equation*}
a_k[\theta] = \rho_0^k [\theta] + O\left( \frac{n}{(1-\zeta)^k}\right).
\end{equation*}
In the above identity, $a_k[ \theta ], \rho_0^k[ \theta ] \in \mathbb{Z}[
\zeta, \frac{1}{n} ]$ are not integral, but their difference is an
algebraic integer $a_k[ \theta ] - \rho_0^k[ \theta ] \in
\frac{n}{(1-\zeta)^k} \cdot \mathbb{Z}[ \zeta ]$.
\end{lemma}
\begin{proof}
Let $\theta = \sum_c n_c \sigma_c$ and $m = m(\theta) = | \ \{ c \ :
\ n_c \neq 0 \} \ |$ be the number of non vanishing coefficients of
$\theta$. We prove this result by induction on $m$. First, note
that
\begin{equation*}
\binom{n_c/n}{k} = \frac{1}{k!} \cdot \frac{n_c^k}{n^k} \cdot (1 + O(n)).
\end{equation*}
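Indeed, this is a one-line verification:
\[ \binom{n_c/n}{k} = \frac{1}{k!} \prod_{j=0}^{k-1} \left( \frac{n_c}{n} - j \right) = \frac{1}{k! \, n^k} \prod_{j=0}^{k-1} (n_c - jn), \qquad \prod_{j=0}^{k-1} (n_c - jn) \equiv n_c^k \bmod n . \]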
Thus, if $\theta = n_c \sigma_c$ and $m = 1$, then:
\begin{equation*}
f[\theta] = 1 + \sum_{k=1}^{n-1} \frac{1}{k!} \cdot \frac{n_c^k}{n^k} \cdot (1 + O(n)) \cdot \frac{D^k}{(1-\zeta)^k} = 1 + \sum_{k=1}^N \frac{a_k[\theta]}{k! n^k} D^k + O(D^{N+1}),
\end{equation*}
where, for $1 \leq k \leq N$,
\begin{equation*}
a_k[\theta] = \rho_0^k [\theta] + O\left( \frac{n}{(1-\zeta)^k}\right),
\end{equation*}
which confirms the claim for $m = 1$. Suppose the claim holds for all
$j \leq m$ and let $\theta = \theta_1 + \theta_2$ with $m(\theta_i) <
m$ and $m(\theta) = m$. Then,
\begin{equation*}
\begin{array}{lcl}
f[\theta] & = & \left( 1 + \frac{D}{1-\zeta} \right)^{\theta_1/n} \cdot \left( 1 + \frac{D}{1-\zeta} \right)^{\theta_2/n}\\
& = & 1 + \sum_{k=1}^N \alpha_k[\theta] D^k + O(D^{N+1}),
\end{array}
\end{equation*}
where for $k < n-1$ we have
\begin{equation*}
\begin{array}{lcl}
\alpha_k[\theta] & = & \sum_{j=0}^k \frac{a_j [\theta_1]}{n^j \, j!} \cdot \frac{a_{k-j}[\theta_2]}{n^{k-j} (k-j)!} \cdot (1 + O(n))\\
& = & \frac{1}{k!n^k} \left( \rho_0 [\theta_1] + \rho_0 [\theta_2] \right)^{k} + O\left( \frac{n}{k! n^k(1-\zeta)^k}\right)\\
& = & \frac{1}{k!n^k} \cdot \rho_0^k [\theta] + O\left( \frac{n}{k! n^k(1-\zeta)^k}\right) =
\frac{1}{k!n^k} \cdot \left( \rho_0^k [\theta] + O( n/(1-\zeta)^k\right)
\end{array}
\end{equation*}
This proves the claim by complete induction.
\end{proof}
\begin{lemma}
\label{RemInt}
Proceeding as in Lemma 8 of \cite{Mi2}, one sees that $\frac{a_k[\theta]}{k!} \in \mathbb{Z}[\zeta]$ (the notation differs between the two articles).
\end{lemma}
As a consequence, we may deduce that matrices built from the first
coefficients occurring in some binomial series developments are regular.
\begin{lemma}
\label{regular}
Let $\theta = \sum_{c=1}^{n-1} n_c \sigma_c \in \mathbb{Z}[G]$ such that
$\phi^{(-1)}(\theta) \not\equiv 0 \mod n$, let $f[\theta] = \left( 1 +
\frac{D}{1-\zeta} \right)^{\theta/n}$ and $0<N<n-1$ be a fixed integer.
Then,
\[ f[\theta] = 1 + \sum_{k=1}^N \frac{b_k[\theta]}{k! n^k (1-\zeta)^k}
D^k + O(D^{N+1}), \quad \hbox{with} \quad \frac{b_k[\theta]}{k!} \in
\mathbb{Z}[\zeta]. \] Moreover, if $J \subset \mbox{Gal}(\mathbb{Q}[ \zeta ]/\mathbb{Q})$ is a
subset with $| J | = N$, then the matrix\footnote{We shall apply this
Lemma below, in a context in which $J$ satisfies the additional
condition that $i + j \neq n$ for any $i, j$ with $\sigma_i \in J$
and $\sigma_j \in J$.}
\[ A_N = \left(b_k[\sigma_c \theta] \right)_{ k=0;\sigma_c \in J}^{N-1}
\in \mbox{GL}(\mathbb{K}, N). \]
\end{lemma}
\begin{proof}
Let $\lambda = 1-\zeta$; we show that the determinant of $A_N$ is
not zero modulo $\lambda$. Using Lemma \ref{dvpt}, we know that we
have a development as formal power series
\begin{equation*}
f[\theta] = 1 + \sum_{k=1}^N \frac{a_k[\theta]}{k! n^k} D^k + O(D^{N+1}),
\end{equation*}
where
\begin{equation*}
a_k[\theta] = \rho_0^k [\theta] + O\left( \frac{n}{(1-\zeta)^k}\right).
\end{equation*}
By definition, $(1-\zeta)^k \cdot a_{k} [\sigma_c \theta] \in
\mathbb{Z}[\zeta]$ for all $\sigma_c \in G$. Let $b_k[\theta] = (1-\zeta)^k
\cdot a_{k} [\theta] \in \mathbb{Z}[\zeta]$. Then, according to Lemma
\ref{dvpt},
\begin{equation*}
\begin{array}{rcl}
b_k[\sigma_c \theta] & = & (1-\zeta)^k \cdot \left( \rho_0^k [\sigma_c \theta] + O\left( \frac{n}{(1-\zeta)^k}\right) \right)\\
& = & \rho^k [\sigma_c \theta] + O(n) = \left( \sum_{l=1}^{n-1} n_l \cdot \frac{1-\zeta}{1-\zeta^{lc}} \right)^k + O(n) \\
& \equiv & \left( \sum_{l=1}^{n-1} \frac{n_l}{lc} \right)^k \mod \lambda
\equiv \left( \frac{\phi^{(-1)} [\theta]}{c} \right)^k \mod \lambda.
\end{array}
\end{equation*}
Thus, $\det A_N \equiv \left| \left( \left( \frac{\phi^{(-1)}
[\theta]}{c} \right)^k\right)_{k=0,\sigma_c \in J}^{N-1} \right|
\mod \lambda$. We have obtained a Vandermonde determinant:
\begin{equation*}
\det A_N \equiv \left( \phi^{(-1)} [\theta] \right)^{N(N-1)/2} \cdot \prod_{i \neq j; \sigma_i, \sigma_j \in J} \left( \frac{1}{i} - \frac{1}{j}\right) \mod \lambda.
\end{equation*}
By hypothesis, $\phi^{(-1)} [\theta] \not\equiv 0 \mod n$,
and $1/i \not \equiv 1/j \bmod n$ for distinct $\sigma_i, \sigma_j \in J$;
this implies finally that $\prod_{\sigma_i, \sigma_j \in J} \left(
\frac{1}{i} - \frac{1}{j}\right) \not\equiv 0 \mod n$, which
confirms our claim.
\end{proof}
\section{Proof of Theorem \ref{tbin}}
Theorem 4 in \cite{Mi1} proves that if CF holds, then \rf{dn} has no solution except for \rf{part}. The computations in \cite{BH} prove that CF holds for $n\leq 163 \cdot 10^6$. This proves Theorem \ref{tbin}.\rf{A}.
Theorem \ref{tbin}.\rf{C} is also proved in Theorem 4 of \cite{Mi1}. In the sequel we shall show that the
only possible solutions are $X = \pm B/n^e + 1$. We may assume in particular that $n > 163 \cdot 10^6$.
We have already proved that $X-1 = B \cdot C^n/n^e$. If $C = \pm 1$,
then $X - 1 = \pm B/n^e$, as stated in point \rf{B} of Theorem \ref{tbin} and
$X$ is a solution of \rf{dn}. The bounds on $| X |$ in \rf{Bvals}
imply $| B | < n^n$, the second claim of \rf{B}.
Consequently, Theorem \ref{tbin} will follow if we prove that $C = \pm
1$; we do this in the present section. Assume that there is a prime $p | C$ with $p^{i} || C$.
Let $\eu{P} \subset \mathbb{Z}[\zeta]$ be a prime ideal lying
above $p$ and let $d(p) \subset G$ be its decomposition group. We shall use
Remark \ref{4} in order to derive some group ring elements which cancel
the exponents $\kappa$ occurring in \rf{kappa}.
Recall that $D = B \cdot C^n/n^e = X-1$, with $C$ defined by
\rf{y}. Note that \rf{y} implies that either $(n, D) = 1$, or $n^2 |
B$ and $(n, C) = 1$. Indeed, if $(n, D) \neq 1$, then $e = 1$ and $n^2
| n^e (X-1) = B C^n$ and since $(C, n) = 1$, it follows that $n^2 |
B$; the last relation follows from the bounds $C^n \leq E < 4
(n-2)^n$, hence $| C | < n$. In both cases $1/(1 - \zeta)$ is
congruent to an algebraic integer modulo $D/n^{v_n(D)} \cdot
\mathbb{Z}[\zeta]$.
According to Remark \ref{4}, we know that there are at least two elements $\sigma^{'}_1, \sigma^{'}_2 \in
d(p)$ such that $\sigma^{'}_1 \neq \jmath \cdot \sigma^{'}_2$. Let
$\sigma^{'}_i(\zeta) = \zeta^{c_i}, \ c_i \in \ZMs{n}$. Since $c_1 \not\equiv
\pm c_2 \bmod n$, it follows from Lemma \ref{sm} that there are $h'_1, h'_2 \in \mathbb{Z}$ with $|
h'_i | \leq \sqrt{n}$ and $\sum_{i=1}^2 h'_i c_i \equiv 0 \bmod n$ while
$\sum_{i=1}^2 h'_i / c_i \not \equiv 0 \bmod n$.
We define
\begin{eqnarray}
\label{mudef}
\begin{array}{l c l}
\mu & = & \sum_{i=1}^2 h_i \sigma_i \in \mathbb{Z}[ d(p) ] \subset \mathbb{Z}[ G ], \quad \hbox{with} \\
h_i & = & \begin{cases}
h'_i & \hbox{if $h'_i > 0$ and }\\
- h'_i & \hbox{otherwise}, \quad \quad \quad \hbox{and}
\end{cases} \\
\sigma_i & = & \begin{cases}
\sigma^{'}_i & \hbox{if $h'_i > 0$ and }\\
\jmath \sigma^{'}_i & \hbox{otherwise}.
\end{cases}
\end{array}
\end{eqnarray}
By construction, $\mu$ is a positive element, i.e. the coefficients
$h_i \geq 0$. Let $\widehat{ \cdot } : G \rightarrow \ZMs{n}$ denote the
cyclotomic character and note that $h_i \widehat{\sigma_i} = h'_i
\widehat{\sigma'_i}$ for $h'_i < 0$, and thus $\phi^{(1)}(\mu) =
\sum_{i=1}^2 h'_i c_i \equiv 0 \bmod n$. In view of Lemma \ref{sm}, we also know that we can choose the
$h'_i$, and thus $\mu$, such that
\[ \phi^{(1)}(\mu) = 0, \quad \hbox{ but } \quad \phi^{(-1)}(\mu) \neq 0. \]
Since $\mathbb{K}/\mathbb{Q}$ is abelian, all the primes $\eu{P} | (p)$ have the same
decomposition group $d(p)$ and $\mu$ enjoys the following stronger
property: let $\eu{P} | (p)$ and $S \subset G$ be a set of
representatives of $G/d(p)$; let $\gamma \in \mathbb{Z}[\zeta]$ be such that
$\gamma \equiv \zeta^{c_{\sigma}} \bmod \sigma (\eu{P} )$ for all
$\sigma \in S$; then $\gamma^{\mu} \equiv 1 \bmod p \mathbb{Z}[\zeta]$, as
follows directly from $\zeta^{\mu} \equiv 1 \bmod \sigma(\eu{P})$, for
all $\sigma \in S$.
In view of Lemma \ref{simple} and the fact that Fueter elements are positive,
we also know that there is a $\theta_0 \in I_f^+$ such that
$\varsigma(\theta_0) = 2$ and $\phi^{(-1)} (\theta_0) \neq 0$.
Let
\begin{equation*}
\Theta = 2 \cdot \mu \cdot \theta_0.
\end{equation*}
In view of the properties \rf{phiprop} of moments and since for both
$\mu, \theta_0$, the Fermat quotient vanishes, while $\phi^{(-1)}$
is non-null, it follows that the same must hold for $\Theta$,
so $\Theta \in 2 \cdot I_f^+$ and $\phi^{(-1)} (\Theta)
\neq 0$. Let
\[ h = 2 \cdot \sum_{i=1}^2 |h_i| = 2 \cdot w( \mu ) , \]
where we defined the \textit{absolute weight} $w(\sum_c n_c \sigma_c)
= \sum_c | n_c |$. From subsection \ref{SecStick}, we know that there
exists a Jacobi integer $\beta[ 2 \theta_0 ] \in \mathbb{Z}[\zeta]$ such that
$\beta[ 2 \theta_0 ]^n = (\zeta^{c_X} (1-\zeta)^{1-e})^{\theta_0} \cdot
\left(1 + \frac{X-1}{1-\zeta}\right)^{\theta_0}$ (see \rf{beta1}). It
follows from \rf{base1} and Lemma \ref{theta} that in both cases we
have $\beta[ 2 \theta_0 ] \equiv \zeta^{\kappa(\theta_0)} \cdot Y^2
\bmod \eu{P}$. We have chosen $\mu$ as a linear combination of two
elements from the decomposition group $d(p) \subset G$, so $\mu$
acts on $\zeta \bmod \eu{P}$ by $\zeta \bmod \eu{P} \mapsto \zeta^{\mu} \equiv 1 \bmod \eu{P}$.
Therefore $\beta[ \Theta ] = \beta[ 2 \theta_0 ]^{\mu}$ and thus,
by the choice of $\mu$, we have
\begin{eqnarray}
\label{unif}
\beta[ \Theta ] \equiv Y^h \bmod p \mathbb{Z}[ \zeta ].
\end{eqnarray}
Let $\Theta = 2 \sum_{c = 1}^{n-1} n_c \sigma_c$; for any prime
$\eu{P} | (p)$, the binomial series of the \nth{n} root of the right
hand side in \rf{pow} converges in the $\eu{P}$ - adic valuation and
its sum is equal to $\beta[ \Theta ]$ up to a possible \nth{n} root of
unity $\zeta^c$. Here we make use of the choice of $\Theta$: comparing
\rf{unif} with the product above, it follows that $\zeta^c = 1$ for
all primes $\eu{P} \ | \ (p)$. For any $N > 0$, we have $p^{i n N} | |
D^N$ and thus
\begin{eqnarray}
\label{beta}
\beta[ \Theta ] \equiv Y^h \prod_{c=1}^{n-1} \left(\sum_{k=0}^{N-1}
\binom{n_c/n}{k} \left(\frac{D}{1-\zeta^c}\right)^k \right) \quad \bmod p^{i n N}.
\end{eqnarray}
We develop the product in a series, obtaining an expansion which
converges uniformly at primes above $p$ and is Galois covariant; for
$N < n-1$ and $\sigma \in G$, we have:
\begin{eqnarray*}
\label{inhom}
\beta[ \sigma \Theta ] = Y^h \left(1 + \sum_{k = 1}^{N-1}
\frac{b_k[ \sigma \Theta ]}{(1-\zeta)^k n^k k!} \cdot D^k \right) + O
(p^{i n N}),
\end{eqnarray*}
with $b_k[ \Theta ] \in \mathbb{Z}[\zeta]$. Let $P \subset \{ 1, 2, \cdots ,
n-1 \}$ be a set of cardinality $N$, with $1 < N < (n-1)/2$, such that if $c \in P$ then
$n-c \not \in P$, and let $J \subset G$ be the set of Galois automorphisms of
$\mathbb{K}$ indexed by $P$: $J=\{ \sigma_c \}_{c \in P}$. Consider the linear
combination $\Delta = \sum_{\sigma \in J} \lambda_{\sigma} \cdot
\beta[\sigma \cdot \Theta]$ where $\lambda_{\sigma} \in \mathbb{Q}[\zeta]$
verify the linear system:
\begin{eqnarray}
\label{hom}
\sum_{\sigma \in J} & & \lambda_{\sigma} \cdot b_k[\sigma \cdot \Theta] = 0 ,
\text{ for } k = 0, \ldots, N-1,\ k \neq \lceil N/2 \rceil \nonumber
\quad \text{ and } \\
\sum_{\sigma \in J} & & \lambda_{\sigma} \cdot b_{\lceil N/2 \rceil}[\sigma \cdot \Theta] = (1-\zeta)^{\lceil N/2 \rceil} n^{\lceil N/2 \rceil} \lceil N/2 \rceil !.
\end{eqnarray}
Applying Lemma \ref{regular}, we observe that this system is regular for
any $N < n-1$. There exists therefore a unique solution, and it is not
null.
We recall that a power series $\sum_{k=0}^{\infty} a_k X^k \in
\mathbb{C}[[X]]$ is dominated by the series $\sum_{k=0}^{\infty} b_k X^k \in
\mathbb{R}[[X]]$ with non-negative coefficients, if for all $k \geq 0$, we
have $|a_k| \leq b_k$. The dominance relation is preserved by addition
and multiplication of power series.
Following the proof of Proposition 8.2.1 in \cite{Bilu2}, one shows
that if $r \in \mathbb{R}_{>0}$ and $\chi \in \mathbb{C}$, with $|\chi| \leq K$ with
$K \in \mathbb{R}_{>0}$, then the binomial series $(1+\chi T)^r$ is dominated
by $(1- K T)^{-r}$. From this, we obtain that $(1+ \chi T)^{\Theta/n}$ is
dominated by $(1 - K T)^{-w(\Theta)/n}$. In our case \rf{beta}, $T=D$, $\chi = \frac{1}{1 - \zeta^c}$ and
\[ K = \max_{1 \leq c < n} | 1/(1-\zeta^c) | \leq 1/\sin(\pi/n) \leq
\frac{n}{\pi \cos(\pi/3)} = 2n/\pi < n . \] Applying this to our selected
$\Theta$, whose absolute weight is bounded by $w \leq 4 n \sqrt{n}$,
we find after some computations that $|b_k[\sigma \cdot \Theta]| <
n^{k} \cdot \binom{-w/n}{k} \cdot k! < n^{3k}$ for $N < n/2$.
Let $A = \det \left( b_k[\sigma_c \cdot \Theta ] \right)_{k=0; \sigma_c \in
J}^{N-1} \neq 0$ be the determinant of the matrix of the system
\rf{hom}, which is non-vanishing, as noticed above; note that the
division by $k!$ along a complete row does not modify the regularity
of the matrix.
Let $\vec{d} = (1-\zeta)^{\lceil N/2 \rceil} n^{\lceil N/2 \rceil} \lceil N/2 \rceil ! \left( \delta_{k,\lceil N/2
\rceil}\right)_{k=0}^{N-1} $, where $\delta_{i,j}$ is Kronecker's
symbol. The solution to our system is $\lambda_\sigma = A_{\sigma} /
A$, where $A_\sigma \in \mathbb{Z}[\zeta]$ are the determinants of the matrices
obtained from $\left( b_k[\sigma_c \cdot \Theta ] \right)_{k=0;\sigma_c \in J}^{N-1}$
by replacing the respective column by $\vec{d}$.
Noticing that $|(1-\zeta)^{\lceil N/2 \rceil} n^{\lceil N/2 \rceil} \lceil N/2 \rceil !| < n^{3(N-1)}$, Hadamard's inequality implies that
\begin{eqnarray*}
|A_\sigma| & \leq & n^{3(N-1)(N-2)/2} \cdot (N-1)^{(N-1)/2} \leq n^{3N^2/2} \cdot N^{N/2} \quad \hbox{and} \\
|A| & \leq & n^{3N^2/2} \cdot N^{N/2}
\end{eqnarray*}
Let $\delta = A \cdot \Delta \in \mathbb{Z}[\zeta]$,
\[ \delta = \sum_{\sigma \in J} A_{\sigma} \cdot \beta[\sigma \cdot
\Theta] \in \mathbb{Z}[\zeta]. \]
We set $N = \lceil n^{3/4} \rceil$ and claim that for such $N$ we have $\delta \neq 0$. By the choice of the $\lambda$'s, we have $\delta = A\cdot p^{in \lceil N/2 \rceil}\cdot u + p^{i n N} z$ for some $z \in \mathbb{Z}[ \zeta ]$, where $u = \frac{D^{\lceil N/2 \rceil}}{ p^{in \lceil N/2 \rceil}}\cdot Y^h$ is a unit in $(\mathbb{Z}/p\mathbb{Z})^*$. Therefore, if we assume that $\delta = 0$, then necessarily $p^{in \lceil N/2 \rceil}$ divides $A$. However, $v_p(A) < n \lceil N/2 \rceil$. Indeed, the upper bound for $| A |$ implies a fortiori that $v_p(A) \leq \lceil N/2 \cdot \log N + \frac{3 N^2}{2}\log n \rceil$. Then, the assumption $\delta = 0$ would imply $n \leq 3\left[ n^{3/4} + \frac{1}{4}\right] \log n$, which is false for $n \geq 4.5 \cdot 10^6$. This contradicts our initial assumption; therefore $\delta \neq 0$.
Given the bounds on $A_{\sigma}$, we obtain $|\delta| \leq N Y^h
n^{3N^2/2} \cdot N^{N/2}$ and using the fact that $h < 4n^{1/2}$, $Y
< n^n$ (Theorem \ref{tbin}.\rf{B}) and $N = \lceil n^{3/4} \rceil$, we find
\begin{eqnarray*}
\label{up}
| \mbox{\bf N}_{\mathbb{K}/\mathbb{Q}}(\delta) | < \left( n^{\frac{11}{2} n^{3/2} + \frac{3}{8} n^{3/4} +
\frac{3}{4}}\right) ^{n-1}.
\end{eqnarray*}
The initial homogenous conditions in \rf{hom} imply $\delta \equiv 0
\bmod p^{i n \lceil N/2 \rceil }$, therefore $| \mbox{\bf N}_{\mathbb{K}/\mathbb{Q}}(\delta)
| \geq p^{i n (n-1) N/2}$. Combining this inequality with \rf{up} and
$n \geq 163 \cdot 10^6$, one finds that $\log p < 1.64$. This shows
that $p= 2, 3$ or $5$.
We consider the case $p \leq 5 $ separately as follows. We choose in this case $\mu = 1 + p \jmath
\sigma_p^{-1}$ and verify that $\varphi(\mu) = 0$, while
$\phi^{-1}(\mu) = 1 - p^2 \not \equiv 0 \bmod n$. Consequently
$\varsigma(\Theta) = 4(p+1)$ and the norm of $\delta$ is thus bounded by
\begin{eqnarray*}
p^{n (n-1) N/2} \leq | \mbox{\bf N}_{\mathbb{K}/\mathbb{Q}}(\delta) | < \left( n^{4(p+1) + 3N^2/2} \cdot N^{N/2+1} \right)^{n-1}.
\end{eqnarray*}
Letting $N = 48$, we obtain the inequality
\[ 2^n \leq n^{73} \cdot 48^{25/24} < 64 n^{73} \quad \Rightarrow
\quad \frac{n-6}{73} \leq \log(n)/\log(2), \] which is false for
$n > 695$, and a fortiori for $n > 163 \cdot 10^{6}$. We obtain a contradiction
in this case too, and thus $C = \pm 1$, which completes the proof of
Theorem \ref{tbin}.
\qed
\section{Consequences for the general case of the binary Thue equation \rf{bin}}
In this section we derive Theorem \ref{genexpo}. For this we
assume that \rf{bin} has a solution with $(\varphi^*(B),n) = 1$, since
our results only hold in this case, a fact which is reflected also in
the formulation of Theorem \ref{genexpo}.
Consider the case when $n = p \cdot q$ is the product of two distinct
primes. If $(n , B) = 1$, then Theorem \ref{tbin} holds for both $p$
and $q$ with the value $e = 0$. If $(X, Z)$ is a solution, then Theorem
\ref{tbin}.\rf{B} implies that $X^p = \pm B + 1$ and $X^q = \pm B
+1$. Consequently either $X^p + X^q = 2$ or $X^p - X^q = 2$. This is
impossible for $| X | > 2$ and a simple case distinction implies that
there are no solutions. As a consequence,
\begin{corollary}
\label{c1}
Consider Equation \rf{bin} for fixed $B$ and suppose that $n$ is
an integer which has two distinct prime divisors $p > q > 2$ with $(p, B) =
( q, B ) = 1$. Then \rf{bin} has no solutions for which \rf{nosplit} holds.
\end{corollary}
If all prime divisors of $n$ are among the primes dividing $B$, we are led
to the equation $p (X^q - 1) = q (X^p - 1)$, which has no
integer solutions other than $X = 1$. Indeed, assume $X \neq 1$ is a
solution of the previous equation, and write $q=p+t,\ t \geq 0$. The
real function $f(t) = p(X^{p+t}-1) - (p+t)(X^p-1)$ is strictly
monotonic and $f(0)=0$. Therefore, the equation $p (X^q - 1) = q
(X^p - 1)$ has no solutions with $X \neq 1$. There remains only the case in which $n$
is built from two primes, one dividing $B$ and one not. In this case,
one obtains the equation $p ( X^q - 1) = X^p - 1$, which can also be
shown to have no non-trivial solutions, using the above remark, this
time with $f(t) = p(X^{p+t}-1) - (X^p-1)$. Hence:
\begin{corollary}
\label{c2}
The equation \rf{bin} has no solutions for exponents $n$ which are
divisible by more than one prime and for $B$ such that \rf{nosplit} holds.
\end{corollary}
We are left to consider the case of prime powers $n = p^c$ with $c >
1$. If $p \nmid B$, we obtain $X^{n/p} - 1 = B/p^e$, so in particular
$B/p^e+1 \geq 2^{p^{c-1}}$ is a \nth{p^{c-1}} power. Since in this
case, \rf{bin} has in particular a solution for the exponent $p$,
Theorem \ref{tbin} implies that $B < p^p$; when $c > 2$, combining this with the
previous lower bound implies that there are no solutions.
For $c = 2$, we deduce that $| X | < p$ and, after applying
Theorem \ref{tbin} again and letting $\xi = \zeta^{1/p}$ be a
primitive \nth{p^2} root of unity, we obtain the equation
\[ Y^{p^2} = \frac{X^{p^2} - 1}{p^e(X^p-1)} = \mbox{\bf N}_{\mathbb{Q}[ \xi ]/\mathbb{Q}}
(\alpha), \quad \hbox{ where } \quad \alpha = \frac{X-\xi}{(1-\xi)^e} .\] As
usual, the conjugates of the ideal $(\alpha)$ are pairwise coprime. We
let $\eu{A} = (Y, \alpha)$ be an ideal with $N(\eu{A}) = (Y)$; moreover,
if $\eu{L} | \eu{A}$ is a prime ideal and $N(\eu{L}) = (\ell)$, then
the rational prime $\ell$ is totally split in $\mathbb{Q}[ \xi ]$, the factors
being the primes $(\ell, \sigma_c(\alpha))$. Being totally split, it
follows in particular that $\ell \equiv 1 \bmod p^2$ so $Y \geq \ell >
2 p^2$, in contradiction with $Y < X < p+1$. This shows that there are
no solutions for $n = p^2$. \qed
\begin{corollary}
If the Equation \rf{bin} in which $n = p^c$ is a prime power has non
trivial solutions for which \rf{nosplit} holds, then $c = 1$.
\end{corollary}
\qed
The primes dividing the exponent $n$ used in the above corollaries are
by definition coprime to $\varphi^*(B)$. As a consequence, if $n$ is
an exponent for which \rf{bin} has a solution and $m | n$ is the
largest factor of $n$ with $m \in \id{N}(B)$ -- as defined in
\rf{expos} -- then the corollaries imply that there is at most one
prime dividing $n/m$ and the exponent of this prime in the prime
decomposition of $n$ must be one. This is the first statement of
Theorem \ref{genexpo}, which thus follows from these corollaries and
Theorem \ref{tbin}.
\section{Appendix}
The proof of Theorem \ref{tbin} is based on results from \cite{Mi1}. It has
been pointed out that the proof of Theorem 3 in \cite{Mi1} may require
some more detailed explanation in the case of a singular system of
equations in the proof of Lemma 14 of \cite{Mi1}. Since the statements
of \cite{Mi1} are correct and can even be slightly improved, while the
explanations may have seemed insufficient, we provide here, for the readers
interested in the technicalities of the proofs in \cite{Mi1},
some additional details and explanations confirming those claims and results.
\subsection{Clarification on the singular case of Theorem 3 in \cite{Mi1}}
Let $m \in \mathbb{Z}_{>0}$ be a positive integer, $\mathbb{K}$ a field, $V = \mathbb{K}^{m}$
regarded as a $\mathbb{K}$-vector space and let $L \subsetneq V$ be a proper subspace of $V$ of dimension $r$. We
assume that there exists at least one vector $w_1 \in L$ which has no
zero coordinate with respect to the canonical basis $\id{E}$. For $(x, y) \in V^2$, $x=(x_1, \cdots, x_m)$ and $y=(y_1,\cdots,y_m)$, the
Hadamard product is defined by $[ x, y ] = (x_1y_1, \cdots, x_m y_m)$. For
any subspace $W \subset V$ we define the $W$-\textit{bouquet} of $L$ by
\[ L_W = \langle \ \ \{ \ [ w, x ] \ : \ w \in W, \ x \in L \ \} \ \
\rangle_{\mathbb{K}} ,\] the $\mathbb{K}$-span of all the Hadamard products of
elements in $W$ by vectors from $L$.
\begin{lemma}
\label{bouquet}
Let $a_1 = (1, 1, \ldots, 1)$ over $\id{E}$, and let $a_2 \in V$ be a vector whose coordinates over $\id{E}$ are pairwise distinct. Let $A_2 = \langle \{ a_1, a_2\} \rangle_{\mathbb{K}}$ be
the subspace generated by $a_1, a_2$. Let $L_{A_2}$ be the
resulting $A_2$-bouquet. Then $\dim(L_{A_2}) > \dim(L)$.
\end{lemma}
\begin{proof}
Obviously, $L \subset L_{A_2}$ (as $a_1 \in A_2$). We would like to
show that $L_{A_2} \neq L$. The system $(w_1,
[w_1,a_2], [w_1,a_2^2], \cdots, [w_1,a_2^{m-1}])$ (powers of a vector
being understood here as Hadamard powers) is linearly independent: it induces a Vandermonde matrix over $\id{E}$,
since $w_1$ does not have any zero among its coordinates and all
coordinates of $a_2$ are pairwise distinct. We know that $w_1 \in
L$; let $j$ be the largest index such that $[w_1,a_2^i] \in L$ for all $i \leq j$. Since the
$m$ vectors above are linearly independent and $\dim L = r < m$, we have
$j \leq m - 2$. Then $[w_1,a_2^j] \in L$ and
$[w_1,a_2^{j+1}] \notin L$. However, the Hadamard product of
$[w_1,a_2^j] \in L$ by $a_2$, that is $[w_1,a_2^{j+1}]$, belongs to
$L_{A_2}$. Thus, $\dim L_{A_2} > \dim L$.
\end{proof}
\subsubsection{Application of Lemma \ref{bouquet} to the proof
of the singular case in the argument on pages 266 -- 270 of \cite{Mi1}}
We apply here the lemma in the first case (that is $x \not\equiv s
\mod p$, where $s \in \{ -1, 0, 1\}$), the application to the second
case being similar.
Let all notation be like in Lemma 14 in \cite{Mi1}. As in \cite{Mi1},
we will assume that $\rg{A} = \left( \zeta^{-\kappa_{ac}/a}
\right)^{(p-1)/2}_{a,c=1}$ (where $\kappa_{ac}$ are the \textit{Galois
exponents}) is singular. Let $m = (p-1)/2$, $\mathbb{K} = \mathbb{Q}(\zeta_p)$ and
$r = {\rm rank } \ (\rg{A}) < (p-1)/2$. Without loss of generality, we assume
that a regular $r$-submatrix of $\rg{A}$ is built with the first $r$
rows and the first $r$ columns. Therefore, the first $r$ rows of
$\rg{A}$ are independent, and we denote by $L$ the subspace of
$V=\mathbb{K}^{m}$ generated by the first $r$ row vectors $w_1, \cdots, w_r$
of $\rg{A}$. For $a_1 = (1, 1, \ldots, 1)$, we let $a_2$ be the vector
of $V$ whose components are\footnote{In the context of \cite{Mi1}, $\eta$ corresponds to $b_1[\theta]$ in our notation.} $\left( \eta (\sigma_c
\theta)\right)_{c=1}^{(p-1)/2}$, and we set $A_2=\langle \{ a_1, a_2 \} \rangle_{\mathbb{K}}$. Then,
according to Lemma \ref{bouquet}, there exists at least one vector
$\vec{v} \in L_{A_2}$ which is independent of the first $r$ vectors of
$\rg{A}$.
Let $\rg{S}$ be the $(r+1) \times (r+1)$ submatrix of $\rg{A}$
comprising the first $r$ rows and $r+1$ columns of $\rg{A}$, to which
we have added an additional row: the first $r+1$ components of
$\vec{v}$. Let $\vec{\lambda}'$ be the vector solution of $\rg{S}
\vec{\lambda}' = \vec{d}'$, where $\vec{d}' = \left( \delta_{c,r+1}
\right)_{c=1}^{r+1}$. We know that $\vec{\lambda}' \neq \vec{0}$, as
$\rg{S}$ is regular and $\vec{d}'$ is not the null vector. For $1 \leq
c \leq r + 1$, by Cramer's rule, $\lambda_c = \frac{S_c}{S}$, where
$S_c$ are the determinants of the matrices obtained from $\rg{S}$ by
replacing the $c$-th column by $\vec{d}'$, and $S = \det \rg{S}$.
Let $\vec{\lambda} \in V$ be the vector whose first $r+1$ coordinates
are those of $\vec{\lambda}'$ and whose remaining coordinates are $0$, and let $\vec{d} = \left(
\delta_{c,r+1} \right)_{c=1}^{m}$. Then $\vec{\lambda}$ verifies
$\rg{A} \vec{\lambda} = \vec{d}$.
Let $\delta = \sum_{c=1}^{r+1} \left( \lambda_c \cdot \beta_c +
\overline{\lambda_c \cdot \beta_c} \right)$. Using Hadamard's
inequality, we bound $|S_c| \leq \left(
\frac{p-3}{2}\right)^{\frac{p-3}{4}} = D_1$ and $|S| \leq \left(
\frac{p-1}{2}\right)^{\frac{p-1}{4}} = D_0$. Then, using the fact
that the choice of $\lambda_c$ eliminates the first term in the
expansion of $f_c$, we find that $|S| \cdot |\delta| \leq 2
x^{(p-1)/2p} \cdot \sum_{c=1}^{r+1} |S_c| |R_{c,0} (x)|$, where
$R_{c,0} (x) = f_c(x) - x^{(p-1)/2p}$. With the same arguments
as in \cite{Mi1}, we deduce: $$|S \delta| < 2(p-1) D_1 \cdot
\frac{1}{|x|^{(p+1)/2p}}.$$ This inequality holds for all conjugates
$\sigma_c(\delta)$, thus leading to:
\begin{equation*}
\left|\mbox{\bf N}\left(S \delta\right)\right| < \left( 2 (p-1) D_1\right)^{(p-1)/2}
\cdot \frac{1}{|x|^{\frac{(p-1)(p+1)}{4p}}}.
\end{equation*}
If $\delta \neq 0$, then $\left|\mbox{\bf N}\left(S \delta\right)\right| \geq
1$ and thus $|x| \leq 2^{5-p} \left( \frac{p}{2}
\right)^{\frac{p}{2}}$. If $\delta = 0$, then $0 = S \delta = S \cdot
|x|^{(p-1)/2} - \sum_{c=1}^{(p-1)/2} S_c R_{0,c}$, and thus:
\begin{equation*}
|x| \leq \sum_c |S_c|/|S| < (p-1) D_1 < 3
\left(\frac{p-3}{2}\right)^{(p-3)/2} .
\end{equation*}
These bounds are better than the ones in \cite{Mi1}, and this
concludes the clarification.
\subsection{Proof of Lemma \ref{simple}}
\begin{proof}
Let $\theta = \sigma_w \psi_u + \sigma_z \psi_v$. The conditions
required by the lemma lead to the following linear system of
equations over $\mathbb{F}_n$:
\begin{equation}
\label{sys1}
\left\{
\begin{array}{lcllcl}
w \cdot \varphi (\psi_u) & + & z \cdot \varphi (\psi_v) & = & 0 \\
1/w \cdot \phi^{(-1)} (\psi_u) & + & 1/z \cdot \phi^{(-1)} (\psi_v) & \neq & 0
\end{array}
\right.
\end{equation}
Considered as a linear system in the unknowns $w, z \in \mathbb{F}_n$, the
above system has the matrix $M = \left( \begin{array}{c c} \varphi
(\psi_u) & \varphi (\psi_v)\\ \phi^{(-1)} (\psi_v) & \phi^{(-1)}
(\psi_u) \end{array}\right)$. Assume that the product $P(t) =
\varphi(\psi_t) \cdot \phi^{(-1)}(\psi_t)$ is not constant for all $t
\in \ZMs{n}$. Then there are two elements $u, v \in \ZMs{n}$ such that
$P(u) \neq P(v)$; for such values $u, v$, the matrix $M$ is regular
over $\mathbb{F}_n$ and for any non-vanishing right hand side in the second
equation, the system has a unique solution $(w, z)$. For this choice
of $u, v; w, z$, the element $\theta = \sigma_w \psi_u + \sigma_z
\psi_v$ satisfies the condition of the lemma.
We now show that $P(t) : \ZMs{n} \rightarrow \mathbb{F}_n$ is not a constant
function. The proof uses explicit computations which include divisions
by several constants which must be assumed to be non-zero. Therefore
we suppose that $n \not \in E := \{ 3, 7 \}$ and shall verify
independently that the claim of the lemma holds for this exceptional
set.
Let $\varphi$ be the Fermat quotient map and $\Theta_k$ the \nth{k}
Fuchsian element. For any integer $1 < k < n-1$, we have:
\begin{equation*}
\begin{array}{lcll}
(n-k)^n - (n-k) & \equiv & -k^n - n + k & \mod n^2 \\
& \equiv & -n \left( \frac{k^n - k}{n} + 1 \right) & \mod n^2 .
\end{array}
\end{equation*}
Dividing both terms by $n$ and recalling from Lemma \ref{Jha} that
$\varphi(\Theta_k) = \varphi(k) \equiv \frac{k^n - k}{n} \bmod n$, we find:
\begin{equation}
\label{FuchsEq}
\varphi (\Theta_{n-k}) = n - \left( 1 + \varphi(\Theta_k) \right).
\end{equation}
Using now \rf{voronoi} from Lemma \ref{Jha}, with $m=2$, we find that:
\begin{equation*}
\phi^{(-1)} (\Theta_k) \equiv \frac{k^3 - k}{2k^2} B_{2} \equiv \frac{1}{12}\cdot
\left(k - \frac{1}{k} \right) \mod n,
\end{equation*}
where we used the fact that $B_2 = 1/6$. Finally, using that $\psi_k =
\Theta_{k+1} - \Theta_k$ for $k > 1$ while $\psi_1 = \Theta_2$, we
obtain the following expressions for the moments of interest:
\begin{eqnarray*}
\varphi(\psi_k) & = & \varphi(k+1)-\varphi(k), \\
\phi^{(-1)}(\psi_k) & \equiv & \frac{1}{12} \cdot \left( 1 + \frac{1}{k(k+1)}\right).
\end{eqnarray*}
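As a quick check in the smallest case $n = 5$, $k = 1$ (illustration only): there $\psi_1 = \Theta_2 = \sigma_2 + \sigma_4$, so $\phi^{(-1)}(\psi_1) = 2^{-1} + 4^{-1} \equiv 3 + 4 \equiv 2 \bmod 5$, in agreement with $\frac{1}{12} \left( 1 + \frac{1}{1 \cdot 2} \right) = \frac{1}{8} \equiv 2 \bmod 5$.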
Note that $\phi^{(-1)}(\psi_k) = 0$ iff $k^2 + k + 1 = 0$; if $n
\equiv 1 \bmod 6$, the equation has two solutions in $\mathbb{F}_n$, otherwise
it has none. In the latter case $\phi^{(-1)}(\psi_k) \neq 0$ for all
$k$.
We shall assume that $P$ is the constant function and shall show that
this assumption fully determines the Fermat quotient of integers in
dependence of $\varphi(2)$, and this determination is in contradiction
with \rf{FuchsEq}; the contradiction implies that $P$ cannot be
constant, thus completing the proof.
Let thus $C = \varphi(2) \cdot \phi^{(-1)}(\Theta_2) = \varphi(2)
\cdot \frac{1}{8}$. Assume first that $\varphi(2) = 0$ and recall from
\rf{FuchsEq} that $\varphi(k) + \varphi(n-k) + 1 = 0$. Therefore at
least $\frac{n-1}{2}$ of the values of $\varphi$ are
non-vanishing. Since $\phi^{(-1)}(\psi_k) \cdot (\varphi(k+1)-\varphi(k)) =
0$ for all $k$, we see that if $n \not \equiv 1 \bmod 6$, then
$\varphi$ is constantly vanishing, which is impossible.
If $n \equiv 1 \bmod 6$, let $l, m \in \mathbb{F}_n$ be the non trivial third
roots of unity, so $\phi^{(-1)}(\psi_l) = \phi^{(-1)}(\psi_m) = 0$,
while for all $k \not \in \{l, m\}$ we must have $\varphi(k+1) =
\varphi(k)$. In particular, if $l < m$, there are two integers $a, b$
such that
\[ \varphi(2) = 0 = \ldots = \varphi(l); \quad \varphi(l+1) = a =
\ldots = \varphi(m); \quad \varphi(m+1) = b = \ldots = \varphi(n-1).\]
But $\varphi(n-1) = -1$ while $\varphi(n-2) = -1 - \varphi(2) = -1$,
so $b = -1$. For symmetry reasons induced by \rf{FuchsEq}, we must
have $a = -1/2$ and $m = n-l$. This is absurd since $m^3 \equiv 1
\bmod n$ implies $l^3 = (n-m)^3 \equiv -m^3 \equiv -1 \bmod n$, so $n
= 2 \not \equiv 1 \bmod 6$. Thus $\varphi(2) \neq 0$ in this case
too. Since $\phi^{(-1)}(\psi_l) = 0$, it follows however that $C = P(l) =
\varphi(\psi_l) \cdot \phi^{(-1)}(\psi_l) = 0$, and thus $C = 0 = \varphi(2) / 8$,
so we should have $\varphi(2) = 0$, in contradiction with the facts
established above. Consequently, if $n \equiv 1 \bmod 6$, then $P$
cannot be constant.
We consider now the case $n \not \equiv 1 \bmod 6$, in which we know
that $C \neq 0$. By expressing $C = P(1) = P(k)$ we obtain the
following induction formula
\begin{eqnarray*}
C = \frac{1}{12} \cdot \frac{3 \varphi(2)}{2} & = &
\frac{1}{12} (\varphi(k+1) - \varphi(k)) \cdot \frac{k^2 + k + 1}{k(k+1)}, \quad \hbox{hence} \\
\varphi(k+1) - \varphi(k) & = & \frac{3 \varphi(2)}{2} \cdot \frac{k(k+1)}{k^2 + k + 1}, \\
\varphi(3) - \varphi(2) & = & \frac{9}{7} \varphi(2) \quad \Rightarrow \quad \varphi(3) = \frac{16}{7} \varphi(2).
\end{eqnarray*}
By eliminating $\varphi(2)$ from the above identity for two successive
values of $k$ one finds
\[ \varphi(k+1) = \frac{2k^3}{k^3-1} \cdot \varphi(k) +
\frac{k^3+1}{k^3-1} \cdot \varphi(k-1). \]
We shall use the reflexion formula \rf{FuchsEq} between the last and
the first values in the sequence $1, 2,\ldots, n-2, n-1$. Letting $k =
n-2$ in the above induction, we find
\begin{eqnarray*}
-1 & \equiv & \varphi(n-1) \equiv \frac{16}{9} \cdot \varphi(n-2) +
\frac{7}{9} \cdot \varphi(n-3) \\
& \equiv & \frac{16}{9} \cdot(-1- \varphi(2)) + \frac{7}{9} \cdot (-1-\varphi(3)) \bmod n, \\
9 & \equiv & 16 + 16 \varphi(2) + 7 + 7 \varphi(3) \equiv 23 +
(16+7 \cdot \frac{16}{7} ) \varphi(2) \bmod n, \quad \hbox{hence} \\
-7 & \equiv & 16 \cdot \varphi(2) \bmod n.
\end{eqnarray*}
Consequently $\varphi(2) \equiv -\frac{7}{16} \bmod n$ and thus
$\varphi(3) \equiv \frac{16}{7} \varphi(2) \equiv -1 \bmod n$. But
then \rf{FuchsEq} implies that $\varphi(n-3) = -1-\varphi(3) = 0$, and
thus $C = 0$, in contradiction with the non-vanishing of $C$
established previously. This confirms that $P(t)$ is not constant in
this case either.
It remains to verify the claim for the exceptional primes in $E$. For
$n = 3$ the Stickelberger ideal is trivial, so there is nothing to
prove. For $n = 7$ one can repeat the proof of the case $n \equiv 1
\bmod 6$, which requires no division by $7$; this completes the proof
of the Lemma.
\end{proof}
\thanks{\textbf{Acknowledgements}: \textit{The first author is grateful to the Universities of Bordeaux and
G\"ottingen for providing a stimulating environment during the
development of this work. Both authors thank Mike Bennett and
Kalman Gy\H{o}ry for suggesting this interesting problem for an
algebraic investigation.}}
\section{Introduction}
Throughout this paper we consider simple graphs, that is, finite undirected graphs without loops or multiple edges.
Let $G=(V (G),E(G))$ be a connected graph of order $n=|V(G)|$ and of size $m=|E(G)|$.
The distance between two vertices $u$ and $v$ is denoted by $d_G(u,v)$ and is the length of a shortest path between $u$ and $v$ in $G$.
The diameter of $G$ is $\max\{d_G(u,v):~u,v\in V(G)\}$. It is well known that almost all graphs have diameter two.
When $u$ is a vertex of $G$, the neighborhood of $u$ in $G$ is the set $N_G(u)=\{v:~uv\in E(G)\}$.
The degree of $u$ is the number of edges incident to $u$ and is denoted by $\deg_G(u)$.
A graph is said to be regular if all of its vertices have the same degree.
In a search for triangle-free graphs with arbitrarily large chromatic number,
Mycielski \cite{Mycielski} developed an interesting graph transformation as follows.
For a graph $G=(V,E)$, the {\it Mycielskian} of $G$ is the graph $\mu(G)$, or simply $\mu$, with the disjoint union $V\cup X\cup \{x\}$
as its vertex set and $E\cup \{v_ix_j:~v_iv_j\in E\}\cup \{xx_j:~1\leq j\leq n\}$ as its edge set,
where $V=\{v_1,v_2,...,v_n\}$ and $X=\{x_1,x_2,...,x_n\}$.
The Mycielskian and generalized Mycielskians have fascinated graph theorists a great deal. This has resulted
in the study of several graph parameters of these graphs (see, for instance, \cite{Fisher}).
A {\em chemical graph} is a graph whose vertices denote atoms and edges denote bonds between those atoms of any underlying chemical structure.
A {\it topological index} for a (chemical) graph $G$ is a numerical quantity invariant under
automorphisms of $G$ and it does not depend on the labeling or pictorial
representation of the graph.
Topological indices and graph invariants based on the distances between vertices of a graph or vertex degrees are widely used for
characterizing molecular graphs, establishing relationships between structure and properties of molecules, predicting
biological activity of chemical compounds, and in various other chemical applications. These indices may be used
to derive quantitative structure-property or structure-activity relationships (QSPR/QSAR).
The concept of a topological index came from work done by Harold Wiener in 1947 while he was working on the boiling point
of paraffin \cite{Wiener}. He named this index the path number. Later, the path number was renamed the {\it Wiener index}, and the theory of
topological indices started. The Wiener index of $G$ is defined as
$W(G)=\sum_{\{u,v\}\subseteq V(G)} d_G(u,v)$.
The first and oldest degree-based topological index is the Randi$\acute{\mbox{c}}$ index \cite{Randic}, denoted by $R(G)$ and introduced by Milan
Randi$\acute{\mbox{c}}$ in 1975 as
$$R(G)=\sum_{uv\in E(G)}{1\over \sqrt{\deg_G(u)~\deg_G(v)}}.$$
It has been closely correlated with many chemical properties.
The general Randi$\acute{\mbox{c}}$ index was proposed independently by Bollobás and Erd\H{o}s \cite{Bollobas} and by Amic et al. \cite{Amic} in 1998. Since then it has
been extensively studied by both mathematicians and theoretical chemists \cite{MATCH-Randic}, \cite{HuGutman}.
For a survey of results, we refer to the new book by Li and Gutman \cite{LiGutman}.
An important topological index introduced about forty years ago
by Ivan Gutman and Trinajsti$\acute{\mbox{c}}$ \cite{GutmanTrinajstic} is the {\it Zagreb index}, or
more precisely the {\it first Zagreb index}, denoted by $M_1(G)$ and defined as the sum of the degrees of the end vertices of all edges of $G$,
$$M_1(G)=\sum_{uv\in E(G)}(\deg_G(u)+\deg_G(v))=\sum_{x\in V(G)}(\deg_G(x))^2.$$
The {\it degree distance} was introduced by Dobrynin and Kochetova \cite{Dobrynin} and Gutman \cite{Schultz-GutmanIndex} as a weighted version of the
Wiener index. The degree distance of $G$, denoted by $DD(G)$, is defined as follows; it has been computed for important families of graphs (see \cite{MATCH-Degreedistance}, for instance):
$$DD(G)=\sum_{\{ u,v \} \subseteq V(G)} d_G(u,v)~(\deg_G (u)+\deg_G(v)).$$
In this paper we provide upper and lower bounds for the Randi$\acute{\mbox{c}}$ index of Mycielskian graphs.
Also, we determine the degree distance index of the Mycielskian of each graph of diameter two.
To the best of our knowledge, these parameters have not previously been determined for Mycielskian graphs.
\section{The Degree Distance index}
In order to determine the degree distance index of the Mycielskian graph, we need the following observations.
Throughout this paper we suppose that $G$ is a connected graph, $V(G)=\{v_1,v_2,...,v_n\}$,
$X=\{ x_1,x_2,...,x_n\}$, $V(G)\cap X=\emptyset$, $x\notin V(G)\cup X$,
and
$$V(\mu)=V(G)\cup X\cup\{x\}, ~E(\mu)=E(G)\cup \{v_ix_j:~v_iv_j\in E(G)\}\cup\{xx_i:~1\leq i \leq n\}.$$
\begin{observation} \label{MycielskiDegree}
Let $\mu$ be the Mycielskian of $G$. Then for each $v\in V(\mu)$ we have
\begin{eqnarray*}
\deg_\mu (v)=
\begin{cases}
n & v=x \\
1+\deg_G(v_i) & v=x_i \\
2\deg_G(v_i) & v=v_i.
\end{cases}
\end{eqnarray*}
\end{observation}
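For instance, let $G=C_5$ be the cycle on five vertices, so that $\mu(C_5)$ is the well-known Gr\"otzsch graph. By Observation \ref{MycielskiDegree}, $\deg_\mu(x)=5$, $\deg_\mu(x_i)=1+2=3$ and $\deg_\mu(v_i)=2\cdot 2=4$ for each $i$; these degrees sum to $40=2|E(\mu)|$, in accordance with $|E(\mu)|=3m+n=20$.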
\begin{observation} \label{MycielskiDistance}
Distances between the vertices of the Mycielskian $\mu$ of $G$ are given as follows.
For each $u,v\in V(\mu)$ we have
\begin{eqnarray*}
d_\mu(u,v)=
\left\{ \begin{array}{ll}
1 & u=x,~v=x_i \\
2 & u=x,~v=v_i \\
2 & u=x_i,~v=x_j \\
d_G(v_i,v_j) & u=v_i,~v=v_j,~d_G(v_i,v_j)\leq 3 \\
4 & u=v_i,~v=v_j,~d_G(v_i,v_j)\geq 4 \\
2 & u=v_i,~v=x_j,~i=j \\
d_G(v_i,v_j) & u=v_i,~v=x_j,~i\neq j,~d_G(v_i,v_j)\leq 2 \\
3 & u=v_i,~v=x_j,~i\neq j,~d_G(v_i,v_j)\geq 3.
\end{array} \right.
\end{eqnarray*}
\end{observation}
Note that there are $|E(G)|$ unordered pairs of vertices in $V(G)$ whose distance is 1,
$$ | \left\{ \{u,v\}\subseteq V(G): ~ d_G(u,v)=1 \right\} | = |E(G)|. $$
Also,
$$\sum_{ \substack{ \{u,v\}\subseteq V(G) \\ d_G(u,v)=1 }} (\deg_G(u)+\deg_G(v))=
\sum_{uv\in E(G) } (\deg_G(u)+\deg_G(v))=M_1(G).$$
As noted above, almost all graphs have diameter two, so graphs of diameter two play an important role in the theory of graphs.
\begin{lm} \label{Distance2}
If $G$ is a graph of diameter 2, then
\begin{eqnarray*}
\sum_{ \substack{ \{v_i,v_j\}\subseteq V(G) \\ d_G(v_i,v_j)=2 } } (\deg_G(v_i)+\deg_G(v_j)) = 2(n-1) |E(G)| - M_1(G).
\end{eqnarray*}
\end{lm}
\begin{proof}{
Since the diameter of $G$ is two and each vertex $v_i\in V(G)$ has $\deg_G(v_i)$ neighbours in $G$, the number of vertices of $V(G)$ whose distance to $v_i$ is two equals $n-1-\deg_G(v_i)$. This implies that
\begin{eqnarray*}
\sum_{ \substack{ \{v_i,v_j\}\subseteq V(G) \\ d_G(v_i,v_j)=2 } } (\deg_G(v_i)+\deg_G(v_j)) &=& \sum_{i=1}^n (n-1-\deg_G(v_i)) \deg_G(v_i) \\
&=& (n-1) \sum_{i=1}^n \deg_G(v_i) - \sum_{i=1}^n (\deg_G(v_i))^2 \\
&=& 2(n-1) |E(G)| - M_1(G).
\end{eqnarray*}
}\end{proof}
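As a quick check, for $G=C_5$ we have $n=5$, $|E(G)|=5$ and $M_1(G)=20$, so the lemma gives $2(n-1)|E(G)|-M_1(G)=40-20=20$; indeed, $C_5$ has exactly five pairs of vertices at distance two, each contributing $\deg_G(v_i)+\deg_G(v_j)=4$.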
\begin{theorem} \label{DegreeDistance}
Let $G$ be an $n$-vertex graph of size $m$ whose diameter is 2. If $\mu$ is the Mycielskian of $G$, then the degree distance index of $\mu$ is given by
$$DD(\mu)=4 DD(G)- M_1(G) +( 7n-1)n + (8n+12)m,$$
where, $M_1(G)$ is the first Zagreb index of $G$.
\end{theorem}
\begin{proof}{
By the definition of degree distance index, we have
\begin{eqnarray*}
DD(\mu(G))&=&\sum_{\{ u,v \} \subseteq V(\mu)} d_\mu(u,v)~(\deg_\mu (u)+\deg_\mu(v)).
\end{eqnarray*}
According to the different ways in which $u$ and $v$ can be chosen from the set $V(\mu)$, we consider the following cases.
We use the same notation as before. For computing degrees and distances, Observations \ref{MycielskiDegree} and \ref{MycielskiDistance} are applied.
\\
{\large \bf Case 1.} $u=x$ and $v\in X$:
\begin{eqnarray*}
\sum_{i=1}^n d_\mu(x,x_i)~(\deg_\mu(x)+\deg_\mu(x_i)) = \sum_{i=1}^n 1~(n+(1+\deg_G(v_i))) = n(n+1)+2m.
\end{eqnarray*}
{\large \bf Case 2.} $u=x$ and $v\in V(G)$:
\begin{eqnarray*}
\sum_{i=1}^n d_\mu(x,v_i) ~(\deg_\mu(x)+\deg_\mu(v_i)) = \sum_{i=1}^n 2~(n+2\deg_G(v_i)) = 2~(n^2+4m).
\end{eqnarray*}
{\large \bf Case 3.} $\{u,v\}\subseteq X$:
\begin{eqnarray*}
\sum_{\{x_i,x_j\}\subseteq X} d_\mu(x_i,x_j)~ (\deg_\mu(x_i)+\deg_\mu(x_j)) &=& \sum_{\{x_i,x_j\}\subseteq X} 2~ ((1+\deg_G(v_i))+(1+\deg_G(v_j))) \\
&=& 2 \left( \sum_{\{x_i,x_j\}\subseteq X} 2 + \sum_{\{x_i,x_j\}\subseteq X} (\deg_G(v_i)+\deg_G(v_j)) \right) \\
&=& 2 \left( 2\binom{n}{2} + \sum_{k=1}^n (n-1) \deg_G(v_k) \right) \\
&=& 2n^2-2n+4(n-1)m.
\end{eqnarray*}
Note that for each $x_k\in X$ we have $|\{ \{x_k,x_j\}:~j\neq k\}|=n-1$.
\\
{\large \bf Case 4.} $\{u,v\}\subseteq V(G)$:
\\
Since the diameter of $G$ is two, Observation \ref{MycielskiDistance} implies that $d_\mu (v_i,v_j)=d_G(v_i,v_j)$ for each $v_i,v_j\in V(G)$. Hence,
\begin{eqnarray*}
\sum_{\{v_i,v_j\}\subseteq V(G)} d_\mu(v_i,v_j)~ (\deg_\mu(v_i)+\deg_\mu(v_j))
&=& \sum_{\{v_i,v_j\}\subseteq V(G)} d_G (v_i,v_j)~ (2\deg_G(v_i)+2\deg_G(v_j)) \\
&=& 2~ DD(G).
\end{eqnarray*}
\\
{\large \bf Case 5.} $u=v_i$ and $v=x_i$, $1\leq i\leq n$:
\begin{eqnarray*}
\sum_{i=1}^n ~d_\mu (v_i,x_i)~ (\deg_\mu(v_i)+\deg_\mu (x_i)) &=& \sum_{i=1}^n ~2~ ( 2\deg_G(v_i)+1+\deg_G(v_i) ) \\
&=& 2~(n+6m).
\end{eqnarray*}
\\
{\large \bf Case 6.} $u=v_i$ and $v=x_j$, $i\neq j$:
\begin{eqnarray*}
\sum_{\substack{\{v_i,x_j\} \subseteq V(\mu) \\ i\neq j}} d_\mu(v_i,x_j)~(\deg_\mu(v_i)+\deg_\mu(x_j))
&=& \sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ i\neq j}} d_\mu(v_i,x_j)~ (2\deg_G(v_i)+1+\deg_G(v_j) ) \\
&=& \sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ i\neq j}} d_\mu(v_i,x_j)~ (\deg_G(v_i)+\deg_G(v_j)) \\
&+& \sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ i\neq j } } d_\mu(v_i,x_j)~ (1+\deg_G(v_i)).
\end{eqnarray*}
Since $d_\mu(v_i,x_j)=d_\mu(v_j,x_i)$, and using Observation \ref{MycielskiDistance}, we have
\begin{eqnarray*}
\sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ i\neq j}} d_\mu(v_i,x_j)~ (\deg_G(v_i)+\deg_G(v_j)) &=&
2 \sum_{\substack{ \{v_i,v_j\} \subseteq V(G) \\ i\neq j }} d_\mu(v_i,x_j)~ (\deg_G(v_i)+\deg_G(v_j)) \\
&=& 2 \sum_{\substack{ \{v_i,v_j\} \subseteq V(G) \\ i\neq j }} d_G(v_i,v_j)~ (\deg_G(v_i)+\deg_G(v_j)) \\
&=& 2~DD(G).
\end{eqnarray*}
Now since the diameter of $G$ is two, Lemma \ref{Distance2} implies that
\begin{eqnarray*}
\sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ i\neq j } } d_\mu(v_i,x_j)(1+\deg_G(v_i)) &=&
\sum_{\substack{ \{v_i,x_j\} \subseteq V(\mu) \\ d_G(v_i,v_j)=1 }} 1~(1+\deg_G(v_i)) \\
&+& \sum_{\substack{\{v_i,x_j\} \subseteq V(\mu) \\ d_G(v_i,v_j)=2 }} 2~(1+\deg_G(v_i)) \\
&=& 2 m + \sum_{v_iv_j \in E(G)} (\deg_G(v_i)+\deg_G(v_j)) \\
&+& 2 \left( 2 ({n\choose 2}-m) + \sum_{\substack{ \{v_i,v_j\} \subseteq V(G)\\ d_G(v_i,v_j)=2 }} (\deg_G(v_i)+\deg_G(v_j))\right) \\
&=& 2m+M_1(G) \\
&+& 2 ( n(n-1)-2m + 2(n-1)m-M_1(G)) \\
&=& 2n(n-1) + 2m(2n-3)-M_1(G).
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\sum_{\substack{\{v_i,x_j\} \subseteq V(\mu) \\ i\neq j}} d_\mu(v_i,x_j)~(\deg_\mu(v_i)+\deg_\mu(x_j)) = 2~DD(G) - M_1(G) + 2n(n-1) + 2m(2n-3).
\end{eqnarray*}
Therefore, using the cases 1 to 6, we obtain
\begin{eqnarray*}
DD(\mu)&=& \bigg{(} n(n+1)+2m \bigg{)} + \bigg{(} 2n^2+8m \bigg{)} + \bigg{(} 2n^2-2n+4(n-1)m \bigg{)} + \bigg{(} 2~ DD(G) \bigg{)} \\
&+& \bigg{(} 2n+12m \bigg{)} + \bigg{(} 2~DD(G) - M_1(G) + 2n(n-1) + 2m(2n-3) \bigg{)} \\
&=& 4~DD(G) - M_1(G) + (7n-1)n + (8n+12)m.
\end{eqnarray*}
}\end{proof}
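To illustrate Theorem \ref{DegreeDistance}, let $G=C_5$, so that $\mu$ is the Gr\"otzsch graph. Then $n=m=5$, $DD(C_5)=5\cdot 1\cdot 4+5\cdot 2\cdot 4=60$ and $M_1(C_5)=20$, and the theorem gives
$$DD(\mu(C_5))=4\cdot 60-20+(7\cdot 5-1)\cdot 5+(8\cdot 5+12)\cdot 5=650,$$
which agrees with a direct summation over the $\binom{11}{2}=55$ pairs of vertices of the Gr\"otzsch graph.
The following short script, a minimal sketch assuming the \texttt{networkx} Python library (the helper names \texttt{mycielskian}, \texttt{degree\_distance} and \texttt{first\_zagreb} are only illustrative), builds $\mu(G)$ directly from the definition given in the introduction and compares both sides of Theorem \ref{DegreeDistance} for a chosen graph of diameter two.
\begin{verbatim}
# Minimal sketch: build mu(G) from the definition and check the
# degree distance formula numerically (requires networkx).
import itertools
import networkx as nx

def mycielskian(G):
    # vertices: V(G), shadow copies ('x', v), and the apex vertex 'apex'
    M = nx.Graph()
    M.add_nodes_from(G.nodes())
    M.add_nodes_from(('x', v) for v in G.nodes())
    M.add_node('apex')
    M.add_edges_from(G.edges())
    for u, v in G.edges():          # edges v_i x_j for every edge v_i v_j
        M.add_edge(u, ('x', v))
        M.add_edge(v, ('x', u))
    for v in G.nodes():             # edges x x_j
        M.add_edge(('x', v), 'apex')
    return M

def degree_distance(H):
    dist = dict(nx.all_pairs_shortest_path_length(H))
    return sum(dist[u][v] * (H.degree(u) + H.degree(v))
               for u, v in itertools.combinations(H.nodes(), 2))

def first_zagreb(G):
    return sum(d * d for _, d in G.degree())

G = nx.cycle_graph(5)               # a graph of diameter two
n, m = G.number_of_nodes(), G.number_of_edges()
lhs = degree_distance(mycielskian(G))
rhs = 4*degree_distance(G) - first_zagreb(G) + (7*n - 1)*n + (8*n + 12)*m
print(lhs, rhs)                     # both equal 650 for G = C_5
\end{verbatim}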
\section{The Randi$\acute{\mbox{c}}$ index}
In the following theorem we provide upper and lower bounds for the Randi$\acute{\mbox{c}}$ index of the Mycielskian graph.
\begin{theorem}
Let $G$ be an $n$-vertex graph of size $m$ with maximum degree $\Delta$ and minimum degree $\delta$, whose Mycielskian graph is $\mu$. Then,
\begin{eqnarray*}
{1\over 2} R(G) + {\sqrt{2}~m + \sqrt{n\Delta} \over \sqrt{\Delta^2+\Delta}}
\leq R(\mu) \leq
{1 \over 2} R(G) + {\sqrt{2}~m + \sqrt{n\delta} \over \sqrt{\delta^2+\delta}}
\end{eqnarray*}
Moreover, equalities hold if and only if $G$ is a regular graph.
\end{theorem}
\begin{proof}{
By the definition of the Randi$\acute{\mbox{c}}$ index and by considering the various types of edges in $\mu$, one can see that
\begin{eqnarray*}
R(\mu) &=& \sum_{uv \in E(\mu)} \frac{1}{\sqrt{\deg_\mu (u)~\deg_\mu (v)}} \\
&=& \sum_{v_i v_j \in E(\mu)} \frac{1}{\sqrt{\deg_\mu (v_i)~\deg_\mu (v_j)}} \\
&+& \sum_{x_iv_j\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x_i)~\deg_\mu (v_j)}} \\
&+& \sum_{xx_i\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x)~\deg_\mu (x_i)}}.
\end{eqnarray*}
Since $N_\mu(x_i)=N_G(v_i)\cup \{x\}$ and using Observation \ref{MycielskiDegree} we get
\begin{eqnarray*}
\sum_{x_iv_j\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x_i)~\deg_\mu (v_j)}} &=&
\sum_{x_iv_j\in E(\mu)} \frac{1}{\sqrt{(1+\deg_G(v_i))~(2\deg_G(v_j))}} \\
&=& \sum_{i=1}^n \sum_{v_j \in N_G(v_i)} \frac{1}{\sqrt{(1+\deg_G(v_i))(2\deg_G(v_j))}} \\
&\leq& \sum_{i=1}^n \sum_{v_j \in N_G(v_i)} \frac{1}{\sqrt{(\delta +1)(2\delta)}} \\
&=& \sum_{i=1}^n \frac{\deg_G(v_i)}{\sqrt{(\delta+1)(2\delta)}} \\
&=& \frac{2m}{\sqrt{(\delta+1)(2\delta)}} \\
&=& \frac{\sqrt{2}m}{\sqrt{\delta^2+\delta}}.
\end{eqnarray*}
Similarly, we can see that
\begin{eqnarray*}
\frac{\sqrt{2}m}{\sqrt{\Delta^2+\Delta}} \leq \sum_{x_iv_j\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x_i)~\deg_\mu (v_j)}}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\sum_{xx_i\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x)~\deg_\mu (x_i)}} &=&
\sum_{xx_i\in E(\mu)} \frac{1}{\sqrt{n(1+\deg_G(v_i))}} \\
&=& \frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{1}{\sqrt{1+\deg_G(v_i)}}.
\end{eqnarray*}
This implies that
$$ \sqrt{ \frac{n}{1+\Delta} } \leq
\sum_{xx_i\in E(\mu)} \frac{1}{\sqrt{\deg_\mu (x)~\deg_\mu (x_i)}}
\leq \sqrt{ \frac{n}{1+\delta } } ~.$$
Now since
\begin{eqnarray*}
\sum_{v_i v_j \in E(\mu)} \frac{1}{\sqrt{\deg_\mu (v_i)~\deg_\mu (v_j)}}
&=& \sum_{v_i v_j \in E(G)} \frac{1}{\sqrt{(2\deg_G(v_i))~(2\deg_G(v_j))}} \\
&=& \frac{1}{2}R(G),
\end{eqnarray*}
the proof is complete.
}\end{proof}
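We remark that for a $d$-regular graph the two bounds coincide, consistent with the equality statement: with $\delta=\Delta=d$ and $m=nd/2$, both bounds reduce to
$$R(\mu)=\frac{n}{4}+\frac{n\sqrt{d}}{\sqrt{2(d+1)}}+\sqrt{\frac{n}{d+1}}.$$
For instance, for $G=C_5$ this gives $R(\mu(C_5))=\frac{5}{4}+\frac{10}{\sqrt{12}}+\frac{5}{\sqrt{15}}\approx 5.43$, which is also obtained by summing directly over the $5+10+5$ edges of the three types in the Gr\"otzsch graph.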
\section{Introduction}
One of the most challenging obstacles to the realization of solid-state quantum computing devices is decoherence caused by charge noise.\cite{dovzhenko2011nonadiabatic,petersson2010quantum,dial2013charge,muhonen2014storing,veldhorst2014addressable,veldhorst2014a}
Charge fluctuations in solid-state devices can arise from several sources, such as Johnson noise from electrical wiring,\cite{muhonen2014storing,nyquist1928thermal,johnson1928thermal} evanescent-wave Johnson noise from metallic gates,\cite{langsjoen2012qubit,poudel2013relaxation} or $1/f$ noise.\cite{paladino2014noise}
The most widely accepted explanation for $1/f$ noise is the presence in the host sample of bistable localized charge states.\cite{dutta1981low}
Such two-level fluctuators involve tunneling between two spatial configurations with nearly equal potential energy and are routinely observed in amorphous materials.\cite{anderson1972anomalous,black1978interaction,black1981low,agarwal2013polaronic}
These fluctuators have been observed as spurious resonances in the spectrum of superconducting phase qubits,\cite{simmonds2004decoherence} and have been the subject of an extensive literature in the Josephson qubit community.\cite{lisenfeld2010measuring,grabovskij2012strain,muller2011coherent,dubois2013delocalized,o2015qubit}
Similar two-level fluctuators consisting of a charge hopping between localized states have been observed in the environment of various other solid-state devices, including lateral gated heterostructures\cite{kurdak1997resistance,jung2004background,pioro2005origin,buizert2008insitu} and self-assembled quantum dots.\cite{kuhlman2013charge,hauck2014locating} Two-level fluctuators have thus been considered an important source of qubit dephasing in several theoretical studies.\cite{faoro2006quantum,sun2010quantum,muller2009relaxation,culcer2009dephasing,ramon2010decoherence,culcer2013dephasing}
Despite the ubiquitousness of two-level charge fluctuators in the solid state, their physical nature can be expected to change from one system to the next. In addition, the microscopic mechanisms causing transitions within pairs of states can hardly be assumed to be universal. For example, the fluctuators can interact with a phonon bath.
\cite{lisenfeld2010measuring,wold2012decoherence}
Alternatively, charge traps near metallic gates or itinerant bands can undergo tunneling.\cite{paladino2002decoherence,desousa2005ohmic,abel2008decoherence,yurkevich2010decoherence}
To minimize the consequent deleterious effects on qubit coherence, it is important to be able to discriminate between different fluctuator baths (e.g., phonons or electrons) from a simple set of measurements.
Any experiment that is designed to measure qubit coherence will typically reveal information about the local environment and may shed light on charge dynamics. Qubit coherence is described by the coherence factor, which empirically often takes the form
\cite{medford2012scaling,dial2013charge}
\begin{equation}
C(t)=\exp[-(t/T_2)^{\alpha}].\label{eqnSpectro}
\end{equation}
Here, the coherence time, $T_2$, and stretching parameter, $\alpha$, parametrize the decay of qubit coherence. When $\alpha=1$, Eq.~\eqref{eqnSpectro} describes exponential decay, arising from Markovian evolution of the qubit. For $\alpha\ne 1$, Eq.~\eqref{eqnSpectro} describes a non-Markovian stretched-exponential ($\alpha< 1$) or compressed-exponential ($\alpha> 1$) decay.
The analysis of coherence measurements giving the above empirical form is often based on phenomenological techniques. In the presence of classical Gaussian dephasing noise, $C(t)$ can be written as a simple function of the associated noise spectrum. An analytical form for the noise spectrum is then chosen to best fit the measured coherence factor, $C(t)$.\cite{medford2012scaling,dial2013charge} For example, choosing a $1/f$-like spectrum $S(\nu)\propto 1/\nu^{\beta}$, with $\beta=\alpha-1>0$, exactly yields a coherence factor described by Eq.~\eq{eqnSpectro}.
In this paper, rather than assuming a $1/f$-like spectrum, we begin from a generic microscopic model of fluctuator dynamics. This model results in a coherence factor that closely approximates the compressed-exponential form given in Eq.~\eq{eqnSpectro}. From this model, we find closed-form expressions for the coherence time and stretching parameter, $T_2$ and $\al$. These results allow us to predict a crossover from the non-Markovian to the Markovian regime as temperature $T$ is varied. In addition, we find that different microscopic mechanisms giving rise to fluctuator dynamics typically lead to distinct power-law dependences for $T_2(T)$. In combination with complementary theoretical studies of $T_2(T)$ in the Markovian regime (see, e.g., Refs. \onlinecite{semenov2004phonon,roszak2009phonon,kornich2014phonon}), this will help to better understand and suppress microscopic sources of dephasing.
This paper is divided as follows. In Sec.~\ref{secTwoLevel}, we present the general features of the fluctuator model used throughout the paper. This fluctuator model is used in Sec.~\ref{secCompressed} to show that the qubit coherence factor is well approximated by the compressed exponential form, Eq.~\eqref{eqnSpectro}. In Secs.~\ref{secElectrons} and \ref{secPhonons}, we find analytical expressions for the fluctuator equilibration time and the corresponding noise amplitude for fluctuators coupled to electron or phonon baths, respectively. These expressions are then used in Sec.~\ref{secMicro} to find the temperature dependence of the qubit coherence time $T_2$ and the stretching parameter $\al$ for the microscopic mechanisms considered in Secs.~\ref{secElectrons} and \ref{secPhonons}. We conclude by illustrating an application of this theory to recent experiments.
\section{Two-level fluctuators\label{secTwoLevel}}
We consider an ensemble of two-level fluctuators coupled to a qubit. Each fluctuator is itself coupled to an independent thermal bath, allowing equilibration [see Fig.~\ref{figQFB}(a)]. The qubit is subject to a train of fast $\pi$-pulses. In the toggling frame,\cite{mehring1976high} which accounts for dynamics induced by qubit rotations, the Hamiltonian is then
\begin{equation}
\ham=\underbrace{\ham_\mrm Q(t)+\sum_n\left[\ham_\mrm F^{n}+\ham_\mrm{FB}^{n}+\ham_\mrm B^{n}\right]}_{\equiv\ham_0(t)}+\underbrace{\sum_n\ham_\mrm{QF}^{n}(t)}_{\equiv\hat V},\label{eqnH}
\end{equation}
where
\begin{align}
\ham_\mrm Q(t)&=\frac12\hbar\w_\mrm Q s(t)\sz,\qquad\qquad\ham_\mrm F^{n}=\frac12\hbar\w_n\tz_n,\\
\ham_\mrm{QF}^{n}(t)&=\frac12\hbar\W_n s(t)\sz\tz_n.\label{eqnHQF0}
\end{align}
Here, we have introduced the Pauli matrices $\sz$ and $\tz_n$ for the qubit and for the $n$-th fluctuator, respectively. The qubit and fluctuator energy splittings are $\hbar\w_\mrm Q$ and $\hbar\w_n$, respectively, and the qubit-fluctuator couplings are $\hbar\W_n$. The sign function, $s(t)$, alternates between $s(t)=\pm 1$ at times $t_m=t_1, t_2, t_3,\ldots, t_{s-1}$, accounting for a sequence of fast $\pi$-pulses, ending at $t=t_s$ [see Fig.~\ref{figQFB}(d)]. Here we will focus on free-induction decay (no $\pi$-pulse) and Hahn echo (a single $\pi$-pulse),\cite{hahn1950spin} but this notation also allows for a direct extension to other pulse sequences, including, e.g., Carr-Purcell\cite{carr1954effects} or Uhrig dynamical decoupling.\cite{uhrig2007keeping} Retaining only the Ising-like terms $\sim\sz\tz_n$ in the qubit-fluctuator Hamiltonian is justified within a secular approximation, in which the qubit and typical fluctuator splittings are assumed to be large compared to the relevant couplings, $\hbar\omega_Q,\hbar\omega_n \gg \hbar\Omega_n$. The fluctuator-bath interaction $\ham_\mrm{FB}^{n}$ and bath Hamiltonian $\ham_\mrm B^{n}$ are left unspecified for now. Microscopic forms for these Hamiltonians are considered in Secs.~\ref{secElectrons} and \ref{secPhonons}, where we analyze fluctuator equilibration dynamics for specific physical systems.
\begin{figure}
\begin{center}
\includegraphics{figQFB.pdf}
\end{center}
\caption{(Color online) (a) A qubit (Q) is coupled to an ensemble of independent fluctuators. Each fluctuator $(F_n)$ is itself coupled to an independent bath $(B_n)$. (b) A two-level fluctuator. Because of the interaction with its bath, the two-level fluctuator is excited at the rate $\gamma_\upa$ and relaxes at the rate $\gamma_\dwna$. (c) Two-level fluctuators can be, e.g., two localized states (represented by the green wave functions) between which a charge can tunnel. (d) The qubit evolves under the influence of sharp control $\pi$-pulses. \label{figQFB}}
\end{figure}
To set up a perturbative expansion, we define $\hat V'\equiv\hat V-\mean{\hat V}_F$ and $\ham_0'(t)\equiv\ham_0(t)+\mean{\hat V}_F$, where $\ham_0$ and $\hat V$ are defined in Eq.~\eq{eqnH}. The expectation values $\mean{\cdot}_F$ are taken with respect to the initial state of the fluctuators. We then move to the interaction picture, taking $\hat V'$ as a perturbation (i.e., for a general operator $\hat O$, $\hat O_\mrm I(t)=U_0^\dagger(t)\hat O U_0(t),\;U_0(t) = \exp\left[-i\int_0^t dt' \hat H_0'(t')/\hbar\right]$). We thus have
\begin{align}
\hat V'_\mrm I(t)=\frac12\hbar\hat\xi(t)s(t)\sz,\label{eqnHQF}
\end{align}
and we have introduced the noise operator
\begin{align}
\hat\xi(t)=\sum_n\W_n\left[\hat\tau^z_{n,\mrm I}(t)-\mean{\tz_n}_F\right]. \label{eqnXi}
\end{align}
Our goal is to evaluate the coherence factor parametrized by a pulse sequence $s$,
\begin{equation}\label{eq:CsDefinition}
C^s(t_s) = \left|\langle \hat{S}_+(t_s)\rangle\right|/\left|\langle \hat{S}_+(0)\rangle\right|,
\end{equation}
where $\langle \hat{S}_+(t_s)\rangle=\left[\langle \hat{\sigma}^x(t_s)\rangle+i\langle \hat{\sigma}^y(t_s)\rangle\right]/2$ is the off-diagonal element of the qubit density matrix in the $\hat{\sigma}^z$ eigenbasis. Under quite general conditions, Eq.~\eqref{eq:CsDefinition} can be accurately evaluated using a Magnus expansion.
\cite{blanes2009magnus,magnus1954exponential,maricq1982application}
The leading-order term in the Magnus expansion describes dynamics under the action of the time average of $\hat V'_\mrm I(t)$. This leading-order term will always dominate at sufficiently short time or for sufficiently rapid fluctuations in the noise operator (see Appendix \ref{secMagnus}). Assuming a large number of independent fluctuators, $\hat\xi(t)$ becomes a source of Gaussian noise due to the central-limit theorem. Conditions for Gaussian noise to dominate over the leading non-Gaussian corrections to the qubit coherence factor are discussed in Appendix~\ref{secMagnus}. We will also assume that the noise is stationary, i.e., that the fluctuators are in a steady state. If, in addition, the initial state of the fluctuators and the qubit is separable, the coherence factor is given by
\begin{align}
&C^s(t_s)=\eul{-\frac12\int_0^{t_s}dt_1\int_0^{t_s}dt_2s(t_1)s(t_2)g(t_1-t_2)},\label{eqnCts}\\
&g(t)=\mean{\hat\xi(t)\hat\xi(0)},\label{eqnAuto}
\end{align}
where $s\rightarrow\ast$ for free-induction decay and $s\rightarrow\mrm e$ for Hahn echo. In Appendix~\ref{secMagnus}, we consider subleading corrections to the leading-order Magnus expansion and Gaussian approximation. These corrections set limits on the range of validity of Eq.~\eq{eqnCts}.
In the frequency domain, Eq.~\eq{eqnCts} becomes\cite{martinis2003decoherence,cywinski2008how,uhrig2008exact,desousa2009electron}
\begin{equation}
C^s(t_s)=\exp\left[-\int_{-\infty}^\infty d\nu\,\frac{S(\nu)}{\nu^2}F^s(\nu t_s)\right], \label{eqnFilter}
\end{equation}
where the noise spectrum $S(\nu)$ and filter function $F^s(\nu t_s)$ are given by
\begin{eqnarray}
S(\nu)&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} dt\,\eul{i\nu t}g(t),\\
F^s(\nu t_s)&=&\frac{\nu^2}{2}\left|\int_0^{t_s} dt s(t)e^{i\nu t}\right|^2.
\end{eqnarray}
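For reference, the time integral in $F^s$ can be evaluated explicitly for the two pulse sequences considered here: $\int_0^{t_s}dt\,s(t)\eul{i\nu t}$ equals $(\eul{i\nu t_s}-1)/i\nu$ for free-induction decay and $-(\eul{i\nu t_s/2}-1)^2/i\nu$ for Hahn echo, giving $F^\ast(x)=2\sin^2(x/2)$ and $F^\mrm e(x)=8\sin^4(x/4)$. In particular, $F^\ast(x)\propto x^2$ and $F^\mrm e(x)\propto x^4$ for $x\rightarrow0$, a property used below in discussing the convergence of the integral in Eq.~\eqref{eqnT21surf}.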
A natural way to describe a compressed-exponential decay [Eq.~\eqref{eqnSpectro}] is to postulate a $1/f$-like noise spectrum,\cite{cywinski2008how,medford2012scaling,dial2013charge,veldhorst2014addressable}
\begin{equation}
S(\nu) = \frac{A}{|\nu|^\beta},\label{eqn1overfNoise}
\end{equation}
with a general exponent $\beta$. Such a spectrum can also be justified from noise-spectroscopy measurements.\cite{bylander2011noise,yuge2011measurement} Inserting the $1/f$-like spectrum, Eq.~\eqref{eqn1overfNoise}, into Eq.~\eqref{eqnFilter} leads directly to a compressed-exponential decay [Eq.~\eqref{eqnSpectro}] with stretching parameter $\alpha$ and coherence time $T_2^s$ given by
\begin{eqnarray}
\alpha &=& 1+\beta,\label{eq:alpha1overf}\\
T_2^s &=& \left(2A\int_0^\infty dx\frac{F^s(x)}{x^{\alpha+1}}\right)^{-1/\alpha}. \label{eqnT21surf}
\end{eqnarray}
$T_2^s$ exists when the integral in Eq.~\eq{eqnT21surf} converges, i.e., when $\al<2$ for free-induction decay (since $F^\ast(x)\propto x^2$ for $x\rightarrow0$) and when $\al<4$ for Hahn echo (since $F^\mrm e(x)\propto x^4$ when $x\rightarrow0$). One consequence of Eq.~\eqref{eqn1overfNoise} is that the stretching parameter $\alpha$ depends only on the noise spectrum through the exponent $\beta$ [Eq.~\eqref{eq:alpha1overf}], not on the pulse sequence $s$. This procedure provides a satisfying and useful relationship between the stretching parameter $\alpha$, coherence time $T_2^s$, and pulse sequence $s$. However, ultimately, Eq.~\eqref{eqn1overfNoise} amounts to a (non-unique) reparametrization of the observed compressed-exponential decay and does not necessarily provide additional insight into the relevant physical processes or further predictive power. An alternative approach, which we take here, is to directly evaluate fluctuator dynamics from plausible microscopic interactions.
Equation \eq{eqnCts} shows that for a given pulse sequence, $C^s(t_s)$ is entirely determined by the autocorrelation function $g(t)$ of the fluctuator-induced noise. To evaluate this autocorrelation function, we consider the regime where the fluctuator dynamics are described by a Markovian master equation. The evolution of a fluctuator is Markovian when the fluctuator equilibrates with its local bath on a time scale $\tau_n$ that is long compared to the bath correlation time $\tau_\mrm{cB}^n$. Typically, $\tau_\mrm{cB}^n$ is set by the inverse bandwidth of bath excitations. When $\tau_n\gg\tau_\mrm{cB}^n\,\forall\,n$, the evolution of the fluctuators is described by a Lindblad-form master equation. Assuming, as illustrated in Fig.~\ref{figQFB}(a), that each fluctuator is coupled to an independent bath, the reduced density matrix, $\hat\rho_n$, for fluctuator $n$ evolves according to
\begin{align}
\dot{\hat\rho}_n(t)&=\liouv_n\hat\rho_n(t),\\
\liouv_n\cdot&=-\frac{i}{\hbar}[\ham_\mrm F^n,\cdot]+\gamma_\upa^n\dissip[\tpl_n]\cdot+\gamma_\dwna^n\dissip[\tm_n]\cdot,\label{eqnLiouv}
\end{align}
where $\dissip[\hat X]\hat O=\hat X\hat O\hat X^\dagger-\frac12(\hat X^\dagger \hat X \hat O+\hat O\hat X^\dagger \hat X)$ and where $\g^n_\upa$ and $\g^n_\dwna$ are the excitation and relaxation rates for fluctuator $n$ [see Fig.~\ref{figQFB}(b)]. In the above equation, and throughout this paper, the centerdot (``$\cdot$'') represents an arbitrary operator upon which the relevant superoperator is applied. Using Eq.~\eq{eqnLiouv}, it is then straightforward to evaluate $g(t)$ with the usual multitime averaging formula.\cite{gardiner2000quantum} Under the stationary-noise assumption, the autocorrelation function of the resulting noise becomes that of a mixture of independent Ornstein-Uhlenbeck processes,\cite{uhlenbeck1930theory,wang1945theory}
\begin{equation}
g(t_1-t_2)=\sum_n \Delta\xi^2_n\eul{-|t_1-t_2|/\tau_n}\label{eqnOU}.
\end{equation}
Here, $\Delta\xi_n$ is the amplitude of the noise induced by fluctuator $n$ and $\tau_n$ is the associated equilibration time. These parameters are related directly to the excitation (relaxation) rates $\gamma_{\uparrow(\downarrow)}^n$ and couplings $\Omega_n$ through
\begin{align}
\Delta\xi_n^2&=\W_n^2\frac{4\g_\upa^n\g_\dwna^n}{[\g_\upa^n+\g_\dwna^n]^2},\label{eqnDxind}\\
1/\tau_n&=\g_\upa^n+\g_\dwna^n.\label{eqnTau}
\end{align}
We note that Eqs.~\eq{eqnOU} to \eq{eqnTau} would be unchanged if a pure dephasing term $\propto\dissip[\tz_n]\cdot$ were added to Eq.~\eq{eqnLiouv}.
As is well known, a mixture of Ornstein-Uhlenbeck processes, Eq.~\eqref{eqnOU}, can approximate $1/f$ noise with an appropriately chosen distribution of amplitudes and equilibration times.\cite{surdin1939fluctuations,bernamont1937fluctuations,mcwhorter19571,dutta1981low,paladino2014noise} It is not, however, generally necessary to approximate a $1/f$-like noise spectrum [Eq.~\eqref{eqn1overfNoise}] to find a coherence factor $C^s(t_s)$ that approximates a compressed-exponential decay. As we illustrate numerically below, even a Lorentzian noise spectrum associated with a single equilibration time $\tau=\tau_n$ results in an approximate compressed-exponential decay over a wide parameter range.
\section{Functional form of the coherence factor \label{secCompressed}}
Substituting the noise autocorrelation function [Eq.~\eq{eqnOU}] into the coherence factor [Eq.~\eq{eqnCts}] with the function $s(t)$ for either free-induction decay ($s\rightarrow\ast$) or Hahn echo ($s\rightarrow\mrm e$) gives the closed-form expressions,\cite{klauder1962spectral,desousa2009electron}
\begin{align}
C^s(t_s)&=\exp\left[-f^s(t_s)\right],\label{eqnCsts}\\
f^s(t_s)&=\frac{t_s}{T_{2\mrm M}}-\sum_n\D\xi_n^2\tau_n^2\,h^s(t_s/\tau_n),\label{eqnfs}\\
1/T_{2\mrm M}&=\sum_n\D\xi_n^2\tau_n,\label{eqnTdM}
\end{align}
where
\begin{align}
&h^\ast(x)=1-\eul{-x},\label{eqnhst}\\
&h^\mrm e(x)=\eul{-x}-4\eul{-x/2}+3.\label{eqnhe}
\end{align}
We define $T^s_2$ to be the $1/e$ decay time of $C^s(t)$ through
\begin{equation}
f^s(T_2^s)=1. \label{eqnDefT2s}
\end{equation}
The form of $C^s(t_s)$ as given in Eq.~\eqref{eqnCsts} does not generally describe a pure compressed-exponential decay, $\sim \exp\left[-\left(t/T_2\right)^\alpha\right]$. However, we will show that Eq.~\eqref{eqnCsts} can approximate a compressed exponential over a wide parameter range. We therefore define a time-dependent stretching parameter $\al^s(t_s)$ such that, instantaneously, $f^s(t_s)=(t_s/T_2^s)^{\al^s(t_s)}$ and introduce a typical value $\al^s$ of the stretching parameter at the $1/e$ decay time,
\begin{equation}
\al^s(t_s)=\frac{d\log f^s(t_s)}{d\log(t_s/T_2^s)},\quad \alpha^s\equiv \alpha^s(T_2^s).\label{eqnAlpha}
\end{equation}
The functions $f^\mrm e(t_s)$ and $\al^\mrm e(t_s)$ are shown in Figs.~\ref{figCrossover}(a) and \ref{figCrossover}(b) assuming $\tau_n\equiv\tau\,\forall\,n$. The coherence factor can then be replaced by $C^s(t_s)\simeq\exp[-(t_s/T_2^s)^{\al^s}]$ with small corrections when $\al^s(t_s)$ varies slowly for $t_s$ in the vicinity of $T_2^s$.
The coherence factor behaves very differently in either the ``slow-noise" or ``fast-noise" regime. These two regimes are determined by the ratio of the correlation time $\tau_c$ [the decay time of the noise autocorrelation function $g(t)$] to the coherence time, $T_2^s$. We define the correlation time $\tau_c$ through\cite{fick1990quantum}
\begin{equation}
\tau_c\equiv\frac{\int_0^\infty dt g(t)t}{\int_0^\infty dt g(t)}=\frac{\sum_n \D\xi_n^2\tau_n^2}{\sum_n \D\xi_n^2\tau_n},\label{eqnTauc}
\end{equation}
where the second equality follows directly from Eq.~\eqref{eqnOU}.
The slow-noise regime is given by $T_2^s< \tau_c$. In this regime, $g(t)$ is slowly-varying in Eq.~\eqref{eqnCts} over the time scale of interest ($\sim T_2^s$). Expanding $g(t)$ around $t=0$ and keeping the leading nontrivial correction in Eq.~\eq{eqnCts} then gives the compressed-exponential form in Eq.~\eq{eqnSpectro}, with $\alpha=\alpha^s$ and $T_2=T_2^s$ for decoupling sequence $s$, consistent with known results for Gaussian spectral diffusion due to classical noise,\cite{klauder1962spectral}
\begin{align}
\al^\ast=2,\quad1/T_2^\ast&=\textstyle{\left(\frac12\sum_n\D\xi_n^2\right)^{\frac12}},\hspace{10mm} (T_2^\ast \ll \tau_c),\label{eqnTdNMst}\\
\al^\mrm e=3,\quad 1/T_2^\mrm e&=\textstyle{\left(\frac{1}{12}\sum_n\D\xi_n^2/\tau_n\right)^{\frac13}},\quad (T_2^\mathrm{e} \ll \tau_c).\label{eqnTdNMe}
\end{align}
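These limiting forms follow from a short expansion of Eq.~\eqref{eqnfs}: for $t_s\ll\tau_n$ one has $h^\ast(t_s/\tau_n)=t_s/\tau_n-\frac{t_s^2}{2\tau_n^2}+O(t_s^3/\tau_n^3)$ and $h^\mrm e(t_s/\tau_n)=t_s/\tau_n-\frac{t_s^3}{12\tau_n^3}+O(t_s^4/\tau_n^4)$, so the terms linear in $t_s$ cancel against $t_s/T_{2\mrm M}$ by Eq.~\eqref{eqnTdM}, leaving $f^\ast(t_s)\simeq\frac12\sum_n\D\xi_n^2\,t_s^2$ and $f^\mrm e(t_s)\simeq\frac{1}{12}\sum_n\D\xi_n^2\,t_s^3/\tau_n$. Setting $f^s(T_2^s)=1$ then reproduces Eqs.~\eqref{eqnTdNMst} and \eqref{eqnTdNMe}, and the definition \eqref{eqnAlpha} gives $\al^\ast=2$ and $\al^\mrm e=3$.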
In the opposite (fast-noise) regime, $T_2^s\gtrsim \tau_c$, we evaluate $T_2^s$ and $\alpha^s$ from Eqs.~\eqref{eqnDefT2s} and \eqref{eqnAlpha}. Neglecting exponentially small corrections in $T_2^s/\tau_c\gtrsim 1$, we find the coherence times
\begin{eqnarray}
T_2^\ast &= & \frac{(1+\sum_n\Delta\xi_n^2\tau_n^2)}{\sum_n\Delta\xi_n^2\tau_n},\hspace{18.5mm} (T_2^\ast \gtrsim \tau_c),\label{eqnT2StarFast}\\
T_2^\mrm e &= & \frac{(1+3\sum_n\Delta\xi_n^2\tau_n^2)}{\sum_n\Delta\xi_n^2\tau_n},\hspace{16.5mm} (T_2^\mrm e \gtrsim \tau_c),\label{eqnT2eFast}
\end{eqnarray}
and stretching parameters
\begin{eqnarray}
\alpha^s & = & 1+\beta^s,\label{eqnBetaDef}\\
\beta^\ast & = & \sum_n \Delta\xi_n^2\tau_n^2, \hspace{28mm} (T_2^\ast \gtrsim \tau_c), \label{eqnBetaStar}\\
\beta^\mrm e & = & 3\sum_n \Delta\xi_n^2\tau_n^2,\hspace{26mm} (T_2^\mrm e \gtrsim \tau_c).\label{eqnBetaE}
\end{eqnarray}
In contrast with the result from an assumed $1/f$-like spectrum in Sec.~\ref{secTwoLevel}, here the stretching parameter $\alpha^s$ is sensitive to the pulse sequence $s$. In fact, the parameters $\beta^s$ for echo and free-induction decay are related by a universal factor of three in the fast-noise regime, $\beta^e\simeq 3\beta^*$.
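These fast-noise expressions follow from Eq.~\eqref{eqnfs} in the same way: for $t_s\gg\tau_n$, $h^\ast(t_s/\tau_n)\to1$ and $h^\mrm e(t_s/\tau_n)\to3$, so that $f^\ast(t_s)\simeq t_s/T_{2\mrm M}-\sum_n\D\xi_n^2\tau_n^2$ and $f^\mrm e(t_s)\simeq t_s/T_{2\mrm M}-3\sum_n\D\xi_n^2\tau_n^2$, up to exponentially small corrections. Setting $f^s(T_2^s)=1$ gives Eqs.~\eqref{eqnT2StarFast} and \eqref{eqnT2eFast}, while evaluating $\al^s(t_s)\simeq(t_s/T_{2\mrm M})/f^s(t_s)$ at $t_s=T_2^s$, where $f^s=1$, gives $\al^\ast=1+\sum_n\D\xi_n^2\tau_n^2$ and $\al^\mrm e=1+3\sum_n\D\xi_n^2\tau_n^2$, i.e., Eqs.~\eqref{eqnBetaDef}-\eqref{eqnBetaE}; the factor of three between $\beta^\mrm e$ and $\beta^\ast$ is simply the large-argument limit of $h^\mrm e$ relative to $h^\ast$.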
Equations \eqref{eqnTdNMst} to \eqref{eqnBetaE} provide a complete analytical description of both the coherence time $T_2^s$ and form of decay (through $\alpha^s$) in either the slow-noise or fast-noise regime. This description can be related to a microscopic model of fluctuator dynamics through the noise amplitudes $\Delta\xi_n^2$ and equilibration times $\tau_n$. In particular, $T_2^s$ and $\alpha^s$ will inherit temperature dependences associated with the fluctuator excitation (relaxation) rates $\gamma^n_{\uparrow (\downarrow)}$ through Eqs.~\eqref{eqnDxind} and \eqref{eqnTau}. In the rest of this paper, we will evaluate these temperature dependences for physically relevant microscopic mechanisms and connect fluctuator dynamics to qubit coherence through Eqs.~\eqref{eqnTdNMst} to \eqref{eqnBetaE}. Since the qubit coherence time $T_2^s$ and noise correlation time $\tau_c$ typically have distinct temperature dependences, tuning the bath temperature will typically induce a transition between the slow-noise $(T_2^s<\tau_c)$ and fast-noise $(T_2^s\gtrsim\tau_c)$ regimes.
To describe the transition from the slow-noise to the fast-noise regime, it is useful to define a dimensionless parameter that controls a Markov approximation:
\begin{equation}
\eta\equiv\frac{\tau_c}{T_{2\mrm M}}=\sum_n\D\xi_n^2\tau_n^2. \label{eqneta}
\end{equation}
When $\eta\ll1$ (the fast-noise limit), a Markov approximation gives exponential decay ($\al^s=1$), with $T_2^\ast \simeq T_2^\mrm e \simeq T_{2\mrm M}$. In the opposite (slow-noise) limit, $\eta\to\infty$, we recover the results of Eqs.~\eqref{eqnTdNMst} and \eqref{eqnTdNMe}.
\begin{figure}
\begin{center}
\includegraphics{fe.pdf}
\end{center}
\caption{(Color online) Approximate compressed-exponential form of $C^\mrm e(t_s)$ when $\tau_n\equiv\tau\,\forall\,n$. (a) The slope of $f^\mrm e(t_s)$ in log-log scale gives $\al^\mrm e(t_s)$ [see Eq.~\eq{eqnAlpha}]. (b) We approximate $C^\mrm e(t_s)\simeq\exp[-(t_s/T_2^\mrm e)^{\al^\mrm e}]$ by taking $\al^\mrm e\equiv\al^\mrm e(T_2^\mrm e)$. When $\al^\mrm e(T_2^\mrm e)\simeq3$ (in the slow-noise limit, where $T_2^\mrm e/\tau \ll 1$ is in the blue area), decay is faster than exponential and $T_2^\mrm e$ is given by Eq.~\eq{eqnTdNMe}. When $\al^\mrm e(T_2^\mrm e)\simeq1$ (in the fast-noise limit, where $T_2^\mrm e/\tau \gg 1$ is in the red area), the decay is purely exponential and $T_2^\mrm e \simeq T_\mrm{2M}$ is given by Eq.~\eq{eqnTdM}. (c-e) Comparison of the exact (solid black line) and compressed-exponential (dashed red line) forms of $C^\mrm e(t_s)$. (c) $\eta=10$, (d) $\eta=0.1$, (e) $\eta=0.01$. (f) Maximum error made by taking $C^\mrm e(t_s)\simeq\exp[-(t_s/T_2^\mrm e)^{\al^\mrm e}]$ with $\al^\mrm e\equiv\al^\mrm e(T_2^s)$, Eq.~\eqref{eq:epsmax}. Dots correspond to (c-e). \label{figCrossover}}
\end{figure}
While the coherence factor exhibits a simple form in either the slow-noise or fast-noise limit, it is less clear how to simply describe the decay in the intermediate regime $\eta\sim 1$. It is, however, straightforward to numerically verify the assumed compressed-exponential form $\left\{C^s(t_s)= \exp{\left[-(t/T_2^s)^{\alpha^s}\right]}\right\}$. To simplify the analysis, we assume a single equilibration time for all fluctuators, $\tau_n\equiv\tau\,\forall\,n$, corresponding to a pure Lorentzian noise spectrum $S(\nu)$. In this case, Eq.~\eq{eqnfs} reduces to
\begin{equation}
f^s(t_s)=\eta\left[\frac{t_s}\tau-h^s(t_s/\tau)\right].\label{eqnfsSingleTau}
\end{equation}
In Figs.~\ref{figCrossover}(c-e), we compare $C^\mrm e(t_s)$ with $\exp[-(t_s/T_2^\mrm e)^{\al^\mrm e}]$ for a fixed correlation time $\tau$ and a range of $\eta$. For a given value of $\eta$, the maximum error made in replacing $C^s(t_s)$ by the compressed-exponential form is
\begin{align}\label{eq:epsmax}
\veps_\mrm{max}\equiv\max_{t_s\in[0,\infty[}\{|C^s(t_s)-\exp[-(t_s/T_2^s)^{\al^s}]|\}.
\end{align}
In Fig.~\ref{figCrossover}(f), we plot $\veps_\mrm{max}$ as a function of $\eta$. Dots in Fig.~\ref{figCrossover}(f) indicate the three values of $\eta$ corresponding to Figs.~\ref{figCrossover}(c-e). The error, $\veps_\mrm{max}$, is maximized for $\eta\simeq0.1$, the value taken for Fig.~\ref{figCrossover}(d). Even in this worst case, the difference between the exact and compressed-exponential forms of $C^\mrm e(t_s)$ is small ($\veps_\mrm{max}\simeq0.06$). Thus, while the microscopic analysis presented here leads, in general, to a complex functional form [Eq.~\eqref{eqnCsts}], this functional form will likely be indistinguishable from a compressed exponential in many experiments.
\section{Electron baths \label{secElectrons}}
In this section, we consider charge fluctuators described by Anderson impurities. These impurities can equilibrate through tunnel coupling to a continuum of delocalized electronic states in a reservoir (the bath). The electron reservoirs are held in thermal equilibrium with occupation described by a Fermi-Dirac distribution $n_\mrm F(\e)=1/\{\exp[(\e-\mu)/\kB\temp]+1\}$ at a common temperature $T$ and chemical potential $\mu$. As illustrated in Fig.~\ref{figTunnel}, we consider both first-order (direct tunneling) and second-order (cotunneling) processes. Qubit decoherence due to fluctuators tunnel-coupled to an electron reservoir has been considered previously in, e.g., Refs.~\onlinecite{abel2008decoherence,grishin2007low,paladino2002decoherence,desousa2005ohmic,yurkevich2010decoherence}.\begin{figure}
\begin{center}
\includegraphics{tunnel.pdf}
\end{center}
\caption{(Color online) Tunneling processes between localized electron states and a continuum of delocalized states. (a) Direct tunneling. This first-order process is only allowed if $|\e_n-\mu|\lesssim\kB\temp$, where $\e_n$ is the energy of the localized state for fluctuator $n$ and $\mu$ is the chemical potential of the electron reservoir. (b) Cotunneling between pairs of localized states forming a fluctuator $n$. This second-order process occurs if $|\e_{\al n}-\e_{\bt n}|\lesssim\kB\temp$. \label{figTunnel}}
\end{figure}
\subsection{Direct tunneling \label{secDirect}}
In the first-order process (direct tunneling), we assume that each impurity $n$ is coupled to an independent bath through a Fano-Anderson model.\cite{mahan1990many} We then have
\begin{align}
&\ham_\mrm F^n=\sum_{\s}\eps_n\dd_{n\s}\od_{n\s}, \qquad \ham_\mrm B^n=\sum_{\mvec k\s}\eps_{\mvec k}\cd_{\mvec kn\s}\lc_{\mvec kn\s},\label{eqnFreeElectrons}\\
&\ham_\mrm{FB}^n=\sum_{\mvec k\s}\left(t_{\mvec kn}^\ast\cd_{\mvec kn\s}\od_{n\s}+t_{\mvec k n}\dd_{n\s}\lc_{\mvec kn\s}\right). \label{eqnHFano}
\end{align}
For each fluctuator $n$, we have introduced $\od^{(\dagger)}_{n\s}$ and $\lc^{(\dagger)}_{\mvec kn\s}$, the annihilation (creation) operators for the localized and delocalized states, respectively. The corresponding eigenenergies are $\eps_n$ and $\eps_{\mvec k}$. The spin index is $\s\,\in\,\{\upa,\dwna\}$ and $t_{\mvec k n}$ is the amplitude for tunneling between the impurity and the continuum. Assuming strong Coulomb blockade for each impurity (due, e.g., to a large on-site charging energy), we restrict to the space of singly-occupied ($\ket\al\equiv\ket{\s}_n$) and empty ($\ket\bt\equiv\ket{0}_n$) states. Thus, each impurity $n$ is a two-level fluctuator with splitting $\hbar\w_n=\e_n-\mu$. Each impurity can couple to the qubit through the Coulomb interaction.\cite{gamble2012two,ramon2010decoherence} Under these assumptions, Eqs.~\eqref{eqnFreeElectrons} and \eqref{eqnHFano} correspond to the physical model of Eq.~\eq{eqnH}.
In direct tunneling, we find the excitation ($\gamma_\upa^n$) and relaxation ($\gamma_\dwna^n$) rates of a given fluctuator using Fermi's golden rule,
\begin{align}
&\g_{\al\rightarrow\bt}^n=\frac{2\pi}{\hbar}\sum_{if}\rho(i)|\,\!_n\bra{\bt f}\ham_\mrm{FB}^{n}\ket{\al\,i}_n|^2\dt(\hbar\w_n+E_f-E_i),\label{eqnFermi}
\end{align}
where $\alpha$ and $\beta$ are collective indices (including, e.g., both spin and orbital degrees of freedom) labeling the initial and final states of the fluctuator, $i$ and $f$ label the initial and final states of the bath with energies $E_i$ and $E_f$, respectively, and $\rho(i)$ is the probability for the bath to be initially in state $i$. In thermal equilibrium, this probability distribution is given by the Fermi-Dirac distribution. We take the continuum limit $\sum_{\mvec k}\rightarrow\int d\e D_\mrm{el}(\e)$ when summing over the initial and final bath states $i$ and $f$.
Using Eqs.~\eq{eqnDxind} and \eq{eqnTau}, we calculate the noise amplitudes $\Delta\xi_n$ and the fluctuator equilibration rates $1/\tau_n$ from $\gamma_\upa^n$ and $\gamma_\dwna^n$. Summing the rates of the transitions from the reservoir to the degenerate eigenstates $\ket{\!\!\upa}$ and $\ket{\!\!\dwna}$ then gives
\begin{align}
\D\xi_n^2&=\frac{8\W_n^2\eul{\hbar\w_n/\kB\temp}}{(2+\eul{\hbar\w_n/\kB\temp})^2},\label{eqnXiTunnel}\\
\frac1{\tau_n}&=\frac{2\pi}{\hbar} D_\mrm{el}(\e_n)|t_n(\e_n)|^2\frac{2+\eul{\hbar\w_n/\kB\temp}}{1+\eul{\hbar\w_n/\kB\temp}},\label{eqnTauTunnel}
\end{align}
where $t_n(\eps_n)$ is the tunneling amplitude $t_{\mvec kn}$ in the continuum limit. Equation \eq{eqnXiTunnel} implies that the fluctuators are frozen out and have exponentially small contribution to qubit dephasing when $\hbar|\w_n|=|\e_n-\mu|>\kB T$, as expected from Fig.~\ref{figTunnel}. In the opposite (high-temperature) limit, $\kB\temp>\hbar|\w_n|$, we have $\D\xi_n^2\simeq\frac89\W_n^2$, giving a maximal contribution to qubit dephasing. In this high-temperature limit, Eq.~\eq{eqnTauTunnel} also gives an equilibration rate that is approximately constant with temperature.
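Explicitly, in the high-temperature limit Eq.~\eq{eqnTauTunnel} reduces to $1/\tau_n\simeq 3\pi D_\mrm{el}(\e_n)|t_n(\e_n)|^2/\hbar$, while in the opposite limit Eq.~\eq{eqnXiTunnel} gives the exponential suppression $\D\xi_n^2\propto\eul{-\hbar|\w_n|/\kB\temp}$.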
\subsection{Cotunneling}
In the second-order tunneling process (cotunneling), we consider the case where two localized states with energies $\e_{n\al}$ and $\e_{n\bt}$ are coupled to the same electron reservoir $n$. We now have
\begin{align}
\ham_\mrm F^n=\sum_{l\s}\e_{nl}\od^\dagger_{ln\s}\od_{ln\s},\;\;\;\hat V^n=\sum_{l\mvec k\s}\left(t^\ast_{l\mvec kn}\cd_{\mvec kn\s}\od_{ln\s}+\hc\right), \label{eqnVCotunnel}
\end{align}
where $l\in\{\al,\bt\}$. In this case, $\ham_\mrm B^n$ is again given by Eq.~\eqref{eqnFreeElectrons}. When $\mu-\e_{ln}>\kB\temp\;\forall\;l\in\{\al,\bt\}$, direct tunneling is forbidden. However, the cotunneling process illustrated in Fig.~\ref{figTunnel} can still occur if $\e_{\bt n}-\e_{\al n}<\kB\temp$. Each fluctuator $n$ is then described by a pair of localized states coupled to the same bath with fluctuator energy splitting $\hbar\w_n=\e_{\bt n}-\e_{\al n}$. The fluctuator-bath Hamiltonian corresponding to the (second-order) cotunneling process is obtained using the Schrieffer-Wolff expansion. To leading order in $\hat V_n$, the effective Hamiltonian for this process can be written as
\begin{align}
\ham_\mrm{FB}^n=\frac12\commut{\left(\frac{1}{\mrm L_0^n}\hat V^n\right)}{\hat V^n}, \label{eqnSW}
\end{align}
where $\mrm L_0^n\cdot=[\ham_\mrm F^n+\ham_\mrm B^n,\cdot]$. Using this fluctuator-bath Hamiltonian, we evaluate excitation and relaxation rates using Fermi's golden rule, Eq.~\eq{eqnFermi}. As written, Eq.~\eqref{eqnSW} contains formal divergences (zero denominators) corresponding to resonant cotunneling processes. These contributions can be systematically regularized,\cite{koenig1997cotunneling} leading to exponentially small corrections in the limit $\mu-\e_{ln}>\kB\temp\;\forall\;l$, which we assume here. Neglecting resonant cotunneling in this limit, from the inelastic cotunneling rates,\cite{qassemi2009stationary} we then find
\begin{align}
\frac{1}{\tau_n}&\simeq\frac{1}{\pi}\left|\frac{\hbar \G_n}{\mu-\overline{\e}_n}\right|^2\w_n\coth\left(\frac{\hbar\w_n}{2\kB\temp}\right),\label{eqnRateCotunnel}\\
\G_n&=2\pi D_\mrm{el}(\mu)|t_{\al n}(\mu)||t_{\bt n}(\mu)|/\hbar,\\
\D\xi_n^2&=\W_n^2\,\mrm{sech}^2\left(\hbar\w_n/2\kB T\right).\label{eqnsech}
\end{align}
Here, we have introduced $\overline{\e}_n\equiv(\e_{\al n}+\e_{\bt n})/{2}$. Eq.~\eq{eqnRateCotunnel} is valid up to corrections of order $\sim\hbar\w_n/(\mu-\overline{\e}_n)$. The difference between $\D\xi_n^2$ given in Eq.~\eq{eqnsech} and that given in Eq.~\eq{eqnXiTunnel} for direct tunneling arises from spin degeneracy.\cite{beenakker1991theory,gurvitz1996microscopic} As in the case discussed below Eq.~\eq{eqnXiTunnel}, Eq.~\eq{eqnsech} implies that $\D\xi_n^2$ decays exponentially for $\hbar\w_n>\kB\temp$. However, from Eq.~\eq{eqnRateCotunnel}, for $\kB\temp>\hbar\w_n$ the equilibration rate $1/\tau_n$ now increases linearly with $T$.
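In quantitative terms, for $\kB\temp\gg\hbar\w_n$ Eq.~\eq{eqnRateCotunnel} reduces to $1/\tau_n\simeq\frac{2}{\pi}\left|\hbar\G_n/(\mu-\overline{\e}_n)\right|^2\kB\temp/\hbar$, a rate that grows linearly with $T$ and is independent of the splitting $\w_n$, while Eq.~\eq{eqnsech} gives $\D\xi_n^2\simeq\W_n^2$.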
Table~\ref{tabTunnel} summarizes the distinct temperature dependences obtained for $1/\tau_n$ due to the two processes discussed in this section. These will be useful in Sec.~\ref{secMicro}, when we evaluate the temperature dependences of $T_2^s$ and $\al^s$.
\begin{table}
\begin{center}
\begin{tabular}{lc}
\hline
\hline
& $1/\tau_n$ \\
\hline
Direct tunneling & $\propto 1$\\
Cotunneling & $\propto T$\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Temperature dependence of the equilibration rates $1/\tau_n$ for electronic baths when $\hbar\w_n<\kB\temp$ in the case of first-order (direct) tunneling and second-order cotunneling. In both cases, $1/\tau_n$ is independent of the fluctuator splitting $\w_n$. \label{tabTunnel}}
\end{table}
\section{Phonon baths \label{secPhonons}}
In this section, we evaluate the amplitude $\D\xi_n^2$ and equilibration time $\tau_n$ for fluctuators coupled to independent phonon baths. For all processes considered in this section, $\D\xi_n^2$ is given simply by Eq.~\eq{eqnsech}, valid in the absence of spin degeneracy. This expression can be derived from Eq.~\eq{eqnDxind} simply by assuming detailed balance between $\gamma_\upa^n$ and $\gamma_\dwna^n$. To evaluate $\tau_n$, we will consider one-phonon direct, and two-phonon sum and Raman processes, as indicated schematically in Figs.~\ref{figPhonons}(a-c).
Each fluctuator consists of two impurity states $\ket{\al}$ and $\ket{\bt}$. These could be, e.g., two localized states in a double well, as illustrated in Fig.~\ref{figQFB}(c), or the ground and excited states of a single donor impurity. The energy splitting between states $\ket \al$ and $\ket \bt$ for fluctuator $n$ is $\hbar\w_n\equiv\e_{\bt n}-\e_{\al n}$. Thus, the fluctuator and bath Hamiltonians for fluctuator $n$ are
\begin{align}
\ham_\mrm F^n=\sum_{l\s}\e_{nl}\dd_{ln\s}\od_{ln\s},\;\;\; \ham_\mrm B^n=\sum_{\mvec q\ld}\hbar\w_{\mvec q\ld}\ad_{\mvec qn\ld}\oa_{\mvec qn\ld}, \label{eqnHFandBphonons}
\end{align}
where $\oa^{(\dagger)}_{\mvec qn\ld}$ annihilates (creates) a phonon with wave vector $\mvec q$ in branch $\ld$ of the phonon bath $n$. We work within the regime of validity of the envelope-function approximation for the impurity. We also assume acoustic phonons with a linearized dispersion. We will focus on two materials: GaAs and silicon. For either material, ignoring anharmonic corrections, the fluctuator-bath interaction is then given by
\begin{equation}
\ham_\mrm{FB}^n=\sum_{\s \mvec q \ld\chi}\sum_{l\neq l'}A_{\mvec q\ld\chi} S_{\chi,ll'}^n(\mvec q)\dd_{nl\s}\od_{nl'\s}(\oa_{\mvec qn\ld}+\ad_{-\mvec qn\ld}),\label{eqnElectronPhonon}
\end{equation}
where $l,l'\in\{\al,\bt\}$. In Eq.~\eqref{eqnElectronPhonon}, we have introduced the electron-phonon coupling strength
\begin{equation}
A_{\mvec q\ld\chi}=A_{\mvec q\ld\chi}^\mrm d-iA_{\mvec q\ld}^\mrm p,
\end{equation}
where d and p label the deformation and piezoelectric contributions, respectively. The form of these contributions is given in Appendix~\ref{secElPhon} in terms of material parameters. In Eq.~\eq{eqnElectronPhonon}, we have also introduced the form factor
\begin{align}\label{eq:FormFactor}
S_{\chi,ll'}^n(\vq)=\int d\vecr |\al_\chi\vf_\chi(\vecr)|^2 F_{\chi l}^{n\ast}(\vecr)F_{\chi l'}^{n}(\vecr)\eul{i\vq\cdot\vecr},
\end{align}
where $\vphi_\chi(\mvec r)$ is the Bloch amplitude with wave vector $\mvec k_\chi$ corresponding to the degenerate conduction-band minimum (valley) $\chi$, and $F_{\chi l}^n(\vecr)$ is the corresponding envelope function for impurity state $l$ of fluctuator $n$. $\al_\chi$ is the coefficient for valley $\chi$ appearing in the wave function $\sum_\chi\al_\chi F_{\chi l}^n(\mvec r)\vphi_\chi(\mvec r)$ of impurity state $l$.
The coupling between pairs of impurity states is suppressed if they are separated by more than the impurity size, $\ell_\mrm{imp}$, describing the extent of the envelope $F_{\chi l}(\mathbf{r})$ [see Eq.~\eqref{eqnsq}, below]. Here, we assume $\ell_\mrm{imp}$ satisfies
\begin{equation}
\ell_\mrm{imp}<\hbar v_\ld/\kB\temp\;\forall\;\ld, \label{eqnab}
\end{equation}
where $v_\ld$ is the phase velocity of branch $\ld$. Under the above condition, the typical phonon wavelength $2\pi/q_\mrm{th}\sim hv_\ld/\kB\temp$ is much longer than the spacing between coupled impurity states. The form factor $S_{\chi,ll'}^n(\vq)$ defined in Eq.~\eq{eqnElectronPhonon} can then be approximated in the small-$q$ (long-wavelength) limit,
\begin{align}
S_{\chi,ll'}^n(\vq)&\simeq i|\al_\chi|^2\vq\cdot\mvec{\sq}^{\chi n}_{ll'}, \label{eqntmullp}\\
\mvec{\sq}^{\chi n}_{ll'}&=\int d\vecr\,\vecr|\vf_\chi(\vecr)|^2F_{\chi l}^{n\ast}(\mvec r) F_{\chi l'}^n(\mvec r) \label{eqnsq},
\end{align}
where $\mvec{\sq}_{ll'}^{\chi n}$ is the transition dipole matrix element between states $l$ and $l'$.
To obtain Eq.~\eq{eqntmullp}, we have used the first non-vanishing term of a Taylor expansion around $\vq=0$. This amounts to neglecting phonon-bottleneck effects,\cite{benisty1995reduced} which suppress the contribution from short-wavelength (high-energy) phonons having a typical wavelength on the order of the impurity spacing. For $v_\ld=3070\;\mrm{m/s}$ (the smallest phase velocity among all the relevant branches in GaAs and silicon) and $T=100$~mK, Eq.~\eq{eqnab} implies that these bottleneck corrections can be neglected when $\ell_\mrm{imp}<2\;\upmu\mrm m$.
At higher temperature or in the presence of a non-thermal source of phonons, it may be necessary to account for the full $\mathbf{q}$-dependence in Eq.~\eqref{eq:FormFactor}. This can be done, in principle, although the resulting temperature dependences will generally be more complicated, not described by the robust power laws we find here in the low-temperature limit.
\subsection{Direct (one-phonon) processes \label{secPhononsDirect}}
Figure \ref{figPhonons} illustrates the fluctuator-phonon processes considered in this section. In the leading-order process, the fluctuator absorbs or emits a phonon with frequency $\w_{\mvec q\ld}=\w_n$ [see Fig.~\ref{figPhonons}(a)]. The equilibration rate corresponding to this process is obtained from the coupling Hamiltonian, Eq.~\eq{eqnElectronPhonon}, using Fermi's golden rule, Eq.~\eq{eqnFermi}. In GaAs, the conduction band has a unique minimum (a single valley), such that $\al_\chi=\dt_{\chi,1}$ in Eq.~\eq{eqntmullp}. In contrast, the conduction-band minimum of bulk silicon is six-fold degenerate. For silicon, we take $\al_\chi=1/\sqrt{6}\;\forall\;\chi$, consistent with the ground state for donor impurities.\cite{kohn1955theory,yu1996fundamentals} Other choices of $\alpha_\chi$ would not change the final temperature dependence of the equilibration rate. We also assume the transition dipole matrix element to be valley-independent, $\mvec{\sq}_{ll'}^{\chi n}=\mvec{\sq}^n_{ll'}\;\forall\;\chi$. Valley-independence of $\mvec{\sq}^n_{ll'}$ amounts to neglecting anisotropy of the envelope functions $F_{\chi l}^{n}(\vecr)$ and thus of the effective mass.\cite{kohn1955theory} With the above assumptions, we find the equilibration rate for the direct process
\begin{align}
\frac{1}{\tau^\mrm D_n}
&=\left[\frac{1}{3}\Xi^2\left(\frac{\w_n}{v_\mrm{LA}}\right)^4
+\frac{4}{35}\left(1+\frac{4}{3}\z^2\right)
\left(\frac{ee_{14}}{\ve}\right)^2\left(\frac{\w_n}{v_\mrm{LA}}\right)^2\right]\notag\\
&\qquad\times\frac{9\pi}{\hbar}\frac{\w_n}{m_\mrm{at}\w_\mrm{D}^3}|\mvec{\sq}_{\al\bt}^n|^2\coth\left(\frac{\hbar\w_n}{\kB T}\right),
\label{eqnRate1phonon}
\end{align}
where $\z=v_\mrm{LA}/v_\mrm{TA}$, with $v_\mrm{LA}$ and $v_\mrm{TA}$ the phase velocities of the longitudinal and transverse acoustic branches, respectively. Equation \eq{eqnRate1phonon} assumes the piezoelectric tensor for a zincblende-structure material, such as GaAs. For this structure, the only non-vanishing tensor element is $e_{14}$ (in Voigt notation). Silicon is not piezoelectric, resulting in $e_{14}=0$. We have also introduced the Debye frequency $\w_\mrm{D}$, the elementary charge $e$, the mass per lattice atom $m_\mrm{at}$, and the static dielectric constant $\veps$. In GaAs, $\Xi=a(\G_{1c})\simeq-8.6$~eV, where $a(\G_{1c})$ is the volume deformation potential for the conduction-band minimum. In silicon, $\Xi=\Xi_d+\frac13\Xi_u$, where $\Xi_d$ and $\Xi_u$ are deformation potentials at zone boundaries.\cite{herring1956transport,yu1996fundamentals}
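As an illustration, the direct-process rate of Eq.~\eq{eqnRate1phonon} can be evaluated numerically. The Python sketch below implements the formula exactly as displayed, using the GaAs parameters listed in the caption of Fig.~\ref{figPhonons}; the mass per lattice atom $m_\mrm{at}$ is not quoted there, so the average Ga/As atomic mass is inserted as an assumption.
\begin{verbatim}
# Illustrative evaluation of the direct-process rate, Eq. (eqnRate1phonon), with
# the GaAs parameters of the figure caption. m_at is not quoted there and is set
# to the average Ga/As atomic mass as an assumption. The formula is implemented
# exactly as displayed in the text.
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
e, eps0, amu = 1.602176634e-19, 8.8541878128e-12, 1.66053907e-27

Xi   = 8.6 * e            # |deformation potential| (J)
e14  = 0.16               # |piezoelectric constant| (C/m^2)
vLA, vTA = 5210.0, 3070.0 # phase velocities (m/s)
wD   = kB * 360.0 / hbar  # Debye frequency (rad/s)
eps  = 12.9 * eps0        # static dielectric constant
s_ab = 1e-9               # |transition dipole matrix element| (m)
m_at = 72.3 * amu         # assumed: average Ga/As atomic mass

def rate_direct(w_n, T):
    """1/tau_D of Eq. (eqnRate1phonon) for splitting hbar*w_n at temperature T."""
    zeta2 = (vLA / vTA) ** 2
    bracket = (Xi**2 / 3.0) * (w_n / vLA)**4 \
            + (4.0/35.0) * (1.0 + 4.0*zeta2/3.0) * (e*e14/eps)**2 * (w_n / vLA)**2
    prefac = (9.0*np.pi/hbar) * w_n / (m_at * wD**3) * s_ab**2
    return bracket * prefac / np.tanh(hbar*w_n/(kB*T))   # coth = 1/tanh

if __name__ == "__main__":
    w_n = 10e-9 * e / hbar          # hbar*w_n = 10 neV, as in the figure caption
    for T in (0.05, 0.1, 0.5, 1.0):
        print(f"T = {T:5.2f} K :  1/tau_D ~ {rate_direct(w_n, T):.3g} 1/s")
\end{verbatim}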
\begin{figure}
\begin{center}
\includegraphics{phonons.pdf}
\end{center}
\caption{(Color online) Coupling of a fluctuator consisting of two localized electron states interacting with a phonon bath. We consider transitions up to second order in the electron-phonon interaction. (a) Direct phonon absorption. (b) Excitation due to the two-phonon sum process. (c) Raman excitation. (a-c) We also include the corresponding relaxation processes (not shown). (d) Equilibration rate for a fluctuator coupled to phonons through the deformation and piezoelectric mechanisms, calculated with Eqs.~\eq{eqnRate1phonon} to \eq{eq:totaltaun}. Solid red line: GaAs lattice. Dashed blue line: silicon lattice. In GaAs,\cite{ioffe1998new,arlt1968piezoelectricity,yu1996fundamentals} $\Xi=a(\G_{1\mrm c})=-8.6$~eV, $e_{14}=-0.16\;\mrm{C/m^2}$, $v_\mrm{LA}=5210\;\mrm{m/s}$, $v_\mrm{TA}=3070\;\mrm{m/s}$, $\hbar\w_\mrm{D}/\kB=360$ K, and $\ve=12.9\,\veps_0$. In silicon, $\Xi=\Xi_d+\frac13\Xi_u$, $\Xi_d=5$~eV, $\Xi_u=8.77$~eV, $e_{14}=0$, $v_\mrm{LA}=9040\;\mrm{m/s}$, $v_\mrm{TA}=5400\;\mrm{m/s}$, and $\hbar\w_\mrm{D}/\kB=640$ K. For both GaAs and silicon, we take $|\mvec{\sq}^n_{\al\bt}|=1\;\mrm{nm}$, $\sq^n_0=10\;\mrm{nm}$, $\hbar\w_n=10\;\mrm{neV}$, and $\hbar\w_\g=1\;\mrm{meV}$. \label{figPhonons}}
\end{figure}
\subsection{Two-phonon processes \label{secPhononsTwo}}
We now consider the second-order processes stemming from the coupling Hamiltonian, Eq.~\eq{eqnElectronPhonon}. We first consider the two-phonon sum process. In this case, two phonons with frequencies satisfying $\w_{\mvec q \ld}+\w_{\mathbf{q}'\ld'}=\w_n$ are simultaneously absorbed or emitted [see Fig.~\ref{figPhonons}(b)]. We also include the Raman process, in which a phonon in mode $\mvec q\ld$ is absorbed and another is emitted in mode $\mvec q'\ld'$, with the constraint $\w_{\mvec q \ld}-\w_{\mvec q'\ld'}=\w_n$ [see Fig.~\ref{figPhonons}(c)]. Both of these second-order processes require the presence of an auxiliary third level $\ket \g_n$, with energy splittings relative to states $\ket\al_n$ and $\ket\bt_n$ denoted by $\hbar\w_{\al\g}^n$ and $\hbar\w_{\bt\g}^n$. We obtain the effective Hamiltonians for these second-order processes using the leading-order Schrieffer-Wolff expansion, Eq.~\eq{eqnSW}, taking $\hat V^n=\hat H_\mrm{FB}^n$ from Eq.~\eq{eqnElectronPhonon}. In general, resonant denominators arise in these second-order processes for $\w_{\mvec q \ld}=\w_{\al\g}^n$, $\w_{\bt\g}^n$. We neglect contributions from these resonances, which are exponentially suppressed for $\hbar\w_{\al\g}^n$, $\hbar\w_{\bt\g}^n\gg\kB T$.\cite{van1940paramagnetic,yen1964phonon} We then evaluate the corresponding fluctuator equilibration rates using Fermi's golden rule, Eq.~\eq{eqnFermi}. For $\hbar\w_n<\kB T$, we find the temperature and fluctuator-splitting dependences of the sum and Raman processes shown in Table~\ref{tabPhonons}. Explicitly, the equilibration rate for the Raman process is
\begin{align}
\frac1{\tau_n^\mrm R}
&\simeq \frac{(2\pi)^7(\sq^n_0)^4}{(\hbar\w_\g^n)^2m_\mrm{at}^2\w_\mrm{D}^6}
\left[\frac{15\pi^{4}}{11}\frac{\Xi^4}{v_\mrm{LA}^8}\left(\frac{\kB T}{\hbar}\right)^{11}\right.\notag\\
&\qquad+ \frac{18\pi^{2}}{175}\left(\Xi\frac{ee_{14}}{\veps}\right)^2\frac{1+\frac43\z^2}{v_\mrm{LA}^6}\left(\frac{\kB T}{\hbar}\right)^{9}\notag\\
&\qquad\left.
+ \frac{27}{8575}\left(\frac{ee_{14}}{\veps}\right)^4\frac{(1+\frac43\z^2)^2}{v_\mrm{LA}^4}\left(\frac{\kB T}{\hbar}\right)^{7}\right],
\label{eqnRateRaman}
\end{align}
where we have introduced $\w_\g^n=(1/\w_{\al\g}^n+1/\w_{\bt\g}^n)^{-1}$ and $(\sq^n_0)^4=|\mvec{\sq}_{\al\g}^n|^2|\mvec{\sq}_{\g\bt}^n|^2+|\mvec{\sq}^n_{\al\g}\cdot\mvec{\sq}^{n\ast}_{\g\bt}|^2$.
The equilibration rate for the sum process is given by Eq.~\eq{eqnRateSum} in Appendix~\ref{secSum}. Comparing Eq.~\eq{eqnRateSum} with Eq.~\eq{eqnRateRaman}, we immediately see that the prefactors are identical up to a factor of order one. Thus, using the $\w_n$ and $T$ dependences summarized in Table~\ref{tabPhonons}, the condition for the Raman process to dominate over the sum process can be shown to be $\hbar\w_n<\kB\temp$. In other words, the Raman process always dominates over the sum process for fluctuators that contribute significantly to qubit dephasing. We therefore neglect the sum process in the rest of this paper, regardless of the material. In contrast, the condition for the Raman process to dominate over the direct process does depend on the relevant material parameters.
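For completeness, the Raman rate of Eq.~\eq{eqnRateRaman} can be evaluated in the same way. The sketch below implements the formula as displayed, with the GaAs parameters from the caption of Fig.~\ref{figPhonons} ($\sq^n_0=10$~nm, $\hbar\w_\g=1$~meV); as in the previous sketch, the atomic mass $m_\mrm{at}$ is an assumed value. Comparing its output with that of the direct-process sketch locates the crossover from direct- to Raman-dominated equilibration seen in Fig.~\ref{figPhonons}(d).
\begin{verbatim}
# Illustrative evaluation of the Raman rate, Eq. (eqnRateRaman), with the GaAs
# parameters from the caption of Fig. (figPhonons). m_at is an assumed average
# atomic mass; the formula is implemented exactly as displayed in the text.
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
e, eps0, amu = 1.602176634e-19, 8.8541878128e-12, 1.66053907e-27

Xi, e14 = 8.6*e, 0.16
vLA, vTA = 5210.0, 3070.0
wD, eps = kB*360.0/hbar, 12.9*eps0
s0, hw_gamma = 10e-9, 1e-3*e     # s_0^n and hbar*w_gamma from the figure caption
m_at = 72.3*amu                  # assumed

def rate_raman(T):
    """1/tau_R of Eq. (eqnRateRaman); valid for hbar*w_n < kB*T."""
    z2 = (vLA/vTA)**2
    x = kB*T/hbar
    pz = e*e14/eps
    bracket = (15.0*np.pi**4/11.0) * Xi**4 / vLA**8 * x**11 \
            + (18.0*np.pi**2/175.0) * (Xi*pz)**2 * (1.0 + 4.0*z2/3.0) / vLA**6 * x**9 \
            + (27.0/8575.0) * pz**4 * (1.0 + 4.0*z2/3.0)**2 / vLA**4 * x**7
    return (2.0*np.pi)**7 * s0**4 / (hw_gamma**2 * m_at**2 * wD**6) * bracket

if __name__ == "__main__":
    for T in (0.1, 0.5, 1.0, 4.0):
        print(f"T = {T:4.1f} K :  1/tau_R ~ {rate_raman(T):.3g} 1/s")
\end{verbatim}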
\begin{table}
\begin{center}
\begin{tabular}{lll}
\hline
\hline
& Deformation ($\sim \Xi$) & Piezoelectric ($\sim e_{14}$)\\
\hline
Direct, $1/\tau^\mrm{D}_n$ & $\propto\;\w_n^4\times T$ & $\propto\;\w_n^2\times T$\\
Sum, $1/\tau^\Sg_n$ & $\propto\;\w_n^9\times T^2$ & $\propto\;\w_n^5\times T^2$\\
Raman, $1/\tau^\mrm{R}_n$ & $\propto\;T^{11}$ & $\propto\;T^7$\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Power-law dependences of each contribution to the fluctuator equilibration rates $1/\tau_n$ on $\w_n$ and $\kB\temp$ for the electron-phonon interaction when $\hbar\w_n<\kB\temp$.\label{tabPhonons}}
\end{table}
In Fig.~\ref{figPhonons}(d), we plot the total equilibration rate,
\begin{align}\label{eq:totaltaun}
\frac{1}{\tau_n}=\frac{1}{\tau_n^\mrm D}+\frac{1}{\tau_n^\mrm{R}},
\end{align}
as a function of temperature. The solid red (dashed blue) line shows the equilibration rate for a fluctuator in GaAs (silicon). For either material, Fig.~\ref{figPhonons}(d) illustrates a typical crossover from a low-temperature rate dominated by the direct (one-phonon) process $1/\tau_n\simeq 1/\tau_n^\mrm D \propto T$ to a high-temperature rate dominated by the two-phonon Raman process ($1/\tau_n\simeq 1/\tau_n^\mrm R \propto T^7$ for piezoelectric coupling and $1/\tau_n\simeq 1/\tau_n^\mrm R \propto T^{11}$ for deformation-potential coupling; see Table \ref{tabPhonons}).
From Eq.~\eq{eqnRate1phonon}, the piezoelectric contribution dominates in the direct (one-phonon) process when $\w_n<\w_\mrm{crit}^\mrm D$ where
\begin{align}
\w_\mrm{crit}^\mrm D &= \sqrt{\frac{12}{35}\left(1+\frac{4}{3}\z^2\right)} \frac{e|e_{14}|}{\ve} \frac{v_\mrm{LA}}{\Xi}.
\label{eqnwcrit}
\end{align}
For the Raman process, the piezoelectric mechanism dominates [see Eq.~\eq{eqnRateRaman}] when $T<T_\mrm{crit}$, where
\begin{align}
T_\mrm{crit}&=\frac{1}{2\pi}\left(\frac{11}{35}\right)^{1/4}\frac{\hbar\w_\mrm{crit}^\mrm D}{\kB}.\label{eq:Tcrit}
\end{align}
From Eq.~\eqref{eq:Tcrit}, $\kB T_\mrm{crit}<\hbar\w_\mrm{crit}^\mrm D$. Thus, for fluctuators that contribute significantly to qubit dephasing (having $\hbar\omega_n<\kB\temp$), if the piezoelectric contribution dominates in the Raman process ($T<T_\mrm{crit}$), then it also dominates for direct absorption and emission: $\hbar\w_n<\kB T<\kB T_\mrm{crit}<\hbar\w_\mrm{crit}^\mrm D$. Using the GaAs parameters given in Fig.~\ref{figPhonons}, the piezoelectric contribution then dominates in both the direct (one-phonon) and two-phonon Raman processes if $T<1.0\;\mrm K$. Thus, in GaAs, the crossover from piezoelectric to deformation-potential mechanisms occurs at
\begin{equation}
T_\mrm{crit}=1.0\;\mrm{K}\qquad\mrm{[GaAs]}. \label{eqnTcritGaAs}
\end{equation}
This feature is indeed visible in Fig.~\ref{figPhonons}(d). Quite significantly, $T_\mrm{crit}$ depends only on material parameters and is therefore completely independent of the details of the fluctuators themselves.
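Since $T_\mrm{crit}$ involves only material constants, the value quoted in Eq.~\eq{eqnTcritGaAs} is easy to verify numerically; the following short sketch evaluates Eqs.~\eq{eqnwcrit} and \eqref{eq:Tcrit} with the GaAs parameters of Fig.~\ref{figPhonons}.
\begin{verbatim}
# Quick check of Eqs. (eqnwcrit) and (eq:Tcrit) with the GaAs parameters of
# Fig. (figPhonons); only material constants enter, so the result is independent
# of fluctuator details, as stated above.
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
e, eps0 = 1.602176634e-19, 8.8541878128e-12
Xi, e14 = 8.6*e, 0.16
vLA, vTA, eps = 5210.0, 3070.0, 12.9*eps0

zeta2 = (vLA/vTA)**2
w_crit = np.sqrt((12.0/35.0)*(1.0 + 4.0*zeta2/3.0)) * (e*abs(e14)/eps) * vLA / Xi  # Eq. (eqnwcrit)
T_crit = (1.0/(2.0*np.pi)) * (11.0/35.0)**0.25 * hbar*w_crit/kB                    # Eq. (eq:Tcrit)
print(f"w_crit^D ~ {w_crit:.3g} rad/s,  T_crit ~ {T_crit:.2f} K")   # ~1.0 K for GaAs
\end{verbatim}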
In summary, all qualitative differences between the results for GaAs and silicon in Fig.~\ref{figPhonons}(d) arise for $T<T_\mrm{crit}$(GaAs), where the piezoelectric contribution dominates in GaAs.
\section{Coherence time and stretching parameter from microscopic models \label{secMicro}}
In this section, we use the expressions for $\D\xi_n^2$ and $1/\tau_n$ found from microscopic models in Secs.~\ref{secElectrons} and \ref{secPhonons} to find the temperature dependences of $T_2^s$ and $\al^s$. We first proceed numerically, which allows us to access the full temperature range. We then find explicit analytical expressions in either the slow-noise ($\tau_c\gg T_2^s$) or fast-noise ($\tau_c\lesssim T_2^s$) regime. We finally discuss implications for the interpretation of experiments.
\subsection{Numerical evaluation}
For numerical evaluation, we take the fluctuator frequency $\omega_n$ to vary inhomogeneously between fluctuators, but take all other parameters (tunnel couplings, form factors, fluctuator-qubit couplings) to be approximately independent of $n$. Taking the continuum limit of Eqs.~\eqref{eqnfs} and \eq{eqnTdM} for a large number of fluctuators [$\sum_n\to \int d\omega $, $\Delta\xi_n\to \Delta\xi(\omega)$, $\tau_n\to\tau(\omega)$] then gives
\begin{align}
f^s(t_s)&=\frac{t_s}{T_{2\mrm M}}-\int_0^\infty d\w D(\w)\D\xi^2(\w)\tau^2(\w)h^s[t_s/\tau(\w)], \label{eqnfstsNumerical}\\
1/T_{2\mrm M}&=\int_0^\infty d\w D(\w)\D\xi^2(\w)\tau(\w), \label{eqnT2MNumerical}
\end{align}
where $D(\w)$ is the fluctuator density of states. The qubit coherence time $T_2^s$ is then given directly from the numerical solution of Eq.~\eqref{eqnDefT2s} and the stretching parameter $\al^s$ is given by Eq.~\eq{eqnAlpha}. The resulting temperature dependences strongly depend on the density of states $D(\w)$. Here, we assume a near-constant density of states $D(\w)\simeq D(0)$ for $\omega \lesssim k_B T/\hbar$, where the integrand carries appreciable weight [the integral in Eq.~\eqref{eqnfstsNumerical} is cut off by $\D\xi^2(\w)\sim e^{-\hbar\omega/k_B T}$ at large frequency].\footnote{The assumption of a constant density of states in Eq.~\eqref{eqnfstsNumerical} at low temperature is consistent, e.g., with capacitance-probe spectroscopy experiments, where the inhomogeneous broadening of shallow donor levels in the dopant layer of a GaAs/AlGaAs heterostructure has been measured to be $\sim1$ meV ($1\,\mathrm{m}e\mathrm{V}/k_B\sim 10\,\mathrm{K}$).\cite{kuljanishvili2008scanning,tessmer2008nanometer} It is also a standard assumption for two-level systems in glasses.\cite{phillips1987two}} In systems where a non-constant density of states is expected or measured, this could easily be incorporated in Eq.~\eqref{eqnfstsNumerical}, above.
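To make the procedure concrete, the following sketch carries out these steps for a single illustrative model. Several ingredients defined earlier in the paper are replaced here by assumed stand-ins and are flagged as such: the Hahn-echo kernel is taken as $h^\mrm e(x)=3-4e^{-x/2}+e^{-x}$ (the standard Gaussian-approximation result for a Lorentzian spectrum), the thermal weight as $\D\xi^2(\w)\propto 1/\cosh^2(\hbar\w/2\kB T)$ (consistent with the $e^{-\hbar\w/\kB T}$ cutoff mentioned above), and the equilibration rate as an $\w$-independent Raman-like $1/\tau\propto T^7$; $T_2^\mrm e$ is obtained from $f^\mrm e(T_2^\mrm e)=1$ and $\al^\mrm e$ as the logarithmic derivative of $f^\mrm e$ at $t=T_2^\mrm e$, which we assume to match Eqs.~\eq{eqnDefT2s} and \eq{eqnAlpha}. All parameter values are arbitrary; with these choices the sketch reproduces the qualitative trend discussed below, $\al^\mrm e$ close to $3$ at low temperature and approaching $1$ once $\tau_c\lesssim T_2^\mrm e$.
\begin{verbatim}
# Sketch of the numerical procedure described above (assumed model; see lead-in).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, kB = 1.054571817e-34, 1.380649e-23

def h_echo(x):
    # assumed Hahn-echo kernel (standard Gaussian-approximation result)
    return 3.0 - 4.0*np.exp(-x/2.0) + np.exp(-x)

def make_fe(T, D0=1.0, W2=1.0e3, r0=1.0e5, T0=0.1):
    """Return f^e(t) built from the continuum-limit integrals above (assumed model)."""
    wmax = 20.0*kB*T/hbar                        # thermal cutoff of the w-integral
    tau = 1.0/(r0*(T/T0)**7)                     # assumed w-independent Raman-like rate
    dxi2 = lambda w: W2/np.cosh(hbar*w/(2.0*kB*T))**2   # assumed thermal weight
    invT2M = quad(lambda w: D0*dxi2(w)*tau, 0.0, wmax)[0]          # Eq. (eqnT2MNumerical)
    def fe(t):
        integ = quad(lambda w: D0*dxi2(w)*tau**2*h_echo(t/tau), 0.0, wmax)[0]
        return t*invT2M - integ                                    # Eq. (eqnfstsNumerical)
    return fe

def T2_and_alpha(T):
    fe = make_fe(T)
    T2 = brentq(lambda t: fe(t) - 1.0, 1e-12, 1e3)   # assumed definition f^e(T_2^e) = 1
    d = 1e-4                                          # stretching parameter as log-derivative
    alpha = (np.log(fe(T2*(1+d))) - np.log(fe(T2*(1-d)))) / (2.0*d)
    return T2, alpha

if __name__ == "__main__":
    for T in (0.05, 0.1, 0.2, 0.4):
        T2, alpha = T2_and_alpha(T)
        print(f"T = {T:4.2f} K :  T2^e ~ {T2:.3g} s,  alpha^e ~ {alpha:.2f}")
\end{verbatim}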
We use the numerical method described above to evaluate $T_2^\mrm e$ and $\al^\mrm e$ as a function of temperature $T$ accounting for the two-phonon Raman [Figs.~\ref{figDecay}(a),(b)] and direct-tunneling [Figs.~\ref{figDecay}(c),(d)] processes. A measurement of the distinct temperature dependences shown in Fig.~\ref{figDecay} could be used to distinguish different microscopic mechanisms. In Figs.~\ref{figDecay}(a),(b), the red solid lines show the temperature dependences expected in GaAs, where piezoelectric coupling to phonons dominates for $T<T_\mathrm{crit}\simeq 1.0\,\mathrm{K}$, but the deformation mechanism dominates for $T>T_\mathrm{crit}$. The blue dashed lines in Figs.~\ref{figDecay}(a),(b) show the expected behavior for silicon, where only the deformation mechanism is relevant. The transition between distinct power-law dependences in $T_2^\mathrm{e}$ shown in Figs. \ref{figDecay}(a),(c) occurs in the crossover regime, when $\tau_c/T_2^\mathrm{e}\sim 1$. Unlike $T_\mathrm{crit}$, discussed above, the temperature scale determining this crossover is generally non-universal, depending on the specific details of the fluctuators and their coupling to the qubit. The distinct upturn in $T_2^\mathrm{e}$ at large $T$ in Fig.~\ref{figDecay}(a) is due to motional averaging; the Raman mechanism leads to a strong reduction in the noise correlation time at large $T$ ($\tau_c\propto 1/T^7$ or $\tau_c\propto 1/T^{11}$), which cannot be compensated by the slow growth in the noise amplitude ($\propto T$) for a constant density of states. The result is a fast averaging of the noise and a resulting increase in coherence time $T_2^\mathrm{e}$. It should be possible to observe such an upturn experimentally when other high-temperature qubit-dephasing mechanisms can be suppressed. These mechanisms may arise, e.g., from direct coupling of the qubit to phonons, resulting in exponentially-activated pure dephasing from single-phonon absorption and emission,\cite{semenov2004phonon} or from strongly temperature-dependent pure-dephasing rates due to multi-phonon processes.\cite{roszak2009phonon,kornich2014phonon}
For all processes investigated here, there is a crossover, as a function of temperature, from the fast-noise (Markovian) limit, $\tau_c/T_2^\mathrm{e}\ll 1$, in which $\alpha^\mathrm{e}\simeq 1$, to the slow-noise limit, $\tau_c/T_2^\mathrm{e}\gg 1$, where $\alpha^\mathrm{e}\simeq 3$ (see Sec.~\ref{secCompressed}). Strikingly, for the Raman process, the crossover is from the slow-noise to the fast-noise limit with increasing temperature [Fig.~\ref{figDecay}(b)]. In contrast, the tunneling process leads to a crossover from fast- to slow-noise with increasing temperature [Fig.~\ref{figDecay}(d)]. In the case of the Raman process, the fast-noise limit is naturally reached at large temperature because of the rapid decrease of the noise correlation time ($\tau_c\sim 1/T^7$ or $\tau_c\sim 1/T^{11}$) in combination with an increase in $T_2^s$ due to motional averaging (see the discussion above). For the tunneling process, the correlation time saturates at high temperature $\tau_c\sim \tau_n\propto \mathrm{const.}$ (see Table \ref{tabTunnel}), while the amplitude of the noise increases as progressively more fluctuators satisfying $\hbar\omega_n\lesssim k_B T$ contribute, leading to a decrease in $T_2^s$ and a corresponding transition to the slow-noise limit $\tau_c/T_2^s\gg 1$ at high temperature.
\begin{figure}
\begin{center}
\includegraphics{exp.pdf}
\end{center}
\caption{(Color online) Hahn-echo coherence time $T_2^\mrm e$ and corresponding stretching parameter $\al^\mrm e$. (a) Coherence time from the Raman phonon process in GaAs (solid red line) and silicon (dashed blue line). (b) Corresponding stretching parameter. (c) Coherence time for the direct tunneling process. (d) Corresponding stretching parameter. (a-b) Material parameters take the same values as given in the caption of Fig.~\ref{figPhonons} for GaAs and silicon. For the Raman process, we choose $\sq_{\al\bt}^n=0$, $\sq_0^n=100\;\mrm{nm}$, $\hbar\w_\g^n=200~\upmu$eV, and $D(0)\W^2_n=1\times10^3$~$\mrm s^{-1}$ $\forall\,n$. These numbers are chosen to give $T_2^\mrm e\sim1\;\upmu$s and $\al^\mrm e\simeq1$ for $T\sim100$~mK, consistent with Ref.~\onlinecite{dial2013charge}. The decay is given by a compressed exponential with $\alpha^\mathrm{e}>1$ at low temperature, but becomes exponential ($\alpha^\mathrm{e}=1$) at higher temperature. (c-d) For the direct tunneling process, we choose $D(0)\W^2_n=1\times10^2$~$\mrm s^{-1}$ and $\frac{2\pi}{\hbar}D_\mrm{el}(\e_{\al n})|t_n(\e_{\al n})|^2=10^6$~$\mrm s^{-1}$ $\forall$ $n$. The behaviors of $T_2^\mrm e(T)$ and $\al^\mrm e(T)$ are radically different for the Raman and tunneling mechanisms, which makes them easily distinguishable. As explained in the main text, various other qubit dephasing channels can become relevant at higher temperatures, possibly obscuring the crossovers seen here in any given experiment.\label{figDecay}}
\end{figure}
\begin{table*}
\begin{center}
\begin{tabular}{l|c|ll|c|l}
\hline\hline
& Fast noise (Markovian, $\tau_c\ll T_2^s$) & \multicolumn{2}{c|}{Slow noise ($\tau_c\gg T_2^s$)} & Crossover ($\tau_c\lesssim T_2^s$) & \\
Process & $1/T_2^\ast=1/T_2^\mrm e=1/T_{2\mrm M}$ & $1/T_2^\ast$ & $1/T_2^\mrm e$ & $\bt^s\propto\eta$ & $\al^s$\\
\hline
Direct tunneling & $\propto T$ & $\propto T^{1/2}$ & $\propto T^{1/3}$ & $\propto T$ & $\upa$\\
Cotunneling & $\propto1$ & $\propto T^{1/2}$ & $\propto T^{2/3}$ & $\propto T^{-1}$ & $\dwna$\\
\hline
Direct (deformation)
& $\propto T^{-17/2}$ & $\propto T^{1/2}$ & $\propto T^2$ & $\propto T^{-39/2}$ & $\dwna$\\
Direct (piezoelectric)
& $\propto T^{-4}$ & $\propto T^{1/2}$ & $\propto T^{4/3}$ & $\propto T^{-11}$ & $\dwna$\\
Raman (deformation) & $\propto T^{-10}$ & $\propto T^{1/2}$ & $\propto T^{4}$ & $\propto T^{-21}$ & $\dwna$\\
Raman (piezoelectric) & $\propto T^{-6}$ & $\propto T^{1/2}$ & $\propto T^{8/3}$ & $\propto T^{-13}$ & $\dwna$\\
\hline\hline
\end{tabular}
\vspace{5mm}
\end{center}
\caption{Temperature dependence of the coherence factor $C^s(t_s)\simeq\exp[-(t_s/T_2^s)^{\al^s}]$ for a qubit coupled to fluctuators interacting with either an electron bath (first two rows) or a phonon bath (last four rows). We give the coherence-time temperature dependences for both free-induction decay ($s\rightarrow\ast$) and Hahn echo ($s\rightarrow\mrm e$) in the limits of fast noise (Markovian, $\tau_c\ll T_2^s$) and slow noise ($\tau_c\gg T_2^s$). In the crossover regime, we also give the temperature dependence of $\bt^s=\al^s-1$ for $\tau_c\lesssim T_2^s$. The last column indicates whether $\al^s$ increases ($\upa$) or decreases ($\dwna$) as a function of temperature $T$. All these results are obtained for a near-constant fluctuator density of states $D(\w)\simeq D(0)$ for $\w\lesssim\kB T/\hbar$. Different densities of states could easily be accounted for using Eqs.~\eq{eq:T2Star} to \eq{eqnPropbeta}. Predictions for the two-phonon sum process are absent since, for $\hbar\w_n<\kB\temp$, these processes are always negligible relative to the Raman processes (see Sec.~\ref{secPhonons}). \label{tabT2}}
\end{table*}
\subsection{Slow- and fast-noise regimes}
As described above, given sufficient microscopic information, it is possible to make quantitative predictions for the temperature dependence of the qubit coherence time $T_2^s$ and stretching parameter $\alpha^s$. To do this, we would need a good description of the relevant transition dipole matrix elements $\mvec{\sq}^n_{\alpha\beta}$ or tunnel couplings $t_{\alpha n}(\epsilon)$ as well as the fluctuator density of states and microscopic material-specific parameters. When the specific impurities associated with charge noise can be identified, it may be possible to estimate or measure these quantities. In many experiments, however, it may be difficult to establish the specific source of charge noise and the associated parameters. In this case, we can still make strong analytical predictions about the scaling of $T_2^s$ with temperature in either the fast-noise ($\tau_c\lesssim T_2^s$) or slow-noise ($\tau_c \gg T_2^s$) regime.
We allow the qubit-fluctuator couplings $\W_n$, dipole matrix elements $\mvec{\sq}^n_{\al\bt}$, etc. to vary generally with $n$. However, to make analytical progress, we assume that these parameters are approximately independent of $\w_n$ for $\w_n\lesssim\kB\temp/\hbar$ where $\Delta\xi^2(\omega_n)$ is appreciable. To determine the simple scaling behavior, we replace the exponential dependence $\Delta\xi^2(\omega_n)\sim e^{-\hbar\omega_n/\kB\temp}$ with a hard cutoff at $\hbar\omega_n = \kB\temp$. Taking the continuum limit of Eq.~\eqref{eqnTdM} for the fast-noise limit $(\tau_c\ll T_2^s)$ then gives
\begin{equation}
\frac{1}{T_2^\ast}=\frac{1}{T_2^\mrm e}=\frac{1}{T_{2\mrm M}}\propto\int_0^{\kB\temp/\hbar}d\w\;D(\w)\tau(\w,T).\label{eqnPropM}
\end{equation}
With the same assumptions, we perform the continuum limit in Eqs.~\eqref{eqnTdNMst} and \eqref{eqnTdNMe} for the slow-noise limit $(\tau_c\gg T_2^s)$, giving
\begin{eqnarray}
\frac{1}{T_2^\ast}&\propto &\left[\int_0^{\kB T/\hbar} d\w\;D(\w)\right]^{1/2},\label{eq:T2Star}\\
\frac{1}{T_2^\mrm e}&\propto &\left[\int_0^{\kB\temp/\hbar}d\w\frac{D(\w)}{\tau(\w,T)}\right]^{1/3}. \label{eqnPropNM}
\end{eqnarray}
From Eqs.~\eq{eqnBetaStar} to \eq{eqneta} for $\bt^s$ and $\eta$, we also have, in the fast-noise regime
\begin{align}
\bt^s\propto\eta\propto\int_0^{\kB T/\hbar}d\w\;D(\w)\tau^2(\w,T). \hspace{7mm}(\tau_c\lesssim T_2^s) \label{eqnPropbeta}
\end{align}
In the slow-noise limit, the inhomogeneously broadened decay time $T_2^*$ is independent of the fluctuator equilibration time $\tau_n$. This decay time is therefore independent of the specific microscopic mechanism giving rise to fluctuator dynamics and can be used to measure the frequency dependence of the fluctuator density of states. Indeed, taking $D(\w)=D_0\w^a$, Eq.~\eq{eq:T2Star} gives
\begin{align}
1/T_2^\ast\propto T^{\frac{a+1}{2}}, \hspace{2cm}[\tau_c\gg T_2^s] \label{eqnT2sDOS}
\end{align}
where we have assumed $a>-1$. Thus, the scaling with temperature of $1/T_2^\ast$ in the slow-noise regime can be used to determine $a$ under the assumption that fluctuator parameters other than $\w$ (i.e. $\Omega_n$, $\mvec{\sq}_{\al\bt}^n$, etc.) are approximately frequency-independent for $\w\lesssim\kB T/\hbar$.
In Tables~\ref{tabTunnel} and \ref{tabPhonons}, we give the $\w$ and $T$ dependences of $1/\tau$ for all fluctuator-bath processes considered in this paper. Substituting these dependences into Eqs.~\eq{eqnPropM} to \eq{eqnPropNM} and assuming a constant fluctuator density of states ($a=0$) gives the power-law scalings for $T_2^s$ shown in Table~\ref{tabT2}. These scalings are consistent with those obtained numerically in Fig.~\ref{figDecay}. Similar tables could easily be built for different values of $a$, i.e., for non-constant fluctuator densities of states.
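As a cross-check of the entries in Table~\ref{tabT2}, the temperature exponents can be extracted numerically from Eqs.~\eq{eqnPropM}, \eq{eq:T2Star}, \eq{eqnPropNM}, and \eq{eqnPropbeta}. The sketch below does this for one row, the $\w$-independent Raman (piezoelectric) rate $1/\tau\propto T^7$ of Table~\ref{tabPhonons}; all prefactors and units are arbitrary since only exponents matter, and other rows can be checked by editing the assumed $1/\tau(\w,T)$.
\begin{verbatim}
# Numerical consistency check of the scaling relations (eqnPropM)-(eqnPropbeta)
# for a constant density of states, D(w) = 1 (arbitrary units). Only temperature
# exponents matter here, so all prefactors are set to one. The rate used below
# is the w-independent Raman (piezoelectric) rate ~ T^7 of Table tabPhonons.
import numpy as np
from scipy.integrate import quad

def inv_tau(w, T):
    return T**7          # Raman, piezoelectric (w-independent); edit for other rows

def exponents(Ts):
    """Fit log-log slopes of 1/T2M, 1/T2* (slow), 1/T2e (slow), beta vs T."""
    invT2M, invT2star, invT2e, beta = [], [], [], []
    for T in Ts:
        wc = T                                  # cutoff kB*T/hbar -> T (arb. units)
        invT2M.append(quad(lambda w: 1.0/inv_tau(w, T), 0, wc)[0])      # (eqnPropM)
        invT2star.append(quad(lambda w: 1.0, 0, wc)[0]**0.5)            # (eq:T2Star)
        invT2e.append(quad(lambda w: inv_tau(w, T), 0, wc)[0]**(1.0/3)) # (eqnPropNM)
        beta.append(quad(lambda w: 1.0/inv_tau(w, T)**2, 0, wc)[0])     # (eqnPropbeta)
    slope = lambda y: np.polyfit(np.log(Ts), np.log(y), 1)[0]
    return {name: slope(v) for name, v in
            [("1/T2M", invT2M), ("1/T2*", invT2star), ("1/T2e", invT2e), ("beta", beta)]}

if __name__ == "__main__":
    print(exponents(np.logspace(-1, 0, 8)))   # expect ~ -6, 1/2, 8/3, -13
\end{verbatim}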
In Fig.~\ref{figBeta}, we plot $\bt^\mrm e=\al^\mrm e-1$ as a function of temperature for the Raman process [Fig.~\ref{figBeta}(a)] and direct tunneling [Fig.~\ref{figBeta}(b)]. We evaluate Eqs.~\eq{eqnfstsNumerical} and \eq{eqnT2MNumerical} numerically with the same assumptions and parameters as described in the caption of Fig.~\ref{figDecay}. These numerical results are represented in Fig.~\ref{figBeta} by circles and triangles. The analytical predictions of Table~\ref{tabT2} are also plotted as straight lines. As expected from the discussion above Eq.~\eq{eqnT2StarFast}, these analytical results only substantially deviate from exact numerical calculations when $\bt^\mrm e\simeq3\eta\sim1$, corresponding to $\tau_c\sim T_2^\mrm e$. Indeed, when $\eta\rightarrow\infty$ (the slow-noise limit), Eq.~\eq{eqnPropbeta} predicts an unbounded growth of $\bt^s$, while, from Eqs.~\eq{eqnTdNMst} and \eq{eqnTdNMe}, $\bt^s$ saturates to $1$ ($2$) for free-induction decay (Hahn echo). However, for $\tau_c> T_2^s$, Eq.~\eq{eqnPropbeta} and the corresponding power laws in Table~\ref{tabT2} still give the trends in $\al^s$ [increasing ($\upa$) or decreasing ($\dwna$)] shown in Table~\ref{tabT2}.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{beta.pdf}
\end{center}
\caption{(Color online) Parameter $\bt^\mrm e=\al^\mrm e-1$. (a) Raman process. Black circles (black triangles): $\bt^\mrm e$ for GaAs (silicon), from the numerical method of Eqs.~\eq{eqnDefT2s}, \eq{eqnAlpha}, and Eqs.~\eq{eqnfstsNumerical}, \eq{eqnT2MNumerical}. Solid red line (dashed blue line): analytical temperature dependence for $\tau_c\lesssim T_2^s$ from Table~\ref{tabT2} for GaAs (silicon). (b) Direct tunneling. Red circles: numerical method. Solid black line: analytical prediction. Assumptions and microscopic parameters are the same as in Fig.~\ref{figDecay}. \label{figBeta}}
\end{figure}
As summarized in Table~\ref{tabT2}, all processes we have considered can be distinguished through a combined measurement of the temperature dependences of $T_2^s$ and $\bt^s$. It should therefore be possible to rule out specific fluctuator noise mechanisms based on a measurement of $T_2^s$ and $\al^s$ as a function of temperature.
\subsection{Relevance to experiment \label{secExp}}
To assess the usefulness of the approach described here, we now consider an application to a recent experiment. In Ref.~\onlinecite{dial2013charge}, Dial \emph{et al.} have observed coherence decay as a function of temperature for a qubit defined by singlet and triplet spin states in a two-electron double quantum dot in GaAs. These measurements revealed an approximate linear dependence of the inhomogeneously broadened decay time, $T_2^\ast\simeq A-BT$ (with constants $A$, $B$), with $\al^\ast=2$ for temperatures between $\sim 50$~mK and $\sim250$~mK. This behavior may be compatible with any dependence $T_2^\ast\propto1/T^{\frac{a+1}{2}}$ given by Eq.~\eqref{eqnT2sDOS} with $a>-1$. Thus, a more precise measurement of $T_2^\ast$ as a function of temperature may establish the specific form of the fluctuator density of states in this experiment.
Under the assumption of a constant fluctuator density of states [$D(\w)\sim\w^a$ with $a=0$], we attempt to apply the results of Table~\ref{tabT2} to describe the experimental results of Ref.~\onlinecite{dial2013charge}. In Ref.~\onlinecite{dial2013charge}, the authors measured $T_2^\mrm e(T)$ and $\al^\mrm e(T)$ and found that: (i) $T_2^\mrm e(T)\propto T^{-\g}$ with $\g\sim2$ for the whole temperature range of the experiment, (ii) $\bt^\mrm e$ decreases monotonically as $T$ increases from $\sim$ 50 mK to $\sim$ 150 mK, and (iii) $\bt^\mrm e\lesssim0.7$ for the whole temperature range, corresponding to the fast-noise regime, in which $\tau_c\lesssim T_2^\mrm e$. In this regime, Eqs.~\eq{eqnT2eFast} and \eq{eqnBetaE} yield $T_2^\mrm e\simeq T_\mrm{2M}/(1+\bt^\mrm e)\sim T_\mrm{2M}$ up to a correction $\mathcal O(\bt^\mrm eT_{2\mrm M})$. The first column of Table~\ref{tabT2} should then accurately reflect the trend in $T_2^\mrm e(T)$ in the fast-noise regime, consistent with $\beta^\mrm e\to0$. For all phonon mechanisms, we find that $T_2^\mrm e$ increases with temperature for $\tau_c\lesssim T_2^\mrm e$, while the data from Ref.~\onlinecite{dial2013charge} exhibit the opposite trend. The only mechanism in Table~\ref{tabT2} for which $T_2^\mrm e$ correctly decreases when $T$ increases in the fast-noise regime is direct tunneling. However, for this process, $\bt^\mrm e$ increases monotonically with temperature, in contradiction with the experimental data of Ref.~\onlinecite{dial2013charge}. Therefore, under the assumption of a constant density of states, none of the physical processes displayed in Table~\ref{tabT2} can, alone, explain all the observations listed above.
One of the assumptions behind Table~\ref{tabT2} may be violated in the context of Ref.~\onlinecite{dial2013charge}. Here, we review the assumptions and limitations leading to this table. To begin with, it may be that the true fluctuator density of states was not constant in the experiment of Ref.~\onlinecite{dial2013charge}. A precise measurement of $T_2^\ast$ in the slow-noise regime can be used to establish the true frequency dependence of the fluctuator density of states through Eq.~\eq{eqnT2sDOS}. In addition, for phonon mechanisms, we have assumed a long-wavelength limit to establish the low-frequency behavior of the fluctuator equilibration rates. From Eq.~\eq{eqnab}, this assumption may be violated for fluctuators with large extended orbital states, or at high temperatures, leading to phonon-bottleneck effects.\cite{benisty1995reduced}
Finally, we have assumed that the dominant dephasing mechanism results from coupling to charge fluctuators. It is, of course, possible that other decay channels become relevant. For example, in the presence of an independent extrinsic Markovian dephasing process, the coherence factor takes the form
\begin{align}
C^\mrm e(t_s)=\exp\left[-\frac{t_s}{T'_2}-\left(\frac{t_s}{T_2^s}\right)^{\alpha^s}\right].\label{eqnMulti}
\end{align}
In the above equation, $T_2^s$ and $\al^s$ are the decay time and stretching parameter for the fluctuator processes presented here, while $T'_2$ is the decay time due to an additional Markovian dephasing process acting directly on the qubit. At high temperature, many extrinsic dephasing mechanisms (not related to charge fluctuators) may become relevant (these may be due, e.g., to coupling to phonons\cite{mccumber1963linewidth,roszak2009phonon,kornich2014phonon}). The first term in Eq.~\eq{eqnMulti} may then dominate over the second. To ensure that the fluctuator mechanisms presented in this paper are the dominant source of dephasing, it may be necessary to understand and suppress alternative sources of dephasing (by, e.g., working at sufficiently low temperature). Alternatively, when these alternate sources of dephasing are well understood, a combined formula such as Eq.~\eq{eqnMulti} could be used to extract the values of $T_2^s$ and $\alpha^s$ associated with fluctuator dynamics, even in the presence of extrinsic dephasing mechanisms.
To further illustrate how Eq.~\eq{eqnMulti} can be used to identify interactions at the origin of fluctuator dynamics, we apply it to the analysis of the data from Ref.~\onlinecite{dial2013charge}. We take $T_2^\mrm e$ to be the Hahn-echo decay time for one of the fluctuator processes of Table~\ref{tabT2} in the slow-noise limit (in which $\al^\mrm e=3$). When $T'_2< T_2^\mrm e$, the contribution to qubit decay of the extrinsic Markovian process dominates over the contribution of the fluctuators. We then find the qubit decay time $T_2$ including both fluctuator and extrinsic processes. We do so by setting the argument of the exponential in Eq.~\eq{eqnMulti} equal to one and solving for $t_s\equiv T_2$ using an expansion in increasing powers of $T'_2/T_2^\mrm e$. Substituting the resulting expression for $T_2$ in the definition of the stretching parameter $\al$, Eq.~\eq{eqnAlpha}, we find the form of $\bt$ including both processes (fluctuator and extrinsic) to leading order in $T'_2/T_2^\mrm e$. We take $T_2\simeq T'_2\propto T^{-\dt}$ for the extrinsic dephasing mechanism, with $\dt$ the exponent obtained from the experiment of Ref.~\onlinecite{dial2013charge}, and $T_2^\mrm e\propto T^{-\g}$, with $\g$ the appropriate exponent for the relevant fluctuator mechanism from Table~\ref{tabT2}. We then find, to leading order in $T'_2/T_2^\mrm e$,
\begin{align}
\bt\propto T^{3(\g-\dt)} . \label{eqnGamma}
\end{align}
The decreasing trend for $\bt(T)$ observed in Ref.~\onlinecite{dial2013charge} from $\sim50$~mK to $\sim150$~mK is thus reproduced for $\g<\dt$. For $\dt=2$, as written in Ref.~\onlinecite{dial2013charge}, the decreasing trend for $\bt(T)$ is consistent with all the fluctuator mechanisms from Table~\ref{tabT2} in the slow-noise limit except for the Raman processes (from either piezoelectric or deformation mechanisms). However, for $100$~mK$<T<200$~mK, Kornich \etal~have predicted Markovian decay of singlet-triplet coherence at a rate $\propto T^{3}$ due to two-phonon processes including spin-orbit coupling (see Fig.~3 of Ref.~\onlinecite{kornich2014phonon}). This behavior is compatible with the experimental data of Ref.~\onlinecite{dial2013charge} in the relevant temperature range ($100$~mK$<T<200$~mK). Taking $\dt=3$ in Eq.~\eq{eqnGamma} implies that the observed decreasing trend for $\bt(T)$ with $T<100$~mK becomes compatible with all the fluctuator mechanisms in Table~\ref{tabT2} except the Raman process due to deformation coupling to phonons. With the help of Eq.~\eq{eqnGamma} and knowing $\dt$ from a precise measurement of $T_2(T)$ in the fast-noise regime, $\g$ could be estimated through a precise measurement of $\bt$ as a function of $T$, allowing for further identification of fluctuator processes.
\section{Conclusions}\label{secConclusions}
We have described qubit dephasing due to two-level fluctuators undergoing equilibration dynamics with either electron or phonon reservoirs. Even for a Lorentzian noise spectrum, which arises naturally for two-level fluctuators, the qubit coherence factor is well approximated by a compressed exponential $\exp[-(t_s/T_2^s)^{\al^s}]$. In contrast with the situation for $1/f$ noise,\cite{cywinski2008how,medford2012scaling} here the stretching parameter $\al^s$ depends on the chosen pulse sequence $s$ and obeys a universal relation, $(\al^\mrm e-1)/(\al^\ast-1)\simeq3$, in the fast-noise regime, in which $T_2^s\gtrsim\tau_c$.
We have determined the explicit temperature dependences for the stretching parameter $\al^s$ and coherence time $T_2^s$ from several microscopic mechanisms giving rise to fluctuator equilibration dynamics. These mechanisms include direct tunneling and cotunneling between localized electronic states and an electron reservoir. We have also considered coupling of two-level charge fluctuators to a phonon bath. In the latter case, we have allowed for direct phonon absorption and emission, as well as the two-phonon sum and Raman processes. We have found that different fluctuator-bath processes lead to distinct temperature dependences for $T_2^s$ and $\al^s$. A measurement of the predicted temperature dependences should thus make it possible to distinguish experimentally between the physical processes at the origin of fluctuator noise, providing an additional tool to suppress charge noise.
\begin{acknowledgments}
We acknowledge financial support from NSERC, CIFAR, FRQNT, INTRIQ, and the W.~C.~Sumner Foundation.
\end{acknowledgments}
\section{Introduction}
Fractional calculus has turned out to be a powerful analytical tool in various disciplines: It has been recognized that
especially in more recently emerging fields dealing with complex, chaotic, turbulent, critical, fractal and anomalous transport phenomena, problems can appropriately be described by equations
which involve fractional operators. A broad overview on applications of the fractional approach can be found in the review articles of Metzler and Klafter \cite{metzler,metzler2014}.
There exist various definitions (Riemann, Liouville, Caputo, Gr\"unwald-Letnikov, Marchaud, Weyl, Riesz, Feller, and others) for fractional derivatives and integrals, e.g. \cite{hilfer-2008,metzler,samko,samko2003,podlubny} among many others. This diversity of definitions is due to the fact that fractional operators take different kernel representations in different function spaces, which is a consequence of the non-local character of fractional kernels.
To apply fractional calculus to a physical problem, it must be carefully analyzed which fractional operator has to be chosen.
Often discrete models which yield in a continuum limit the correct fractional operators are most helpful.
An example of such a procedure is our recently developed discrete self-similar spring model, which leads to dispersion relations and
to discrete self-similar Laplacian operators in the form of Weierstrass-Mandelbrot fractal functions \cite{michel}. A continuum limit of this discrete self-similar Weierstrass-Mandelbrot type Laplacian operator can be defined which yields the 1D infinite space {\it fractional Laplacian} (Riesz fractional derivative)\footnote{We use in this paper synonymously the terms ``fractional Laplacian" and ``Riesz fractional derivative".} \cite{michel14,michel-fcaa}.
Especially phenomena of anomalous diffusion are described by diffusion equations where instead of the conventional Laplacian $\Delta$
the fractional Laplacian $-(-\Delta)^{\frac{\alpha}{2}}$ occurs, leading to L\'evy $\alpha$-stable distribution-solutions. L\'evy $\alpha$-stable distributions were introduced by Paul L\'evy as a generalization of gaussian distributions \cite{levy}.
The microscopic random motions associated with L\'evy distributions are the so-called L\'evy flights. L\'evy flights are the natural generalization of Brownian motion, the latter being described by gaussian distributions. L\'evy distributions contain gaussian distributions as a special case, Brownian motion is a special case of L\'evy flight, and the (nonlocal) fractional Laplacian contains the (local) conventional Laplacian as a special case. The role of the conventional Laplacian as the
generator for gaussian distributions is the same as the role of the fractional Laplacian as the generator of $\alpha$-stable L\'evy distributions.
So we may always ask whether a ``gaussian phenomenon'' has a ``L\'evy phenomenon'' generalization, governed by equations in which the Laplacian is replaced by the fractional Laplacian. Indeed, Laskin demonstrated in a series of seminal papers
\cite{Laskin2000,Laskin,Laskin2010} (and see also the references therein)
that a fractional generalization of Quantum Mechanics and Statistical Mechanics can be introduced\footnote{for which Laskin coined the notions fractional Quantum Mechanics and fractional Statistical Mechanics, respectively.}: Fractional Quantum Mechanics is based on ``Laskin path integrals'' over L\'evy flyer paths, which generalize the Feynman path integrals taken over gaussian paths. As a result, the Laplacian of the kinetic part of the Schr\"odinger equation occurs in a fractionally generalized manner, in the form of the fractional Laplacian. It needs to be emphasized that this is not a simple replacement of operators, but is well founded in the statistical deduction by means of path integrals.
Last but not least the fractional approach seems to be the appropriate language to capture especially fractal aspects of phenomena. An interesting application to problems in turbulence is presented in the paper of Chen
\cite{chen-fract-turb}. Further, owing to the nonlocal character of fractional operators, the fractional approach has turned out to be appropriate for capturing certain kinds of nonlocal material behavior \cite{Carpinteri11,Challamel13},
when the elastic interaction is scale free \cite{michelc}.
Ortigueira developed in a seminal paper a discrete fractional approach and demonstrated its interlink to the
continuous Riesz fractional kernels \cite{riesz2}: Starting ad-hoc with a fractional centered differences formulation, Ortigueira developed a fractional generalization of Cauchy's integral formula for complex functions, leading in the continuum limit to the well known standard fractional Laplacian and
their inverse operator kernels, the Riesz potentials, on the 1D infinite space.
In the present paper we deduce a discrete fractional model from a harmonic potential
defined on the periodic chain of arbitrary particle numbers $N$.
Our model recovers in the infinite chain limit $N\rightarrow \infty$ the Ortigueira model of \cite{riesz2},
and leads to the continuum limit kernels of the fractional Laplacian (Riesz fractional derivative)
for the infinite and finite $L$-periodic strings, respectively.
Zoia et al.\ also analyze some aspects of the infinite chain model, discussing applications to L\'evy flights \cite{Zoia2007}, however without directly analyzing continuum limits of the discrete fractional approach.
In the present paper our goal is the development of a fractional generalization of classical harmonic Montroll-Potts lattice
models for periodic especially finite 1D lattices (chains) and their continuum limits. The classical harmonic approximations of lattices (e.g.
\cite{Born54},\cite{Maradudin63}) describe harmonic inter-particle interactions by inter-particle springs
connecting close neighbor particles resulting in Born-von-Karman models involving the discrete second-order centered differences operator (Born-von-Karman Laplacian).
In the fractional approach to be developed in the present paper,
instead of the classical discrete Born-von-Karman Laplacian, its fractional generalization occurs
in the form of a power law operator function of the Born-von-Karman Laplacian.
The present paper is organized as follows.
We deduce from ``fractional harmonic lattice potentials" on the cyclically closed linear chain a discrete fractional Laplacian matrix.
We do so by applying our recent approach to generate
nonlocal lattice models by matrix functions where the generator operator is the discrete centered Born von
Karman Laplacian \cite{michel-collet}. First we obtain the discrete fractional Laplacian
in explicit form for the infinite chain for particle numbers $N\rightarrow \infty$,
being in accordance with the fractional centered difference models of Ortigueira \cite{riesz2}
and Zoia et al. \cite{Zoia2007}.
Utilizing the discrete infinite chain fractional Laplacian matrix we construct an
explicit representation for the {\it fractional Laplacian matrix on the $N$-periodic finite chain} where the particle number
$N$ can be arbitrary not necessarily large. To the best of our knowledge the fractional Laplacian matrix
on the finite $N$-periodic chain, so far has not been reported in the literature.
Then we analyse continuum limits of the discrete fractional model:
The infinite space continuum limit of the fractional Laplacian matrix yields the well known infinite space kernel
of the standard fractional Laplacian. The periodic string continuum
limit yields an explicit representation for the kernel of the fractional Laplacian (Riesz fractional derivative) which fulfills periodic boundary conditions and is defined on the finite $L$-periodic string.
The periodic string fractional Laplacian is obtained by convolution of the infinite space kernel with the periodic unity projection operator ($L$-periodic Dirac's $\delta$-function). The $L$-periodic string fractional Laplacian kernel represents the periodic string continuum limit expression of the $N$-periodic finite chain fractional Laplacian matrix.
To the best of our knowledge the fractional Laplacian on the $L$-periodic string, as developed in this paper so far is not
given in the literature.
\section{Preliminaries}
We consider a periodic, cyclically closed linear chain with equidistant lattice points $p=0,..,N-1$ consisting of $N$ identical particles. Each particle has the same mass $\mu$ and each mass point $p$ has equilibrium position at $0\leq x_p=ph\leq L$ ($p=0,..,N-1$) where $L$ denotes the length of the chain and $h$ the interparticle distance (lattice constant). Let us denote by $u_p=u(x_p)$ ($0 \leq x_p=ph < L=Nh$) the displacement field of particle $p$.
Further we impose periodicity (cyclic closure of the chain)
\begin{equation}
\label{period}
u_p=u_{p+N} ,\hspace{2cm} u(x_p)= u(x_p+L)
\end{equation}
for the displacement field for which we use the equivalent notations $u_p=u(x_p)$. We utilize for the cyclically closed chain the cyclic index convention, namely $p \rightarrow p \,\,\, mod\, (N) \in \{0,1,..,N-1\}$. We can imagine the cyclic chain as a closed ring of $N$ identical particles
without boundary.
The starting point of our approach is to propose
harmonic potentials which lead by Hamilton's variational principle to discrete {\it fractional Laplacian operators} being {\it power law matrix functions} of the discrete centered second difference operator (discrete Born von Karman Laplacian) as generator. We refer these potentials to as fractional harmonic potentials.
Following our recently proposed general approach to generate nonlocal constitutive behavior by matrix functions \cite{michel-collet}, we can write any elastic potential on the 1D cyclic chain in the harmonic approximation in the following compact form \cite{michel-collet}
\begin{equation}
\label{compfo}
V_f = \frac{\mu}{2}\sum_{p=0}^{N-1}u_p^*f(2{\hat 1}-D-D^{\dagger})u_p = -\frac{1}{2} \sum_{p=0}^{N-1}\sum_{q=0}^{N-1} u_q^*
\Delta_f(|p-q|)u_p
\end{equation}
where $\Delta_f(|p-q|)$ indicates the (negative-semidefinite) Laplacian $N\times N$-matrix (discrete Laplacian operator) of the problem which fulfills also the periodicity condition $\Delta_f(|p-q|)=\Delta_f(|p-q+N|)$.
We have introduced in (\ref{compfo}) the shift operator $D(\tau)$
which is defined by its action $D(\tau)u(x)= u(x+\tau)$. When we utilize in this paper the notation $D$ by skipping the argument, we mean the right hand sided next-neighbor particle shift operator $Du_p=u_{p+1}$, i.e. we utilize synonymously $D=:D(h)$
and its adjoint operator $D^{\dagger}=:D(-h)$ which indicates the left hand sided next-neighbor shift operator $D^{\dagger}u_p=u_{p-1}$.
The operators
$D=D(h)$ and $D^{\dagger}=D(-h)$ are adjoint (hermitian conjugate) to each other, where shift operators are unitary which is expressed by $DD^{\dagger} =D^{\dagger}D={\hat 1}$ with unity operator $D(0)={\hat 1}$. Shift operators on the cyclically
closed chain are $N$-periodic including the unity operator\footnote{
${\hat 1}=D(0)=D^{Nn}=D(nNh)=D(nL)$ ($n \in {\bf \Z}_0$)}.
Further we introduced in (\ref{compfo}) the ``characteristic function" $f$ which is specified in below equation (\ref{charfu}). We utilize the equivalent notations
\begin{equation}
\label{shift}
Du_p = u_{p+1} , \hspace{0.5cm} D^{\dagger}u_p = u_{p-1} ,\hspace{0.5cm} D(h)u(x_p)=u(x_p+h)=u(x_{p+1}), \hspace{0.5cm} D(-h)u(x_p)=u(x_p-h)=u(x_{p-1})
\end{equation}
with the matrix elements\footnote{We utilize often the more simple notation $2-D-D^{\dagger}$ synonymously for
$2{\hat 1}-D(h)-D(-h)$.}
\begin{equation}
\label{matrixrep}
D_{pq} =\delta_{p+1,q} , \hspace{0.5cm} D^{\dagger}_{pq} =\delta_{p-1,q} ,
\hspace{0.5cm} [2{\hat 1}-D-D^{\dagger}]_{pq}= 2\delta_{pq}-\delta_{p+1,q}-\delta_{p-1,q}
\end{equation}
where cyclic index convention is always assumed including for the Kronecker symbol $\delta_{ij}={\hat 1}_{ij}$.
Assuming that $u(x_p)$ is sufficiently smooth we can put $D(\pm h)=\exp{(\pm h\frac{d}{dx})}$. The central symmetric second difference operator (discrete Born von Karman Laplacian) then is $D(h)+D(-h) -2 = (D(\frac{h}{2})-D(-\frac{h}{2}))^2 = 4\sinh^2{\frac{h}{2}\frac{d}{dx}}$ which is a useful representation especially for the determination of
the eigenvalues (dispersion relation).
The function $f$, which we introduced in (\ref{compfo}) as an operator function ($N\times N$-matrix function), is referred to as the {\it characteristic function}\footnote{The characteristic function itself is defined as a scalar function.}. $\mu f$ contains the entire constitutive information (the material constants) of the harmonic system. The characteristic function has to fulfill the following physically necessary properties \cite{michel-collet}
\begin{equation}
\label{charfu}
f(\lambda) >0 ,\hspace{1cm} 0<\lambda \leq 4 ,\hspace{1cm} f(\lambda=0)=0
\end{equation}
The positiveness for $0<\lambda \leq 4$ is equivalent to elastic stability of the chain, and $f(\lambda=0)=0$ reflects translational invariance (zero elastic energy for uniform translations).
The characteristic function has the dimension $[f] = sec^{-2}$ and the {\it dispersion relation} of the harmonic system (\ref{compfo}) is simply determined by \cite{michel-collet}
\begin{equation}
\label{disprel}
\omega_f^2(\kappa_l) = f(\lambda = 4 \sin^2{\frac{\kappa_l}{2}})
\end{equation}
where $-\pi \leq \kappa_l=\frac{2\pi}{N}l \leq \pi$ ($-\frac{N}{2} \leq l \leq \frac{N}{2}$, $l\in {\bf Z}_0$) denote the $N$ distinct {\it non-dimensional} wave numbers within the first Brillouin zone. It follows from (\ref{charfu}) and (\ref{disprel}) that the characteristic matrix function $f(2-D-D^{\dagger})$ is a positive (semi-)definite, self-adjoint (symmetric) $N\times N$ Toeplitz-type matrix, i.e. of the form $f_{pq}= f(|p-q|)$.
The matrix elements fulfill periodicity and reflection-symmetry with respect to $\frac{N}{2}$, namely\footnote{This symmetry is easily seen by putting $p=\frac{N}{2}+\chi$ and by accounting for
$f(|\frac{N}{2}+\chi|)=f(|-\chi-\frac{N}{2}+N|)=f(|\frac{N}{2}-\chi|)$.}
\begin{equation}
\label{percharfu}
\begin{array}{l}
f(|p|) = f(|p+nN|) ,\hspace{1cm} n \in {\bf \Z}_0 ,\hspace{1cm} 0\leq p \leq N-1\nonumber \\ \nonumber \\
f(|\frac{N}{2}+\chi|) = f(|\frac{N}{2}-\chi|) = f(|nN+\frac{N}{2}-\chi|) ,\hspace{1cm} \frac{N}{2}\pm \chi \in {\bf \Z}_0 , \hspace{0.5cm} n\in {\bf \Z}_0
\end{array}
\end{equation}
Due to periodicity, the points of reflection-symmetry repeat periodically being
located at $p_n=\frac{N}{2} +nN$ ($n \in {\bf Z}_0$). Corresponding symmetries to (\ref{percharfu}) also exist
for the dispersion relation (\ref{disprel}) in the reciprocal $\kappa$-space.
\newline\newline
In order to analyze continuum limits (long-wave limits) subsequently, it is convenient to introduce the dimensional wave number $k_l=\frac{\kappa_l}{h}=\frac{2\pi}{L}l$ ($L=Nh$)
having dimension $cm^{-1}$.
Dispersion relation (\ref{disprel}) gives the $N$ eigenvalues of the $N\times N$ characteristic matrix function $f(2-D-D^{\dagger})$ acting in the $N$-dimensional ($N$-periodic) space of particle displacement vectors ${\bf u} = (u_p)$ on the chain.
The dispersion relation (\ref{disprel}) is obtained by
\begin{equation}
\label{detrel}
f(2-D-D^{\dagger}) \, \frac{e^{i\kappa_l p}}{\sqrt{N}} = f(4 \sin^2{\frac{\kappa_l}{2}}) \,\frac{e^{i\kappa_l p}}{\sqrt N}
\end{equation}
where $4 \sin^2{\frac{\kappa_l}{2}}$ is the eigenvalue determined by $(2-D-D^{\dagger})e^{i\kappa_l p}=4\sin^2{\frac{\kappa_l}{2}} \, e^{i\kappa_l p} \, $.
Further we introduce the generalized $N\times N$ {\it Laplacian matrix}
\begin{equation}
\label{laplamat}
\Delta_f = -\mu f(2-D-D^{\dagger}) ,\hspace{1cm} (\Delta_f)_{pq}= -\frac{\partial^2}{\partial u_p\partial u_q}V_f
\end{equation}
We utilize frequently in this paper synonymously the terms $\Gamma$-function and generalized factorial function defined by \cite{abramo}
\begin{equation}
\label{gammafu}
\beta !=: \Gamma(\beta+1) = \int_0^{\infty}\tau^{\beta} e^{-\tau}{\rm d}\tau , \hspace{0.5cm} \Re (\beta)>-1
\end{equation}
If $\beta$ is an integer, (\ref{gammafu}) recovers the usual definition of the factorial. Integral (\ref{gammafu})
exists for complex $\beta$ with\footnote{$\Re(..)$ indicates the real part of a complex number $(..)$.} $\Re(\beta) >-1$ where in this paper we deal only with real $\beta$. The main definition of the $\Gamma$-function (\ref{gammafu}) can be extended (analytically continued) to any complex arguments including $\Re (\beta+1) < 0$ (except arguments of negative integers and zero) by the recursion
\begin{equation}
\label{recursiongamma}
\Gamma(\beta-n+1) = (\beta-n) ! = \beta ! \prod_{s=0}^{n-1}\frac{1}{(\beta-s)}
\end{equation}
This recursion defines the analytical continuation of the $\Gamma$-function and defines it for any complex arguments except
negative integers and zero where the analytically continued $\Gamma$-function has singularities.
Further we employ the so-called {\it ceiling}-function $ceil(\xi)$ which may be defined by \cite{abramo}
\begin{equation}
\label{ceiling}
ceil(\xi)=: min(n\in {\bf \Z}_0| n\geq \xi)
\end{equation}
indicating the smallest integer greater or equal to $\xi$, and by ${\bf \Z}_0$ we denote the complete set of integers (positive, negative, and zero).
\section{Fractional discrete chain model}
In the framework of this simple matrix function approach defined on the cyclically closed (periodic) linear chain,
our goal is now, starting with a {\it characteristic function of power law form}, to deduce the
fractional discrete Laplacian matrices for the infinite and finite $N$-periodic linear chain, respectively.
We assume the characteristic function (\ref{charfu}) in power law form
\begin{equation}
\label{powerlaw}
f^{(\alpha)}(\lambda) = \Omega_{\alpha}^2 \,\lambda^{\frac{\alpha}{2}} , \hspace{1cm} \alpha >0
\end{equation}
where $\frac{\alpha}{2}$ denotes a positive, real (non-integer or integer) scaling exponent; we focus especially on the {\it fractional} cases in which $\frac{\alpha}{2}$ is non-integer. Note that the positiveness of $\alpha$ is a consequence of the requirement that uniform translations of the chain do not contribute to the elastic energy. As a consequence (see relation (\ref{charfu})), the trivial value
$\alpha=0$ is physically forbidden \cite{michel-collet}.
The positiveness of $\alpha$ guarantees that the problem has physically ``good'' properties.
$\Omega_{\alpha}^2$ is a positive dimensional constant where $\Omega_{\alpha}$ has physical
dimension of a frequency $sec^{-1}$ so that the characteristic function (\ref{powerlaw}) has dimension $sec^{-2}$.
The fractional elastic potential can then with (\ref{compfo}) and (\ref{powerlaw}) be written as
\begin{equation}
\label{Valpha}
V_{\alpha} = \frac{\mu\Omega_{\alpha}^2}{2} \sum_{p=0}^{N-1}u_p^*(2-D-D^{\dagger})^{\frac{\alpha}{2}}u_p =: \frac{\mu}{2} \sum_{p=0}^{N-1}\sum_{q=0}^{N-1}u_q^*f^{(\alpha)}(|p-q|)u_p
\end{equation}
with the matrix elements $f^{(\alpha)}(|p-q|) =\Omega_{\alpha}^2\,[(2-D-D^{\dagger})^{\frac{\alpha}{2}}]_{|p-q|}$
of the {\it fractional characteristic matrix function} which are to be determined in explicit form,
first for the infinite chain ($N\rightarrow \infty$), and then for the finite $N$-periodic chain for
arbitrary not necessarily large $N$.
First from relation
(\ref{disprel}) we obtain the dispersion relation
\begin{equation}
\label{disprelat}
\omega_{\alpha}^2(\kappa_l) = f^{(\alpha)}\left(\lambda=4\sin^2{(\frac{\kappa_l}{2})}\right) = \Omega_{\alpha}^2\,2^{\alpha}\,|\sin{(\frac{\kappa_l}{2}})|^{\alpha} ,\hspace{0.5cm} \kappa_l=\frac{2\pi}{N}l , \hspace{0.15cm} 0 \leq l \leq N-1
\end{equation}
with the only zero value for $l=0$ reflecting translational invariance of (\ref{Valpha}), and
$N-1$ positive values for $1\leq l \leq N-1$. The case $\alpha=2$ corresponds to the
classical Born von Karman chain with next neighbor particle springs, where (\ref{disprelat})
then recovers the well known classical dispersion of $\omega_{\alpha}^2(\kappa_l) =4\Omega_2^2\sin^2{(\frac{\kappa_l}{2}})$.
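The fractional dispersion relation (\ref{disprelat}) is straightforward to evaluate numerically on the $N$ allowed wave numbers of the periodic chain. The following short Python sketch (with $\Omega_{\alpha}=1$; all values illustrative) returns the $N$ eigenfrequencies; $\alpha=2$ recovers the classical Born von Karman dispersion discussed above, and the vanishing $l=0$ frequency reflects translational invariance.
\begin{verbatim}
# Minimal sketch: the fractional dispersion relation, Eq. (disprelat), on the
# N allowed wave numbers kappa_l = 2*pi*l/N of the periodic chain (Omega_alpha = 1).
import numpy as np

def omega_alpha(N, alpha, Omega=1.0):
    l = np.arange(N)
    kappa = 2.0*np.pi*l/N
    return Omega * 2.0**(alpha/2.0) * np.abs(np.sin(kappa/2.0))**(alpha/2.0)

if __name__ == "__main__":
    w = omega_alpha(N=8, alpha=1.0)
    print(np.round(w, 4))   # w[0] = 0 reflects translational invariance
\end{verbatim}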
\newpage
\begin{figure*}[H]
\hskip3.5cm
\includegraphics[scale=0.5]{DispersionFract1.pdf}
\caption{$\frac{\omega_{\alpha}(\kappa)}{\omega_0} =0.5 \times 2^{\frac{\alpha}{2}}|\sin{(\frac{\kappa}{2}})|^{\frac{\alpha}{2}}$ (normalized with $\omega_0=2\Omega_{\alpha}$) of (\ref{disprelat}).}
\label{fig:1}
\end{figure*}
{\bf Figure 1}
\newline
{\it In figure 1 the non-dimensional $\frac{\omega_{\alpha}(\kappa)}{\omega_0} = 0.5\times 2^{\frac{\alpha}{2}}|\sin{(\frac{\kappa}{2}})|^{\frac{\alpha}{2}}$ (normalized with $\omega_0=2\Omega_{\alpha}$) of (\ref{disprelat}) is plotted over $0\leq \kappa \leq \pi$
and $0\leq \alpha \leq 2$ (plot on the left) and further for $2\leq \alpha \leq 4$ (plot on the right).
One can see that for small $\alpha\rightarrow 0$ the eigenfrequencies approach the value $\frac{\omega_{\alpha}(\kappa)}{\omega_0}\rightarrow 1 \times 0.5 $. When $\alpha$ increases, $\frac{\omega_{\alpha}(\kappa)}{\omega_0}$ becomes more and more narrowly distributed around $\kappa=\pi$ taking there its maximum value $0.5\times 2^{\frac{\alpha}{2}}$. For $\alpha=2$ one can see in these plots that the classical Born von Karman dispersion relation $\frac{\omega_{2}(\kappa)}{\omega_0}
= |\sin{(\frac{\kappa}{2}})|$ is recovered, where the normalization is chosen such that the maximum value $1$ is attained in the classical Born von Karman case $\frac{\alpha}{2}=1$.}
\newline\newline
The positive semi-definite $N\times N$ {\it fractional characteristic matrix function} $f^{(\alpha)}$ can be written as
\begin{equation}
\label{alphacarfou}
f^{(\alpha)}(2-D-D^{\dagger}) = \Omega_{\alpha}^2\, (2-D(h)-D(-h))^{\frac{\alpha}{2}}= \Omega_{\alpha}^2\, (-1)^{\frac{\alpha}{2}}\left\{D(\frac{h}{2})-D(-\frac{h}{2})\right\}^{\alpha}= \Omega_{\alpha}^2\left(-4\sinh^2{\frac{h}{2}\frac{d}{dx}}\right)^{\frac{\alpha}{2}}
\end{equation}
The $N\times N$ {\it fractional Laplacian matrix} is then by using (\ref{laplamat}) given by
\begin{equation}
\label{laplacianalpha}
\begin{array}{l}
\displaystyle \Delta_{\alpha} = -\mu\Omega_{\alpha}^2(2-D(h)-D(-h))^{\frac{\alpha}{2}} = -\mu f^{(\alpha)}(2-D-D^{\dagger})\nonumber \\ \nonumber \\
\displaystyle \Delta_{\alpha}(|p-q|) = \Delta(|x_p-x_q|) = -\frac{\mu}{N} \Omega_{\alpha}^2\sum_{l=0}^{N-1} e^{i\kappa_l(p-q)} \left(4\sin^2{(\frac{\kappa_l}{2}})\right)^{\frac{\alpha}{2}}
\end{array}
\end{equation}
where $x_p-x_q= h(p-q)$ and $k_lx_p=\kappa_l \,p$. Note that (\ref{laplacianalpha}) depends symmetrically on $D+D^{\dagger}$ which makes the fractional Laplacian matrix self-adjoint.
Hence it should exist a representation for (\ref{alphacarfou}) in the form
\begin{equation}
\label{charfualpha}
\begin{array}{l}
\displaystyle \Omega_{\alpha}^2(2-D(h)-D(-h))^{\frac{\alpha}{2}} = \sum_{p=-\infty}^{\infty}f^{(\alpha)}(|p|)\, D(hp)
= f^{(\alpha)}(0)+\sum_{p=1}^{\infty}f^{(\alpha)}(|p|)\left\{D(ph)+D(-ph)\right\} \nonumber \\ \nonumber \\
\displaystyle \Omega_{\alpha}^2(2-D-D^{\dagger})^{\frac{\alpha}{2}} u_n = f^{(\alpha)}(0)u_n+\sum_{p=1}^{\infty}f^{(\alpha)}(|p|)\left\{u_{n-p}+u_{n+p}\right\}
\end{array}
\end{equation}
where this series is written for the infinite chain $N\rightarrow \infty$.
First of all we notice that
for integers $\frac{\alpha}{2}=m \in {\bf \N}$ we obtain the matrix elements directly from the binomial expansion
\begin{equation}
\label{binomi}
\begin{array}{l}
\displaystyle f^{(2m)} =\Omega_{2m}^2\left\{-(D(\frac{h}{2})-D(-\frac{h}{2}))^2\right\}^{m} \nonumber \\ \nonumber \\ \displaystyle \hspace{1cm} =\Omega_{2m}^2(-1)^m\left(D(\frac{h}{2})-D(-\frac{h}{2})\right)^{2m} = \Omega_{2m}^2\sum_{p=-m}^{m} (-1)^{p} \frac{(2m)!}{(m+p)!(m-p)!}D(ph)
\end{array}
\end{equation}
which gives the matrix elements in terms of (centered) binomial coefficients
\begin{equation}
\label{binomialco}
f^{(2m)}(|p|) = \Omega_{2m}^2\, (-1)^{p} \frac{(2m)!}{(m+p)!(m-p)!}
\end{equation}
which are non-zero only for $|p|\leq m$.
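For orientation, a minimal numerical sketch (Python with the standard library only; the chosen orders are arbitrary) of the integer case: for $m=1$ the coefficients (\ref{binomialco}) reproduce the classical next-neighbor stencil $\Omega_2^2\,(-1,2,-1)$, and for $m=2$ a five-point stencil:
\begin{verbatim}
from math import factorial

def f_integer(m, p):
    """Matrix element f^(2m)(|p|) / Omega_{2m}^2 in the binomial form."""
    p = abs(p)
    if p > m:
        return 0            # localization: elements vanish for |p| > m
    return (-1)**p * factorial(2*m) // (factorial(m + p) * factorial(m - p))

print([f_integer(1, p) for p in range(-2, 3)])  # [0, -1, 2, -1, 0]
print([f_integer(2, p) for p in range(-3, 4)])  # [0, 1, -4, 6, -4, 1, 0]
\end{verbatim}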
Let us now determine explicitly the matrix elements $f^{(\alpha)}(|p|)$ for arbitrary, especially also {\it non-integer}, $\frac{\alpha}{2} >0$, first for the infinite and then for the $N$-periodic finite chain. By ``infinite chain'' we refer to the limiting case where the particle number $N \rightarrow \infty$ (where the length $L=Nh$ of the chain may either remain finite or tend to infinity). The spectral representation (\ref{laplacianalpha}) can then asymptotically be written as an integral\footnote{Since $\sum_{l=-\frac{N}{2}}^{\frac{N}{2}} \frac{G(\kappa_l)}{N} \sim \frac{1}{2\pi} \int_{-\pi}^{\pi}G(\kappa){\rm d}\kappa $ with ${\rm d}\kappa \sim
\kappa_{l+1}-\kappa_l \sim \frac{2\pi}{N}$.} and leads for $N\rightarrow \infty$ to
\begin{equation}
\label{fractlattice}
f^{(\alpha)}(|p|) = \frac{\Omega_{\alpha}^2}{2\pi}\int_{-\pi}^{\pi}e^{i\kappa p}\left(4\sin^2{\frac{\kappa}{2}}\right)^{\frac{\alpha}{2}} {\rm d}\kappa =\Omega_{\alpha}^2 \frac{2^{\alpha + 1}}{\pi}\int_{0}^{\frac{\pi}{2}}
\sin^{\alpha}(\varphi)\cos{(2p\varphi}){\rm d}\varphi
\end{equation}
The matrix elements of the fractional Laplacian matrix (\ref{laplacianalpha}) are
\begin{equation}
\label{fractla}
\Delta_{\alpha}(|p-q|) =-\mu f^{(\alpha)}(|p-q|)
\end{equation}
We emphasize the opposite sign of the negative semi-definite fractional Laplacian matrix (\ref{fractla}) and the positive semi-definite fractional characteristic matrix function $f^{(\alpha)}$
(\ref{fractlattice}).
After some manipulations which are outlined in appendix I, we arrive at the representation\footnote{where we put from now on in all formulas $p=|p|$.}
\begin{equation}
\label{fractalat}
f^{(\alpha)}(|p|) =\Omega_{\alpha}^2\,\frac{2^{\alpha}}{\sqrt{\pi}}\frac{1}{(p-\frac{1}{2})!} \int_0^1\xi^{\frac{\alpha}{2}}\frac{d^p}{d\xi^p}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi
\end{equation}
Now consider first the diagonal element ($p=0$) which yields
\begin{equation}
\label{diag}
f^{(\alpha)}(0) = \Omega_{\alpha}^2\,\frac{2^{\alpha}}{\pi}\int_0^1 \xi^{\frac{\alpha-1}{2}}(1-\xi)^{-\frac{1}{2}}{\rm d}\xi = \Omega_{\alpha}^2\, \frac{2^{\alpha}}{\pi} \frac{(\frac{\alpha-1}{2})!(-\frac{1}{2})!}{\frac{\alpha}{2}!} =
\Omega_{\alpha}^2\,\frac{\alpha !}{\frac{\alpha}{2}!\frac{\alpha}{2}!} >0
\end{equation}
which is necessarily positive since it is related to the trace $\frac{1}{N}Tr(f^{(\alpha)})= f^{(\alpha)}(0)$ of the positive semi-definite $f^{(\alpha)}$.
For the $p^{th}$ element (where $p=|p|$), after multiple partial integrations which are performed in appendix I, we obtain
\begin{equation}
\label{matrixelei}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2\,\frac{\alpha!}{\frac{\alpha}{2}!(\frac{\alpha}{2}+|p|)!}(-1)^p\prod_{s=0}^{|p|-1}(\frac{\alpha}{2}-s) ,\hspace{1cm} p \in {\bf \Z}_0
\end{equation}
We observe that for integers $\frac{\alpha}{2}=m \in {\bf \N}$ expression (\ref{matrixelei}) coincides with the binomial coefficients of
(\ref{binomialco}). This can be seen from the obvious relation
\begin{equation}
\label{relationfacyuly}
\prod_{s=0}^{p-1}(m-s) = \frac{m!}{(m-p)!} ,\hspace{1cm} 0\leq p\leq m
\end{equation}
This relation can be extended to non-integers holding for any $\frac{\alpha}{2} >0$, namely ($p=|p|$)
\begin{equation}
\label{generalisationalph2}
\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s) = \frac{\frac{\alpha}{2}!}{(\frac{\alpha}{2}-p)!} ,\hspace{1cm} \frac{\alpha}{2}-p+1>0
\end{equation}
and holds when we employ the main definition of the $\Gamma$-function (\ref{gammafu}) as long as $(\frac{\alpha}{2}-p)!=\Gamma(\frac{\alpha}{2}-p+1)$ is well defined, i.e. for $\frac{\alpha}{2}-p+1>0$.
This is the case for
$0\leq p \leq p_0$ where $p_0=ceil(\frac{\alpha}{2})$ with
the ceiling-function defined by (\ref{ceiling}).
Then with (\ref{generalisationalph2}) the matrix elements with $0\leq |p| \leq ceil(\frac{\alpha}{2})$ can be expressed
by a ``generalized centered binomial coefficient", namely
\begin{equation}
\label{generalizationalf}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2 \,(-1)^p\, \frac{\alpha!}{(\frac{\alpha}{2}-p)!(\frac{\alpha}{2}+p)!} ,
\hspace{0.5cm} 0\leq |p| \leq ceil(\frac{\alpha}{2})
\end{equation}
Representation (\ref{matrixelei}) shows that if $\frac{\alpha}{2} = m \in {\bf N}$ is an integer, the matrix elements $f^{(2m)}(|p|)$ with $p>m=\frac{\alpha}{2}$ are vanishing and the characteristic matrix remains ``localized'', where (\ref{binomialco}) is recovered by (\ref{matrixelei}). For non-integer $\frac{\alpha}{2} \notin {\bf \N}$, all elements $f^{(\alpha)}(|p|)$ are non-vanishing, reflecting the non-locality of the fractional characteristic matrix, and (\ref{matrixelei}) can be written in the form (appendices I and II)
\begin{equation}
\label{matrixformii}
f^{(\alpha)}(|p|) = -\Omega_{\alpha}^2\frac{\Gamma(\alpha+1)}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{\Gamma(p-\frac{\alpha}{2})}{\Gamma(\frac{\alpha}{2}+p+1)} = -\Omega_{\alpha}^2\frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{(p-\frac{\alpha}{2}-1)!}{(\frac{\alpha}{2}+p)!} ,\hspace{0.5cm} p =|p| > \frac{\alpha}{2}
\end{equation}
When $\frac{\alpha}{2} \rightarrow m+0 $ ($m\in {\bf \N}$) approaches integer values, the $p_0^{th}$ ($p_0=ceil(\frac{\alpha}{2})$) element of (\ref{matrixformii})
tends to $f^{(\alpha)}(|p_0|=ceil(\frac{\alpha}{2})) \rightarrow (-1)^{m}\Omega_{2m}^2$, which is the value given by (\ref{generalizationalf}) for $p=m$, and the elements $f^{(2m)}(|p|>m)$ are vanishing (reflected by the vanishing of $\sin{(\frac{\alpha\pi}{2})}$ for integers $\frac{\alpha}{2} = m \in {\bf \N}$ in (\ref{matrixformii})). This reflects again the localization of the fractional matrix for integer orders $\frac{\alpha}{2} \in {\bf \N}$, where only the $m+1$ elements (\ref{generalizationalf}) are non-zero, taking the integer binomial form (\ref{binomialco}).
In order to give a more compact representation for the matrix elements $f^{(\alpha)}(|p|)$, we introduce the following definition of generalized {\it centered binomial coefficients} (where $\alpha >0 \in {\bf \R}$, $p=|p|$)
\begin{equation}
\label{cgenerlizedbinomi}
\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) =: \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} -p
\end{array}\right)
=
\frac{\alpha!}{\frac{\alpha}{2}!(\frac{\alpha}{2}+|p|)!}\prod_{s=0}^{|p|-1}(\frac{\alpha}{2}-s) =
\left\{\begin{array}{l} \displaystyle \frac{\alpha!}{(\frac{\alpha}{2}+p)!(\frac{\alpha}{2}-p)!} ,\hspace{0.5cm} 0\leq |p| \leq ceil(\frac{\alpha}{2}) \nonumber \\ \nonumber \\ \displaystyle (-1)^{p+1}\frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{(|p|-\frac{\alpha}{2}-1)!}{(\frac{\alpha}{2}+|p|)!}
, \hspace{0.5cm}|p| > \frac{\alpha}{2}
\end{array} \right.
\end{equation}
We have written (\ref{cgenerlizedbinomi}) such that all arguments of $\Gamma$-functions are positive,
so that they are defined through the main integral definition (\ref{gammafu}).
When we use the analytically continued recursive definition of the $\Gamma$-function (\ref{recursiongamma}), we can write (\ref{cgenerlizedbinomi}), by utilizing
the Euler relation (\ref{eulergen}), $\forall p$ in a unified way for non-integer\footnote{Since for
$\frac{\alpha}{2} \notin {\bf \N}$ no singularities of the
analytically continued $\Gamma$-function occur.} $\frac{\alpha}{2} \notin {\bf \N}$, alternatively
either by expression (\ref{cgenerlizedbinomi})$_1$ or by expression (\ref{cgenerlizedbinomi})$_2$.
The generalized centered binomial coefficients (\ref{cgenerlizedbinomi}) are per definition symmetric with respect to
$p \leftrightarrow -p$.
They fulfill the recursion relation
\begin{equation}
\label{recursion}
\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p+1
\end{array}\right) = \frac{(\frac{\alpha}{2}-p)}{(\frac{\alpha}{2}+p+1)} \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right)
\end{equation}
and as an important consequence of (\ref{recursion}) reflecting the properties of power functions, they fulfill the addition rule
\begin{equation}
\label{additionrule}
\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) +
\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p-1
\end{array}\right)
= \frac{(\alpha+1)!}{(\frac{\alpha}{2}+p)!(\frac{\alpha}{2}-p+1)!} =
\left(\begin{array}{l}
\hspace{0.2cm} \alpha+1 \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) = \left(\begin{array}{l}
\hspace{0.2cm} \alpha+1 \nonumber \\
\frac{\alpha+1}{2} +p-\frac{1}{2}
\end{array}\right)
\end{equation}
where the last term in this equation is written in ``centered" representation of the generalized binomial coefficients.
We define generalized binomial coefficients as $\left(\begin{array}{l} \xi \nonumber \\ \xi_1 \end{array}\right) = \left(\begin{array}{l} \xi \nonumber \\ \xi_2 \end{array}\right) = \frac{\xi!}{\xi_1 !\xi_2 !} ,\hspace{0.3cm} \xi=\xi_1+\xi_2$,
where in general the analytically continued definition of the $\Gamma$-function (\ref{recursiongamma}) is assumed.
The properties (\ref{recursion}) and (\ref{additionrule})
are in full correspondence with the recursion- and addition rules of the usual binomial coefficients. Note that the centered generalized binomial coefficients (\ref{cgenerlizedbinomi}) maintain their sign $+1$
as long as $0\leq p \leq p_0=ceil(\frac{\alpha}{2})$. At $p=ceil(\frac{\alpha}{2})+1$ their sign switches to $(-1)$ and alternates for all $p>p_0$ as $(-1)^{p-p_0}$ (see also appendix I).
Then with definition (\ref{cgenerlizedbinomi}) the matrix elements $f^{\alpha}(|p|)$ can $\forall p$ be written in the compact form\footnote{The notation $f^{\alpha}_{\infty}(|p|)$ with subscript $_{\infty}$ indicates that
this expression holds for $N\rightarrow \infty$, and will be used for clarity in the subsequent paragraph.}
\begin{equation}
\label{compactanalmogy}
f^{\alpha}(|p|) = f^{\alpha}_{\infty}(|p|) = \Omega_{\alpha}^2 (-1)^p \, \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} + p
\end{array}\right) , \hspace{1cm} p\in {\bf Z}_0
\end{equation}
where for $\frac{\alpha}{2}= m \in {\bf N}_0$ (\ref{compactanalmogy}) recovers the integer case (\ref{binomialco}).
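The compact form (\ref{compactanalmogy}) can be cross-checked numerically against the Brillouin-zone integral (\ref{fractlattice}); the following sketch (Python, assuming the scipy library for the $\Gamma$-function and the quadrature; all parameter values are arbitrary) implements both branches of (\ref{cgenerlizedbinomi}):
\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def f_infinite(alpha, p, Omega2=1.0):
    """Infinite-chain element f^(alpha)(|p|), compact form via the
    generalized centered binomial coefficients."""
    p = abs(p)
    if p <= np.ceil(alpha / 2.0):
        binom = gamma(alpha + 1.0) / (gamma(alpha/2.0 - p + 1.0)
                                      * gamma(alpha/2.0 + p + 1.0))
        return Omega2 * (-1)**p * binom
    # branch |p| > alpha/2 (vanishes for integer alpha/2)
    return (-Omega2 * gamma(alpha + 1.0) / np.pi * np.sin(np.pi * alpha / 2.0)
            * gamma(p - alpha/2.0) / gamma(alpha/2.0 + p + 1.0))

def f_integral(alpha, p, Omega2=1.0):
    """Brillouin-zone integral representation of the same element."""
    integrand = lambda k: np.cos(k * p) * (4.0 * np.sin(k / 2.0)**2)**(alpha / 2.0)
    val, _ = quad(integrand, 0.0, np.pi)
    return Omega2 * val / np.pi

alpha = 1.3
for p in range(5):
    print(p, f_infinite(alpha, p), f_integral(alpha, p))   # the two columns agree
\end{verbatim}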
The elements of the {\it fractional Laplacian matrix} are connected with (\ref{compactanalmogy}) by
\begin{equation}
\label{laplamatel}
\Delta_{\alpha}(|p|) = -\mu f^{\alpha}(|p|) = \mu\Omega_{\alpha}^2 (-1)^{p+1} \, \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) , \hspace{1cm} p\in {\bf Z}_0
\end{equation}
The fractional characteristic matrix elements (\ref{compactanalmogy}) coincide with the centered differences representation introduced as the starting point of the fractional centered differences model of Ortigueira (eq. (3.8) in \cite{riesz2}). A fractional Laplacian matrix for the infinite chain in accordance with expression (\ref{matrixformii}) was also
reported by Zoia et al. \cite{Zoia2007}.
\newline\newline
The representation of the fractional Laplacian matrix by means of the generalized binomial coefficients is a highly convenient notation generalizing the integer binomial expansion to non-integer cases.
In contrast to the binomial coefficients (\ref{cgenerlizedbinomi}), the matrix elements (\ref{compactanalmogy}), due to their additional prefactor $(-1)^p$, alternate in sign for $0\leq p\leq p_0$ and maintain the constant
sign\footnote{With $\frac{\alpha}{2}=p_0-\chi$ ($0<\chi<1$ and $sign(\sin{\pi\chi})=1$) it follows that $-sign(\sin{\frac{\alpha\pi}{2}})= -sign(\sin{\pi(p_0-\chi)}) = (-1)^{p_0}$.}
$(-1)^{p_0} =-sign(\sin{\frac{\alpha\pi}{2}})$ for $p> p_0$.
We can write the fractional characteristic matrix (\ref{alphacarfou}) as centered fractional differences series
\begin{equation}
\label{copactchar}
\begin{array}{l}
\displaystyle f^{(\alpha)}(2-D-D^{\dagger}) = \Omega_{\alpha}^2(2-D(h)-D(-h))^{\frac{\alpha}{2}} =
\Omega_{\alpha}^2\sum_{p=-\infty}^{\infty}(-1)^p\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} + p
\end{array}\right)D(hp) \nonumber \\ \nonumber \\ \nonumber \\ {\rm with} \nonumber \\
\displaystyle \hspace{2.5cm}
(-1)^{\frac{\alpha}{2}}\left(D(\frac{h}{2})-D(-\frac{h}{2})\right)^{\alpha}
= \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\hspace{0.2cm} \frac{\alpha}{2}
\end{array}\right) +\sum_{p=1}^{\infty} (-1)^p \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} + p
\end{array}\right) \left(D(hp)+D(-hp)\right)
\end{array}
\end{equation}
For non-integer $\frac{\alpha}{2} \notin {\bf \N}$, (\ref{copactchar}) is an infinite series reflecting the non-locality of the fractional operator,
whilst for $\frac{\alpha}{2} \in {\bf \N}$ it takes the representation of finite centered binomial expansions (\ref{binomi}), where all terms with
$|p|>\frac{\alpha}{2}$ are vanishing, reflecting ``locality'' in the integer cases.
We emphasize that (\ref{laplamatel}) with (\ref{compactanalmogy})
are asymptotic infinite chain limit expressions ($N\rightarrow \infty$).
In the subsequent subsection we construct explicitly the discrete fractional Laplacian matrix for the $N$-periodic chain where $N$ can be any arbitrary not necessarily large integer.
We can establish with (\ref{compactanalmogy}) and (\ref{copactchar}) the following interesting relation which {\it holds only on the unit circle $|z|=1$}, namely
\begin{equation}
\label{converge}
\begin{array}{l}
\displaystyle (2-z-z^{-1})^{\frac{\alpha}{2}} = \sum_{p=-\infty}^{\infty}(-1)^p\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right)
z^{p} = \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\hspace{0.2cm} \frac{\alpha}{2}
\end{array}\right) +\sum_{p=1}^{\infty}(-1)^p\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) (z^{p}+z^{-p}),\hspace{0.5cm} |z|=1 \nonumber \\ \nonumber \\
\displaystyle \left\{4\sin^2{\frac{\varphi}{2}}\right\}^{\frac{\alpha}{2}} =\left\{2(1-\cos{\varphi})\right\}^{\frac{\alpha}{2}} = \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\hspace{0.2cm} \frac{\alpha}{2}
\end{array}\right) +2\sum_{p=1}^{\infty} (-1)^p \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) \cos{(p\varphi)}
\end{array}
\end{equation}
This relation diverges everywhere in the complex $z$-plane {\it except on the unit circle $|z|=1$} and
determines the dispersion relation
of the infinite chain by putting $z=e^{i\varphi}$, where $-\pi \leq \varphi \leq \pi$ indicates the (for the infinite chain limit $N\rightarrow \infty$)
(quasi-) continuous dimensionless wave number within the first Brillouin zone. Translational invariance
(corresponding to $\varphi=0$ in (\ref{converge})) yields
the zero eigenvalue
\begin{equation}
\label{transla}
\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\hspace{0.2cm} \frac{\alpha}{2}
\end{array}\right) +2\sum_{p=1}^{\infty} (-1)^p \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) = 0
\end{equation}
and further relations of interest for special values of $\varphi$ can be established. We will show in the subsequent paragraph that the infinite chain relation (\ref{converge}) is especially useful to construct the discrete fractional Laplacian matrix on the {\it finite $N$-periodic chain}.
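For instance, relation (\ref{transla}) can be checked numerically by truncating the series; a minimal Python sketch (scipy assumed; the exponent $\alpha$ and the cut-off $P$ are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaln

alpha = 1.4                       # arbitrary non-integer alpha with 0 < alpha/2 < 1
fac_a = gamma(alpha + 1.0)

def t(p):
    """Term (-1)^p * binom(alpha, alpha/2 + p) of the series (transla)."""
    if p <= np.ceil(alpha / 2.0):
        return (-1)**p * fac_a / (gamma(alpha/2.0 - p + 1.0)
                                  * gamma(alpha/2.0 + p + 1.0))
    # tail terms, evaluated via log-Gamma for numerical stability
    ratio = np.exp(gammaln(p - alpha/2.0) - gammaln(alpha/2.0 + p + 1.0))
    return -fac_a / np.pi * np.sin(np.pi * alpha / 2.0) * ratio

P = 200000
total = t(0) + 2.0 * sum(t(p) for p in range(1, P))
print(total)    # close to zero; the truncation error decays like P**(-alpha)
\end{verbatim}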
\subsection{Fractional Laplacian matrix on the finite periodic chain}
In this paragraph we develop in explicit form the fractional characteristic matrix (\ref{alphacarfou}) for the discrete finite $N$-periodic
chain, where the particle number $N$ is not necessarily large.
To this end we reconsider the infinite chain relation (\ref{converge}) for $z=e^{i\kappa_l}$ where
$\varphi =\kappa_l=\frac{2\pi}{N}l$ now is a Bloch wave number of the {\it finite} $N$-periodic chain
\begin{equation}
\label{finitechain}
\left\{4\sin^2{\frac{\kappa_l}{2}}\right\}^{\frac{\alpha}{2}} =
\sum_{p=-\infty}^{\infty}(-1)^p\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right)
e^{i\kappa_lp} = \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\hspace{0.2cm} \frac{\alpha}{2}
\end{array}\right) +2\sum_{p=1}^{\infty} (-1)^p \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) \cos{(p\kappa_l)}
\end{equation}
We observe that the left hand side of this relation determines the dispersion relation (\ref{disprelat}) of the {\it finite chain}.
Now we evaluate this series by taking into account the $N$-periodicity of the Bloch phase
factors $e^{i\kappa_lp} =e^{i\kappa_l(p+N)}$, collecting all terms with the same cyclic indices $0\leq p\leq N-1$, i.e. with the same phase factors, into the coefficients $f_N^{(\alpha)}(|p|)$,
where only $N$ distinct phase factors for $0\leq p\leq N-1$ occur. So we get a representation of the form
\begin{equation}
\label{collect}
\Omega_{\alpha}^2
\left\{4\sin^2{\frac{\kappa_l}{2}}\right\}^{\frac{\alpha}{2}} = \sum_{p=0}^{N-1}e^{i\kappa_lp}f^{(\alpha)}_N(|p|) ,\hspace{1cm} \kappa_l=\frac{2\pi}{N} l , \hspace{0.5cm} (l=0,..,N-1)
\end{equation}
which indeed is the discrete dispersion relation (\ref{disprelat}) (the $N$ discrete eigenvalues) of the fractional Laplacian matrix of the {\it finite chain} of $N$ particles. The coefficients $f_N^{(\alpha)}(|p|)$ are given below in expression (\ref{finiteelem}).
Inverting relation (\ref{collect}) confirms that the coefficients $f_N^{(\alpha)}(|p|)$ are the
matrix elements of the fractional characteristic matrix $\Omega_{\alpha}^2(2-D-D^{\dagger})^{\frac{\alpha}{2}}$ for finite $N$, namely
\begin{equation}
\label{invert}
f_N^{(\alpha)}(|p-q|) = \Omega_{\alpha}^2\sum_{l=0}^{N-1}\frac{e^{i\kappa_l(p-q)}}{N}\left\{4\sin^2{\frac{\kappa_l}{2}}\right\}^{\frac{\alpha}{2}}
\end{equation}
which indeed is consistent with the spectral representation given in (\ref{laplacianalpha})$_2$ for the fractional Laplacian matrix for finite $N$.
The fractional characteristic function matrix elements {\it $f^{\alpha}_N(|p|)$ of the finite $N$-periodic chain} are obtained as
\begin{equation}
\label{finiteelem}
\begin{array}{l}
\displaystyle f^{(\alpha)}_N(|p|) = \sum_{n=-\infty}^{\infty}f^{(\alpha)}_{\infty}(|p-nN|) = \displaystyle f^{(\alpha)}_{\infty}(|p|)+\sum_{n=1}^{\infty}
\left(f^{(\alpha)}_{\infty}(|p+nN|)+ f^{(\alpha)}_{\infty}(|p-nN|)\right) ,\hspace{1cm} 0\leq p \leq N-1 \nonumber \\ \nonumber \\
\displaystyle \hspace{1.5cm} = \Omega_{\alpha}^2 (-1)^p \, \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right) + \Omega_{\alpha}^2\sum_{n=1}^{\infty}(-1)^{p+nN}\left(\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p +nN
\end{array}\right)+\left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p -nN
\end{array}\right) \right) \nonumber \\ \nonumber \\
\displaystyle \hspace{1.5cm} = \Omega_{\alpha}^2\, \frac{2^{\alpha}}{N}\sum_{l=0}^{N-1}e^{i\kappa_lp}|\sin{\frac{\kappa_l}{2}}|^{\alpha} , \hspace{1cm} \kappa_l=\frac{2\pi}{N}l
\nonumber \\ \nonumber \\
\displaystyle \hspace{1.5cm}= \Omega_{\alpha}^2\left[(2-D-D^{\dagger})^{\frac{\alpha}{2}}\right]_{|p|} ,\hspace{1cm}
D^{p}=D^{p+nN} ,\hspace{1cm} n\in {\bf Z}_0
\end{array}
\end{equation}
In this relation $f^{(\alpha)}_{\infty}(|p|)$ indicate the matrix elements (\ref{compactanalmogy}) of the infinite chain.
We observe that (\ref{finiteelem}) contains an interesting evaluation of the spectral sum (\ref{finiteelem})$_3$ by means of the generalized centered binomial coefficients (\ref{cgenerlizedbinomi}).
The fractional characteristic matrix function elements of the finite chain (\ref{finiteelem}) obey further the general symmetries (\ref{percharfu}), and
the fractional Laplacian matrix of the $N$-periodic finite chain is
\begin{equation}
\label{laplafini}
\Delta_{\alpha,N}(|p|) = -\mu f^{(\alpha)}_N(|p|) ,\hspace{1cm} 0\leq p \leq N-1
\end{equation}
Relation (\ref{finiteelem}) is also obtained by considering the infinite series (\ref{copactchar}) and applying the cyclic index convention (equivalent to the periodicity of the shift operators on the finite chain).
We emphasize that relation (\ref{finiteelem}) is an exact representation of the $N$-periodic fractional characteristic matrix function, and (\ref{laplafini}) of the corresponding fractional Laplacian matrix for the finite chain of $N$ particles, where $N$ can be any integer, not necessarily large. Especially for computational purposes (\ref{finiteelem}) appears to be useful: utilizing a truncated part of the series (\ref{finiteelem}) allows one to represent the matrix elements to any desired accuracy.
Note that analogous representations to (\ref{finiteelem})$_1$, relating infinite chain Laplacian matrices with finite periodic chain Laplacian matrices, can be established for any (including non-fractional) admissible characteristic function $f$.
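As a computational sketch of (\ref{finiteelem}) (Python with numpy/scipy assumed, all parameters arbitrary), the elements $f^{(\alpha)}_N(|p|)$ are computed here once from the spectral sum (\ref{finiteelem})$_3$ and once from a truncated image series over the infinite-chain elements (\ref{compactanalmogy}); both evaluations agree up to the truncation error:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaln

def f_inf(alpha, p):
    """Infinite-chain element f_infinity^(alpha)(|p|)/Omega_alpha^2."""
    p = abs(int(p))
    if p <= np.ceil(alpha / 2.0):
        return (-1)**p * gamma(alpha + 1.0) / (gamma(alpha/2.0 - p + 1.0)
                                               * gamma(alpha/2.0 + p + 1.0))
    ratio = np.exp(gammaln(p - alpha/2.0) - gammaln(alpha/2.0 + p + 1.0))
    return -gamma(alpha + 1.0) / np.pi * np.sin(np.pi * alpha / 2.0) * ratio

def f_N_spectral(alpha, N):
    """N-periodic elements from the spectral (Bloch) sum."""
    kappa = 2.0 * np.pi * np.arange(N) / N
    lam = (4.0 * np.sin(kappa / 2.0)**2)**(alpha / 2.0)
    p = np.arange(N)
    return np.real(np.exp(1j * np.outer(p, kappa)) @ lam) / N

def f_N_images(alpha, N, n_images=2000):
    """N-periodic elements from the truncated image series."""
    out = np.empty(N)
    for p in range(N):
        s = f_inf(alpha, p)
        for n in range(1, n_images + 1):
            s += f_inf(alpha, p + n * N) + f_inf(alpha, p - n * N)
        out[p] = s
    return out

alpha, N = 1.7, 8
print(np.max(np.abs(f_N_spectral(alpha, N) - f_N_images(alpha, N))))  # small
\end{verbatim}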
We observe further that in the infinite chain limit
\begin{equation}
\label{obvious}
\lim_{N\rightarrow\infty} f^{(\alpha)}_N(|p|) = f^{(\alpha)}_{\infty}(|p|)
\end{equation}
the infinite chain characteristic matrix (\ref{compactanalmogy})
is recovered by (\ref{finiteelem}). We will see in the subsequent section that analogous expressions to (\ref{finiteelem})
exist for the continuum limit kernels of the fractional Laplacian on the finite $L$-periodic string. The fractional
Laplacian kernel of the $L$-periodic string deduced subsequently represents the (periodic-string-)
continuum limit kernel expression of (\ref{laplafini}) with (\ref{finiteelem}).
Relations between finite and infinite chain fractional characteristic matrix functions
such as established in this section
appear to be useful to develop discrete fractional calculus on finite lattices.
\section{Continuum limits of the discrete fractional chain model: \\1D Fractional Laplacians for the infinite and $L$-periodic string}
The aim of this section is to deduce the continuum limit kernels of the fractional characteristic matrices of previous section. The continuum limit kernels to be deduced can be considered as definitions of the fractional Laplacian (Riesz fractional derivatives) on the infinite and periodic string, respectively.
Generally we can define two kinds of continuum limits: \newline\noindent (i) The periodic string continuum limit where the length of the chain
$L=Nh$ is kept finite and $h\rightarrow 0$ (i.e. $N(h)= L h^{-1} \rightarrow \infty$). \newline\noindent (ii) The infinite space limit where $h\rightarrow 0$, however, the length of the chain\footnote{which we refer to as {\it string} in the continuum limit} tends to infinity $N(h)h=L(h) \rightarrow \infty$\footnote{which can be realized for instance by choosing $N(h)\sim h^{-\delta}$ with $\delta > 1$.}. The kernels of the infinite space limit can be recovered from those of
the periodic string limit by letting $L\rightarrow \infty$. The central aim of this section is the deduction of the $L$-periodic string continuum limit (i) for the fractional Laplacian which, to the best of our knowledge, has so far not been reported in the
literature.
Following our recent approach \cite{michel-collet} we require in the continuum limit that extensive physical quantities, i.e. quantities
which scale with the length of the 1D system, such as the total mass $N\mu=M$ and the total elastic energy of the chain remain finite when $L$ is kept finite\footnote{In the case of infinite string $L\rightarrow \infty$ we require the mass per unit length and elastic energy per unit length to remain finite.}, i.e. neither vanish nor diverge. From the finiteness of the total mass of the chain, it follows that the particle mass $\mu=\frac{M}{N}=\frac{M}{L} h = \rho_0 h$ scales $\sim h$. Then by employing expression (\ref{Valpha}) for the fractional elastic potential,
the total continuum limit elastic energy ${\tilde V}_{\alpha}$ can be defined by
\begin{equation}
\label{elasten}
{\tilde V}_{\alpha} = \lim_{h\rightarrow 0+} V_{\alpha} = \frac{\mu\Omega_{\alpha}^2}{2}\sum_{p=0}^{N-1} u^*(x_p)\left(-4\sinh^2{\frac{h}{2}\frac{d}{dx}}\right)^{\frac{\alpha}{2}}u(x_p)
\end{equation}
Accounting for $2-D(h)-D(-h) =-4\sinh^2{\frac{h}{2}\frac{d}{dx}} \approx -h^2\frac{d^2}{dx^2} + O(h^4)$, we get
\begin{equation}
\label{fraclap}
\lim_{h\rightarrow 0} \left(-4\sinh^2{\frac{h}{2}\frac{d}{dx}}\right)^{\frac{\alpha}{2}} = h^{\alpha} (-\frac{d^2}{dx^2})^{\frac{\alpha}{2}}
\end{equation}
Since (\ref{fraclap}) tends to
zero with leading order $h^{\alpha}$, (\ref{elasten}) can only remain finite if the constant $\Omega_{\alpha}^2$ and particle mass $\mu$ scale for $h\rightarrow 0$ asymptotically as
\begin{equation}
\label{scaling}
\Omega_{\alpha}^2(h) =A_{\alpha} h^{-\alpha} , \hspace{1cm} \mu(h) = \rho_0 h ,\hspace{1cm} A_{\alpha}, \rho_0 >0
\end{equation}
where $\rho_0$ denotes the mass density with dimension $g \times cm^{-1}$ and $A_{\alpha}$ denotes a positive dimensional constant
of dimension $ sec^{-2}\times cm^{\alpha}$, where the new constants $\rho_0, A_{\alpha}$ are independent of $h$. Note that the dimensional
constant $A_{\alpha}$ is only defined up to a non-dimensional positive
scaling factor as its absolute value does not matter due to the scale-freeness of the power function.
We obtain then as continuum limit of the elastic energy\footnote{by accounting for $\sum_{p=0}^{N-1}h G(x_p) \rightarrow \int_0^L G(x){\rm d}x$ and $h\rightarrow dx$, $x_p\rightarrow x$.}
\begin{equation}
\label{contilimielasten}
\begin{array}{l}
\displaystyle
{\tilde V}_{\alpha} = \lim_{h\rightarrow 0} \frac{\mu(h)}{2}
\sum_{q=0}^{N-1}\sum_{p=0}^{N-1}u_q^* f^{(\alpha)}_N(|p-q|)u_p \nonumber \\ \nonumber \\
\displaystyle
{\tilde V}_{\alpha} = \frac{\rho_0 A_{\alpha}}{2}
\int_0^Lu^*(x)\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}u(x)\,{\rm d}x
=: -\frac{1}{2} \int_0^L\int_0^Lu^*(x'){\tilde \Delta}_{\alpha}(|x-x'|)u(x){\rm d}x
\end{array}
\end{equation}
The continuum limit Laplacian kernel ${\tilde \Delta}_{\alpha}(|x-x'|)$ can then formally be represented by the distributional representation\footnote{In the distributional sense of generalized functions \cite{gelfand}.}
\begin{equation}
\label{contilimlap}
{\tilde \Delta}_{\alpha,L}(|x-x'|) = -\rho_0A_{\alpha}\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_L(x-x')
\end{equation}
where this relation contains as a special case the distributional representation
of integer orders $m=\frac{\alpha}{2}\in {\bf \N}$ which we analyzed earlier \cite{michel-collet}. In (\ref{contilimlap}) $\delta_L(x-x')$ indicates the $L$-periodic Dirac $\delta$-function.
From this consideration it follows that the continuum limit kernel indeed is to be conceived as the distributional representation of the {\it fractional Laplacian}\footnote{having due to the distributional definition dimension $[-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_L(x-x')]$ namely $cm^{-\alpha-1}$ as the dimension of the $\delta$-function $cm^{-1}$ is included}
(Riesz fractional derivative) $-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_L(x-x')$, indicating the (negative semi-definite) distributional representation of fractional powers of the 1D-Laplacian $\frac{d^2}{dx^2}$ (see for instance \cite{riesz,samko,gorenflo,michel-fcaa} and references therein). We notice that (\ref{fraclap}) is just a (non-distributional) formal definition of the fractional Laplacian. Its explicit spatial distributional representations for the infinite and especially for the finite $L$- periodic string are to be constructed subsequently.
\newline\newline
The periodic string fractional Laplacian kernel can be explicitly obtained from (\ref{contilimielasten})$_1$
by employing scaling relations (\ref{scaling}) and asymptotic expressions for the $N$-periodic finite chain fractional characteristic matrix
$f^{(\alpha)}_N(|p-q|)$ of (\ref{finiteelem}).
Before we construct the fractional Laplacian kernel on the $L$-periodic string explicitly, let us briefly evoke the general connection between
an infinite space kernel $g_{\infty}(|x|)$ defined over $-\infty < x < \infty$ and its $L$-periodic counterpart $g_L(|x|)$ defined on a finite principal interval\footnote{which
also can be chosen as $-\frac{L}{2} \leq x \leq \frac{L}{2}$ as a consequence of the symmetries (\ref{periodmirror})} of length $L$
for instance
$0\leq x\leq L$. In full correspondence with the symmetry properties of the discrete case (\ref{percharfu}), the same symmetries also hold in the continuous case, namely
for the $L$-periodic kernel
\begin{equation}
\label{periodmirror}
\begin{array}{l}
g_L(|x|)=g_L(|x+nL|), \hspace{1cm} n \in {\bf \Z}_0 \nonumber \\ \nonumber \\
g_L(|\frac{L}{2}+\xi|)= g_L(|\frac{L}{2}-\xi|) =g_L(|nL+\frac{L}{2}-\xi|) ,\hspace{1cm} n\in {\bf \Z}_0
\end{array}
\end{equation}
(\ref{periodmirror})$_1$ is the periodicity condition and (\ref{periodmirror})$_2$ the reflection-symmetry condition, where the principal axes of reflection symmetry are periodically repeating at $\xi_n=nL+\frac{L}{2}$ ($n\in {\bf \Z}_0$).
\newline\newline
If the infinite space kernel $g_{\infty}$ is known, we can construct the $L$-periodic kernel by the following
projection convolution
\begin{equation}
\label{projectionop}
g_L(|x|) = \int_{-\infty}^{\infty} \delta_L(x-x')\, g_{\infty}(|x'|)\,{\rm d}x' = \sum_{l=-\infty}^{\infty}{\hat g}(k_l)\Phi_l(x)\Phi_l^*(x')
\end{equation}
where the periodic kernel $g_L(|x|)$ has a discrete eigenvalue spectrum $\{{\hat g}_L(k_l)\}$ at the discrete set of Bloch wave numbers $k_l=\frac{2\pi}{L}l$ with $l\in {\bf \Z}_0$. Now consider the
infinite space kernel which has the spectral representation
\begin{equation}
\label{kernelinfinitspace}
g_{\infty}(|x|) = \frac{1}{2\pi}\int_{-\infty}^{\infty} {\hat g}_{\infty}(k) e^{ikx}{\rm d}k
\end{equation}
with the continuous spectrum $\{{\hat g}_{\infty}(k)\}$ ($-\infty < k < \infty$). Then account for
the $L$-periodic $\delta$-function $\delta_L(x-x')$ (unity operator in the space of $L$-periodic functions)
which can be written as
\begin{equation}
\label{periodicdelta}
\delta_L(x-x')= \sum_{n=-\infty}^{\infty} \delta_{\infty}(x-x'-nL) = \sum_{l=-\infty}^{\infty} \Phi_l(x)\Phi_l^*(x') , \hspace{0.25cm} \Phi_l(\xi)=\frac{e^{ik_l\xi}}{\sqrt{L}} ,\hspace{0.25cm} k_l =\frac{2\pi}{L}l ,\hspace{0.25cm} l \in {\bf Z}_0
\end{equation}
where $\delta_{\infty}(\xi)$ denotes the infinite space Dirac's $\delta$-function. The spectral representation of the $L$-periodic $\delta$-function $\delta_L(\xi)$ is expressed by the $L$-periodic Bloch eigenfunctions $\Phi_l$. Performing convolution (\ref{projectionop}) yields with (\ref{periodicdelta})
\begin{equation}
\label{convolper}
g_L(|x|) = \sum_{n=-\infty}^{\infty} g_{\infty}(|x-nL|) = g_{\infty}(|x|) +\sum_{n=1}^{\infty}
\left(g_{\infty}(|x-nL|)+g_{\infty}(|x+nL|)\right) ,\hspace{0.3cm} 0\leq x\leq L
\end{equation}
This relation is the generalization of the so-called {\it Ewald sum} which appears in the context of a Coulomb potential in a periodic cell \cite{Ewald21}.
Relation (\ref{convolper}) allows one to represent the $L$-periodic kernel $g_L$ if the infinite space kernel $g_{\infty}$ is known.
Expression (\ref{projectionop}) for $g_L(x)$ is defined on its principal interval $0\leq x \leq L$ and is in full analogy with representation (\ref{finiteelem}) of the fractional $N$-periodic characteristic matrix function of the finite chain.
The series (\ref{convolper}) contains the infinite space kernel $g_{\infty}(|x|)$ plus image terms. The periodic kernel $g_L$ of (\ref{convolper}) and the infinite space kernel $g_{\infty}$ are generally connected by
\begin{equation}
\label{connect}
g_{\infty}(|x|) = \lim_{L\rightarrow \infty} g_L(|x|)
\end{equation}
where the image terms are vanishing for $L\rightarrow \infty$ due to
$|\int_L^{\infty}g_{\infty}(\xi){\rm d}\xi| \rightarrow 0$.
Relation (\ref{connect}) connects infinite space kernel with periodic string kernel and allows to recover
the infinite space kernel if only the $L$-periodic string kernel is known.
It is easy to see that the series (\ref{convolper}) indeed fulfills the symmetries (\ref{periodmirror}) and converges whenever $\int_{-\infty}^{\infty} |g_{\infty}(|\xi|)|\,{\rm d}\xi < \infty$. Let us briefly consider the relation between the Fourier transforms of both kernels.
(\ref{convolper}) has the spectral representation
\begin{equation}
\label{fourierser}
g_L(|x-x'|)=\sum_{l=-\infty}^{\infty} {\hat g}_L(k_l)\Phi_l(x)\Phi_l^*(x'),\hspace{1cm} \Phi_l(x)=\frac{e^{ik_lx}}
{\sqrt{L}} ,\hspace{0.15cm} k_l=\frac{2\pi}{L}l ,\hspace{0.15cm} l\in {\bf \Z}_0
\end{equation}
with the {\it discrete eigenvalue spectrum ${\hat g}_L(k_l)$}, where $\Phi_l(x)$ are the ortho-normal Bloch eigenfunctions\footnote{i.e. $\int_0^L\Phi_i(x)^*\Phi_l(x){\rm d}x =\delta_{il}$}. The eigenvalues ${\hat g}_L(k_l)$ are obtained by utilizing
(\ref{convolper}) as piecewise integrations over the entire infinite space
\begin{equation}
\label{fourier}
\begin{array}{l}
\displaystyle {\hat g}_L(k_l)\Phi_l(x) = \int_0^L g_L(|x-x'|)\Phi_l(x'){\rm d}x'= \Phi_l(x)
\sum_{n=-\infty}^{\infty} \int_{nL}^{(n+1)L}g_{\infty}(|\xi|)e^{-ik_l\xi }{\rm d}\xi \nonumber \\ \nonumber \\
\displaystyle {\hat g}_L(k_l) = \int_{-\infty}^{\infty} g_{\infty}(|\xi|)e^{-ik_l\xi }\, {\rm d}\xi = {\hat g}_{\infty}(k=k_l)
\end{array}
\end{equation}
where (\ref{fourier})$_2$ is the inversion of (\ref{kernelinfinitspace}) for $k=k_l$.
The discrete eigenvalues ${\hat g}_L(k_l)={\hat g}_{\infty}(k=k_l)$ are a discrete subset taken at the distinct values of Bloch wave numbers $k=k_l$ from the continuous spectrum of eigenvalues
${\hat g}_{\infty}(k)$ of the infinite space kernel.
All required information to construct the periodic kernel $g_L$ is available if the infinite space kernel $g_{\infty}$ and the length of the string $L$ are known. Therefore, in order to get the fractional Laplacian for the finite $L$-periodic string, let us first consider the infinite space continuum limit of the fractional characteristic function matrix.
\newline\newline
{\bf \noindent Infinite space continuum limit} \newline Let us consider the continuum limit of the fractional matrix elements (\ref{compactanalmogy}) for $p=\frac{x}{h} >>1$ and $0\leq x \leq L \rightarrow \infty$, which is obtained from the asymptotics of (\ref{matrixformii}). Taking into account Stirling's asymptotic formula \cite{abramo}, which holds for sufficiently large $\beta >>1$, namely $\displaystyle \beta! \sim \sqrt{2\pi\beta} \,\,\frac{\beta^{\beta}}{{e^{\beta}}}$, we get for $a,b$ finite and constant the power law behavior
\begin{equation}
\label{asympbet}
\frac{(\beta +a)!}{(\beta+b)!} \sim \beta^{a-b} \hspace{1cm} \beta >>1
\end{equation}
Then we obtain for $p>>1$ for the matrix elements (\ref{compactanalmogy})
the asymptotic representations
\begin{equation}
\label{asympmat}
f^{(\alpha)}(|p|)=\Omega_{\alpha}^2(-1)^p \left(\begin{array}{l}
\hspace{0.2cm} \alpha \nonumber \\
\frac{\alpha}{2} +p
\end{array}\right)= -\Omega_{\alpha}^2\frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{(p-\frac{\alpha}{2}-1)!}{(\frac{\alpha}{2}+p)!}
\sim -\Omega_{\alpha}^2\,\,\frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})} \,\, p^{-\alpha-1}
\end{equation}
leading for $\frac{|x|}{h} >>1$ to the limit kernel of the fractional Laplacian matrix (\ref{laplamatel})\footnote{The additional prefactor $h^{-2}$ comes into play since in the continuum limit a double sum $\sum_p\sum_q \rightarrow \int \int\frac{{\rm d}x{\rm d}x'}{h^2}$.}
\begin{equation}
\label{asymp}
{\tilde \Delta}_{\alpha}(|x|) = \lim_{h\rightarrow 0+} -\mu(h) \frac{1}{h^2}f^{(\alpha)}\left(\frac{|x|}{h}\right) = \rho_0A_{\alpha}
\frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})} \,\, |x|^{-\alpha-1} ,\hspace{0.5cm} 0< |x| < \infty
\end{equation}
The continuum limit kernel (\ref{asymp}) has per construction the physical dimension $g\times sec^{-2}\times cm^{-2}$.
Comparison of (\ref{asymp}) with (\ref{contilimlap}) yields for the (negative-semi-definite) {\it fractional Laplacian kernel} (Riesz fractional derivative)
of the infinite space
\begin{equation}
\label{fractlaplkernel}
-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_{\infty}(x) =
\frac{\alpha !}{\pi}\frac{\sin{(\frac{\alpha\pi}{2})}}{ |x|^{\alpha+1}} ,\hspace{0.5cm} 0< |x| < \infty
\end{equation}
We emphasize that we are considering the infinite space and $\delta_{\infty}(x)$ indicates the infinite space Dirac's $\delta$-function.
(\ref{fractlaplkernel}) is the well known expression for the kernel of infinite space fractional Laplacian
(e.g. \cite{riesz2,gorenflo,michelb,michel-ima2014})\footnote{Eq. (4.12) in \cite{riesz2}
uses a different sign convention. The expression there denotes the positive (semi-definite) operator $\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}$ which is referred to in that paper as ``fractional Laplacian''.}. The hypersingular behavior at $x=0$ of (\ref{fractlaplkernel}) can be removed by introducing a regularization in the distributional sense \cite{michel-ima2014}
\begin{equation}
\label{distri}
-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_{\infty}(x) = - \lim_{\epsilon\rightarrow 0+}\frac{\alpha !}{\pi}\,\Re\left\{\frac{i^{\alpha+1}}{(x+i\epsilon)^{\alpha+1}}\right\}
\end{equation}
For $x\neq 0$ (\ref{distri}) coincides with (\ref{fractlaplkernel}), however, when integrating over $x=0$ (\ref{distri}) extends (\ref{fractlaplkernel}) such that regularity is achieved:
The regularization in (\ref{distri}) mimics the alternating behavior of the discrete fractional Laplacian (\ref{laplamatel}) at $0\leq p \leq ceil(\frac{\alpha}{2})$ by oscillations taking place when $x\rightarrow 0$
where the phase of the complex kernel (\ref{distri}) jumps from zero (at $x=0$) to $\frac{\pi(\alpha+1)}{2}$ (at $x=0\pm \delta x$). As a consequence the fractional Laplacian (\ref{distri}) applied to a constant field yields zero \cite{michel-ima2014}, which reflects relation (\ref{transla}). It is demonstrated in \cite{michel-ima2014} that the regularized representation for the fractional infinite space Laplacian (\ref{distri}) indeed is valid for any positive $\alpha >0$ as it removes the hypersingularity at $x=0$ of the non-regularized kernel (\ref{fractlaplkernel}) for all $\alpha >0$.
Further for integer $m=\frac{\alpha}{2}$, relation (\ref{distri}) takes indeed the distributional representation of integer order-Laplacian
\begin{equation}
\label{integerorder}
-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}=m}\delta_{\infty}(x) = (-1)^{m+1}\frac{d^{2m}}{dx^{2m}} \lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\Re\{ \frac{i}{(x+i\epsilon)} \} = (-1)^{m+1}\frac{d^{2m}}{dx^{2m}} \lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\frac{\epsilon}{(x^2+\epsilon^2)} ,\hspace{1cm} m\in {\bf \N_0}
\end{equation}
where $\displaystyle \delta_{\infty}(x)=\lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\frac{\epsilon}{(x^2+\epsilon^2)}$ is Dirac's infinite space $\delta$-function.
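The power-law asymptotics (\ref{asympmat}) underlying the continuum kernel (\ref{fractlaplkernel}) can also be illustrated numerically; a brief Python sketch (scipy assumed, the exponent $\alpha$ arbitrary) compares the exact infinite-chain elements with the asymptotic law:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaln

alpha = 0.8     # arbitrary non-integer exponent

def f_exact(p):
    """Exact infinite-chain element f^(alpha)(|p|)/Omega_alpha^2 for p > alpha/2."""
    ratio = np.exp(gammaln(p - alpha/2.0) - gammaln(alpha/2.0 + p + 1.0))
    return -gamma(alpha + 1.0) / np.pi * np.sin(np.pi * alpha / 2.0) * ratio

def f_asym(p):
    """Asymptotics: -(alpha!/pi) sin(alpha*pi/2) p^(-alpha-1)."""
    return -gamma(alpha + 1.0) / np.pi * np.sin(np.pi * alpha / 2.0) * p**(-alpha - 1.0)

for p in [10, 100, 1000]:
    print(p, f_exact(p) / f_asym(p))    # ratio tends to 1 as p increases
\end{verbatim}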
We devote the subsequent paragraph to the analysis of the periodic string limit and the construction of the fractional Laplacian defined on the $L$-periodic string.
\newline\newline
{\bf \noindent Periodic string continuum limit: Fractional Laplacian on $L$-periodic string}
\newline\newline
With the above general consideration (relations (\ref{periodmirror})-(\ref{fourier})), we can directly pass
to the explicit representation of the $L$-periodic fractional Laplacian of (\ref{contilimlap}). The $L$-periodic fractional Laplacian kernel which we denote by $K_L^{(\alpha)}(|x|)$ takes the form
\begin{equation}
\label{contilimlapfinal}
\begin{array}{l}
\displaystyle -\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_L(x) = K_L^{(\alpha)}(|x|) = \frac{\alpha ! \sin{(\frac{\alpha\pi}{2})}}{\pi} \sum_{n=-\infty}^{\infty}\frac{1}{|x-nL|^{\alpha+1}} \nonumber \\ \nonumber \\
\displaystyle \hspace{3cm}K_L^{(\alpha)}(|x|) = \frac{\alpha !\sin{(\frac{\alpha\pi}{2})}}{\pi L^{\alpha+1}}\left\{-\frac{1}{|\xi|^{\alpha+1}}+
{\tilde \zeta}(\alpha +1,\xi) + {\tilde \zeta}(\alpha +1,-\xi) \right\} ,\hspace{1.5cm} \xi=\frac{x}{L} \nonumber \\ \nonumber \\
\displaystyle \hspace{3cm}K_L^{(\alpha)}(|x|) = -\frac{\alpha !}{\pi} \lim_{\epsilon\rightarrow 0+} \Re\left\{\sum_{n=-\infty}^{\infty}
\frac{i^{\alpha+1}}{(x-nL+i\epsilon)^{\alpha+1}}\right\} \nonumber \\ \nonumber \\ \displaystyle \hspace{3cm} K_L^{(\alpha)}(|x|)=\frac{\alpha !}{\pi L^{\alpha+1}} \lim_{\epsilon\rightarrow 0+}
\Re \left\{\ i^{\alpha+1} \left( \frac{1}{(\xi+i\epsilon)^{\alpha+1}}
-\zeta(\alpha+1,\xi+i\epsilon)-\zeta(\alpha+1,-\xi+i\epsilon) \right)\right\}
\end{array}
\end{equation}
by accounting for (\ref{convolper}) and above representations for the infinite space fractional Laplacian
(\ref{fractlaplkernel}) and (\ref{distri}).
The fractional Laplacian kernel (\ref{contilimlapfinal}) is defined on the principal interval $0 \leq \xi = \frac{x}{L} \leq 1$ over the length of the string.
(\ref{contilimlapfinal}) represents the periodic string continuum limit $h\rightarrow 0$ with $Nh=L$ finite, of the finite chain fractional Laplacian matrix (\ref{laplafini}).
In (\ref{contilimlapfinal})$_2$, for the (hyper-) singular representation of the kernel, we have introduced a slightly modified version of the Hurwitz $\zeta$-function ${\tilde\zeta}(..)$\footnote{implemented as ``HurwitzZeta'' in {\it Mathematica}; see Eq. (2)
at http://mathworld.wolfram.com/HurwitzZetaFunction.html}. The distributional representation (\ref{contilimlapfinal})$_4$ is expressed by standard Hurwitz $\zeta$-functions denoted by $\zeta(..)$. These two variants of $\zeta$-functions are defined by \cite{whitaker}
\begin{equation}
\label{hurwitz}
{\tilde \zeta}(\beta,x)= \sum_{n=0}^{\infty}\frac{1}{|x+n|^{\beta}} \, ,\hspace{2cm} \zeta(\beta,x)=\sum_{n=0}^{\infty}
\frac{1}{(x+n)^{\beta}} ,\hspace{1.5cm} \Re\, \beta >1
\end{equation}
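The Hurwitz $\zeta$-representation (\ref{contilimlapfinal})$_2$ can be checked against a direct truncation of the image sum (\ref{contilimlapfinal})$_1$; a minimal sketch (Python, using the Hurwitz zeta function available in scipy; all parameters are arbitrary, and for the principal interval $0<\xi<1$ the modified functions reduce to ${\tilde\zeta}(\beta,\xi)=\zeta(\beta,\xi)$ and ${\tilde\zeta}(\beta,-\xi)=\xi^{-\beta}+\zeta(\beta,1-\xi)$):
\begin{verbatim}
import numpy as np
from scipy.special import gamma, zeta   # zeta(s, q): Hurwitz zeta sum_{n>=0} (n+q)^(-s)

alpha, L = 1.5, 1.0
beta = alpha + 1.0
pref = gamma(alpha + 1.0) * np.sin(np.pi * alpha / 2.0) / np.pi

def K_images(x, n_max=100000):
    """Truncated image sum of the L-periodic kernel."""
    n = np.arange(-n_max, n_max + 1)
    return pref * np.sum(np.abs(x - n * L)**(-beta))

def K_zeta(x):
    """Hurwitz-zeta form of the same kernel (principal interval 0 < x < L)."""
    xi = x / L
    bracket = -xi**(-beta) + zeta(beta, xi) + (xi**(-beta) + zeta(beta, 1.0 - xi))
    return pref * bracket / L**beta

x = 0.3 * L
print(K_images(x), K_zeta(x))   # agree up to the truncation error of the image sum
\end{verbatim}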
We see for $\alpha>0$ and $x\neq 0$ that the series in (\ref{contilimlapfinal}) converges absolutely, as does the power function integral $\int_1^{\infty}\xi^{-\alpha-1}{\rm d}\xi$. The expressions (\ref{contilimlapfinal})$_{3,4}$ represent the regularized distributional representation of the kernel where
(\ref{distri}) has been taken into account. In (\ref{contilimlapfinal})$_{3,4}$ we take into account that $\Re \frac{i^{\alpha+1}}{(\xi+i\epsilon)^{\alpha+1}} = \Re \frac{i^{\alpha+1}}{(-\xi+i\epsilon)^{\alpha+1}} $ is for $\epsilon\rightarrow 0+$ an even distribution with respect to $\xi$.
In full correspondence with the symmetries of the matrix elements of the finite chain fractional matrix, the periodic string fractional Laplacian kernel (\ref{contilimlapfinal}) fulfills the symmetries (\ref{periodmirror}), i.e. $L$-periodicity and reflection-symmetry with respect to the axes $x_n= \frac{L}{2}+nL$ ($n\in {\bf \Z}_0$).
We see in view of the hyper-singular representations (\ref{contilimlapfinal})$_{1,2}$ that $K_L^{(\alpha)}$ has periodically repeating singularities at $x=nL$ ($n\in {\bf \Z}_0$).
In the limiting case when $\frac{\alpha}{2}=m\in {\bf \N}$ takes integers, kernel (\ref{contilimlapfinal})
is vanishing everywhere except at the singularities (due to $\sin{\frac{\pi\alpha}{2}}=0$ for $\frac{\alpha}{2} \in {\bf \N}$).
In view of the regularized representation (\ref{contilimlapfinal})$_{3,4}$ and by accounting for (\ref{integerorder}) we notice
that (\ref{contilimlapfinal}) takes then the form
\begin{equation}
\label{integerperiodic}
\begin{array}{l}
\displaystyle K_L^{(\alpha=2m)}(|x|)= (-1)^{m+1}\frac{d^{2m}}{dx^{2m}} \sum_{n=-\infty}^{\infty} \lim_{\epsilon\rightarrow 0+}\frac{1}{\pi}\frac{\epsilon}{((x-nL)^2+\epsilon^2)} ,\hspace{1cm} \frac{\alpha}{2} = m\in {\bf \N_0}\nonumber \\ \nonumber \\
\displaystyle \hspace{3cm} =
(-1)^{m+1}\frac{d^{2m}}{dx^{2m}} \sum_{n=-\infty}^{\infty}\delta_{\infty}(x-nL) = -\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}=m}\delta_L(x)
\end{array}
\end{equation}
which indeed is, by accounting for the representation of (\ref{periodicdelta})
for the $L$-periodic $\delta_L$-function, the distributional representation of the {\it integer-order
Laplacian on the $L$-periodic space}. This includes, when $\alpha$ approaches the (forbidden) value zero ($\frac{\alpha}{2} \rightarrow 0+$), the (negative) $L$-periodic unity operator $-\delta_L(x)$.
The integer cases (\ref{integerperiodic}) contain for $m=\frac{\alpha}{2}=1$ the continuum limit of the classical $L$-periodic Born von Karman chain, leading to classical elasticity governed by the Laplacian kernel $K_L^{(2)}(|x|)=\frac{d^2}{dx^2}\delta_L(x)$.
One can further see that in the limiting case of an infinitely long string $L\rightarrow\infty$ (\ref{contilimlapfinal})
recovers the infinite space fractional Laplacian of (\ref{fractlaplkernel}) and (\ref{distri})
where the series of the image terms $\displaystyle \sim \sum_{n=1}^{\infty}
\left\{|x+ nL|^{-\alpha-1} + |x- nL|^{-\alpha-1}\right\} \sim L^{-\alpha}$
tends to zero as $L^{-\alpha}$.
\newline\newline
\begin{figure*}[H]
\hskip3.5cm
\includegraphics[scale=0.5]{Kernel1.pdf}
\caption{$L$-periodic fractional Laplacian kernel $K_L^{(\alpha)}$ of (\ref{contilimlapfinal}) for $L=1$.}
\label{fig:2}
\end{figure*}
{\bf Figure 2}
\newline
{\it In figure 2 the $L$-periodic fractional Laplacian kernel $K_L^{(\alpha)}$ of (\ref{contilimlapfinal}) is plotted for $L=1$
over two periods $0< x < 2$ for $0< \alpha < 4$. Due to the $\sin{(\frac{\pi\alpha}{2})}$-multiplier, the kernel is positive in the interval $0< \alpha< 2$, vanishing at $\alpha=2$, and taking negative values for $2<\alpha<4$. When approaching the boundaries $x_n=0,1,2,..$, the kernel quickly takes huge absolute values (positive values for $0\leq \alpha \leq 2$, and negative values for $2\leq \alpha < 4$).}
\newline\newline
Further, we see from the general relation (\ref{fourier}) when applying it to (\ref{contilimlapfinal})
that
\begin{equation}
\label{eigeneq}
\begin{array}{l}
\displaystyle \int_0^LK_L^{(\alpha)}(|x-x'|)\Phi_l(x') {\rm d}x' =
-\Phi_l(x)\frac{\alpha !}{\pi}
\Re\int_{-\infty}^{\infty}\frac{e^{ik_lx}}{(\epsilon-ix)^{\alpha+1}}{\rm d}x \nonumber \\ \nonumber \\ \displaystyle \hspace{4cm} = -\Phi_l(x)\frac{1}{2\pi}\Re \int_{-\infty}^{\infty}{\rm d}\tau |\tau|^{\alpha} \int_{-\infty}^{\infty} e^{i x(k_l+\tau)}{\rm d}x \nonumber \\ \nonumber \\ \nonumber \\
\displaystyle \hspace{4cm} = - \Phi_l(x)\Re \int_{-\infty}^{\infty} |\tau|^{\alpha}\delta(\tau+k_l) {\rm d}\tau \nonumber \\ \nonumber \\
\displaystyle \hspace{4cm} = - |k_l|^{\alpha} \Phi_l(x)
\end{array}
\end{equation}
This convolution with the Bloch eigenfunctions $\Phi_l$ indeed yields the discrete eigenvalues $-|k_l|^{\alpha} = -(\frac{2\pi }{L})^{\alpha}|l|^{\alpha}$
($l \in {\bf \Z}_0$), especially with the lowest eigenvalue $|k_0|^{\alpha}=0$ (since $\alpha >0$), which means that application of the $L$-periodic fractional Laplacian to a constant yields zero.
Indeed we have with (\ref{fourierser}) and (\ref{eigeneq}) the spectral representation of the $L$-periodic fractional Laplacian
\begin{equation}
\label{fourierserfractalperi}
-\left(-\frac{d^2}{dx^2}\right)^{\frac{\alpha}{2}}\delta_L(x-x')
= - \sum_{l=-\infty}^{\infty} |k_l|^{\alpha} \Phi_l(x)\Phi_l^*(x'),\hspace{0.5cm} \Phi_l(x)=\frac{e^{ik_lx}}
{\sqrt{L}} ,\hspace{0.3cm} k_l=\frac{2\pi}{L}l ,\hspace{0.3cm} l \in {\bf \Z}_0
\end{equation}
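As a consistency sketch for (\ref{fourierserfractalperi}) (Python, numpy assumed; parameters arbitrary, $A_{\alpha}=1$): the eigenvalues $h^{-\alpha}\left(4\sin^2{(\frac{k_l h}{2})}\right)^{\frac{\alpha}{2}}$ of the rescaled discrete chain approach the continuum eigenvalues $|k_l|^{\alpha}$ when the lattice constant $h=L/N$ shrinks at fixed $L$:
\begin{verbatim}
import numpy as np

alpha, L = 1.5, 2.0 * np.pi
k = 2.0 * np.pi * np.arange(1, 6) / L      # a few Bloch wave numbers k_l

for N in [10, 100, 1000, 10000]:
    h = L / N
    lam_discrete = h**(-alpha) * (4.0 * np.sin(k * h / 2.0)**2)**(alpha / 2.0)
    # maximum deviation from the continuum eigenvalues |k_l|^alpha -> 0 as h -> 0
    print(N, np.max(np.abs(lam_discrete - np.abs(k)**alpha)))
\end{verbatim}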
\noindent {\it Zero dimensional string (point) continuum limit.}
A further observation is worth mentioning: in the limiting case $L\rightarrow 0$, when the length of the string shrinks to a point (limiting case of a zero-dimensional string), the summation over $n$ in the series (\ref{contilimlapfinal}) takes the form of an integral over the quasi-continuous variable $\xi=nL$, and the resulting fractional Laplacian kernel for the zero-dimensional string yields, due to
\begin{equation}
\lim_{\epsilon\rightarrow 0+}\Re\int_{-\infty}^{\infty}
\frac{i^{\alpha+1}}{(\xi+i\epsilon)^{\alpha+1}}{\rm d}\xi = 0
\end{equation}
a vanishing constant.
\section{Conclusions}
In this paper we have deduced the fractional discrete Laplacian matrices in explicit forms for the infinite chain ($N\rightarrow \infty$),
and the $N$-periodic finite chain where the particle number $N$ is arbitrary and not necessarily large. The $N$-periodic finite chain fractional Laplacian matrix is obtained in explicit form in terms of an infinite series of the
infinite chain fractional Laplacian matrix.
These results appear useful as a starting point to define discrete
fractional operators on finite lattices in $n=1,2,3,..$ dimensions of the physical space.
Further we analyze continuum limits of the chain fractional Laplacians: The infinite space continuum limit yields the distributional representations of the well known 1D fractional Laplacian kernel (Riesz fractional derivative). The infinite space representation is utilized to deduce
the $L$-periodic kernel of the fractional Laplacian on the $L$-periodic string. For all these cases, infinite space and periodic string, respectively, the fractional Laplacians take in the special case of integer exponents $\frac{\alpha}{2} \in {\bf \N}_0$ the distributional representations of the corresponding
integer-orders of the Laplacian.
The $L$-periodic string fractional Laplacian kernel represents the periodic string continuum limit of the discrete fractional Laplacian matrix of the finite $N$-periodic chain. The exact representation of the periodic string fractional Laplacian kernel is expressed in compact form
by Hurwitz type $\zeta$-functions. Especially this representation appears to be useful for
computational purposes.
The present model has the potential to be extended to construct the fractional Laplacians in finite domains when boundary conditions are imposed. This could be done by constructing an appropriate projection formalism, where a unit operator (projector kernel) has to be constructed which spans the space of functions fulfilling the prescribed boundary conditions.
Generalizations of the discrete fractional matrices deduced in this paper can be extended to
nD periodic unit-cells (nD cyclically closed tori). In this way a discrete fractional
calculus on finite $nD$ lattices can be developed.
There is a vast field of open problems in anomalous diffusion (L\'evy flights), wave propagation, fractional Quantum Mechanics, turbulence and many other areas where the exact representations for the discrete and continuous fractional Laplacian as deduced in this paper can be useful.
Especially a challenging task could be the development of formulations of
discrete fractional Quantum Mechanics by utilizing the 1D discrete fractional operators of the present paper and their $nD$
generalizations.
\section{Acknowledgements}
Fruitful discussions with G\'erard Maugin are gratefully acknowledged.
\section{Appendix I}
In the derivations to follow, always the main definition (\ref{gammafu}) of the $\Gamma$-function is assumed, i.e. the arguments of $\Gamma$-functions are positive.
Those cases where the analytically continued definition (\ref{recursiongamma}) is employed are explicitly marked.
\newline\newline
Let us evaluate in details the important integral (\ref{fractlattice}) for the matrix elements of the fractional Laplacian matrix
\begin{equation}
\label{fractlatticeb}
f^{(\alpha)}(|p|) = \frac{\Omega_{\alpha}^2}{2\pi}\int_{-\pi}^{\pi}e^{i\kappa p}\left(4\sin^2{\frac{\kappa}{2}}\right)^{\frac{\alpha}{2}} {\rm d}\kappa =\Omega_{\alpha}^2 \frac{2^{\alpha + 1}}{\pi}\int_{0}^{\frac{\pi}{2}}
\sin^{\alpha}(\varphi)\cos{(2p\varphi}){\rm d}\varphi \hspace{0.5cm} \alpha >0 ,\hspace{0.5cm} p\in {\bf \Z}_0
\end{equation}
We introduce $\xi=\sin^2(\varphi)$ (${\rm d}\varphi =2^{-1}\left[\xi(1-\xi)\right]^{-\frac{1}{2}}{\rm d}\xi$) with $0 \leq \xi \leq 1$ and $\cos{\varphi} = \sqrt{1-\xi} \geq 0$, $\sin{\varphi}=\sqrt{\xi} \geq 0$ for $0\leq \varphi \leq \frac{\pi}{2}$.
Further, let us put in the following deduction $p=|p|$. Then (\ref{fractlatticeb}) can be written as
\begin{equation}
\label{matelwrites}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2\frac{2^{\alpha}}{\pi}\Re \int_{0}^{1} \xi^{\frac{\alpha}{2}-\frac{1}{2}} (1-\xi)^{-\frac{1}{2}}\left(\sqrt{1-\xi}+i\sqrt{\xi} \right)^{2p} {\rm d}\xi
\end{equation}
Then we utilize
\begin{equation}
\displaystyle \cos{(2p\varphi}) = \Re\{(\sqrt{1-\xi}+i \sqrt{\xi})^{2p} \} =\sum_{s=0}^p \frac{(2p)!}{(2s)!(2p-2s)!}(-1)^s\xi^s(1-\xi)^{p-s}
\end{equation}
where $\Re(..)$ denotes the real part of $(..)$. Further we account for
\begin{equation}
\label{factorials}
(2n)! = 2^{2n}n!\frac{(n-\frac{1}{2})!}{(-\frac{1}{2})!} \hspace{1cm} n\in {\bf \N}_0
\end{equation}
so that (\ref{matelwrites}) can be written as
\begin{equation}
\label{easytosee}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2 \, 2^{\alpha}\, \frac{(2p)!}{2^{2p}p!}\times
\int_{0}^{1}\xi^{\frac{\alpha}{2}}\sum_{s=0}^p\frac{p!}{s!(p-s)!}(-1)^s
\frac{\xi^{s-\frac{1}{2}}}{(s-\frac{1}{2})!}\frac{(1-\xi)^{p-s-\frac{1}{2}}}{(p-s-\frac{1}{2})!}{\rm d}\xi
\end{equation}
where
\begin{equation}
\label{productderivative}
\sum_{s=0}^p\frac{p!}{s!(p-s)!}(-1)^s
\frac{\xi^{s-\frac{1}{2}}}{(s-\frac{1}{2})!}\frac{(1-\xi)^{p-s-\frac{1}{2}}}{(p-s-\frac{1}{2})!} = \frac{1}{(p-\frac{1}{2})!(p-\frac{1}{2})!}\frac{d^p}{d\xi^p}\left\{\xi(1-\xi) \right\}^{p-\frac{1}{2}}
\end{equation}
Thus we obtain (\ref{fractalat})
\begin{equation}
\label{fractalatb}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2 \frac{2^{\alpha}}{\sqrt{\pi}}\frac{1}{(p-\frac{1}{2})!} \int_0^1\xi^{\frac{\alpha}{2}}\frac{d^p}{d\xi^p}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi
\end{equation}
where $(-\frac{1}{2})!=\Gamma(\frac{1}{2})=\sqrt{\pi}$. Note that for the (in our model) forbidden exponent $\alpha=0$ the matrix element (\ref{fractalatb}) yields $f^{(\alpha=0)}(|p|) = \Omega_0^2\,\delta_{p0}$ in accordance with ${\hat 1}\Omega_{0}^2$ of
relation (\ref{alphacarfou}) which also can directly be seen from (\ref{fractlatticeb}).
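As a numerical cross-check of the representation (\ref{fractalatb}) against the Fourier-integral form (\ref{fractlatticeb}), one may evaluate both directly; a sketch assuming the sympy and scipy libraries, with $\alpha$ and $p$ chosen arbitrarily:
\begin{verbatim}
import numpy as np
import sympy as sp
from scipy.integrate import quad
from scipy.special import gamma

alpha, p = 1.3, 3       # arbitrary non-integer alpha, element index p = |p|

# Fourier-integral form (Omega_alpha^2 = 1)
val_fourier, _ = quad(lambda k: np.cos(k * p)
                      * (4.0 * np.sin(k / 2.0)**2)**(alpha / 2.0), 0.0, np.pi)
val_fourier /= np.pi

# derivative-integral form: 2^alpha/(sqrt(pi)(p-1/2)!) *
#   int_0^1 xi^(alpha/2) d^p/dxi^p [xi(1-xi)]^(p-1/2) dxi
xi = sp.symbols('xi', positive=True)
deriv = sp.diff((xi * (1 - xi))**sp.Rational(2 * p - 1, 2), xi, p)
integrand = sp.lambdify(xi, xi**(alpha / 2.0) * deriv, 'numpy')
val_int, _ = quad(integrand, 0.0, 1.0, limit=200)
val_deriv = 2.0**alpha / (np.sqrt(np.pi) * gamma(p + 0.5)) * val_int

print(val_fourier, val_deriv)   # the two representations agree
\end{verbatim}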
Now consider
\begin{equation}
\label{considerintehral}
\int_0^1\xi^{\frac{\alpha}{2}}\frac{d^p}{d\xi^p}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi = \left\{\xi^{\frac{\alpha}{2}}\frac{d^{p-1}}{d\xi^{p-1}}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}} \right\}_0^1 -\frac{\alpha}{2}\int_0^1\xi^{\frac{\alpha}{2}-1}\frac{d^{p-1}}{d\xi^{p-1}}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi
\end{equation}
The lowest relevant order of the boundary term $\{..\}$ behaves as $\sim \xi^{\frac{\alpha+1}{2}}(1-\xi)^{\frac{1}{2}} $ plus higher order terms and hence vanishes at the boundaries $\xi=0,1$. Performing $n \leq p$ partial integrations yields boundary terms with
lowest powers in $\xi$ and $1-\xi$ of the form
\begin{equation}
\label{boundaryterms}
\sim \xi^{\frac{\alpha}{2}-(n-1)}\frac{d^{p-n}}{d\xi^{p-n}}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}\sim
\xi^{\frac{\alpha+1}{2}}(1-\xi)^{n-\frac{1}{2}} ,\hspace{1cm} 1\leq n\leq p
\end{equation}
which vanish at the boundaries $\xi=0,1$. Performing this procedure $p$ times then yields
\begin{equation}
\label{ntimes}
\begin{array}{l}
\displaystyle \int_0^1\xi^{\frac{\alpha}{2}}\frac{d^p}{d\xi^p}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi = \int_0^1\xi^{\frac{\alpha}{2}-p}\left\{\xi(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi \times\, (-1)^p \prod_{s=0}^{p-1}(\frac{\alpha}{2}-s)\nonumber \\ \nonumber\\
\displaystyle = \int_0^1\xi^{\frac{\alpha -1}{2}}\left\{(1-\xi)\right\}^{p-\frac{1}{2}}{\rm d}\xi \times\, (-1)^p \prod_{s=0}^{p-1}(\frac{\alpha}{2}-s)\nonumber \\ \nonumber \\
\displaystyle = \frac{\frac{\alpha-1}{2}!(p-\frac{1}{2})!}{(\frac{\alpha}{2}+p)!}\times\, (-1)^p \prod_{s=0}^{p-1}(\frac{\alpha}{2}-s)
\end{array}
\end{equation}
where in the last line we have used
\begin{equation}
\label{Bfunction}
\int_0^1\xi^{\beta_1}(1-\xi)^{\beta_2}{\rm d}\xi =\frac{\beta_1!\beta_2!}{(\beta_1+\beta_2+1)!},\hspace{1cm} \Re\, \beta_i >-1
\end{equation}
With (\ref{ntimes})$_3$ we get for the matrix element (\ref{fractalatb}) where always $p=|p|$
\begin{equation}
\label{matrixele}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2\frac{2^{\alpha}}{\sqrt{\pi}}\frac{\frac{(\alpha-1)}{2}!}{(\frac{\alpha}{2}+p)!}(-1)^p\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s)
\end{equation}
To bring this into a more convenient form we consider (\ref{Bfunction}) for $\beta_1=\beta_2=\frac{\alpha-1}{2}$
and introduce $\xi=\frac{1}{2}(1+\sqrt{\eta})$ (${\rm d}\xi= 2^{-2}\eta^{-\frac{1}{2}}{\rm d}\eta$).
Then we have
\begin{equation}
\label{Bfualp}
\frac{(\frac{(\alpha-1)}{2})!(\frac{(\alpha-1)}{2})!}{\alpha !} =
2^{-\alpha}\int_0^1(1-\eta)^{\frac{(\alpha-1)}{2}}\eta^{-\frac{1}{2}}{\rm d}\eta = 2^{-\alpha} \frac{(\frac{(\alpha-1)}{2})!}{\frac{\alpha}{2}!}(-\frac{1}{2})!
\end{equation}
which is known as the duplication formula \cite{abramo}. It follows with $(-\frac{1}{2})!=\sqrt{\pi}$ that
\begin{equation}
\label{2alphapi}
\frac{\alpha !}{\frac{\alpha}{2}!} = \frac{2^{\alpha}}{\sqrt{\pi}}(\frac{(\alpha-1)}{2})!
\end{equation}
Plugging (\ref{2alphapi}) into (\ref{matrixele}) yields a more illuminating representation, namely
\begin{equation}
\label{matrixeleibb}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2\frac{\alpha!}{\frac{\alpha}{2}!(\frac{\alpha}{2}+p)!}(-1)^p\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s)
\end{equation}
We can distinguish two cases, namely (i) $ p \leq p_0={\rm ceil}(\frac{\alpha}{2}) $, and (ii) $ p \geq p_0={\rm ceil}(\frac{\alpha}{2})$, where (ii) is only relevant for non-integer $\frac{\alpha}{2}$ and is then always equivalent to $p>\frac{\alpha}{2}$.
\newline\newline
{\bf (i) Case $ p \leq p_0={\rm ceil}(\frac{\alpha}{2}) $}:
\newline
Now we observe that we can write
\begin{equation}
\label{observation}
\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s) = \frac{\frac{\alpha}{2}!}{(\frac{\alpha}{2}-p)!}
\end{equation}
since $(\frac{\alpha}{2}-p)!$ is well defined in case (i). Thus we can write
\begin{equation}
\label{matrixelecasei}
f^{(\alpha)}(|p|) = \Omega_{\alpha}^2(-1)^p\frac{\alpha!}{(\frac{\alpha}{2}-p)!(\frac{\alpha}{2}+p)!}
\end{equation}
which obviously is a generalization of the binomial coefficients including integer and non-integer $\alpha$.
Note that this expression also contains the cases of {\it integer $\frac{\alpha}{2} = m \in {\bf \N}$} with the binomial coefficients of
(\ref{binomialco}). In that case all matrix elements (\ref{matrixelecasei}) with $p\geq \frac{\alpha}{2}+1$ vanish.
When $\frac{\alpha}{2} \notin {\bf \N}$, expression (\ref{matrixelecasei}) covers all
$p$ with $0\leq p \leq p_0={\rm ceil}(\frac{\alpha}{2})$, where $p_0={\rm ceil}(\frac{\alpha}{2})$ denotes the smallest integer greater than $\frac{\alpha}{2}$ ($0<p_0-\frac{\alpha}{2}<1$).
\newline\newline
{\bf (ii) Case $ p \geq p_0 ={\rm ceil}(\frac{\alpha}{2}) $ \hspace{0.25cm} ($p>\frac{\alpha}{2}$)}:
\newline
This case is only relevant when $\frac{\alpha}{2}$ is not an integer, i.e. $p_0-\frac{\alpha}{2} >0 $.
In this case $(\frac{\alpha}{2}-p)!$ is not well defined for $p>p_0$ when the main definition of the $\Gamma$-function (\ref{gammafu}) is assumed. It is then convenient to write the product in a different manner, namely
\begin{equation}
\label{caseii}
\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s) = \prod_{s=0}^{p_0-1}(\frac{\alpha}{2}-s)\prod_{s=p_0}^{p-1}(\frac{\alpha}{2}-s) =
\frac{\frac{\alpha}{2}!}{(\frac{\alpha}{2}-p_0)!}\prod_{s=p_0}^{p-1}(\frac{\alpha}{2}-s)
,\hspace{1cm} p_0={\rm ceil}(\frac{\alpha}{2})
\end{equation}
Taking into account that for $\frac{\alpha}{2}\notin {\bf \N}$ we have $0 < p_0-\frac{\alpha}{2} < 1$, i.e. $\Gamma(p_0-\frac{\alpha}{2})$ is well defined, we can write for the second product in (\ref{caseii})
\begin{equation}
\label{furtherel}
\prod_{s=p_0}^{p-1}(\frac{\alpha}{2}-s) =(-1)^{p-p_0}\frac{(p-1-\frac{\alpha}{2})!}{(p_0-1-\frac{\alpha}{2})!} =(-1)^{p-p_0}
\frac{\Gamma(p-\frac{\alpha}{2})}{\Gamma(p_0-\frac{\alpha}{2})}
\end{equation}
We can then apply the Euler relation (see e.g. \cite{abramo,michelb}; it is briefly deduced in Appendix II)
\begin{equation}
\label{Euler}
\Gamma(p_0-\frac{\alpha}{2})=\frac{1}{\Gamma(1-(p_0-\frac{\alpha}{2}))}\frac{\pi}{\sin{(\pi(p_0-\frac{\alpha}{2}))}}
=(-1)^{p_0+1}\frac{\pi}{(\frac{\alpha}{2}-p_0)!\sin{\frac{\alpha\pi}{2}}}
\end{equation}
where all arguments of $\Gamma$-functions are positive since $0<p_0-\frac{\alpha}{2}< 1$ {\it and} $0<1-(p_0-\frac{\alpha}{2})<1$.
Utilizing (\ref{caseii}) with (\ref{Euler}) and (\ref{observation}) for $p=p_0$ we get
for the entire product
\begin{equation}
\label{takeacount}
\prod_{s=0}^{p-1}(\frac{\alpha}{2}-s) = \frac{(-1)^{p+1}}{\pi}(\frac{\alpha}{2})! \sin{(\frac{\alpha\pi}{2})}
\,\,\Gamma(p-\frac{\alpha}{2}) ,\hspace{1cm} p\geq p_0={\rm ceil}(\frac{\alpha}{2})
\end{equation}
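As a numerical cross-check of (\ref{takeacount}) (the particular values of $\alpha$ below are arbitrary test values, not model parameters), one may run, e.g., the following Python sketch:
\begin{verbatim}
from math import gamma, sin, pi, ceil, isclose

# check: prod_{s=0}^{p-1}(alpha/2 - s)
#        = (-1)^(p+1)/pi * (alpha/2)! * sin(alpha*pi/2) * Gamma(p - alpha/2)
for alpha in (0.3, 1.2, 3.1, 4.7):           # alpha/2 non-integer
    p0 = ceil(alpha / 2)
    for p in range(p0, p0 + 6):
        prod = 1.0
        for s in range(p):
            prod *= alpha / 2 - s
        rhs = ((-1) ** (p + 1) / pi) * gamma(alpha / 2 + 1) \
              * sin(alpha * pi / 2) * gamma(p - alpha / 2)
        assert isclose(prod, rhs, rel_tol=1e-10), (alpha, p)
print("identity (takeacount) verified")
\end{verbatim}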
The matrix elements (\ref{matrixelei}) finally assume the form (where always $p=|p|$)
\begin{equation}
\label{matrixform}
f^{(\alpha)}(|p|) = -\Omega_{\alpha}^2\frac{\Gamma(\alpha+1)}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{\Gamma(p-\frac{\alpha}{2})}{\Gamma(\frac{\alpha}{2}+p+1)} = -\Omega_{\alpha}^2 \frac{\alpha!}{\pi}\sin{(\frac{\alpha\pi}{2})}\frac{(p-\frac{\alpha}{2}-1)!}{(\frac{\alpha}{2}+p)!} ,\hspace{1cm} p > \frac{\alpha}{2}
\end{equation}
We see that this relation vanishes in the integer cases $\frac{\alpha}{2} \in {\bf \N}$ for $p>\frac{\alpha}{2}$ (since then $\sin{(\frac{\alpha\pi}{2})}=0$), reflecting the localization of
$f^{(\alpha)}(|p|)$: only the elements (\ref{matrixelecasei}), which then reduce to the binomial coefficients of (\ref{binomialco}) for $0\leq p \leq m$ ($m=\frac{\alpha}{2} \in {\bf \N}$), are non-vanishing.
Hence (\ref{matrixform}) holds for $p>\frac{\alpha}{2}$ for any integer or non-integer $\frac{\alpha}{2} > 0$.
Its validity can be extended to all $p\in {\bf Z}_0$ when, instead of the main
definition of the $\Gamma$-function (\ref{gammafu}), the analytically continued
definition of the $\Gamma$-function (\ref{recursiongamma}) is assumed.
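As a further consistency check (setting $\Omega_{\alpha}=1$ and again not part of the derivation), one can verify numerically that (\ref{matrixelecasei}) and (\ref{matrixform}) coincide at the overlap point $p=p_0$ for non-integer $\frac{\alpha}{2}$:
\begin{verbatim}
from math import gamma, sin, pi, ceil, isclose

def f_i(alpha, p):    # (matrixelecasei), valid for p <= p0
    return (-1) ** p * gamma(alpha + 1) / (gamma(alpha / 2 - p + 1)
                                           * gamma(alpha / 2 + p + 1))

def f_ii(alpha, p):   # (matrixform), valid for p >= p0
    return (-gamma(alpha + 1) / pi * sin(alpha * pi / 2)
            * gamma(p - alpha / 2) / gamma(alpha / 2 + p + 1))

for alpha in (0.5, 1.3, 2.8, 5.6):           # alpha/2 non-integer
    p0 = ceil(alpha / 2)
    assert isclose(f_i(alpha, p0), f_ii(alpha, p0), rel_tol=1e-10)
print("(matrixelecasei) and (matrixform) agree at p = p0")
\end{verbatim}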
\subsection{Appendix II}
Let us briefly deduce the Euler relation. To this end consider, for $0< \mu <1$, the Fourier integral
\begin{equation}
\label{fourierintmu}
\begin{array}{l}
\displaystyle \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx}|k|^{-\mu}{\rm d}k =\frac{1}{\pi}\Re\lim_{\epsilon\rightarrow 0+}\int_0^{\infty}e^{-k(\epsilon-ix)}k^{-\mu}{\rm d}k ,\hspace{1cm} 0\leq \mu<1 \nonumber \\ \nonumber \\
\displaystyle =\lim_{\epsilon\rightarrow 0+} \frac{1}{\pi}\Re\,(\epsilon-ix)^{\mu-1}\int_0^{\infty}e^{-\tau}\tau^{-\mu}{\rm d}\tau = \lim_{\epsilon\rightarrow 0+}\frac{\Gamma(1-\mu)}{\pi} \Re\,(\epsilon-ix)^{\mu-1} \nonumber \\ \nonumber \\
\displaystyle = \frac{|x|^{\mu-1}}{\pi} \Gamma(1-\mu)\sin{(\frac{\mu\pi}{2})} \hspace{1cm} x \neq 0
\end{array}
\end{equation}
which is a well-defined integral in the range $0 < \mu<1$. The inverse Fourier transformation of (\ref{fourierintmu})
then gives
\begin{equation}
\label{inverse}
\begin{array}{l}
\displaystyle |k|^{-\mu} = \frac{\Gamma(1-\mu)}{\pi}\sin{(\frac{\mu\pi}{2})}\int_{-\infty}^{\infty} e^{-ikx} |x|^{\mu-1}{\rm d}x
\hspace{0.5cm} \nonumber \\ \nonumber \\
\displaystyle |k|^{-\mu} = \frac{2}{\pi}\Gamma(1-\mu)\sin{(\frac{\mu\pi}{2})}
\lim_{\epsilon\rightarrow 0+}\Re \int_0^{\infty} e^{-x(\epsilon+ik)}x^{\mu-1}{\rm d}x = \frac{2}{\pi}\Gamma(1-\mu)\sin{(\frac{\mu\pi}{2})}\lim_{\epsilon\rightarrow 0+}\Re(\epsilon+ik)^{-\mu} \int_0^{\infty} e^{-\tau}\tau^{\mu-1}{\rm d}\tau \nonumber \\ \nonumber \\
\displaystyle 1 = \, \frac{2}{\pi}\cos{(\frac{\pi\mu}{2})}\sin{(\frac{\pi\mu}{2})} \Gamma(1-\mu)\Gamma(\mu) = \frac{\sin{(\pi\mu)}}{\pi}\Gamma(\mu)\,\,\Gamma(1-\mu)
\end{array}
\end{equation}
where the last relation (\ref{inverse})$_3$ is the Euler relation, also referred to as the Euler reflection formula \cite{abramo}, employed in (\ref{Euler}) (put there $\mu=p_0-\frac{\alpha}{2}$ and $p_0={\rm ceil}(\frac{\alpha}{2})$).
So far we are still restricted to $0< \mu < 1$. Let us now consider arbitrary arguments of the $\Gamma$-functions (except negative integers and zero) by utilizing the analytically continued recursive definition of the $\Gamma$-function (\ref{recursiongamma}).
Then we observe for $n\in {\bf \N}_0$
\begin{equation}
\label{extend}
\begin{array}{l}
\displaystyle \Gamma(1-\mu) = (-\mu)! = (-1)^n\Gamma(1-\mu-n)\,\prod_{s=0}^{n-1}(\mu+s) \nonumber \\ \nonumber \\
\displaystyle \Gamma(\mu) = (\mu-1)! = \Gamma(\mu+n)\,\prod_{s=0}^{n-1}\frac{1}{(\mu+s)}
\end{array}
\end{equation}
and obtain the identity
\begin{equation}
\label{gammagen}
\Gamma(1-\mu)\Gamma(\mu)=(-1)^n\Gamma(1-\mu-n)\Gamma(\mu+n) = \frac{\pi}{\sin{(\pi\mu)}}
\end{equation}
and, by taking into account $(-1)^n\sin{(\pi\mu)}= \sin{(\pi(\mu+n))}$, (\ref{gammagen}) takes the form
\begin{equation}
\label{eulergen}
\Gamma(1-\mu-n)\Gamma(\mu+n) = \frac{\pi}{\sin{(\pi(\mu+n))}}
\end{equation}
which is Euler's reflection formula for arbitrary real arguments of the analytically continued $\Gamma$-function, including negative non-integer ones; arguments equal to zero or to negative integers are excluded due to the singularities of the $\Gamma$-function at these points.
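Relation (\ref{eulergen}) may likewise be checked numerically, including for negative non-integer arguments, e.g.:
\begin{verbatim}
from math import gamma, sin, pi, isclose

# Gamma(1-mu-n) * Gamma(mu+n) = pi / sin(pi*(mu+n)),  0 < mu < 1,  n = 0,1,2,...
for mu in (0.2, 0.5, 0.9):
    for n in range(6):
        lhs = gamma(1 - mu - n) * gamma(mu + n)
        rhs = pi / sin(pi * (mu + n))
        assert isclose(lhs, rhs, rel_tol=1e-9), (mu, n)
print("extended reflection formula verified")
\end{verbatim}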
\section{Introduction}
Although studied for several decades, the emission mechanisms in gamma-ray burst (GRB) prompt emission still remain the subject of debate.
It is by now agreed that GRBs are connected to a relativistically expanding outflow or jet \citep{woo93}, but the mechanisms for converting kinetic energy
to observable radiation remain unknown. A number of different models have been proposed, and the radiative processes range from synchrotron
\citep[e.g.,][]{rm92} and Compton scattering \citep{bel10} to thermal emission from a photosphere \citep{mes02}.
The key to deciphering the physics behind GRBs lies in their spectra. Despite the fact that lightcurve structure and duration vary
greatly between events, the overall spectral shape is remarkably similar. This has led to the use of the empirical Band function \citep{band}
to fit the spectra. The function is a smooth joining of two power-laws (with low- and high-energy index $\alpha$ and $\beta$, respectively), and
provides a good fit to most GRB spectra. However, it lacks any physical motivation and the derived parameters have no immediate interpretation.
The success of the Band function lies in the fact that it can adequately fit the shape of most GRB spectra. The function can mimic both thermal
and non-thermal processes, and various attempts have been made to connect its parameters to physically derived properties in the models. Most
such analysis has been done at the level of individual GRBs; however, some attempts have been made to look at the distribution
of parameters, most notably when comparing the low-energy $\alpha$ index to that of the synchrotron spectrum, leading to the so-called ``line
of death problem'' \citep{preece}.
It was quickly found that GRBs could be divided into two classes, long/soft and short/hard, based on their duration and hardness \citep{kou93}. The
two classes are believed to be the result of different progenitors: long GRBs from collapsing massive stars and short GRBs from the merger of two
neutron stars or a neutron star and a black hole. Indeed, supernova signatures have been found both in the afterglow spectrum and lightcurve of
long GRBs \citep[see, e.g.,][and references therein.]{hb12}. Given these two very different progenitors it is remarkable how similar the prompt emission
spectra are, with short GRBs on
average having slightly harder spectra than long ones. Any additional property in which the two classes differ would therefore provide welcome
information to help probe the differences in progenitor mechanisms.
In this paper we present results based on the two largest samples of broad-band GRB spectra: the Burst and Transient Source Experiment (BATSE) on the
{\it Compton Gamma-ray Observatory} ({\it CGRO}) and the Gamma-ray Burst Monitor \citep[GBM;][]{gbmpaper} onboard the {\it Fermi Gamma-ray
Space Telescope}. We will use the standard peak flux spectral fits to constrain the emission mechanisms based on the overall {\it shape} of the spectra. This
can be derived from the Band function fits irrespective of their lack of physical motivation, and thus provides a model independent test. We will focus on
the width of the $EF_E$ spectra, as this is a well-defined and easily calculated property. We begin by describing the data analysis and definition of width
in Sect.~2. Thereafter we present our results, as well as comparisons to spectra from basic radiative processes, in Sect.~3. Finally, we discuss our results.
\section{Data analysis}
In this study we use the peak flux spectral fits presented in the 2nd GBM catalog \citep{gru14}. The catalog contains 943 GRB spectra, detected by {{{\it Fermi}/GBM}} in
the energy range of 8\,keV to 40\,MeV between 2008 and 2012. As part of the standard procedure, all spectra are fit using the Band function and we use these
parameters to determine the shape of the spectrum. Although the Band function does not provide any clues as to the physical processes behind the emission,
in this step we are focused on finding a good description of the spectral shape in the energy range where most of the power is radiated. We choose the spectral
fits made to the peak flux spectra in order to minimize the effect of spectral evolution. The integration time for these spectra is 1.024\,s for long GRBs and
64\,ms for short GRBs \citep{gru14}.
Augmenting the {{{\it Fermi}/GBM}} results, we also analyse 1970 GRB observations from BATSE. Not only does this make our results more instrument independent, but
the energy ranges and the mean energy of the spectra are slightly different (the BATSE range is 20--600\,keV). We use the sample presented in \citet{gol13}.
As in the case of {{{\it Fermi}/GBM}}, we analyse peak flux spectra (the integration time is 2.048\,s for all BATSE spectra) and the methodology used is the same for both
catalogs.
In our analysis, we separate long and short GRBs. This is done based on the $T_{90}$ duration, i.e. the time during which 90\% of the emission is measured.
Following standard classification we separate long and short GRBs at 2\,s.
\subsection{Definition of width}
As a measure of the width of the spectra, we use the full width at half maximum (FWHM) of the $EF_E$ versus $E$ spectra. As the absolute width
is dependent on the location of the spectral peak, we define the width $W$ as
\begin{equation}
W=\log{\left(\frac{E_2}{E_1}\right)}\, ,
\label{widthdef}
\end{equation}
\noindent where $E_1$ and $E_2$ are the lower and upper energy bounds of the FWHM range, respectively. With this measure, $W$ only depends
on the Band function parameters $\alpha$ and $\beta$, corresponding to the low- and high-energy spectral index. In order for the Band
function to have a peak (and thereby a width) in the $EF_E$ representation, $\alpha$ must be greater than $-2$ while $\beta$ must be smaller than $-2$.
To minimize effects when the values are close to their limits we apply a cut requiring $\alpha > -1.9$ and $\beta < -2.1$, with the cut in $\beta$ being
most restrictive. The reason for this is that as the peak energy approaches the upper boundary of the energy range, the high energy part of the ``turn over''
disappears, leading to artificially hard or unconstrained $\beta$ values. Since our spectra are at peak flux, it is more common for the peak energy to be
at a high value and therefore far away from the low energy boundary. We further tested that our results do not depend on the exact choice of cutoff values.
The cut reduced the BATSE sample by 359 and the GBM sample by 252 GRBs (20\% and 27\%, respectively).
From the definition in Eq.~\ref{widthdef}, it is clear that $W$ will be in ``units'' of dex. Furthermore, it is an invariant, redshift independent quantity. As the area in
the $\log EF_E$ vs $\log E$ representation indicates at what energies most of the power is radiated, $W$ will give a measure of how spread (or compact) in
energy decades that power is.
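For reference, $W$ can be evaluated directly from $(\alpha, \beta)$ by locating the FWHM of the Band $EF_E$ spectrum numerically. The following Python sketch illustrates this; it is not the code used for the catalog analysis, and the grid limits as well as the arbitrary choice of e-folding energy $E_0=1$ are assumptions that do not affect $W$:
\begin{verbatim}
from math import exp, log10

def band_EFE(E, alpha, beta, E0=1.0):
    # E^2 N(E) for the Band function; E0 only sets the energy scale
    Eb = (alpha - beta) * E0                    # break energy
    if E <= Eb:
        return E ** (alpha + 2.0) * exp(-E / E0)
    return Eb ** (alpha - beta) * exp(beta - alpha) * E ** (beta + 2.0)

def width_W(alpha, beta, ngrid=20000):
    # assumes alpha > -1.9 and beta < -2.1, so the FWHM lies inside the grid
    logE = [-6.0 + 12.0 * i / (ngrid - 1) for i in range(ngrid)]
    f = [band_EFE(10.0 ** x, alpha, beta) for x in logE]
    imax = max(range(ngrid), key=lambda i: f[i])
    half = f[imax] / 2.0

    def refine(lo, hi):                         # bisection for EF_E = half
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if ((band_EFE(10.0 ** mid, alpha, beta) - half)
                    * (band_EFE(10.0 ** lo, alpha, beta) - half) <= 0.0):
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    i1 = max(i for i in range(imax) if f[i] < half)
    i2 = min(i for i in range(imax, ngrid) if f[i] < half)
    return refine(logE[i2 - 1], logE[i2]) - refine(logE[i1], logE[i1 + 1])

print(round(width_W(-1.0, -2.5), 2))            # about 1.3 for this Band shape
\end{verbatim}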
\section{Results}
As noted above, we analyse long and short GRBs separately for each instrument. The distribution of widths for the spectra of the long GRBs is presented in
Figure~\ref{longwidths}.
\begin{figure}
\includegraphics[width=8.4cm,angle=0]{fig1c.pdf}
\caption{Distribution of spectral width parameter $W$ for the sample of spectra from long GRBs. The two histograms show the 1279 BATSE bursts and the 594
GRBs observed with {{{\it Fermi}/GBM}}. Solid lines show the kernel density estimation, a non-parametric way to estimate the probability density function. The distribution
is very similar for both instruments, peaking around $W\sim1$ with a tail extending to larger widths.}
\label{longwidths}
\end{figure}
\begin{figure}
\includegraphics[width=8.4cm,angle=0]{fig2c.pdf}
\caption{Distribution of spectral width parameter $W$ for the sample of spectra from short GRBs. The histograms show the 332 BATSE bursts and the 87
short GRBs observed with {{{\it Fermi}/GBM}}. Solid lines show the kernel density estimation. The distribution is very similar for both instruments, peaking below
$W\sim1$ and with a tail extending to larger widths.}
\label{shortwidths}
\end{figure}
The distribution peaks at $W\sim1$, although many GRBs can be found in the tail extending towards larger widths. Virtually no spectra
have $W<0.5$. The median value of $W$ is 1.05 for the BATSE sample and 1.07 for the GBM sample. Looking at the tail extending to larger widths, we
find that it is for both instruments populated by spectra
where $\beta$ is close to our cutoff value of $-2.1$. Due to this, we choose the median and quartile deviation to describe the distribution as they
are robust estimators, despite the drawback that associated probability tests have to be calculated numerically, for which we use the bootstrap
method \citep[see e.g.,][]{press}. We note that the occurrence of values $\beta>-2$ is an artificial problem arising from the limited energy window, as a peak must
exist somewhere in the spectrum.
Testing the GBM sample against the (larger) BATSE sample, we find that the difference between the medians is not significant. This confirms
that the distribution is not dependent on instrument, but intrinsic to the spectra themselves.
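For illustration, the bootstrap comparison of two sample medians can be sketched as follows; the samples below are synthetic stand-ins for the two long-GRB width distributions, and this schematic test is not necessarily identical to the exact procedure used for the numbers quoted here:
\begin{verbatim}
import random
from statistics import median

def bootstrap_median_pvalue(sample_a, sample_b, n_boot=2000, seed=1):
    # two-sided bootstrap test of a common parent distribution,
    # using the difference of medians as the test statistic
    rng = random.Random(seed)
    observed = abs(median(sample_a) - median(sample_b))
    pooled = list(sample_a) + list(sample_b)
    na = len(sample_a)
    hits = 0
    for _ in range(n_boot):
        res = [rng.choice(pooled) for _ in range(len(pooled))]
        if abs(median(res[:na]) - median(res[na:])) >= observed:
            hits += 1
    return hits / n_boot

rng = random.Random(0)
batse = [rng.gauss(1.05, 0.3) for _ in range(1279)]   # synthetic widths
gbm = [rng.gauss(1.07, 0.3) for _ in range(594)]
print(bootstrap_median_pvalue(batse, gbm))   # typically large, i.e. not significant
\end{verbatim}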
We now analyze the short GRBs. Figure~\ref{shortwidths} shows the distribution of $W$ for the short bursts in our sample, separated for the two instruments.
As in the case of long GRBs, the distribution of $W$ shows a unimodal distribution where most of the values are clustered in a narrow range. In the case
of the short GRBs, this peak is slightly below a value of 1. Again, the two instruments have very similar distributions, with a median of 0.91 for the BATSE
sample and 0.86 for the GBM sample. The medians of the two distributions are not significantly different.
The characteristics for all sample distributions are summarized in Table~\ref{locationtable}. In order to estimate the mean relative uncertainty for
individual burst widths we use Monte Carlo methods, and find that it is $\sim0.15$.
\begin{table}
\begin{tabular}{l l l l l l}
& \multicolumn{2}{c}{Long} & & \multicolumn{2}{c}{Short} \\
\cline{2-3} \cline{5-6}
& BATSE & GBM & & BATSE & GBM \\
Sample size & 1279 & 594 & & 332 & 87\\
Median & 1.05 & 1.07 & & 0.91 & 0.86\\
Quartile dev. & 0.23 & 0.23 & & 0.20 & 0.12\\
\end{tabular}
\caption{Characteristics of the $W$ distributions of the two instruments, separated for long and short GRBs.}
\label{locationtable}
\end{table}
As the integration times in the GBM data are different for long and short GRBs, we compare the two classes using only the BATSE bursts.
We find that the chance probability that the two samples come from the same distribution is less than $10^{-6}$, and therefore conclude
that there is a highly significant difference between the two types of GRBs. If the difference were due to spectral evolution, we would expect
short GRBs to have larger values of $W$, as most of their duration lies within the integration time (2.048\,s). The fact that short GRB spectra
are seen to be more narrow thus strengthens the case for the difference being intrinsic. The result cannot be explained by differences in
the peak energy between the two types of bursts, as we find no correlation between peak energy and $W$. Neither do we see any
correlation with $T_{90}$.
In order to confirm that the difference is not due to a binning effect, we also analyze the time-integrated spectra. As expected, the
median width increases, with a larger change seen in long GRBs. These spectra show an even stronger and statistically more significant
disparity between short and long bursts (the median is 0.96 for short GRBs and 1.26 for long).
\subsection{Comparison to emission mechanisms}
As noted above, the Band function does not provide any physical interpretation of the GRB spectrum. We have therefore calculated the width of spectra
from known physical processes. In order to make the comparison as general as possible, we select only a few ``basic'' alternatives: thermal emission and
synchrotron radiation for a few electron distributions. Cooling is neglected in the synchrotron calculations, meaning the spectra are at the extreme limit for slow
cooling. Fast cooling synchrotron spectra are significantly wider. We stress that these are not to be seen as models for GRB prompt emission; rather, their simplicity
makes them serve as fundamental limits to which any model including these processes will adhere. For each of these we have calculated $W$ according
to Eq.~\ref{widthdef}. The results are presented in Table~\ref{width_proc}.
\begin{table}
\begin{tabular}{l l}
Process & $W$ \\
\hline
Planck function & 0.54 \\
Monoenergetic synchrotron & 0.93 \\
Synchrotron from Maxwellian e$^-$ & 1.4 \\
Synchrotron from power-law e$^-$, index $-2$ & 1.6 \\
Synchrotron from power-law e$^-$, index $-4$ & 1.4 \\
\end{tabular}
\caption{Width parameters for spectra generated by thermal emission and synchrotron emission from basic electron distributions.}
\label{width_proc}
\end{table}
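As an example of how the entries in Table~\ref{width_proc} are obtained, the Planck value follows from the FWHM in $\log E$ of $EF_E \propto x^4/(e^x-1)$ with $x=E/kT$; a minimal Python check (grid and bisection settings are arbitrary choices) reproduces $W=0.54$ independently of the temperature:
\begin{verbatim}
from math import exp, log10

def planck_EFE(x):                       # EF_E for a blackbody, x = E/kT
    return x ** 4 / (exp(x) - 1.0)

def crossing(lo, hi, target):            # bisection for planck_EFE(x) = target
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (planck_EFE(mid) - target) * (planck_EFE(lo) - target) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

xs = [0.01 * i for i in range(1, 3001)]  # grid 0.01 <= x <= 30
x_peak = max(xs, key=planck_EFE)
half = planck_EFE(x_peak) / 2.0
x1 = crossing(0.01, x_peak, half)
x2 = crossing(x_peak, 30.0, half)
print(round(log10(x2 / x1), 2))          # -> 0.54
\end{verbatim}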
To facilitate the comparison with the data, we overplot some of these limits on the data in Fig.~\ref{widthcomp}. Interestingly, the peak of the
distribution occurs close to the width of monoenergetic synchrotron, which is not a physically realistic scenario. Figure~\ref{widthcomp} also
shows that synchrotron emission from all electron distributions gives significantly wider spectra than generally observed. Conversely, almost
no observed spectrum is more narrow than the Planck function (and all those are consistent with $W=0.5$ within our estimated uncertainty).
\begin{figure}
\includegraphics[width=8.6cm,angle=0]{fig3a.pdf}
\caption{Distribution of spectral width parameter $W$ for long and short bursts from both instruments. Thin solid lines show the kernel density estimation. The
vertical lines indicate $W$ for spectra from the corresponding physical process: blackbody radiation (dashed), monoenergetic synchrotron (thick), synchrotron
from Maxwellian electrons (dotted) and synchrotron from power-law electrons with index $-2$ (dash-dot).}
\label{widthcomp}
\end{figure}
We stress that the definition in Eq.~\ref{widthdef} means that $W$ becomes a constant for the processes considered, and therefore independent
of the location of the spectral peak. For example, a Planck function will have the same value of $W$ for all temperatures. Similarly,
for Band function fits to the spectra, $W$ is independent of the peak energy and only depends on $\alpha$ and $\beta$. This property is particularly
valuable, as the peak energy of GRB spectra varies throughout the burst.
\section{Discussion}
The standard procedure of GRB analysis is to fit individual spectra with more or less physically motivated models. However, any emission mechanism
proposed must not only be able to match a single GRB spectrum, but also have the potential to reproduce the distribution in the entire observed GRB
sample. When it comes to width of the spectrum, it is relatively easy to broaden the spectrum. For instance, spectral evolution or a combination of several
components will give a broader spectrum than predicted by a single, stable emission process. The lines in Fig.~\ref{widthcomp} may therefore be seen
as lower limits - a process can be part of any wider spectrum, but cannot be a strong component in more narrow spectra. In contrast, even though we
have used peak flux spectra, finite integration time is needed for sufficient signal-to-noise. We can therefore not rule out that rapid spectral evolution
has broadened the observed spectra. The measured width parameter must thus be seen as an upper limit.
At first glance it seems striking that the peak of the width distribution occurs around the same values as the width of monoenergetic synchrotron.
However, the physical conditions required for this - constant magnetic field strength and no spread in electron energies - seem highly unreasonable.
Even if such conditions were somehow created at the onset, as the electrons radiate they would quickly cool and spread in energy. Additionally,
as the mean relative uncertainty is only 15\%, monoenergetic synchrotron could not explain the distribution down to $W\sim0.5$. We therefore
conclude that this overlap is most likely coincidental.
The fact that nearly half of the observed spectra are more narrow than monoenergetic synchrotron poses serious problems for this emission mechanism.
Assuming a distribution of electron energies makes the emitted spectrum even broader, worsening the issue. There are of course many parameters which
could in principle be varied, and such attempts have been made to alleviate the ``line of death'' issue. For instance, Klein-Nishina losses can significantly
alter the low-energy spectrum of synchrotron emission \citep{dai11}. Additionally, \citet{uz14} have shown that altering the magnetic
field structure along the radial direction of the outflow can also modify the low-energy slope. However, the example spectra shown in \citet{uz14} all have
$W \geq 2$. In all cases, the spectrum of monoenergetic synchrotron must be seen as extreme, and any realistic assumption will give broader spectra. Based
on this ``width of death" we therefore conclude that synchrotron emission under the standard assumption of isotropic pitch angle electron distributions cannot
be the dominant emission process in the majority of the observed spectra. More complex models, such as small pitch angle synchrotron, can be invoked to
create more narrow spectra. However, even the most narrow spectrum in the limit of very small pitch angles has a width parameter of $W\sim1.1$ \citep{lp02}.
It is also interesting to note that practically no spectrum is more narrow than the Planck function. This makes physical sense, as a blackbody is by definition the
most efficient radiative process. However, there are only a few GRBs narrow enough to match the Planck function. So while thermal emission can be a component
in all observed spectra, other mechanisms must be involved. In particular, if thermal radiation is invoked, a way must be found to broaden the spectrum. Suggested
mechanisms include subphotospheric dissipation \citep{ryd11} and inverse Compton scattering \citep{peer06,bel10}.
There have recently been several studies where GRB spectra have been fit with a combination of blackbody and Band components, with the blackbody
appearing as a subdominant peak around 100 keV \citep[e.g.,][]{gui11,axe12,bur14}. These fits tend to result in higher values of the Band function peak energy,
and softer values of $\alpha$, which may to some extent alleviate the ``line of death'' problematics, and they are often interpreted as the result of both
thermal (the blackbody) and synchrotron radiation (the Band function). However, we stress that if the spectrum comprises more than one component, one or
both of these components must be {\it more} narrow than the total spectrum. The Band component in these multi-component spectra will therefore always be
more narrow than the spectrum as a whole, so these approaches {\it worsen} the mismatch between theoretical and observed width if assuming a synchrotron
origin.
The observed spread in the distributions, represented by quartile deviation in Table~\ref{locationtable}, is a combination of intrinsic spread and
measurement uncertainty. Our estimated mean uncertainty of 0.15 thereby implies that the intrinsic spread must be quite small. This provides a strong
constraint on models explaining the emission: regardless of the emission process assumed, it seems likely that there is a large range of possible parameter
values, and one would therefore expect a correspondingly large spread in the width of the spectra.
In a physical picture, there is no reason why the dominant emission process should be the same in all GRBs. In the context of the fireball model,
it is likely that energy dissipation can occur at different places in the jet for different bursts. GRBs where the bulk of the dissipation occurs below the
photosphere will have a strong thermal component \citep[which may not need to be a blackbody; see for example][]{nym11}. In other cases the bulk of the
dissipation may occur further out in the jet giving rise to completely different spectra. We stress that although our results tend to rule out synchrotron as
the dominant process behind the majority of GRB spectra, it may still be present as a weak component.
Our discovery that there is a significant difference between long and short GRB spectra is of particular interest, as there are very few ways that these two
groups are known to differ. Such differences could for example be due to the central engine or differences in the jet properties.
The fact that the distributions still look relatively similar seems to indicate that the same processes are active in both types of GRBs, but that the
conditions may on average be slightly different. We will explore this further in a future study.
\section{Summary and Conclusions}
We have analysed the width of 2291 gamma-ray burst spectra from both {\it CGRO}/BATSE and {{{\it Fermi}/GBM}}, and find that most lie in a narrow range.
When comparing the observations to the spectra from fundamental physical processes (thermal radiation and synchrotron emission) we find that
synchrotron emission faces severe difficulties in explaining the data; in particular, synchrotron radiation from a distribution of electron energies
will give a spectrum wider than the majority of observed GRB spectra. On the other hand, models of photospheric emission must include mechanisms
to significantly broaden the emitted spectrum from a Planck function to match the data. We further find that there are significant differences between
long and short GRBs, with the latter having more narrow spectra.
\section*{Acknowledgements}
This work was supported by the Swedish National Space Board. We thank Vah{\'e} Petrosian for valuable comments.
\section{Introduction}
Quantum information theory is well-developed for information transmission over noisy quantum channels, dating back to the work of Holevo in the 70's \cite{Hol72, Hol73}, either for the transmission of classical information \cite{Hol98, SW97}, quantum information \cite{Lloyd97, Shor02, Dev05}, and even if we allow for pre-shared entanglement between sender and receiver \cite{BSST99, BSST02}. It describes the ultimate limits for (unidirectional) data transmission over noisy quantum channels without concern for explicit, efficient construction of codes. Closely related is the area of quantum coding theory, which takes a more practical approach toward the construction of quantum error correcting codes (QECC) \cite{Shor95, Ste95} by providing explicit and efficient constructions \cite{CS96, Ste95, Gott96} of codes, and by providing bounds on their existence \cite{CRSS98, FM04, Rai99}.
Quantum communication complexity has also been studied in depth since Yao's seminal paper introduced the field in 1993 \cite{Yao93}. It is an idealized setting in which local computation is free and communication is noiseless but expensive, and two parties want to compute a classical function of their joint input while minimizing the number of qubits they have to exchange to do so. Exponential separations have been shown for some promise problems between their classical and quantum communication complexity \cite{BCW98}, even if we allow for some bounded error \cite{Raz99}. Moreover, for both classical and quantum communication complexity, interaction has been proved to be a powerful resource: exponential separations in the communication complexity of some functions have also been established between protocols restricted to some bounded number of messages $k$, and protocols with $k + 1$ messages \cite{NW91, KNTZ07}. Cleve and Buhrman \cite{CB97} defined an alternative model for communication complexity in a quantum setting, in which players are allowed to pre-share an arbitrary entangled state but transmit classical rather than quantum bits. This model is at least as powerful as Yao's (up to a factor of $2$), since entanglement along with 2 classical bits can be used to teleport an arbitrary quantum state. It is still an open question whether the two models are essentially equivalent, since no good bound on the amount of entanglement required in the Cleve-Buhrman model is known.
However, quantum communication, even more so than classical communication, is prone to transmission errors in the real world. Moreover, with the ubiquity of distributed computing nowadays, it has become increasingly important to develop an information and coding theory for interactive protocols. To the best of our knowledge, this problem has never been studied before in a quantum setting. In the realm of classical communication, Schulman initiated the field with his pioneering works \cite{Sch92, Sch93, Sch96} showing that it is possible to simulate any protocol defined over a noiseless channel over a noisy channel with exponentially small probability of error while only dilating the protocol by a constant factor. This multiplicative dilation factor, in the case of a binary symmetric channel, is proportional to the inverse of the capacity, as in the data transmission case. However, it does not go to $1$ asymptotically in this case. For the case of adversarial errors, Schulman also shows how to withstand up to a corruption rate of $\frac{1}{240}$. Recent work by Braverman and Rao \cite{BR11} shows how to withstand error rates of $\frac{1}{4} - \epsilon$ in the case of an adversarial channel, and they also show this is optimal in their model of noisy communication. Even more recently, Franklin, Gelles, Ostrovsky and Schulman \cite{FGOS12} were able to show that in an alternative model in which Alice and Bob are allowed to share a secret key unknown to the adversary Eve, they can withstand error rates up to $\frac{1}{2} - \epsilon$, which is also shown to be optimal in their model. All of the above works use the notion of tree codes, introduced by Schulman. These tree codes are shown to exist for various parameters, but no efficient construction is known. A relaxation of the tree code condition still strong enough for most applications in interactive coding was stated by Gelles, Moitra and Sahai \cite{GMS12}, and they were able to provide an efficient randomized construction for these so-called potent tree codes. Using these in a random error model leads to efficient decoding on the average, hence to efficient simulation protocols (of course, given black-box access to the original protocol, which might be inefficient in itself), but in a worst-case adversarial scenario, the decoding might still take exponential time with these potent tree codes. It was only recently that an alternative coding strategy developed by Brakerski and Kalai \cite{BK12} was able to take care efficiently of the adversarial error case by cleverly splitting the whole communication into blocks of logarithmic length in which tree encoding is used but also some history information is sent in between the blocks that enables efficient decoding. This construction was further improved by Brakerski and Naor in \cite{BN13}. A survey article by Braverman \cite{Bra12} provides a good overview of results and open questions in the area of classical interactive communication circa 2011, though some of the important questions raised there have been addressed since. In particular, the interactive capacity of binary symmetric channels was recently investigated by Kol and Raz \cite{KR13}, who find that the communication capacity of the binary symmetric channel with capacity close to unity does indeed behave differently in the asymptotic limit of long interactive protocols than in the data transmission case.
The approach taken in all of the above is inherently classical and does not generalize well to the quantum setting. In particular, the fact that classical information can be copied and resent multiple times is implicitly used, and therefore the fact that the information in the communication register can be destroyed by noise is without consequence. By contrast, in the quantum case, if the information in some communication register is destroyed, it could not have been copied before, and in particular it cannot be resent. A naive strategy, which applies in the quantum as well as in the classical case, would be to encode each round separately. But, in a random error model, a constant dilation of each round would not be sufficient in the worst case of one-qubit transmission to reach good fidelity, and a super-constant dilation leads to a communication rate of zero asymptotically. Moreover, in the case of adversarial errors, no constant rate of error can be withstood with such a strategy: the adversary can then always disrupt a whole block (unless the number of round is constant). Using properties of classical information, it was possible to design clever simulation protocols that were able to withstand constant error rates at constant communication rates, and succeed in simulating protocols designed for noiseless classical channels over noisy channels by reproducing the whole transcript of the noiseless protocol. However, it is not clear that it is possible, given an arbitrary quantum protocol designed for a noiseless bidirectional quantum channel, to simulate it over noisy quantum channels with constant error rate at a constant communication rate. Even in the case of protocols in the Cleve-Buhrman model, in which the communication is classical, it is not clear that we can achieve results similar to what is done for classical communication protocols when we replace noiseless classical channels by noisy ones. Indeed, if a quantum measurement is performed on the entangled state shared by the two parties and it is later realized that the choice of the measurement was based on wrong information, such a measurement will in general be irreversible, and the naive approach to adapt the previous simulation to the Cleve-Buhrman model does not work.
\section{Overview of Results}
We show that despite the above setbacks, it is indeed possible to simulate arbitrary quantum protocols over noisy quantum channels with good communication rates. We consider two models for interaction over noisy channels: one analogous to Yao's model, in which all communication is over noisy quantum channels but the parties do not pre-share entanglement, and one analogous to Cleve-Buhrman's model in which all communication is over noisy classical channels but parties are allowed to pre-share noiseless entanglement. We call these models the quantum and shared entanglement models, respectively. We show that with only a constant dilation factor, it is possible to withstand error rates of $\frac{1}{2} - \epsilon$ in the shared entanglement model, for arbitrarily small $\epsilon > 0$, thus matching the highest tolerable error rate in the analogous shared secret key model for classical interactive communication \cite{FGOS12}. In the quantum model, we can withstand error rates close to the best achievable for quantum data transmission. The result for the shared entanglement model is optimal when the noiseless protocol requires bidirectional communication and protocols over the noisy channel are restricted to being non-adaptive, i.e.~the order in which the parties talk depends only on the protocol and maybe on the input, but not on the previous actions of the adversary. This restriction is natural for protocols over noisy channels: the view of each party is different and depends on previous errors, so this restriction enforces that they do not simultaneously try to speak over the channel. Moreover, in our simulation protocol for this model, all communication is classical, which is in general a much less expensive resource than quantum communication. The approach we take is to teleport the quantum communication register back and forth. When the register is in some party's possession, he tries to evolve the simulation by applying one of his unitaries in the noiseless protocol, or one of its inverses if he realizes at some point he applied it wrongly before. The important point is that all operations on the quantum registers are reversible, being a sequence of noiseless protocol unitaries and random Pauli operators. Of particular importance to our work is the notion of tree codes, introduced by Schulman for the purpose of simulating classical protocols over noisy classical channels, which we use to transmit our classical information. We can adapt the techniques we develop in the shared entanglement model for the quantum communication model in which parties do not pre-share entanglement, but have access to a noisy quantum channel: we first distribute a linear amount of entanglement using standard quantum information and coding theory techniques. We can tolerate an adversarial error rate of up to $\frac{1}{6}$ in that case. We can also adapt our techniques for an adversarial error model to the case of a random error model. Then, dilation factors proportional to $\frac{1}{Q}$ for a depolarizing channel of quantum capacity $Q$ in the quantum model, and proportional to $\frac{1}{C}$ for a binary symmetric channel of capacity $C$ in the shared entanglement model, are sufficient. We also show that the result in the shared entanglement model is asymptotically optimal: there exist a family of binary functions for which a dilation factor proportional to $\frac{1}{C}$ is necessary. 
We further extend the study in the shared entanglement model to consider noisy entanglement in the form of noisy EPR pairs in the so-called Werner states. For any non-separable Werner state, we give simulation protocols with linear noisy classical communication and noisy EPR pair consumption.
If we compare our approach for the quantum case to those for the classical case, as described in a recent paper on efficient interactive coding \cite{BN13}, the high-level logic of all proposed solutions until now for classical protocol simulation can be described as follows: try to evolve the protocol, and if it is later realized there has been some error, try to have the parties go back to where they last agreed (in a protocol tree representation, this would be their least common ancestor). However, in our case, the parties roughly try to follow the same idea, but are not able to do this passively as in the classical case for two reasons. First, the underlying classical communication is not fixed at the beginning of the protocol but depends on the random outcomes of the teleportation measurement, so even when they would try to synchronize based on previous errors, they would have to actively teleport during resynchronization, leading to potentially more errors on the joint quantum register. Second, there is no underlying transcript (or protocol tree) that the parties try to synchronize on, except that they want to evolve their sequence of unitaries, and so they have to actively rewind previous unitaries and wrong teleportation decoding instead of just going back to a point in the protocol where they agreed on the previous transcript.
Since the parties need to backtrack the simulation of the noiseless protocol, we remark that it is surprising that we can tolerate error rates as high as $\frac{1}{2} -\epsilon$. Indeed, all recent classical schemes tolerating high error rates had the property that the parties were always going forward with the communication by using the tree structure of classical protocols, in comparison to Schulman's original tree code based scheme in which there is some form of backtracking and for which he can tolerate a much lower adversarial error rate of $\frac{1}{240}$. To obtain such a result, we follow \cite{FGOS12} and use the fact that a blueberry code effectively turns most adversarial errors into erasure, so that concatenating such a code on top of a tree code yields a tree code with an erasure symbol. Since actual errors are twice as harmful as erasures for the tree code condition, which is stated in terms of Hamming distance, it was shown in \cite{FGOS12} that if the error rate is below $\frac{1}{2} - \epsilon$, then the number of rounds in which both parties correctly decode a long prefix is large enough to imply success of the simulation. This condition is not sufficient for our purpose: the number of rounds in which both parties decode correctly even the whole string could be high, but if these rounds alternate with rounds in which at least one of the parties makes a decoding error, then the protocol could stall, and simulation would fail. This is due to the fact that in our case, the parties could agree on some transcript at the end of the simulation, but this transcript could be completely useless, consisting mostly of random teleportation measurement outcome. To circumvent this possibility, Lemma \ref{lem:optcor} develops a new bound on tree codes with an erasure symbol, which might be of independent interest for classical interactive coding, and is sufficient to obtain the same maximum tolerable error rate of $\frac{1}{2} - \epsilon$ for our quantum protocol as was obtained in \cite{FGOS12} with blueberry codes over the protocol of \cite{BR11}. Another important ingredient in our simulation protocols is the representation for noisy quantum protocols that we develop, which is quite powerful and will be used in forthcoming papers to adapt classical results on computationally efficient interactive computation over adversarial channels \cite{BK12} and on the interactive capacity of the binary symmetric channel \cite{KR13} to the quantum regime. Note that due to the use of tree codes in the present paper, the protocols presented are not computationally efficient. However, as was just stated, it is possible to extend classical results on efficient interactive coding to our case, and will do so in upcoming work.
The paper is structured as follows: in section \ref{sec:not}, we set up notation and state definitions we use, in particular those relevant for the different models of communication. In section~\ref{sec:bas}, we state and prove a simpler version of our main result for the adversarial case in the shared entanglement model, and then show how to modify it to obtain optimal results in section~\ref{sec:opt}. Finally, section~\ref{sec:oth} shows how to adapt the results of the previous sections to obtain various other interesting results, in particular for the quantum model and in the case of a random error model. We conclude with a discussion of our results, and further directions of research are also explored.
\section{Preliminaries}
\label{sec:not}
\subsection{Quantum Mechanics}
Let us first briefly review the quantum formalism, mainly to set notation; for a more thorough treatment, we refer the interested reader to good introductions in a quantum information theory context \cite{NC00, Wat08, Wilde11}. For every quantum system $A$, we associate a finite dimensional Hilbert space, which by abuse of notation we also denote by $A$. The state of quantum system $A$ is represented by a density operator $\rho^A$, a positive semi-definite operator over the Hilbert space $A$ with unit trace. We denote by $\D (A)$ the set of all density operators representing states of system $A$. Composite quantum systems are represented by the (Kronecker) tensor product space of the underlying spaces, i.e. for systems $A$ and $B$, the allowed states of the composite system $A \otimes B$ are (represented by) the density operators in $\D(A \otimes B)$. We sometimes use the shorthand $AB$ for $A \otimes B$. The evolution of a quantum system $A$ is represented by a completely positive, trace preserving linear map (CPTP map) $\N^A$ such that if the state of the system was $\rho \in \D(A)$ before evolution through $\N$, the state of the system is $\N(\rho) \in \D(A)$ after. We refer to such maps as quantum channels, and to the set of all channels acting on $A$ as $\L(A)$. An important quantum channel that we consider is the quantum depolarizing channel $\T_{\epsilon}$ with depolarizing parameter $0 \leq \epsilon \leq 1$: it takes as input a qubit $\rho$, and outputs a qubit $\T_{\epsilon}(\rho) = (1 - \epsilon) \rho + \epsilon \frac{I}{2}$, i.e.~with probability $1 - \epsilon$ it outputs $\rho$ and with complementary probability $\epsilon$ it outputs a completely mixed state. We also consider quantum channels with different input and output systems; the set of all quantum channels from a system $A$ to a system $B$ is denoted $\L(A, B)$. Another important operation on a composite system $A \otimes B$ is the partial trace $\Tr{B}{\rho^{AB}}$, which effectively gets rid of the part of the quantum state $\rho^{AB}$ in the $B$ subsystem and keeps the corresponding marginal state of the $A$ subsystem. Fixing a basis $\{ \ket{i} \}$ for $B$, the action of the partial trace can be evaluated as $\Tr{B}{\rho^{AB}} = \sum_i \bra{i} \rho \ket{i}$, and this is a valid quantum channel in $\L(A\otimes B, A)$.
An important special case of quantum systems is that of pure states, whose density operators have a special form: rank-one projectors $\kb{\psi}{\psi}$. In such a case, a more convenient notation is provided by the pure state formalism: a state is represented by the unit vector $\ket{\psi}$ (up to an irrelevant complex phase) that the density operator projects upon. We denote by $\H(A)$ the set of all such unit vectors (up to equivalence of global phase). Pure state evolution is represented by a unitary operator $U^A$ acting on $\ket{\psi}^A$, denoted $U \ket{\psi}^A$. Evolution of the $B$ register of a state $\ket{\psi}^{AB}$ under the action of a unitary $U^B$ is represented by $(I^{A} \otimes U^{B})\ket{\psi}^{AB}$, for $I^{A}$ representing the identity operator acting on the $A$ system, and is denoted by the shorthand $U^{B} \ket{\psi}^{AB}$ for convenience. We might drop the superscripts when the systems are clear from context. The evolution under consecutive action of unitaries $U_j$'s is denoted by:
\begin{equation}
\prod_{j=1}^\ell U_j \ket{\psi} = U_\ell \cdots U_1 \ket{\psi}.
\end{equation}
We represent a classical random variable $X$ with probability density function $p_X$ by a density operator $\sigma^X$ that is diagonal in a fixed basis $\{ \ket{x} \}_{x \in \X}$: $\sigma^X = \sum_{x \in \X} p_X(x) \kb{x}{x}^X$. For a quantum system $A$ classically correlated with a random variable $X$, we represent the corresponding classical-quantum state by the density operator $\rho^{XA} = \sum_{x \in \X} p_X(x) \kb{x}{x}^X \otimes \rho_x^A$, in which $\rho_x^A$ is the state of system $A$ conditioned on the random variable $X$ taking value $x \in \X$. The extraction of classical information from a quantum system is represented by quantum instruments: classical-quantum CPTP maps that take classical-quantum states on a composite system $X \otimes A$ to classical-quantum states. Viewing classical random variables as a special case of a quantum system, quantum instruments can be viewed as a special case of quantum channels.
Our simulation protocols make heavy use of the teleportation protocol between Alice and Bob \cite{BBCJPW93}, which uses the following resource state shared by Alice and Bob, called an EPR pair: $\ket{\Phi^+}^{T_A T_B} = \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$, with the qubit in the $T_A$ register held by Alice, and the qubit in the $T_B$ register held by Bob. The teleportation protocol then uses one of these resource states to teleport one qubit either from Alice to Bob, or from Bob to Alice. If Alice wants to teleport a qubit $\ket{\psi}$ in the register $C$ to Bob, with whom she shares an EPR pair, she applies a joint Bell measurement, which can perfectly distinguish the Bell states $\{\ket{\Phi_{xz}} = \frac{1}{\sqrt{2}} (\ket{0x} + (-1)^z \ket{1\bar{x}}) \}_{x, z \in \{0, 1 \}}$, to the registers $C T_A$ she holds, and obtains uniformly random measurement outcomes $xz \in \{0, 1 \}^2$. After this measurement, the state in the $T_B$ register is $X^x Z^z \ket{\psi}$, for $X$ and $Z$ the Pauli operators corresponding to bit flip and phase flip in the computational ($Z$) basis, respectively. If Alice transmits the two bits $xz$ to Bob, he can then decode the state $\ket{\psi}$ on the $T_B$ register by applying $(X^x Z^z)^{-1} = Z^z X^x$. Teleportation from Bob to Alice is performed similarly (EPR pairs are symmetric).
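As an illustration of this teleportation step (a toy numerical sketch assuming Python with numpy, and not part of our simulation protocols), the following simulates the teleportation of one qubit, including Alice's Bell measurement and Bob's Pauli correction:
\begin{verbatim}
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def bell(x, z):
    # |Phi_xz> = (|0 x> + (-1)^z |1 xbar>)/sqrt(2)
    e = np.eye(2)
    return (np.kron(e[0], e[x]) + (-1) ** z * np.kron(e[1], e[1 - x])) / np.sqrt(2)

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)              # qubit in register C to be teleported

state = np.kron(psi, bell(0, 0))        # register order: C, T_A, T_B

# Alice's Bell measurement on (C, T_A): residual amplitudes on T_B per outcome
amp = {(x, z): bell(x, z).conj() @ state.reshape(4, 2)
       for x in (0, 1) for z in (0, 1)}
outcomes = list(amp)
probs = [float(np.vdot(amp[o], amp[o]).real) for o in outcomes]
x, z = outcomes[rng.choice(4, p=probs)]         # sampled outcome (two classical bits)

# Bob decodes with (X^x Z^z)^(-1) = Z^z X^x
bob = amp[(x, z)] / np.linalg.norm(amp[(x, z)])
corr = np.linalg.matrix_power(Z, z) @ np.linalg.matrix_power(X, x)
print(np.abs(np.vdot(corr @ bob, psi)) ** 2)    # fidelity with |psi>, -> 1.0
\end{verbatim}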
Another technique we use is that of making classical operations coherent: measurements and classically-controlled operations are replaced by corresponding unitaries (and ancilla register preparation). The coherent version of a measurement is called a pseudo-measurement: instead of measuring a qubit to obtain a binary classical outcome in $\{0, 1 \}$ with probability $p_0$ and $p_1$, respectively (a classical value that could then be distributed among two or more parties), a pseudo-measurement is applied to the quantum state, leaving it in a pure quantum state in which the qubits involved will act the same as if a measurement had been performed, provided they do not further interact. The technique to do so uses a controlled-$X$ gate, i.e.~a gate mapping $\ket{x}^S \ket{b}^T$ to $\ket{x}^S \ket{b \oplus x}^T$ for some source qubit in register $S$ to be kept at the previous measurement point, and some target qubit in register $T$ to be distributed. A pseudo-measurement is then performed by preparing a fresh ancilla qubit in state $\ket{0}$ in some register $T$ and, relabelling the register of the previously measured qubit by register $S$, simply applying the controlled-$X$ gate as described above. Then, if the $S$ and $T$ registers are left as is, a subsequent measurement on any one of these registers will still output $0$ or $1$ with probability $p_0$ and $p_1$, respectively, and if both registers are measured, both outcomes are perfectly correlated. The usefulness of this operation is that, contrary to an actual measurement, it is not irreversible, and if it is later realized that a qubit should not have been measured, the pseudo-measurement can be undone.
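The pseudo-measurement and its reversibility can likewise be illustrated with a small sketch (again assuming numpy; the input amplitudes are arbitrary):
\begin{verbatim}
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control = S, target = T

psi = np.array([0.6, 0.8j])                      # qubit in register S, p_0 = 0.36
joint = CNOT @ np.kron(psi, [1, 0])              # adjoin ancilla |0> in T, apply CNOT

A = joint.reshape(2, 2)                          # amplitudes A[s, t]
rho_T = A.T @ A.conj()                           # reduced state of register T
print(np.round(np.diag(rho_T).real, 2))          # [0.36 0.64]: measurement statistics

print(np.allclose(CNOT @ joint, np.kron(psi, [1, 0])))  # True: fully reversible
\end{verbatim}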
To measure the success of the simulation, we use the trace distance $\| \rho - \sigma \|_1^A$ between two arbitrary states $\rho^A$ and $\sigma^A$, in which $\| O \|_1^A = \Tr{}{(O^\dagger O)^{\frac{1}{2}}}$ is the trace norm for operators on system $A$. We might drop the $A$ subscript if the system is clear from context. The trace distance has the operational interpretation of being (four times) the best possible bias to distinguish between the two states $\rho^A$ and $\sigma^A$, given a single unknown copy of one of these two states. To distinguish between quantum channels, we first consider the induced norm for quantum channels $\N \in \L(A, B)$: $\|\N \| = \max{ \{\|\N(\sigma) \|_1^B : \sigma \in \D(A) \}}$. Correlations with another quantum system can help distinguish between quantum channels, so the appropriate norm to use to account for this fact is the diamond norm \cite{AKN97}: $\| \N \|_\diamond = \|\N \otimes I^R \|$ for some reference system $R$ of the same dimension as the input system $A$. Then, for two quantum channels $\N, \M \in \L(A, B)$, $\| \N - \M \|_\diamond$ has the operational interpretation of being (four times) the best possible bias to distinguish between the two channels, given a single unknown use of one of these two channels.
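For concreteness, the trace distance between a pure qubit state and its image under the depolarizing channel $\T_{\epsilon}$ equals $\epsilon$, which the following small sketch (assuming numpy) verifies:
\begin{verbatim}
import numpy as np

def trace_distance(rho, sigma):
    # trace norm of the Hermitian difference = sum of absolute eigenvalues
    return float(np.abs(np.linalg.eigvalsh(rho - sigma)).sum())

rho = np.array([[1., 0.], [0., 0.]])               # pure state |0><0|
for eps in (0.1, 0.25, 0.5):
    depol = (1 - eps) * rho + eps * np.eye(2) / 2  # T_eps(rho)
    print(eps, trace_distance(rho, depol))         # second number equals eps
\end{verbatim}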
\subsection{Quantum Communication Complexity}
\label{sec:qucc}
In Yao's model for quantum communication complexity \cite{Yao93}, Alice is given a classical input $x \in X$, Bob a classical input $y \in Y$, and they want to compute a classical function $f: X \times Y \rightarrow Z$ (often $X = Y = \{0, 1 \}^n, Z = \{0, 1 \}$) of their joint input by communicating as few quantum bits as possible, but without regard to the local computation cost. Often, we are only interested in $x \in X, y \in Y$ satisfying some promise $P : X \times Y \rightarrow \{0, 1 \}$. A global quantum system is split into three subsystems: the $A$ register is the register held by Alice, the $B$ register is the one held by Bob, and the $C$ register is the communication register, initially held by Alice, and exchanged back-and-forth by Alice and Bob in each round. Our formal description of the protocols in this model is based upon the one given in \cite{Kre95}.
A length $N$ protocol is defined by a sequence of unitaries $U_1, \cdots, U_{N + 1}$ in which for $i$ odd, $U_i$ acts on the $AC$ register, and for $i$ even, $U_i$ acts on the $BC$ register. Initially, all the qubits in the $A, B, C$ registers are set to the all $\ket{0}$ state, except for $n$ qubits in the $A$ register initially set to $x \in X$, and $n$ in the $B$ register set to $y \in Y$. The number of qubits $m_A, m_B \in \mathbb{N}$ in the $A$ and $B$ registers is arbitrary (of course, $m_A, m_B \geq n$) and not taken into account in the cost of the protocol, and neither is the complexity of the $U_i$'s, since local computation is considered free. However, the number of qubits $c$ in the $C$ register is important and is taken into account in the communication cost, which is $N \cdot c$. The outcome of the protocol is obtained by measuring the first qubit of the $C$ register after application of $U_{N + 1}$. The protocol succeeds if the outcome of the measurement is $f(x, y)$ with good probability for any $x, y$ (satisfying the promise).
Another model for quantum communication complexity is the one introduced by Cleve and Buhrman \cite{CB97}. In their model, communication is classical, but parties are allowed to pre-share an arbitrary entangled quantum state at the outset of the protocol. We can view protocols in this model as a modification on those of Yao's model in which the initial state $\ket{\psi}$ on the $ABC$ register is arbitrary except for $n$ qubits in each of the $A, B$ registers initialized to $x, y$ respectively. Also, each qubit in the $C$ register is measured in the computational basis, and it is the outcome of these measurements that is communicated to the other party. Note that by using pseudo-measurements instead of actual measurements in each round, the parties can use quantum communication instead of classical communication. Then the two models become almost identical, except for the initial state, which is arbitrary in the Cleve-Buhrman model, and fixed to the all $0$ state in Yao's model (not including each party's classical input). Since our simulation protocols consider general unitary local processing but do not assume any particular form for the initial state, they work on this slight adaptation of the Cleve-Buhrman model as well as for Yao's model of quantum communication complexity.
\subsection{Quantum Communication Model}
\label{sec:qucomm}
\subsubsection{Noiseless Communication Model}
\label{sec:nslss}
In the \emph{noiseless quantum communication model} that we want to simulate, there are four quantum registers: the $A$ register held by Alice, the $B$ register held by Bob, the $C$ register, which is the communication register exchanged back-and-forth between Alice and Bob and initially held by Alice, and finally the $E$ register, which purifies the initial (and then also the final) state of the $ABC$ registers and might be held by Eve, a potential adversary. The initial state $\ket{\psi_\mathrm{init}}^{ABCE} \in \mathcal{P}$ is chosen arbitrarily from the set of allowed inputs $\mathcal{P}$, and is fixed at the outset of the protocol, but possibly unknown (totally or partially) to Alice and Bob. Note that to allow for composition of quantum protocols in an arbitrary environment, we consider arbitrary quantum states as input, maybe entangled with some reference system $E$. A protocol $\Pi$ is then defined by the sequence of unitaries $U_1^{AC}, U_2^{BC}, \cdots , U_{N + 1}$, with $U_{2i+1}$ known at least to Alice (or given to her in a black box), and $U_{2i+2}$ known at least to Bob (or given to him in a black box). Without loss of generality, we assume $N$ is even: this affects the total cost of communication by at most one communication of the $C$ register. On a particular input state $\ket{\psi_\mathrm{init}}$, the protocol generates the output state $\ket{\psi_\mathrm{final}}^{ABCE} = U_{N + 1} \cdots U_1 \ket{\psi_\mathrm{init}}^{ABCE}$, for which at the end of the protocol the $A$ and $C$ registers are held by Alice, the $B$ register is held by Bob, and the $E$ register is held by Eve. We sometimes also write $\Pi (\ket{\psi_\mathrm{init}})$ for $\Tr{E}{\kb{\psi_\mathrm{final}}{\psi_\mathrm{final}}^{ABCE}}$, and by abuse of notation also represent the induced quantum channel from $ABCE$ to $ABC$ simply by $\Pi$. Since we consider local computation to be free, the sizes of $A$ and $B$ can be arbitrarily large, but still of finite size, say $m_A$ and $m_B$ qubits, respectively. Also, we consider the case of a single-qubit $C$ register, which is the worst case for interaction. This can be done without affecting the cost of communication by more than a factor of two (if a party has to speak when it is not his turn, he sends a $\ket{0}$ qubit), but maybe at the expense of much more interaction. Note however that it is straightforward to apply our results to registers $C$ of arbitrary size. Also note that both Yao's and Cleve-Buhrman's models of quantum communication complexity can be recast in this framework by making all operations coherent: put the initial classical registers into quantum registers, replace classically controlled operations by quantumly controlled operations, also replace measurements by pseudo-measurements, and then replace any classical communication by quantum communication. In particular, this gets rid of the problem of the non-reversibility of measurements, which is especially present in the Cleve-Buhrman model.
We need to embed length $N$ protocols into others of larger length $N^\prime > N$. To perform such \emph{noiseless protocol embedding}, we define some dummy registers $\tilde{A}, \tilde{B}, \tilde{C}$ isomorphic to $A, B, C$, respectively. $\tilde{A}$ and $\tilde{C}$ are part of Alice's scratch register and $\tilde{B}$ is part of Bob's scratch register. Then, for any isomorphic quantum registers $D, \tilde{D}$, let SWAP$_{D \leftrightarrow \tilde{D}}$ denote the quantum unitary that swaps the $D, \tilde{D}$ registers. In a noiseless protocol embedding, for $i \in \{1, 2, \cdots N-1 \}$, we leave $U_i$ untouched. We replace $U_{N}$ by SWAP$_{B \leftrightarrow \tilde{B}} \circ U_{N}$ and $U_{N + 1}$ by SWAP$_{AC \leftrightarrow \tilde{A} \tilde{C}} \circ U_{N + 1}$. Finally, for $i \in \{N+2, N+3, \cdots N^\prime + 1 \}$, we define $U_i = I$, the identity operator.
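The following symbolic Python sketch (the function name \texttt{embed\_protocol} is ours and the unitaries are represented by labels only) shows where the SWAPs with the dummy registers and the padding identities are placed in a noiseless protocol embedding.
\begin{verbatim}
def embed_protocol(N, N_prime):
    """Return labels for the N'+1 unitaries of the embedded protocol."""
    assert N_prime > N
    ops = []
    for i in range(1, N):                      # U_1 ... U_{N-1} untouched
        ops.append("U_%d" % i)
    ops.append("SWAP(B <-> B~) o U_%d" % N)    # U_N followed by Bob's swap
    ops.append("SWAP(AC <-> A~C~) o U_%d" % (N + 1))  # U_{N+1} followed by Alice's swap
    for i in range(N + 2, N_prime + 2):        # U_{N+2} ... U_{N'+1} are identities
        ops.append("I")
    return ops

print(embed_protocol(N=4, N_prime=8))          # 9 = N'+1 labelled operations
\end{verbatim}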
We refer later to the \emph{unidirectional model}; in this noiseless model, we allow for large local registers $A^\prime, B^\prime$ and for a large communication register $C^\prime$ that is used only once, either from Alice to Bob or from Bob to Alice, depending on the protocol. These registers can be further decomposed such that when used for simulation, the $A$ register of the protocol to be simulated is a subsystem of $A^\prime$, $B$ is one of $B^\prime$, and $C$ of $A^\prime$. We also allow for classical registers $X, Y$ held by Alice and Bob, respectively. For concreteness we consider here the case of communication from Alice to Bob; the other case is symmetric. A simulation protocol $U$ in the unidirectional model is defined by two quantum instruments $\M_1^{X A^\prime C^\prime}, \M_2^{Y B^\prime C^\prime}$, and the output of the protocol on input $\ket{\psi} \in \H (A \otimes B \otimes C \otimes E)$ is the $ABC$ subsystem of $\M_2 \M_1 (\ket{\psi})$, and is denoted $U(\ket{\psi})$. By abuse of notation, the induced quantum channel from $ABCE$ to $ABC$ is also denoted $U$.
\subsubsection{Noisy Communication Model}
\label{sec:nqucomm}
There are many possible models for noisy communication. We consider two in particular: one analogous to Yao's model with no shared entanglement but noisy quantum communication, which we call the \emph{quantum model}, and one analogous to Cleve-Buhrman's model with noiseless pre-shared entanglement but noisy classical communication, which we call the \emph{shared entanglement model}. A further variation on the shared entanglement model in which the entanglement is also noisy is considered in section \ref{sec:nsent}. For simplicity, we formally define in this section what we sometimes refer to as alternating communication models, in which Alice and Bob alternately transmit the communication register to each other, and this is the model most of our protocols are defined in. However, somewhat more general models to which our definitions easily adapt are referred to as oblivious communication models, following \cite{BR11}, in which Alice and Bob do not necessarily transmit their messages in alternation, but nevertheless in a fixed order known to all (Alice, Bob and Eve) depending only on the round, and not on the particular input or the actions of Eve.
For the \emph{quantum model}, Alice possesses a local classical-quantum register $X \otimes A^\prime$ in which $X$ is the classical register and the quantum register $A^\prime$ contains five subsystems of interest: to run a noiseless protocol $\Pi$ as a black box, the $A$ and $C_A$ parts correspond to the registers of the noiseless communication protocol, while $\tilde{A}$ and $\tilde{C}_A$ are the corresponding registers defined by the noiseless protocol embedding, and $A^{\prime \prime}$ is some scratch register used for her local quantum computation in the simulation. Similarly, Bob possesses a local classical-quantum register $Y \otimes B^\prime$ in which $Y$ is the classical register and the quantum register $B^\prime$ contains four subsystems of interest: to run $\Pi$ as a black box, the $B$ and $C_B$ parts correspond to the registers of the noiseless communication protocol, while $\tilde{B}$ is the corresponding register defined by the noiseless protocol embedding, and $B^{\prime \prime}$ is some scratch register used for his local quantum computation in the simulation. Eve possesses a local classical-quantum register $Z \otimes E^\prime$ in which $Z$ is the classical register and the quantum register $E^\prime$ contains two subsystems of interest: the $E$ part corresponds to the reference register of the noiseless communication protocol and $E^{\prime \prime}$ is some scratch register used for her local quantum computation in the simulation. A quantum communication register $C^\prime$, of some fixed size $q$ independent of the length $N$ of the protocol to be simulated, is exchanged back-and-forth between Alice and Bob by passing through Eve's hands; it is held by Alice at both the beginning and the end of the simulation protocol. A simulation protocol $Q$ in the quantum model is defined by a sequence of quantum instruments $\M_1^{X A^\prime C^\prime}, \M_2^{Y B^\prime C^\prime}, \cdots, \M_{N^\prime +1}^{X A^\prime C^\prime}$ such that, on input a state $\ket{\psi_{\mathrm{init}}^\prime}^{A^\prime B^\prime C^\prime E^\prime} = \ket{\psi_{\mathrm{init}}}^{A B C_A E} \otimes \ket{0}$, given black-box access to a noiseless protocol $\Pi$, and against an adversary $A^Q$ (which only has to make the simulation fail on some particular protocol, and on some particular input, to characterize the simulation protocol as bad against her) defined by a sequence of quantum instruments $\N_1^{Z E^\prime C^\prime}, \cdots, \N_{N^\prime}^{Z E^\prime C^\prime}$, the protocol outputs the $\tilde{A} \tilde{B} \tilde{C}$ subsystems of
\begin{align}
\rho_{\mathrm{final}} = \M_{N^\prime + 1}^\Pi \N_{N^\prime} \M_{N^\prime}^\Pi \cdots \M_2^\Pi \N_1 \M_1^\Pi (\kb{\psi_{\mathrm{init}}^\prime}{\psi_{\mathrm{init}}^\prime}).
\end{align}
We denote this output by $Q^\Pi(A^Q (\ket{\psi_{\mathrm{init}}}))$, and the induced quantum channel from $ABCE$ to $\tilde{A} \tilde{B} \tilde{C} \cong ABC$ by $Q^\Pi (A^Q)$. The success of the simulation is measured by how close the simulation output state is to the final state of the noiseless protocol on the $ABC$ registers, and is captured by the following definition:
\begin{definition}
A simulation protocol $Q$ in the quantum model of length $N^\prime$ succeeds with error $\epsilon$ at simulating all length $N$ noiseless protocols against all adversaries in some class $\A^{N^\prime}$ if, for all noiseless protocols $\Pi$ of length $N$, for all adversaries $A^Q \in \A^{N^\prime}$, $\| \Pi - Q^\Pi (A^Q) \|_\diamond \leq \epsilon$. The communication rate $R_Q$ of $Q$ is $R_Q = \frac{N}{N^\prime \log{q}}$ for $q \geq 2$ the alphabet size of the communication register $C^\prime$.
\end{definition}
In a random error model (analogous to that studied in quantum information theory, \`a la Shannon), Eve is a non-malicious passive environment, and $\N_i = \N^Q$ for some fixed quantum channel $\N^Q$, and the class $\A^{N^\prime}$ contains a single element $(\N^{C^\prime} \otimes I^{Z \otimes E^\prime})^{\otimes N^\prime}$. For simplicity, we then say that the simulation succeeds over $\N^Q$. In an adversarial error model (analogous to that studied in quantum coding theory, \`a la Hamming), Eve is a malicious adversary who wants to make the protocol fail, and we are interested in particular classes of adversaries which we denote $\A_\delta^Q$ for some parameter $0 \leq \delta < 1$. The class $\A_\delta^Q$ contains all adversaries with a bound $\delta$ on the fraction of communications of the $C^\prime$ register they corrupt, in the following sense: for any reference register $R$ and classical register $X$, for any state $\rho \in \D (Z^{\otimes N^\prime} \otimes E^{\prime \otimes N^\prime} \otimes C^{\prime \otimes N^\prime} \otimes R \otimes X)$ and for all possible classical states $z \in \Z$ of the classical register $Z^{ \otimes N^\prime}$, if the action of some adversary in $\A_\delta^Q$ on $\rho$ is
\begin{align}
(\N_{1}^{E^\prime C^{\prime}} \otimes & \cdots \otimes \N_{N^\prime}^{E^\prime C^{\prime}}) (\rho) = \notag \\
\sum_{x \in \X, z \in \Z} p_{XZ}(x, z) \kb{x}{x}^X & \otimes \kb{z}{z}^{Z^{\otimes N^\prime}} \otimes (\N_{1} (z)^{E^\prime C^{\prime}} \otimes \cdots \otimes \N_{N^\prime} (z)^{E^\prime C^{\prime}}) (\rho(x, z)),
\end{align}
for some channels $\N_i(z)$, quantum states $\rho(x, z)$, and a probability density function $p_{XZ}$, then we must have that the size of $\{i : \N_i(z)^{E^\prime C^\prime} \not= \N_i^\prime (z)^{E^\prime} \otimes I^{C^\prime} \}$ is bounded by $\delta N^\prime$. Note that this allows for adaptive, probabilistic, entangled strategies for Eve, but such that conditioned on any sequence of measurement outcomes $z$ (recorded in the $Z$ registers), at most a $\delta$ fraction of the actions of Eve act non-trivially on the $C^\prime$ register, and so we say that the fraction of error is bounded by $\delta$ for all adversaries in $\A_\delta^Q$.
For the \emph{shared entanglement model}, Alice, Bob and Eve possess local classical-quantum registers split analogously to those in the quantum model. In addition to the entanglement inherent in $\ket{\psi_{\mathrm{init}}}^{ABCE}$, Alice and Bob also share entanglement to be consumed during the simulation in the form of a large state $\ket{\phi}^{T_A T_B}$ with the registers $T_A, T_B$ held by Alice and Bob, respectively. In general, the entanglement registers have a product decomposition $T_A = T_A^1 \otimes \cdots \otimes T_A^{N^\prime}, T_B = T_B^{1} \otimes \cdots \otimes T_B^{N^\prime}$. A classical communication register $C^{\prime \prime}$, of some fixed size $q$ independent of the length $N$ of the protocol to be simulated, is exchanged back-and-forth between Alice and Bob by passing through Eve's hands; it is held by Alice at both the beginning and the end of the simulation protocol. A simulation protocol $S$ in the shared entanglement model is defined by a sequence of quantum instruments $\M_1^{X A^\prime T_A C^{\prime \prime}}, \M_2^{Y B^\prime T_B C^{\prime \prime}}, \cdots, \M_{N^\prime + 1}^{X A^\prime T_A C^{\prime \prime}}$ such that, on input a state $\ket{\psi_{\mathrm{init}}^\prime}^{A^\prime B^\prime C^{\prime \prime} E^\prime} = \ket{\psi_{\mathrm{init}}}^{A B C_A E} \otimes \ket{0}$, given black-box access to a noiseless protocol $\Pi$, and against an adversary $A^S$ defined by a sequence of quantum instruments $\N_1^{Z E^\prime C^{\prime \prime}}, \cdots, \N_{N^\prime}^{Z E^\prime C^{\prime \prime}}$, the protocol outputs the $\tilde{A} \tilde{B} \tilde{C}$ subsystems of
\begin{align}
\rho_{\mathrm{final}} = \M_{N^\prime + 1}^\Pi \N_{N^\prime} \M_{N^\prime}^\Pi \cdots \M_2^\Pi \N_1 \M_1^\Pi (\kb{\psi_{\mathrm{init}}^\prime}{\psi_{\mathrm{init}}^\prime}).
\end{align}
We denote this output by $S^\Pi ( A^S (\ket{\psi_{\mathrm{init}}}))$, and the induced quantum channel from $ABCE$ to $\tilde{A} \tilde{B} \tilde{C} \cong ABC$ by $S^\Pi (A^S)$. The success of the simulation is measured by how close the simulation output state is to the final state of the noiseless protocol on the $ABC$ registers, and is captured by the following definition:
\begin{definition}
A simulation protocol $S$ in the shared entanglement model of length $N^\prime$ succeeds with error $\epsilon$ at simulating all length $N$ noiseless protocols against all adversaries in some class $\A^{N^\prime}$ if, for all noiseless protocols $\Pi$ of length $N$, for all adversaries $A^S \in \A^{N^\prime}$, $\| \Pi - S^\Pi (A^S) \|_\diamond \leq \epsilon$. The communication rate $R_C$ of $S$ is $R_C = \frac{N}{N^\prime \log{q}}$ for $q \geq 2$ the alphabet size of the classical communication register $C^{\prime \prime}$, and the entanglement consumption rate $R_E$ is $R_E = \frac{\log{(\min{(\dim{T_A}, \dim{T_B})})}}{N^\prime \log{q}}$ for $T_A, T_B$ the entanglement registers used for the simulation by Alice and Bob, respectively.
\end{definition}
In a random error model, Eve is a non-malicious passive environment, and $\N_i = \N^S$ for some fixed classical channel $\N^S$, and the class $\A^{N^\prime}$ contains a single element $(\N^{C^{\prime \prime}} \otimes I^{Z \otimes E^\prime})^{\otimes N^\prime}$. For simplicity, we then say that the simulation succeeds over $\N^S$. In an adversarial error model, Eve is a malicious adversary who wants to make the protocol fail, and we are interested in particular classes of adversaries which we denote $\A_\delta^S$ for some parameter $0 \leq \delta < 1$. The class $\A_\delta^S$ contains all adversaries with a bound $\delta$ on the fraction of communications of the $C^{\prime \prime}$ classical register they corrupt, in the following sense: for any reference register $R$ and classical register $X$, for any state $\rho \in \D (Z^{\otimes N^\prime} \otimes E^{\prime \otimes N^\prime} \otimes C^{\prime \prime \otimes N^\prime} \otimes R \otimes X)$ and for all possible classical states $z \in \Z$ of the classical register $Z^{\otimes N^\prime}$, if the action of some adversary in $\A_\delta^S$ on $\rho$ is
\begin{align}
(\N_1^{E^\prime C^{\prime \prime}} & \otimes \cdots \otimes \N_{N^\prime}^{E^\prime C^{\prime \prime}}) (\rho) = \notag \\
\sum_{x \in \X, z \in \Z} p_{XZ}(x, z) \kb{x}{x}^X & \otimes \kb{z}{z}^{Z^{\otimes N^\prime}} \otimes (\N_1 (z)^{E^\prime C^{\prime \prime}} \otimes \cdots \otimes \N_{N^\prime} (z)^{E^\prime C^{\prime \prime}}) (\rho(x, z)),
\end{align}
for some channels $\N_i(z)$, quantum states $\rho(x, z)$, and a probability density function $p_{XZ}$, we must have that the size of $\{i : \N_i(z)^{E^\prime C^{\prime \prime}} \not= \N_i^\prime (z)^{E^\prime} \otimes \Delta^{C^{\prime \prime}} \}$ is bounded by $\delta N^\prime$, for $\Delta$ the noiseless classical channel (in the communication basis) on $C^{\prime \prime}$. Note that this allows for adaptive, probabilistic strategies for Eve, but such that conditioned on any sequence of measurement outcomes $z$ (recorded in the $Z$ registers), at most a $\delta$ fraction of the actions of Eve act non-trivially on the $C^{\prime \prime}$ register, even though she can copy all classical transmissions in the $Z$ registers, and so we say that the fraction of error is bounded by $\delta$ for all adversaries in $\A_\delta^S$.
Note that the adversaries in the quantum and in the shared entanglement models are fundamentally different: in the shared entanglement model, Eve can copy all classical messages and gather the corresponding information to establish her strategy, but she cannot modify Alice or Bob's quantum information, except for what is possible by corrupting their classical communication and by using the information in the quantum register $E$ purifying the input state. By contrast, in the quantum model, she cannot always ``read'' the quantum messages, but she can apply entangled, fully quantum corruptions to the quantum register when she chooses to.
\subsection{Classical Communication Protocols}
\label{sec:clcomm}
Our simulation protocols contain an important classical component. In our setting, we are interested in protocols in which each party sends a message from some message set $[d] = \{1, 2, \cdots, d-1, d \}$ of size $d$ in alternation, for some fixed number of rounds $N^\prime$ (actually, $\frac{N^\prime}{2}$ in our protocols). A round consists of Alice sending a message to Bob and then Bob sending a message back. Parties only have access to some noisy channels, so they need to encode these messages in some way. The codes used to do so in an interactive setting are described in the next subsection. For the moment, let us focus on the actual messages the parties wish to transmit.
In round $i$, Alice transmits a message $a_i \in [d]$ to Bob, and then Bob sends back a message $b_i \in [d]$. These messages depend on the previous messages $a_1, a_2, \cdots, a_{i-1} \in [d]$ and $b_1, b_2, \cdots, b_{i-1} \in~[d]$ Alice and Bob have sent in the previous rounds, respectively. Following \cite{Sch96}, we refer to these sequences of messages (at the end of round $i$) as Alice's state $s_A = a_1 \cdots a_i \in [d]^i$ and Bob's state $s_B = b_1 \cdots b_i \in [d]^i$, respectively. Note that these states are updated in each round, and that each state, at the end of round $i$, can be represented as a node at depth $i$ in some $d$-ary tree of depth $N^\prime$. This tree is called a state tree. The whole (noiseless) communication can be extracted from the information in these two states.
Since the communication is noisy, in some rounds the parties make errors when trying to guess the other party's state. When comparing the actual state $s = s_1 \cdots s_i \in [d]^i$ of a party in round $i$ with the other party's best guess $s^i = s_1^i \cdots s_i^i \in [d]^i$ about that state based on the communication he received up to that point, the least common ancestor of $s$ and $ s^i$ is the node at depth $i-\ell$ such that $s_1 \cdots s_{i-\ell} = s_1^i \cdots s_{i-\ell}^i$ but $s_{i-\ell+1} \not= s_{i-\ell+1}^i$. We call $\ell$ the \emph{magnitude} of the error of such a guess $s^i$, and in general for two states $s, s^i \in [d]^i$ satisfying the above (with least common ancestor at depth $i-\ell$) we write $L(s, s^i) = \ell$. Note that we can compute $\ell$ as $i - \max{ \{t: (\forall j \leq t) [s_j = s_j^i] \}}$.
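The magnitude of an error is straightforward to compute; the following Python sketch (with hypothetical names) implements the formula above.
\begin{verbatim}
def error_magnitude(s, s_guess):
    """Magnitude l = i - max{t : s_1..s_t = s^i_1..s^i_t} of the guess s_guess
    about the true state s, both sequences of messages of the same length i."""
    assert len(s) == len(s_guess)
    i = len(s)
    t = 0
    while t < i and s[t] == s_guess[t]:
        t += 1
    return i - t

# Example: the states agree on the first two messages, disagree from the third on.
print(error_magnitude([1, 3, 2, 2], [1, 3, 1, 2]))   # magnitude 2
\end{verbatim}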
\subsection{Online Classical Codes}
\subsubsection{Tree Codes \cite{Sch96}}
\label{sec:tc}
Standard error correcting codes are designed for data transmission and therefore are not particularly well suited for interactive communication over noisy channels. In his breakthrough papers on interactive communication \cite{Sch93, Sch96}, Schulman defined tree codes, which are particular codes designed for such interactive communication. Indeed, these tree codes can perform encoding and decoding by rounds (following \cite{FGOS12}, we refer to such codes as online codes): in each round, a message from the message set $[d]$ is transmitted, and even if a decoding error occurs in that round, every additional round performed without transmission error makes it more likely that this earlier message is eventually decoded correctly. We describe this property in more detail after formally defining these tree codes. We use the following for our definition. Given a set $A$ and its $k$-fold cartesian product $A^k = A \times \cdots \times A$ ($k$-times), we denote, for any $n \in \mathbb{N}$, $A^{\leq n} = \cup_{k=1}^n A^k$. Also, given a transmission alphabet $\Sigma$ and two words $\bar{e} = e_1 \cdots e_t \in \Sigma^t$ and $\bar{e}^\prime = e_1^\prime \cdots e_t^\prime \in \Sigma^t$ over this alphabet, we denote by $\Delta(\bar{e}, \bar{e}^\prime)$ (the Hamming distance) the number of different symbols, i.e.\ $\Delta(\bar{e}, \bar{e}^\prime) = |\{i: e_i \not= e_i^\prime \}|$.
\begin{definition}
\emph{(Tree codes \cite{Sch96})} Given a message set $[d]$ of size $d > 1$, a number of rounds of communication $N^\prime \in \mathbb{N}$, a distance parameter $0 < \alpha < 1$ and a transmission alphabet $\Sigma$ of size $|\Sigma| > d$, a $d$-ary \emph{tree code} of depth $N^\prime$ and distance parameter $\alpha$ over alphabet $\Sigma$ is defined by its encoding function $E: [d]^{\leq N^\prime} \rightarrow \Sigma$. It must also satisfy the following distance property, called the tree code property, in which we define $\bar{e} = e_1 \cdots e_t = \bar{E}(a), \bar{e}^\prime = e_1^\prime \cdots e_t^\prime = \bar{E}(a^\prime)$:
\begin{align*}
(\forall t \leq N^\prime) (\forall a, a^\prime \in [d]^t)
[L(a, a^\prime) = \ell \rightarrow \Delta(\bar{e}, \bar{e}^\prime) \geq \alpha \cdot \ell],
\end{align*}
in which, given an encoding function $E$, we also define its extension $\bar{E}: [d]^{\leq N^\prime} \rightarrow \Sigma^{\leq N^\prime}$ satisfying
\begin{align*}
(\forall t \leq N^\prime) (\forall a = a_1 \cdots a_t \in [d]^t)
[\bar{E}(a) = E(a_1) E(a_1 a_2) \cdots E(a_1 \cdots a_{t-1}) E(a_1 \cdots a_t) \in \Sigma^t].
\end{align*}
The decoding function $D: \Sigma^{\leq N^\prime} \rightarrow [d]^{\leq N^\prime}$ satisfies
\begin{align*}
(\forall t \leq N^\prime) (\forall \bar{e}^\prime \in \Sigma^t)
[D(\bar{e}^\prime) \in \{a: a \in [d]^t \mathrm{\ minimizes\ } \Delta(\bar{E}(a), \bar{e}^\prime) \}].
\end{align*}
\end{definition}
Note that the decoding function is not uniquely defined for a given tree code: we could avoid ambiguity by outputting a special failure symbol for $D(\bar{e}^\prime)$ whenever $|\{a: a \in [d]^t \mathrm{\ minimizes\ } \Delta(\bar{E}(a), \bar{e}^\prime) \}| > 1$. Also note that we can view tree codes in the following alternate way, connecting them with the state tree representation defined above. Starting with a state tree, we can label the arcs out of each node by a symbol from $\Sigma$ corresponding to the encoding of that path in the tree code. The $\bar{E}$ encoding function represents the concatenation of the symbols on the path from root to node $a$, and the distance property is related to the distance of $a, a^\prime$ to their least common ancestor in the protocol tree, and to the number of errors during these corresponding $L(a, a^\prime)$ last transmissions. The following was proved in \cite{Sch96} about the existence of tree codes:
\begin{lemma}
\label{lem:tccode}
Given a message set $[d]$ of size $d > 1$, a number of rounds of communication $N^\prime \in \mathbb{N}$ and a distance parameter $0 < \alpha < 1$, taking a transmission alphabet $\Sigma$ with $|\Sigma| = 2 \lfloor ( 2 \cdot 2^{H(\alpha)} \cdot d )^{\frac{1}{1-\alpha}} \rfloor -1$ suffices to label the arcs of some tree code, i.e. there exists an encoding function $E$ satisfying the tree code property, and the required alphabet size is independent of $N^\prime$, the number of rounds of communication. Here, $H(\alpha) = -\alpha \cdot \log{\alpha} - (1-\alpha) \cdot \log{(1-\alpha)} $ is the binary entropy function.
\end{lemma}
In fact, the result of Schulman is even stronger: there exists an unbounded depth tree code with $\Sigma$ of the size discussed above. This stronger result could be useful in the case in which the number of rounds $N^\prime$ is not bounded at the beginning of the protocol, and has been used to authenticate streams of classical data in \cite{FGOS12}.
The distance property of tree codes assures us of the following: if in round $t$ the decoding is good for the first $t-\ell$ messages sent $(\ell \geq 0)$, but wrong for the message sent in round $t-\ell+1$ (and possibly also for some other messages), then the reencoding of the sequence of decoded messages must be distinct from the transmitted one in at least $\alpha \cdot \ell$ positions in the last $\ell$ rounds. Then, bad decoding implies that there must have been at least $\frac{1}{2} \cdot \alpha \cdot \ell$ transmission errors during those rounds, independently of what was sent in the first $t-\ell$ rounds. More precisely, given a transmitted message $\bar{a} \in [d]^t$, encoded as $\bar{e} = \bar{E}(\bar{a}) \in \Sigma^t$, received as $\bar{e}^{\prime \prime} \in \Sigma^t$, and decoded as $\bar{a}^\prime = D(\bar{e}^{\prime \prime}) \in [d]^t$, with $\bar{e}^\prime = \bar{E}(\bar{a}^\prime)$, if we have $a_1 \cdots a_{t-\ell} = a_1^\prime \cdots a_{t-\ell}^\prime$ but $a_{t-\ell+1} \not= a_{t-\ell+1}^\prime$, i.e. $L(\bar{a}, \bar{a}^\prime) = \ell$, then $\Delta(\bar{e}, \bar{e}^\prime) \geq \alpha \cdot \ell$ and $\Delta(e_{t-\ell+1} \cdots e_t, e_{t-\ell+1}^{\prime \prime} \cdots e_t^{\prime \prime}) \geq \frac{1}{2} \cdot \alpha \cdot \ell$ (note $e_1 \cdots e_{t-\ell} = e_1^\prime \cdots e_{t-\ell}^\prime$). This property is the one so useful for interactive communication: even if bad decoding of a message is performed in some round, with enough correct transmissions from further rounds, we can later correct that previous error. This property is essential to our analysis of the simulation protocol.
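To make the encoding and decoding operations concrete, here is a toy Python sketch (ours, for illustration only): the arc labelling is chosen at random, so the tree code distance property is \emph{not} guaranteed to hold, but the extension encoding $\bar{E}$ and the brute-force minimum-distance decoding $D$ are exactly as in the definition.
\begin{verbatim}
import itertools, random

d, Sigma, depth = 3, list("abcdefgh"), 4
random.seed(0)
E = {}  # toy arc labelling, sampled lazily; a real tree code needs a construction

def label(prefix):
    key = tuple(prefix)
    if key not in E:
        E[key] = random.choice(Sigma)
    return E[key]

def encode(a):
    """Extension encoding E-bar: concatenate the labels along the path to a."""
    return [label(a[:t + 1]) for t in range(len(a))]

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

def decode(received):
    """Minimum-distance decoding over all words of the same length."""
    t = len(received)
    return min(itertools.product(range(d), repeat=t),
               key=lambda a: hamming(encode(list(a)), received))

a = [0, 2, 1, 1]
e = encode(a)
e[2] = 'z'                       # corrupt one transmitted symbol
print("decoded:", list(decode(e)), "sent:", a)  # recovery is likely, not guaranteed,
                                                # with this toy random labelling
\end{verbatim}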
\subsubsection{Blueberry Codes \cite{FGOS12}}
\label{sec:bbc}
Another kind of online code that we need in order to withstand the highest possible error rates is the randomized error detection code called a blueberry code in \cite{FGOS12}. To use these, Alice and Bob encode and decode messages with a shared secret key in a way that weakly authenticates and encrypts each message, and in this way the adversary Eve cannot apply a corruption of her choosing. Such codes unknown to the adversary were termed private codes in \cite{Lan04}. At best, with some small (but constant) probability she is able to corrupt a message in such a way that Alice and Bob do not detect it and this results in an effective decoding error, but most of the time a corruption of Eve results in an effective erasure decoding. Since the tree code property, and hence also its decoding, is defined in terms of Hamming distance, actual errors are twice as harmful as erasures in the tree decoding (we can view the erasure flag $\perp$ as a special symbol in $\Sigma$ never used in the encoding, but which helps in decoding). Moreover, in rounds in which actual decoding errors arise, parties are not immediately aware of them and on the basis of this wrong information might perform wrong operations on the quantum registers that later need to be corrected, whereas a party immediately realizes when an erasure happens, which prevents him from performing such wrong operations. Hence, concatenating a blueberry encoding on the tree encoding enables significant improvement in the allowed error rates.
These blueberry codes were defined in \cite{FGOS12} for the purpose of authenticating streams of classical messages and for the simulation of interactive classical protocols. The authors gave the following definition for them and proved the following properties:
\begin{definition}
\emph{(Blueberry codes)} For $i \geq 1$ let $B_i : \Gamma \rightarrow \Gamma$ be a random and independent permutation. The \emph{blueberry code} maps a string $e \in \Sigma^t \subset \Gamma^t$ of arbitrary length $t$ to $B(e) = B_1(e_1) B_2(e_2) \cdots B_t(e_t)$. We denote such a code as $B: \Sigma^* \rightarrow \Gamma^*$, and define the erasure parameter of this code as $\beta = 1 - \frac{|\Sigma| - 1}{|\Gamma| - 1}$, and its complement $\epsilon_\beta = 1 - \beta = \frac{|\Sigma| - 1}{|\Gamma| - 1}$.
\end{definition}
\begin{definition}
Assume that at some time $i$, $d_i = B_i (e_i)$ is transmitted and $d_i^\prime \not= d_i$ is received. If $B_i^{-1} (d_i^\prime) \not\in \Sigma$, we mark the transmission as an erasure, and the decoding algorithm outputs $\perp$. Otherwise, this event is called an error.
\end{definition}
\begin{corollary}
Let $e \in \Sigma^t$ and assume $B(e)$ is communicated over a noisy channel. Every symbol corrupted by the channel causes either an error with probability $\epsilon_\beta$, or an erasure with probability $\beta$.
\end{corollary}
\begin{lemma}
\label{lem:bbc}
Assume a blueberry code $B : \Sigma^* \rightarrow \Gamma^*$ is used to transmit a string $e \in \Sigma^t$ over a noisy channel. For any constant $0 \leq c \leq 1$, if the channel's corruption rate is $c$, then with probability $1 - 2^{- \Omega(t)}$ at least a $c(1 - 2 \epsilon_\beta)$-fraction of the transmissions are marked as erasures.
\end{lemma}
\begin{corollary}
If out of $t$ received transmissions, $ct$ were marked as erasures by a blueberry code $B : \Sigma^* \rightarrow \Gamma^*$, then except with probability $2^{- \Omega(t)}$ over the shared randomness, the adversarial corruption rate is at most $c / (1 - 2 \epsilon_\beta)$.
\end{corollary}
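As an illustration of the mechanics of these codes, the following Python sketch (ours; parameters chosen arbitrarily) encodes a word with fresh random permutations acting as the shared secret key, lets an adversary corrupt one transmitted symbol, and shows how decoding flags out-of-alphabet symbols as erasures.
\begin{verbatim}
import random

random.seed(1)
Sigma = list(range(6))            # tree-code alphabet, |Sigma| = 6
Gamma = list(range(16))           # larger transmission alphabet, |Gamma| = 16
beta  = 1 - (len(Sigma) - 1) / (len(Gamma) - 1)   # erasure parameter

def sample_key(t):
    """Shared secret key: one independent random permutation of Gamma per position."""
    return [random.sample(Gamma, len(Gamma)) for _ in range(t)]

def bb_encode(e, key):
    return [key[i][e_i] for i, e_i in enumerate(e)]

def bb_decode(d, key):
    """Invert each permutation; symbols falling outside Sigma are flagged as erasures."""
    out = []
    for i, d_i in enumerate(d):
        x = key[i].index(d_i)
        out.append(x if x in Sigma else "erasure")
    return out

e = [0, 3, 5, 2, 1]
key = sample_key(len(e))
c = bb_encode(e, key)
c[1] = (c[1] + 7) % len(Gamma)    # adversary corrupts one transmitted symbol
print("erasure parameter beta =", beta)
print(bb_decode(c, key))          # the corrupted position is flagged as an erasure
                                  # with probability beta, and miscoded otherwise
\end{verbatim}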
\section{Basic Simulation Protocol}
\label{sec:bas}
We start by describing a basic simulation protocol, which attains the first goal of simulating quantum protocols with asymptotically positive communication and error rates, and constant entanglement consumption rate. This provides an interactive analogue of a family of good quantum codes.
\subsection{Result}
We focus on the shared entanglement model since techniques to distribute entanglement in both random and adversarial error models are well-studied, so we can combine our findings with these entanglement distribution techniques to translate these results into the quantum model. Also, we focus on an adversarial model of error; these results can then be adapted to a random error model. Such extensions of this result to other models of communication are studied in section \ref{sec:oth}. For the basic simulation protocol described in this section, entanglement is only used to teleport the quantum information back-and-forth between the two parties. In section \ref{sec:opt}, we show how to tolerate maximum error rates by also using entanglement to generate a shared secret key unknown to the adversary, thus enabling the two honest parties to detect most adversarial errors as effective erasures.
\begin{theorem}
\label{th:bas}
Given an adversarial channel in the shared entanglement model with a low enough error rate, we can simulate perfectly any noiseless protocol of length $N$ over this channel using a number of transmissions linear in $N$ and consuming a linear number of EPR pairs. More precisely, there exist a constant error rate $\delta > 0$, a communication rate $R_C > 0$, a transmission alphabet size $q \in \mathbb{N}$, and an entanglement consumption rate $R_E \in \mathbb{R}^+$ such that for all noiseless protocol lengths $N \in 2 \mathbb{N}$, there exists a universal simulator $S$ in the shared entanglement model of length $N^\prime$ with communication rate at least $R_C$, transmission alphabet size $q$, and entanglement consumption rate $R_E$, which succeeds with zero error at simulating all noiseless protocols of length $N$ against all adversaries in $\A_\delta^S$.
\end{theorem}
\subsection{Intuition for the simulation protocol}
\label{sec:int}
Before describing in detail the basic simulation protocol, let us first give some intuition on how it succeeds in simulating a noiseless quantum protocol over a noisy channel. The strategy to avoid losing the quantum information in the communication register over the noisy channel is to teleport the $C$ register of the noiseless protocol back and forth into Alice's $C_A$ register and Bob's $C_B$ register, creating a virtual $C$ register which is either in Alice's or in Bob's hand. They use the shared entanglement in $T_A T_B$ to do so, as well as the provided noisy classical channels to transmit their teleportation measurement outcome. Whenever Alice possesses the virtual $C$ register she can try to evolve the simulation of the noiseless protocol by applying one of her noiseless protocol unitaries on the virtual $AC$ register, and similarly for Bob on the virtual $BC$ register. If it is later realized that there has been some error in the teleportation decoding, they might have to apply inverses of these operations, but overall, everything acting on the virtual $ABC$ quantum register can be described as an intertwined sequence of Pauli operators acting on the $C$ register and noiseless protocol unitaries (and their inverses) acting on the $AC$ and the $BC$ registers. There are two important things to notice here. First, the sequence of operations acting on the joint register is a sequence of reversible unitaries acting either only on the $C$ register (for the Pauli operators appearing during teleportation) or on pairs $AC$ or $BC$ of registers (for noiseless protocol unitaries and their inverses). Hence, if the parties can keep track of the sequence of operations on the joint register, at least one of the parties can reverse any of the operations when he is in possession of the virtual $C$ register. Second, both parties know the order in which these operators have been applied while only one knows exactly which one was applied: for Pauli operators, both parties know $\pm X^x Z^z$ is applied at some point, but only one knows for sure the value of $x z \in \{0, 1 \}^2$, and similarly both know $U_j^M$ (with $U_j^{+1} = U_j, U_j^{-1} = U_j^\dagger, U_j^0 = I$) is applied at some point, but only one knows for sure the values of $j \in \{1, \cdots, N^\prime + 1 \}$ and $M \in \{-1, 0, +1 \}$. This is the classical information they try to transmit each other so that both can know exactly the sequence of operations that have acted on the joint register up to some point. The tree codes of Schulman are particularly well suited for noisy communication in this interactive scenario.
More concretely, in each round the parties first need to decode the teleportation before trying to evolve the simulation of the quantum protocol and finally teleporting back the communication register to the other party. We want the parties to be able to know exactly where they are in the simulation of the protocol when they are able to correctly decode the classical messages sent by the other party up to that point. To enable a party to learn exactly what actions were taken by the other party in each previous round, the message set in each round is $\{0, 1 \}^2 \times \{-1, 0, +1\} \times \{0, 1 \}^2$, and messages are encoded with a tree code before being sent. The first pair of bits corresponds to the teleportation decoding operation done at the beginning of a party's turn. Then the trit is associated with the evolution of the noiseless protocol: $+1$ stands for going forward with the protocol (a unitary of the noiseless protocol was applied to the joint state of the party's local register and the communication register); $-1$ stands for going back in the protocol (the inverse of a unitary of the noiseless protocol previously applied by that party to the joint state was performed); $0$ stands for holding the protocol idle (no action is done by that party to evolve the protocol in that round). Note that the index $j$ of the unitary $U_j^M$ a party applies can be computed solely from the sequence of trits sent by that party, and such an explicit calculation is defined in the simulation description. Finally, the last pair of bits corresponds to the outcome of the measurement in the teleportation of the communication register, to enable the other party to correctly decode the teleportation.
For each party, we call his \emph{state} at some point the sequence of these triplets of messages he transmitted up to that point (see section \ref{sec:clcomm}). If a party succeeds in correctly decoding the state of the other party, he then possesses all the information about what operations were applied on the joint quantum register, and can choose his next move accordingly. Note that the information about which Pauli operator was used to decode the teleportation might appear to be redundant, but it is not when there are decoding errors. In such a case, the wrong Pauli operators might be applied to do the teleportation decoding. Even though the party who applied the wrong Pauli operator will realize his mistake later when the tree code enables him to finally decode this message correctly, the other party still needs to be informed that the decoding of the teleportation in that particular round was different from what it should have been based on the initial teleportation measurement outcome. Sending the information about which Pauli operator was used to do the teleportation decoding implicitly provides that information, and even enables the other party to correct this wrong teleportation decoding by himself if need be. We indeed use this property in the simulation.
\subsection{Description of the simulator}
All communication is done with a tree encoding over some alphabet $\Sigma$. To later simplify the analysis, we fix the distance parameter to $\alpha = \frac{39}{40}$. The message set consists of $\{0, 1 \}^2 \times \{-1, 0, +1 \} \times \{0, 1 \}^2 \cong [4] \times [3] \times [4] \cong [48]$, so we take arity $d=48$. Also, taking $N^\prime = 4 (1 + \frac{1}{N}) N$ is sufficient. By Lemma~\ref{lem:tccode}, we know that there exists a $q \in \mathbb{N}$ independent of $N^\prime$ such that an alphabet $\Sigma$ of size $q$ suffices to label the arcs of a tree code of any depth $N^\prime \in \mathbb{N}$. Both parties have already agreed before the protocol begins on such a tree code of depth $N^\prime$ with corresponding encoding and decoding functions $E$ and $D$ (both parties use a different instance of the same tree code to transmit their messages to the other party). Also, we want to tolerate error rate $\delta = \frac{1}{80}$.
The convention we use for the variables of the protocol is the following: on Alice's side, in round $i$, $x_{iAD} z_{iAD} \in \{0, 1 \}^2$ correspond to the bits she uses for the teleportation decoding on the $X$ and $Z$ Pauli operators, respectively, $x_{iAM} z_{iAM} \in \{0, 1 \}^2$ correspond to the bits of the teleportation measurement on the corresponding Pauli operators, $j_{iA} \in \mathbb{Z}$ and $M_{iA} \in \{-1, 0, +1 \}$ correspond respectively to the index of the unitary she is using in round $i$ and to whether she is using $U_{j_{iA}}$, its inverse $U_{j_{iA}}^{-1} = U_{j_{iA}}^\dagger$, or simply applying the identity channel $U_{j_{iA}}^0 = I$ on the quantum register, and the counter $C_{iA}$ keeps track of the sum of all previous messages $M_{\ell A}$, $\ell \leq i$. Similarly, on Bob's side, all the same variables are used, with $A$'s changed to $B$'s. When discussing variables obtained from decoding in round~$i$, a superscript $i$ is added to account for the fact that this decoding might be wrong and could be corrected in later rounds, and similarly when discussing other variables which are round dependent.
The actions Alice and Bob take in round~$i$ are based on the following two representations for the form of the state $\ket{\psi_i}$ of the joint register at the beginning of round~$i$ ($\ket{\psi_1} = \ket{\psi_\mathrm{init}}$) which can be classically computed from the information in their two state trees. The first one can be directly computed as
\begin{align}
\label{eq:psii}
\ket{\psi_i}^{ABCE} =
\prod_{\ell=1}^{i-1} ( X^{x_{\ell BM}} Z^{z_{\ell BM}}
U_{j_{\ell B}}^{M_{\ell B}} Z^{z_{\ell BD}} X^{x_{\ell BD}}
X^{x_{\ell AM}} Z^{z_{\ell AM}}
U_{j_{\ell A}}^{M_{\ell A}} Z^{z_{\ell AD}} X^{x_{\ell AD}} )
\ket{\psi_{\mathrm{init}}}^{ABCE}.
\end{align}
Here, from the state $s_A$ of Alice's state tree, we can directly obtain from the $\ell$th message sent by Alice, for $\ell = 1 \cdots i-1$, the two bits $x_{\ell AD} z_{\ell AD}$ used to decode the teleportation, the trit $M_{\ell A}$ corresponding to the evolution of the protocol performed in round $\ell$, and then the two bits $x_{\ell AM} z_{\ell AM}$ corresponding to the outcome of the teleportation measurement. We then use counters $C_{\ell A}$'s that maintain the sums of the $M_{\ell A}$'s to compute the indices $j_{\ell A}$'s of the noiseless protocol unitaries used by Alice in round~$\ell$: $C_{0A} = 0, C_{\ell A} = C_{(\ell-1)A} + M_{\ell A}, j_{\ell A} = 2 C_{(\ell-1)A} + M_{\ell A}$. Note that $j_{iA}$ depends only on the sequence of messages $M_{1A}, M_{2A}, \cdots , M_{(i-1)A}, M_{iA}$. Similarly, the state $s_B$ of Bob's state tree is used to obtain $x_{\ell BD} z_{\ell BD}, x_{\ell BM} z_{\ell BM}$, as well as $M_{\ell B}$, and to compute $C_{0B} = 0, C_{\ell B} = C_{(\ell-1)B} + M_{\ell B}, j_{\ell B} = 2 C_{(\ell-1)B} + M_{\ell B} + 1$. We define $U_j^M = I$ whenever $j \leq 0$ or $M=0$. Note that if $M \not= 0$ then $j_{\ell A}$ is odd and $U_{j_{\ell A}}^{M}$ acts on Alice's side, $j_{\ell B}$ is even and then $U_{j_{\ell B}}^{M}$ acts on Bob's side. Also note that $j \leq N^\prime + 1$ so the $U_j$'s are well defined by the noiseless protocol embedding described in section \ref{sec:nslss}.
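The computation of the indices $j_{\ell A}$ and $j_{\ell B}$ from the trit sequences is purely classical and can be summarized by the following Python sketch (hypothetical helper name, not part of the protocol).
\begin{verbatim}
def unitary_indices(trits, party):
    """Indices j_l of the noiseless-protocol unitaries U_j^(M_l) applied by a
    party, from its sequence of trits M_l in {-1, 0, +1} (party: 'A' or 'B')."""
    C, js = 0, []
    for M in trits:
        j = 2 * C + M + (1 if party == 'B' else 0)
        js.append((j, M))            # U_j^M, with U_j^0 = I whatever j is
        C += M
    return js

# Alice goes forward twice, is idle once, then undoes her last unitary:
print(unitary_indices([+1, +1, 0, -1], 'A'))   # [(1,+1), (3,+1), (4,0), (3,-1)]
\end{verbatim}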
From this first representation of the form of the state $\ket{\psi_i}$, we can classically compute a second one by recursively cleaning up the first representation. The cleanup is performed by collapsing together as many of the operators as possible (Pauli operators together, $U_\ell$'s with $U_\ell^{-1}$'s) to obtain something in the form:
\begin{align}
\label{eq:coll}
\ket{\psi_{i}}^{ABCE} =
\hat{\sigma}^i \cdot \tilde{U}_{t_i}^i & \cdot \tilde{\sigma}_{t_i}^i \cdot \tilde{U}_{t_i-1}^i \cdot \tilde{\sigma}_{t_i-1}^i \cdots
\tilde{U}_2^i \cdot \tilde{\sigma}_2^i \cdot \tilde{U}_1^i & \cdot \tilde{\sigma}_1^i \cdot U_{r_i} \cdot U_{r_i-1} \cdots U_2 \cdot U_1 \ket{\psi_{\mathrm{init}}}^{ABCE}
\end{align}
with $\hat{\sigma}^i = \pm X^{\hat{x}^i} Z^{\hat{z}^i}, \tilde{\sigma}_\ell^i = X^{x_\ell^i} Z^{z_\ell^i}$ for $\hat{x}^i \hat{z}^i, x_\ell^i z_\ell^i \in \{0, 1 \}^2$, and $\tilde{U}_\ell^i = U_{\ell^\prime}^{\pm 1}$ for some $r_i - 2t_i \leq \ell^\prime \leq r_i + 2t_i$. The rules to be used recursively to perform the cleanup are the following: in the case that $\tilde{\sigma}_\ell^i = I$, we require, if $\ell > 1$, that $\tilde{U}_{\ell}^i \not= (\tilde{U}_{\ell-1}^i)^{-1}$, and if $\ell=1$, that $\tilde{U}_1^i \not= U_{r_i+1}$. This last rule is what determines the cut between $U_{r_i}$ and $\tilde{U}_1^i \tilde{\sigma}_1^i$. The parameter $r_i$ determines the number of noiseless protocol unitaries the parties have been able to successfully apply on the joint register before errors start to arise on it, and the parameter $t_i$ determines the number of errors the parties have to correct before being able to resume the simulation. Note that this is well-defined: there is a unique representation in the form (\ref{eq:coll}) corresponding to any in the form (\ref{eq:psii}).
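The cleanup itself is a purely symbolic computation; the following toy Python sketch (ours, with simplified bookkeeping and global signs ignored) illustrates the collapsing of adjacent Pauli operators and the cancellation of adjacent inverse pairs, and extracts the resulting parameters $r_i$ and $t_i$.
\begin{verbatim}
# Operators are listed in order of application to |psi_init>; each item is either
# a Pauli ("P", x, z) on C or a protocol unitary ("U", j, m) with m in {-1, +1}.
def cleanup(ops):
    out = []
    for op in ops:
        out.append(op)
        changed = True
        while changed and len(out) >= 2:
            changed = False
            a, b = out[-2], out[-1]
            if a[0] == "P" and b[0] == "P":              # collapse adjacent Paulis
                x, z = a[1] ^ b[1], a[2] ^ b[2]
                out[-2:] = [("P", x, z)] if (x, z) != (0, 0) else []
                changed = True
            elif a[0] == "U" and b[0] == "U" and a[1] == b[1] and a[2] == -b[2]:
                out[-2:] = []                            # cancel U_j^m U_j^{-m}
                changed = True
    r = 0                                                # length of clean prefix U_1..U_r
    while r < len(out) and out[r] == ("U", r + 1, +1):
        r += 1
    t = sum(1 for op in out[r:] if op[0] == "U")         # remaining errors to undo
    return out, r, t

ops = [("U", 1, +1), ("U", 2, +1), ("P", 1, 0), ("U", 3, +1),
       ("U", 3, -1), ("P", 1, 0), ("U", 4, +1)]
print(cleanup(ops))     # the U_3 pair and the two X errors cancel; r = 2, t = 1
\end{verbatim}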
To decide which action to take in round $i$, Alice starts by decoding the possibly corrupted messages $f_1^\prime, \cdots, f_{i-1}^\prime \in \Sigma$ received from Bob up to this point to obtain her best guess $s_B^i = D(f_1^\prime, \cdots, f_{i-1}^\prime)$ for the state $s_B$ of his state tree. Along with the state $s_A$ of her state tree, she uses it to compute her best guess of the form (\ref{eq:coll}) of the joint state. If her decoding of Bob's state is good, then she has all the information she needs to compute the form of the joint state $\ket{\psi_i}$. She can then choose the right actions to take to evolve the simulation. She takes the following actions based on the assumption that her decoding is good. If it is not, errors might accumulate on the joint register $ABC$, which she will later have to correct.
Alice's next move depends on whether (she thinks) $t_i=0$ or not. If $t_i=0$, then she wishes to evolve the protocol one round further, if it is her turn to do so. That is, if $r_i$ is even, then she sets $M_{iA} = +1$ to apply $U_{r_i + 1}^{AC}$, but if $r_i$ is odd, Bob should be the next to apply a unitary of the protocol, so she sets $M_{iA} = 0$. If $t_i \not= 0$, then she wishes to correct the last error not yet corrected, if she is the one who applied it. That is, if $\tilde{U}_{t_i}^i = U_{\ell^\prime}^{M^\prime}$ for $\ell^\prime$ odd, then she sets $M_{iA}= - M^\prime \in \{ \pm 1 \}$ (note that in this case it holds that she sets $j_{iA} = \ell^\prime$), else she sets $M_{iA} = 0$ and she hopes Bob will next correct $\tilde{U}_{t_i}^i$. In all cases, with $\hat{\sigma}^i = \pm X^{\hat{x}^i} Z^{\hat{z}^i}$, she sets $x_{iAD} = \hat{x}^i, z_{iAD}=\hat{z}^i$ and computes $C_{iA} = C_{(i-1)A} + M_{iA}, j_{iA} = 2 C_{(i-1)A} + M_{iA}$. Note that she does not care about the irrelevant global phase factor $\pm 1$ appearing in $\hat{\sigma}^i$ during the cleanup from the form (\ref{eq:psii}) to the form (\ref{eq:coll}) because of the fact that the $X$ and $Z$ Pauli operators anticommute.
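The following toy Python sketch (variable names are ours) summarizes this classical decision rule; the index $j_{iA}$ itself is then obtained from the counters as described above.
\begin{verbatim}
# Alice's classical decision in round i, given her current guess (r, t) for the
# collapsed form, the last uncorrected error last_err = (j', M') or None, and the
# Pauli hat_sigma = (x, z) sitting on the C register.
def alice_move(r, t, last_err, hat_sigma):
    if t == 0:
        M = +1 if r % 2 == 0 else 0          # advance with U_{r+1} only on her turn
    else:
        jp, Mp = last_err
        M = -Mp if jp % 2 == 1 else 0        # undo her own last error, else stay idle
    x_dec, z_dec = hat_sigma                 # teleportation/Pauli decoding bits
    return x_dec, z_dec, M

# Simulation is clean after U_1, U_2 (r = 2, t = 0), trivial Pauli on C:
print(alice_move(2, 0, None, (0, 0)))        # (0, 0, +1): she applies U_3
# Her own U_5 must be undone (t = 1), an X error sits on C:
print(alice_move(4, 1, (5, +1), (1, 0)))     # (1, 0, -1): decode X, apply U_5^{-1}
\end{verbatim}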
After this classical preprocessing, she can now perform her quantum operations on the $AC$ registers: she first decodes the teleportation operation (and possibly corrects some other Pauli errors remaining on the $C$ register) by applying $Z^{z_{iAD}} X^{x_{iAD}}$ on the $T_A^{2(i-1)}$ register (note that in round $1$, Alice already possesses the $C$ register so this part is trivial: let $T_A^0 = C_A$ and set $x_{1AD} z_{1AD} = 00$) before swapping registers $T_A^{2(i-1)}$ and $C_A$, effectively putting the virtual $C$ register into $C_A$. She then performs $U_{j_{iA}}^{M_{iA}}$ on the virtual $AC$ register to try to evolve the protocol (or correct a previous error), before teleporting back the virtual $C$ register to Bob using her half of the entangled state in the $T_A^{2i-1}$ register, obtaining measurement outcome $x_{iAM} z_{iAM} \in \{0, 1 \}^2$. She updates her state $s_A$ by following the edge $a_i = (x_{iAD} z_{iAD}, M_{iA}, x_{iAM} z_{iAM})$ in the state tree, and transmits message $e_i = E(a_1 \cdots a_i)$ over the noisy classical channel, with $E$ the encoding function of the tree code.
Upon reception of the message $e_i^\prime$, a possibly corrupted version of $e_i$, Bob obtains his best guess $s_A^i$ for Alice's state $s_A$ by computing, with previous messages $e_1^\prime \cdots e_{i-1}^\prime$, $s_A^i = D(e_1^\prime \cdots e_i^\prime)$. He uses it along with his own state $s_B$ to compute his best guess of the representation of
\begin{align}
( X^{x_{iAM}} Z^{z_{iAM}} U_{j_{iA}}^{M_{iA}} Z^{z_{iAD}} X^{x_{iAD}}) \ket{\psi_i}
\end{align}
analogous to the form in (\ref{eq:psii}), then cleans this up to obtain a representation analogous to (\ref{eq:coll}), and based on this latest representation chooses in the same way as Alice his $x_{iBD} z_{iBD}, M_{iB}$, and then uses $M_{iB}$ to compute $C_{iB}, j_{iB}$. After this classical preprocessing, he can then perform his quantum operations: he first decodes the teleportation operation by applying $Z^{z_{iBD}} X^{x_{iBD}}$ on the $T_B^{2i-1}$ register and by swapping it with $C_B$, creating a virtual $C$ register, then performs $U_{j_{iB}}^{M_{iB}}$ on the virtual $BC$ register to try to evolve the protocol, before teleporting back the virtual $C$ register to Alice using his half of the entangled state in the $T_B^{2i}$ register, and obtains measurement outcome $x_{iBM} z_{iBM}$. He updates his state $s_B$ by following the edge $b_i = (x_{iBD} z_{iBD}, M_{iB}, x_{iBM} z_{iBM})$, and transmits message $f_i = E(b_1 \cdots b_i)$ over the channel. The round completes when Alice receives message $f_i^\prime$, a possibly corrupted version of $f_i$. After the $\frac{N^\prime}{2}$ rounds, Alice and Bob take the particular registers $\tilde{A}, \tilde{B}$ and $\tilde{C}$ specified by the noiseless protocol embedding (see section \ref{sec:nslss}), and use them as their respective outcome for the protocol. If the simulation is successful (and it is if the error rate is below $\frac{1}{80}$), the output quantum state corresponds to the $ABC$ subsystem of $\ket{\psi_{\mathrm{final}}}^{ABCE}$ specified by the original noiseless protocol.
We summarize the protocol as follows: Alice and Bob repeat the following for $i = 1 \cdots \frac{N^\prime}{2}$:
\begin{enumerate}
\item Alice computes $s_B^i = D(f_1^\prime \cdots f_{i-1}^\prime)$, and extracts $b_\ell^i = (x_{\ell BD}^i z_{\ell BD}^i, M_{\ell B}^i, x_{\ell BM}^i z_{\ell BM}^i), \ell=1 \cdots i-1$, her best guess for Bob's messages, and the corresponding $C_{\ell B}^i, j_{\ell B}^i$.
\item Also using $s_A$, she computes her best guess for the form (\ref{eq:coll}) of the state $\ket{\psi_{i}}$ of the joint register, and the corresponding $x_{iAD} z_{iAD}, M_{iA}, C_{iA}, j_{iA}$.
\item She decodes the teleportation by applying $Z^{z_{iAD}} X^{x_{iAD}}$ to register $T_A^{2(i-1)}$ and swaps this with the $C_A$ register.
\item She tries to evolve the simulation by applying $U_{j_{iA}}^{M_{iA}}$ to the $AC_A$ register.
\item She teleports back the $C_A$ register to Bob using entanglement in register $T_A^{2i-1}$ and gets outcomes $x_{iAM} z_{iAM}$.
\item Alice updates her state $s_A$ by following edge $a_i = (x_{iAD} z_{iAD}, M_{iA}, x_{iAM} z_{iAM})$ and transmits message $e_i = E(a_1 \cdots a_i)$ using the channel to Bob, who receives $e_i^\prime$, a possibly corrupted version of $e_i$.
\item Bob computes $s_A^i = D(e_1^\prime \cdots e_i^\prime)$ and also using $s_B$, performs actions on his side analogous to Alice's, first swapping register $T_B^{2i-1}$ with $C_B$, then using the $T_B^{2i}$ register to teleport back the $C_B$ register to Alice, transmits $f_i$, and round $i$ completes upon reception by Alice of $f_i^\prime$, a possibly corrupted version of $f_i$.
\end{enumerate}
After these $\frac{N^\prime}{2}$ rounds, they both extract their protocol outcome from the $\tilde{A} \tilde{B} \tilde{C}$ registers specified by the noiseless protocol embedding.
\subsection{Analysis}
\label{sec:anal}
The analysis is done conditioned on some overall classical state (and in particular, some respective views of Alice and Bob of the transcript) at each round. In particular, if the adversary has an adaptive, probabilistic strategy, we condition on some strategy based on the outcome of her previous measurements. We come back later to this issue.
We define two kinds of rounds: good rounds in which both parties decode correctly the other party's state, and bad rounds in which at least one party makes a decoding error. We define a quantity $P(i) \in \mathbb{Z}$ which increases at least by some (strictly positive) amount in good rounds, and decreases by at most some other (bounded) amount in bad rounds, and such that $P(\frac{N^\prime}{2} +1) \geq N + 1$ implies success of the simulation. Hence, it is sufficient to bound the ratio of good to bad rounds as a function of the error rate to prove the success of the simulation.
Let us now define $P(i)$ more formally. To do so, we use the representation (\ref{eq:coll}) for the form of the quantum state of the joint registers at the beginning of round $i$ (or equivalently, at the end of round $i-1$). Remembering $r_i$ determines the number of noiseless protocol unitaries the parties have been able to successfully apply on the joint register before errors start to arise on it, and $t_i$ determines the number of errors the parties have to correct before being able to resume the simulation, we define
\begin{equation}
P(i) = r_i - 2t_i,
\end{equation}
in which the factor of $2$ in front of $t_i$ is due to the fact that in the worst case all remaining $\tilde{U}_\ell^i$'s are applied by the same party who applied $U_{r_i-1}$ and $\tilde{U}_{t_i}^i = U_{r_i-1-2(t_i-1)}^{-1}$. We now prove the following technical lemma which bounds $P(i)$ as a function of the number of good and bad rounds.
\begin{lemma}
\label{lem:pi}
At the end of round $i$, define
\begin{align*}
N_g^i = |\{j: j \leq i, \mathrm{round\ } j \mathrm{\ was\ good}\}|, \\
N_b^i = |\{j: j \leq i, \mathrm{round\ } j \mathrm{\ was\ bad}\}|.
\end{align*}
Then $P(i + 1) \geq N_g^i - 4 N_b^i$.
\end{lemma}
\begin{proof}
Let us adopt the following notation: for $m \in [\frac{N^\prime}{2}]$, $V_1 = U_1, V_2 = U_3, \cdots V_{m+1} = U_{2m + 1}$, i.e.~the $V_m$'s are the $U_\ell$'s acting on Alice's side, and $W_1 = U_2, W_2 = U_4, \cdots W_m = U_{2m}$, i.e.~the $W_m$'s are the $U_\ell$'s acting on Bob's side. We can then observe the following three facts whenever $t_i \geq 1$ at the end of round $i$, given our way to compute $j_{iA}$ and $j_{iB}$ defined in the protocol description above. Their proofs follow by noting that a statement analogous to the second one holds for the representation (\ref{eq:psii}) and any two consecutive $V$'s (or $W$'s), and that statement still holds at each step of the recursive cleanup until arriving at form (\ref{eq:coll}).
First, looking at the $\tilde{U}$'s acting on Alice's side (if such a $\tilde{U}$ exists), the first one, say $\tilde{U}_{\ell_0^i}^i$ for some $1 \leq \ell_0^i \leq t_i$, satisfies the following: $\tilde{U}_{\ell_0^i}^i \in \{V_{\ell^\prime +1}, V_{\ell^\prime}^{-1} \}$ for $V_{\ell^\prime} = U_{r_i}$ or $U_{r_i - 1}$ (whichever acts on Alice's side). A similar statement holds for Bob with the $W_i$'s.
Second, for any two successive $\tilde{U}$'s acting on Alice's side (if two such $\tilde{U}$'s exist), say $\tilde{U}_{\ell_1^i}^i$ and $\tilde{U}_{\ell_2^i}^i$ for some $\ell_1^i < \ell_2^i$, if $\tilde{U}_{\ell_1^i}^i = V_{\ell^\prime}^{M_1}$ for some $1 \leq \ell^\prime \leq N^\prime$ and $M_1 \in \{\pm 1 \}$, then $\tilde{U}_{\ell_2^i}^i = V_{\ell^\prime + M^\prime}^{M_2}$ for $M^\prime = \frac{M_1 + M_2}{2}$. A similar statement holds for Bob.
Third, the choices of $M_{iA}$ and $j_{iA}$ are good, i.e.~in a good round in which Alice tries to correct the last $\tilde{U}$ acting on her register, say $\tilde{U}_{\ell_3^i}^i = V_{\ell^\prime}^{M_3}$, we indeed have $U_{j_{iA}}^{M_{iA}} = V_{\ell^\prime}^{- M_3}$. Note that the choice of $j_{iA}$ is also good when $t_i = 0$, and similar statements hold for Bob.
Using these facts, we can prove Lemma \ref{lem:pi} by induction. For the base case, $\ket{\psi_1} = \ket{\psi_\mathrm{init}}$, so $P(1) = 0$ and the statement holds trivially. To get a flavor of the argument, let us look at $P(2)$ at the end of round $1$. In round $1$, Alice applies $U_1$ and then teleports. On Bob's side, if there is no decoding error, he applies $U_2$ and teleports back, leaving $\hat{\sigma} U_2 U_1 \ket{\psi_{\mathrm{init}}}$ on the register, so $P(2) = 2 \geq 1 = N_g^1$ in this case ($N_b^1 = 0$); otherwise there is a decoding error and at worst he badly decodes the teleportation and still applies $U_2$, leaving $\hat{\sigma} U_2 \tilde{\sigma} U_1 \ket{\psi_{\mathrm{init}}}$ on the register, so $P(2) = 1 - 2 = -1 \geq -4 = -4 N_b^1$ in this case ($N_g^1 = 0$). For the induction step, given the state $\ket{\psi_{i}}$ at the end of round $i-1$, if the $i$th round is a good round ($N_g^{i} = N_g^{i-1} + 1, N_b^{i} = N_b^{i-1}$), then at least one of Alice or Bob can act on the joint register, and so, by the way they choose their actions and by the above argument, if $t_{i} \geq 1$, then $t_{i+1} \leq t_{i} - 1$ and $r_{i+1} \geq r_{i}$, else $t_{i+1} = t_{i} = 0$ and $r_{i+1} \geq r_{i} + 1$, so in all cases
\begin{align*}
P(i + 1) &= r_{i+1} - 2 t_{i+1} \\
& \geq r_{i} - 2 t_{i} +1 \\
& = P(i) + 1 \\
& \geq N_g^{i-1} - 4 N_b^{i-1} + 1 \\
& = N_g^{i} - 4 N_b^{i}.
\end{align*}
If it is a bad round ($N_g^{i} = N_g^{i-1}, N_b^{i} = N_b^{i-1} + 1$), then the worst that can happen is if both parties apply a wrong unitary and $t_{i+1} = t_{i} + 2, r_{i+1} = r_{i}$ ($t_{i+1} = t_{i} + 1, r_{i+1} = r_{i} - 1$ or $t_{i+1} = t_{i}, r_{i+1} = r_{i} -2$ and others are also possible, but not as bad for $P(i + 1)$) so
\begin{align*}
P(i+1) & = r_{i+1} - 2 t_{i+1} \\
&\geq r_{i} - 2 t_{i} - 4 \\
&= P(i) - 4 \\
&\geq N_g^{i-1} - 4 N_b^{i-1} - 4 \\
& = N_g^{i} - 4 N_b^{i}.
\end{align*}
In all cases, $P(i+1) \geq N_g^i - 4 N_b^i$ which proves our claim.
\end{proof}
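To make the bookkeeping of the induction concrete, the following minimal Python sketch (ours, purely illustrative; the protocol itself involves no such computation) applies the worst-case updates of $(r_i, t_i)$ isolated in the proof and checks the invariant $P(i+1) \geq N_g^i - 4 N_b^i$ on all short sequences of good and bad rounds.
\begin{verbatim}
from itertools import product

def potential_trace(rounds):
    # Worst-case updates from the induction: a good round either repairs one
    # pending error (t -= 1, r unchanged) or advances the simulation (r += 1);
    # a bad round adds at most two pending errors (t += 2, r unchanged).
    r, t = 0, 0
    n_good = n_bad = 0
    for kind in rounds:
        if kind == 'good':
            if t >= 1:
                t -= 1
            else:
                r += 1
            n_good += 1
        else:
            t += 2
            n_bad += 1
        assert r - 2 * t >= n_good - 4 * n_bad   # invariant of Lemma lem:pi
    return r - 2 * t

# exhaustive check over all good/bad patterns of up to 12 rounds
for length in range(1, 13):
    for pattern in product(['good', 'bad'], repeat=length):
        potential_trace(pattern)
print("P(i+1) >= N_g^i - 4 N_b^i holds on all tested traces")
\end{verbatim}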
\begin{corollary}
\label{cor:pi}
If $P(\frac{N^\prime}{2} + 1) \geq N + 1$, then the simulation succeeds.
\end{corollary}
\begin{proof}
For notational convenience, in this proof let $r = r_{\frac{N^\prime}{2} + 1}, t = t_{\frac{N^\prime}{2} + 1}$, and also let the superscript $\frac{N^\prime}{2} + 1$ be implicit in all the $\tilde{U}$'s. The proof of Lemma \ref{lem:pi} also establishes that for two successive indices acting on either Alice's or Bob's side, these do not differ by more than $2$. Also, $P(\frac{N^\prime}{2} + 1) = r - 2t \geq N + 1$ implies $r \geq N + 1 + 2t$ for $t \geq 0$, and in particular we have $r \geq N + 1$. Then we know that after $U_{r}$, the noiseless protocol embedding has already put the $ABC$ registers of the noiseless protocol into safe local registers $\tilde{A}, \tilde{B}, \tilde{C}$, which are never accessed by $U_{N + 2} \cdots U_{N^\prime + 1}$, nor by $X^C, Z^C$. It remains to verify that all $\tilde{U}$'s have indices strictly higher than $N + 1$. But the indices of the $\tilde{U}$'s of Alice decrease by at most two at once, and similarly for Bob, so clearly the worst case is if all $\tilde{U}$'s are for the same party, and are inverses of the noiseless protocol unitaries. Without loss of generality, we consider only this case. If it is the same party who applied $U_r$ who applies all the $\tilde{U}$'s, then $\tilde{U}_1 = U_r^{-1}, \tilde{U}_2 = U_{r-2}^{-1} \cdots \tilde{U}_t = U_{r-2(t-1)}^{-1}$ and $r-2(t-1) > r-2t = P(\frac{N^\prime}{2} + 1) \geq N + 1$, so we are good. Similarly if it is the same party who applied $U_{r-1}$ who applies the $\tilde{U}$'s, then $\tilde{U}_1 = U_{r-1}^{-1}, \tilde{U}_2 = U_{r-3}^{-1}, \cdots \tilde{U}_{t} = U_{r-2t+1}^{-1}$ and $r-2t+1 > r-2t = P(\frac{N^\prime}{2} + 1) \geq N + 1$. In all cases, we are good, and the safe registers $\tilde{A} \tilde{B} \tilde{C}$ to be outputted by the parties hold the $ABC$ subsystem of $\ket{\psi_\mathrm{final}}$ at the end of round $\frac{N^\prime}{2}$ whenever $P(\frac{N^\prime}{2} + 1) \geq N + 1$.
\end{proof}
We now want to show that if the number of errors as a fraction of $N^\prime$ (the total number of classical symbols transmitted over the adversarial channel) is bounded by a particular constant $\delta > 0$, we are then sure that the simulation succeeds. We do this in two steps: we first give a bound on the fraction of bad rounds as a function of the error rate, and then use it to show that below a certain error rate, the simulation succeeds.
The bound on the fraction of bad rounds as a function of the error rate we use follows as a corollary from the more general result in Lemma \ref{lem:optcor}, which we prove in the next section when studying the way to tolerate the highest possible error rates. The result we use here is the following: if the error rate is bounded by $\delta$ (so there are at most $\delta N^\prime$ errors) and the distance parameter of both Alice's and Bob's tree codes is at least $\alpha = 1- \epsilon_\alpha$, then the number of bad rounds $N_b$ is bounded by $N_b \leq 2 \delta N^\prime + \epsilon_\alpha N^\prime = (2 \delta + \epsilon_\alpha) N^\prime$. Note that since we use a standard tree code without an erasure symbol, we could also obtain results with a weaker dependence on $\epsilon_\alpha$ to improve on the (binary) communication rates.
We are now ready to prove that the simulation succeeds with the parameters of our protocol. We have $\epsilon_\alpha = \frac{1}{40}, \delta = \frac{1}{80}, l_r = \frac{N^\prime}{N} = 4 (1 + \frac{1}{N})$, so
\begin{align*}
P(\frac{N^\prime}{2} + 1) &\geq N_g - 4 N_b \\
& = \frac{N^\prime}{2} - 5 N_b \\
& \geq \frac{N^\prime}{2} - 5(2 \delta + \epsilon_\alpha)N^\prime \\
& = N^\prime (\frac{1}{2} - \frac{10}{80} - \frac{5}{40}) \\
& = \frac{1}{4} N^\prime \\
& = N + 1,
\end{align*}
in which the first inequality is from Lemma \ref{lem:pi}, the first equality is by definition of $N_g, N_b$, i.e.~$\frac{N^\prime}{2} = N_g + N_b$, and the second inequality is from our bound on $N_b$ due to Lemma \ref{lem:optcor}. The fact that the simulation succeeds is then immediate from Corollary \ref{cor:pi}.
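The arithmetic of this chain can be checked mechanically. The following sketch (our own naming, exact rational arithmetic) reproduces it for the basic-protocol constants $\epsilon_\alpha = \frac{1}{40}$, $\delta = \frac{1}{80}$ and $l_r = 4(1 + \frac{1}{N})$.
\begin{verbatim}
from fractions import Fraction

def basic_lower_bound(N):
    # Basic-protocol constants and the bound N_b <= (2*delta + eps_alpha) * N'
    # from Lemma lem:optcor.
    eps_alpha = Fraction(1, 40)
    delta = Fraction(1, 80)
    N_prime = 4 * (N + 1)                          # N' = l_r * N with l_r = 4(1 + 1/N)
    N_bad_max = (2 * delta + eps_alpha) * N_prime
    return Fraction(N_prime, 2) - 5 * N_bad_max, N_prime

for N in [2, 10, 100, 10**4]:
    lower, N_prime = basic_lower_bound(N)
    assert lower == Fraction(N_prime, 4) == N + 1  # matches the displayed chain
print("P(N'/2 + 1) >= N + 1 for the basic parameters")
\end{verbatim}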
Note that the simulation protocol does not depend on the particular protocol to be simulated, but only on its length $N$ and the noise parameter of the adversarial channel we want to tolerate. Also note that even if the adversary is adaptive and probabilistic (with adaptive, random choices depending on her measurement outcomes, as allowed by the model), the simulation succeeds no matter her choice of action, as long as the corruption rate is bounded by $\delta$, since then in each branch of the probabilistic computation our analysis holds. We use the definition of the class $\A_\delta^S$ to prove that indeed, the simulation succeeds with zero error.
For $\ket{\psi} \in \H(A \otimes B \otimes C \otimes E \otimes R)$, with $R$ a purifying system of the same size as $A \otimes B \otimes C \otimes E$, we have that
\begin{align*}
(\Pi \otimes I^R) (\ket{\psi}) = \Tr{E}{U_N \cdots U_1 \kb{\psi}{\psi} U_1^\dagger \cdots U_N^\dagger},
\end{align*}
while for $A^S \in \A_\delta^S$,
\begin{align*}
(S^\Pi (A^S) \otimes I^R ) (\ket{\psi}) =
\Tr{ \neg (\tilde{A} \tilde{B} \tilde{C} R) }{\M_{N^\prime + 1}^\Pi \N_{N^\prime} \M_{N^\prime}^\Pi \cdots \M_2^\Pi \N_1 \M_1^\Pi (\kb{\psi}{\psi})},
\end{align*}
in which the $\neg (\tilde{A} \tilde{B} \tilde{C} R)$ subscript argument for the partial trace means that we trace everything except the $\tilde{A} \tilde{B} \tilde{C} R$ registers. But then we can rewrite
\begin{align*}
(S^\Pi (A^S) \otimes I^R ) (\ket{\psi}) =
\sum_{x_T y_T z} p_{X_T Y_T Z} (x_T, y_T, z) &\kb{x_T}{x_T}^{X_T} \otimes
\kb{y_T}{y_T}^{Y_T} \otimes \kb{z}{z}^{Z} \otimes \rho(x_T, y_T, z)
\end{align*}
for $X_T, Y_T$ the registers containing the views $x_T, y_T$ of the transcript as seen by Alice and Bob, respectively, for some quantum states $\rho(x_T, y_T, z)$, and a probability density function $p_{X_T Y_T Z}$. But, by definition of the class $\A_\delta^S$, we have that, conditioned on some classical state $z$ of Eve, $\rho(x_T, y_T, z)$ has suffered at most $\delta N^\prime$ corruptions by Eve, for any possible transcript views $x_T, y_T$, and so, by the above analysis, its $\tilde{A} \tilde{B} \tilde{C} R$ subsystem contains $\Tr{E}{U_N \cdots U_1 \kb{\psi}{\psi} U_1^\dagger \cdots U_N^\dagger}$, a perfect version of $(\Pi \otimes I^R)(\ket{\psi})$ for any views $x_T, y_T$ of the transcripts of Alice and Bob, respectively. Hence, tracing over all subsystems but $\tilde{A} \tilde{B} \tilde{C} R$, we obtain $(\Pi \otimes I^R) (\ket{\psi})$, and the simulation protocol succeeds with zero error at simulating any noiseless protocol of length $N$ against all adversaries in $\A_\delta^S$.
We have thus established the following: with $q = |\Sigma|$ chosen according to Lemma \ref{lem:tccode} (we use a tree code of arity $d=48$ and distance parameter $\alpha = \frac{39}{40}$), $R_C = \frac{1}{l_r \log{q}} = \frac{1}{4 (1 + \frac{1}{N}) \log{q}} \geq \frac{1}{8 \log{q}}$, $R_E = \frac{1}{\log{q}}$ and $\delta = \frac{1}{80}$, we have that for all $N$, there exists a universal simulation protocol in the shared entanglement model which, given black-box access to any two-party quantum protocol of length $N$ in the noiseless model, succeeds with zero error at simulating the noiseless protocol on any input (independent of what is in the purifying register held by Eve) while transmitting $\frac{1}{R_C \log{q}} N$ symbols in an alphabet $\Sigma$ of size $q$ over any adversarial channel with error rate $\delta$, and consuming $\frac{R_E}{R_C} N$ EPR pairs, which proves Theorem~\ref{th:bas}.
\section{Tolerating Maximal Error Rates}
\label{sec:opt}
We show how we can modify the basic protocol described in the last section such that an improved analysis can show that it tolerates up to $\frac{1}{2} - \epsilon$ error rate, for arbitrarily small $\epsilon > 0$, in the shared entanglement model. This is optimal: we also prove that no interactive protocol can withstand an error rate of $\frac{1}{2}$ in that model. More formally, we prove the following results.
\begin{theorem}
\label{th:optobl}
Given any two-party quantum protocol of length $N$ in the noiseless model, no protocol in the shared entanglement model can tolerate an error rate of $\frac{1}{2}$ and succeed in simulating the protocol with lower error than the best unidirectional protocol in the worst case. This result holds in oblivious as well as alternating communication models. More precisely, for all noiseless protocol lengths $N \in \mathbb{N}$, for all communication rates $R_C > 0$, transmission alphabet sizes $q \in \mathbb{N}$, entanglement consumption rates $R_E \geq 0$, for all simulation protocols $S$ in the shared entanglement model with the above parameters, there exist an adversary $A^S \in \A_\frac{1}{2}^S$ and a unidirectional protocol $U$ such that for all noiseless protocols $\Pi$ of length $N$, $\|S^\Pi(A^S) - \Pi \|_\diamond \geq \|U - \Pi \|_\diamond$.
\end{theorem}
\begin{theorem}
\label{th:optach}
Given an adversarial channel in the shared entanglement model with error rate strictly smaller than $\frac{1}{2}$, we can simulate any noiseless protocol of length $N$ with negligible error over this channel using a number of transmissions linear in $N$, and consuming a linear amount of EPR pairs. More precisely, there exists a constant $c > 0$ such that for arbitrarily small $\epsilon > 0$, there exist a communication rate $R_C > 0$, an alphabet size $q \in \mathbb{N}$, and an entanglement consumption rate $R_E \geq 0$ such that for all noiseless protocol lengths $N \in 2 \mathbb{N}$, there exists a universal simulator $S$ in the shared entanglement model of length $N^\prime$ with communication rate $R_C$, transmission alphabet size $q$, entanglement consumption rate $R_E$, which succeeds with error $2^{- c N}$ at simulating all noiseless protocols of length $N$ against all adversaries in $\A_{\frac{1}{2} - \epsilon}^S$.
\end{theorem}
\subsection{Proof of Optimality}
To prove Theorem \ref{th:optobl}, the argument of \cite{FGOS12} in the classical case applies here as well: we only need to notice that if the error rate is $\frac{1}{2}$ with alternating communication in the shared entanglement model, then an adversary can completely corrupt all of the transmissions of either Alice or Bob, at her choosing, say Bob's. In particular, she could replace all of Bob's transmissions by a fixed message, and leave Alice's messages unchanged. But then effectively Bob does not transmit any information to Alice, and this protocol can be simulated in the unidirectional model. Indeed, fix a register $E$, a transmission alphabet $\Sigma$ of size $q$, a noiseless protocol length $N$ and a simulation protocol length $N^\prime$, and take the adversary $A_{\frac{1}{2}}^S$ described above, which maps all transmissions from Bob to Alice to a fixed symbol $e_0 \in \Sigma$. For any simulator $S$ of length $N^\prime$ that tries to simulate a noiseless protocol $\Pi$ of length $N$, we can take $\M_1^U$ to run sequentially all operations of Alice in $S$ while replacing all messages of Bob by $e_0$; the quantum communication from Alice to Bob is then the simulation protocol messages to Bob along with Bob's share of the entanglement in $T_B$, and Bob takes $\M_2^U$ to be the sequential application of all his operations in $S$. This actually simulates $S$ running against adversary $A_\frac{1}{2}^S$ for any noiseless protocol and any input, and the outputs are the same.
Note that the proof also applies in an oblivious model for noisy communication, since in an oblivious model, the order in which the parties speak is fixed by the protocol and does not depend on the input or the actions of the adversary, and then the adversary can choose to disrupt all the messages of the party who communicates at most half the messages. Hence, the proof also extends to the case of oblivious communication, but not necessarily alternating. In such a case, the simulation protocol would also define a function $\mathrm{speak}: [N^\prime] \rightarrow \{A, B \}$ known to all (Alice, Bob and Eve) which tells whose turn it is to speak and is independent of both the input and the actions of Eve.
We can even extend the argument to the case of a speak function which depends on some secret key and is unknown to Eve, so Eve does not always know who is going to speak more often. In that case, Eve can flip a random bit (for example by measuring a $\ket{+}$ state in the computational basis) to decide which party's communication she is going to corrupt (of course, with the reasonable assumption in the case of classical communication that Eve can see who speaks before she decides whether or not to corrupt a message). In this case, the statement is changed to $\|S^\Pi (A^S) - \Pi \|_\diamond$ is bounded away from zero, as can be seen by considering, for increasing $N$, some family of protocols computing, for example, the bitwise parity function of $\frac{N}{2}$ bits output by both parties or the swap function in which Alice and Bob want to exchange their $A, B$ registers. An extension of the argument of the proof of Theorem \ref{th:iidshentopt} shows that the fidelity is also bounded away from $1$ for the case of protocols computing the inner product binary function. To reach the $\frac{1}{2}$ bound on the tolerable error rate, the parties would then need an adaptive strategy which depends on the sequence of errors applied by the adversary. However, this is dangerous in a noisy model: depending on the error pattern, the parties might not agree on whose turn it is to speak, and they could run into synchronisation problems.
\subsection{Proof of Achievability}
\subsubsection{Description of the Simulation}
The proof of achievability is somewhat more involved. It follows ideas similar to those of the basic simulation, but everything must be carefully analysed and optimized. We start by setting up the new notation enabling us to do so. The simulation protocol is essentially the same as the basic one, and the intuition given in section \ref{sec:int} still applies here, but several parameters which were fixed in the basic case now depend on the parameter $\epsilon$. In particular, the distance parameter $\alpha = 1 - \epsilon_\alpha$ now varies, as well as the length of the protocol $N^\prime = l_r N$. Since the parties have access to shared entanglement, they do not need to distribute it at the beginning of the protocol, and they can also use it to generate a secret key unknown to the adversary Eve. The secret key is used to generate a blueberry code with erasure parameter $\epsilon_\beta = \frac{|\Sigma| - 1}{|\Gamma| - 1}$, for $\Sigma$ the tree code alphabet and $\Gamma$ the blueberry code alphabet. Each of the tree code transmission alphabet symbols is then reencoded with the blueberry code before transmission over the noisy channel, and an error caused by the adversary is detected as an erasure with probability $1 - \epsilon_\beta$ (a small illustrative sketch of this layer is given right after the enumeration below). When an erasure is detected by either party in a round, that party does not try to evolve the protocol in that particular round, so the corresponding trit sent is $0$ and the teleportation decoding bits are $00$. Otherwise, the structure of the protocol is mainly unchanged. The summary of the optimized protocol is as follows: Alice and Bob repeat the following for $i = 1 \cdots \frac{N^\prime}{2}$:
\begin{enumerate}
\item Alice decodes the blueberry encoding of Bob's possibly corrupted last transmission: if she detects an erasure, she sets $M_{iA} = 0, x_{iAD} = z_{iAD} = 0$ and $f_{i-1}^\prime = \perp$, and skips to step $4$ below. Else, she decodes the transmission as $f_{i-1}^\prime \in \Sigma$, a possibly corrupted version of Bob's last tree encoding $f_{i-1}$, and continues with step $2$.
\item Alice computes $s_B^i = D(f_1^\prime \cdots f_{i-1}^\prime)$, and extracts $b_\ell^i = (x_{\ell BD}^i z_{\ell BD}^i, M_{\ell B}^i, x_{\ell BM}^i z_{\ell BM}^i), \ell=1 \cdots i-1$, her best guess for Bob's messages, and the corresponding $C_{\ell B}^i, j_{\ell B}^i$.
\item Also using $s_A$, she computes her best guess for the form $\ket{\psi_{i}}$ of the joint register, and the corresponding $x_{i AD} z_{i AD}, M_{iA}, C_{iA}, j_{iA}$.
\item She decodes the teleportation by applying $Z^{z_{i AD}} X^{x_{i AD}}$ to register $T_A^{2(i-1)}$ and swaps this with the $C_A$ register.
\item She tries to evolve the simulation by applying $U_{j_{iA}}^{M_{iA}}$ to the $A C_A$ register.
\item She teleports back the $C_A$ register to Bob using entanglement in register $T_A^{2i-1}$ and gets outcomes $x_{iAM} z_{iAM}$.
\item Alice updates her state $s_A$ by following edge $a_i = (x_{iAD} z_{iAD}, M_{iA}, x_{iAM} z_{iAM})$, computes $e_i = E(a_1 \cdots a_i)$ and transmits the blueberry encoding of $e_i$ using the channel to Bob.
\item Upon reception of a possibly corrupted version of Alice's last transmission, Bob decodes the blueberry code layer: he either detects an erasure and sets $e_i^\prime = \perp$, or else decode the transmission as $e_i^\prime \in \Sigma$, a possibly corrupted version of $e_i$.
\item Bob computes $x_{iBD} z_{iBD}, M_{iB}$ analogously to Alice, depending on whether or not he detects an erasure. If not, he decodes $s_A^i = D(e_1^\prime \cdots e_i^\prime)$ and also uses $s_B$ to compute these. He then performs actions on his side analogous to Alice's, first swapping register $T_B^{2i-1}$ with $C_B$, then using the $T_B^{2i}$ register to teleport back the $C_B$ register to Alice, computes $f_i$ and transmits the blueberry encoding of $f_i$ to Alice. Round $i$ completes upon reception by Alice of a possibly corrupted version of this message.
\end{enumerate}
After these $\frac{N^\prime}{2}$ rounds, they both extract their protocol outcome from the $\tilde{A} \tilde{B} \tilde{C}$ registers specified by the noiseless protocol embedding.
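The following minimal Python sketch (our illustration only; the alphabet sizes and function names are ours, not those fixed by Lemma \ref{lem:tccode}) makes the blueberry layer concrete: a key is a secret injective map from $\Sigma$ to $\Gamma$, any received symbol outside the image of the key is flagged as an erasure, and a Monte Carlo estimate recovers the undetected-corruption probability $\epsilon_\beta = \frac{|\Sigma|-1}{|\Gamma|-1}$.
\begin{verbatim}
import random

SIGMA = list(range(12))      # tree-code alphabet (toy size)
GAMMA = list(range(480))     # blueberry / channel alphabet (toy size)

def sample_key():
    # A secret key is an injective map Sigma -> Gamma: an ordered choice of
    # |Sigma| distinct symbols of Gamma, so |Gamma|!/(|Gamma|-|Sigma|)! keys.
    return random.sample(GAMMA, len(SIGMA))

def encode(key, s):
    return key[s]

def decode(key, g):
    # Decoded Sigma symbol, or None (erasure) if g lies outside the key's image.
    return key.index(g) if g in key else None

trials, undetected = 50_000, 0
for _ in range(trials):
    key = sample_key()
    s = random.choice(SIGMA)
    g = encode(key, s)
    g_bad = random.choice([x for x in GAMMA if x != g])   # a corruption g' != g
    if decode(key, g_bad) is not None:                    # not flagged as erasure
        undetected += 1

eps_beta = (len(SIGMA) - 1) / (len(GAMMA) - 1)
print(f"undetected fraction {undetected / trials:.4f} vs eps_beta {eps_beta:.4f}")
\end{verbatim}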
\subsubsection{Analysis}
\label{sec:analopt}
Similar to section \ref{sec:anal}, the analysis is first carried conditional on some respective views of Alice and Bob of the transcript at each round, but now averaging over the shared secret key used for the blueberry code, and also conditional on some classical state $z$ of the $Z$ register of Eve, and the conclusion holds in that case. In particular, if the adversary has an adaptive and probabilistic strategy, we condition on some strategy consistent with the transcript already conditioned on. We come back later to this issue.
To analyse this protocol, we once again define a function $P(i)$ such that we know the protocol succeeds whenever $P(\frac{N^\prime}{2} + 1) \geq N + 1$. Here also, if we refer to the form of the state $\ket{\psi_i}$ on the joint register $A B C E$ at the beginning of round $i$ (or at the end of round $i-1$) rewritten as in (\ref{eq:coll}), then $P(i) = r_i - 2t_i$ will do. We now have three kinds of rounds: good rounds in which both parties can decode correctly the other party's state, bad rounds in which at least one party makes a decoding error, and the new erasure rounds, in which no party makes an actual decoding error, but at least one of them decodes an erasure from the blueberry code, and so does not try to do anything on the quantum register before teleporting back. We have an analogue of the technical Lemma \ref{lem:pi} and its corollary. The proofs are omitted since they are nearly identical to the proofs in the basic simulation case, the only difference being that a party who detects an erasure does not take any action and consequently does not affect $P(i)$.
\begin{lemma}
\label{lem:erapi}
At the end of round $i$, define
\begin{align*}
N_g^i &= |\{j: j \leq i, \mathrm{round\ } j \mathrm{\ was\ good}\}|, \\
N_b^i &= |\{j: j \leq i, \mathrm{round\ } j \mathrm{\ was\ bad}\}|, \\
N_e^i &= |\{j: j \leq i, \mathrm{round\ } j \mathrm{\ was\ an\ erasure\ round}\}|.
\end{align*}
Then $P(i + 1) \geq N_g^i - 4 N_b^i$.
\end{lemma}
\begin{corollary}
\label{cor:erapi}
If $P(\frac{N^\prime}{2}+1) \geq N + 1$, then the simulation succeeds.
\end{corollary}
Hence, we once again want to bound the ratio of bad to good rounds as a function of the corruption rate to prove the success of the simulation. To do so, we show that depending on a given tolerable error rate $\frac{1}{2} - \epsilon$, we can vary the distance parameter $\alpha = 1-\epsilon_\alpha$ of the tree codes used by Alice and Bob and also the erasure parameter $\beta = 1-\epsilon_\beta$ of the blueberry codes they use, and get this ratio as low as desired (except with negligible probability in the random choice of the shared secret key used for the blueberry code). However, since there is now a third kind of round, we also need to make sure that the ratio of good rounds vs.\ erasure rounds does not get arbitrarily low, so that we can show $P(\frac{N^\prime}{2}+1) \geq N + 1$. We focus on the number $N_g = N_g^{\frac{N^\prime}{2} + 1}$, $N_b = N_b^{\frac{N^\prime}{2} + 1}$ and $N_e = N_e^{\frac{N^\prime}{2} + 1}$ of good, bad and erasure rounds in the whole simulation, respectively. To bound the fraction of bad rounds as a function of the corruption rate, we need the corollary of the following technical lemma, which derives a new bound on tree codes with an erasure symbol. This result only talks about the structure of such codes independently of our application, and so might have applications in a classical interactive coding setting as well.
\begin{lemma}
\label{lem:optcor}
If there is a bound $\delta$ on the fraction of the total number of transmissions $N^\prime$ that are corrupted and not detected as erasures by the blueberry code, then the number $N_b$ of bad rounds in the whole simulation is bounded by $N_b \leq (2 \delta + \epsilon_\alpha) N^\prime$ for $\alpha = 1- \epsilon_\alpha$ the distance parameter of the tree code used by Alice and Bob.
\end{lemma}
\begin{proof}
For any $1 \leq i \leq j \leq \frac{N^\prime}{2}$, let $I_e^A(i, j), I_b^A(i, j), I_g^A(i, j)$ be the subset of rounds $i, i+1, \cdots, j-1, j$ in which the symbol Alice gets from the blueberry decoding is an erasure, an actual error, or the actual non-corrupted transmission, respectively. Note that these are disjoint sets satisfying $I_e^A(i, j) \cup I_b^A(i, j) \cup I_g^A(i, j) = [i,j] = \{i, i+1, \cdots, j-1, j \}$. Similarly, let $J_e^A(i, j), J_b^A(i, j), J_g^A(i, j)$ be the subset of $[i, j]$ in which the sequence of messages Alice gets from the tree decoding corresponds to a failure (note $I_e^A (i, j) \subseteq J_e^A (i, j)$), an actual decoding error, or a correct decoding, respectively. Again note that $J_e^A(i, j) \cup J_b^A(i, j) \cup J_g^A(i, j) = [i,j]$, a disjoint union. We can set up similar notation for Bob with $A$'s replaced by $B$'s, and then we have
\begin{align*}
N_b = |J_b^A (1, \frac{N^\prime}{2}) \cup J_b^B (1, \frac{N^\prime}{2})|,\\
|I_b^A (1, \frac{N^\prime}{2})| + |I_b^B (1, \frac{N^\prime}{2})| \leq \delta N^\prime,
\end{align*}
so the statement we wish to prove is
\begin{align*}
|J_b^A (1, \frac{N^\prime}{2}) \cup J_b^B (1, \frac{N^\prime}{2})| &\leq 2 \delta N^\prime + \epsilon_\alpha N^\prime.
\end{align*}
We prove the following stronger statements:
\begin{align*}
|J_b^A (1, \frac{N^\prime}{2})| &\leq 2 |I_b^A (1, \frac{N^\prime}{2})| + \frac{1}{2} \epsilon_\alpha N^\prime
\end{align*}
and
\begin{align*}
|J_b^B (1, \frac{N^\prime}{2})| \leq 2 |I_b^B (1, \frac{N^\prime}{2})| + \frac{1}{2} \epsilon_\alpha N^\prime.
\end{align*}
Note that everything is symmetric from Alice's and Bob's point of view, so we only prove the statement from Alice's. To lighten the notation, we drop the $A$ superscripts. For any subset $C = \{c_1, \cdots c_{|C|} \}$ of $[\frac{N^\prime}{2}]$ and any two strings $\bar{e} = e_1 \cdots e_t, \bar{e}^\prime = e_1^\prime \cdots e_t^\prime \in \Sigma^t$, define $\Delta_C(\bar{e}, \bar{e}^\prime) = |\{i \in C : i \leq t, e_i \not= e_i^\prime \}|$. Note that with $\bar{C} = [\frac{N^\prime}{2}] \setminus C$, $\Delta (\bar{e}, \bar{e^\prime}) = \Delta_C (\bar{e}, \bar{e}^\prime) + \Delta_{\bar{C}}(\bar{e}, \bar{e}^\prime)$, and $\Delta_C(\bar{e}, \bar{e}^\prime) \leq |C|$.
We are now ready to prove the statement. We prove by induction on $t$ that $|J_b (1, t)| \leq 2|I_b(1, t)| + \epsilon_\alpha t$. The base case is obvious: for $t=1$, if there is no transmission error during the first round, then there is no decoding error, and otherwise $1 \leq 2 + \epsilon_\alpha$. If in round $t$ Alice detects an erasure or decodes correctly, then the induction is trivial. Hence, for the induction step, we consider the case of a bad decoding. Let $\bar{a} \in [d]^t$ be the sequence of transmitted messages, $\bar{e} = \bar{E} (\bar{a}) \in \Sigma^t$ the corresponding sequence of transmissions, $\bar{e}^\prime \in \Sigma^t$ the sequence of possibly corrupted receptions, $\bar{a}^\prime = D(\bar{e}^\prime) \in [d]^t$ the sequence of decoded messages, and $\bar{e}^{\prime \prime} = \bar{E} (\bar{a}^\prime)$ its reencoding. Then, by the decoding condition, $\Delta(\bar{e}^{\prime \prime}, \bar{e}^\prime ) \leq \Delta(\bar{e}, \bar{e}^\prime)$. Let $\ell = L(\bar{a}, \bar{a}^\prime)$ be the distance of $\bar{a}, \bar{a}^\prime$ to their least common ancestor, then $\Delta_{[1, t - \ell]}(\bar{e}^{\prime \prime}, \bar{e}) = 0$. Note that $1 \leq \ell \leq t$. By the induction hypothesis,
\begin{align*}
|J_b (1, t - \ell)| \leq 2|I_b(1, t - \ell)| + \epsilon_\alpha (t - \ell),
\end{align*}
in which we vacuously set $J_b (1, 0) = I_b (1, 0) = \emptyset$. We then have by definition
\begin{align*}
|J_b (1, t)| &= |J_b (1, t - \ell)| + |J_b (t - \ell+1, t)|, \\
|I_b(1, t)| &= |I_b(1, t - \ell)| + |I_b(t - \ell + 1, t)|,
\end{align*}
and $|[t - \ell+1, t]| = \ell$, so we only have to prove
\begin{align*}
|J_b (t - \ell+1, t)| \leq 2 |I_b(t - \ell + 1, t)| + \epsilon_\alpha \ell
\end{align*}
and we are done.
Let $K_\ell = [t - \ell + 1, t]$ and $K_s = \{i \in K_\ell : e_i^{\prime \prime} = e_i\}$, then $|K_\ell| = \ell = \Delta_{K_\ell} (\bar{e}^{\prime \prime}, \bar{e}) + |K_s|$, and in fact $\Delta (\bar{e}^{\prime \prime}, \bar{e}) = \Delta_{K_\ell} (\bar{e}^{\prime \prime}, \bar{e})$ since $\Delta_{[1, t - \ell]} (\bar{e}^{\prime \prime}, \bar{e}) = 0$. But then by the tree code condition, $\Delta_{K_\ell} (\bar{e}^{\prime \prime}, \bar{e}) \geq \alpha \ell$, and since we have $\alpha = 1 - \epsilon_\alpha$, we find $|K_s| \leq \epsilon_\alpha \ell$. If we define, for $v \in \{e, b, g \}$,
\begin{align*}
J_v &= J_v (t - \ell+1, t), \\
I_v &= I_v (t - \ell+1, t), \\
K_d &= \{ i \in K_\ell \setminus ( K_s \cup I_e ) : e_i^\prime \not= e_i \mathrm{~and~} e_i^\prime \not= e_i^{\prime \prime} \}, \\
K_a &= [t] \setminus ( [1, t - \ell] \cup K_s \cup I_e \cup K_d ) \\
&= (I_b \cup I_g) \setminus (K_s \cup K_d),
\end{align*}
then we can notice that, since $\Delta_{[1, t - \ell]} (\bar{e}^{\prime \prime}, \bar{e}) = \Delta_{K_s} (\bar{e}^{\prime \prime}, \bar{e}) = 0, \Delta_{I_e} (\bar{e}^{\prime \prime}, \bar{e}^\prime) = \Delta_{I_e} (\bar{e}, \bar{e}^\prime) = |I_e|$ and $\Delta_{K_d} (\bar{e}^{\prime \prime}, \bar{e}^\prime) = \Delta_{K_d} (\bar{e}, \bar{e}^\prime) = |K_d|$, the decoding condition $\Delta (\bar{e}^{\prime \prime}, \bar{e}^\prime) \leq \Delta (\bar{e}, \bar{e}^\prime)$ is equivalent to $\Delta_{K_a} (\bar{e}^{\prime \prime}, \bar{e}^\prime) \leq \Delta_{K_a} (\bar{e}, \bar{e}^\prime)$. But for all $i \in~K_a$, $e_i^{\prime \prime}~\not=~e_i$ and either $e_i^\prime = e_i^{\prime \prime}$ or $e_i^\prime = e_i$, so that exactly one of the two equalities holds. Hence,
\begin{align*}
\Delta_{K_a} (\bar{e}^{\prime \prime}, \bar{e}^\prime)
&= | \{i \in (I_b \cup I_g) \setminus (K_s \cup K_d) : e_i^\prime = e_i \} | \\
&= |I_g \setminus (K_s \cup K_d)|
\end{align*}
and
\begin{align*}
\Delta_{K_a} (\bar{e}, \bar{e}^\prime) = |I_b \setminus (K_s \cup K_d)|,
\end{align*}
so we can restate the equivalent decoding condition as
\begin{align*}
|I_g \setminus (K_s \cup K_d)| \leq |I_b \setminus (K_s \cup K_d)|.
\end{align*}
We have
\begin{align*}
K_\ell &= J_e \cup J_b \cup J_g \\
&= I_e \cup (I_b \setminus (K_s \cup K_d)) \cup (I_g \setminus (K_s \cup K_d)) \cup K_s \cup K_d,
\end{align*}
so
\begin{align*}
\ell &= |K_\ell| \\
&= |J_e| + |J_b| + |J_g| \\
&\leq |I_e| + |I_b \setminus (K_s \cup K_d)| + |I_g \setminus (K_s \cup K_d)| + |K_s| + |K_d| \\
&\leq |I_e| + 2 |I_b \setminus (K_s \cup K_d)| + |K_s| + |K_d| \\
&\leq |I_e| + 2 |I_b| + |K_s|
\end{align*}
in which we used the fact that $K_d \subseteq I_b$ in the last inequality. But then $|I_e| \leq |J_e|$, $|J_g| \geq 0$ and $|K_s| \leq \epsilon_\alpha \cdot \ell$, so $|J_b| \leq 2 |I_b| + \epsilon_\alpha \ell$, as required.
\end{proof}
\begin{corollary}
\label{cor:optcor}
If the corruption rate satisfies $0 \leq c < \frac{1}{2}$, then except with probability smaller than $2^{- \Omega(N^\prime)}$ for $N^\prime$ the length of the simulation protocol, the total number of bad rounds in the simulation is bounded by $N_b \leq (2 \epsilon_\beta + \epsilon_\alpha) N^\prime$ for $\alpha = 1 - \epsilon_\alpha$ the distance parameter of the tree code and $\beta = 1- \epsilon_\beta$ the erasure parameter of the blueberry code.
\end{corollary}
\begin{proof}
If the transmitted symbol is $g_i \in \Gamma$ after a blueberry encoding $B_i$ (actually, $B_i^A$ or $B_i^B$) and, conditional on the classical state of Eve and based on some measurement outcomes $z_i$, she chooses to corrupt $g_i$ into a different $g_i^\prime \in \Gamma$, this action is independent of the randomness used in $B_i$, and then it holds that Pr$[B_i^{-1} (g_i^\prime) \in \Sigma | z_1, \cdots, z_i] = \epsilon_\beta$. This is independent of the classical state and any measurement outcome $z_i$ of Eve. Then with a corruption rate $c$ bounded by some constants $\epsilon_\beta \leq c < \frac{1}{2}$, the proof of Lemma \ref{lem:bbc} tells us that with probability $1 - 2^{- \Omega (N^\prime)}$ at least a $c (1 - 2 \epsilon_\beta)$-fraction of the transmissions are detected as erasures. But the total number of corruptions is $c N^\prime$, so there are at most $c N^\prime - (c - 2c \epsilon_\beta) N^\prime = 2 c \epsilon_\beta N^\prime < \epsilon_\beta N^\prime$ actual transmission errors, except with probability negligible in $N^\prime$. Taking $\delta = \epsilon_\beta$ in the statement of Lemma \ref{lem:optcor} gives the result. If $0 \leq c \leq \epsilon_\beta$, then the result is immediate from Lemma \ref{lem:optcor} and the total number of corruptions, also with $\delta = \epsilon_\beta$.
\end{proof}
With the above result in hand, we can show that if the corruption rate is below $\frac{1}{2}$ and we take $\epsilon_\alpha = \frac{1}{20} \epsilon, \epsilon_\beta = \frac{1}{40} \epsilon, l_r = \frac{N^\prime}{N} \geq \frac{2}{\epsilon} (1 + \frac{1}{N})$, then except with negligible probability, the simulation succeeds:
\begin{align*}
P(\frac{N^\prime}{2} + 1) &\geq N_g - 4 N_b \\
& = \frac{N^\prime}{2} - N_e - 5 N_b \\
& \geq \epsilon N^\prime - 5 N_b \\
& \geq \epsilon N^\prime - 5(2 \epsilon_\beta + \epsilon_\alpha)N^\prime \\
& = N^\prime (\epsilon - \frac{10}{40} \epsilon - \frac{5}{20} \epsilon) \\
& = \frac{1}{2} \epsilon N^\prime \\
& = \frac{1}{2} \epsilon l_r N \\
& \geq N + 1.
\end{align*}
The first inequality is from Lemma \ref{lem:erapi}, the first equality is by definition of $N_g, N_b, N_e$, i.e.~$\frac{N^\prime}{2} = N_g + N_b + N_e$, the second inequality is from the fact that the number of erasure rounds is bounded by the number of corruptions, i.e.~$N_e \leq (\frac{1}{2} - \epsilon) N^\prime$, and the third inequality is from our bound on $N_b$ due to Corollary \ref{cor:optcor}, which holds except with negligible probability. The fact that the simulation succeeds is then immediate from Corollary \ref{cor:erapi}.
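As in the basic case, this chain can be verified mechanically; the sketch below (our naming, exact rational arithmetic, under the parameter choices just stated) does so.
\begin{verbatim}
from fractions import Fraction

def optimized_lower_bound(N, eps):
    # eps_alpha = eps/20, eps_beta = eps/40, l_r = (2/eps)(1 + 1/N), and at most
    # (1/2 - eps) N' corruptions, hence at most that many erasure rounds.
    eps_alpha, eps_beta = eps / 20, eps / 40
    N_prime = (2 / eps) * (N + 1)
    N_e_max = (Fraction(1, 2) - eps) * N_prime
    N_b_max = (2 * eps_beta + eps_alpha) * N_prime
    return N_prime / 2 - N_e_max - 5 * N_b_max, N_prime

for N in [2, 100, 10**4]:
    for eps in [Fraction(1, 10), Fraction(1, 100)]:
        lower, N_prime = optimized_lower_bound(N, eps)
        assert lower == eps * N_prime / 2 == N + 1
print("P(N'/2 + 1) >= N + 1 for the optimized parameters")
\end{verbatim}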
The above statement holds conditioned on some classical state $z$ of the $Z$ register of Eve, and some respective views of Alice and Bob of the transcript at each round. To prove Theorem~\ref{th:optach}, we have to argue, similarly to what is done in section \ref{sec:anal}, how to translate these results to the state output by the protocols, even when we consider inputs entangled with some reference register $R$. We do not repeat this whole analysis here, since it is nearly identical once we make the following note. An arbitrary Eve fitting in the framework of the shared entanglement model could have adaptive, probabilistic behavior based on previous measurement outcomes. But these probabilistic choices must be independent of the secret key generated by Alice and Bob for the blueberry code, so similarly to section \ref{sec:anal}, for each probabilistic choice of Eve, the above result holds, so summing over all such choices, the result stays the same, proving Theorem~\ref{th:optach}.
\section{Results in Other Models}
\label{sec:oth}
By adapting the results we have obtained in the shared entanglement model for an adversarial error model, we can obtain many other interesting results. We first complete our study of the shared entanglement model with results in a random error setting. We then consider the quantum model and obtain results for both adversarial and random error settings. We also present a result hinting at the fact that the standard forward quantum capacity of the quantum channels used might not be the quantity that is best suited for our interactive communication scenario. We also consider a variation on the shared entanglement model in which, along with the noisy classical communication, the shared entanglement is also noisy.
\subsection{Shared Entanglement Model with Random Errors}
\begin{theorem}
\label{th:iidshent}
Given a two-party quantum protocol of length $N$ in the noiseless model and any $C > 0$, there exists a simulation protocol in the shared entanglement model that is of length $O(\frac{1}{C} N)$ and succeeds in simulating the original protocol with negligible error over classical binary symmetric channels of capacity $C$. More precisely, there exist constants $c, l_r > 0$ such that given any classical binary symmetric channel $\M_C^S$ of capacity $C > 0$ and noiseless protocol length $N \in 2 \mathbb{N}$, there exists a universal simulator $S$ in the shared entanglement model of length $N^\prime$ with communication rate $R_C \geq l_r C$, transmission alphabet of size $2$, entanglement consumption rate $R_E \leq 1$, which succeeds with error $2^{- c N}$ at simulating all noiseless protocols of length $N$ over $\M_C^S$.
\end{theorem}
\begin{theorem}
\label{th:iidshentopt}
There exists a sequence of two-party quantum protocols of increasing length $N$ in the noiseless model such that for all $C > 0$, any corresponding sequence of simulation protocols of length $o(\frac{1}{C} N)$ in the shared entanglement model fails at outputting the final state with low error on some input over classical binary symmetric channels of capacity $C$. Moreover, the family of quantum protocols can be chosen to be one computing a distributed binary function. More precisely, there exists a sequence $\{\Pi_N \}_{N \in 2 \mathbb{N}}$ of two-party quantum protocols and constants $d, \epsilon > 0$ such that for all $N_0 \in \mathbb{N}$, there exist $N \geq N_0$ and $C > 0$ such that for any $R_E \geq 0$ and any simulation protocol $S$ in the shared entanglement model of length $N^\prime = \frac{d}{C} N$ with communication rate $R_C = \frac{N}{N^\prime}$ and arbitrary entanglement consumption rate $R_E$, the simulation does not succeed with error $\epsilon$ over the binary symmetric channels.
\end{theorem}
\subsubsection{Discussion About Optimality}
The above results show that, in the regime where we use binary symmetric channels of classical capacity close to $0$, we cannot expect to do much better than what we achieve, up to a multiplicative constant in front of the $\frac{1}{C}$ dilation factor. If we want to perform better in that regime, we would have to use the specifics of the operations implemented by the noiseless protocol instead of just using it as a black-box, even if we are restricting to protocols computing binary functions. We could however hope to be able to get much better hidden constants, since ours do not match the case of one-way communication in which the constant can be made arbitrarily close to $\frac{1}{2}$ as the quantum message size increases. Another regime of interest would be for channels of capacity close to $1$, in which our techniques dilate the length of the protocols by a large multiplicative constant even when the error rate is low. In the classical case, recent results of Kol and Raz \cite{KR13} show how to obtain communication rates going to $1$ as the capacity goes to $1$. Using our representation for quantum protocols, we are able to adapt their techniques with ideas similar to those used here to obtain comparable results in the shared entanglement model (up to a factor of $2$ for teleportation), and this result will appear in a forthcoming paper.
\subsubsection{Proof of Theorem \ref{th:iidshent}}
In \cite{Sch96}, it is stated that, given a transmission alphabet $\Sigma$ and a desired bound $\epsilon$ on the probability of transmission error, there exists a $d > 0$ such that, given a binary symmetric channel $\M_C$ of capacity $C$, there are a $p \in \mathbb{N}$ with $p \leq d \frac{1}{C}$, an encoding function $E: \Sigma \rightarrow \{0, 1 \}^p$ and a decoding function $D: \{0, 1 \}^p \rightarrow \Sigma$ such that Pr$[D( \M_C (E (e))) \not= e] \leq \epsilon$. We use this in conjunction with the result of Theorem \ref{th:bas} and the Chernoff bound to obtain the following result. Take $\epsilon = \frac{1}{90} < \frac{1}{80}$, take $\Sigma$ given by Lemma \ref{lem:tccode} for a tree code of arity $48$ and distance parameter $\alpha = \frac{39}{40}$, and take the corresponding $d > 0$; given a binary symmetric channel of capacity $C$ and the corresponding $p \in \mathbb{N}, E$ and $D$, reencode all the $\Sigma$ transmissions of the basic simulation protocol over $\{0, 1 \}^p$ with $E$ (and decode with $D$). Then, except with probability $2^{- \Omega(N^{\prime \prime})}$, for $N^{\prime \prime} = 4 (1 + \frac{1}{N}) N$ the length of the basic simulation protocol over alphabet $\Sigma$, $N^\prime = p N^{\prime \prime}$ the length of the oblivious simulation protocol over the binary symmetric channel, and $N$ the length of the noiseless protocol to be simulated, the error rate for transmission of $\Sigma$ symbols is below $\frac{1}{80}$, and then by Theorem \ref{th:bas} the simulation succeeds.
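The concentration step above can be made concrete with the standard Chernoff bound $\Pr[\mathrm{Bin}(N^{\prime\prime}, p) \geq a N^{\prime\prime}] \leq e^{-N^{\prime\prime} D(a \| p)}$; the following sketch (ours, with constants that are not optimized) evaluates the exponent for $a = \frac{1}{80}$ and $p = \frac{1}{90}$, confirming the $2^{-\Omega(N^{\prime\prime})}$ failure probability.
\begin{verbatim}
from math import log, exp

def kl(a, p):
    # binary relative entropy D(a || p), in nats
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

a, p = 1 / 80, 1 / 90   # tolerated symbol-error fraction vs. per-symbol error probability
exponent = kl(a, p)
assert exponent > 0     # strictly positive, hence failure probability 2^{-Omega(N'')}

for N2 in [10**5, 10**6, 10**7]:
    # Pr[# corrupted Sigma symbols >= N''/80] <= exp(-N'' * D(1/80 || 1/90))
    print(N2, exp(-N2 * exponent))
\end{verbatim}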
\subsubsection{Proof of Theorem \ref{th:iidshentopt}}
It is known that for a classical discrete memoryless channel such as the binary symmetric channel, entanglement-assistance cannot increase the classical capacity \cite{BSST02}, and it is also known that allowing for classical feedback does not increase the classical capacity either. However, we might hope that allowing for both simultaneously might lead to improvements. This is not the case: classical feedback augmented by shared entanglement can be seen to be equivalent to quantum feedback, and it is also known that for discrete memoryless quantum channels, the classical capacity with unlimited quantum feedback is equal to that with unlimited entanglement assistance \cite{Bow04}. Hence, in the shared entanglement model, the classical capacity of the binary symmetric channels used is not increased by the entanglement assistance and the other binary symmetric channel's feedback. It is clear that for some protocols of length $N$ fitting our general framework in the noiseless model, like those accomplishing a quantum swap function or even a classical swap or bitwise XOR functions on inputs of size $\frac{N}{2}$, the parties must effectively exchange their whole inputs to output their final state. Hence, a dilation factor proportional to the inverse of the capacity $\frac{1}{C}$ is necessary since these protocols are equivalent to a communication of $\frac{N}{2}$ bits or qubits in each direction. What we want to prove is even stronger: there exists a family of distributed binary functions such that this is necessary. We consider the inner product function $IP_n: \{0, 1 \}^n \times \{0, 1 \}^n \rightarrow \{0, 1 \}, IP_n (x, y) = \oplus_{i=1}^n x_i \wedge y_i$, which has been proved to have communication complexity of $\Theta (n)$ in both Yao's and Cleve-Buhrman's quantum communication complexity models \cite{CvDNT98}.
But we know from \cite{CvDNT98} that any black-box protocol evaluating coherently the $IP_n$ function with small error can be used to transmit $n$ bits of classical information with small probability of error, and that any non-coherent unitary protocol to compute a classical function can be made coherent by doubling the amount of quantum communication (make a pseudo-copy of the output, and then run the protocol backward to get rid of the junk) while keeping the error parameter small. Hence, these protocols can be used to transmit $n$-bit strings over a channel of classical capacity $C$ with some small probability of failure, and consequently, for small enough error, this requires at least $\frac{1}{C} n$ uses of the channel. Since, for any small enough error, the communication complexity of $IP_n$ is $\Theta (n)$, we have $N \in \Theta (n)$ (if the protocol does not waste communication), and $N^\prime \in \Omega (\frac{1}{C} n) = \Omega (\frac{1}{C} N)$ is required for the simulation to succeed. Note that we have made the reasonable assumption that we can run the simulation backward over the noisy channel at the same communication cost (or else that we start with a coherent protocol for the inner product function; the restriction of having the protocol compute the function in a coherent way is natural if we want to compose our quantum simulation protocols, since then they may be called on arbitrary quantum inputs). Details will appear in a future version of this work.
\subsection{Quantum Model with Adversarial Errors}
\begin{theorem}
\label{th:advstand}
Given an adversarial channel in the quantum model with error rate strictly smaller than $\frac{1}{6}$, we can simulate any noiseless protocol of length $N$ over this channel using a number of transmissions linear in $N$. More precisely, there exists a constant $c > 0$ such that for arbitrarily small $\epsilon > 0$, there exist a communication rate $R_C > 0$ and an alphabet size $q \in \mathbb{N}$ such that for all noiseless protocol lengths $N \in 2 \mathbb{N}$, there exists a universal simulator $S$ in the quantum model of length $N^\prime$ with communication rate at least $R_C$, transmission alphabet size $q$, which succeeds with error $2^{- c N}$ at simulating all noiseless protocols of length $N$ against all adversaries in $\A_{\frac{1}{6} - \epsilon}^Q$.
\end{theorem}
\subsubsection{Discussion About Optimality}
If we consider only perfect quantum error correcting codes for quantum data transmission, it is known that we cannot tolerate error rates of more than $\frac{1}{4}$ asymptotically, and so with the approach of first distributing entanglement and then using our $\frac{1}{2} - \epsilon$ error rate protocol, this leads to overall tolerable error rates for the simulation of less than $\frac{1}{6}$. However, Cr\'epeau, Gottesman and Smith \cite{CGS05} showed how to tolerate up to $\frac{1}{2}$ error rate asymptotically for data transmission if we consider approximate quantum error correcting codes, and using these would lead to $\frac{1}{4} - \epsilon$ tolerable error rate for a two phase simulation protocol as described above. However, their register size as well as the number of communicated registers are linear in the number of transmitted qubits, so for our purpose using these would lead to communication rates of $0$ asymptotically. It would be interesting to see whether we can do something similar with register size independent of the transmission size, but possibly dependent on the fidelity we want to reach and how close to $\frac{1}{2}$ (or some other fraction strictly larger than $\frac{1}{4}$) we want to get. Using these kinds of codes, if we want to do the simulation in two steps, an entanglement distribution part and then an actual simulation part, this is the best we can do. To tolerate higher error rates than what we achieve, we might hope to develop a fully quantum analogue of tree codes that does not require to first distribute entanglement before it can be used while robustly transmitting quantum information. However, to be able to coherently apply the noiseless protocol unitaries in the simulation, the developed quantum codes would require some properties for fault-tolerant computation, a problem not present in the classical case, since we can copy classical information and perform the computation on the copy. Finally, note that the proof of Theorem \ref{th:optobl} also establishes that the bound of $\frac{1}{2}$ on the maximum error rate tolerable in an oblivious communication model applies here as well: no simulation protocol in the quantum model can succeed with arbitrarily small error against all adversaries in $\A_\frac{1}{2}^Q$.
\subsubsection{Proof of Theorem \ref{th:advstand}}
The approach we take in the quantum model is to emulate the approach in the shared entanglement model by first using the provided quantum channels to distribute sufficient entanglement, and then by using them effectively as classical channels along with the entanglement to run the simulation protocol of section \ref{sec:opt}. Let us look at the parameters of the quantum error correcting codes (QECCs) we use to distribute entanglement.
For a given $\epsilon > 0$, let $s = \frac{(|\Gamma|!)}{(|\Gamma|-|\Sigma|)!}$ be the size of the shared secret key used to do the blueberry encoding in each round, so that in each round, two maximally entangled states of size $2 s$ (i.e.~states of the form $\sum_{j=0}^{2 s - 1} \ket{j}^{T_A} \ket{j}^{T_B}$) are used to generate the secret keys required in the protocol of section $\ref{sec:opt}$ and to create the EPR pairs required for teleportation. Then, for any given communication register size $q$ and simulation protocol in the shared entanglement model of length $N^\prime$, we need to distribute a maximally entangled state of $ N^\prime \log_q (2s)$ registers of that size to perform the whole protocol of section $\ref{sec:opt}$.
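To fix ideas on this entanglement bookkeeping, the short sketch below (toy alphabet and register sizes of our own choosing, not the ones fixed by Lemma \ref{lem:tccode}) computes the key-space size $s$ and the register count $N^\prime \log_q(2s)$, ignoring rounding.
\begin{verbatim}
from math import factorial, log

def blueberry_key_space(sigma_size, gamma_size):
    # number of injective maps Sigma -> Gamma: |Gamma|! / (|Gamma| - |Sigma|)!
    return factorial(gamma_size) // factorial(gamma_size - sigma_size)

def entangled_registers(N_prime, q, sigma_size, gamma_size):
    # N' * log_q(2s) registers of dimension q (rounding ignored)
    s = blueberry_key_space(sigma_size, gamma_size)
    return N_prime * log(2 * s, q)

# toy sizes, for illustration only
print(entangled_registers(N_prime=1000, q=500, sigma_size=12, gamma_size=480))
\end{verbatim}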
If we allow, in the entanglement distribution phase, for $l_c N^\prime$ transmissions of registers of size $q$ from Alice to Bob, we want quantum error correcting codes on a register of size $q$, a transmission rate $R_Q = \frac{1}{l_c} \log_q (2 s)$, and some corresponding maximum tolerable error rate $\delta$. We only consider here exact quantum error correcting codes, but the analysis extends to approximate ones for which we might also allow for some deviation from perfect transmission. To choose $q, l_c$ and $\delta$, we first note that in the actual simulation part we need to transmit classical messages chosen from a set of size $|\Gamma|$ over the same quantum channel used to distribute entanglement, so a first constraint is $q \geq |\Gamma|$. Then, to make sure that the simulation succeeds in the second part, the total number of corruptions should be bounded by $(\frac{1}{2} - \epsilon) N^\prime = \frac{N^\prime}{2} - \epsilon N^\prime$. Hence, since an adversary could choose to put all of her allowed corruptions in the first part instead of the second part, the QECC should also be able to recover from the same number of errors, $\frac{N^\prime}{2} - \epsilon N^\prime$. If the QECC can tolerate an error rate of $\delta$, we need the length of the entanglement distribution part to satisfy $l_c \geq \frac{1 - 2 \epsilon}{2 \delta}$, and then the whole simulation protocol can tolerate $\frac{N^\prime}{2} - \epsilon N^\prime$ adversarial errors during a total of $(l_c + 1) N^\prime$ communications, i.e.~it can tolerate an error rate of $\frac{1 - 2\epsilon}{2 (l_c + 1)}$ (note that if we restrict ourselves to an alternating instead of oblivious communication model, a factor of $2$ appears in front of $l_c$ due to the fact that the adversary can choose to corrupt the transmissions of one particular party during the entanglement transmission phase, but there is now twice as much communication during that phase).
We now use a high-dimensional quantum Gilbert-Varshamov bound \cite{AK01, FM04} stating that for arbitrarily small $\epsilon^\prime > 0$, there exists strictly positive communication rate $R_Q > 0$ and large enough transmission alphabet size such that families of quantum codes of arbitrarily large length exist which can tolerate a fraction $\frac{1}{4} - \epsilon^\prime$ of errors and still perfectly correct the quantum state. We use these and the above analysis to tolerate error rate $\frac{1}{6} - \epsilon$ for our simulation protocols (this result is obtained in an oblivious model of communication; in an alternating model of communication, we are able to tolerate error rates of $\frac{1}{10} - \epsilon$).
Taking $l_c = 2 \frac{(1 - 2 \epsilon)}{(1 - 4 \epsilon)}$ for $0 < \epsilon < \frac{1}{4}$ and choosing a $q$ large enough as a function of $\epsilon$ such that $R_Q$ is low enough for a QECC with the required parameters to exist, Alice uses her first $l_c N^\prime$ transmissions to distribute perfect entanglement to Bob with the above QECC. By the above analysis, since the overall error rate is bounded by $\frac{1}{6} -\epsilon$, the error rate in the entanglement distribution phase is bounded by $\frac{1}{4} - \epsilon$ and the QECC can perfectly recover from this error rate and produce perfect entanglement. They then share enough entanglement to run the simulation of section \ref{sec:opt}. During the simulation phase, before transmission and after reception of an element of $\Gamma$ through the channel, both the sender and the receiver measure the quantum communication register. These measurements have the effect of transforming all possible quantum actions of Eve into effectively classical actions. Indeed, conditioned on the results of the two measurements, the corresponding branches of the simulation proceed exactly as if the sender and the receiver had transmitted and received classical information over a classical channel, and doing so restricted the action of Eve into an essentially classical one. Moreover, if $q$ is larger than $|\Gamma|$ and Eve maps some of these classical messages outside of the span of $\Gamma$, Alice and Bob only have to mark these as erasures so Eve does not gain anything by leaving the span of $\Gamma$.
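The arithmetic behind the $\frac{1}{6} - \epsilon$ figure can be checked directly; the sketch below (our naming, taking $\epsilon^\prime = \epsilon$ in the Gilbert-Varshamov bound so that $\delta = \frac{1}{4} - \epsilon$) verifies both the constraint on $l_c$ and the claimed overall tolerable error rate.
\begin{verbatim}
from fractions import Fraction

def two_phase_error_rate(eps):
    # QECC error rate delta = 1/4 - eps; entanglement-distribution phase of
    # length l_c * N' with l_c = 2(1 - 2*eps)/(1 - 4*eps), as chosen above.
    delta = Fraction(1, 4) - eps
    l_c = 2 * (1 - 2 * eps) / (1 - 4 * eps)
    # the distribution phase must absorb up to (1/2 - eps) N' errors
    assert l_c * delta == (1 - 2 * eps) / 2
    # overall tolerable error rate over the (l_c + 1) N' transmissions
    return (Fraction(1, 2) - eps) / (l_c + 1)

for eps in [Fraction(1, 100), Fraction(1, 20), Fraction(1, 10)]:
    overall = two_phase_error_rate(eps)
    assert overall >= Fraction(1, 6) - eps     # at least the claimed 1/6 - eps
    print(eps, overall)
\end{verbatim}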
But then, the corresponding corruption rate of the adversary during the actual simulation phase is lower than $\frac{1}{2} - \epsilon$, and so, given any strategy which must be independent of the generated secret key, the fraction of branches in which the secret key enables Alice and Bob to succeed is overwhelming over the measurement outcomes, and the remainder of the analysis goes as in section \ref{sec:analopt}, proving Theorem \ref{th:advstand}.
\subsection{Quantum Model with Random Errors}
\begin{theorem}
\label{th:iidstand}
Given a two-party quantum protocol of length $N$ in the noiseless model and any $Q > 0$, there exists a simulation protocol in the quantum model that is of length $O(\frac{1}{Q} N)$ and succeeds in simulating the original protocol with arbitrarily small error over quantum depolarizing channels of quantum capacity $Q$. More precisely, there exist a constant $l_r > 0$ and a function $f: \mathbb{N} \rightarrow \mathbb{R}^+$ with $\lim_{N \rightarrow \infty} f(N) = 0$ such that given any depolarizing channel $\M_Q$ of quantum capacity $Q > 0$ and noiseless protocol length $N \in 2 \mathbb{N}$, there exists a universal simulator $P$ in the quantum model of length $N^\prime$ with communication rate $R_Q \geq l_r Q$, transmission alphabet size $2$, which succeeds with error $f(N)$ at simulating all noiseless protocols of length $N$ over $\M_Q$.
\end{theorem}
\begin{theorem}
\label{th:iidstandconv}
There exists a sequence of two-party quantum protocols of increasing length $N$ in the noiseless model such that for all $Q_B > 0$, any corresponding sequence of simulation protocols of length $o(\frac{1}{Q_B} N)$ in the quantum model fails at outputting the final state with low error on some input over quantum depolarizing channels of quantum capacity with classical feedback $Q_B$. Moreover, the family of quantum protocols can be chosen to be one computing a distributed binary function. More precisely, there exists a sequence $\{\Pi_N \}_{N \in 2 \mathbb{N}}$ of two-party quantum protocols and constants $d, \epsilon > 0$ such that for all $N_0 \in \mathbb{N}$, there exist $N \geq N_0$ and $Q_B > 0$ such that for any simulation protocol $P$ in the quantum model of length $N^\prime = \frac{d}{Q_B} N$ with communication rate $R_Q = \frac{N}{N^\prime}$, the simulation does not succeed with error $\epsilon$ over the quantum depolarizing channels.
\end{theorem}
\begin{theorem}
\label{th:iidstandopt}
Given a two-party quantum protocol of length $N$ in the noiseless model, there exists a quantum depolarizing channel of unassisted forward quantum capacity $Q=0$ and a simulation protocol in the quantum model with asymptotically positive rate of communication which succeeds in simulating the original protocol with arbitrarily small error over that quantum channel. More precisely, there exist constants $c, R_Q > 0$ such that given a particular depolarizing quantum channel $\M_0^Q$ of forward quantum capacity $Q = 0$ and any noiseless protocol length $N \in 2 \mathbb{N}$, there exists a universal simulator $P$ in the quantum model of length $N^\prime$ with communication rate at least $R_Q$, transmission alphabet size $2$, which succeeds with error~$2^{- cN}$ at simulating all noiseless protocols of length $N$ over $\M_0^Q$.
\end{theorem}
\subsubsection{Discussion About Optimality}
It is known that for some range of the depolarizing parameter, the quantum capacity with classical feedback $Q_B$ of the depolarizing channel is strictly larger than its unassisted forward quantum capacity $Q$ \cite{BDSW96}. In particular, there exist values for which $Q=0$ but $Q_B > 0$. A careful analysis of the related $2$-way entanglement distillation protocols (in particular their communication cost and their amount of interaction) reveals that there is some range of the depolarizing parameter for which we can achieve successful simulation even though $Q=0$, by using the depolarizing channels in each direction to transmit the classical information. Note that $Q_B > 0$ if and only if the depolarizing parameter $\epsilon^\prime < \frac{2}{3}$, and so $Q_B > 0$ if and only if the quantum capacity assisted by two-way classical communication $Q_2 > 0$. In the case where we are given a depolarizing channel with $Q_B > 0$, we can modify the method used in the proof of Theorem \ref{th:iidstandopt} by iteratively using the recurrence method a constant number of times (constant in $N$, not in the depolarizing parameter!) on the noisy distributed EPR pairs until the depolarizing channels induced through teleportation over the noisy distilled EPR pairs have $Q > 0$, and then distribute entanglement over the induced channels using standard QECCs. We achieve asymptotically positive rates of communication for our simulation protocols. It is an interesting open question whether we can close the gap between our lower and upper bounds and always achieve successful simulation at a rate $O(\frac{1}{Q_B} N)$. The separation result regarding the forward, unassisted quantum capacity of the depolarizing channel requires some technical work, but the case of the erasure channel already makes it clear that in general for discrete memoryless quantum channels, the unassisted forward quantum capacity is not the most suitable quantity to consider in the setting of interactive quantum communication.
\subsubsection{Proof of Theorem \ref{th:iidstand}}
For the random error case in the quantum model, we use techniques similar to those of the adversarial error case. Indeed, we split the protocol into two phases: an entanglement distribution phase and an actual simulation phase.
To avoid technicalities, it is sufficient to adapt the result from Section~\ref{sec:bas} for a basic simulation protocol of length $N^{\prime \prime}$ over some large alphabet $\Sigma$. We then only need to distribute $N^{\prime \prime}$ EPR pairs. For any depolarizing channel of quantum capacity $Q > 0$, we use standard quantum Shannon theory type coding to distribute entanglement at a rate proportional to $Q$ with low error. Then, for the actual simulation part, we use both the fact that the classical capacity $C$ is at least as large as the quantum capacity $Q$ for any quantum channel, and that a classical-capacity-achieving strategy for the depolarizing channel is just to simulate a binary symmetric channel (BSC) of capacity $C$ for each transmission by measuring the output in the computational basis, and then do block coding over the corresponding BSC (details are provided in \cite{Wilde11}). We can then translate the arguments of the proof of Theorem \ref{th:iidshent} to design our classical strategy which succeeds with overwhelming probability (assuming perfect entanglement for now), and the output is arbitrarily close to that of the noiseless protocol. Combining the bound on this error with the one from the entanglement distribution part, the simulation can be made to succeed with error less than $f(N)$ over the depolarizing channel of quantum capacity $Q$, for some function $f: \mathbb{N} \rightarrow \mathbb{R}^+$ which asymptotically goes to zero. Details will appear in a future version of this work.
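For concreteness, with the standard parametrization $\T_{\epsilon^\prime}(\rho) = (1-\epsilon^\prime)\rho + \epsilon^\prime \frac{I}{2}$ (consistent with $F = 1 - \frac{3\epsilon^\prime}{4}$ in Section~\ref{sec:nsent}), measuring each channel output in the computational basis induces a BSC with crossover probability $\epsilon^\prime/2$, so the block coding above is performed over a BSC of capacity $1 - H(\epsilon^\prime/2)$, with $H$ the binary entropy function.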
\subsubsection{Proof of Theorem \ref{th:iidstandconv}}
The idea for this proof is to use the symmetry of the depolarizing channel for entanglement distribution in order to simulate one direction of use of the quantum depolarizing channel with classical feedback, which is then used for teleportation. We then apply a coherent version of the idea, used in the proof of Theorem \ref{th:iidshentopt}, of communicating via the inner product protocol: the depolarizing channel is first used to distribute quantum entanglement, and then used again (with the inner product protocol, this time to communicate classical information) to teleport.
Similar to what was argued in the proof of Theorem \ref{th:iidshentopt} for classical communication, it is clear that some protocols of length $N$ fitting our general framework in the noiseless model can be used to communicate up to $\frac{N}{2}$ qubits in each direction. Hence, since our simulation protocols of length $N^\prime$ can be simulated by $N^\prime$ uses of a depolarizing channel from Alice to Bob supplemented by classical feedback from Bob to Alice, we cannot have a simulation protocol of length smaller than $\frac{N}{2 Q_B}$ for small enough error. To prove that a protocol to compute a binary function is sufficient, we once again consider the inner product function $IP_n$. Note that what we achieved in the proof of Theorem \ref{th:iidshentopt} using the protocol for $IP_n$ is actually stronger than $\Theta (N)$ bits of classical communication: we had a coherent bit channel \cite{Har04} for $\Theta (N)$ cobits (coherent bits), which can be used to distribute $\Theta (N)$ ebits (EPR pairs). But then we can perform teleportation of $\Theta (N)$ qubits from Alice to Bob by once again using the $IP_n$ protocol, but this time to transmit the classical teleportation measurement information.
We have thus used the length $N^\prime$ simulation protocol at most $4$ times (depending on whether or not the noiseless one was computing the $IP_n$ function coherently to begin with) over the depolarizing channel from Alice to Bob, with (free, perfect) classical feedback from Bob to Alice, and succeeded at transmitting $\Theta (N)$ qubits, so we must have $N^\prime \in \Omega (\frac{1}{Q_B} N)$ for the simulation to succeed with small error. Note that we once again make the reasonable assumption that, in the case in which the initial protocol is not coherent, we can run the simulation protocol backward over the noisy channel at the same communication cost. Details will appear in a future version of this work.
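Schematically, the at most $4$ runs of the length-$N^\prime$ simulation can be implemented with $4 N^\prime$ uses of the depolarizing channel from Alice to Bob assisted by classical feedback, and together they transmit $\Theta(N)$ qubits with small error, so that
\[
4 N^\prime \, Q_B \gtrsim \Theta(N) \quad \Longrightarrow \quad N^\prime \in \Omega\!\left(\tfrac{1}{Q_B} N\right).
\]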
\subsubsection{Proof of Theorem \ref{th:iidstandopt}}
The case of the depolarizing channel requires some technical work, so for simplicity we first consider the case of the quantum erasure channel. For the quantum erasure channel, we use the fact that, for erasure probability $\frac{1}{2} \leq p < 1$, the (forward, unassisted) quantum capacity is $0$ while the classical capacity is $1-p$ and the entanglement generation capacity with classical feedback is at least $1-p$. Moreover, the feedback required to achieve this bound is only one message of length linear in the size of the quantum communication. The strategy we use is the following: for a basic simulation protocol of length $N^{\prime \prime}$ over $\Sigma$, Alice distributes $N^{\prime \prime}$ EPR pairs to Bob by sending $\frac{4 N^{\prime \prime}}{(1-p)}$ halves of EPR pairs over the quantum erasure channel. Then, except with negligible probability, at least $N^{\prime \prime}$ of them are received correctly, and Bob knows which ones these are. The feedback consists of informing Alice which $N^{\prime \prime}$ pairs to use in the protocol, so that they both agree. This can be done over the quantum erasure channel (again except with negligible probability) with a classical message of length linear in $N^{\prime \prime}$.
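Quantitatively, each of the $\frac{4 N^{\prime\prime}}{1-p}$ transmissions is received independently with probability $1-p$, so the expected number of surviving pairs is $4 N^{\prime\prime}$, and a standard Chernoff--Hoeffding bound gives
\[
\Pr\!\left[\text{fewer than } N^{\prime\prime} \text{ pairs survive}\right] \;\le\; \exp\!\left(-\tfrac{9}{2}\,(1-p)\, N^{\prime\prime}\right),
\]
which is indeed negligible.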
Then, given a message set $\Sigma$, we can use the quantum erasure channel a constant number of times to decrease the probability of error in a classical transmission of any symbol $e \in \Sigma$ below $\frac{1}{90}$. Except with negligible probability, the fraction of the $N^{\prime \prime}$ symbols of $\Sigma$ transmitted in this way that are received in error is below $\frac{1}{80}$. We can then use ideas similar to those in the proof of Theorem \ref{th:advstand} to argue that the output is arbitrarily close to that of the noiseless protocol. Details will appear in a future version of this work.
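To see that a constant number of channel uses per symbol suffices: if each classical bit encoding a symbol of $\Sigma$ is repeated $k$ times over the erasure channel, it is lost only if all $k$ copies are erased, so the per-symbol error probability is at most $\lceil \log_2 |\Sigma| \rceil \, p^{k}$, which falls below $\frac{1}{90}$ for a constant $k$ depending only on $p$ and $|\Sigma|$.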
Now for the depolarizing channel, the idea is mostly the same, but we have to work harder to obtain (almost) noiseless entanglement. The unassisted forward capacity of the depolarizing channel is shown in \cite{BDSW96} to be equivalent to the one-way entanglement distillation yield. To separate one-way and two-way entanglement distillation, they use a combination of the recurrence method of \cite{BBPSSW95}, an explicitly two-way entanglement distillation protocol that can purify highly noisy entanglement but does not have a positive yield in the limit of high fidelity distillation, along with their hashing method, a one-way protocol with positive yield in the perfect fidelity limit but which does not work on highly noisy entanglement. However, we cannot hope to use this strategy to distill near perfect EPR pairs in our scenario since the hashing method as they describe it requires too much communication (however, we could probably use some derandomization argument to avoid communicating the random strings). To reduce the communication cost, we instead use a hybrid approach of entanglement distillation followed by quantum error correction.
Starting with a depolarizing channel that is as close to noiseless as possible while still having $Q = 0$, we use it to distribute imperfect EPR pairs. This yields (rotated) Werner states with the highest possible fidelity to perfect EPR pairs, but such that one-way entanglement distillation protocols cannot have a positive yield of EPR pairs while two-way entanglement distillation protocols can (see Section \ref{sec:nsent} for a definition of Werner states). We then do one round of the recurrence method for entanglement distillation to obtain a smaller number of Werner states of higher fidelity to perfect EPR pairs, and so we could now use one-way distillation protocols on these to obtain a positive yield of near perfect EPR pairs. Note that the amount of classical communication required up to this point is one message from Alice to Bob of linear length informing him of her measurement outcomes, and then one classical message of linear length from Bob to Alice informing her which states to keep as well as which rotation to apply to these (to go back to symmetric Werner form; $\log{12}$ bits of information per pair are sufficient for this purpose \cite{BDSW96}). But we can now use these EPR pairs along with teleportation to effectively obtain a depolarizing channel of quantum capacity $Q > 0$, and so we use standard quantum Shannon theory type coding over this quantum channel to distribute $N^{\prime \prime}$ near perfect EPR pairs. This new step has only required a linear amount of classical communication, and so after the initial very noisy entanglement distribution step, we only have three classical messages to send over the depolarizing channel of classical capacity $C > 0$, and so we can generate near perfect entanglement using the depolarizing channel a linear number of times, and then go on to do the actual simulation phase as above. Note that we are not yet assured of an exponential decay of the error at this point, only that the error tends to zero in the limit of large $N$. To get exponential decay, we adapt the above protocol so that, before using teleportation and QECCs to distribute good entanglement, we perform a few more rounds of the recurrence method until the Werner states reach a fidelity parameter above $0.82$. Except with negligible probability, starting with some linear amount of noisy EPR pairs, after a constant number of rounds of the recurrence method, we are left with sufficiently many less noisy EPR pairs for our next step. At this point, it is known that there exist stabilizer codes achieving the hashing bound (which has strictly positive yield for this noise parameter) and which have negligible error. Using the fact that some classical capacity achieving strategy for the depolarizing channel also has negligible error, we get the stated exponential decay in the error. Details will appear in a future version of this work.
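For reference, one round of the recurrence method acts on pairs of Werner states of fidelity $F$ and, upon a successful parity check, outputs (after twirling) a Werner state of fidelity
\[
F^\prime \;=\; \frac{F^2 + \frac{1}{9}(1-F)^2}{F^2 + \frac{2}{3}F(1-F) + \frac{5}{9}(1-F)^2},
\]
which satisfies $F^\prime > F$ for all $\frac{1}{2} < F < 1$ \cite{BBPSSW95}.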
\subsection{Noisy Entanglement}
\label{sec:nsent}
The last model we consider is a further variation on the shared entanglement model, in which, along with the noisy classical links between the honest parties, the entanglement these parties share is also noisy. Details will appear in a future version of this work.
There are many possible models for noisy entanglement; we consider a simple one in this section, in which parties share noisy EPR pairs instead of perfect pairs. Following \cite{BBPSSW95}, we consider the so-called (rotated) Werner states $W_F = F \kb{\Phi_{00}}{\Phi_{00}} + \frac{1 - F}{3} (\kb{\Phi_{01}}{\Phi_{01}} + \kb{\Phi_{10}}{\Phi_{10}} + \kb{\Phi_{11}}{\Phi_{11}})$, which are mixtures of the four Bell states parametrized by $0 \leq F \leq 1$. Note that these are the result of passing one qubit of an EPR pair through a $\T_{\epsilon^\prime}$ depolarizing channel, for $F = 1 - \frac{3 \epsilon^\prime}{4}$. The purification of these noisy EPR pairs is given to Eve. We use the result of \cite{BBPSSW95} to show that for any $F > \frac{1}{2}$, simulation protocols with asymptotically (in $N \rightarrow \infty$, not in $F \rightarrow \frac{1}{2}$) positive communication rates and which can tolerate a positive error rate can succeed with asymptotically zero error. This is optimal: at $F = \frac{1}{2}$, Werner states are separable, so there is no way to use them in conjunction with classical communication to simulate quantum communication.
\subsubsection{Adversarial Errors on the Classical Communication}
We first consider the case of adversarial errors. Let $l_c$ be the number of rounds of the recurrence method for entanglement distillation necessary to reach the $F = 0.82$ bound. This number is independent of $N$, and depends only on the initial $F$. As described in the proof of Theorem \ref{th:iidstandopt}, each round of the recurrence method only requires a message of linear length in each direction. After this bound is reached, one last linear-length classical message is sufficient to generate a linear amount of entanglement through teleportation via an induced depolarizing channel of quantum capacity $Q > 0$, so standard quantum error correction techniques enable us to extract near perfect entanglement at this point. Once we have near perfect entanglement, we can use techniques from the basic simulation protocol to perform successful simulation of noiseless protocols, hence achieving our goals. The protocol described above requires the communication of $2 l_c + 1$ messages to distill near perfect entanglement, independent of $N$, followed by an actual simulation phase, so the simulation protocol can tolerate a constant error rate (though inversely proportional to $l_c$), requires a constant rate of noisy entanglement consumption (though exponential in $l_c$ since each round of the recurrence method consumes at least half of the noisy EPR pairs), and has a constant, positive rate of communication (though inversely proportional to the number of consumed noisy EPR pairs). Details will appear in a future version of this work.
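As a rough illustration of the claim that $l_c$ depends only on the initial fidelity $F$ (and not on $N$), the following sketch iterates the recurrence fidelity map recalled after the proof of Theorem~\ref{th:iidstandopt}; it is schematic only and assumes that map.
\begin{verbatim}
# Illustrative sketch: iterate the recurrence-method fidelity map
# (assumed BBPSSW form, as quoted above) to estimate l_c, the number
# of rounds needed to bring Werner states above the F = 0.82 threshold.

def recurrence_step(F):
    num = F**2 + (1 - F)**2 / 9
    den = F**2 + 2 * F * (1 - F) / 3 + 5 * (1 - F)**2 / 9
    return num / den  # fidelity after one successful round

def rounds_to_threshold(F, target=0.82, max_rounds=200):
    count = 0
    while F < target and count < max_rounds:
        F = recurrence_step(F)
        count += 1
    return count, F

if __name__ == "__main__":
    for F0 in (0.55, 0.65, 0.75):
        l_c, F_final = rounds_to_threshold(F0)
        print(f"F0 = {F0:.2f}: l_c = {l_c} rounds, final F = {F_final:.3f}")
\end{verbatim}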
\subsubsection{Random Errors on the Classical Communication}
The case of noisy communication through binary symmetric channels is once again immediate from the adversarial error case by a concentration argument. The communication rate is proportional to the classical capacity $C$ and inversely proportional to the number of noisy EPR pairs consumed. Details will appear in a future version of this work.
\section{Conclusion: Discussion and Open Questions}
We perform simulation of interactive quantum protocols over noisy channels with only a linear dilation factor. In particular, our approach is to replace irreversible measurements by reversible pseudo-measurements in the Cleve-Buhrman model (shared entanglement, classical communication), and then in the analogous noisy model to teleport the corresponding quantum communication register to avoid losing the quantum information it contains over the noisy channel. With this approach, we have been able to prove that it is possible to simulate the evolution of quantum protocols designed for noiseless quantum channels over noisy channels with only a linear dilation factor. Moreover, in the case of adversarial channel errors in which parties are allowed to pre-share a linear amount of entanglement, we were able to prove that the error rate of $\frac{1}{2} - \epsilon$ that our simulation protocol can sustain is optimal unless we generalize the noisy communication model such that the order in which the parties take turns in the protocol can be adapted to the errors. But in a noisy setting, restricting to non-adaptive protocols is natural. Otherwise, depending on the particular view of each party on the evolution of the protocol due to previous errors, they could disagree on whose turn it is to speak, and this would result in protocols that are not well defined.
To simplify the exposition, we chose not to optimize different parameters, such as communication and entanglement consumption rates and communication register size. It is possible to modify our results in a straightforward manner to transmit larger (noiseless protocol) communication registers in each round, hence decreasing the amount of interaction while still tolerating high error rates. It is also possible to adapt our findings to a random error model in which parties are allowed to share entanglement but communicate over binary symmetric channels of capacity $C > 0$, and then we obtain communication rates proportional to $C$. It can be shown that, up to a hidden constant, this is optimal for some family of distributed binary functions, for example the inner product functions $\mbox{\textsl{IP}}_n : \{0, 1 \}^n \times \{0, 1 \}^n \rightarrow \{0, 1 \}, \mbox{\textsl{IP}}_n (x, y) = \oplus_{i=1}^n x_i \cdot y_i$. Our findings can also be adapted to obtain similar (though not optimal) results for the quantum model (the noisy version of Yao's model), in which our simulation protocols run in two phases: a first phase in which a linear amount of entanglement is distributed with standard techniques from quantum Shannon theory for random noise and from quantum coding theory for adversarial noise, and then an actual simulation part in which the parties perform actions similar to those of the shared entanglement model. In an adversarial noise model, we show that we can tolerate an error rate of $\frac{1}{6} - \epsilon$ in the quantum model, while for a random noise model in which the parties communicate over depolarizing channels of capacity $Q > 0$, we obtain rates proportional to $Q$. We also show that the use of depolarizing channels in both directions enables the simulation to succeed even for some quantum channels of unassisted forward quantum capacity $Q = 0$, and we extend our ideas to perform simulation with noisy entanglement as well as noisy classical channels.
Further directions for this research program would be to try to obtain better communication rates in all of the models discussed. In particular, we would like to study the interactive capacity of the depolarizing channel with depolarizing parameter $\epsilon^\prime$. The question of interactive capacity for the binary symmetric channel was raised in the classical context by Braverman in a survey article on the topic of interactive coding \cite{Bra12}, and there have been recent developments in providing lower and upper bounds for this quantity \cite{KR13}. In the classical setting, a particular problem with worst-case interaction, consisting of one-bit transmissions, to which all classical interactive protocols can be mapped, was proposed in order to study such a quantity. Since every interactive quantum protocol can be mapped onto our general problem, we propose to study such a quantity in the quantum domain. Would the interactive capacity of the binary symmetric channel (with entanglement assistance) for quantum protocols be the same as for classical protocols \cite{KR13}, up to a factor of two for teleportation? We will show in upcoming works that for bit flip probability $\epsilon$, the lower bound of $\frac{1}{2} - O(\sqrt{H(\epsilon)})$ holds, but do the techniques developed in \cite{KR13} adapt to the quantum setting to obtain a matching upper bound of $\frac{1}{2} - \Omega(\sqrt{H(\epsilon)})$? What about the depolarizing (and other) channels? Another question that remains open is that of the highest tolerable adversarial error rate that can be withstood in the quantum model. To study this question, we would like to develop a fully quantum approach to our problem, and to do so new kinds of quantum codes might need to be developed. In particular, ideas from fault-tolerant quantum computation might need to be borrowed, due to the nature of quantum information. Another important question in the quantum setting is what would happen in a shared entanglement setting if, along with the noisy classical communication, the entanglement provided were also noisy; we only investigate this question for a simple noise model for the entanglement, but other models would also be worth studying. In particular, what about adversarial noise on the shared EPR pairs above the $\frac{1}{8}$ binary error rate limit? Note that below that bound, standard quantum error correction for qubits with teleportation can be used for distillation. Finally, the question of efficient simulation is also an interesting one, and we will show in upcoming works how to adapt the techniques developed by Brakerski and Kalai \cite{BK12} to our setting to efficiently process the classical communication in our simulation protocols.
\paragraph{Acknowledgement}
The authors are grateful to Louis Salvail, Benno Salwey and Mark M.\ Wilde for useful discussions. G.B.~is supported in part by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chair program and the Canadian Institute for Advanced Research. A.T.~is supported by NSERC and CIFAR. D.T.~is supported by a Fonds de Recherche Qu\'ebec-Nature et Technologies B2 Doctoral research scholarship.
\section{Chargino contributions to \\ $h \rightarrow \gamma \gamma$ and $h \rightarrow \gamma Z $}
We begin our study with an exploration of the possible range of corrections from very light charginos to the $h \rightarrow \gamma \gamma$ and $h \rightarrow \gamma Z $ rates. The one-loop chargino contributions to these rates have been computed in
Refs.~\cite{Kalyniak:1985ct,Bates:1986zv,Gunion:1988mf,Weiler:1988xn,Djouadi:1996pb,Djouadi:1996yq}.
These contributions have been incorporated into the {\sc CPSuperH} package for MSSM Higgs phenomenology, and formulae for the partial decay widths are collected in Refs.~\cite{Lee:2012wa,Lee:2007gn}. For the numerical results below, we have recalculated the partial decay rates and cross checked the results with {\sc CPSuperH}.
As alluded to above, one does not generally expect large contributions from chargino loops to these processes, as the $h^0\tilde \chi^+ \tilde \chi^-$ coupling is governed by the weak gauge coupling $g$. We can see this explicitly by employing the low-energy theorem~\cite{Ellis:1975ap,Shifman:1979eb} to compute the effective $h\gamma\gamma$ coupling, which in the decoupling limit is given by
\begin{equation}
{\cal L} \supset \frac{\alpha \, b_{\tilde \chi^+} }{16 \sqrt{2} \pi v} \!
\left[ \! \left( \frac{\partial}{\partial \log v_u} \! + \! \frac{\partial}{\partial \log v_d} \right)
\! \log {\bf X}{\bf X}^\dag \! \right] h^0 F_{\mu\nu}F^{\mu\nu}, ~
\label{LET}
\end{equation}
where $v_u, v_d$ are the vacuum expectation values of the up- and down-type Higgs doublets, $v \equiv \sqrt{v_u^2 + v_d^2} = 174$ GeV, $b_{\tilde \chi^+} = 4/3$ is the QED beta function coefficient for the chargino, and ${\bf X}$ is the chargino mass matrix,
\begin{equation}
{\bf X} = \left(
\begin{array}{cc}
M_2 & g v_u \\
g v_d & \mu
\end{array}
\right),
\end{equation}
with $M_2$ the wino soft mass and $\mu$ the supersymmetric Higgs mass parameter.
We can straightforwardly take the derivatives in Eq.~(\ref{LET})
and compute the correction to the partial decay width:
\begin{equation}
\frac{ \Gamma(h\rightarrow \gamma\gamma)}{ \Gamma(h\rightarrow \gamma\gamma)_{\rm SM}} \simeq
\bigg\vert 1 + \frac{b_{\tilde \chi^+}}{A_{\gamma}^{\rm SM} } \frac{g^2 v^2 \sin {2 \beta} }{M_2 \mu - \tfrac{1}{2} g^2 v^2 \sin{2\beta}} \bigg\vert^2,~~
\label{rgamma-LET}
\end{equation}
where $\tan\beta \equiv v_u/v_d$ and $A_{\gamma}^{\rm SM} \approx 6.5$ comes mainly from the $W$ and top loops. We observe that the chargino contribution is maximized for $\tan \beta \rightarrow 1$. However, even in this case, if the lightest chargino is above $\sim 100$ GeV, only a moderate contribution to the $h\rightarrow \gamma \gamma$ partial width is achieved. For example, with $\tan\beta = 1$, $M_2 = \mu = 185$ GeV, we obtain the chargino mass eigenvalues $(m_{\tilde \chi^+_1},m_{\tilde \chi^+_2}) =(104, 266)$ GeV and an enhancement in the $h^0 \rightarrow \gamma\gamma$ rate of $\sim 20\%$ from Eq.~(\ref{rgamma-LET}). In this case, the lightest chargino is just above the LEP limit of $103.5$ GeV, assuming the decay $\tilde \chi_1^+ \rightarrow W^+ \tilde \chi^0$ and R-parity conservation resulting in a stable neutralino $\tilde \chi^0$. This is the maximum enhancement possible for charginos heavier than the LEP bound, and the effect decreases dramatically as the lightest chargino mass is raised and as $\tan\beta$ is increased.
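As a quick numerical cross-check of this example (schematic only: we take $g \simeq 0.65$ and $v = 174$ GeV as inputs and use Eq.~(\ref{rgamma-LET}) directly, rather than the full one-loop result):
\begin{verbatim}
# Schematic check of the tan(beta) = 1, M2 = mu = 185 GeV example:
# chargino masses from the singular values of X, and the enhancement
# of Gamma(h -> gamma gamma) from Eq. (rgamma-LET).
# Assumed inputs: g ~ 0.65, v = 174 GeV, A_SM ~ 6.5, b = 4/3.
import numpy as np

g, v = 0.65, 174.0
tanb, M2, mu = 1.0, 185.0, 185.0
beta = np.arctan(tanb)
vu, vd = v * np.sin(beta), v * np.cos(beta)

X = np.array([[M2, g * vu], [g * vd, mu]])      # chargino mass matrix
masses = np.sort(np.linalg.svd(X, compute_uv=False))
print("chargino masses [GeV]:", np.round(masses, 1))   # ~ (105, 265)

b_chi, A_SM = 4.0 / 3.0, 6.5
s2b = np.sin(2 * beta)
corr = b_chi / A_SM * g**2 * v**2 * s2b / (M2 * mu - 0.5 * g**2 * v**2 * s2b)
print("Gamma(h->aa)/SM ~", round(abs(1 + corr)**2, 2))  # ~ 1.2, i.e. ~20%
\end{verbatim}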
\begin{figure}
\centerline{
\includegraphics[width= 0.9\columnwidth]{FormFactor.pdf}
}
\caption{Form factor $A_{1/2}(m_{\tilde \chi^+})$ for $h \rightarrow \gamma\gamma$ mediated by a chargino loop (our definition of $A_{1/2}$ matches that of Ref.~\cite{Djouadi:2005gi}). We display the real (blue) and imaginary (orange) parts of the form factor. The red line indicates the Higgs mass. The dark shaded region $m_{\tilde \chi^+} < m_h/2$ is not allowed by the Higgs signal strength dataset, since for such light charginos the Higgs would dominantly decay to chargino pairs. Direct searches at LEP place model dependent constraints in the light shaded region $m_{\tilde \chi^+} \lesssim 100$ GeV. As the mass of the chargino is lowered, the form factor increases from $A_{1/2} = 4/3$ for very heavy charginos to $A_{1/2} = 2$ for $m_{\tilde \chi^+} = m_h/2$. }
\label{fig:form-factor}
\end{figure}
We now relax the assumption that the chargino is heavier than 100 GeV and consider lighter charginos. We restrict to values of the lightest chargino mass eigenstate greater than $m_h/2$ since otherwise the Higgs boson will dominantly decay to a pair of charginos, which is not allowed by the Higgs signal strength dataset. Later we will present a concrete scenario in which such a light chargino evades all direct searches at LEP, but the chargino contributions to the loop-induced Higgs couplings are independent of these considerations. The point we wish to emphasize here is very simple: as the mass of the lightest chargino $m_{\tilde \chi_1^+}$ decreases towards $m_h/2$, the one loop form factor $A_{1/2}(m_{\tilde \chi^+_1})$ exhibits a rise, as displayed in Fig.~\ref{fig:form-factor}. The form factor increases from its asymptotic value of $A_{1/2} = 4/3$ for heavy charginos to $A_{1/2} = 2$ for $m_{\tilde \chi_1^+} = m_{h}/2$. This basic observation, which of course applies not only for charginos but for all light charged particles that couple to the Higgs, allows for a large correction to the loop-induced couplings.
We present in Figs.~(\ref{fig:hGaga}) and (\ref{fig:hGaZ}) the ratios of the partial decay widths to their SM values, including the one-loop chargino contributions, for $h\rightarrow \gamma\gamma$ and $h\rightarrow Z \gamma$, respectively. The results are displayed as a function of $\tan \beta$ for several values of the lightest chargino mass eigenstate between $m_h/2$ and the LEP limit. In this plot we have fixed $M_2 = \mu$, which maximizes the chargino contribution. We observe that it is possible to obtain enhancements as large as $70\%$ (75$\%$) for the $h\rightarrow \gamma\gamma$ ($h\rightarrow \gamma Z$) rates in the extreme case of $m_{\tilde \chi^+_1} = 64$ GeV and $\tan \beta = 1$. However, even for somewhat larger chargino masses and $\tan \beta$ values, it is possible to achieve a sizable enhancement. For instance, if $m_{\tilde \chi^+_1} = 70$ GeV and $\tan \beta = 4$, one obtains enhancements of order $30\%$.
\begin{figure}[t!]
\centerline{
\includegraphics[width= 0.9\columnwidth]{hGaGaMax.pdf}
}
\caption{Ratio of the partial decay width for $h\rightarrow \gamma\gamma$ to the SM prediction including the contribution of charginos. We have fixed $M_2 = \mu$ and work in the decoupling limit. The prediction is shown for lightest chargino masses $m_{\tilde \chi^+_1} = 64, 70, 80, 103.5$ GeV.}
\label{fig:hGaga}
\end{figure}
\begin{figure}[t!]
\centerline{
\includegraphics[width= 0.9\columnwidth]{hZgammaMax.pdf}
}
\caption{ Ratio of the partial decay width for $h\rightarrow \gamma Z$ to the SM prediction including the contribution of charginos. We have fixed $M_2 = \mu$ and work in the decoupling limit. The prediction is shown for lightest chargino masses $m_{\tilde \chi^+_1} = 64, 70, 80, 103.5$ GeV. }
\label{fig:hGaZ}
\end{figure}
We note that the LHC can eventually measure the $h\gamma\gamma$ coupling with a precision of $5-10\%$ with a high-luminosity $3000\, {\rm fb}^{-1}$ run, while a future linear collider can achieve a precision of a few percent~\cite{Peskin:2012we,Klute:2013cx}. In contrast to the $h \rightarrow \gamma \gamma$ channel, very few detailed studies exist for the $h \rightarrow \gamma Z$ channel. The analyses of Refs.~\cite{Gainer:2011aa,Campbell:2013hz} indicate that the SM prediction for this channel can be probed with $O(100\,{\rm fb}^{-1})$ at the LHC, but as far as we are aware no projections for future colliders exist.
\section{Collider bounds on charginos}
While we have speculated in the previous section on the existence of a light chargino with mass $m_h/2 < m_{\tilde \chi^+} \lesssim 100$ GeV, the conventional wisdom states that charginos are ruled out up to the kinematic reach at LEP2, $m_{\tilde \chi^+} \gtrsim 100$ GeV. We now survey the existing limits from LEP on such light charginos, the assumptions going into such limits, and potential loopholes in these analyses. We will break the discussion into scenarios with and without R-parity conservation. In the next section we will present a concrete scenario with a chargino as light as $m_h/2$ which evades all direct collider constraints.
\paragraph{\bf R-parity conservation}
We first consider scenarios with a conserved R-parity and stable LSP.
\begin{itemize}
\item {\it Chargino LSP}
\footnote{In the MSSM the neutralino is generically lighter than the chargino, although an exception to this rule occurs when
${\rm sign}(M_1) \neq {\rm sign}(M_2) = {\rm sign}(\mu)$.
See Ref.~\cite{Kribs:2008hq} for a detailed analysis.}: A stable (or long-lived) chargino LSP is ruled out up to the kinematic limit at LEP by searches for charged massive stable particles, {\it e.g.},
Refs.~\cite{Abdallah:2002rd,Abbiendi:2003yd}. The actual bound on the chargino mass is by now much higher due to similar searches at the Tevatron and LHC. Beyond direct collider searches, strong bounds on cosmologically stable charginos exist from searches for heavy water isotopes; see, {\it e.g.}, Ref.~\cite{Yamagata:1993jq}.
\item {\it Neutralino LSP, Chargino NLSP}: This is the canonical scenario for a light chargino. There are two cases to consider:
1) In the case of sizable chargino-neutralino mass splitting $\Delta M$, the limit on the chargino mass is 103.5 GeV~\cite{neutralino-chargino-RPC}. This limit assumes that the sneutrinos are heavy, $m_{\tilde \nu} > 300$ GeV. If a light electron-type sneutrino is present in the spectrum, $m_{\tilde \nu_e} \lesssim 100$ GeV, the chargino pair production cross section can be as small as $\sim 0.5$ pb (for $M_2 = \mu$) depending on the parameters. The upper limit on the chargino pair production cross section~\cite{neutralino-chargino-RPC} is larger than this in some regions of parameter space, particularly for chargino masses $ m_{\tilde \chi_1^+} \gtrsim 65$ GeV and light neutralinos, $ m_{\tilde \chi_1^0} \lesssim 20$ GeV. A detailed investigation of this possible opening is beyond the scope of this paper, but we note that there are several important additional considerations besides the LEP cross section limit: (i) constraints from Higgs decays to neutralinos, (ii) the implications of a light sneutrino (we will discuss this issue below), and (iii) collider searches from the Tevatron and LHC (the chargino/neutralino production cross section at hadron colliders does not depend on the properties of the sneutrino).
2) In the case of a small chargino-neutralino mass splitting $\Delta M$, dedicated searches were carried out to cover this region~\cite{neutralino-chargino-RPC-small-DM}. The limits are quite strong in general, but there is a region $0.1\,{\rm GeV} \lesssim \Delta M \lesssim 3\,{\rm GeV}$ where the limit is slightly weaker than the naive kinematic limit, $m_{\tilde \chi^+_1} \gtrsim 92$ GeV. We note that in this case, one can obtain a maximum enhancement of about $30\%$ in the $h\rightarrow \gamma\gamma$ decay rate from chargino loops.
\item {\it Sneutrino LSP - Chargino NLSP}: In this case, the chargino will decay to a lepton and a stable sneutrino, leading to a slepton-like signature of acoplanar leptons plus missing momentum. The combined LEP2 upper limits on the slepton production cross section are very strong in this case, below 0.2 pb for the case of stau, and much lower for smuon and selectron~\cite{slepton}. There is a possible loophole in the region of small mass splitting between the slepton (or chargino) and LSP, as in this regime the leptons are very soft and the missing momentum is small as the LSPs travel back-to-back.
For example, the ALEPH slepton search~\cite{Heister:2001nk} requires each lepton to have $p_T > 0.5\%\sqrt{s}$ and missing transverse momentum $p_{T \rm miss} > 1\% \sqrt{s}$, criteria which will generically not be met for splittings smaller than a GeV. We note that, unlike a neutralino and chargino, there is no particular reason that a chargino and a sneutrino should be so close in mass, but it is nevertheless an interesting possibility.
\end{itemize}
\paragraph{\bf R-parity violation}
We now consider scenarios with R-parity violation, in which the LSP is unstable. The signatures of chargino pair production are rich and varied in this case, with multiple leptons, jets and missing energy (from neutrinos) as possible final states. If the chargino is the LSP, it will decay to three SM fermions: two charged leptons and a neutrino through $LLE$, two jets and a lepton or neutrino via $LQD$, and three jets via $UDD$. If the chargino is the NLSP, or there are additional light superpartners in the spectrum, the final state multiplicity is even higher. The LEP experiments performed a broad array of searches to cover this diversity of signatures.
The ALEPH~\cite{Heister:2002jc}, DELPHI~\cite{Abdallah:2003xc}, and OPAL~\cite{Abbiendi:2003rn} analyses present {\it inclusive} limits from multiple search channels interpreted within the framework of the MSSM in the $\mu-M_2$ parameter space; the chargino mass limit is $m_{\tilde \chi^+_1} \gtrsim 100$ GeV in all cases. In contrast, the L3 analysis~\cite{Achard:2001ek} presents limits on specific decay modes for assumed spectra and RPV couplings, which are more easily interpreted. Again, charginos are ruled out up to $\sim 100$ GeV in all cases, though not all RPV couplings are considered.
For the ALEPH, DELPHI, and OPAL analyses, since explicit limits on specific decay modes of the chargino are not presented, it is difficult to make a concrete statement on the mass limit of the chargino for a specific spectrum and RPV coupling without further detailed analysis and simulation to recast these results. Nonetheless, in all of the RPV searches referenced above the observed number of events and expected backgrounds are reported for each selection, and no statistically significant excess was observed. Given the large chargino pair production cross section, one expects that the upper limit on the chargino mass will be close to the kinematic reach for any conceivable decay path.
\section{Chargino hiding at LEP}
As we have seen, a variety of searches at LEP place a fairly robust bound on charginos close to the naive kinematic limit, $m_{\tilde \chi^+_1} \gtrsim 100~\ensuremath{\mathrm{GeV}}$.
An explicit assumption in the RPV searches is that the LSP decays promptly.
However, as emphasized recently in Ref.~\cite{Graham:2012th}, these bounds are often severely weakened or even nullified if the LSP has a displaced decay within the detector.
In this section we describe a viable scenario in which the lightest chargino has a mass $m_h/2 < m_{\tilde \chi^+_1} \lesssim 100~\ensuremath{\mathrm{GeV}}$. In our scenario, the LSP is a sneutrino that has a macroscopic decay length of order $10 - 100~{\rm cm}$.
We will demonstrate that this scenario is compatible with all existing direct searches by experiments at LEP, Tevatron, and LHC.
\paragraph{\bf Long-lived Neutral LSP at LEP}
We begin by describing how long-lived neutral states decaying to jets and/or charged leptons can evade the suite of searches performed at LEP. We may first ask how such objects are classified during event reconstruction. For concreteness, let us focus on the ALEPH experiment~\cite{Buskulic:1994wz}, which uses an energy flow algorithm. The process begins with the identification of ``good'' charged tracks. Any charged track which originates from within a cylinder of length 20 cm and coaxial radius 2 cm from the beam line and centered about the interaction point will be considered as a ``good'' track.
Crucially, for a long-lived neutral particle with a macroscopic lifetime, $c \tau \gtrsim 10$ cm, the tracks associated with the charged decay products will be ignored as they will generally not point back to the collision point. The LEP RPV searches typically require a minimum number of ``good'' tracks, a significant fraction of which are assumed to originate from the prompt decay of the LSP. This selection criterion will generally not be met if the LSP decays far away from the primary interaction region. Similar conclusions hold for the DELPHI, OPAL, and L3 experiments~\cite{Abdallah:2003xc,Abbiendi:2003rn,Achard:2001ek}.
What then becomes of the particles produced in the displaced decay of the long-lived neutral particle? If the decay is displaced but still within the detector, the final state jets and/or leptons will deposit energy in the hadronic and/or electromagnetic calorimeters, and as such the long-lived neutral particle will not leave a signature of missing transverse momentum. Instead, these particles will be classified as neutral hadrons by the energy flow algorithm~\cite{Buskulic:1994wz}.
For lifetimes greater than several meters, the LSP can escape the detector altogether, leading to events with missing transverse momentum. Thus, in this regime standard SUSY searches will place severe constraints on sparticle masses. However, as we will show, there is an important gap for lifetimes of ${\cal O}(10 - 100\,{\rm cm})$ for which a displaced neutral LSP can evade the searches at LEP.
\paragraph{\bf Very Light Chargino}
We now describe a viable scenario for a very light chargino with mass $m_h/2 < m_{\tilde \chi^+_1} \lesssim 100~\ensuremath{\mathrm{GeV}}$. For concreteness, we take the electron-type sneutrino to be the LSP and assume it decays via the $\lambda_{121}$ $LLE$ coupling to an electron and anti-muon,
\begin{equation}
\tilde \nu_e \rightarrow e^- + \mu^+.
\label{eq:sneutrino-decay}
\end{equation}
The partial decay width is given by
$\Gamma_{\tilde \nu_e \rightarrow e^- \mu^+} \simeq \lambda_{121}^2 m_{\tilde \nu_e}/16\pi$, implying a sneutrino decay length
\begin{equation}
c\tau_{\tilde \nu_e} \approx 1\,{\rm cm} \times \left(\frac{10^{-7}}{\lambda_{121}} \right)^2 \left( \frac{70~{\rm GeV}}{m_{\tilde \nu_e}}\right).
\end{equation}
Such a small RPV coupling of order $10^{-7}$ is allowed by all experimental constraints~\cite{Barbier:2004ez}. Furthermore, constraints from lepton flavor violation are model dependent and can be evaded with appropriate choices of the slepton and sneutrino soft masses and trilinear couplings.
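As an order-of-magnitude check of this estimate (schematic; it uses $\hbar c \simeq 1.97 \times 10^{-16}$ GeV$\,$m and the width formula above):
\begin{verbatim}
# Order-of-magnitude check of the sneutrino decay length:
# Gamma = lambda^2 * m / (16 pi),  c*tau = hbar*c / Gamma.
import math

hbar_c = 1.973e-16            # GeV * m
lam, m_snu = 1e-7, 70.0       # RPV coupling and sneutrino mass [GeV]

gamma = lam**2 * m_snu / (16 * math.pi)   # width in GeV
ctau_cm = hbar_c / gamma * 100.0          # decay length in cm
print(f"c*tau ~ {ctau_cm:.1f} cm")        # ~ 1 cm scale, as quoted above
\end{verbatim}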
In addition to the sneutrino and chargino, there will necessarily be a light selectron and neutralino in the spectrum, and we will also consider the constraints on these states as well. Although our quantitative results apply to the specific case of an electron sneutrino LSP decaying according to (\ref{eq:sneutrino-decay}), there are other viable possibilities for the flavor and decay mode of the sneutrino. We will comment on these and other possibilities for the LSP and its decays below.
We now carefully consider the existing constraints on this scenario from direct collider searches and derive exclusion regions as a function of the sneutrino lifetime.
To obtain our numerical results we have used {\sc{MadGraph5}}~\cite{madgraph} for event generation, implementing the RPV-MSSM model with {\sc{FeynRules}}~\cite{Christensen:2008py}, in conjunction with a private code to compute the efficiency of track selection accounting for the finite decay length of the sneutrino.
\paragraph{ Sneutrino LSP}
We first consider the constraints on direct pair production of the sneutrino LSP at LEP. With the sneutrino decay mode in Eq.~(\ref{eq:sneutrino-decay}), pair production will lead to a four lepton final state. The ALEPH search~\cite{Heister:2002jc} places a lower bound on the $\tilde \nu_e$ mass close to the kinematic limit, $m_{\tilde \nu_e} \gtrsim 100$ GeV, in the case when the sneutrino decays promptly. However, the analysis requires 4 good tracks~\cite{Barate:1997ra}, and this requirement will generally not be met for sneutrinos with a sufficiently displaced decay.
In Fig.~\ref{fig:sneutrino} we display the fraction of signal events accepted as a function of $c \tau_{\tilde \nu_e}$ after demanding that each lepton points back to a cylinder of length 20 cm and coaxial radius 2 cm centered about the collision point. We observe that the acceptance decreases dramatically for sneutrino decay lengths larger than the cylinder radius, as in this regime the leptons do not typically point back to the cylinder.
In Fig.~\ref{fig:sneutrino-bound} we present the bounds from this search in the sneutrino lifetime ($c\tau_{\tilde \nu_e}$) - sneutrino mass plane. To derive these bounds, we equate the product of the sneutrino pair production cross section and the efficiency to select four good tracks
to the RPV four lepton cross section limit in Fig.~6a of Ref.~\cite{Heister:2002jc}.
The sneutrino pair production cross section can be found in Ref.~\cite{Dreiner:2008tw}. It is important to note that for electron sneutrino pair production in $e^+ e^-$ reactions, there is an additional diagram involving the $t$-channel exchange of the chargino. We have made the assumption $m_{\tilde \chi_1^+} = m_{\tilde \nu_e} + 10$ GeV,
$M_1 = \mu$, and $\tan\beta = 1.5$ in Fig.~\ref{fig:sneutrino-bound}.
In fact, the chargino diagram dominates in this case, leading to a cross section as large as $3$ pb for $m_{\tilde \nu_e} = 65$ GeV; the bound on a muon or tau flavored sneutrino would be much weaker. Nevertheless, we conclude from Fig.~\ref{fig:sneutrino-bound}
that electron sneutrino LSPs decaying to $e\mu$ pairs with decay lengths of 25 cm or longer are not constrained by this search. We note that Ref.~\cite{Graham:2012th} also derived bounds on a long-lived sneutrino LSP for the case in which it decays to a $b\bar b$ pair.
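Schematically, a point in this plane is excluded whenever
\[
\sigma_{\tilde \nu_e \tilde \nu_e^*}(m_{\tilde \nu_e}) \times \epsilon_{4\,{\rm tracks}}(c\tau_{\tilde \nu_e}) \;>\; \sigma_{95}(m_{\tilde \nu_e}),
\]
where we write $\epsilon_{4\,{\rm tracks}}$ for the track-selection efficiency of Fig.~\ref{fig:sneutrino} and $\sigma_{95}$ for the observed four lepton cross section limit of Ref.~\cite{Heister:2002jc}.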
\begin{figure}
\centerline{
\includegraphics[width=0.9\columnwidth]{Sneutrino-Acc-70GeV.pdf}
}
\caption{
Efficiency to select events with four good tracks in sneutrino pair production followed by $\tilde \nu_e \rightarrow e^- \mu^+$ for $\sqrt{s} = 208$ GeV as a function of the sneutrino lifetime $(c \tau_{\tilde \nu_e})$. Here we have fixed $m_{\tilde \nu_e} = 70$ GeV.
}
\label{fig:sneutrino}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width= 0.9\columnwidth]{Sneutrino-bound.pdf}
}
\caption{Constraints from ALEPH RPV four lepton searches~\cite{Heister:2002jc} on an electron sneutrino LSP decaying via $\tilde \nu_e \rightarrow e^- \mu^+$ in the sneutrino lifetime ($c\tau_{\tilde \nu_e}$) - sneutrino mass plane. Here we have fixed $m_{\tilde \chi_1^+} = m_{\tilde \nu_e} + 10$ GeV, $M_1 = \mu$, and $\tan\beta = 1.5$.
}
\label{fig:sneutrino-bound}
\end{figure}
What is the lower limit on the sneutrino mass? Sneutrinos lighter than $m_Z/2$ will be constrained by the total width of the $Z$ boson. Furthermore, the Higgs can decay to sneutrino pairs if
$m_{\tilde \nu_e} < m_h/2$. The coupling of the Higgs to sneutrinos is
$\lambda_{h^0 \tilde \nu \tilde \nu^*} = m_Z^2 \cos{2\beta}/(\sqrt{2} v)$, and thus $h\rightarrow \tilde \nu_e \tilde \nu_e^*$ would be the dominant decay mode for all values of $\tan \beta$ except those very close to unity.
Up to the additional phase space suppression for $m_{\tilde \nu_e} \lesssim m_h/2$,
we find that for $\tan \beta \lesssim 1.2$ the Higgs branching fraction to sneutrino pairs is
${\rm Br}(h\rightarrow \tilde \nu_e \tilde \nu_e^*) \lesssim 0.2$, which is allowed by current LHC data on the Higgs signal strength measurements. However, it is important to emphasize that an additional decay mode of the Higgs to sneutrino pairs will also dilute any enhancement in $h\rightarrow \gamma \gamma, \gamma Z$ due to the light chargino. We will therefore restrict to sneutrino masses greater than $m_h/2$.
\paragraph{Light Chargino}
Given that the electron sneutrino LSP can be lighter than 100 GeV provided its decay is displaced, $c \tau_{\tilde \nu_e } \gtrsim 25$ cm, we now investigate whether a very light chargino with mass $m_h/2 < m_{\tilde \chi^+} \lesssim 100$ GeV is also allowed in this scenario. The chargino will decay promptly via
\begin{equation}
\tilde \chi_1^+ \rightarrow e^+ + \tilde \nu_e,
\label{charginodecay}
\end{equation}
followed by the displaced sneutrino decay (\ref{eq:sneutrino-decay}) to a $e^- \mu^+$ pair. Chargino pair production will thus lead to a six lepton final state, with four of the leptons being displaced. There are two classes of LEP searches that are sensitive to chargino pair production in this case.
The first class of searches are for multi-lepton final states, as in RPV searches~\cite{Heister:2002jc,Abdallah:2003xc,Abbiendi:2003rn,Achard:2001ek}. These searches will be relevant for prompt sneutrino decays. However, as in the case of direct sneutrino pair production discussed above, the multi-lepton RPV searches explicitly require a minimum number of good tracks and their reach will be significantly diminished as the sneutrino lifetime increases.
The second class involves searches for acoplanar leptons plus large missing transverse momentum~\cite{Heister:2001nk,Abdallah:2003xe,Achard:2003ge,Abbiendi:2003ji}, which is the canonical signature of sleptons in R-parity conserving scenarios, and is relevant for very long sneutrino lifetimes such that the sneutrino decays outside the detector.
We present in Fig.~\ref{fig:chargino-bound} the bounds on chargino pair production as a function of the chargino mass and sneutrino LSP lifetime. First, the red shaded region is constrained by the OPAL RPV six lepton search~\cite{Abbiendi:2003rn}.
To derive these bounds, we equate the product of the chargino pair production cross section (see, {\it e.g.}, Ref.~\cite{Dreiner:2008tw}) and the efficiency to select at least 5 good tracks obtained through our simulation
to the six lepton cross section limit in Fig.~7 of Ref.~\cite{Abbiendi:2003rn} \footnote{While Fig.~7 of Ref.~\cite{Abbiendi:2003rn} presents a limit on slepton pair production resulting in six lepton events with large missing energy, we note that the search also included selections sensitive to six leptons without missing energy, and thus applies to our scenario.}. A ``good'' track for the OPAL search is one that points back to a cylinder with a 1 cm coaxial radius and 40 cm length around the primary collision~\cite{Abbiendi:2003rn}. The presence of an electron sneutrino LSP leads to an additional $t$-channel sneutrino exchange diagram in chargino pair production, which interferes destructively with the $s$-channel $\gamma,Z$ exchange amplitudes. For the limit in Fig.~\ref{fig:chargino-bound} we have fixed $m_{\tilde \nu_e} = 65$ GeV, $M_2 = \mu$, and $\tan\beta = 1.5$.
\begin{figure}
\centerline{
\includegraphics[width= 0.93\columnwidth]{Chargino-bound.pdf}
}
\caption{Constraints in the chargino mass - sneutrino lifetime plane on chargino pair production
with the subsequent decay chain $\tilde \chi_1^+ \rightarrow e^+ + \tilde \nu_e$, $\tilde \nu_e \rightarrow e^- + \mu^+$. The red shaded region is ruled out by the OPAL six lepton RPV search~\cite{Abbiendi:2003rn}, while the blue shaded region is ruled out by the ALEPH slepton search (acoplanar leptons plus missing transverse momentum)~\cite{Heister:2001nk}. We have fixed $m_{\tilde \nu_e} = 65$ GeV, $M_2 = \mu$, and $\tan\beta = 1.5$.}
\label{fig:chargino-bound}
\end{figure}
The blue shaded region in Fig.~\ref{fig:chargino-bound} is constrained by the ALEPH slepton search for acoplanar leptons plus missing transverse momentum~\cite{Heister:2001nk}. To place the bound we calculate as a function of $c \tau_{\tilde \nu_e}$ the efficiency for both sneutrinos emitted in the decays of the charginos to escape the detector before decaying. The chargino cross section scaled by this efficiency is then compared to the observed cross section limit on selectron pair production (Fig.~7a of Ref.~\cite{Heister:2001nk}). We note that the search~\cite{Heister:2001nk} employs a neutral veto to eliminate SM dilepton events with hard initial or final state radiation.
The invariant mass between any good charged track and neutral energy flow particle is required to be very small. If one of the sneutrinos has a displaced decay within the detector, it will be classified as a neutral hadron, and the event will be cut as a result of the neutral veto. In our simulation, we therefore select events in which both sneutrinos decay outside the detector.
We note that for larger lifetimes there can be an additional source of missing transverse momentum due to the mis-measured momentum of the sneutrino. As the displaced sneutrinos in our scenario will be classified as neutral hadrons, it is likely their momentum is determined by tracing hits in the calorimeter back to the event vertex. This will result in a measured momentum that is different than their true momentum. As such, the total momentum of visible objects in the event will not balance. We have simulated this effect and find that for lifetimes of order 100 cm or more the missing momentum can be $\sim 5-10$ GeV, large enough to pass the missing $p_T$ selection of the slepton search in Ref.~\cite{Heister:2001nk}. However, as discussed above, the neutral veto employed in this analysis requires both sneutrinos to escape the detector. Thus, the bound in Fig.~\ref{fig:chargino-bound} is not affected by this additional source of missing transverse momentum. See Ref.~\cite{Fan:2012jf} for further discussion of the effects of mis-measured momentum.
We conclude from Figs.~\ref{fig:sneutrino-bound} and \ref{fig:chargino-bound} that there is a large range of sneutrino LSP lifetime between $25\,{\rm cm} \lesssim c \tau_{\tilde \nu_e} \lesssim 300\,{\rm cm}$ for which light charginos with mass $m_{\tilde \chi^+_1} \gtrsim m_h/2$ are allowed by direct searches at LEP.
\paragraph{Light neutralino and selectron}
Thus far we have discussed only the sneutrino LSP and the very light chargino below 100 GeV. However, in addition to these states, there will also necessarily be a light neutralino and light selectron in the spectrum. We now discuss the possible constraints on these light states from LEP searches.
We first consider the lightest neutralino. To see that there is a light neutralino, consider the limit
$M_2 = \mu$, $\tan \beta = 1$ relevant for large modifications to the $h\gamma\gamma$ and $h\gamma Z$ couplings. For $|M_1| \gg M_2$ we obtain the following approximate expressions for the mass eigenvalues of the chargino-neutralino system:
\begin{eqnarray}
M_{\tilde \chi^+} & \approx & \left( M_2 -\frac{g v}{\sqrt{2} } , M_2 +\frac{g v}{\sqrt{2}} \right) \nonumber \\
M_{\tilde \chi^0} & \approx & \left( M_2 -\frac{g v}{\sqrt{2}}, M_2, M_2 +\frac{g v}{\sqrt{2}} , |M_1|
\right)
\end{eqnarray}
One observes that the lightest chargino and neutralino are degenerate in this limit. Moving away from this limit, one still always finds a light neutralino eigenvalue accompanying the light chargino, which can be heavier or lighter than the chargino depending on the sign and magnitude of $M_1$.
We must therefore carefully consider the constraints on the lightest neutralino at LEP in our scenario with a sneutrino LSP with a displaced decay.
The neutralino will decay to a neutrino and a sneutrino:
\begin{equation}
\tilde \chi^0_1 \rightarrow \nu_e + \tilde \nu_e.
\label{eq:neutralino-decay}
\end{equation}
For a displaced sneutrino decay consistent with the constraints derived in Figs.~\ref{fig:sneutrino-bound},~\ref{fig:chargino-bound}, neutralino pair production will result in the signature of neutral hadrons and missing momentum. Such a signature was never searched for by the experiments at LEP. Another possible channel we must consider is neutralino pairs produced in association with a hard photon. This can lead to events in searches for a single photon with missing energy. A number of searches were performed at LEP for this signature~\cite{Abdallah:2008aa,Abdallah:2003np,Abreu:2000vk,Achard:2003tx,Acciarri:1999kp,Abbiendi:2000hh,Abbiendi:1998yu,Barate:1997ue}. However, various cuts employed in these analyses will generally remove the signal events, {\it e.g.}, a veto on additional showers in the calorimeters~\cite{Abdallah:2003np}, which would be caused by the decay products of the sneutrino.
Finally, we also note that for small $\tan \beta$ the lightest neutralino coupling to the $Z$ boson is suppressed, and thus the pair production cross section of neutralinos will be very small at LEP. Thus, the LEP searches place no strong constraints on very light neutralinos decaying via (\ref{eq:neutralino-decay}).
We must also consider the selectron $\tilde e_L$ that accompanies the sneutrino LSP $\tilde \nu_e$.
In the limit that the soft masses satisfy $m^2_{\tilde E} \gg m^2_{\tilde L}$, the lightest slepton mass can be written as
\begin{eqnarray}
m_{\tilde e_L}^2 & \simeq & m_{\tilde \nu_e}^2 - m_W^2 \cos {2\beta}.
\end{eqnarray}
Even for fairly small values of $\tan \beta$, the selectron mass can be pushed above 100 GeV and thus outside the kinematic domain of the LEP searches. As an example, for a 65 GeV electron sneutrino, the selectron mass is $m_{\tilde e_L} \approx 100$ GeV for $\tan\beta \approx 4$.
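Explicitly, for $m_{\tilde \nu_e} = 65$ GeV and $\tan\beta = 4$ one has $\cos 2\beta = (1 - \tan^2\beta)/(1 + \tan^2\beta) = -15/17 \simeq -0.88$, so that $m_{\tilde e_L} \simeq \sqrt{(65\,{\rm GeV})^2 + 0.88\, m_W^2} \approx 100$ GeV.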
Even if the selectron is lighter than 100 GeV, it will still be allowed by LEP searches due to the displaced decay of the sneutrino LSP. The selectron will decay via
\begin{equation}
\tilde e_L \rightarrow
\begin{cases}
\tilde \chi^- + \nu_e \rightarrow e^- + \nu_e + \tilde \nu_e^*, \\
\tilde \chi^0 \, + e^- \rightarrow e^- + \nu_e + \tilde \nu_e^*,
\end{cases}
\end{equation}
followed by the subsequent displaced decay of the sneutrino according to (\ref{eq:sneutrino-decay}). Slepton pair production has a final state containing two acoplanar leptons plus missing transverse momentum, and so could show up in standard R-parity conserving slepton searches. However, as discussed in the case of the chargino above, these searches employ a neutral veto which would disqualify signal events with a sneutrino displaced decay within the detector. These searches can therefore place no additional constraints on our scenario beyond those presented in Figs.~\ref{fig:sneutrino-bound},~\ref{fig:chargino-bound}.
\paragraph{SUSY searches at Tevatron and LHC}
We have demonstrated that our scenario with a very light chargino and a displaced electron sneutrino LSP cannot be constrained by searches at LEP for sneutrino lifetimes $25\,{\rm cm} \lesssim c \tau_{\tilde \nu_e} \lesssim 300\,{\rm cm}$.
We now consider additional possible direct constraints on this scenario from searches at the Tevatron and LHC including standard and RPV SUSY searches.
The lightest states in the spectrum can be produced with substantial rates at hadron colliders. For example, the $\tilde \chi_1^+ \tilde \chi_1^-$ and $\tilde \chi_1^\pm \tilde \chi_1^0$ production cross sections are $O(1-10\, {\rm pb})$ for charginos/neutralinos in the mass range of interest.
Moreover, because we are considering values of $\mu$, $M_2$ of order 100 GeV, there will be additional light electroweak-inos with masses at the weak scale. Production of these particles at hadron colliders can yield via their cascade decays final states with multiple jets, leptons and missing energy.
However, standard searches at the Tevatron and LHC looking for these signatures
place quality cuts on visible final state objects~\cite{Khachatryan:2011tk,ATLAS:2012ona,ATLAS:2011ad,ATLAS:2013rla,Abazov:2007kg,Abulencia:2006in} that are generally not satisfied in events with the displaced decay of the neutral LSP in our scenario.
Lepton candidates are required to have good tracks originating from the primary vertex~\cite{Aad:2011mk,Aad:2010yt,CMS:pfalgo,Chatrchyan:2012xi}.
Furthermore, jets are disqualified if the associated charged tracks carry too small a fraction of the total jet energy~\cite{Aad:2011he,ATLAS:dataquality,CMS:calojetqual}.
But large impact parameter tracks and neutral trackless jets are expected in events with a displaced sneutrino LSP decaying to dilepton pairs in (\ref{eq:sneutrino-decay}) at the end of the cascade.
With regard to the lightest states in the spectrum, although the production rate is large, the missing transverse energy is small and the prompt leptons are soft. This is simply because these states are light ($\lesssim 100$ GeV) and the decays feature a small $\sim {\cal O}(10\, {\rm GeV})$ mass gap.
However, standard SUSY searches typically require large missing transverse energy and impose hard $p_T$ cuts on leptons.
Due to these considerations, we do not expect standard SUSY searches at hadron colliders to constrain our scenario.
Despite these reservations,
as a conservative check we have investigated the sensitivity of the tri-lepton plus missing energy search~\cite{ATLAS:2013rla, CMS:aro}, which currently provides the best sensitivity for gaugino and slepton pair production. The heavier electroweak-inos in our scenario decaying via $W,Z$ bosons can yield events with multiple hard prompt leptons and neutrinos. We simulate the ATLAS search~\cite{ATLAS:2013rla} for the following benchmark point (yielding an enhancement of $\sim 50\%$ in the $h\rightarrow \gamma\gamma$, $h\rightarrow \gamma Z$ decay rates):
\begin{eqnarray}
& \tan \beta = 1.5, ~ \mu = M_2 = 149\, {\rm GeV}, ~ M_1 = 1\,{\rm TeV}, & \nonumber \\
& m_{L_1} = 76\,{\rm GeV}, ~ m_{E_1} = 1\,{\rm TeV}. &
\end{eqnarray}
The spectrum contains a sneutrino LSP at $64$ GeV, along with a neutralino, chargino and slepton at $70$, $71$ and $82$ GeV, respectively. Heavy electroweak-inos at this point have masses of $\sim 150$ GeV and 230 GeV. We generate all possible electroweak-ino pair production processes and subsequent decays with {\sc MadGraph}, interfaced with {\sc Pythia}~\cite{Sjostrand:2006za} for showering. Jets are reconstructed with the anti-$k_T$ algorithm~\cite{Cacciari:2008gp} with radius parameter $R = 0.4$ using {\sc FastJet}~\cite{Cacciari:2011ma}. We assume that the sneutrino LSP does not yield missing energy as it decays within the detector. Furthermore, we conservatively assume that events are not rejected by any track or jet quality cuts. We find that this benchmark point is safe from the three-lepton search, which can be understood as a consequence of the smaller production rate of the heavier electroweak-inos in comparison with pure winos considered in Refs.~\cite{ATLAS:2013rla, CMS:aro}, as well as
a suppression of missing energy due to decay of the LSP inside the detector.
We further note that it is possible to raise the mass of the heavy neutralinos/chargino as desired by splitting $\mu, M_2$, while still obtaining a sub-100 GeV chargino, though at the expense of smaller corrections to the $h\rightarrow \gamma\gamma$, $h\rightarrow \gamma Z$ decay rates. This will further weaken the sensitivity of standard SUSY searches at the LHC.
Current multi-lepton RPV searches at LHC are also not sensitive to our scenario. As with standard SUSY searches, multi-lepton RPV searches also reject events with
jets that fail to pass the quality cuts described above~\cite{ATLAS:2012153,ATLAS:2013qla,CMS:multilep} and require all leptons to come from a common primary vertex~\cite{ATLAS:2013qla,CMS:multilep,CMS:oxa}.
\paragraph{Dedicated searches for long-lived neutral particles}
A number of dedicated searches for long-lived neutral particles have been performed in the past which could in principle be sensitive to our scenario or variations thereof; we now survey these searches.
At LEP, such searches were primarily limited to acoplanar photons and a single non-pointing photon plus missing transverse momentum (see, {\it e.g.}, Refs.~\cite{Heister:2002vh,Barate:1999gm,Barate:1998zp}), as motivated by decays of long-lived neutralinos to a photon and gravitino in gauge mediation scenarios.
The displaced sneutrino can potentially fake a photon if it decays near or within the electromagnetic calorimeter (ECAL). However, the muon that originates from the decay of the sneutrino will leave hits in the hadronic calorimeter (HCAL) and muon chambers. These events are thus subject to a veto designed to suppress cosmic backgrounds~\cite{Barate:1999gm}.
At the Tevatron, dedicated searches for Hidden Valley scenarios~\cite{Strassler:2006im} in events containing a displaced neutral particle decaying to two jets were performed by both CDF~\cite{Aaltonen:2011rja}
and D0~\cite{Abazov:2009ik}. These searches do not directly apply to our scenario, but could lead to constraints on scenarios in which the sneutrino decays hadronically. D0 furthermore performed searches for long-lived neutral particles decaying to $e^+ e^-$, $\mu^+ \mu^-$, and photon pairs~\cite{Abazov:2009ik}, \cite{Abazov:2006as}. However, these searches will not apply to displaced sneutrinos decaying to different flavor lepton pairs, such as $e\mu$ in our scenario.
The LHC experiments are pursuing a broader program of searches sensitive to displaced decays. ATLAS has set limits on long-lived neutral particles that decay in the outer hadronic calorimeter or in the muon spectrometer (MS)~\cite{ATLAS:2012av}. The analysis employs a dedicated muon cluster RoI (Region of Interest) trigger. A muon RoI is simply a coincidence of hits in the MS. The trigger requires at least three muon RoIs in a $\Delta R = 0.4$ cone.
The long-lived sneutrino decaying via (\ref{eq:sneutrino-decay}) will yield at most two RoIs. Therefore, the search is not applicable to our scenario.
Ref.~\cite{Aad:2012kw} performs a search for the Higgs decaying to
lepton jets~\cite{ArkaniHamed:2008qp,Falkowski:2010cm,Falkowski:2010gv}.
This involves, at an intermediate stage, long-lived neutral particles decaying to collimated muon pairs. There are several selections that make this search inapplicable to our scenario. In particular, the search reconstructs muon jets -- $\mu^+\mu^-$ pairs in a narrow cone, which are not present in our scenario.
There is an ATLAS search for a long-lived neutralino decaying
to a muon and two jets~\cite{Aad:2012zx}. The analysis reconstructs a displaced vertex from a muon and other charged particles. In particular, the displaced vertex is required to have at least 5 associated tracks. In our scenario, with a displaced sneutrino decaying to an $e\mu$ pair, there will only be two tracks for each displaced vertex, so this search will not apply.
Finally, CMS has carried out searches for the Higgs decaying to two long-lived particles which subsequently decay to $e^+e^-$, $\mu^+ \mu^-$ pairs~\cite{Chatrchyan:2012jna}, and dijet pairs~\cite{CMS:dis-dijet}.
Notably, the dilepton search does not select $e\mu$ resonances as would be present in our scenario. However, this search likely sets strong constraints on a very light sneutrino LSP with a displaced decay to $e^+e^-$ or $\mu^+ \mu^-$. The dijet search imposes a number of cuts which are not well-suited for our scenario, {\it e.g.}, hard jet $p_T$ and scalar $p_T$ sum $H_T$ cuts, as well as minimum vertex multiplicity selections.
\paragraph{\bf Testing the scenario at the LHC}
We now discuss how this scenario can be tested at LHC. The characteristic signature is a displaced $e\mu$ resonance arising when a sneutrino decays in the inner detector.
The experimental techniques necessary to search for this signature can likely be adapted from existing searches for prompt different-flavor dilepton resonances~\cite{Aad:2012bwa} and displaced same-flavor dilepton resonances~\cite{Chatrchyan:2012jna}.
For example, the displaced same-flavor search~\cite{Chatrchyan:2012jna} requires either two energy deposits in the ECAL
or two track segments in MS, along with associated displaced
tracks without track quality criteria imposed.
Similarly, the signal events in our scenario can be selected by one ECAL deposit + one MS track.
Although cosmic muons can fake a single MS track trigger, they can be suppressed by the further requirement of an associated displaced vertex pointing back to the primary vertex.
This search may be useful for sneutrino decays within about 50 cm~\cite{Chatrchyan:2012jna,CMS:tracking} where tracking is available.
If one or both of the sneutrinos in our scenario decay near the ECAL and consequently some tracks are not reconstructable, an alternative trigger based on non-standard objects such as trackless jets (objects characterized by calorimeter hits isolated from tracks in the inner detector) can be employed. ATLAS has developed dedicated triggers for trackless jets utilizing either 1) an associated trackless muon, or 2) large HCAL to ECAL energy deposition~\cite{Aad:2013txa}.
Finally, we note that the displaced vertex reconstruction techniques utilizing capabilities of different parts of detectors (not just of tracker) discussed in Ref.~\cite{Meade:2010ji} may also be useful for our scenario.
\paragraph{\bf Other scenarios for light charginos}
Here we wish to make some preliminary comments on variations of the basic scenario considered above, reserving a detailed investigation for future work.
If the sneutrino has a decay mode different from (\ref{eq:sneutrino-decay}), there can be additional applicable constraints. For instance,
if the sneutrino decays to a dielectron or dimuon pair, then the CMS search for two displaced $e^+e^-$ or $\mu^+ \mu^-$ resonances~\cite{Chatrchyan:2012jna} will place stronger constraints on the lower end of the sneutrino lifetime range than the LEP RPV searches displayed in Fig.~\ref{fig:sneutrino-bound}. This highlights the power of dedicated searches for displaced neutral particles.
However, final states containing displaced resonances with $\tau$ leptons are not explicitly covered by the CMS displaced same-flavor dilepton search~\cite{Chatrchyan:2012jna}. In particular, the search requires that the reconstructed dilepton resonance momentum vector is collinear with the vector pointing from the primary vertex to the
displaced vertex, which will generally not be satisfied as the $\tau$ shares its energy among the final state lepton and neutrinos. Thus, sneutrinos decaying at a macroscopic distance to $e\tau$, $\mu\tau$, or $\tau\tau$ may also still be viable. Dedicated searches for these modes should be performed.
If the sneutrino decays to $q \bar q$ through a $LQD$ operator, then the situation is somewhat more complicated.
RPV $UDD$ searches at LEP involving final state jets generally require 8 or more good charged tracks
({\it i.e.}, tracks with small impact parameter).
Despite the fact that the sneutrino is displaced, an appreciable efficiency to meet the track multiplicity requirements is expected even for lifetimes $\gtrsim 10$ cm because each quark can yield $O(10)$ charged particles during hadronization. Preliminary simulations confirm this expectation, although there is likely still an open window for this scenario from LEP searches for sneutrino lifetimes of order 100 cm. At the LHC, searches for displaced vertices formed purely from jets are difficult due to the multijet background.
Either an extra muon is required~\cite{Aad:2012zx,Aad:2013txa}, or dedicated vertex track requirements are utilized~\cite{CMS:dis-dijet}.
The latter search, which likely has some sensitivity to this scenario, imposes strong $p_T$ and $H_T$ cuts which are not ideal for very light, long-lived sneutrinos decaying to dijet pairs. A search targeted at low-mass, long-lived dijet resonances should be developed.
Another scenario deserving of consideration contains a chargino NLSP and a neutralino LSP with a displaced RPV decay (sneutrino and slepton are heavy). In this scenario, the chargino decays via $\tilde \chi^+_1 \rightarrow \tilde \chi^0 W^{+*}$. The prompt hadronic decays of the virtual $W^{+*}$ boson will lead to many good charged tracks, implying that the track multiplicity selection will be highly efficient. It therefore appears difficult to hide a light chargino in this scenario.
\section{Precision electroweak data}
As we have demonstrated above, the very light charginos in our scenario can generate sizable one-loop corrections to the effective couplings $h\gamma\gamma$ and $h\gamma Z$. One might therefore expect similarly large one-loop contributions to the gauge boson vacuum polarizations, which can affect the predictions for precision electroweak observables. To investigate this issue, we have performed a global fit to the precision data along the lines of the Gfitter group~\cite{Baak:2012kk}. A detailed description of the experimental observables and theoretical predictions entering in the fit can be found in Ref.~\cite{Batell:2012ca}.
The modifications to the $W,Z,\gamma$ vacuum polarizations are encoded in the $STUVWX$ extended oblique parameters of Refs.~\cite{Maksymyk:1993zm,Burgess:1993mg}. This extended formalism is necessary since we are considering chargino masses of order the $Z$ boson mass and lighter. In the Appendix we report the predictions for the observables entering into our fit in terms of these oblique parameters. For the chargino and neutralino contributions to the gauge boson vacuum polarizations we use the results of Ref.~\cite{Martin:2004id}.
In Fig.~\ref{fig:EWPD1} we display in black the $68\%$, $95\%$ C.L. contours from our fit to the precision data for the case of $M_2 = \mu$, $M_1 = 1$ TeV in the chargino mass - $\tan \beta$ plane. For comparison we have also overlaid in red contours of the chargino contribution to the diphoton signal strength. The best fit in this region is obtained for $\tan\beta \approx 5$ and a chargino mass around 80 GeV, although the $\chi^2$ is essentially flat in $m_{\tilde \chi^+_1}$ above this value. The total $\chi^2$ at the best fit point, $\chi^2_{\rm min} = 19.7$, is somewhat smaller than the SM value, $\chi^2_{\rm SM} = 20.7$.
We observe in Fig.~\ref{fig:EWPD1} that the theoretical description of the precision data becomes worse for lighter charginos and small values of $\tan \beta$. There are two observables in this region (beyond the already discrepant $A_{FB}^b$ and $R_b$) that display a slight $\sim 2 \sigma$ tension: 1) The $W$ boson mass, which the SM predicts to be $(m_W)_{\rm SM} = 80.362 \pm 0.007$ GeV, becomes smaller as a result of a small negative $U$ parameter, widening the gap with the experimental value $(m_W)_{\rm exp} = 80.385 \pm 0.015$ GeV. 2) The leptonic asymmetry parameter $A_\ell$, predicted in the SM to be $(A_\ell)_{\rm SM} = 0.1472 \pm 0.0007$, becomes smaller due to a small positive $X$ parameter, increasing the tension with the experimental average $(A_\ell)_{\rm exp} = 0.1499 \pm 0.0018$.
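For orientation, the quoted SM predictions and measurements by themselves correspond to pulls of roughly $1.4\sigma$ each; the chargino-induced shifts in $U$ and $X$ then push these toward the $\sim 2\sigma$ level quoted above. A minimal sketch of the pull computation (adding theory and experimental uncertainties in quadrature):
\begin{verbatim}
import math

def pull(pred, sig_pred, exp, sig_exp):
    # deviation in units of the combined uncertainty
    return (exp - pred)/math.hypot(sig_pred, sig_exp)

print(pull(80.362, 0.007, 80.385, 0.015))    # m_W: ~1.4
print(pull(0.1472, 0.0007, 0.1499, 0.0018))  # A_l: ~1.4
\end{verbatim}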
\begin{figure}
\centerline{
\includegraphics[width= 0.93\columnwidth]{Chargino-chi2.pdf}
}
\caption{Precision electroweak data $68\%$, $95\%$ C.L. contours (black) in the $m_{\tilde \chi_1^+} - \tan\beta$ plane. We also display in red contours of the chargino contribution to the diphoton signal strength. Here we have fixed $M_2 = \mu$, $M_1 = 1$ TeV. }
\label{fig:EWPD1}
\end{figure}
The tension in this region can be relaxed with a small positive $T$ parameter from a source other than the charginos.
A new contribution of order $T \sim 0.05 - 0.1$ raises the values of $m_W$ and $A_\ell$,
leading to a global fit that resembles that of the SM.
As an example, a light, highly-mixed stop
can easily yield the required contribution to $T$, while at the same time not upsetting the observed Higgs production rates. This is because the
coupling of the light stops to the Higgs is approximately proportional to $(1 - A_t^2/m_{U_3}^2)$ and therefore only mildly disturbs the gluon fusion rate if $A_t \sim m_{U_3}$.
For instance, with the inputs $m_{Q_3} = 200$ GeV, $m_{U_3} = 1.7$ TeV, $A_t = 1.2$ TeV, $\mu = 150$ GeV, $\tan \beta = 1.5$,
one obtains a stop with mass close to the top, a contribution $T \sim 0.07$, as well as a small $10\%$ enhancement of the gluon fusion rate. See the discussion in Ref.~\cite{Carena:2013iba} for more details.
A detailed investigation of the collider bounds on the light stop and sbottom in this case is beyond the scope of this paper, although the presence of a displaced sneutrino LSP in the decay chain implies that these events 1) contain significantly less missing energy than in standard R-parity conserving scenarios and 2) may be subject to the jet and charged track quality cuts described above.
\section{Discussion and outlook}
In this work we have investigated a scenario with a very light chargino in the mass range $m_h/2 < m_{\tilde \chi^+} \lesssim 100$ GeV, below the naive kinematic reach of LEP2. A chargino in this mass range can lead to dramatic deviations in the $h\rightarrow \gamma \gamma$ and $h\rightarrow Z \gamma$ decay rates, which could be measured at the LHC and future high energy colliders. A variety of LEP2, Tevatron, and LHC searches place strong constraints on this hypothesis, but the analyses are model dependent and based on assumed decay channels of the charginos. We presented a concrete scenario which is not covered by existing searches: a chargino decaying to sneutrino LSP, which subsequently decays through a small RPV coupling with a macroscopic decay length of order $10\,{\rm cm} - 100\,{\rm cm}$. The charged particles in the sneutrino decay products generically fail to point back to the primary vertex, and as such do not pass basic track impact parameter selection cuts required at LEP. Furthermore, standard SUSY searches at LEP looking for missing transverse momentum are weakened by a neutral veto which will be in effect unless both sneutrinos decay outside the detector.
The 125 GeV Higgs mass can be obtained in this scenario in several ways. In the region of low $\tan\beta$ relevant for modifications to the $h\rightarrow \gamma\gamma, Z \gamma$ rate, it is difficult to obtain a Higgs mass of 125 GeV in the MSSM with superpartners at the TeV scale. One possibility is to simply raise the masses of the scalars (except for the light sneutrino and slepton), and in particular the stops into the multi-TeV range. Of course, such heavy stops require a finely-tuned weak scale, and in this scenario one cannot rely on the stops to improve the precision electroweak data. For slightly larger values $\tan\beta \gtrsim 4$ it is also possible to obtain the observed Higgs mass in the MSSM with highly mixed stops at the TeV scale. Finally, one can investigate extensions of the MSSM with new $F$- or $D$-term contributions to the Higgs mass, see for example Refs.~\cite{Kane:1992kq,Espinosa:1992hp,Ellwanger:2009dp,Batra:2003nj,Lu:2013cta}.
These extensions can lead to modifications of the chargino interactions with the Higgs (see, {\it e.g.}, Refs.~\cite{Huo:2012tw},\cite{Gherghetta:2012gb}).
While our discussion has focused on charginos, similar considerations apply to other hypothetical light charged particles. With regards to the corrections to $h\rightarrow \gamma\gamma$, the one-loop form factors for scalars, fermions, and vectors all exhibit a rise in magnitude as the mass of the charged particle in the loop approaches $m_h/2$, as we have illustrated for the case of the fermion in Fig.~\ref{fig:form-factor}. The basic scenario for a hidden chargino described in this work can, with a few modifications, be used to hide other light charged particles. For example, a light stau NLSP decaying to
a long-lived neutralino LSP will be weakly constrained if the neutralino lifetime is $O(10\,{\rm cm} - 100\,{\rm cm})$. It would also be interesting to consider non-supersymmetric scenarios. For example, light vector-like leptons above $\sim$100 GeV can lead to large deviations in $h\rightarrow \gamma\gamma$ at the expense of a low scale $\sim 10$ TeV vacuum instability
~\cite{Joglekar:2012vc,ArkaniHamed:2012kq,Reece:2012gi,Batell:2012mj,Kearney:2012zi,McKeen:2012av,Davoudiasl:2012ig,
Batell:2012zw,Fan:2013qn,Joglekar:2013zya,Altmannshofer:2013zba}.
However, if the charged states are lighter, below 100 GeV, the same effect can be obtained with much weaker couplings of the vector-like leptons to the Higgs, allowing for a much higher UV completion scale.
Given this window in which light charginos, and by extension various other hypothetical light charged particles with displaced neutral particles in their decay products, are unconstrained, it is important that the LHC experiments develop dedicated searches to probe this scenario. We have touched on a few possible strategies to cover the scenario proposed here, although detailed feasibility studies and concrete search strategies are still required. Furthermore, it should be possible to reanalyze the LEP2 data to test this scenario. While the current motivation for such an effort is simply to cover an interesting open window in SUSY parameter space,
we stress that any future definitive observation of a discrepancy in the $h\rightarrow \gamma\gamma$ rate will
mandate reconsideration of the possibility of charged particles below 100 GeV.
\subsubsection*{\bf Acknowledgements}
We thank Y.~Gershtein, P.~Ko, H.~M.~Lee, S.~Martin, T. Roy, P.~Saraswat, and L.~T.~Wang for helpful discussions.
Work at ANL is supported in part by the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
B.B. is supported by the NSF under grant PHY-0756966 and the DOE Early Career Award under grant DE-SC0003930.
S.J. thanks KIAS Center for Advanced Computation for providing computing resources.
B.B. and C.W. thank the Aspen Center for Physics and the KITP, Santa Barbara, where part of the work has been done.
B.B. also thanks KIAS and the 2013 Santa Fe workshop INFO, sponsored by Los Alamos National Laboratory, where part of this work was completed.
\section{Introduction}
Nuclei close to the particle drip lines are very weakly bound and can have properties that are exotic compared
to our knowledge of stable nuclei, which makes them fascinating quantum systems exhibiting threshold effects, such
as halo states~\cite{halo-exp,halo-87,halo,halo-rev}. There have been numerous theoretical studies of halos in weakly bound nuclei, but most of them concern spherical systems~\cite{Mizutori,schunck,rotival}.
In particular, for weakly-bound deformed nuclei, exotic deformed halos with surface deformations decoupled from those of the cores have been predicted~\cite{misu,sgzhou,pei2013}. It is known that the continuum degree of freedom plays an important role in weakly-bound nuclei~\cite{continuum-jacek,continuum,forssen,hagen,pei2013}. Therefore, the theoretical descriptions of weakly-bound deformed nuclei should precisely
take into account the continuum effect, deformations and large spatial extensions. In this context, the self-consistent HFB approach of density functional theory with continuum couplings is a suitable method. However, the exact
treatment of continuum states in the HFB approach for deformed nuclei is rare and much more complicated
compared to the spherical case~\cite{matsuo2009}.
In the HFB approach, the continuum can be treated either by the discretization method or by exact solutions with scattering boundary conditions ~\cite{Michel,matsuo2009}.
In the discretization method, the continuum can be discretized on a discrete set of basis functions~\cite{hfbtho,hfodd} or on the coordinate-space lattice~\cite{hfbrad,teran}.
Conventionally, the HFB solvers are based on the HO basis, which is very efficient but not suitable for describing weakly-bound systems.
It has been demonstrated that the coordinate-space HFB approach
is very precise for describing weakly-bound nuclei and continuum effects~\cite{hfbrad,teran,pei2011}.
For halo structures with large spatial extensions, the coordinate-space HFB descriptions definitely need
large box sizes. In addition, it is known that the resulting number of continuum states increases significantly ($\propto L^3$)
as the box size $L$ increases~\cite{continuum-jacek}. The very dense quasiparticle energy spectrum can provide good resolution of the resonant and
non-resonant continuum. Therefore it will be very useful to study weakly-bound deformed nuclei by the HFB approach in large boxes.
However, the computational cost
increases tremendously with the box size in deformed cases. Fortunately, due to the development of supercomputing capabilities,
deformed HFB descriptions in large boxes can be realized through
large-scale parallel calculations~\cite{pei2012}.
In deformed weakly-bound nuclei, the subtle interplay
among surface deformations, surface diffuseness, and continuum coupling can result in exotic structures, and the theoretical studies need
precise HFB solutions. For example, there has been a debate about the existence of two-neutron deformed halos based on the three-body model~\cite{nues},
while the existence of one-neutron deformed halos has been confirmed experimentally~\cite{Ne31,halo-rev}.
In our previous work~\cite{pei2013}, we have shown that large-box HFB calculations give
an exotic ``egg"-like halo structure in $^{38}$Ne showing the core-halo deformation decoupling, which is associated with the phase space decoupling in the quasiparticle energy spectra. It was demonstrated that the near-threshold non-resonant continuum is mainly responsible for such an exotic halo structure~\cite{pei2013}. Thus the quasiparticle spectrum provides an important test for the precision of HFB solutions.
Some earlier studies showed that calculations with small box sizes may not be sufficient to describe the pairing properties~\cite{grasso2001} and the quasiparticle spectrum of pairing occupation numbers~\cite{matsuo2009}. Besides, one may be cautious about the continuum discretization method for descriptions of broad quasiparticle resonances, an issue from which the continuum-discretized coupled-channels (CDCC) reaction model has been suffering~\cite{CDCC}. It is also important to evaluate the influence of the precision of the HFB solutions on the prediction of drip lines.
Indeed, the usually adopted accuracy benchmark of HFB calculations, comparing total binding energies, may not be sufficient for describing weakly-bound nuclei in detail.
Our motivation in this paper
is to further examine the HFB calculations of weakly bound deformed nuclei and study the box size dependence in several aspects,
including energetic properties, deformations, densities and pairing densities, and the near-threshold quasiparticle spectra of resonances and continuum,
with much larger box sizes compared to our previous work. This
will be useful for further studies of novel structures and excitation modes in weakly-bound deformed nuclei so that one is sure that they do not arise from
uncontrolled approximations.
\section{Theoretical Method}
In this work, the Skyrme-HFB equation is solved by the {\sc hfb-ax} code~\cite{Pei08,pei2011,pei2012} within a 2D lattice box, based on B-spline techniques for
axially symmetric deformed nuclei~\cite{teran}. To obtain sufficient accuracy, the adopted 2D box size is very large, up to 36$\times$36 fm.
The maximum mesh size is 0.6 fm and the order of the B-splines is 12.
These are the first deformed HFB calculations with such a large box size,
while in our previous work the adopted 2D box size is 30$\times$30 fm~\cite{pei2013}.
For calculations employing large boxes and small lattice spacings,
the discretized continuum spectra would be very dense and provide good resolution.
Because the computing cost is extremely high, the hybrid MPI+OpenMP parallel programming is implemented to get
converged results within a reasonable time~\cite{pei2012}. It has to be noted that the {\sc hfb-ax} code has
been improved remarkably for descriptions of large systems~\cite{pei2012}, compared to its initial version~\cite{Pei08}.
Calculations were performed in China's top supercomputer Tianhe-1A.
For the particle-hole interaction channel, the SLy4 force~\cite{sly4} is adopted as
it is one of the mostly used parameterizations for neutron rich nuclei.
For the particle-particle channel, the density dependent surface pairing interaction
is used~\cite{mix-pairing}. The pairing strengths are fitted to the neutron gap of $^{120}$Sn.
The HFB equation in the coordinate-space representation can be written as
\begin{equation}\label{1}
\left[
\begin{array}{cc}
h(\rr)-\lambda & \Delta(\rr) \\
\Delta^{*}(\rr) & -h(\rr)+\lambda \\
\end{array}
\right]
\left[
\begin{array}{c}
U_{k}(\rr) \\
V_{k}(\rr) \\
\end{array}
\right]
= E_k
\left[
\begin{array}{c}
U_{k}(\rr) \\
V_{k}(\rr) \\
\end{array}
\right],
\end{equation}
where $h$ is the Hartree-Fock Hamiltonian; $\Delta$ is the pairing potential;
$U_k$ and $V_k$ are the upper and lower components of quasi-particle
wave functions, respectively; $E_k$ is the quasi-particle energy; and
$\lambda$ is the Fermi energy (or chemical potential).
For bound systems, $\lambda < 0$ and the self-consistent densities and fields
are localized in space.
For $|E_k|<-\lambda$, the eigenstates are discrete and
$V_k(\rr)$ and $U_k(\rr)$ decay exponentially.
The quasiparticle continuum corresponds to $|E_k|>-\lambda$. For those states, the upper component of the
wave function always has a scattering asymptotic form. By applying the
box boundary condition, the continuum becomes discretized and one obtains
a finite number of continuum quasi-particles. In principle, the box solution
representing the continuum can be close to the exact
solution when a sufficiently big box and small mesh size are adopted.
Based on the quasiparticle wave functions, the particle density $\rho(\mathbf{r})$ and the pairing density $\tilde{\rho}(\mathbf{r})$ can be written as
\begin{equation}
\begin{array}{c}
\rho(\mathbf{r})=\sum_k V_k^{*}(\rr)V_k(\rr) \vspace{3pt}\\
\tilde{\rho}(\mathbf{r})=-\sum_k V_k(\rr)U_k^{*}(\rr)
\end{array},
\end{equation}
where the sum runs over quasiparticle states below an energy cutoff of ($60-\lambda$) MeV. We also discuss
the particle occupation numbers $n_k$ and the pairing occupation numbers $\tilde{n}_k$, defined as:
\begin{equation}
\begin{array}{c}
n_k = \int V_k^{*}(\rr)V_k(\rr) d^3 \rr \vspace{3pt}\\
\tilde{n}_k=-\int V_k(\rr)U_k^{*}(\rr) d^3 \rr
\end{array}.
\label{occupationn}
\end{equation}
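The structure of Eqs.~(\ref{1})--(\ref{occupationn}) can be illustrated with a schematic one-dimensional box diagonalization. The following Python sketch, with an assumed Woods-Saxon-like mean field and a Gaussian pairing field chosen only for illustration (and unrelated to the actual Skyrme and axial B-spline ingredients of {\sc hfb-ax}), shows how the discretized quasiparticle spectrum, the densities and the occupation numbers follow from the $U_k$ and $V_k$ components:
\begin{verbatim}
import numpy as np

# schematic 1D box model (illustrative values only)
L, N = 30.0, 300                  # box (fm), points
x = np.linspace(0.1, L, N); dx = x[1] - x[0]
t = 20.7                          # ~hbar^2/2m (MeV fm^2)
V = -50.0/(1.0 + np.exp((x - 4.0)/0.7))  # mean field
D = 1.5*np.exp(-((x - 4.0)/2.0)**2)      # pairing field
lam = -0.5                               # Fermi energy

# kinetic term: finite differences, box boundaries
T = t*(2*np.eye(N) - np.eye(N, k=1)
       - np.eye(N, k=-1))/dx**2
h = T + np.diag(V)

# HFB matrix of Eq. (1) and its diagonalization
H = np.block([[h - lam*np.eye(N), np.diag(D)],
              [np.diag(D), -(h - lam*np.eye(N))]])
E, W = np.linalg.eigh(H)
U, Vc = W[:N, :], W[N:, :]

keep = (E > 0.0) & (E < 60.0 - lam)      # cutoff
rho  =  np.sum(Vc[:, keep]**2, axis=1)   # density
rhot = -np.sum(Vc[:, keep]*U[:, keep], axis=1)
n_k  =  np.sum(Vc[:, keep]**2, axis=0)   # occupations
\end{verbatim}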
\section{Calculations and discussions}
\subsection{Density and pairing density distributions at nuclear surfaces}
Firstly we study the particle density and pairing density distributions of weakly-bound deformed nuclei.
Here we focus on the deformed Ne and Mg isotopes near the neutron drip line.
Experimentally, $^{34}$Ne and $^{40}$Mg are the most neutron-rich Ne and Mg isotopes observed so far~\cite{nature-mg}.
Theoretically, there is a possibility of deformed neutron halos in this region~\cite{sgzhou,pei2013}.
In Fig.\ref{mg42} we display the neutron density profiles of $^{42}$Mg obtained by different solving methods.
The comparison between the HFB solutions based on the Harmonic Oscillator (HO) basis, the transformed HO (THO) basis~\cite{hfbtho} and the coordinate-space HFB solutions of {\sc hfb-ax} is shown on a logarithmic scale to emphasize the different surface asymptotics. The densities are
displayed along
the cylindrical coordinates $z$-axis (the axis of symmetry) and $r$-axis (the axis perpendicular to $z$-axis and $r$=$\sqrt{x^2+y^2}$), respectively.
The differences between the density profiles $\rho_{z({r=0})}$ and $\rho_{r({z=0})}$ actually reflect the surface deformations.
It can be seen that the results obtained from the three methods have very different asymptotic behaviors at large distances. With the HO basis, the density distribution decays very rapidly
due to the Gaussian asymptotics of the HO basis.
To remedy this, the THO basis was introduced to improve the descriptions of weakly-bound nuclei compared to the HO basis~\cite{mario03}. However, the THO basis calculations still fail to
reproduce the halo structure and the surface deformation in $^{42}$Mg compared to the coordinate-space calculations.
Based on the comparison, the accuracy and advantages of coordinate-space calculations have been clearly illustrated for descriptions of weakly-bound deformed nuclei.
\begin{figure}[t]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.5\textwidth,clip]{density_mg42.eps}}
\caption{(Color online) The neutron density profiles of $^{42}$Mg are calculated by HFB-HO, HFB-THO and HFB-AX. The HFB-HO and HFB-THO calculations are based on 30 HO shells and the HFB-AX calculations are done within a box of 30 fm. The density distributions are displayed along the cylindrical coordinates $z$-axis (solid line) and $r$-axis (dashed line), respectively.}
\label{mg42}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{pairing_density_ne_all.eps}\\
\caption{(Color online) The neutron density and neutron pairing density profiles of $^{34}$Ne, $^{36}$Ne and $^{38}$Ne. The density $\rho$ and pairing density $\tilde{\rho}$ distributions are displayed along the cylindrical coordinates $z$-axis (solid line) and $r$-axis (dashed line), respectively.}
\label{neiso}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{pairing_density_mg_all.eps}\\
\caption{(Color online) The neutron density and neutron pairing density profiles of $^{40}$Mg, $^{42}$Mg and $^{44}$Mg. The density $\rho$ and pairing density $\tilde{\rho}$ distributions are displayed along the cylindrical coordinates $z$-axis (solid line) and $r$-axis (dashed line), respectively.}
\label{mgiso}
\end{figure}
In Figs. \ref{neiso} and \ref{mgiso} we show the neutron density distributions of Ne and Mg isotopes near the neutron drip line.
It can be seen that, as the neutron number increases, the surface densities are in general enhanced and the halo structures become more pronounced.
In the Ne isotopes, the surface deformations show an interesting evolution. In Fig.~\ref{neiso}, $^{34}$Ne has a spherical core and a small surface deformation;
$^{36}$Ne has a deformed core and a small surface deformation; $^{38}$Ne has a spherical core plus a well deformed prolate halo.
Such an ``egg''-like halo structure in $^{38}$Ne has been pointed out in the previous work as a result of
the subtle interplay between the surface diffuseness, surface deformation and continuum couplings~\cite{pei2013}.
In Mg isotopes, all the cores and the skins/halos are well prolate deformed. It has to be noted that the surface deformations increase in both Ne and
Mg isotopes close to the neutron drip line. It is known that the surface deformation and the surface diffuseness are mainly determined by contributions from
the near-threshold non-resonant continuum~\cite{pei2013}. Therefore the increase of the surface deformations can be attributed to the enhanced non-resonant
continuum close to the neutron drip line.
In Figs. \ref{neiso} and \ref{mgiso}, we have also displayed the neutron pairing density distributions $\tilde{\rho}_r$ and $\tilde{\rho}_z$ of Ne and Mg isotopes.
As we have pointed out earlier, the deformation decoupling also occurs in the pairing density distributions~\cite{pei2013}.
In the spherical case, the much larger spatial extensions of the pairing densities have
been observed in drip line nuclei, due to the different asymptotic behaviors of $\rho$ and $\tilde{\rho}$ at large distances~\cite{continuum-jacek}.
In the deformed cases, it can be seen that not only the spatial extensions but also the surface deformations of the pairing density distributions are
larger than those of the density distributions. The deformed halo structures are more significant in the pairing density distributions.
This can also be interpreted as a consequence of the non-resonant continuum having a much larger influence on the pairing properties than
on the normal densities~\cite{pei2011,zhang}.
\begin{figure}[htb]
\includegraphics[width=0.45\textwidth]{Density_and_Pairing_density_of_Ne38.eps}\\
\caption{(Color online) The neutron density $\rho_{n}$ (top), and neutron pairing density $\tilde{\rho}_{n}$ (bottom) of $^{38}$Ne calculated with box sizes of 24 fm, 30 fm and 36 fm. The density and pairing density distributions are displayed along the cylindrical coordinates $z$-axis (solid line) and $r$-axis (dashed line), respectively. }
\label{denbox}
\end{figure}
Fig.~\ref{denbox} displays the neutron density $\rho(r)$ and the neutron pairing density $\tilde{\rho}(r)$ of $^{38}$Ne calculated with the box sizes of 24 fm, 30 fm and 36 fm, respectively. It can be seen that, for different box sizes, the densities have the same asymptotics and surface deformations before meeting the box boundaries. This implies that the
deformed halo structure of $^{38}$Ne is rather robust as the box size changes.
We can see that the box size of 30 fm is sufficient to give rise to the deformed halo in the density distributions. For the pairing density distribution with
a much larger spatial extension, however, we can see that the HFB calculations require an even larger box.
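The box sizes needed can be estimated from the expected asymptotic form of the density. Assuming the spherical-like tail $\rho(r)\propto e^{-2\kappa r}/r^2$ with $\kappa=\sqrt{2m_n|\lambda_n|}/\hbar$, the neutron Fermi energy of $^{38}$Ne ($\lambda_n\approx -0.1$ MeV, cf. Table~\ref{ne38tab}) gives a rough estimate:
\begin{verbatim}
import math
hbarc, m_n = 197.327, 939.565  # MeV fm, MeV
lam_n = -0.103                 # MeV (38Ne, 30 fm box)
kappa = math.sqrt(2*m_n*abs(lam_n))/hbarc
print(kappa, 1/(2*kappa))      # ~0.07 fm^-1, ~7 fm
\end{verbatim}
With such a decay constant the density falls by only a factor of $e$ every $\sim 7$ fm, which makes the large boxes of 30--36 fm used here appear necessary before the halo tail becomes negligible.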
\subsection{Quasiparticle spectrum near thresholds}
It is known that the near-threshold quasiparticle spectrum, in particular the
non-resonant continuum, is mainly responsible for the halo structures in weakly-bound nuclei~\cite{forssen,zhang,pei2013}.
In contrast, the conventional interpretation of halos in terms of weakly-bound single-particle wave functions could be oversimplified
for two-neutron halos.
Therefore it is of great interest to study the structure of the quasiparticle spectrum of resonances and continuum, especially for
deformed weakly-bound nuclei.
To compare the different quasiparticle spectra calculated by the HO basis, THO basis and the coordinate-space discretization method,
we display the smoothed particle occupation numbers $n_{i}$ [as defined in Eq.~(\ref{occupationn})], as shown in Fig.~\ref{smooth1}.
The neutron quasiparticle occupation numbers $n_{i}$ of the $\Omega^{\pi}=1/2^{\pm}$ states of $^{42}$Mg are smoothed with a Lorentzian shape function and a smoothing parameter of 50~keV.
The smoothing method is described in Ref.~\cite{pei2011}.
For quasiparticle resonances, the related Nilsson labels are given.
The quasiparticle spectra from HFB-HO have a significant shift compared to HFB-AX, and the THO basis shows improved agreement compared to HFB-AX.
Indeed, HFB-THO has better surface asymptotics than HFB-HO for $^{42}$Mg as shown in Fig.\ref{mg42}.
However, both the HO and THO calculations have problems representing the 1/2[321] state, which is a broad quasiparticle resonance.
The HFB-HO calculations tend to underestimate the widths of broad resonances.
It is known that the discretized spectra of quasiparticle resonances can roughly have the Breit-Wigner shape~\cite{pei2011}.
For quasiparticle states with narrow widths, the basis expansion
method should be fine. It is obvious that the coordinate-space method is superior to the HO or THO basis expansion method
in describing the broad quasiparticle resonances. This is because the coordinate-space HFB calculations produce much denser quasiparticle spectra than
the HO and THO calculations.
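The smoothed spectra shown in Figs.~\ref{smooth1}--\ref{ne38qp2} can be reproduced schematically by folding the discrete occupation numbers with a Lorentzian; a minimal sketch (the normalization convention is assumed here and may differ from that of Ref.~\cite{pei2011}):
\begin{verbatim}
import numpy as np

def smooth(E_k, n_k, E, gamma=0.05):
    # fold occupations n_k at energies E_k (MeV)
    # with Lorentzians of width gamma (50 keV)
    lor = (gamma/np.pi)/((E[None, :]
           - E_k[:, None])**2 + gamma**2)
    return np.sum(n_k[:, None]*lor, axis=0)

# made-up quasiparticle energies and occupations
E_k = np.array([1.2, 1.25, 2.1, 4.0])
n_k = np.array([0.30, 0.20, 0.60, 0.10])
E = np.linspace(0.0, 6.0, 601)
curve = smooth(E_k, n_k, E)
\end{verbatim}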
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{res-ho-tho-ax-mg42.eps}\\
\caption{(Color online) The smoothed particle occupation numbers of $\Omega^{\pi}=1/2^{\pm}$ neutron quasiparticle states of $^{42}$Mg.
The quasiparticle spectra are obtained by HFB-HO (dotted line) and HFB-THO (dashed line) with 30 shells of the HO/THO basis, and by HFB-AX (solid line) with a box size of 30 fm. }
\label{smooth1}
\end{figure}
Next we demonstrate that larger box calculations can further improve the descriptions of broad quasiparticle resonances and non-resonant continuum.
To distinguish the quasiparticle resonances and the continuum, we display the smoothed neutron quasiparticle spectra of $^{38}$Ne obtained with box sizes of 24 fm, 30 fm and 36 fm,
as shown in Fig.\ref{ne38qp1}.
It is known that the energies of quasiparticle resonances are stationary as the box size is varied, while those of continuum states are not~\cite{doba1984}.
By comparing calculations with different boxes one can thus obtain information about the non-resonant continuum contributions.
It is seen that the quasiparticle spectrum above the threshold consists of several resonances
and a remarkable non-resonant continuum background below 3 MeV. In our previous study~\cite{pei2013}, the near-threshold non-resonant continuum was found to be mainly responsible
for the ``egg''-like deformed halo structure in $^{38}$Ne.
In Fig.~\ref{ne38qp1}, the distributions of the 1/2[200] and 1/2[211] states do not change as the box size varies and they are obviously quasiparticle resonances.
However, it is difficult to identify the broad resonance 1/2[300] around 4 MeV. By increasing the box sizes, it can be seen that
the resonance 1/2[300] is stationary and becomes more and more like a Breit-Wigner shape. There are some small peaks close to the threshold that fade away
as the box size increases. Actually they should be completely dissolved into the continuum in even larger box calculations
or exact HFB solutions.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{res_ax_ne38.eps}\\
\caption{(Color online) The smoothed particle occupation numbers of $\Omega^{\pi}=1/2^{\pm}$ neutron quasiparticle states of $^{38}$Ne obtained by HFB-AX with box sizes of 24 fm, 30 fm, and 36 fm. The continuum thresholds, i.e. $-\lambda_{n}$, are given, where $\lambda_{n}$ is the neutron Fermi energy.}
\label{ne38qp1}
\end{figure}
\begin{table}[htb]
\caption{\label{ne38tab}
Calculated results of $^{38}$Ne obtained by HFB-AX with box sizes of 24 fm, 27 fm, 30 fm and 36 fm.
The total binding energy $E_{tot}$, Coulomb energy $E_c$, pairing energy $E_{pair}$, kinetic energies $E_{kin}$,
the quadrupole deformations $\beta_2$, the r.m.s.\ radii $R_{rms}$ (in fm), the pairing gaps $\Delta$, and the Fermi surface energies $\lambda$
are listed. The energies are given in MeV.}
\begin{tabular}{lrrrrrrr}
\hline
\hline\\[-0.7em]
\vspace{0.2mm}
~~~~$^{38}$Ne &~ 24fm &~ 27fm &~ 30fm &~ 36fm \\
\hline\\[-0.7em]
~~ $E_{tot}$ &~~ $-$220.29~ &~~$-$220.29 &~~ $-$220.35 &~~ $-$220.33 \\
~~ $E_{c}$ &~~18.94~ &~~18.95 &~~18.95 &~~18.96 \\
~~ $E_{pair}$ &~~ $-$68.10~ &~~ $-$67.77 &~~ $-$67.41 &~~ $-$67.43 \\
~~ $E^{p}_{kin}$ &~~ 138.34~ &~~ 138.46 &~~ 138.51 &~~ 138.56 \\
~~ $E^{n}_{kin}$ &~~ 444.84~ & ~~443.48 &~~ 442.85 &~~442.10 \\
~~ $\beta_{2p}$ &~ ~0.00~ &~~ 0.00 &~~ 0.00 &~~ 0.00 \\
~~ $\beta_{2n}$ &~~ 0.13~ &~~ 0.19 &~~ 0.24 &~~ 0.34 \\
~~ $R_{rms}$ &~ 9.39~ &~ 9.38 &~ 9.38 &~ 9.37 \\
~~ $\Delta_{p}$ &~~ 1.46~ & ~~1.47 &~ ~1.47 &~ ~1.46 \\
~~ $\Delta_{n}$ &~~ 2.97~ &~ ~2.95 &~~ 2.93 &~ ~2.92 \\
~~ $\lambda_{p}$ & $-$23.853 &~ $-$23.802 &~ $-$23.784 &~ $-$23.747 \\
~~ $\lambda_{n}$ & $-$0.079 &~ $-$0.096 &~ $-$0.103 &~ $-$0.116 \\
\hline
\hline
\end{tabular}
\end{table}
Fig.~\ref{ne38qp2} displays the smoothed pairing occupation numbers $\tilde{n}_i$ obtained with different box sizes, corresponding to the particle occupation numbers $n_i$ in Fig.~\ref{ne38qp1}. We can see that the near-threshold non-resonant continuum has significant contributions that exceed those of the resonances.
The dominance of the non-resonant continuum in the pairing channel of weakly-bound nuclei has been pointed out in \cite{zhang,forssen}.
For each quasiparticle resonance, the spectrum of pairing occupation numbers has a corresponding peak.
For the broad resonance 1/2[300], it can be seen that the description of the pairing occupation numbers can be improved by increasing the box size.
The failure of small-box solutions to describe the pairing occupation numbers has been pointed out in Ref.~\cite{matsuo2009}.
In Fig. \ref{ne38qp2}, the non-resonant part of the pairing occupation numbers still has notable non-smooth distributions even with a box size of 36 fm.
Based on the comparison between Fig. \ref{ne38qp1} and Fig. \ref{ne38qp2}, one can understand that even larger box sizes are needed for descriptions of pairing properties, especially
for the surface peaked pairing shown in Fig.\ref{denbox}.
Table \ref{ne38tab} shows the results of deformed HFB calculations for $^{38}$Ne with different box sizes.
With increasing box size, the total binding energy $E_{tot}$ generally increases, indicating that the stability is enhanced.
The pairing energy $E_{pair}$ (in magnitude) and the neutron kinetic energy $E^{n}_{kin}$ decrease significantly as the box size increases. The combination $E_{pair}+E_{kin}$
is less sensitive to the box size.
It was pointed out in \cite{grasso2001} that in the vicinity of the drip lines pairing correlations can be overestimated by the continuum discretization in box HFB calculations.
Indeed, the convergence of the pairing energy is slow with respect to the increasing box sizes. Again this indicates that large box calculations
are essential for describing pairing properties of weakly-bound nuclei and exotic deformed halo structures. We also observed
the increase of the neutron surface quadrupole deformation $\beta_{2n}$ as the box size increases, although the densities have consistent surface asymptotics as shown in Fig.~\ref{denbox}. Nevertheless, the deformations in this region are very soft~\cite{terasaki}.
The decrease of the neutron Fermi energy with increasing box size implies that the halo structure becomes more stable in larger-box calculations,
mainly due to the better treatment of the continuum effects and pairing correlations.
\begin{figure}[htb]
\includegraphics[width=0.45\textwidth]{pairing_number_density_38ne.eps}\\
\caption{(Color online) The smoothed pairing occupation numbers of $\Omega^{\pi}=1/2^{\pm}$ neutron quasiparticle states of $^{38}$Ne obtained by HFB-AX with box sizes of 24 fm, 30 fm, and 36 fm.
The continuum thresholds, $-\lambda_{n}$, are given, where $\lambda_{n}$ is the neutron Fermi energy.}
\label{ne38qp2}
\end{figure}
\subsection{Peninsula of stability beyond the drip line}
Once the accurate HFB descriptions of weakly-bound deformed nuclei are realized, it is interesting to
address the question of whether there can exist islands or peninsulas of stability beyond the neutron drip line~\cite{nazarewicz99}.
There have been some studies based on the Hartree-Fock+BCS method to explore such islands or peninsulas of stability~\cite{tarasov}.
We would also like to know the influence of the precision of the HFB solutions on the prediction of the neutron drip line.
We are particularly interested in the possible islands or peninsulas of stability due to deformations and continuum effects
based on the coordinate-space HFB calculations.
Note that the HFB-HO and HFB-THO calculations, even with 30 basis shells, still cannot reproduce the deformed halos in $^{38}$Ne and $^{44}$Mg.
To examine the HFB precision in the heavy mass region, we studied the well deformed Nd ($Z$=60) isotopes with the SLy4 force and
the mixed pairing interaction~\cite{mix-pairing}.
In Fig.\ref{nd-iso}, the Fermi surface energies of Nd isotopes are shown as a function of neutron numbers.
It can be seen that the normal two-neutron drip line is obtained at $N$=126 by both methods.
The Nd isotopes from $N$=128 to 136 with positive Fermi surface energies are not bound systems.
It is reasonable that the Fermi surface energies in the HFB-AX calculations are systematically lower than those in the
HFB-THO calculations.
The interesting point is that the Nd isotope with $N$=138 is slightly bound in the coordinate-space HFB calculations, with
a negative $\lambda_n$ of $-$35 keV, while $\lambda_n$ is $68$ keV in the HFB-THO calculations with 30 shells of the transformed HO basis.
By using the approximation $S_{2n}(Z,N)\approx -2\lambda_{n}$ (for even $N$)~\cite{erler}, $^{198}$Nd seems to be bound against two-neutron emission.
Hence $^{198}$Nd may be seen as the extension of the peninsula of stability in the deformed Nd-Sm-Gd region around $N$=138~\cite{erler}.
The nucleus $^{198}$Nd is well deformed ($\beta_2$=0.28) and has considerable pairing correlations ($\Delta_n$=0.59 MeV) as well as continuum effects. It has to be noted, however, that its halo structure
is not as significant as that in light nuclei. Calculations with the mixed pairing indeed give rise to less significant halo features compared to the surface pairing.
In addition, the decreased likelihood of halos in heavy nuclei has been discussed in Refs.~\cite{pei2013,schunck,rotival}.
With the Nd example in mind, accurate HFB calculations may be used to predict other possible islands or peninsulas of stability.
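As a simple numerical check of this estimate (using only the Fermi energies quoted above):
\begin{verbatim}
# S_2n ~ -2*lambda_n for even N
for label, lam_keV in (("HFB-AX", -35.0),
                       ("HFB-THO", 68.0)):
    print(label, -2.0*lam_keV, "keV")
# HFB-AX:  S_2n ~ +70 keV  (bound)
# HFB-THO: S_2n ~ -136 keV (unbound)
\end{verbatim}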
\begin{figure}
\includegraphics[width=0.45\textwidth]{lambda_n-nd.eps}\\
\caption{(Color online) The neutron Fermi energies of Nd isotopes calculated by HFB-THO (with 30 shells of transformed HO basis) and HFB-AX (with a 30 fm box). }
\label{nd-iso}
\end{figure}
\section{Summary}
The main objective of this study is to perform a detailed analysis of deformed weakly-bound nuclei based on the self-consistent Skyrme-HFB approach within large coordinate-space boxes. The advantage of the coordinate-space HFB calculations is their capability of precisely treating
the surface deformations, continuum effects and large spatial extensions, which are
essential ingredients of the exotic structures in weakly-bound deformed nuclei.
Based on detailed analysis and the comparison with the HO and THO basis expansion methods, we demonstrated that large-box HFB calculations
are necessary to describe the asymptotic behaviors of deformed halos and pairing density distributions, broad quasiparticle resonances and the non-resonant continuum,
as well as the predictions of islands or peninsulas of stability and of drip lines. In addition, large-box calculations are especially needed for
descriptions of the pairing occupation numbers and pairing density distributions of weakly-bound nuclei, which show more pronounced deformed halo structures.
This is related to the fact that the pairing density distributions have larger surface deformations and larger spatial extensions than those of the
normal densities. Here, the remarkable contribution from the near-threshold non-resonant continuum plays an important role.
One should keep in mind that precise predictions of deformed halos depend on the
models and effective interactions used.
\section*{Acknowledgments}
This work was supported by the
National Key Basic Research Program of China under Grant 2013CB834400,
and the National Natural Science Foundation of China under Grants No.11375016, 11235001 and 11320101004.
We also acknowledge that computations in this work were performed in the Tianhe-1A supercomputer
located in the Chinese National Supercomputer Center in Tianjin.
\nocite{*}
\section{Introduction}
Coherent dynamical processes in complex systems have become popular in different fields of science, ranging from chemistry and statistical physics \cite{muelken2011continuous, van1992stochastic}
to quantum computation \cite{nielsen2010quantum}. The systems can be vastly different, say, optical waveguides \cite{lahini2011hanbury, heinrich2012disorder},
ultracold Rydberg gases \cite{deiglmayr2006coherent, westermann2006dynamics, mulken2007survival, cote2006quantum} or carbon nanotube networks \cite{kumar2005percolating, snow2003random, hu2004percolation}. Quantum mechanically as well as classically,
transport in these systems takes place over different topologies, which can vary from very ordered (regular) lattices to randomly built networks of interacting nodes.
An excitation is created at one or more of the nodes; its dynamics is then described in the classical (diffusive) case by continuous-time random walks
(CTRW) and in the quantum case by continuous-time quantum walks (CTQW) \cite{muelken2011continuous}.
In many cases one is interested in the transport {\it through} a network, i.e., an excitation
is created somewhere in the network and can leave the network at a given set of nodes. The topological influence on the dynamics is
then captured in the survival probability of the excitation to remain within the network. Here, we consider the example of a set of $N$
disconnected nodes arranged on two-dimensional lattices of different aspect ratios (AR) to which we randomly add a fixed number of bonds, $B$, between axially nearest-neighboring nodes.
This resembles the random two-dimensional lattices of nanotubes whose conductivity properties have been studied experimentally \cite{kumar2005percolating, snow2003random, hu2004percolation}.
There, the interest was in the conductivity from, say,
the left side of the lattice to the right side.
In order to elucidate the transport properties of such networks, we calculate for each $B$ the long-time behavior (LTB) of the survival probabilities
for CTQW and compare them to the ones for CTRW. We define $p_{0.5}^{QW}=B_{0.5}^{QW}/B_{\rm max}$, where $B_{0.5}^{QW}$ is
that number of bonds, out of the total number $B_{\rm max}$, which is needed in order
for the LTB of the CTQW survival probability to have reached (roughly) the value $0.5$. The corresponding CTRW
probability is $p_{0.5}^{RW}$. Clearly, for the same AR, $p_{0.5}^{QW}$ and $p_{0.5}^{RW}$ can be vastly different, as the quantum-mechanical localization
of eigenstates may lead to higher $p$-values for CTQW than for CTRW, see also Ref. \cite{leung2010coined} for a study of discrete-time quantum walks.
Before continuing with our analysis we mention the obvious connection to percolation theory \cite{stauffer1994introduction, sahimi1994applications}. While we focus on the survival
probabilities and their decay due to existing connections from left to right, classical bond percolation focusses on the (first) appearance of such a connection. In our case,
typically several of these connections are needed in order to reach the value $0.5$ for the LTB of both CTQW and CTRW. We further focus on the time-independent
case where bonds are permanent, i.e.,
they cannot be removed from the lattice once they are placed. In dynamical percolation, bonds might also be removed, see Ref. \cite{darazs2012time, kollar2012asymptotic}.
The paper is organized as follows: Section \MakeUppercase{\romannumeral 2}
introduces the general concepts of CTRW and of CTQW. Furthermore, it discusses the trapping model and the different two-dimensional systems considered here.
Section \MakeUppercase{\romannumeral 3} displays our numerical results obtained for lattices with different AR for classical and for quantum
mechanical transport. The paper ends in Section \MakeUppercase{\romannumeral 4} with our conclusions.
\section{Transport over random structures}
\subsection{General considerations}
We start by considering both classical and quantum transport over two-dimensional structures consisting of
$N_x\times N_y = N$ nodes. We denote the position of a site by $j=(j_x,j_y)$, with $j_x=1,\dots,N_x$ and $j_y=1,\dots,N_y$,
i.e. $j_x$ and $j_y$ are integers which label the lattice
in the $x$- and the $y$-directions. Several of these nodes get connected by the $B$-bonds distributed over the structure. This
procedure leads to a group of clusters of sites. The information
about these bonds is encoded in the $N\times N$ connectivity matrix $\bf
A$ (see, for instance, \cite{muelken2011continuous}). The non-diagonal elements of $\mathbf{A}$ pertaining to two sites are $-1$ if the
sites are connected by one of the $B$-bonds and zero otherwise. The diagonal element of $\mathbf{A}$ corresponding to
a particular site is $f$, where $f$ equals the number of $B$-bonds to which the particular site belongs. The matrix $\mathbf{A}$ is non-negative definite, i.e.,
all its eigenvalues are positive or zero. When the structure contains no
disconnected parts, $\bf A$ has a single vanishing
eigenvalue \cite{biswas2000polymer}. In the following we describe
the dynamics of purely coherent and of purely incoherent transport by
using the CTQW and the CTRW models,
respectively \cite{farhi1998quantum}. In both cases, the dynamics depends
very much on the topology of the structure, i.e., on $\mathbf A$.
In a bra-ket notation, an excitation localized at node $j$ will be viewed as being in the
state $|j\rangle \equiv |j_x\rangle\
\otimes |j_y\rangle \equiv |j_x,j_y\rangle$. The states $\{|j\rangle \}$ form an
orthonormal basis set. Classically, the transport over unweighted and undirected
graphs can be described by CTRW with the transfer matrix
$\mathbf{T}=-\gamma\mathbf{A}$ \cite{van1992stochastic, farhi1998quantum,
muelken2011continuous}; here, for simplicity, we assume equal transition rates $\gamma=1$ for all
the nodes.
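To make the construction of $\mathbf A$ and $\mathbf T$ concrete, the following Python sketch (with an assumed row-major labeling of the nodes, $j=j_x N_y+j_y$ with indices starting at zero) builds the connectivity matrix of an $N_x\times N_y$ lattice with $B$ randomly placed nearest-neighbor bonds:
\begin{verbatim}
import numpy as np, random

def connectivity(Nx, Ny, B, seed=0):
    idx = lambda jx, jy: jx*Ny + jy
    bonds  = [(idx(jx, jy), idx(jx+1, jy))
              for jx in range(Nx-1) for jy in range(Ny)]
    bonds += [(idx(jx, jy), idx(jx, jy+1))
              for jx in range(Nx) for jy in range(Ny-1)]
    # total number of possible bonds = B_max
    assert len(bonds) == 2*Nx*Ny - (Nx + Ny)
    A = np.zeros((Nx*Ny, Nx*Ny))
    for a, b in random.Random(seed).sample(bonds, B):
        A[a, b] = A[b, a] = -1.0   # off-diagonal
        A[a, a] += 1.0             # diagonal: f
        A[b, b] += 1.0
    return A

gamma = 1.0
A = connectivity(5, 3, B=10)
T = -gamma*A          # CTRW transfer matrix
\end{verbatim}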
\subsection{CTQW and CTRW}
Quantum mechanically, the set of states $\{ |j\rangle \}$ spans the whole accessible
Hilbert space. The time evolution of an excitation starting at
node $|j\rangle$ can be described by the discrete Hamiltonian $\bf H$; Farhi and Gutmann assumed in \cite{farhi1998quantum} that
$\bf H=-\bf T$ which defines the CTQW corresponding to a CTRW with a given transfer matrix $\bf T$.
The CTRW and the CTQW transition probabilities from the state
$|j\rangle$ at time $t=0$ to the state $|k\rangle$ at
time $t$ read \cite{muelken2011continuous}:
\begin{eqnarray}
&& p_{k,j}(t)=\langle k|\exp{(-\mathbf{T}t)}|j\rangle
\label{eq:trans_prob_cl}\\
\mbox{and} \qquad && \pi_{k,j}(t)=\arrowvert\langle\ k|\exp{(-i\mathbf{H}t)}|\ j\rangle \arrowvert^2,
\label{eq:trans_prob}
\end{eqnarray}
respectively, where we assume $\hbar=1$ in Eq.(\ref{eq:trans_prob}).
\subsection{The role of absorption}
An excitation does not necessarily stay forever in a particular system: it can
either decay or get absorbed at certain sites. Since we assume
the lifetime of the excitation to be much longer than all the other relevant time
scales, we neglect the global radiative decay. However, there are specific
nodes where the excitation can get absorbed (trapped). We call these nodes traps
and denote their set by $\mathcal{M}$. We also denote by $M$ the number of elements in $\mathcal{M}$ \cite{muelken2007localization}.
The presence of traps
leads to the decay of the probability to find the
excitation in the system as a function of time \cite{muelken2011continuous}.
For a trap-free structure we denote the transfer matrix and the Hamiltonian by $\mathbf {T}_0$ and by
$\mathbf {H}_0$, respectively. We assume the trapping operator $\hat{\mathbf \Gamma}$ to be given by a sum over all
trap-nodes $|m\rangle=|m_x,m_y\rangle$ \cite{muelken2011continuous, muelken2010coherent}:
\begin{equation}
\label{eq:trapping matrix}
\hat{\mathbf{\Gamma}}= \sum_{m\in\mathcal{M}} \Gamma_m
| m \rangle \langle m|.
\end{equation}
Then $\bf T$ and $\bf H$ can be written as $\mathbf{T}= \mathbf {T}_0-\mathbf{\Gamma}$ and $\mathbf{H}= \mathbf {H}_0-i\mathbf{\Gamma}$.
In the CTRW case the transfer
matrix stays real; then the transition probabilities can be calculated as:
\begin{equation}
p_{k, j}(t) = \sum_{n=1}^{N}e^{-\lambda_nt} \langle k |\phi_n \rangle \langle \phi_n |j
\rangle.
\label{SurvProbCTRW}
\end{equation}
In Eq.~(\ref{SurvProbCTRW}), $\lambda_n$ are the (real) eigenvalues and $|\phi_n\rangle$ the eigenstates of $\mathbf{T}$.
In the quantum mechanical case, $\mathbf {H}$ is non-Hermitian and can have up to $N$ complex eigenvalues $E_n=\epsilon_n-i\gamma_n$ ($n=1,\dots,N$).
Then the transition probabilities read:
\begin{equation}
\pi_{k, j}(t) = \left |
\sum_{n=1}^{N}e^{-i\epsilon_nt}e^{-\gamma_nt} \langle k|\psi_n \rangle \langle \widetilde \psi_n | j
\rangle \right|^2,
\label{SurvProbCTQW}
\end{equation}
where $|\psi_n \rangle$ and $\langle \widetilde \psi_n |$ are the right
and the left eigenstates of $\mathbf{H}$, respectively. Obviously,
the imaginary parts $\gamma_n$ of $E_n$ determine the temporal decay of $\pi_{k, j}(t)$.
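A minimal sketch of this spectral representation (our illustration; the four-node chain and the trapping strength are arbitrary choices) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig, inv

# Hermitian part: Laplacian of a four-node chain; one trap on the last node
H0 = np.diag([1., 2., 2., 1.])
for i in range(3):
    H0[i, i + 1] = H0[i + 1, i] = -1.
Gamma = np.zeros((4, 4))
Gamma[3, 3] = 0.5                      # trapping strength (arbitrary choice)
H = H0 - 1j * Gamma                    # non-Hermitian Hamiltonian

w, V = eig(H)    # eigenvalues E_n = eps_n - i*gamma_n, right eigenvectors
Vinv = inv(V)    # rows of V^{-1} play the role of the left eigenstates <psi~_n|

def pi_trapped(t, j=0):
    # pi_{k,j}(t) reconstructed from the spectral decomposition
    amp = V @ np.diag(np.exp(-1j * w * t)) @ Vinv   # = exp(-i H t)
    return np.abs(amp[:, j]) ** 2

print(-w.imag)                 # decay rates gamma_n = -Im(E_n)
print(pi_trapped(5.0).sum())   # probability still in the system at t = 5
\end{verbatim}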
\subsection{Structures with different aspect ratios}
We now turn to specific examples of two-dimensional structures with different AR, see Fig.~\ref{fig:InCond}. We distinguish the structures by their aspect ratio
$N_y/N_x$; in particular we denote the configurations of lattices with $N_y/N_x<1$ as ``landscapes''
and with $N_y/N_x>1$ as ``portraits''; the case $N_y/N_x=1$ is the square.
As stated above, we start from a set of $N=N_x\times N_y$ disconnected nodes, to which we randomly add $B$ bonds between nearest neighbor sites. This
can be viewed as having bonds occupied
with probability $p=B/B_{\rm max}$, where $B_{\rm max} = 2N_x N_y - (N_x+N_y)$. A connected component of
this graph is called a
cluster; every two nodes of such a cluster are connected to each other by at least one unbroken
chain of nearest-neighbor bonds.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{fig1.eps}
\caption[networks]{Sketches of structures with square, portrait, and
landscape configurations. Here, the triangles denote possible sources and the squares denote the traps (sinks).
The $B$-bonds are placed on the horizontal and vertical connectivity segments.}
\label{fig:InCond}
\end{figure}
We now focus on the transport in the $x$-direction. For this we depict the sites in the
first column of the lattice by triangles and call them sources; their coordinates are $|1,l_y\rangle$, where $l_y=1,\dots,N_y$, see Fig.~\ref{fig:InCond}.
In a similar way, we depict the nodes of the last column by squares and call them traps (sinks). Their coordinates are $|N_x, m_y\rangle$, see
Fig.~\ref{fig:InCond}. Thus, $\hat{\mathbf{\Gamma}}=
\sum_{m_y=1}^{N_y}\Gamma \, |N_x, m_y\rangle \langle N_x, m_y|$. Now, a typical process starts by exciting one of the sources. The process
gets repeated by exciting another of the sources, and so forth. The classical and the quantum mechanical survival probabilities
$P(t)$ and $\Pi(t)$ are now:
\begin{equation}
\label{eq:mean survival probability clas}
P(t)= \frac{1}{N_y}\sum_{l_y,k_y=1}^{N_y}
\sum_{k_x=1}^{N_x}\langle\
k_y,k_x|e^{-\mathbf{T}t}|1,l_y\rangle,
\end{equation}
and
\begin{equation}
\label{eq:mean survival probability}
\Pi(t)=
\frac{1}{N_y}\sum_{l_y,k_y=1}^{N_y}
\sum_{k_x=1}^{N_x}\arrowvert\langle\
k_y,k_x|e^{-i\mathbf{H}t}|1,l_y\rangle \arrowvert^2.
\end{equation}
Note that in this way $p_{k,j}(t)$ and $\pi_{k,j}(t)$ are averaged over all possible
initial states $|1,l_y\rangle$ and over all possible final states $|k_x,k_y\rangle$.
Furthermore, the time evolution of $p_{k,j}(t)$ and $\pi_{k,j}(t)$ depends on the particular realization of the structure, since for a given, fixed $B$ the
distribution of bonds and hence the structure is, in general, random. We evaluate interesting
quantities through ensemble averaging over $R=1000$ random structure realisations and set:
\begin{equation}
\label{eq:ensemble}
\langle...\rangle_R\equiv\frac{1}{R}\sum_{r=1}^{R}[...]_r.
\end{equation}
In such a way, we obtain
ensemble-averaged survival probabilities $\langle P(t)\rangle_R$ and
$\langle \Pi(t)\rangle_R$ along with their long-time behavior (LTB)
$\langle P_\infty \rangle_R=\lim_{t \to \infty} \langle P(t)\rangle_R$ and
$\langle \Pi_\infty \rangle_R=\lim_{t \to \infty} \langle
\Pi(t)\rangle_R$.
As stressed above, our interest is to determine for which values of $B$ the averages $\langle P_\infty \rangle_R$ and $\langle \Pi_\infty \rangle_R$
reach the value $0.5$. We denote these values by $B_{0.5}^{(RW)}$ and $B_{0.5}^{(QW)}$, respectively, and thus obtain
$p_{0.5}^{(RW)}=B_{0.5}^{(RW)}/B_{\rm max}$ and $p_{0.5}^{(QW)}=B_{0.5}^{(QW)}/B_{\rm max}$.
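A compact numerical sketch of this procedure (ours; it reuses \texttt{build\_connectivity} from the earlier listing, evaluates the survival probabilities at a large but finite time $t_\ast$ as a proxy for the long-time behavior, and adopts the decaying sign convention for the classical generator) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def survival(Nx, Ny, B, t_star, rng):
    # classical and quantum survival probabilities P(t*), Pi(t*) for one
    # realization; a large finite t_star serves as a proxy for the LTB
    N = Nx * Ny
    A = build_connectivity(Nx, Ny, B, rng)       # from the earlier sketch
    Gamma = np.zeros((N, N))
    for my in range(Ny):                         # traps: last column, j_x = N_x
        m = (Nx - 1) * Ny + my
        Gamma[m, m] = 1.0
    Pcl = expm(-(A + Gamma) * t_star)            # classical generator -(A + Gamma)
    Uqm = expm(-1j * (A - 1j * Gamma) * t_star)  # quantum H = A - i Gamma
    sources = list(range(Ny))                    # first column, j_x = 1
    P  = Pcl[:, sources].sum() / Ny
    Pi = (np.abs(Uqm[:, sources]) ** 2).sum() / Ny
    return P, Pi

def averaged(Nx, Ny, B, R=100, t_star=200.0, seed=0):
    rng = np.random.default_rng(seed)
    vals = np.array([survival(Nx, Ny, B, t_star, rng) for _ in range(R)])
    return vals.mean(axis=0)                     # (<P_infty>_R, <Pi_infty>_R)

print(averaged(7, 2, B=15, R=20))
# scanning B until these averages cross 0.5 yields B_0.5 and p_0.5 = B_0.5/B_max
\end{verbatim}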
\section{Numerical results}
\subsection{$p_{0.5}^{(RW)}$ for CTRW and $p_{0.5}^{(QW)}$ for CTQW}
Figure~\ref{fig:Pcr} summarises our findings for the classical $p_{0.5}^{(RW)}$ and for the quantum $p_{0.5}^{(QW)}$ as a function of the AR, namely of $N_y/N_x$.
In general, we find $p_{0.5}^{(QW)}>p_{0.5}^{(RW)}$. For structures with $N_y/N_x<1$, i.e. in landscape
configurations,
$p_{0.5}^{(RW)}$ and $p_{0.5}^{(QW)}$ behave quite similarly as a function of $N_y/N_x$.
Now, increasing $N_y/N_x$ we find that $p_{0.5}^{(QW)}$ has a minimum at $N_y/N_x\approx1$, which is not the case for $p_{0.5}^{(RW)}$.
For structures with $N_y/N_x>1$, i.e. in portrait configurations, the behavior of $p_{0.5}^{(RW)}$ and of $p_{0.5}^{(QW)}$ differs with increasing AR: In the CTRW case
$p_{0.5}^{(RW)}$ decreases with increasing
AR, reflecting the fact that the opposite ends then get closer, so that lower $p$-values are sufficient to ensure efficient transport.
In the CTQW case we find that for $N_y/N_x>1$ $p_{0.5}^{(QW)}$ increases with increasing AR, a quite counter-intuitive
effect which we will discuss in detail in the following.
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{fig2.eps}
\caption[networks]{Values of $p_{0.5}^{(RW)}$
and of $p_{0.5}^{(QW)}$ for different AR, $N_y/N_x$. Note
the logarithmic-linear scales.}
\label{fig:Pcr}
\end{figure}
In Fig.~\ref{fig:PercolationExamples} we show particular examples of the
$p$-dependence of
$\langle P_\infty \rangle_R$ and $\langle \Pi_\infty \rangle_R$
for structures with different AR but with roughly the
same total number $N$ of nodes. Displayed are: (a) a landscape configuration with $24\times2$ nodes,
(b) a square configuration with $7\times7$
nodes, and (c) a portrait configuration with $2\times24$ nodes.
One observes as a function of $p$ the transition from states with very inhibited
transport, for which $\langle P_\infty \rangle_R$ and
$\langle \Pi_\infty \rangle_R$ are very close to unity, to states in which the transport is very effective,
so that $\langle P_\infty \rangle_R$ and
$\langle \Pi_\infty \rangle_R$ get very close to zero. From Fig.~\ref{fig:PercolationExamples} the values of $p_{0.5}^{(RW)}$ and of $p_{0.5}^{(QW)}$
may be read off.
Due to the finite size of the lattices
the transition region is rather broad; it gets sharper with increasing $N$.
The difference in behavior between $\langle P_\infty
\rangle_R$ and $\langle \Pi_\infty \rangle_R$ is most evident for the portrait
configuration, see Fig.~\ref{fig:PercolationExamples}(c). Furthermore, in the portrait case the
CTRW value $p_{0.5}^{(RW)}$ is smaller than in the square and in the landscape configurations.
This is different from the CTQW case, where $p_{0.5}^{(QW)}$ is larger than in the square
and in the landscape configurations.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.48\textwidth]{fig3-a.eps}}
\subfigure{\includegraphics[width=0.48\textwidth]{fig3-b.eps}}
\subfigure{\includegraphics[width=0.49\textwidth]{fig3-c.eps}}
\caption[networks]{Values of $\langle P_\infty \rangle_R$ (circles) and of $\langle
\Pi_\infty \rangle_R$ (squares) as a function of $p$
for structures with different aspect ratios but with the same $B$ and roughly the
same $N$: (a) a landscape configuration with $24\times2$ nodes, (b) a square configuration with $7\times7$
nodes, and (c) a portrait configuration with $2\times24$ nodes.}
\label{fig:PercolationExamples}
\end{figure}
In the landscape configuration, the limit $N_y/N_x \to 0$ leads to
the situation of a very long (infinite) chain. In this case already one
broken bond is enough to inhibit transport; this is in line with our findings, both in the classical and in the quantum
mechanical cases, where we have $p_{0.5}^{(RW)} = p_{0.5}^{(QW)} = 1$.
On the other hand, in the limit $N_y/N_x \to \infty$ one finds that for CTRW only a small number of bonds $B$, i.e., a small
probability $p$ is sufficient to cause a drop in $\langle P_\infty \rangle_R$. This is readily seen in the limit $N_x=2$, when
a horizontal bond is guaranteed on average when $B$ is around $3$ (one has for $N_x=2$ roughly twice as many vertical as horizontal bonds), i.e.
for $p\simeq3/3N_y=1/N_y$. Such a bond connects a source to a trap and this $p$ value, $p\simeq1/N_y$
tends to zero as $N_y/N_x \to \infty$.
The picture is not so simple in the CTQW case.
Here, the survival
probability depends on specific features of the eigenstates
$|\psi_n\rangle$. If these are localized, transport from one node to the
other will be inhibited as in the Anderson localization
\cite{anderson1958absence}. In the next section we will analyze the eigenstates of
$\mathbf H$ in order to understand the relatively large values of $p_{0.5}^{(QW)}$ compared to $p_{0.5}^{(RW)}$ for
lattices with portrait configurations.
\subsection{Participation ratio and eigenstates}
We recall that the participation ratio $| \langle j |
\psi_{n,r}^{(0)} \rangle|^4$, where $|\psi_{n,r}^{(0)}\rangle$ is the
$n$th eigenstate of the $r$th realization of $\mathbf{H}_0$, is a
measure of the localization of the different eigenstates. In
order to take the ensemble averaging into account, we introduce
\begin{equation}
\langle\Xi_{j,n}\rangle_R= \frac{1}{R}\sum_{r} | \langle j |
\psi_{n,r}^{(0)} \rangle|^4
\end{equation}
as the ensemble averaged participation ratio
\cite{muelken2007small}.
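This average can be estimated along the same lines; a sketch (ours, taking $\mathbf H_0=\mathbf A$ for $\gamma=1$ and reusing \texttt{build\_connectivity} from the earlier listing) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def participation(Nx, Ny, B, R, rng):
    # ensemble-averaged participation ratios <Xi_{j,n}>_R of the trap-free
    # Hamiltonian H0 = A (gamma = 1); element (j, n) of the returned array
    N = Nx * Ny
    Xi = np.zeros((N, N))
    for _ in range(R):
        A = build_connectivity(Nx, Ny, B, rng)   # from the earlier sketch
        _, psi = eigh(A)                         # columns psi[:, n] = |psi_n>
        Xi += np.abs(psi) ** 4                   # |<j|psi_n>|^4
    return Xi / R

Xi = participation(7, 2, B=15, R=50, rng=np.random.default_rng(3))
\end{verbatim}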
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.0\textwidth]{fig4.eps}
\caption[part]{Ensemble averaged participation ratios
$\langle\Xi_{j,n}\rangle_R$ for different values of $p$, namely
$p<p_{0.5}^{(RW)}$, $p=p_{0.5}^{(RW)}$, $p_{0.5}^{(RW)} < p < p_{0.5}^{(QW)}$, $p=p_{0.5}^{(QW)}$, and $p>p_{0.5}^{(QW)}$,
for:
(a) Landscape configuration for a lattice of $24\times2$ nodes.
(b) Square configuration for a lattice of $7\times7$ nodes.
(c) Portrait configuration for a lattice of $2\times24$ nodes.
}
\label{fig:partratio}
\end{figure*}
Figure \ref{fig:partratio} shows in contour plots $\langle\Xi_{j,n}\rangle_R$ for lattices
whose configuration is (a) landscape, (b) square, and (c) portrait. Here, in each separate panel each row
reflects the average contribution of every node $|j\rangle$ of the lattice
to a given eigenstate $|\psi_{n,r}\rangle$. In order to see the transition
from the situation for $p<p_{0.5}^{(RW)}$ to the one for $p>p_{0.5}^{(QW)}$,
we present $\langle\Xi_{j,n}\rangle_R$ for distinct $p$ values, namely
for $p<p_{0.5}^{(RW)}$, for $p=p_{0.5}^{(RW)}$, for $p_{0.5}^{(RW)}<p<p_{0.5}^{(QW)}$, for $p=p_{0.5}^{(QW)}$, and for
$p>p_{0.5}^{(QW)}$. Bright shadings correspond to low while dark shadings correspond to high
values of $\langle\Xi_{j,n}\rangle_R$. Therefore, localized dark regions
indicate localized eigenstates. These, in turn, will inhibit the transport.
This is well in line with the information obtained from Fig.~\ref{fig:PercolationExamples}(a) for the
landscape configuration. We remark that, as already noticeable from Fig.~\ref{fig:PercolationExamples}(a), for the landscape configuration the
quantum and the classical $p_{0.5}$-probabilities lie very close together,
being $p_{0.5}^{(RW)}=0.757$ and $p_{0.5}^{(QW)}=0.786$. In the depicted case $p_{0.5}^{(RW)}$ and $p_{0.5}^{(QW)}$ differ only by $4\%$, i.e., for $N=48$
only by $2$ bonds in $B_{0.5}$. The eigenstates stay localized up to
$p=p_{0.5}^{(QW)}$, see the first panel in
Fig.~\ref{fig:partratio}(a). For $p>p_{0.5}^{(QW)}$ the eigenstates
get more delocalized, which is visible as the grey gets more evenly distributed over the different nodes.
For the square configuration, Fig.~\ref{fig:partratio}(b), the relative difference between $p_{0.5}^{(RW)}$ and $p_{0.5}^{(QW)}$ is about twice as large as for the landscape configuration.
Here, one notices a strong localization of the eigenstates for $p$-values up to
$p_{0.5}^{(RW)}$, see the first two panels, while this effect is getting less pronounced for
larger values of $p$, this already indicates that quantum transport is
strongly inhibited for $p$-values below and close to $p_{0.5}^{(RW)}$.
This effect is even more enhanced for the portrait configuration, as may be seen from Fig.~\ref{fig:partratio}(c): Up to $p_{0.5}^{(RW)}$ one observes very
strong localization. This persists even up to $p_{0.5}^{(QW)}=0.8$, a value more than twice as large as
$p_{0.5}^{(RW)}=0.314$. In this particular example one has $N=48$, $B_{0.5}^{(RW)}=22$ and $B_{0.5}^{(QW)}=56$. This means that one needs
more than twice as many bonds in order to render the quantum transport as efficient as the classical one, in this particular portrait configuration. For smaller
$B$ values, the eigenstates are too localized for the quantum transport to be efficient.
\section{Conclusions}
We have studied the coherent, continuous-time quantum transport on two-dimensional structures
of different aspect ratios $N_y/N_x$ with a given, fixed number $B$ of randomly
placed bonds. Having focused on three types of configurations -- landscape, square, and portrait -- we investigated the long-time probability for an excitation not to get trapped.
Our analysis shows that
on average the quantum excitation transport in the $x$-direction becomes very inefficient for structures
with portrait configurations, i.e., for those where $N_y\gg N_x$. This is particularly remarkable, since the opposite holds for
(incoherent) continuous-time random
walks, where the transport becomes more efficient when the AR increases. This is made clear by our evaluations of the classical and
quantum mechanical probabilities $p_{0.5}^{(RW)}$ and
$p_{0.5}^{(QW)}$ which we have introduced in this article.
The behavior in the quantum case can be understood based on an analysis of the corresponding eigenstates. Their participation ratios show that in portrait
configurations the eigenstates are still localized for probabilities $p$ such that
$p_{0.5}^{(RW)} < p < p_{0.5}^{(QW)}$. Only for $p > p_{0.5}^{(QW)}$ do the eigenstates
become delocalized and thus can readily support the transport.
\section*{Acknowledgments}
We thank Piet
Schijven for fruitful discussions.
Support from the Deutsche Forschungsgemeinschaft (DFG Grant No. MU2925/1-1), from the Fonds
der Chemischen Industrie, from the Deutscher Akademischer Austauschdienst
(DAAD Grant No. 56266206), and from the Marie Curie International Research Staff Exchange
Science Fellowship within the 7th European Community Framework Program SPIDER
(Grant No. PIRSES-GA-2011-295302) is gratefully acknowledged.
\section{Introduction}
\setcounter{equation}{0}
\label{sec:intro}
\subsection{The $N$ particle system}
We study the evolution of $N$ particles interacting via Coulombian or
gravitational interaction in dimension one. The
position and speed of the $i$-th particle will be denoted respectively by
$X_i^N$ and $V_i^N$, and we will also use the short-cut $Z_i^N=(X_i^N,V_i^N)$.
The large vector containing the positions and speeds of all the particles will
be denoted by ${\mathcal Z}^N = (Z_i^N)_{i \le N}$.
Our results are valid on the torus and on ${\mathbb{R}}$, and with (repulsive) Coulombian or (attractive) gravitational interactions. In fact we will use neither the conservation of energy nor the sign of the potential energy, nor the attractive or repulsive character of the force. In this article, we will consider only the case of Coulombian interaction in the periodic domain ${\mathbb{T}}$, in order to keep the notation as simple as possible. The adaptation to the case of the unbounded domain ${\mathbb{R}}$ does not raise many difficulties, and it is in
some sense simpler, since for instance the interaction potential and force have simpler expressions. The adaptation to the gravitational case is also simple.
So in all the sequel, the positions $X_i$ belong to ${\mathbb{T}}$,
and the velocities $V_i$ to ${\mathbb{R}}$. On the torus ${\mathbb{T}}$ that we will identify for simplicity to $\Bigl[ - \frac12,\frac12 \Bigr)$, the Coulombian interaction potential $W$ and the associated force $-W'$ are given by
\begin{equation} \label{def:PotW}
W(x) := \frac{x^2 - |x|}2, \qquad
%
- W'(x) :=
\begin{cases}
-\frac12 -x &\text{ if } \; x \in \bigl[-\frac12,0\bigr), \\
\frac12 -x &\text{ if } \; x \in \bigl(0, \frac12\bigr), \\
0 & \text{ if } \; x = 0.
\end{cases}
\end{equation}
This particular kernel corresponds for instance to the situation where we study electrons in a fixed background of ions.
The equality $W'(0)=0$ may seem strange since the interaction force is singular
at $0$. But, it will be very
convenient in the sequel, and will allow for many simplifications in the
notation. That convention corresponds to the fact that there is no self-interaction.
Remark that $W'$ may be decomposed into a singular and a non-singular part
\begin{equation} \label{eq:Wdec}
\text{for }\;|x| \le \frac12, \quad
W'(x) = x - \frac12 \sign (x),
\qquad \sign(x) := \frac x{|x|} \quad \bigl(= 0 \text{ if } \; x=0
\bigr).
\end{equation}
From these formulas, we see that the only singularity of the Poisson kernel $W$
is that its second derivative contains a Dirac mass $- \delta_0$. But the interaction force
$-W'$ is globally bounded, so that in this sense it is not so singular. The situation is very different in larger
dimensions, where the Coulombian force is always ``strongly'' singular : $\nabla W \notin
L^{d/(d-1)}$ in dimension $d$.
The evolution of the $N$ particles is classically driven by the following system
of $N$ second order ODEs
\begin{equation} \label{eq:Npart}
\forall \; i \le N, \qquad \dot X_i^N = V_i^N, \qquad \dot V_i^N = - \frac1N \sum_{j \neq i} W'(X_i^N-X_j^N).
\end{equation}
Remark that thanks to the assumption that there is
no self interaction ($W'(0)=0$), we may also use a full sum $\sum_{j=1}^N$
instead of the more usual $\sum_{j\neq i}$ in the second equation. The $\frac 1N$ factor is not important for a fixed $N$, but it is necessary if we want to obtain the mean field limit when $N$ goes to infinity. It appears when the ``physical'' system is observed at the appropriate scales of time, position and speed. Here, we will not dwell further on this point, and we consider only non-dimensional equations in the sequel.
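For readers who wish to experiment with~\eqref{eq:Npart}, here is a minimal Python sketch (an illustration, not used in our arguments) based on an explicit symplectic Euler scheme; the time step, the number of particles and the sampled initial data are arbitrary choices, and the convention $W'(0)=0$ is enforced through $\mathrm{sign}(0)=0$.
\begin{verbatim}
import numpy as np

def Wprime(x):
    # W'(x) = x - (1/2) sign(x) on the torus [-1/2, 1/2); np.sign(0) = 0
    # enforces the convention W'(0) = 0 (no self-interaction)
    x = (x + 0.5) % 1.0 - 0.5
    return x - 0.5 * np.sign(x)

def accelerations(X):
    # a_i = -(1/N) sum_j W'(X_i - X_j)
    N = X.size
    return -Wprime(X[:, None] - X[None, :]).sum(axis=1) / N

def euler_step(X, V, dt):
    # one explicit symplectic Euler step; positions are kept in [-1/2, 1/2)
    V = V + dt * accelerations(X)
    X = (X + dt * V + 0.5) % 1.0 - 0.5
    return X, V

rng = np.random.default_rng(1)
N = 200
X = rng.uniform(-0.5, 0.5, N)          # illustrative initial profile
V = rng.normal(0.0, 0.2, N)
for _ in range(1000):
    X, V = euler_step(X, V, dt=1e-3)
\end{verbatim}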
\medskip
\subsection{About the existence of solution to the system of ODEs}
As the vector-field that drives this system of ODE is singular (not continuous),
the existence of solutions to that system is not completely obvious.
So to be precise, we shall first define properly the notion of solution that we will
use. We only define global (in time) solutions, because we will always deal with such solutions in the sequel.
\begin{defi} \label{def:ODErig}
A global solution of the ODE~\eqref{eq:Npart} with initial condition ${\mathcal Z}^N(0)$ is an
application \mbox{$t \mapsto {\mathcal Z}^N(t)$} defined on ${\mathbb{R}}^+$
such that for all $t \in {\mathbb{R}}^+$, for all $i \le N$
\begin{equation} \label{eq:ODEpre}
X^N_i(t) = X^N_i(0) + \int_0^t V_i^N(s) \,ds, \qquad
V_i^N(t) = V_i^N(0) - \frac1N \sum_{j \neq i} \int_0^t W'(X_i^N(s)-X_j^N(s))
\,ds.
\end{equation}
\end{defi}
An alternative definition, equivalent to that one, would be to require ${\mathcal Z}^N$
to be differentiable almost everywhere in time and~\eqref{eq:Npart} to hold
for all $i \le N$ and a.e. $t \in {\mathbb{R}}^+$.
In our setting, the now usual theory initiated by DiPerna-Lions
\cite{DipLions} applies and provides existence and uniqueness of a measure
preserving flow. Historically, the first result that can be applied here
was given by Bouchut \cite{Bou02} : it covers the case of second order ODEs of
the previous type with $BV$ force-field. But we may also apply the more general
result of Ambrosio \cite{Amb04}, valid for any ODE with $BV$ vector-field with
bounded divergence (in $L^\infty$ or even in $L^1$).
As we shall see later,
the uniqueness of
the solution to the system of ODEs~\eqref{eq:Npart} is not important for the mean-field limit,
and the existence of solutions will be enough for our work.
The existence of a ``global'' measure preserving flow implies the existence of solutions in the sense of Definition~\ref{def:ODErig} for almost all initial conditions. Remark also that the uniqueness of the global measure preserving flow does not imply anything for a particular initial condition. But this is not an issue here since we are not interested in uniqueness.
\medskip
But here, in our weakly singular setting, global existence of solutions to~\eqref{eq:Npart} for \emph{any
initial condition} may be obtained thanks to the theory of \emph{differential
inclusion}. Using a result by Filippov \cite{Filippov} we will precisely prove
the following Proposition
\begin{prop} \label{prop:Nexis}
For any initial configuration ${\mathcal Z}^N(0)$, there exists at least one global
solution to the system of ODE~\eqref{eq:Npart} in the sense of
Definition~\ref{def:ODErig}.
\end{prop}
The proof of that Proposition is given in Section~\ref{sec:proof}. Thanks to that result, we may now concentrate on the mean-field limit.
\medskip
\paragraph{ \bf The mean-field limit.}
In the limit of large number of particles, we assume that the initial
distribution of particles converges towards a (maybe smooth) profile $f_0$. To
measure this convergence, we introduce the empirical measure
$\mu_{\mathcal Z}^N$ defined by
\begin{equation}
\mu_{\mathcal Z}^N(t) := \frac1N \sum_{i = 1}^N \delta_{Z_i^N(t)} = \frac1N \sum_{i = 1}^N \delta_{(X_i^N(t),V_i^N(t))}.
\end{equation}
We precisely define the convergence towards $f_0$ by the convergence in the weak sense of measures of $\mu_{\mathcal Z}^N$ towards $f_0$
$$
\mu_{\mathcal Z}^N(0) \rightharpoonup f_0 \quad \ast-\text{weakly in }\; {\mathcal M}({\mathbb{T}}\times {\mathbb{R}}).
$$
If the distribution of particles also converges at time $t$ towards a
distribution profile denoted by $f(t,x,v)$, then we may formally
replace in the second equation of~\eqref{eq:Npart} the discrete sum by an
integral of $W'$ against the profile $f(t)$ and get a limit ODE, depending of
$f$, that drives the limit trajectories of the system
\begin{equation} \label{eq:ODElim}
\qquad \dot X(t) = V(t), \qquad \dot V(t) = - \int W'(X(t)-y) f(t,y,w) \,dydw.
\end{equation}
From this ODE, using that $f$ must be constant on the trajectories associated
to~\eqref{eq:ODElim}, we obtain the so-called Vlasov-Poisson equation (that
should rather be called ``collisionless Boltzmann equation'' or ``Jeans-Vlasov''
equation \cite{Henon})
\begin{align} \label{eq:Vla}
&\partial_t f + v \, \partial_x f - \partial_x \phi(t,x) \, \partial_v f = 0,\\
&\phi(t,x) = [ W \ast \rho(t) ](x) = \int W(x-y) f(t,y,w) \,dydw
\nonumber
\end{align}
Here and below we will always refer to $f$ as the distribution of particles (or profile), and to
$$
\rho(t,x) := \int f(t,x,v) \,dv
$$
as the density (in position), always denoted by $\rho$, associated to the full distribution $f$.
The argument above is only a formal justification, but a more rigorous one is given in the following Lemma.
\begin{lem} \label{lem:NParttoVla}
Assume that ${\mathcal Z}^N$ is a global solution to the system of ODE~\eqref{eq:Npart} in the sense of Definition~\ref{def:ODErig}. Then with the assumption that $W'(0)=0$, the associated empirical measure $\mu^N_{\mathcal Z}$ is a global weak solution to the Vlasov-Poisson equation~\eqref{eq:Vla}.
\end{lem}
As usual ``weak solution'' means here solution in the sense of distribution.
We will not write the proof of that property which is classical, at least in the case where $W'$ is Lipschitz. We shall just mention that here, we still have a sufficient regularity to perform the requested calculations. In fact, for any smooth $\varphi$, we may use the chain rule to differentiate in time $t \mapsto \varphi(t,X_i^N(t),V_i^N(t))$ almost everywhere in time. Once the chain rule is applied, we get the result by integration in time and
summation on all the particles. Remark that the assumption $W'(0)=0$ is crucial here.
That property is interesting because it helps to rewrite the problem of the mean field limit as a problem of stability for the Vlasov-Poisson equation~\eqref{eq:Vla}. If the sequence of initial empirical measures $\bigl(\mu^N_{\mathcal Z}(0)\bigr)$ converges towards some distribution $f^0$, the mean field limit will hold if:
\begin{itemize}
\item[i)] there is a unique solution starting from $f^0$ in a suitable class,
\item[ii)] that solution is stable (in time) when we approach it by sum of
Dirac masses.
\end{itemize}
In this article, we shall in fact show a ``weak-strong'' stability principle for the Vlasov-Poisson equation~\eqref{eq:Vla}: if
a solution $f$ of the Vlasov equation has a
density $\rho$ that remains bounded (uniformly in position) along time, then it is stable in the class of
measure solutions with finite first order moment in $v$. Of course, this stability result implies the uniqueness in the class of measures (with finite first moment in $v$)
of the solution starting from the initial condition $f^0$ of $f$. For that reason, that kind of result is more commonly
called ``weak-strong'' uniqueness principle. But we choose to emphasize on
the stability, that is maybe more important for the mean-field limit.
\medskip
\subsection{Mean-field limit using notation from probability.}
The system of ODE~\eqref{eq:ODElim} has also an interesting reformulation if we use an idea and a notation borrowed to the theory of probability and more precisely to the theory of continuous stochastic processes. We may say that the $X_t =X(t)$ and $V_t=V(t)$ appearing in the ODE~\eqref{eq:ODElim} are trajectories of a
``non stochastic'' process. Even if there is no noise, the probabilistic notation is quite useful for two reasons:
\begin{itemize}
\item We can interpret $f^0$ as the law of the initial position and speed $(X_0,V_0)$,
\item the low regularity of the interaction force $W'$ implies that the
solutions of~\eqref{eq:ODElim} are not unique if $f_t$ is
not regular enough. This introduces naturally some ``randomness'', as some choices have to be made for the
construction of solutions.
\end{itemize}
We mention that this ``probabilistic'' point of view was already introduced for
(deterministic) ODE by Ambrosio in his study of linear
transport and associated ODE with low regularity \cite[Section 5]{Amb04} (see also references therein for earlier works with a similar point of view). In fact, we shall use several times results proved in his well-written lecture notes \cite{AmbNotes}.
His ideas were also generalized later by Figalli to the case of SDEs with possibly degenerate noise \cite{Fig08}.
Here we mostly adopt this point of view in the nonlinear setting, and with a slightly different notation.
Precisely, the system~\eqref{eq:ODElim} may be
rewritten as a \emph{nonlinear ODE}. $f_t = f(t)$ will be the law at time $t$
of the couple $Z_t=(X_t,V_t)$ and we use the notation $Z_\cdot$ to refer to a
trajectory, \emph{i.e.} an element of $C([0,+\infty),{\mathbb{T}} \times {\mathbb{R}})$.
\begin{defi}[Nonlinear ODE] \label{def:ODENL}
A solution of the non linear ODE below with initial condition $\nu^0 \in \mathcal P({\mathbb{T}} \times {\mathbb{R}})$ is a probability $\mathbb P$ on $C([0,+\infty),{\mathbb{T}} \times {\mathbb{R}})$ such that $\mathbb P(Z_0 \in \cdot) = \nu^0$ and
the following equalities hold for $\mathbb P \times \mathbb P$-almost every $(Z_\cdot ,\bar Z_\cdot)$
\begin{equation} \label{eq:ODENL}
\forall t \in [0,+\infty), \quad \begin{cases}
\displaystyle X_t = X_0 + \int_0^t V_s \,ds, \\
\displaystyle V_t = V_0 - \int_0^t \mathbb{E}_{\bar Z_\cdot} \bigl[ W'(X_s- \bar X_s) \bigr] \,ds.
\end{cases}
\end{equation}
%
If, as usual, we prefer to speak of random variables rather than of laws, we must
just say that $Z_\cdot$ is a random variable, that the law of $Z_0$ is $\nu^0$,
and that the previous equations are satisfied with $\bar Z_\cdot = (\bar
X_\cdot,\bar V_\cdot)$ an independent copy of $Z_\cdot$, \emph{i.e.} an
independent process with the same law as $Z_\cdot$. Remark that as in the
previous system of ODEs, the requirement of equation~\eqref{eq:ODENL} is
equivalent to ask that the associated equality on $\dot X_t$ and $\dot V_t$ are
true almost surely and for almost all time $t \in [0,+\infty)$.
\end{defi}
In fact this definition is just a reformulation of the
system~\eqref{eq:ODElim}, with expectation instead of integral, and with the
vocabulary of stochastic processes.
The law of a process $Z_\cdot$ solution to~\eqref{eq:ODENL} is concentrated on deterministic solutions to~\eqref{eq:ODElim}. But this formulation has some interest when we
will use optimal transport and coupling techniques. The probabilistic notation in
which the effective probability spaces and optimal mappings are never
explicitly written is sometimes more simple to handle.
It also allows to work directly on trajectories with a
simple notation.
We refer to~\eqref{eq:ODENL} as a nonlinear ODE, in reference to the nonlinear SDEs introduced by
McKean in~\cite{McKean}, also known as McKean-Vlasov processes. These nonlinear
SDEs correspond to add a brownian drift (or more generally a random force field)
in the second equation of~\eqref{eq:ODENL}, and possibly the action of an
exterior potential. As in our deterministic case, these nonlinear SDEs are
natural limits of N particle systems with noise \cite{McKean2,Meleard}.
In the case of smooth interactions, the so-called \emph{method of characteristics} implies that the Vlasov-Poisson equation~\eqref{eq:Vla} and the associated nonlinear ODE~\eqref{eq:ODENL} give exactly the same solutions. In fact, when the force field is sufficiently regular for uniqueness of the trajectories to hold, the two problems are equivalent. With non smooth interactions, the uniqueness of trajectories is lost and we are only able to state the following lemma.
\begin{lem} \label{lem:ODENL_mart}
Assume that $Z_\cdot$ is a solution of the nonlinear ODE~\eqref{eq:ODENL}.
Then, the law $\nu_t$ of its time marginals $Z_t$ is a weak solution of the
Vlasov-Poisson equation~\eqref{eq:Vla}.
Reciprocally, if $\nu_t$ is a bounded measure solution to
the Vlasov-Poisson equation~\eqref{eq:Vla}, there
exists a process $Z_\cdot$ solution to~\eqref{eq:ODENL} whose time marginals
are exactly the $\nu_t$.
\end{lem}
Without noise, the first part of this lemma is just an application of the chain rule.
The second part of the lemma is more delicate. It is a consequence of a theorem
of Ambrosio, for linear transport equation. In fact $\nu_t$ is a solution of
the linear transport equation
$$
\partial_t \mu + v \partial_x \mu + E[\nu_t] \partial_v \mu = 0,
$$
where the fixed force field is given by $E[\nu_t] := - \int
W'(x-y) \nu_t(dy,dw)$. In that setting, we can apply
\cite[Theorem 3.2]{AmbNotes}
to obtain a process $Z_\cdot$ solution to~\eqref{eq:ODENL} with time marginals
$\nu_t$.
\medskip
Another
interesting property associated to these nonlinear ODE is the following
\begin{lem} \label{lem:NPtoODENL}
If ${\mathcal Z}^N_\cdot$ is a solution to the N particles system~\eqref{eq:Npart}, then the empirical measure on the trajectories
$$
\mathbb P^N_{{\mathcal Z}_\cdot} := \frac1N \sum_{i=1}^N \delta_{Z^N_{i,\cdot}}
$$
is a solution of the nonlinear ODE~\eqref{eq:ODENL} with initial condition $\mu_{\mathcal Z}^N(0)$.
\end{lem}
As for Lemma~\ref{lem:NParttoVla}, this is more a way of rewriting the equations~\eqref{eq:Npart},
and we will not be more precise. Note that there is no reciprocal statement:
along solutions of the nonlinear ODE~\eqref{eq:ODENL} starting from sums of Dirac masses,
the singularity of the force allows some Dirac masses to split into smaller ones and also into less singular measures supported on a line \cite{ZM2,ZM3,DFV13}.
\medskip
Using approximation by sum of Dirac masses (a strategy already used in
aggregation equation, see \cite{CDFLS} and references therein), we will be able
to give another proof of a general existence theorem for the one-dimensional VP equation obtained by Zheng and Majda~\cite{ZM1}
\begin{thm}\label{thm:ODENLex}
Assume that $\nu^0$ is a probability measure with finite first moment in velocity : $\int |v|
\,\nu^0(dx,dv) < \infty$, and recall that we use the convention $W'(0)=0$.
Then there exists (at least one) global solution to the Vlasov equation~\eqref{eq:Vla},
with initial condition $\nu^0$, and also one process solution of the nonlinear
ODE~\eqref{eq:ODENL} with the same initial condition.
\end{thm}
The proof of Theorem~\ref{thm:ODENLex} is done in Section~\ref{sec:proof}. Here we will only make some comments :
\begin{itemize}
\item In order to define properly the interaction force (which has to be defined everywhere if we want to define measure solutions) we have introduced the convention $W'(0)=0$. Majda and Zheng~\cite{ZM1} use a strategy that may seem different, but both are in fact equivalent.
\item Our result is more general than the one of Zheng and Majda, which requires an exponential moment (in $v$) on the initial measure $\nu^0$.
\item Our proof is also in some sense simpler. It relies on the approximation of the initial data by sums of Dirac masses, for which we know by Proposition~\ref{prop:Nexis} that there exist global solutions to~\eqref{eq:Npart} and thus to~\eqref{eq:ODENL}. Then, the tightness of the law of these processes is
obtained rather easily, but more work is required to characterize the limit. For the last point, we first prove that the time marginals associated to any limit process are solutions of the VP equation~\eqref{eq:Vla}. Once the time marginals, and therefore the force field, are known, we can
apply the second point of Lemma~\ref{lem:ODENL_mart} to get the existence of a process solution to~\eqref{eq:ODENL} with the requested time marginals. But we were not able to prove that the limit process itself is a solution to the nonlinear ODE~\eqref{eq:ODENL}. Of course, the use of the nonlinear ODE~\eqref{eq:ODENL} is not mandatory
if we are interested only in the result on the Vlasov-Poisson equation, but it will nevertheless be convenient, since we will need to perform some estimates on the trajectories.
\end{itemize}
\subsection{A weak-strong stability estimate around solutions to VP equation with bounded density}
Here we will state the main result of our article : a weak-strong stability
estimate around solutions of the Vlasov-Poisson equation~\eqref{eq:Vla} with
bounded density. Recall that in this article we use the word ``density'' only
to refer to the density in physical space $\rho(t,x) = \int f(t,x,v) \,dv$.
We introduce the Monge-Kantorovitch-Wasserstein distance of order one, denoted
by $W_1$ constructed with the Euclidean distance on ${\mathbb{T}}\times {\mathbb{R}}$. We refer to the
clear book of Villani \cite{Vill03} for an introduction to this object. We will
use it with either random variables or their laws (which are probabilities) as arguments, depending on
the situation.
\begin{thm} \label{thm:proc}
Assume that $Z^1_\cdot$ is a solution of the nonlinear ODE~\eqref{eq:ODENL},
with time marginals $f_t$ that have bounded density $\rho_t$ for any time $t
\ge 0$.
Then for any global solution $Z^2_\cdot$ of the nonlinear ODE~\eqref{eq:ODENL} with finite first order moment in $v$,
we have the following stability estimate
$$
\forall t \in {\mathbb{R}}^+,
\qquad
W_1( Z^1_t, Z^2_t) \le e^{a(t) } W_1(Z^1_0,Z^2_0),
\qquad \text{with}
\quad
a(t) := \sqrt 2 \, t + 8 \int_0^t \| \rho_s \|_\infty \,ds.
$$
\end{thm}
\begin{thm} \label{thm:VP}
Assume that $f_t$ is a solution of the Vlasov Poisson
equation~\eqref{eq:Vla}
with bounded density $\rho_t$ for any time $t \ge 0$. Then for any
global measure solution $\nu$
to the same equation with finite first order moment in $v$, we have the following stability estimate
$$
\forall t \in {\mathbb{R}}^+,
\qquad
W_1( f_t, \nu_t) \le e^{a(t)} W_1(f^0,\nu_0),
\qquad \text{with}
\quad
a(t) := \sqrt 2 \, t + 8 \int_0^t \| \rho_s \|_\infty \,ds.
$$
\end{thm}
Theorem~\ref{thm:VP} implies Theorem~\ref{thm:proc}, since time marginals of solutions of the nonlinear ODE~\eqref{eq:ODENL} are solutions of the
Vlasov-Poisson equation~\eqref{eq:Vla}. But, the converse is also true, since
thanks to the second part of Lemma~\ref{lem:ODENL_mart}, any solution of the
VP equation may be represented as time marginals of a solution of the
nonlinear ODE. In view of that, we will only prove Theorem~\ref{thm:proc} in
Section~\ref{sec:proof}. But remark also that this proof, written with the
formalism of processes, may be adapted to the formalism of solutions of the Vlasov-Poisson
equation~\eqref{eq:Vla}, with less simple notation (in our opinion). So, again, the choice of working with the VP equation or the nonlinear ODE is rather a matter of convenience.
\subsection{Existence of strong solutions.}
The latter result is interesting only if solutions with bounded density do
exist. But, this is a well-known fact \cite{CK80,Bos05,Guo95,LMR10}.
Here we will restate a Proposition about the existence of such solutions that
relies precisely on Theorem~\ref{thm:proc} or~\ref{thm:VP}. Remark that Theorem~\ref{thm:VP} implies that such solutions are automatically unique.
\begin{prop} \label{prop:exisVP}
Assume that $f_0$ satisfies
\begin{equation} \label{eq:condL1}
\int_0^{+\infty} g_0(v) \,dv < + \infty, \quad
\| f_0 \|_\infty = g_0(0) < + \infty,
\qquad \text{where} \quad
g_0(v) := \sup_{x \in {\mathbb{T}}, |w| \ge v} |f_0(x,w)|.
\end{equation}
Then there exists a unique solution $Z_\cdot$ to the nonlinear
ODE~\eqref{eq:ODENL}, with initial condition $f^0$.
There also exists a unique solution to the Vlasov-Poisson
equation~\eqref{eq:Vla} with initial condition $f^0$, given by the time
marginals $f_t$ of $Z_\cdot$.
The density (in position) $\rho_t$ associated to $f_t$ satisfies in particular
the bound
$$
\| \rho_t \|_\infty \le 2 \int_0^{+\infty} g_0(v) \,dv +
\|f_0\|_\infty t.
$$
\end{prop}
\subsection{Mean-field limit.}
If the initial condition $f_0$ satisfies the
condition~\eqref{eq:condL1}, the mean-field limit around the unique solution $f$ starting from $f_0$ is a direct consequence of
Theorem~\ref{thm:VP}. In fact, it suffices to apply this theorem to the particular case
of initial data $\nu_0$ given by sum of $N$ Dirac masses. This is precisely
stated in the following corollary, for which we will not give a specific proof.
\begin{cor}[of Theorem~\ref{thm:VP}] \label{cor:MFL}
Assume that $f_0$ satisfies the condition~\eqref{eq:condL1} and denote by
$f_t$ the unique solution of the VP equation~\eqref{eq:Vla} with initial condition $f_0$.
Proposition~\ref{prop:exisVP} ensures that its density $\rho_t$ is bounded.
For any ${\mathcal Z}^N$ solution of the $N$ particles system~\eqref{eq:Npart} with
initial empirical measure $\mu^N_{\mathcal Z}(0)$, we have the following stability
estimate
\begin{equation} \label{eq:MFL}
W_1(\mu^N_{\mathcal Z}(t),f_t) \le e^{a(t)} W_1(\mu^N_{\mathcal Z}(0), f_0)
\qquad \text{with}
\quad
a(t) := \sqrt 2 \, t + 8 \int_0^t \| \rho_s \|_\infty \,ds.
\end{equation}
\end{cor}
This implies the mean-field limit in large number of particles since if the
positions and velocities of the particles are chosen so that
$$
\mu^N_{\mathcal Z}(0) \xrightarrow
[N \rightarrow +\infty]{} f^0 \; \text{ weakly},
\qquad
\sup_{n \in {\mathbb{N}}} \int |v| \mu_{\mathcal Z}^N(0,dxdv)
\rightarrow
\int |v| f_0(dx,dv),
$$
then $W_1(\mu_{\mathcal Z}^N(0),f_0) \rightarrow 0$ (See \cite{Vill03}). The above corollary implies that the weak convergence holds at any time.
Remark that we may also obtain a similar result on the trajectories, \emph{i.e.}
on the solutions of the nonlinear ODE~\eqref{eq:ODENL}.
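In practice, the distance $W_1$ between two empirical measures on ${\mathbb{T}}\times{\mathbb{R}}$ can be computed exactly by solving the associated transportation linear program. The following sketch (ours; reasonable only for small point clouds, a dedicated optimal transport solver being preferable for large $N$) may be used to monitor the left hand side of~\eqref{eq:MFL} along a simulation:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def torus_cost(Z1, Z2):
    # Euclidean cost on T x R between point clouds of shapes (n, 2), (m, 2),
    # with the periodic distance in the position variable
    dx = np.abs(Z1[:, None, 0] - Z2[None, :, 0])
    dx = np.minimum(dx, 1.0 - dx)
    dv = Z1[:, None, 1] - Z2[None, :, 1]
    return np.sqrt(dx ** 2 + dv ** 2)

def W1(Z1, Z2):
    # W_1 between the two uniform empirical measures, via the transport LP
    n, m = len(Z1), len(Z2)
    C = torus_cost(Z1, Z2).ravel()
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # sum_j pi_{ij} = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # sum_i pi_{ij} = 1/m
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

rng = np.random.default_rng(4)
Z1 = np.column_stack([rng.uniform(-0.5, 0.5, 30), rng.normal(0., 0.2, 30)])
Z2 = np.column_stack([rng.uniform(-0.5, 0.5, 50), rng.normal(0., 0.2, 50)])
print(W1(Z1, Z2))
\end{verbatim}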
\subsection{Propagation of molecular chaos}
Around profiles with bounded density, the propagation of chaos is also a
consequence of the ``weak-strong'' stability theorem.
\begin{cor}[of Theorem~\ref{thm:VP}]
Assume that $f^0$ satisfies the condition~\eqref{eq:condL1}. Then
with the same notation as in Corollary~\ref{cor:MFL}, we have
$$
\mathbb{E} \bigl[
W_1(\mu^N_{\mathcal Z}(t),f_t) \bigr]
\le e^{a(t)} \mathbb{E} \bigl[ W_1(\mu^N_{\mathcal Z}(0), f_0) \bigr]
$$
\end{cor}
We shall not give the proof of that result, which is obtained by taking the expectation in~\eqref{eq:MFL}. We just make some comments:
\begin{itemize}
\item it is also possible to obtain other related results, for instance bounds
on the
exponential moments of $W_1(\mu^N_{\mathcal Z}(t),f_t)$ can be obtained in terms of
exponential moments of $W_1(\mu^N_{\mathcal Z}(0), f_0)$,
which are known to exist, see for instance \cite{BoissardPhD},
\item the result may be adapted to the ``language'' of nonlinear ODE,
\item our results are stated in terms of convergence in law of the empirical
measure towards $f_t$. For the relations to the more usual convergence of
$k$-particles marginals ($k\ge 2$), we refer for instance to the famous lecture notes by
Sznitman~\cite{Sznitman}. We also mention that
quantitative equivalence estimates were also recently obtained
in \cite{MischMou} and \cite{HauMisch}.
\end{itemize}
\section{Proofs of the main results} \label{sec:proof}
\subsection{Proof of Proposition~\ref{prop:Nexis}}
The existence will be proved with the help of the theory of differential inclusion \cite{Filippov}.
First we will construct an associated differential inclusion, to which solutions do exist, and then prove that these solutions are indeed solutions to the original problem.
\medskip
{\sl Step 1. The construction of an adapted differential inclusion.}
We will replace the system of equation~\eqref{eq:Npart} by a differential inclusion
\begin{equation} \label{eq:inc}
\dot {\mathcal Z}^N(t) \in \mathcal B^N \bigl({\mathcal Z}^N(t)\bigr),
\end{equation}
where $ \mathcal B^N$ is set-valued, \emph{i.e.} is an application from ${\mathbb{R}}^{2N}$ into $\mathcal P({\mathbb{R}}^{2N})$. A Theorem by Filippov \cite[Chapter 2, Theorem 1]{Filippov} ensures the existence of global solutions to the differential inclusion~\eqref{eq:inc} for any initial condition if :
\begin{itemize}
\item for each ${\mathcal Z}^N \in {\mathbb{R}}^{2N}$, the set $\mathcal B^N \bigl({\mathcal Z}^N\bigr)$ is bounded, closed, and convex,
\item $\mathcal B^N$ is locally bounded, \emph{i.e.} for any compact set $K \subset {\mathbb{R}}^{2N}$ there exists a compact set $K' \subset {\mathbb{R}}^{2N}$ such that $\mathcal B^N({\mathcal Z}^N) \subset K'$ for all ${\mathcal Z}^N \in K$,
\item $\mathcal B^N$ is upper semi-continuous with respect to inclusion.
\end{itemize}
A natural $\mathcal B^N$ may be constructed as follows. We will not write the full application $\mathcal B^N$, but assume that it has the form
$$
\dot {\mathcal Z}^N(t) \in \mathcal B^N \bigl({\mathcal Z}^N(t)\bigr)
\;\Leftrightarrow \;
\dot X^N_i = V^N_i, \qquad \dot V^N_i = \frac1N \sum_j F_{ij}^N,
$$
where the forces $F^N_{ij}$ satisfy some conditions to be specified.
A first choice for the $F^N_{ij}$ is to ask that for all $i,j \le N$:
\begin{itemize}
\item $\displaystyle F_{ij}^N = -W'(X^N_i-X^N_j)$ when $X^N_i \neq X^N_j$,
\item $\displaystyle F_{ij}^N \in \Bigl[-\frac12, \frac12 \Bigr]$ when $X^N_i = X^N_j$.
\end{itemize}
The associated $\mathcal B^N$ satisfies all the required properties,
but unfortunately, the associated solutions will not be solution of the original ODE system in general.
For instance if you take two particles, both originally at position $0$, with velocity $0$, then according to~\eqref{eq:Npart} and the fact that $W'(0)=0$, they should never move. But with our construction of $\mathcal B^N$, the solution where the two particles remain stuck together and accelerate uniformly with acceleration $\frac12$ is admissible.
To overcome this problem, we should define $\mathcal B^N$ almost as before, but with the additional conditions that we want the \emph{action-reaction principle} to be valid in any case. The precise conditions are that for all $i <j \le N$:
\begin{itemize}
\item $\displaystyle F_{ij}^N = -F_{ji}^N = -W'(X^N_i-X^N_j)$ when $X^N_i \neq X^N_j$,
\item $\displaystyle F_{ij}^N = -F_{ji}^N \in \Bigl[-\frac12, \frac12 \Bigr]$ when $X^N_i = X^N_j$.
\end{itemize}
It is not difficult to see that this $\mathcal B^N$ still fulfils the requested properties, and \cite[Chapter 2, Theorem 1]{Filippov} ensures the existence of a solution, also denoted ${\mathcal Z}^N$, to the associated differential inclusion.
\medskip
{\sl Step 2. Solutions of the differential inclusion are solutions to the $N$ particles problem. }
According to the final definition of $\mathcal B^N$, we see that problems may occur only when two particles are at the same position. Here, we will only deal with particles $1$ and $2$, the other cases being similar. But since for any $t \in {\mathbb{R}}^+$ and any $\lambda >0$, the set $\bigl\{ s \in [0,t] \; \text {s.t. } X^N_1(s) =X^N_2(s)
\; \text{ and } | V^N_1(s) - V^N_2(s) | \ge \lambda \bigr\}$ is made of isolated points, we also have that
the set
$$
A_t := \bigl\{ s \in [0,t] \; \text {s.t. } X^N_1(s) =X^N_2(s)
\; \text{ and } V^N_1(s) \neq V^N_2(s) \bigr\}
$$
is negligible with respect to the Lebesgue measure.
Since the exact value of the force used on negligible set has no influence on the solution, it remains only to understand what happens on the set
$$
B_t := \bigl\{ s \in [0,t] \; \text {s.t. } X^N_1(s) =X^N_2(s) \; \text{ and } V^N_1(s) = V^N_2(s) \bigr\}
$$
For this set, we will use the following lemma, that we shall prove later.
\begin{lem} \label{lem:Winf}
Assume that $g : [0,t] \rightarrow {\mathbb{R}}$ is a Lipschitz function (differentiable almost everywhere by the Rademacher theorem). Then the set
$$
\{ s \in [0,t] \; \text{ s.t. } \; g(s) = 0 \;\text{ and } \; g'(s) \neq 0 \} \quad \text{is negligible.}
$$
\end{lem}
Applying this Lemma with the function $g(s) = V^N_1(s)- V^N_2(s)$ we get that the set
$$
\bigl\{ s \in [0,t] \; \text {s.t. } X^N_1(s) =X^N_2(s) \; \text{ and } \; V^N_1(s) = V^N_2(s)
\; \text{ and } \;
(V^N_1)'(s) \neq (V^N_2)'(s)
\bigr\}
$$
is negligible. Then the only part we have really to deal with is
$$
C_t := \bigl\{ s \in [0,t] \; \text {s.t. } X^N_1(s) =X^N_2(s) \; \text{ and } \; V^N_1(s) = V^N_2(s)
\; \text{ and } \;
(V^N_1)'(s) = (V^N_2)'(s)
\bigr\}.
$$
But remark that for $ s \in C_t$ the particles are at the same position and are submitted to the same global force. If no other particle shares their common position at this time, it follows from the (final) definition of $\mathcal B^N$ that $F_{12}^N = -F_{21}^N=0$.
If there is another particle that shares their common position, the situation is a little bit different.
Say for instance that the third particle is also at the same position. Then, the configuration where $F_{i,i+1} = -F_{i+1,i} = \frac12$ (with the convention $3+1 =1$) is admissible. In that case, $F_{12} \neq 0$, but it is also clear that we can set all the forces $F_{i,i+1}$ to zero without changing the global force seen by each of the three particles.
In general, since particles $1$ and $2$ are submitted to the same force at time $s$ (remember that $s \in C_t$), and because the action-reaction principle is preserved by the differential inclusion, this common force is also the one that applies to their center of mass. And in the case where $F_{12}(s) \neq 0$, we will not change the solution if we replace the $F_{ij}^N$ by the forces $\tilde F_{ij}^N$ acting on their center of mass, defined by
$$
\tilde F_{12}=0 , \qquad \tilde F_{1i}= \tilde F_{2i} = - \tilde F_{i1} = - \tilde F_{i2}= \frac12 (F_{1i}+ F_{2i}) \; \text{ for }\; i \ge 3.
$$
In fact that replacement is acceptable by the convexity of $\mathcal B^N$, and this modification does not affect the global force seen by each particle. It means that the original equation~\eqref{eq:Npart} is already fulfilled at this time $s$.
Finally, we have proved that the system of equations~\eqref{eq:Npart} is satisfied for almost all times and for all couples $i \neq j$. As already said, this is equivalent to Definition~\ref{def:ODErig}.
\begin{proof}[Proof of Lemma~\ref{lem:Winf}]
For simplicity we will assume that $g$ has a Lipschitz constant equal to one. If for all $\lambda>0$, the set $A_\lambda := \{ s \in [0,t] \; \text{ s.t. } \; g(s) = 0 \;\text{ and } \; g'(s) \ge \lambda \}$ is negligible, and if a similar statement is also true for the negative values of $g'$, then it is not difficult to conclude.
So we will prove by contradiction that $A_\lambda$ is of zero measure for any $\lambda >0$. If not, then we may choose a $\lambda >0$ such that
$A_\lambda$ has strictly positive measure. It is then classical that $A_\lambda$ possesses a point $s$ of density one (see for instance \cite{Stein}, in particular for the argument with non-centered intervals), for which
$$
\lim_{\varepsilon,\varepsilon' \rightarrow 0} \alpha(\varepsilon,\varepsilon') = 1, \qquad
\text{where} \quad \alpha(\varepsilon,\varepsilon') :=
\frac{ \bigl|
\{ u \in [s-\varepsilon,s+ \varepsilon'] \; \text{ s.t. } \; g(u) = 0 \;\text{ and } \; g'(u) \ge \lambda \}
\bigr| }{ \varepsilon+\varepsilon'}
$$
But we also have
\begin{align*}
g(s+\varepsilon') - g(s-\varepsilon) & = \int_{s-\varepsilon}^{s+\varepsilon'} g'(r) \,dr \ge
(\varepsilon + \varepsilon') \bigl[ \lambda \alpha(\varepsilon,\varepsilon') - (1-\alpha(\varepsilon,\varepsilon')) \bigr]
\\
& \ge
(\varepsilon + \varepsilon') \bigl[ (1+ \lambda) \alpha(\varepsilon,\varepsilon') - 1 \bigr],\\
& \ge \frac \lambda 2 (\varepsilon + \varepsilon') \qquad \text{for all } \; \varepsilon,\varepsilon' \; \text{ small enough}.
\end{align*}
But since $s$ has density one in $A_\lambda$, we may choose arbitrarily small $\varepsilon$ and $\varepsilon'$ such that $s-\varepsilon$ and $s+\varepsilon'$ belong to $A_\lambda$, where $g$ vanishes; this is a contradiction.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:ODENLex}}
The proof is separated into five steps. In the first one, we construct an approximating sequence made of solutions to the $N$ particle systems. In the second, we show the tightness of that sequence, and in the last three ones, we show that the time marginals of the limit process are solutions of the Vlasov-Poisson equation~\eqref{eq:Vla}. After that, the existence of a process solution to the nonlinear ODE~\eqref{eq:ODENL} can be obtained by an application of the second point of Lemma~\ref{lem:ODENL_mart}.
\medskip
{\sl Step 1. Construction of the approximating sequence.}
This is quite simple. For each $N$, choose a configuration ${\mathcal Z}^N_0$ of initial positions and velocities, such that the associated empirical measures $\mu^N_0 = \mu_{\mathcal Z}^N(0)$ converge weakly towards the measure $\nu_0$. Since $\int |v| \nu_0(dx,dv) <+ \infty$, we may also assume that
\begin{equation} \label{estim:mominit}
\sup_{N \in {\mathbb{N}}} \int |v| \mu_{\mathcal Z}^N(0,dx,dv) < + \infty.
\end{equation}
Applying Proposition~\ref{prop:Nexis}, we obtain a global solution ${\mathcal Z}^N_\cdot$ to the system of ODE~\eqref{eq:Npart}. Then Lemma~\ref{lem:NPtoODENL} ensures that it also defines a process denoted by $Z^N_\cdot$, solution to the nonlinear ODE~\eqref{eq:ODENL} with initial
condition $\mu_{\mathcal Z}^N(0)$.
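One concrete way to perform this step (an illustration; any other construction with the same two properties works equally well) is to sample the initial positions and velocities i.i.d.\ from $\nu_0$: by the law of large numbers the empirical measures then converge weakly almost surely, and the first moments in~\eqref{estim:mominit} stay bounded.
\begin{verbatim}
import numpy as np

def sample_initial(N, rng):
    # draw Z_i^N(0) i.i.d. from nu_0; here nu_0 = uniform(x) x Gaussian(v)
    # is only a stand-in for a generic measure with finite first moment in v
    X0 = rng.uniform(-0.5, 0.5, N)
    V0 = rng.normal(0.0, 0.2, N)
    return X0, V0

rng = np.random.default_rng(2)
for N in (10, 100, 1000, 10000):
    X0, V0 = sample_initial(N, rng)
    print(N, np.mean(np.abs(V0)))   # first moments stay bounded (LLN)
\end{verbatim}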
\medskip
{\sl Step 2. Tightness of the approximating sequence.}
Using the equation~\eqref{eq:ODEpre}, we get that for almost all $(s,u)$ satisfying $0<s<u<t$
\begin{equation} \label{estim:VLip}
|V^N_u -V^N_s| \le \int_s^u \mathbb{E}_{\bar Z_\cdot }\bigl[ | W'(X^N_r - \bar X^N_r)| \bigr] \,dr
\le \frac12 |u-s| .
\end{equation}
And with the particular choice $s=0$, we obtain the bound $| V_u^N| \le |V_0^N| + \frac u 2$.
That bound allows to obtain a similar estimate for the position
\begin{equation} \label{estim:XLip}
|X^N_u -X^N_s| \le \int_s^u |V^N_r| \,dr \le \Bigl( |V^N_0| + \frac t2 \Bigr) |s-u|.
\end{equation}
Merging estimates~\eqref{estim:VLip} and~\eqref{estim:XLip} and taking the supremum on all couple $(u,s)$ we get
$$
\sup_{s,u \in [0,t]} \frac{ |Z^N_u -Z^N_s|}{|u-s|} \le |V^N_0| + \frac {t+1}2.
$$
Taking the expectation and using~\eqref{estim:mominit}, we get that
\begin{equation} \label{estim:exp}
\sup_{N \in {\mathbb{N}}} \mathbb{E} \bigg[ \sup_{s,u \in [0,t]} \frac{ |Z^N_u -Z^N_s|}{|u-s|} \biggr] < + \infty.
\end{equation}
This implies the tightness. In fact, by the Arzel\`a-Ascoli theorem, the subsets $\mathcal B_R^\lambda(t)$ of $C([0,t],{\mathbb{T}} \times {\mathbb{R}})$ defined by
$$
\mathcal B_R^\lambda(t) := \Bigl\{ Z_\cdot \text{ s.t. } |V_0| \le R \quad \text{and} \quad
\bigl\| Z' \bigr\|_\infty \le \lambda
\Bigr\}
$$
are compact for all $R,\lambda,t \in {\mathbb{R}}^+$. But the fact that
$\mathbb P(Z_0^N \in \cdot) = \mu^N_0$, together with estimates~\eqref{estim:mominit} and~\eqref{estim:exp}, implies that the compact sets $\mathcal B_R^\lambda(t)$ are almost of full measure when $R$ and $\lambda$ are large.
Therefore, up to the extraction of a subsequence, we may assume that the sequence of processes $(Z^N_\cdot)$ converges weakly towards a process $Z_\cdot$. To understand this properly, we must say that $C([0,+\infty),{\mathbb{T}} \times {\mathbb{R}})$ is endowed with the topology of uniform convergence on every bounded time interval.
Remark also that the processes $Z^N_\cdot$ are concentrated on the set of trajectories such that $t \mapsto V^N_t$ are uniformly Lipschitz with constant $\frac12$, a set that is closed under the topology of uniform convergence on any bounded time interval. So this is also true for the process $Z_\cdot$ : almost surely, the velocity component of the limit process $Z_\cdot$ has Lipschitz trajectories, with constant $\frac12$.
\medskip
{\sl Step 3. Characterization of the limit process $Z_\cdot$.}
First by construction it is clear that the limit process $Z_\cdot$ satisfies the required initial condition.
Next we will show that its time marginals satisfy the Vlasov-Poisson equation~\eqref{eq:Vla}.
For this choose two times $s < t$ and a smooth function $\varphi$. We need to show that
\begin{equation} \label{eq:mart2}
\begin{split}
\mathbb{E}_{(Z_\cdot,\bar Z_\cdot)} \biggl[ \Bigl(
\varphi(t,Z_t) - \varphi(s,Z_s) - \int_s^t \bigl[ \partial_t \varphi(u,Z_u) + V_u \partial_x \varphi(u,Z_u)
\\
- W'(X_u -\bar X_u) \partial_v \varphi(u,Z_u) \bigr] \,du
\Bigr) \biggr] = 0.
\end{split}
\end{equation}
Since the $Z^N_\cdot$ are solutions of the nonlinear ODE~\eqref{eq:ODENL}, it is clear that this is satisfied if we replace $Z_\cdot$ by $Z_\cdot^N$. Because of the continuity of $\varphi$, we can use classical results, see Billingsley \cite[p. 30]{Bill}, and pass to the limit in
$$
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \bigl[ \varphi(t,Z^N_t) - \varphi(s,Z^N_s) \bigr]
\xrightarrow[\;N \rightarrow + \infty \;]{}
\mathbb{E}_{(Z_\cdot,\bar Z_\cdot)} \bigl[ \varphi(t,Z_t) - \varphi(s,Z_s) \bigr].
$$
In fact, only the term containing $W'$ is more involved, since the interaction force is discontinuous at the origin.
To control it, we need to separate the distant and close interactions.
For this, we choose an $\varepsilon>0$ and introduce a smooth cut-off function $\chi_\varepsilon$ defined on ${\mathbb{T}}$ and satisfying (we use again the identification ${\mathbb{T}} = \bigl[-\frac12, \frac12 \bigr)$)
$$
\chi_\varepsilon(x) = 1 \;\text{ if } \; |x| \le \varepsilon, \qquad \chi_\varepsilon(x) = 0 \;\text{ if }\; |x| \ge 2 \varepsilon.
$$
Then, we can decompose
\begin{align*}
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t & W'(X^N_u -\bar X^N_u) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \\
& = \mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) (1 - \chi_\varepsilon(X^N_u-\bar X^N_u)) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \\
& \hspace{10mm} +
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr].
\end{align*}
In the first term of the r.h.s., which takes into account the interactions between distant particles, everything is now continuous. So, as before, it is not difficult to pass to the limit in that term. It converges towards
$$
\mathbb{E}_{(Z_\cdot,\bar Z_\cdot)} \biggl[ \int_s^t W'(X_u -\bar X_u) \bigl(1- \chi_\varepsilon(X_u-\bar X_u) \bigr) \partial_v \varphi(u,Z_u) \bigr] \,du \biggr]
$$
It remains to control the second term. We will show that it goes to zero as $\varepsilon$ goes to zero, uniformly in $N$, and that this convergence also holds for the limit process $Z_\cdot$. This will conclude the proof.
\medskip
For this we choose a $\beta < \frac12$ and introduce a second smooth cut-off function defined on ${\mathbb{R}}$, such that
$$
\xi_\varepsilon(v) = 1 \;\text{ if } \; |v| \le \varepsilon^\beta, \qquad \xi_\varepsilon(v) = 0 \;\text{ if }\; |v| \ge 2 \varepsilon^\beta.
$$
We use it to separate the remaining term into two: one term taking into account the close interactions with large relative speed, and a second one taking into account the close interactions with small relative speed
\begin{align*}
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ & \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \\
& = \mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u)
(1- \xi_\varepsilon(V^N_u-\bar V^N_u) ) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \\
& \hspace{10mm} +
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u)
\xi_\varepsilon(V^N_u-\bar V^N_u) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr].
\end{align*}
\medskip
{\sl Step 4. The effects of close interaction with large relative speed.}
To understand the first term in the r.h.s. of the above decomposition, we choose a particular couple of trajectories, denoted by $(Z^N_\cdot,\bar Z^N_\cdot)$
(the notation is the same as for the processes, which are probabilities on trajectories, but we will always explicitly mention which case we are referring to). Since $\partial_v \varphi$ and $W'$ are bounded, we may write
\begin{align*}
\Bigl| \int_s^t & W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u) (1- \xi_\varepsilon(V^N_u-\bar V^N_u) ) \partial_v \varphi(u,Z^N_u) \bigr] \,du \Bigr| \le C \, \bigl| A(Z^N,\bar Z^N) \bigr| \\
& \text{where }\; A(Z^N,\bar Z^N) := \bigl\{ u \in [s,t] \; \text{ s.t. } \; |X^N_u-\bar X^N_u| < 2 \varepsilon \; \text{ and } \;
|V^N_u-\bar V^N_u| > \varepsilon^\beta
\bigr\}.
\end{align*}
We used the notation $|A|$ to denote the Lebesgue measure of any measurable subset $A$ of ${\mathbb{R}}$.
Now, we pick a time $r \in A(Z^N,\bar Z^N)$. Since $V^N_\cdot$ and $\bar V^N_\cdot$ are almost surely Lipschitz with constant $\frac12$, we may write that
$$
\forall u \in I^N_r := \biggl[ r, r + \frac{|V^N_r - \bar V^N_r|}2 \biggr], \quad |V^N_u - \bar V^N_u| \in \biggl[ \frac{|V^N_r - \bar V^N_r|}2, \frac{3\,|V^N_r - \bar V^N_r|}2 \biggr],
$$
and in particular, the sign of the relative speed does not change on the interval $I^N_r$.
The condition $\beta< \frac12$ ensures that the two particles will escape the zone $\{ |X^N_u-\bar X^N_u| < 2 \varepsilon \}$ during the time interval $I^N_r$. Precisely, they are inside this zone at time $r$, and will after that have an increase (or decrease) of relative position larger than $C |V^N_r - \bar V^N_r|^2 \ge C \varepsilon^{2 \beta} > 4 \varepsilon$ if $\varepsilon $ is small enough.
Next, the lower bound on the relative velocity obtained above implies that on the time interval $I_r^N$ the two particles cannot stay $2\varepsilon$-close for too long.
Such an encounter has a maximal duration of $\frac{8\varepsilon}{|V^N_r - \bar V^N_r|}$. Of course, by periodicity the same particles may have several close encounters on $I^N_r$ if their relative speed is large, but they cannot have more than $C |V^N_r - \bar V^N_r|^2$ of them.
Summing up, we find that the total duration of these close encounters on the time interval $I^N_r$ is smaller than
$ C \varepsilon |V^N_r - \bar V^N_r| = C' \varepsilon \bigl| I^N_r \bigr|. $
Covering $A(Z^N_\cdot,\bar Z^N_\cdot)$ with such intervals, we end up with
$$
\bigl| A(Z^N,\bar Z^N) \bigr| \le C \varepsilon |s-t|.
$$
Taking now the expectation, we get
\begin{align*}
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u)
(1- \xi_\varepsilon(V^N_u-\bar V^N_u) ) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \le C \varepsilon \, |t-s|,
\end{align*}
uniformly in $N$. Remark that the previous argument only requires that the velocities are almost surely Lipschitz (in time), so
it also applies to $Z_\cdot$ instead of $Z^N_\cdot$ (see the last remark at the end of Step 2).
\medskip
{\sl Step 5. The effect of close collision with small relative speed.}
For the remaining term, we shall use a symmetry argument similar to the one used by Delort in his proof of the existence of weak solutions to the 2D Euler equation \cite{Delo91}.
In fact, using the oddness of the interaction force $W'$, we get
\begin{align*}
& \mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u)
\xi_\varepsilon(V^N_u-\bar V^N_u) \partial_v \varphi(u,Z^N_u) \bigr] \,du \biggr] \\
& =
\mathbb{E}_{(Z^N_\cdot,\bar Z^N_\cdot)} \biggl[ \int_s^t W'(X^N_u -\bar X^N_u) \chi_\varepsilon(X^N_u-\bar X^N_u)
\xi_\varepsilon(V^N_u-\bar V^N_u) ( \partial_v \varphi(u,Z^N_u) - \partial_v \varphi(u,\bar Z^N_u) \bigr] \,du \biggr].
\end{align*}
But thanks to the two cut-off functions $\chi$ and $\xi$, we now have for all $(x,v,\bar x, \bar v)$
$$
\chi_\varepsilon(x- \bar x )
\xi_\varepsilon(v -\bar v ) \bigl| \partial_v \varphi(u,x,v) -
\partial_v \varphi(u,\bar x , \bar v) \bigr|
\le C \| \nabla^2 \varphi \|_\infty \varepsilon^\beta,
$$
and this implies that our last expectation is bounded by $C \varepsilon^\beta$, with a constant $C$
independent of $N$. That bound also applies when we replace $Z^N$ by $Z$.
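For completeness, let us spell out why the bound above holds; this is only a detail, stated under the smoothness assumption on $\varphi$ already used. On the support of $\chi_\varepsilon(x-\bar x)\,\xi_\varepsilon(v-\bar v)$ we have $|x-\bar x| \le 2\varepsilon$ and $|v-\bar v| \le 2\varepsilon^\beta$, so that by the mean value theorem
$$
\bigl| \partial_v \varphi(u,x,v) - \partial_v \varphi(u,\bar x , \bar v) \bigr|
\le \| \nabla^2 \varphi \|_\infty \bigl( |x-\bar x| + |v-\bar v| \bigr)
\le \| \nabla^2 \varphi \|_\infty \bigl( 2\varepsilon + 2\varepsilon^\beta \bigr)
\le 4\, \| \nabla^2 \varphi \|_\infty \, \varepsilon^\beta ,
$$
where the last inequality uses $\varepsilon \le \varepsilon^\beta$ for $\varepsilon \le 1$ and $\beta \in (0,1)$.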
\subsection{Proof of Theorem~\ref{thm:proc}}
The proof of our central result is rather short.
\medskip
Choose an optimal coupling of the random variables $Z^1_0$ and $Z^2_0$. We then couple the processes $Z^1_\cdot$ and $Z_\cdot^2$, using the previous coupling on the initial position independently of what happens after time $0$.
This is relatively simple if there exists a unique trajectory starting from any initial condition.
In the general case, the coupling is defined as the law on the couples $(Z^1_\cdot, Z^2_\cdot)$ of trajectories such that for any smooth function $\varphi$ on $({\mathbb{T}}\times {\mathbb{R}})^2$, and $\psi^1$ and $\psi^2$ on $C([0,\infty),{\mathbb{T}} \times {\mathbb{R}})$,
\begin{align*}
\mathbb{E}_{(Z^1_\cdot, Z^2_\cdot)} \bigl[ \varphi(Z^1_0,Z^2_0) \psi^1(Z^1_\cdot) \psi^2(Z^2_\cdot )\bigr] =
\mathbb{E}_{(Z^1_0, Z^2_0)} \bigl[\varphi(Z^1_0,Z^2_0) \bigr]
\times
\mathbb{E}_{Z^1_\cdot} \bigl[\psi^1(Z^1_\cdot) \big| Z^1_0 \bigr]
\times
\mathbb{E}_{Z^2_\cdot} \bigl[ \psi^2(Z^2_\cdot) \big| Z^2_0 \bigr].
\end{align*}
We also denote by $(\bar Z^1_\cdot,\bar Z^2_\cdot)$
an independent copy of the couple $(Z^1_\cdot,Z^2_\cdot)$. We can now
differentiate $\mathbb{E}[|Z^1_t - Z_t^2|]$ with respect to time and get
\begin{align*}
\frac d{dt} \mathbb{E}\bigl[|Z^1_t - Z_t^2| \bigr] & \le
\mathbb{E}\bigl[|V^1_t - V_t^2|\bigr]
+ \mathbb{E}\bigl[|W'(X^1_t - \bar X_t^1) - W'(X^2_t - \bar X_t^2)|\bigr] \\
& \le \sqrt 2 \, \mathbb{E}\bigl[|Z^1_t - Z_t^2| \bigr]
+ \mathbb{E} \bigl[|\sign(X^1_t - \bar X_t^1) - \sign(X^2_t - \bar X_t^2)|\bigr]
\end{align*}
To obtain the second line, we use the decomposition~\eqref{eq:Wdec} of $W'$,
and the fact that \mbox{$a+ b \le \sqrt{2(a^2+b^2)}$} for any $a,b \ge 0$.
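For the reader's convenience, we recall that this elementary inequality follows from
$$
(a+b)^2 = a^2 + 2ab + b^2 \le 2\,(a^2+b^2), \qquad a,b \ge 0,
$$
since $2ab \le a^2 + b^2$.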
To understand the remaining term, we remark first that $X^1_t-X^2_t$ and $\bar
X^1_t - \bar X^2_t$ are ``small'' since the trajectories of respectively
$(Z^1_\cdot,Z^2_\cdot)$ and $(\bar Z^1_\cdot, \bar Z^2_\cdot)$ are coupled in
a more or less optimized way, while $X^1_t - \bar X_t^1$ is usually ``large''
since the trajectories of $Z^1_\cdot$ and $\bar Z^1_\cdot$ are independent (the same is true for $Z^2_\cdot$ and $\bar Z^2_\cdot$).
Neglecting the problem raised by the periodization of the $\sign$ function,
which is not relevant, we see that $X^1_t - \bar X_t^1$ and $X^2_t - \bar X_t^2$ may have different signs only if
$$
|X^1_t - \bar X_t^1| \le |X^1_t-X^2_t| + |\bar X^1_t - \bar X^2_t|
\le 2 \max\bigl(|X^1_t-X^2_t|,|\bar X^1_t - \bar X^2_t|\bigr) .
$$
This condition is not symmetric with respect to $Z^1_\cdot$ and $Z^2_\cdot$,
but this asymmetry is very important in order to obtain our weak-strong stability
principle. Next
\begin{align*}
\mathbb{E}\bigl[|\sign(X^1_t - \bar X_t^1) - \sign(X^2_t - \bar X_t^2)| \bigr]
& \le \mathbb{E}\bigl[ {\mathbb 1}_{|
X^1_t - \bar X_t^1| \le 2 \max\bigl(|X^1_t-X^2_t|,|\bar X^1_t - \bar
X^2_t|\bigr)} \bigr] \\
& \le
\mathbb{E}\bigl[ {\mathbb 1}_{|
X^1_t - \bar X_t^1| \le 2 |X^1_t-X^2_t|} \bigr]
+
\mathbb{E}\bigl[ {\mathbb 1}_{|
X^1_t - \bar X_t^1| \le 2 |\bar X^1_t- \bar X^2_t|} \bigr] \\
& \le 2\, \mathbb{E}\bigl[ {\mathbb 1}_{|
X^1_t - \bar X_t^1| \le 2 |X^1_t-X^2_t|} \bigr] \\
& \le 2 \, \mathbb{E}_{Z^1_\cdot,Z^2_\cdot} \Bigl[ \mathbb{E}_{\bar Z^1_\cdot}
\bigl[ {\mathbb 1}_{|
X^1_t - \bar X_t^1| \le 2 |X^1_t-X^2_t|} \bigr]
\Bigr] \\
& \le 8 \| \rho_t\|_\infty \mathbb{E} \bigl[ |X^1_t-X^2_t| \bigr].
\end{align*}
In the third line, we have used the symmetry in $Z_\cdot, \bar Z_\cdot$.
The last line is obtained by using the independence of $\bar Z^1$ with respect
to $(Z^1_\cdot,Z^2_\cdot)$ and the assumption that $\bar X^1_t$ has a bounded density
$\rho_t$. All in all, we get that
$$
\frac d{dt} \mathbb{E} \bigl[|Z^1_t - Z_t^2|\bigr] \le (\sqrt 2 + 8 \| \rho_t\|_\infty )\mathbb{E} \bigl[|Z^1_t -
Z_t^2|\bigr].
$$
This concludes the proof.
\subsection{Proof of Proposition~\ref{prop:exisVP}}
The proof is made in four steps. In the first one, we construct an approximating sequence.
In the second one, we establish an a priori estimate on the density. In the third, we show that the approximating sequence has the Cauchy property. Finally, in the last one we prove that the limit process is a solution of the nonlinear ODE~\eqref{eq:ODENL}.
\medskip
{\sl Step 1. Construction of an approximating sequence.}
We will, as usual, mollify the interaction force $W'$. For $\varepsilon \in \bigl( 0, \frac12 \bigr)$ we define a piecewise linear and continuous approximation $W'_\varepsilon$ of $W'$, together with the corresponding potential $W_\varepsilon$, by
$$
W_\varepsilon(x) :=
\begin{cases}
\frac{x^2 - |x|}2 &\text{ if } \; |x| \in \bigl(\varepsilon, \frac12\bigr) \\
- \bigl(\frac1{2\varepsilon} -1 \bigr) \frac{x^2}2 - \frac \varepsilon 4 & \text{ if } \; x \in [-\varepsilon,\varepsilon]
\end{cases},
%
\qquad
- W'_\varepsilon(x) :=
\begin{cases}
-\frac12 -x &\text{ if } \; x \in \bigl[-\frac12, - \varepsilon\bigr) \\
\frac12 -x &\text{ if } \; x \in \bigl(\varepsilon, \frac12\bigr) \\
\bigl(\frac1{2\varepsilon} -1 \bigr) x & \text{ if } \; x \in [-\varepsilon,\varepsilon]
\end{cases}.
$$
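As a quick consistency check (not needed in the sequel), one can verify that the two branches above match at $|x|=\varepsilon$, so that $W_\varepsilon$ and $W'_\varepsilon$ are indeed continuous:
$$
W_\varepsilon(\varepsilon) = \frac{\varepsilon^2-\varepsilon}2
= -\Bigl(\frac1{2\varepsilon}-1\Bigr)\frac{\varepsilon^2}2 - \frac\varepsilon4,
\qquad
-W'_\varepsilon(\varepsilon) = \frac12 - \varepsilon
= \Bigl(\frac1{2\varepsilon}-1\Bigr)\varepsilon ,
$$
and similarly at $x=-\varepsilon$ by symmetry.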
Remark that $W'_\varepsilon$ is bounded by $\frac12$ and has Lipschitz regularity with constant $\frac1{2\varepsilon}$, and that $W_\varepsilon$ (as $W$) takes values in $\bigl[ -\frac18,0 \bigr]$. We now consider the nonlinear ODE with the interaction potential $W_\varepsilon$. In other words, we look at solutions, in the sense of Definition~\ref{def:ODENL}, to
\begin{equation} \label{eq:ODENLep}
\forall t \in [0,+\infty), \qquad
X^\varepsilon_t = X^\varepsilon_0 + \int_0^t V^\varepsilon_s \,ds, \qquad
V^\varepsilon_t = V^\varepsilon_0 - \int_0^t \mathbb{E}_{\bar Z^\varepsilon_\cdot} \bigl[ W'_\varepsilon(X^\varepsilon_s- \bar X^\varepsilon_s) \bigr] \,ds,
\end{equation}
with initial data with law $f_0$. But the well-posedness of the associated ``smoothed'' Vlasov-Poisson equation is now well understood. The existence and uniqueness of solutions is known even if $f_0$ is a measure. In that setting, the first important works were done by Braun and Hepp \cite{BraHep77}, Dobrushin \cite{Dobr79}, and Neunzert and Wick \cite{Neun79}.
Once we have a solution of the mollified VP equation~\eqref{eq:Vla}, we can use the second part of Lemma~\ref{lem:ODENL_mart} to obtain a process $Z^\varepsilon_\cdot$ solution of the nonlinear ODE~\eqref{eq:ODENLep}. But a (maybe) better strategy is to adapt one of the proofs mentioned above to the
setting of nonlinear ODEs. In fact, this corresponds to adapting our proof of the weak-strong stability result to the simpler case of Lipschitz interaction forces.
\medskip
{\sl Step 2. An a priori estimate on $\| \rho^\varepsilon_t \|_\infty$.}
From the equations~\eqref{eq:ODENLep} satisfied by $Z^\varepsilon_\cdot$, we get for all $t \in {\mathbb{R}}^+$
$$
| V_t^\varepsilon| \le |V_0^\varepsilon| + \int^t_0 \mathbb{E}_{\bar Z^\varepsilon_\cdot }\bigl[ | W'_\varepsilon(X^\varepsilon_s - \bar X^\varepsilon_s)| \bigr] \,ds
\le |V_0^\varepsilon| + \frac t 2.
$$
Since $f^\varepsilon$ is constant along the trajectories in our smooth setting, we get using~\eqref{eq:condL1} that
$$
| f^\varepsilon_t(X^\varepsilon_t,V_t^\varepsilon) | = | f_0(X^\varepsilon_0,V^\varepsilon_0) | \le g_0(|V^\varepsilon_0|)
\le g_0 \Bigl( |V_t^\varepsilon| - \frac t 2 \Bigr).
$$
We used that $g_0$ is decreasing. Remark that the previous bound does not make sense if $|V_0^\varepsilon| \le \frac t 2$.
But in that case, we always have $| f^\varepsilon_t(X^\varepsilon_t,V_t^\varepsilon) | \le \|f_0\|_\infty = g_0(0)$.
So it is consistent to extend $g_0$ to ${\mathbb{R}}^-$ by setting $g_0(x) = g_0(0)$ for all $x<0$.
So for a trajectory such that $(X^\varepsilon_t,V^\varepsilon_t) =(x,v)$, we get with that convention
$$
f_t^\varepsilon(x,v) \le g_0 \Bigl( |v| - \frac t 2 \Bigr).
$$
Integration on the variable $v$ leads to the bound
\begin{equation} \label{estim:Rho}
\rho^\varepsilon_t(x) = \int_{\mathbb{R}} f_t^\varepsilon(x,v) \,dv \le \int_{\mathbb{R}} g_0 \Bigl( |v| - \frac t 2 \Bigr) \,dv
\le g_0(0)\, t + 2 \int_0^{+\infty} g_0(v) \,dv.
\end{equation}
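Let us detail the last integral for completeness (using the convention $g_0 \equiv g_0(0)$ on ${\mathbb{R}}^-$ introduced above):
$$
\int_{\mathbb{R}} g_0 \Bigl( |v| - \frac t 2 \Bigr) \,dv
= 2 \int_0^{t/2} g_0 \Bigl( v - \frac t 2 \Bigr) \,dv
+ 2 \int_{t/2}^{+\infty} g_0 \Bigl( v - \frac t 2 \Bigr) \,dv
= g_0(0)\, t + 2 \int_0^{+\infty} g_0(u) \,du .
$$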
{\sl Step 3. The Cauchy property of the sequence $Z^\varepsilon_t$.}
Thanks to the a priori bound~\eqref{estim:Rho}, we may use a variant of the stability argument of Theorem~\ref{thm:proc}. In fact, if we want to compare two processes $Z^\varepsilon_\cdot$ and $Z^{\varepsilon'}_\cdot$ for two different $\varepsilon,\varepsilon'$, we can basically apply the same strategy as in the proof of that theorem with $Z^1_\cdot = Z^\varepsilon_\cdot$ and $Z^2_\cdot=Z^{\varepsilon'}_\cdot$. The only difference is that, since $Z^\varepsilon_\cdot$ is driven by the interaction potential $W_\varepsilon$ and $Z^{\varepsilon'}_\cdot$ is driven by $W_{\varepsilon'}$, we will get an extra term (and also a less important additional $\varepsilon$)
\begin{equation} \label{estim:Zep}
\frac d{dt} \mathbb{E}[|Z^\varepsilon_t - Z_t^{\varepsilon'}|] \le (\sqrt 2 + 8 \| \rho^\varepsilon_t\|_\infty ) \bigr( \mathbb{E}[|Z^\varepsilon_t -
Z_t^{\varepsilon'}|] + 2 \varepsilon' \bigr) +
\mathbb{E} \bigl[|W'_\varepsilon(X^\varepsilon_t - \bar X_t^\varepsilon) - W'_{\varepsilon'}(X^\varepsilon_t - \bar X_t^\varepsilon)| \bigr].
\end{equation}
But, from the definitions of $W'_\varepsilon$ and $W'_{\varepsilon'}$, we have
$$
|W'_\varepsilon(X^\varepsilon_t - \bar X_t^\varepsilon) - W'_{\varepsilon'}(X^\varepsilon_t - \bar X_t^\varepsilon)| \le \frac12 {\mathbb 1}_{|X^\varepsilon_t - \bar X_t^\varepsilon| \le \max(\varepsilon,\varepsilon')}.
$$
But, if we take the expectation and use the bound on $\rho^\varepsilon_t$ as in the proof of Theorem~\ref{thm:proc}, we can bound the second term in the right hand side of~\eqref{estim:Zep} by
$$
\mathbb{E} \bigl[|W'_\varepsilon(X^\varepsilon_t - \bar X_t^\varepsilon) - W'_{\varepsilon'}(X^\varepsilon_t - \bar X_t^\varepsilon)| \bigr]
\le
\| \rho^\varepsilon_t\|_\infty \max(\varepsilon,\varepsilon'),
$$
and finally get
$$
\frac d{dt} \mathbb{E}[|Z^\varepsilon_t - Z_t^{\varepsilon'}|] \le (\sqrt 2 + 9 \| \rho^\varepsilon_t\|_\infty ) \bigr( \mathbb{E}[|Z^\varepsilon_t -
Z_t^{\varepsilon'}|] + 2 \max(\varepsilon,\varepsilon') \bigr).
$$
After slight modifications, this calculation will also lead to the stronger statement that there exists a constant $C_t$ such that
$$
\mathbb{E}\Bigl[ \sup_{s \le t} |Z^\varepsilon_s - Z^{\varepsilon'}_s|\Bigr] \le C_t \max(\varepsilon,\varepsilon')
$$
This implies that the $Z^\varepsilon_\cdot$ form a Cauchy sequence for the weak topology on $\mathcal P \bigl(C([0,+\infty),{\mathbb{T}} \times {\mathbb{R}})\bigr)$.
So in particular they converge towards a stochastic process $Z_\cdot$.
\medskip
{\sl Step 4. Characterization of the limit process $Z_\cdot$.}
In order to prove that the limit process $Z_\cdot$ is a solution of the nonlinear ODE~\eqref{eq:ODENL} with the original interaction force $W'$, we must for instance prove that for every time $t >0$,
\begin{equation} \label{estim:Cont}
\mathbb{E}_{Z_\cdot} \Bigl[ \Bigl\vert X_t - X_0 - \int_0^t V_s \,ds \Bigr\vert \Bigr]=0,
\qquad
\mathbb{E}_{Z_\cdot} \Bigl[ \Bigl\vert V_t - V_0 + \int_0^t \mathbb{E}_{\bar Z_\cdot}\bigl[ W'(X_s-\bar X_s)\bigr] \,ds \Bigr\vert \Bigr]=0.
\end{equation}
The first point is simple. For a fixed time $t$, the application that maps a trajectory $\tilde Z_\cdot$ to $\Bigl\vert \tilde X_t - \tilde X_0 - \int_0^t \tilde V_s \,ds \Bigr\vert$ is continuous on $C([0,+\infty),{\mathbb{T}} \times {\mathbb{R}})$. It implies, see \cite[p.~30]{Bill},
that the application that sends a process $Z_\cdot$ to the first expectation in~\eqref{estim:Cont} is continuous. Since this expectation is already equal to zero for the approximated process $Z^\varepsilon_\cdot$, we can pass to the limit and get the required equality.
For the second expectation, we have to be more careful because of the discontinuities in the interaction force. But if we define $F(x) := - \mathbb{E}_{Z_\cdot}\bigl[ W'(x-\bar X_s)\bigr]$ (where $Z_\cdot$ stands for the fixed limit process), then the bound on the time marginals $\rho_t$ of $X_\cdot$ implies that $F$ is continuous. Then, the application
$$
\tilde Z_\cdot \mapsto \mathbb{E}_{\tilde Z_\cdot} \Bigl[ \Bigl\vert V_t - V_0 - \int_0^t F(X_s) \,ds \Bigr\vert \Bigr],
$$
is continuous. The problem is now that the previous expectation is not $0$ for the approximated process. It will vanish for the approximated process only if we replace $F$ by $F^\varepsilon(x) = - \mathbb{E}_{\bar Z^\varepsilon_\cdot}\bigl[ W'_\varepsilon(x-\bar X_s)\bigr]$. But even if it is not emphasized there, we have proved in the second step that $F^\varepsilon$ is a Cauchy sequence in $L^1({\mathbb{T}})$, and in particular it converges strongly in the $L^1$-norm towards $F$. This and the uniform bound on the density allow us to pass to the limit, and we obtain that the second expectation in~\eqref{estim:Cont} also vanishes. This concludes the proof.
\section{\textbf{Introduction}}
Let $F$ be a $C^{\infty }$-submersion from a Riemannian manifold $(M,g_{M})$
onto a Riemannian manifold $(N,g_{N}).$ Then according to the conditions on
the map $F:(M,g_{M})\rightarrow (N,g_{N}),$ we have the following
submersions:
semi-Riemannian submersion and Lorentzian submersion \cite{FAL}, Riemannian
submersion (\cite{BO1}, \cite{GRAY}), slant submersion (\cite{CHEN},
\cite{SAHIN1}), almost Hermitian submersion \cite{WAT}, contact-complex
submersion \cite{IANUS}, quaternionic submersion \cite{IANUS2}, almost $h$
-slant submersion and $h$-slant submersion \cite{PARK1}, semi-invariant
submersion \cite{SAHIN2}, $h$-semi-invariant submersion \cite{PARK2}, etc.
As we know, Riemannian submersions are related with physics and have their
applications in the Yang-Mills theory (\cite{BL}, \cite{WATSON}),
Kaluza-Klein theory (\cite{BOL}, \cite{IV}), Supergravity and superstring
theories (\cite{IV2}, \cite{MUS}). In \cite{SAHIN}, Sahin introduced
anti-invariant Riemannian submersions from almost Hermitian manifolds onto
Riemannian manifolds. He also suggested to investigate anti invariant
submersions from almost contact metric manifolds onto Riemannian manifolds
in \cite{SAHIN3}.
So the purpose of the present paper is to study similar problems for slant
Riemannian submersions from Sasakian manifolds to Riemannian manifolds. We
also want to carry anti-invariant submanifolds of Sasakian manifolds to
anti-invariant Riemannian submersion theory and to prove dual results for
submersions. For instance, a slant submanifold of a $K$-contact manifold is
an anti-invariant submanifold if and only if $\nabla Q=0$ (see Proposition
4.1 of \cite{alfonso}). We obtain a similar result in Proposition 4. Thus,
anti-invariant submersions from almost contact metric manifolds onto
Riemannian manifolds will be a worthwhile area of study.
The paper is organized as follows: In Section 2, we present the basic
information about Riemannian submersions needed for this paper. In Section
3, we recall Sasakian manifolds. In Section 4, we give the definition of
slant Riemannian submersions and introduce slant Riemannian submersions from
Sasakian manifolds onto Riemannian manifolds. We survey the main results on
slant submersions defined on Sasakian manifolds. We also give an example of
a slant submersion such that the characteristic vector field $\xi $ is vertical.
\section{\textbf{Riemannian Submersions}}
In this section we recall several notions and results which will be needed
throughout the paper.
Let $(M,g_{M})$ be an $m$-dimensional Riemannian manifold and let $(N,g_{N})$
be an $n$-dimensional Riemannian manifold. A Riemannian submersion is a
smooth map $F:M\rightarrow N$ which is onto and satisfies the following
axioms:
$S1$. $F$ has maximal rank.
$S2$. The differential $F_{\ast }$ preserves the lengths of horizontal
vectors.
The fundamental tensors of a submersion were defined by O'Neill (\cite{BO1}
\cite{BO2}). They are $(1,2)$-tensors on $M$, given by the formula
\begin{eqnarray}
\mathcal{T}(E,F) &=&\mathcal{T}_{E}F=\mathcal{H}\nabla _{\mathcal{V}E}
\mathcal{V}F+\mathcal{V}\nabla _{\mathcal{V}E}\mathcal{H}F, \label{AT1} \\
\mathcal{A}(E,F) &=&\mathcal{A}_{E}F=\mathcal{V}\nabla _{\mathcal{H}E}
\mathcal{H}F+\mathcal{H}\nabla _{\mathcal{H}E}\mathcal{V}F, \label{AT2}
\end{eqnarray}
for any vector fields $E$ and $F$ on $M$. Here $\nabla $ denotes the
Levi-Civita connection of $(M,g_{M})$. These tensors are called
integrability tensors for the Riemannian submersions. Note that we denote
the projection morphism on the distributions ker$F_{\ast }$ and (ker$F_{\ast
})^{\perp }$ by $\mathcal{V}$ and $\mathcal{H},$ respectively. The following
\ Lemmas are well known (\cite{BO1},\cite{BO2}).
\begin{lemma}
For any $U,W$ vertical and $X,Y$ horizontal vector fields, the tensor fields
$\mathcal{T}$, $\mathcal{A}$ satisfy
\begin{eqnarray}
i)\mathcal{T}_{U}W &=&\mathcal{T}_{W}U, \label{TUW} \\
ii)\mathcal{A}_{X}Y &=&-\mathcal{A}_{Y}X=\frac{1}{2}\mathcal{V}\left[ X,Y
\right] . \label{TUW2}
\end{eqnarray}
\end{lemma}
It is easy to see that $\mathcal{T}$ is vertical, $\mathcal{T}_{E}=
\mathcal{T}_{\mathcal{V}E}$, and $\mathcal{A}$ is horizontal, $\mathcal{A}_{E}=\mathcal{A}_{
\mathcal{H}E}$.
For each $q\in N,$ $F^{-1}(q)$ is an $(m-n)$ dimensional submanifold of $M$.
The submanifolds $F^{-1}(q),$ $q\in N,$ are called fibers. A vector field on
$M$ is called vertical if it is always tangent to fibers. A vector field on
$M$ is called horizontal if it is always orthogonal to fibers. A vector field
$X$ on $M$ is called basic if $X$ is horizontal and $F$-related to a vector
field $X$ on $N,$ i. e., $F_{\ast }X_{p}=X_{\ast F(p)}$ for all $p\in M.$
\begin{lemma}
Let $F:(M,g_{M})\rightarrow (N,g_{N})$ be a Riemannian submersion. If $\ X,$
$Y$ are basic vector fields on $M$, then:
\end{lemma}
$i)$ $g_{M}(X,Y)=g_{N}(X_{\ast },Y_{\ast })\circ F,$
$ii)$ $\mathcal{H}[X,Y]$ is basic, $F$-related to $[X_{\ast },Y_{\ast }]$,
$iii)$ $\mathcal{H}(\nabla _{X}Y)$ is the basic vector field corresponding to
$\nabla _{X_{\ast }}^{\ast }Y_{\ast }$, where $\nabla ^{\ast }$ is the
connection on $N.$
$iv)$ for any vertical vector field $V$, $[X,V]$ is vertical.
Moreover, if $X$ is basic and $U$ is vertical then $\mathcal{H}(\nabla
_{U}X)=\mathcal{H}(\nabla _{X}U)=\mathcal{A}_{X}U.$ On the other hand, from
(\ref{AT1}) and (\ref{AT2}) we have
\begin{eqnarray}
\nabla _{V}W &=&\mathcal{T}_{V}W+\hat{\nabla}_{V}W \label{1} \\
\nabla _{V}X &=&\mathcal{H\nabla }_{V}X+\mathcal{T}_{V}X \label{2} \\
\nabla _{X}V &=&\mathcal{A}_{X}V+\mathcal{V}\nabla _{X}V \label{3} \\
\nabla _{X}Y &=&\mathcal{H\nabla }_{X}Y+\mathcal{A}_{X}Y \label{4}
\end{eqnarray}
for $X,Y\in \Gamma ((\ker F_{\ast })^{\bot })$ and $V,W\in \Gamma (\ker
F_{\ast }),$ where $\hat{\nabla}_{V}W=\mathcal{V}\nabla _{V}W.$
Notice that $\mathcal{T}$ acts on the fibres as the second fundamental
form of the submersion restricted to vertical vector fields, and it can
be easily seen that $\mathcal{T}=0$ is equivalent to the condition that the
fibres are totally geodesic. A Riemannian submersion is called a Riemannian
submersion with totally geodesic fibers if $\mathcal{T}$ vanishes
identically. Let $U_{1},...,U_{m-n}$ be an orthonormal frame of $\Gamma
(\ker F_{\ast }).$ Then the horizontal vector field $H =\frac{1}{m-n}
\dsum\limits_{j=1}^{m-n}\mathcal{T}_{U_{j}}U_{j}$ is called the mean
curvature vector field of the fiber. If \ $H$ $=0$ the Riemannian submersion
is said to be minimal. A Riemannian submersion is called a Riemannian
submersion with totally umbilical fibers if
\begin{equation}
\mathcal{T}_{U}W=g_{M}(U,W)H \label{4a}
\end{equation}
for $U,W\in \Gamma (\ker F_{\ast })$. For any $E\in \Gamma (TM)$, $\mathcal{T}
_{E}$ and $\mathcal{A}_{E}$ are skew-symmetric operators on $(\Gamma
(TM),g_{M})$ reversing the horizontal and the vertical distributions. By
Lemma 1 the horizontal distribution $\mathcal{H}$ is integrable if and only if
$\mathcal{A}=0$. For any $D,E,G\in \Gamma (TM)$ one has
\begin{equation}
g(\mathcal{T}_{D}E,G)+g(\mathcal{T}_{D}G,E)=0, \label{4b}
\end{equation}
\begin{equation}
g(\mathcal{A}_{D}E,G)+g(\mathcal{A}_{D}G,E)=0. \label{4c}
\end{equation}
We recall the notion of harmonic maps between Riemannian manifolds. Let
$(M,g_{M})$ and $(N,g_{N})$ be Riemannian manifolds and suppose that $\varphi
:M\rightarrow N$ is a smooth map between them. Then the differential
$\varphi _{\ast }$ of $\varphi $ can be viewed as a section of the bundle
${\rm Hom}(TM,\varphi ^{-1}TN)\rightarrow M,$ where $\varphi ^{-1}TN$ is the
pullback bundle which has fibres $(\varphi ^{-1}TN)_{p}=T_{\varphi (p)}N,$
$p\in M$. ${\rm Hom}(TM,\varphi ^{-1}TN)$ has a connection $\nabla $ induced from
the Levi-Civita connection $\nabla ^{M}$ and the pullback connection. Then
the second fundamental form of $\varphi $ is given by
\begin{equation}
(\nabla \varphi _{\ast })(X,Y)=\nabla _{X}^{\varphi }\varphi _{\ast
}(Y)-\varphi _{\ast }(\nabla _{X}^{M}Y) \label{5}
\end{equation
for $X,Y\in \Gamma (TM),$ where $\nabla ^{\varphi }$ is the pullback
connection. It is known that the second fundamental form is symmetric. If
\varphi $ is a Riemannian submersion it can be easily prove that
\begin{equation}
(\nabla \varphi _{\ast })(X,Y)=0 \label{5a}
\end{equation}
for $X,Y\in \Gamma ((\ker F_{\ast })^{\bot })$. A smooth map $\varphi
:(M,g_{M})\rightarrow (N,g_{N})$ is said to be harmonic if $trace(\nabla
\varphi _{\ast })=0.$ On the other hand, the tension field of $\varphi $ is
the section $\tau (\varphi )$ of $\Gamma (\varphi ^{-1}TN)$ defined by
\begin{equation}
\tau (\varphi )=div\varphi _{\ast }=\sum_{i=1}^{m}(\nabla \varphi _{\ast
})(e_{i},e_{i}), \label{6}
\end{equation}
where $\left\{ e_{1},...,e_{m}\right\} $ is an orthonormal frame on $M$.
Then it follows that $\varphi $ is harmonic if and only if $\tau (\varphi
)=0 $; see \cite{B} for details.
\section{\textbf{Sasakian Manifolds}}
An $n$-dimensional differentiable manifold $M$ is said to have an almost
contact structure $(\phi ,\xi ,\eta )$ if it carries a tensor field $\phi $
of type $(1,1)$, a vector field $\xi $ and a 1-form $\eta $ on $M$,
respectively, such that
\begin{equation}
\phi ^{2}=-I+\eta \otimes \xi ,\text{ \ }\phi \xi =0,\text{ }\eta \circ \phi
=0,\text{ \ \ }\eta (\xi )=1, \label{phi^2}
\end{equation
where $I$ denotes the identity tensor.
The almost contact structure is said to be normal if $N+d\eta \otimes \xi =0$
, where $N$ is the Nijenhuis tensor of $\phi $. Suppose that a Riemannian
metric tensor $g$ is given in $M$ and satisfies the condition
\begin{equation}
g(\phi X,\phi Y)=g(X,Y)-\eta (X)\eta (Y),\text{ \ \ }\eta (X)=g(X,\xi ).
\label{metric}
\end{equation}
Then the $(\phi ,\xi ,\eta ,g)$-structure is called an almost contact metric
structure. Define a tensor field $\Phi $ of type $(0,2)$ by $\Phi
(X,Y)=g(\phi X,Y)$. If $d\eta =\Phi $, then the almost contact metric
structure is said to be a contact metric structure. A normal contact
metric structure is called a Sasakian structure, which satisfies
\begin{equation}
(\nabla _{X}\phi )Y=g(X,Y)\xi -\eta (Y)X, \label{Nambla fi}
\end{equation}
where $\nabla $ denotes the Levi-Civita connection of $g$. For a Sasakian
manifold $M=M^{2n+1}$, it is known that
\begin{equation}
R(\xi ,X)Y=g(X,Y)\xi -\eta (Y)X, \label{R(xi,,X)Y}
\end{equation}
\begin{equation}
S(X,\xi )=2n\eta (X) \label{S(X,xi)}
\end{equation}
and
\begin{equation}
\nabla _{X}\xi =-\phi X. \label{nablaXxi}
\end{equation}
(see \cite{BL2}).
Now we will introduce a well-known example of a Sasakian manifold on $\mathbb{R}^{2n+1}$.
\begin{example}[\protect\cite{BL1}]
We consider $\mathbb{R}^{2n+1}$ with Cartesian coordinates $(x_{i},y_{i},z)$ $(i=1,...,n)$ and its
usual contact form
\begin{equation*}
\eta =\frac{1}{2}(dz-\dsum\limits_{i=1}^{n}y_{i}dx_{i}).
\end{equation*}
The characteristic vector field $\xi $ is given by $2\frac{\partial }{\partial z}$ and its Riemannian metric $g$ and tensor field $\phi $ are
given by
\begin{equation*}
g=\eta \otimes \eta +\frac{1}{4}\dsum\limits_{i=1}^{n}((dx_{i})^{2}+(dy_{i})^{2}),\text{ \ }\phi =\left(
\begin{array}{ccc}
0 & \delta _{ij} & 0 \\
-\delta _{ij} & 0 & 0 \\
0 & y_{j} & 0
\end{array}
\right) \text{, \ }i=1,...,n.
\end{equation*}
This gives a contact metric structure on $\mathbb{R}^{2n+1}$. The vector fields $E_{i}=2\frac{\partial }{\partial y_{i}},$
$E_{n+i}=2\left( \frac{\partial }{\partial x_{i}}+y_{i}\frac{\partial }{\partial z}\right) $, $\xi $ form a $\phi $-basis for the contact metric
structure. On the other hand, it can be shown that $\mathbb{R}^{2n+1}(\phi ,\xi ,\eta ,g)$ is a Sasakian manifold.
\end{example}
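Let us also record, as a check of the conventions used above (reading the matrix of $\phi $ as acting on the components of a vector field with respect to the coordinate frame $(\frac{\partial }{\partial x_{i}},\frac{\partial }{\partial y_{i}},\frac{\partial }{\partial z})$), how $\phi $ acts on the $\phi $-basis:
\begin{equation*}
\phi E_{i}=E_{n+i},\qquad \phi E_{n+i}=-E_{i},\qquad \phi \xi =0 .
\end{equation*}
Since $\eta (E_{i})=\eta (E_{n+i})=0$ and $\eta (\xi )=1$, the relation (\ref{phi^2}) is verified on this basis.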
\section{\textbf{Slant Riemannian submersions}}
\begin{definition}
Let $M(\phi ,\xi ,\eta ,g_{M})$ be a Sasakian manifold and $(N,g_{N})$ be a
Riemannian manifold. A Riemannian submersion $F:M(\phi ,\xi ,\eta
,g_{M})\rightarrow $ $(N,g_{N})$ is said to be slant if for any non zero
vector $X\in \Gamma (\ker F_{\ast })-\{\xi \}$, the angle $\theta (X)$
between $\phi X$ and the space $\ker F_{\ast }$ is a constant (which is
independent of the choice of $p\in M$ and of $X\in \Gamma (\ker F_{\ast
})-\{\xi \}$). The angle $\theta $ is called the slant angle of the slant
submersion. Invariant and anti-invariant submersions are slant submersions
with $\theta =0$ and $\theta =\pi /2$, respectively. A slant submersion
which is neither invariant nor anti-invariant is called a proper slant submersion.
\end{definition}
Now we will introduce an example.
\begin{example}
$\mathbb{R}^{5}$ has a Sasakian structure as in Example 1. Let $F:
\mathbb{R}^{5}\rightarrow
\mathbb{R}^{2}$ be a map defined by $F(x_{1},x_{2},y_{1},y_{2},z)=(x_{1}-2\sqrt{2}
x_{2}+y_{1},\,2x_{1}-2\sqrt{2}x_{2}+y_{1})$. Then, by direct calculations,
\begin{equation*}
\ker F_{\ast }=span\{V_{1}=2E_{1}+\frac{1}{\sqrt{2}}E_{4},V_{2}=E_{2},V_{3}=
\xi =E_{5}\}
\end{equation*}
and
\begin{equation*}
(\ker F_{\ast })^{\bot }=span\{H_{1}=2E_{1}-\frac{1}{\sqrt{2}}E_{4},\text{ }
H_{2}=E_{3}\}.
\end{equation*}
Then it is easy to see that $F$ is a Riemannian submersion. Moreover, $\phi
V_{1}=2E_{3}-\frac{1}{\sqrt{2}}E_{2}$ and $\phi V_{2}=E_{4}$ imply that
$\left\vert g(\phi V_{1},V_{2})\right\vert =\frac{1}{\sqrt{2}}$. So $F$ is a
slant submersion with slant angle $\theta =\frac{\pi }{4}.$
\end{example}
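As a quick verification of Example 2 (an elementary computation, added here for the reader's convenience and using only the definitions $E_{1}=2\frac{\partial }{\partial y_{1}}$, $E_{4}=2(\frac{\partial }{\partial x_{2}}+y_{2}\frac{\partial }{\partial z})$ from Example 1 with $n=2$), one can check that $V_{1}$ is indeed tangent to the fibers:
\begin{equation*}
F_{\ast }V_{1}=F_{\ast }\Bigl( 4\frac{\partial }{\partial y_{1}}+\sqrt{2}\frac{\partial }{\partial x_{2}}+\sqrt{2}\,y_{2}\frac{\partial }{\partial z}\Bigr)
=\bigl( 4-2\sqrt{2}\cdot \sqrt{2},\ 4-2\sqrt{2}\cdot \sqrt{2}\bigr) =(0,0),
\end{equation*}
and similarly $F_{\ast }V_{2}=F_{\ast }V_{3}=0$.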
In Example 2, we note that the characteristic vector field $\xi $ is
vertical. If $\xi $ is orthogonal to $\ker F_{\ast }$, we give the following
theorem.
\begin{theorem}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$. If $\xi $ is
orthogonal to $\ker F_{\ast }$, then $F$ is anti-invariant.
\end{theorem}
\begin{proof}
By (\ref{nablaXxi}), (\ref{2}), (\ref{4b}) and (\ref{TUW}) we have
\begin{eqnarray*}
g(\phi U,V) &=&-g(\nabla _{U}\xi ,V)=-g(T_{U}\xi ,V)=g(T_{U}V,\xi ) \\
&=&g(T_{V}U,\xi )=g(U,\phi V)
\end{eqnarray*}
for any $U,V\in \Gamma (\ker F_{\ast }).$ Using the skew-symmetry property of
$\phi $ in the last relation, we complete the proof of the theorem.
\end{proof}
\begin{remark}
We note that Lotta \cite{LOTTA} proved that if $M_{1}$ is a submanifold of a
contact metric manifold $\tilde{M}_{1}$ and $\xi $ is orthogonal to
$M_{1}$, then $M_{1}$ is an anti-invariant submanifold. So, our result can be
seen as a submersion version of Lotta's result.
\end{remark}
Now, let $F$ be a slant Riemannian submersion from a Sasakian manifold
$M(\phi ,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N}).$ Then for
any $U,V\in \Gamma (\ker F_{\ast }),$ we put
\begin{equation}
\phi U=\psi U+\omega U, \label{TAN}
\end{equation}
where $\psi U$ and $\omega U$ are the vertical and horizontal components of
$\phi U$, respectively. Similarly, for any $X\in \Gamma (\ker F_{\ast
})^{\perp }$, we have
\begin{equation}
\phi X=\mathcal{B}X+\mathcal{C}X, \label{NOR}
\end{equation}
where $\mathcal{B}X$ (resp. $\mathcal{C}X$) is the vertical part (resp.
horizontal part) of $\phi X$.
From (\ref{metric}), (\ref{TAN}) and (\ref{NOR}) we obtain
\begin{equation}
g_{M}(\psi U,V)=-g_{M}(U,\psi V) \label{ANTT}
\end{equation}
and
\begin{equation}
g_{M}(\omega U,Y)=-g_{M}(U,\mathcal{B}Y). \label{ANTN}
\end{equation}
for any $U,V$ $\in \Gamma (\ker F_{\ast })$ and $Y\in \Gamma ((\ker F_{\ast
})^{\bot })$.
Using (\ref{1}), (\ref{TAN}) and (\ref{nablaXxi}) we obtain
\begin{equation}
\mathcal{T}_{U}\xi =-\omega U,\text{ \ \ }\hat{\nabla}_{U}\xi =-\psi U
\label{CONNECTION}
\end{equation}
for any $U\in \Gamma (\ker F_{\ast }).$
Now we will give the following proposition for a Riemannian submersion with
two dimensional fibers which is similar to Proposition 3.2. of \cite{AL}.
\begin{proposition}
Let $F$ be a Riemannian submersion from an almost contact metric manifold onto a
Riemannian manifold. If $\dim (\ker F_{\ast })=2$ and $\xi $ is vertical, then
the fibers are anti-invariant.
\end{proposition}
As the proof of the following proposition is similar to slant submanifolds
(see \cite{alfonso}) we don't give its proof.
\begin{proposition}
Let $F$ be a Riemannian submersion from a Sasakian manifold $M(\phi ,\xi
,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$ such that $\xi \in
\Gamma (\ker F_{\ast })$. Then $F$ is an anti-invariant submersion if and only
if $D$ is integrable, where $D=\ker F_{\ast }-\{\xi \}$.
\end{proposition}
\begin{theorem}
Let $M(\phi ,\xi ,\eta ,g_{M})$ be a Sasakian manifold of dimension $2m+1$
and $(N,g_{N})$ a Riemannian manifold of dimension $n$. Let $F:M(\phi
,\xi ,\eta ,g_{M})\rightarrow $ $(N,g_{N})$ be a slant Riemannian
submersion. Then the fibers are not totally umbilical.
\end{theorem}
\begin{proof}
Using (\ref{1}) and (\ref{nablaXxi}) we obtain
\begin{equation}
\mathcal{T}_{U}\xi =-\omega U \label{T1}
\end{equation}
for any $U\in \Gamma (\ker F_{\ast })$. If the fibers are totally umbilical,
then we have $\mathcal{T}_{U}V=g_{M}(U,V)H$ for any vertical vector fields
$U,V$, where $H$ is the mean curvature vector field of any fibre. Since
$\mathcal{T}_{\xi }\xi =0$, we have $H=0$, which shows that fibres are
minimal. Hence the fibers are totally geodesic, which is a contradiction to
the fact that $\mathcal{T}_{U}\xi =-\omega U\neq 0$.
\end{proof}
By (\ref{1}), (\ref{2}), (\ref{TAN}) and (\ref{NOR}) we have
\begin{equation}
(\nabla _{U}\omega )V=\mathcal{C}T_{U}V-\mathcal{T}_{U}\psi V, \label{W}
\end{equation}
\begin{equation}
(\nabla _{U}\phi )V=\mathcal{BT}_{U}V-\mathcal{T}_{U}\omega V+R(\xi ,U)V,
\label{F}
\end{equation}
where
\begin{eqnarray*}
(\nabla _{U}\omega )V &=&\mathcal{H}\nabla _{U}\omega V-\omega \hat{\nabla}
_{U}V \\
(\nabla _{U}\psi )V &=&\hat{\nabla}_{U}\psi V-\psi \hat{\nabla}_{U}V,
\end{eqnarray*}
for $U,V\in \Gamma (\ker F_{\ast }).$ Now we will characterize slant
submersions by the following theorem.
\begin{theorem}
Let $F$ be a Riemannian submersion from a Sasakian manifold $M(\phi ,\xi
,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$ such that $\xi \in
\Gamma (\ker F_{\ast })$. Then, $F$ is a slant Riemannian submersion if and
only if there exists a constant $\lambda \in \lbrack 0,1]$ such that
\begin{equation}
\psi ^{2}=-\lambda (I-\eta \otimes \xi ). \label{SLANT}
\end{equation
Furthermore, in such case, if $\theta $ is the slant angle of $F$, it
satisfies that $\lambda =\cos ^{2}\theta .$
\end{theorem}
\begin{proof}
Firstly we suppose that $F$ is not an anti-invariant Riemannian submersion.
Then, for $U\in \Gamma (\ker F_{\ast })$,
\begin{equation}
\cos \theta =\frac{g_{M}(\phi U,\psi U)}{\left\vert \phi U\right\vert
\left\vert \psi U\right\vert }=\frac{\left\vert \psi U\right\vert ^{2}}{
\left\vert \phi U\right\vert \left\vert \psi U\right\vert }=\frac{\left\vert
\psi U\right\vert }{\left\vert \phi U\right\vert }. \label{COS1}
\end{equation}
Since $\phi U\perp \xi ,$ we have $g(\psi U,\xi )=0.$ Now, substituting $U$
by $\psi U$ in (\ref{COS1}) and using (\ref{metric}) we obtain
\begin{equation}
\cos \theta =\frac{\left\vert \psi ^{2}U\right\vert }{\left\vert \phi \psi
U\right\vert }=\frac{\left\vert \psi ^{2}U\right\vert }{\left\vert \psi
U\right\vert }. \label{COS2}
\end{equation}
From (\ref{COS1}) and (\ref{COS2}) we have
\begin{equation}
\left\vert \psi U\right\vert ^{2}=\left\vert \psi ^{2}U\right\vert
\left\vert \phi U\right\vert \label{COS2A}
\end{equation}
On the other hand, one can get the following
\begin{eqnarray}
g_{M}(\psi ^{2}U,U) &=&g_{M}(\phi \psi U,U)=-g_{M}(\psi U,\phi U)
\label{COS3} \\
&=&-g_{M}(\psi U,\psi U)=-\left\vert \psi U\right\vert ^{2}. \notag
\end{eqnarray}
Using (\ref{COS2A}) and (\ref{COS3}) we get
\begin{eqnarray}
g_{M}(\psi ^{2}U,U) &=&-\left\vert \psi ^{2}U\right\vert \left\vert \phi
U\right\vert \notag \\
&=&-\left\vert \psi ^{2}U\right\vert \left\vert \phi ^{2}U\right\vert
\label{COS4}
\end{eqnarray}
Also, one can easily get
\begin{equation}
g_{M}(\psi ^{2}U,\phi ^{2}U)=-g_{M}(\psi ^{2}U,U). \label{COS5}
\end{equation}
So, by the help of (\ref{COS4}) and (\ref{COS5}), we obtain $g_{M}(\psi ^{2}U,\phi
^{2}U)=\left\vert \psi ^{2}U\right\vert \left\vert \phi ^{2}U\right\vert $
and it follows that $\psi ^{2}U$ and $\phi ^{2}U$ are collinear, that is,
$\psi ^{2}U=\lambda \phi ^{2}U=-\lambda (U-\eta (U)\xi ).$ Using the
last relation together with (\ref{COS1}) and (\ref{COS2}), we obtain that $\cos
\theta =\sqrt{\lambda }$ is constant, and so $F$ is a slant Riemannian
submersion.
If $\ F$ is anti-invariant Riemannian submersion $\phi U$ is normal, $\psi
U=0$ and it is equivalent to $\psi ^{2}U=0.$ In this case $\theta =\frac{\pi
}{2}$ and so the equation (\ref{COS1}) is again provided.
\end{proof}
By using (\ref{metric}), (\ref{TAN}), (\ref{ANTT}) and (\ref{SLANT}) we have
the following lemma.
\begin{lemma}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$ with slant angle
$\theta .$ Then the following relations are valid:
\begin{equation}
g_{M}(\psi U,\psi V)=\cos ^{2}\theta (g_{M}(U,V)-\eta (U)\eta (V)),
\label{COS5A}
\end{equation}
\begin{equation}
g_{M}(\omega U,\omega V)=\sin ^{2}\theta (g_{M}(U,V)-\eta (U)\eta (V))
\label{COS6}
\end{equation}
for any $U,V\in \Gamma (\ker F_{\ast }).$
\end{lemma}
We denote the complementary orthogonal distribution to $\omega (\ker F_{\ast
})$ in $(\ker F_{\ast })^{\bot }$ by $\mu .$ Then we have
\begin{equation}
(\ker F_{\ast })^{\bot }=\omega (\ker F_{\ast })\oplus \mu . \label{A1}
\end{equation}
\begin{lemma}
Let $F$ be a proper slant Riemannian submersion from a Sasakian manifold
$M(\phi ,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$, then $\mu $
is an invariant distribution of $(\ker F_{\ast })^{\bot }$ under the
endomorphism $\phi $.
\end{lemma}
\begin{proof}
For $X\in \Gamma (\mu ),$ from (\ref{metric}) and (\ref{TAN}), we obtain
\begin{eqnarray*}
g_{M}(\phi X,\omega V) &=&g_{M}(\phi X,\phi V)-g_{M}(\phi X,\psi V) \\
&=&g_{M}(X,V)-\eta (X)\eta (V)-g_{M}(\phi X,\psi V) \\
&=&-g_{M}(X,\phi \psi V).
\end{eqnarray*}
Using (\ref{SLANT}) and (\ref{A1}) we have
\begin{eqnarray*}
g_{M}(\phi X,\omega V) &=&-\cos ^{2}\theta g_{M}(X,V-\eta (V)\xi ) \\
&=&g_{M}(X,\omega \psi V) \\
&=&0.
\end{eqnarray*
In a similar way, we have $g_{M}(\phi X,U)=-g_{M}(X,\phi U)=0$ due to $\phi
U\in \Gamma ((\ker F_{\ast })\oplus \omega (\ker F_{\ast }))$ for $X\in
\Gamma (\mu )$ and $U\in \Gamma (\ker F_{\ast }).$ Thus the proof of the
lemma is completed.
\end{proof}
By help (\ref{COS6}), we can give following
\begin{corollary}
Let $F$ be a proper slant Riemannian submersion from a Sasakian manifold
$M^{2m+1}(\phi ,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N^{n},g_{N}).$
Let
\begin{equation*}
\left\{ e_{1},e_{2},...e_{2m-n},\xi \right\}
\end{equation*}
be a local orthonormal basis of $(\ker F_{\ast })$, then $\left\{ \csc
\theta \,\omega e_{1},\csc \theta \,\omega e_{2},...,\csc \theta \,\omega e_{2m-n}\right\} $ is a
local orthonormal basis of $\omega (\ker F_{\ast }).$
\end{corollary}
By using (\ref{A1}) and Corollary 1 one can easily prove the following
Proposition.
\begin{proposition}
Let $F$ be a proper slant Riemannian submersion from a Sasakian manifold
$M^{2m+1}(\phi ,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N^{n},g_{N}).$
Then $\dim (\mu )=2(n-m).$ If $\mu =\left\{ 0\right\} ,$ then $n=m.$
\end{proposition}
By (\ref{ANTT}) and (\ref{COS5A}) we have
\begin{lemma}
Let $F$ be a proper slant Riemannian submersion from a Sasakian manifold
$M^{2m+1}(\phi ,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N^{n},g_{N}).$
If $e_{1},e_{2},...,e_{k},\xi $ are orthogonal unit vector fields in $(\ker
F_{\ast })$, then
\begin{equation*}
\left\{ e_{1},\sec \theta \psi e_{1},e_{2},\sec \theta \psi
e_{2},...e_{k},\sec \theta \psi e_{k},\xi \right\}
\end{equation*}
is a local orthonormal basis of $(\ker F_{\ast }).$ Moreover dim$(\ker
F_{\ast })=2m-n+1=2k+1$ and $\dim N=n=2(m-k).$
\end{lemma}
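For the reader's convenience, we indicate why the frame in Lemma 5 is orthonormal; this is a short verification based on (\ref{ANTT}) and (\ref{COS5A}), together with the symmetry of $g_{M}$ and $\eta (e_{i})=g_{M}(e_{i},\xi )=0$:
\begin{equation*}
g_{M}(e_{i},\psi e_{i})+g_{M}(\psi e_{i},e_{i})=0 \ \Rightarrow \ g_{M}(e_{i},\psi e_{i})=0,
\qquad
g_{M}(\sec \theta \,\psi e_{i},\sec \theta \,\psi e_{i})=\sec ^{2}\theta \cos ^{2}\theta \bigl( 1-\eta (e_{i})^{2}\bigr) =1 .
\end{equation*}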
\begin{lemma}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$. If $\omega $ is
parallel, then we have
\begin{equation}
\mathcal{T}_{\psi U}\psi U=-\cos ^{2}\theta (\mathcal{T}_{U}U+\eta (U)\omega
U) \label{SEC1}
\end{equation}
\begin{proof}
If $\omega $ is parallel, from (\ref{W}) we obtain $\mathcal{C}\mathcal{T}_{U}V=
\mathcal{T}_{U}\psi V$ for $U,V\in \Gamma (\ker F_{\ast }).$ We interchange
$U$ and $V$ and use (\ref{TUW}) to get
\begin{equation*}
\mathcal{T}_{U}\psi V=\mathcal{T}_{V}\psi U.
\end{equation*}
Substituting $V$ by $\psi U$ in the above equation and then using Theorem 3,
we get the required formula.
\end{proof}
\end{lemma}
We give a sufficient condition for a slant Riemannian submersion from a Sasakian
manifold onto a Riemannian manifold to be harmonic, as an analogue of the
corresponding result for slant submersions in \cite{SAHIN1}.
\begin{theorem}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$. If $\omega $ is
parallel, then $F$ is a harmonic map.
\end{theorem}
\begin{proof}
From \cite{EJ} we know that $F$ is harmonic if and only if $F$ has minimal
fibres. Thus $F$ is harmonic if and only if $\dsum\limits_{i=1}^{n_{1}}
\mathcal{T}_{e_{i}}e_{i}=0$, where $n_{1}=\dim (\ker F_{\ast })$. Thus, using the adapted frame for slant
Riemannian submersions and by the help of (\ref{6}) and Lemma 5, we can write
\begin{equation*}
\tau =-\dsum\limits_{i=1}^{m-\frac{n}{2}}F_{\ast }(\mathcal{T}_{e_{i}}e_{i}+
\mathcal{T}_{\sec \theta \psi e_{i}}\sec \theta \psi e_{i})-F_{\ast }(
\mathcal{T}_{\xi }\xi ).
\end{equation*}
Using $\mathcal{T}_{\xi }\xi =0$ we have
\begin{equation*}
\tau =-\dsum\limits_{i=1}^{m-\frac{n}{2}}F_{\ast }(\mathcal{T}
_{e_{i}}e_{i}+\sec ^{2}\theta \mathcal{T}_{\psi e_{i}}\psi e_{i}).
\end{equation*}
By virtue of (\ref{SEC1}) in the above equation, we obtain
\begin{eqnarray*}
\tau &=&-\dsum\limits_{i=1}^{m-\frac{n}{2}}F_{\ast }(\mathcal{T}
_{e_{i}}e_{i}+\sec ^{2}\theta (-\cos ^{2}\theta (\mathcal{T}
_{e_{i}}e_{i}+\eta (e_{i})\omega e_{i}))) \\
&=&-\dsum\limits_{i=1}^{m-\frac{n}{2}}F_{\ast }(\mathcal{T}_{e_{i}}e_{i}-
\mathcal{T}_{e_{i}}e_{i})=0,
\end{eqnarray*}
since $\eta (e_{i})=0$. So we have proved that $F$ is harmonic.
\end{proof}
Now setting $Q=\psi ^{2}$, we define $\nabla Q$ by
\begin{equation*}
(\nabla _{U}Q)V=\mathcal{V}\nabla _{U}QV-Q\hat{\nabla}_{U}V
\end{equation*}
for any $U,V\in \Gamma (\ker F_{\ast }).$ We give a characterization for a
slant Riemannian submersion from a Sasakian manifold $M(\phi ,\xi ,\eta
,g_{M})$ onto a Riemannian manifold $(N,g_{N})$ by using the value of
$\nabla Q$.
\begin{proposition}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N}).$ Then, $\nabla
Q=0 $ if and only if $F$ is an anti-invariant submersion.
\end{proposition}
\begin{proof}
By using (\ref{SLANT}),
\begin{equation}
Q\hat{\nabla}_{U}V=-\cos ^{2}\theta (\hat{\nabla}_{U}V-\eta (\hat{\nabla}
_{U}V)\xi ) \label{Q1}
\end{equation}
for each $U,V\in \Gamma (\ker F_{\ast }),$ where $\theta $ is the slant angle.
On the other hand
\begin{equation}
\mathcal{V}(\nabla _{U}QV)=-\cos ^{2}\theta (\hat{\nabla}_{U}V-\eta (\hat{
\nabla}_{U}V)\xi +g(V,\psi U)\xi +\eta (V)\psi U). \label{Q2}
\end{equation}
So, from (\ref{Q1}) and (\ref{Q2}), $\nabla Q=0$ if and only if $\cos ^{2}\theta
(g(V,\psi U)\xi +\eta (V)\psi U)=0$ which implies that $\psi U=0$ or $\theta
=\frac{\pi }{2}$. Both the cases verify that $F$ is an anti-invariant
submersion.
\end{proof}
We now investigate the geometry of the leaves of $(\ker F_{\ast })^{\perp }$ and
$\ker F_{\ast }.$
\begin{proposition}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N}).$ Then the
distribution $(\ker F_{\ast })^{\bot }$ defines a totally geodesic foliation
on $M$ if and only if
\begin{equation*}
g_{M}(\mathcal{H}\nabla _{X}Y,\omega \psi U)-\sin ^{2}\theta g_{M}(Y,\phi
X)\eta (U)=g_{M}(\mathcal{A}_{X}\mathcal{B}Y,\omega U)+g_{M}(\mathcal{H}
\nabla _{X}\mathcal{C}Y,\omega U)
\end{equation*}
for any $X,Y\in \Gamma ((\ker F_{\ast })^{\bot })$ and $U\in \Gamma (\ker
F_{\ast }).$
\end{proposition}
\begin{proof}
From (\ref{Nambla fi}) and (\ref{TAN}) we have
\begin{eqnarray}
g_{M}(\nabla _{X}Y,U) &=&-g_{M}(\phi \nabla _{X}\phi Y,U)+g_{M}(Y,\phi
X)\eta (U) \label{TOTA} \\
&=&g_{M}(\nabla _{X}\phi Y,\phi U)+g_{M}(Y,\phi X)\eta (U) \notag \\
&=&g_{M}(\nabla _{X}\phi Y,\psi U)+g_{M}(\nabla _{X}\phi Y,\omega
U)+g_{M}(Y,\phi X)\eta (U). \notag
\end{eqnarray}
for any $X,Y\in \Gamma ((\ker F_{\ast })^{\bot })$ and $\ U\in \Gamma (\ker
F_{\ast })$.
Using (\ref{Nambla fi}) and (\ref{TAN}) in (\ref{TOTA}), we obtain
\begin{eqnarray}
g_{M}(\nabla _{X}Y,U) &=&-g_{M}(\nabla _{X}Y,\psi ^{2}U)-g_{M}(\nabla
_{X}Y,\omega \psi U) \label{TOT3} \\
&&+g_{M}(Y,\phi X)\eta (U)+g_{M}(\nabla _{X}\phi Y,\omega U). \notag
\end{eqnarray}
By (\ref{NOR}) and (\ref{SLANT}) we have
\begin{eqnarray}
g_{M}(\nabla _{X}Y,U) &=&\cos ^{2}\theta g_{M}(\nabla _{X}Y,U)-\cos
^{2}\theta \eta (U)\eta (\nabla _{X}Y) \label{TOT4} \\
&&-g_{M}(\nabla _{X}Y,\omega \psi U)+g_{M}(Y,\phi X)\eta (U) \notag \\
&&+g_{M}(\nabla _{X}\mathcal{B}Y,\omega U)+g_{M}(\nabla _{X}\mathcal{C}
Y,\omega U). \notag
\end{eqnarray}
Using (\ref{3}), (\ref{4}) and (\ref{nablaXxi}) in the last equation we
obtain
\begin{eqnarray*}
\sin ^{2}\theta g_{M}(\nabla _{X}Y,U) &=&\sin ^{2}\theta g_{M}(Y,\phi X)\eta
(U)-g_{M}(\mathcal{H}\nabla _{X}Y,\omega \psi U) \\
&&+g_{M}(\mathcal{A}_{X}\mathcal{B}Y,\omega U)+g_{M}(\mathcal{H}\nabla _{X}
\mathcal{C}Y,\omega U),
\end{eqnarray*}
which proves the assertion.
\end{proof}
\begin{proposition}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N}).$ If the
distribution $\ker F_{\ast }$ defines a totally geodesic foliation on $M$
then $F$ is an invariant submersion.
\end{proposition}
\begin{proof}
By (\ref{CONNECTION}), if the distribution $\ker F_{\ast }$ defines a
totally geodesic foliation on $M$ then we conclude that $\omega U=0$ for any
$U\in \Gamma (\ker F_{\ast })$ which shows that $F$ is an invariant
submersion.
\end{proof}
\textbf{Open Problem:}
Let $F$ be a slant Riemannian submersion from a Sasakian manifold $M(\phi
,\xi ,\eta ,g_{M})$ onto a Riemannian manifold $(N,g_{N})$. In
\cite{alfonso2}, Barrera et al. define and study the Maslov form of non-invariant
slant submanifolds of an $S$-space form $\tilde{M}(c)$. They find conditions
for it to be closed. By a similar discussion to that in \cite{alfonso2}, we can define
the Maslov form $\Omega H$ of $M$ as the dual form of the vector field $\mathcal{
B}H$, that is,
\begin{equation*}
\Omega H(U)=g_{M}(U,\mathcal{B}H)
\end{equation*}
for any $U\in \Gamma (\ker F_{\ast }).$ So it will be interesting to give
a characterization with respect to $\Omega H$ for slant submersions, where
$H=\dsum\limits_{i=1}^{m-\frac{n}{2}}(\mathcal{T}_{e_{i}}e_{i}+\mathcal{T}
_{\sec \theta \psi e_{i}}\sec \theta \psi e_{i})$ and
$\left\{ e_{1},\sec \theta \psi e_{1},e_{2},\sec \theta \psi
e_{2},...e_{k},\sec \theta \psi e_{k},\xi \right\} $ is a local orthonormal
basis of $(\ker F_{\ast }).$
\section{Introduction}
The recent experimental observation
of wavelike thermal transport in graphite at temperatures above
$100 \, {\rm K}$ \cite{huberman_etal-s-2019},
which confirms the recently derived theoretical prediction of \cite{ding_etal-nl-2018},
solves the long-standing challenge to establish the existence,
in certain materials, of phonon hydrodynamics and second sound phenomenon
at relatively high temperatures, see e.g \cite{lindsay_etal-jap-2019},
\cite{lee_li-2020}.
In fact,
the occurrence of second sound was previously limited
to a handful of materials at low temperatures and
therefore the scientific and practical significance of this phenomenon
was limited. This new experimental evidence indeed potentially indicates
an important role of second sound in microscale transient heat transport
in two-dimensional and layered materials in a wide temperature range.
A wavelike thermal transport implies a phonon hydrodynamics regime
that is intermediate between ballistic and diffusive regimes,
and it is properly described by a generalization of Fourier's law
into the viscous heat equation (or damped wave equation)
\cite{simoncelli_etal-prx-2020}.
Thus, second sound in solids occurs when the
local temperature follows a hyperbolic equation analogous to
the telegrapher's equation in electromagnetism and to Cattaneo's equation
in conduction problems \cite{hardy-prb-1970}, which reads
\begin{equation}
\frac{\partial^2 f}{\partial t^2} +
\frac{\partial f}{\partial t} =
\frac{\partial^2 f}{\partial x^2} \,.
\label{TE}
\end{equation}
In this respect, we recall that when the same
hyperbolic equation governs the pressure or the density,
then we have the first sound.
In general, for second sound phenomena the damping term
dominates the inertial term and we have the diffusion equation,
while for the first sound phenomena the opposite is true and we
have the wave equation.
This experimental ascertainment, at relatively high temperature,
of the second sound in graphene, and in general in solids,
motivated us to pursue a mathematical investigation
of non-local extensions of the viscous heat equation
(\ref{TE}), in the spirit
of taking into account the already widely established
evidence of non-local effects both in diffusive,
see, e.g., \cite{klm}, and viscous, see, e.g., \cite{mainardi-2010}, systems.
Since from the physical point of view,
it is more appropriate to speak about telegrapher's equation
when we speak of applications to electromagnetism,
while it is common to use the name Cattaneo equation
in problems regarding heat conduction and anomalous transport processes,
then hereinafter we refer to the Cattaneo equation,
which, we remind, still calls for a deep mathematical analysis
both in the classical formulation
\cite{spigler-mmas-2020,carillo_etal-mcs-2020}
and in the non-local extension
\cite{ferrillo_etal-siamjam-2018,angelani_etal-jpa-2020}.
The Cattaneo equation plays indeed a relevant role in many different physical
contexts. In particular, in random motion and heat propagation models
with finite front velocity \cite{Preziosi}
and, more recently, also in run-and-tumble bacterial dynamics \cite{luca}.
The space- and time-fractional counterpart of this equation has gained
a relevant interest both for physical applications
and stochastic models related to continuous-time persistent random walks
\cite{gianni,mas}.
Non-local generalizations of the Cattaneo equation
through fractional calculus have already been investigated,
see, e.g., \cite{compte_etal-jpa-1997,metzler_etal-pa-1999},
\cite{Enzo}, \cite{mirko}, and more recently
\cite{gianni,mas}.
Here, the novel contribution with respect to the literature
lies in the analysis of the role of the tempering of the fractional derivative
when it is applied to generalize the space derivative.
We recall that, indeed, the tempered fractional diffusion equation
has already been investigated,
e.g., \cite{liemert_etal-fcaa-2017,lischke_etal-fcaa-2019}.
The rest of the paper is organized as follows.
First, we provide in Section 2 the preliminary notions
on the tempered derivatives and the related processes,
and later in Section 3 the main results are reported.
\section{Preliminaries on tempered fractional derivatives and
related stochastic processes}
The \textit{shifted fractional derivative}
has been used in the physical literature for mathematical models of
wave propagation in porous media \cite{hany}
and in probability in relation with
the Tempered Stable Subordinator (TSS).
The shifted fractional derivative is defined as
\begin{equation}
\left(\lambda + \frac{d}{dx}\right)^\alpha f(x) =
e^{-\lambda x} D_x^\alpha \, [e^{\lambda x}f(x)] \,,
\quad \alpha\in (0,1) \,, \quad \lambda \geq 0 \,, \quad x \geq 0 \,,
\end{equation}
where $D_x^\alpha$ denotes the space fractional derivative
in the sense of Caputo of order $\alpha \in (0,1)$,
i.e.,
\begin{equation}
D_x^\alpha f(x) = \frac{1}{\Gamma(1-\alpha)}\int_0^x (x-\xi)^{-\alpha}
\frac{\partial f}{\partial \xi} \, d\xi \,.
\end{equation}
Thus, the Laplace transform of the shifted fractional derivative
is given by \cite{hany}
\begin{equation}\label{lt}
\mathfrak{L}
\bigg\{\left(\lambda+\frac{d}{dx}\right)^\alpha f(x)\bigg\}(s) =
(s+\lambda)^\alpha \widetilde{f}(s) - (s+\lambda)^{\alpha-1} f(0^+)
\,,
\end{equation}
where we denoted by $s$ the Laplace parameter and $\widetilde{f}(s)$
the Laplace transform of the function $f(x)$, i.e.,
$\displaystyle{\int_0^\infty e^{-sx}f(x) \, dx=\widetilde{f}(s)}$.
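For the reader's convenience, we sketch how \eqref{lt} follows, assuming $f$ regular enough for all the transforms involved to exist. By the shift property of the Laplace transform and by the known formula for the Laplace transform of the Caputo derivative of order $\alpha \in (0,1)$,
\[
\mathfrak{L}
\bigg\{ e^{-\lambda x} D_x^\alpha \, [e^{\lambda x}f(x)] \bigg\}(s)
= \mathfrak{L}
\bigg\{ D_x^\alpha \, [e^{\lambda x}f(x)] \bigg\}(s+\lambda)
= (s+\lambda)^\alpha \, \mathfrak{L}\{e^{\lambda x}f(x)\}(s+\lambda) - (s+\lambda)^{\alpha-1} f(0^+) \,,
\]
and $\mathfrak{L}\{e^{\lambda x}f(x)\}(s+\lambda) = \widetilde{f}(s)$, which gives \eqref{lt}.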
In probability the transition density $f_{\lambda, \alpha}(x,t)$
of the TSS can be introduced by ``tempering'' the transition density
of the $\alpha$-stable subordinator $h_\alpha(x,t)$ as follows
\begin{equation}
\label{1.3}
f_{\lambda, \alpha}(x,t):= e^{-\lambda x+\lambda^\alpha t} h_\alpha(x,t)
\,.
\end{equation}
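Equivalently, recalling the well-known Laplace transform of the $\alpha$-stable density, $\int_0^\infty e^{-sx} h_\alpha(x,t)\,dx = e^{-t s^\alpha}$, the definition \eqref{1.3} gives the Laplace transform (in the space variable) of the TSS density,
\[
\int_0^\infty e^{-sx} f_{\lambda,\alpha}(x,t)\,dx
= e^{\lambda^\alpha t} \int_0^\infty e^{-(s+\lambda)x} h_\alpha(x,t)\,dx
= e^{-t\,[(s+\lambda)^\alpha - \lambda^\alpha]} \,,
\]
which displays the tempered L\'evy exponent $(s+\lambda)^\alpha-\lambda^\alpha$.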
Indeed, it is possible to prove \cite{luisa} that the density of
the TSS satisfies the following tempered-fractional equation
\begin{equation}
\frac{\partial}{\partial t}f_{\lambda, \alpha}(x,t) =
\lambda^\alpha
f_{\lambda, \alpha}(x,t) -
\left(\lambda + \frac{\partial}{\partial x}\right)^\alpha
f_{\lambda, \alpha}(x,t) =
- \frac{\partial^{\lambda,\alpha}}{\partial x^{\lambda,\alpha}}
f_{\lambda, \alpha}(x,t) \,,
\end{equation}
where
\begin{equation}
\frac{\partial^{\lambda,\alpha}}{\partial x^{\lambda,\alpha}}f(x,t):=
\left(\lambda + \frac{\partial}{\partial x}\right)^\alpha f(x,t)
- \lambda^\alpha f(x,t) \,,
\end{equation}
under the conditions
\begin{equation}
f_{\lambda, \alpha}(x,0) = \delta(x) \,, \quad f_{\lambda, \alpha}(0,t)= 0 \,.
\end{equation}
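This governing equation can be verified in the Laplace domain: since
$f_{\lambda, \alpha}(0,t)= 0$, applying \eqref{lt} together with the $x$-Laplace
transform $e^{-t[(s+\lambda)^\alpha-\lambda^\alpha]}$ of $f_{\lambda,\alpha}$ computed above,
the equation reduces to the elementary identity
\begin{equation*}
\frac{\partial}{\partial t}\, e^{-t[(s+\lambda)^\alpha-\lambda^\alpha]}
= \big[\lambda^\alpha-(s+\lambda)^\alpha\big]\, e^{-t[(s+\lambda)^\alpha-\lambda^\alpha]} \,.
\end{equation*}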
We here briefly recall the notion of fractional tempered stable (TS)
process \cite{luisa}.
We denote by $\mathcal{L}_\alpha(t):=\inf\{s:\mathcal{H}_\alpha(s)>t\}$,
$t\geq 0$, $\alpha\in(0,1)$,
the inverse of the $\alpha$-stable subordinator $\mathcal{H}_\alpha(t)$,
then
\begin{definition}
Let $\mathcal{L}_\nu(t)$, with $t\geq 0$,
be the inverse of the $\nu$-stable subordinator;
then the fractional TS process is defined as
\begin{equation}
\mathcal{T}^\nu_{\lambda, \alpha}(t):=
\mathcal{T}_{\lambda, \alpha}(\mathcal{L}_\nu(t)) \,,
\quad t\geq 0 \,, \quad \lambda \geq 0 \,, \quad \nu \,, \alpha \in (0,1) \,,
\end{equation}
where $\mathcal{L}_\nu$ is independent of
the tempered stable subordinator (TSS) $\mathcal{T}_{\lambda, \alpha}$.
\end{definition}
The density of the fractional TS process
$\mathcal{T}^\nu_{\lambda, \alpha}(t)$ satisfies
the tempered fractional equation \cite[Theorem 6]{luisa}
\begin{equation}\label{in0}
D_t^\nu f =
\bigg[\lambda^\alpha-\left(\lambda+\frac{\partial}{\partial x}\right)^{\alpha}
\bigg]f \,,
\quad \lambda\geq 0 \,,
\quad \nu \,, \alpha \in (0,1) \,,
\end{equation}
under the initial-boundary conditions
\begin{equation}
\begin{cases}
f(x,0)= \delta(x) \,,\\
f(0,t) = 0 \,.
\end{cases}
\end{equation}
Finally, we recall also that the density
of the time-changed Brownian motion
\begin{equation}\label{1.7}
\mathcal{X}^{\nu}_{\lambda,\alpha}(t) :=
B(\mathcal{T}^\nu_{\lambda, \alpha}(t)) \,,
\quad \lambda\geq 0 \,,
\quad \nu , \alpha \in (0,1) \,,
\end{equation}
coincides with the solution of the tempered equation \cite{luisa}
\begin{equation}
\label{in1}
D_t^\nu g =
\bigg[\lambda^\alpha-\left(\lambda-\frac{\partial^2}{\partial x^2}
\right)^{\alpha}\bigg] \, g \,,
\quad
- \infty < x < + \infty \,.
\end{equation}
\section{The tempered space-fractional Cattaneo-type equation}
Let $\mathcal{L}^\beta(t)$, with $t>0$,
be the inverse process of the sum of two independent positively
skewed stable subordinators $H_1^{2\beta}$ and $H_2^\beta$, that is
\begin{equation}
\mathcal{L}^\beta(t)
:= \inf \bigg\{s\geq 0, \ H_1^{2\beta}(s)+(2k)^{1/\beta}H_2^\beta(s) \geq t
\bigg\}\,, \quad t \,, k >0 \,, \quad \beta \in (0,1/2) \,.
\end{equation}
We recall that the Laplace transform with respect to $t$
of the law $l_\beta(x,t)$
of the process $\mathcal{L}^\beta(t)$ is given by \cite{mirko}
\begin{equation}
\widetilde{l}_\beta(x,s)=
(s^{2\beta-1}+2k s^{\beta-1})e^{-xs^{2\beta}-2k xs^\beta} \,,
\end{equation}
and satisfies the fractional equation
\begin{equation}
D_t^{2\beta} u +2k D_t^\beta u = - \frac{\partial u}{\partial x}
\,.
\end{equation}
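The expression of $\widetilde{l}_\beta$ is consistent with this equation. Indeed, assuming
the natural initial condition $l_\beta(x,0)=\delta(x)$ (the inverse process starts at $0$)
and recalling that $2\beta,\beta\in(0,1)$, the $t$-Laplace transform of the equation reads
\begin{equation*}
\frac{\partial}{\partial x}\widetilde{l}_\beta(x,s)
= -(s^{2\beta}+2k s^{\beta})\,\widetilde{l}_\beta(x,s)
+ (s^{2\beta-1}+2k s^{\beta-1})\,\delta(x) \,,
\end{equation*}
whose solution vanishing for $x<0$ is precisely
$\widetilde{l}_\beta(x,s)=(s^{2\beta-1}+2k s^{\beta-1})\,e^{-x(s^{2\beta}+2k s^{\beta})}$.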
Then, we have the following result.
\begin{te}
The solution of the tempered fractional equation
\begin{equation}\label{tem2}
D_t^{2\beta} f+2k D_t^{\beta}f =
\bigg[
\lambda^\alpha-\left(\lambda-\frac{\partial^2}{\partial x^2}\right)^{\alpha}
\bigg] \, f \,,
\quad -\infty < x < +\infty \,,
\end{equation}
with $\beta \in (0,1/2)$ and $\alpha \in(0,1)$,
under the conditions $f(x,0) = \delta(x)$ and $f(0,t) = 0$,
coincides with the probability law of the process
\begin{equation}\label{tc}
W(t): = B(\mathcal{T}_{\lambda,\alpha}(\mathcal{L}^\beta(t))) \,,
\quad t>0 \,.
\end{equation}
Moreover, the fundamental solution of equation \eqref{tem2}
has the following Fourier transform with respect to $x$
\begin{equation}
\widehat{u}(\xi,t) = \frac{1}{2}\bigg[
\bigg(1+\frac{k}{\sqrt{k^2-\theta(\xi)}}\bigg)E_{\beta,1}(r_1t^\beta)+
\bigg(1-\frac{k}{\sqrt{k^2-\theta(\xi)}}\bigg)
E_{\beta,1}(r_2t^\beta)\bigg] \,,
\end{equation}
with
\begin{align}
\nonumber & \theta (\xi) = (\lambda+\|\xi\|^2)^\alpha-\lambda^\alpha \,,\\
\nonumber & r_1 = -k+\sqrt{k^2-\theta (\xi)} \,,\\
\nonumber & r_2 = -k-\sqrt{k^2-\theta (\xi)} \,,
\end{align}
and where
$$
E_{\beta, \gamma}(t) = \sum_{k=0}^\infty\frac{t^k}{\Gamma(\beta k+\gamma)} \,,
$$
is the Mittag--Leffler function, for $\beta >0$ and $\gamma \in \mathbb{C}$.
\end{te}
\begin{proof}
The probability law of the process $W(t)$ is given by
\begin{equation}
w(x,t) = \int_0^\infty v(x,\mu)l_\beta(\mu, t) \, d\mu \,,
\end{equation}
where $v(x,t)$ is the density of the tempered stable subordinator whose
Fourier-transform is given by
\begin{equation}
\widehat{v}(\xi,t) = e^{-t \theta(\xi)} \,.
\end{equation}
Therefore, the Fourier transform of $w(x,t)$ is given by
\begin{equation}
\widehat{w}(\xi, t) =
\int_0^\infty e^{-\mu \theta(\xi)}l_\beta(\mu, t) \, d\mu \,.
\end{equation}
Since the Laplace transform with respect to $t$
of $l_\beta(x,t)$ is given by
\begin{equation}
\widetilde{l}_\beta(x,s) =
(s^{2\beta-1}+2k s^{\beta-1})e^{-xs^{2\beta}-2k xs^\beta} \,,
\end{equation}
then the Fourier--Laplace transform of the probability law
of the process $W(t)$ is given by
\begin{align}
\nonumber \widehat{\widetilde{w}}(\xi,s) &=
\int_0^\infty e^{-st} \, dt
\int_0^\infty e^{-\mu \theta(\xi)}l_\beta(\mu, t) \, d\mu\\
& = (s^{2\beta-1}+2k s^{\beta-1})\int_0^\infty e^{-\mu \theta(\xi)-\mu s^{2\beta}-2k \mu s^\beta} d\mu = \frac{s^{2\beta-1}+2k s^{\beta-1}}{s^{2\beta}+2k s^\beta+\theta(\xi)}
\,.
\label{ltf}
\end{align}
If we compare it with the Fourier-Laplace transform of the
fundamental solution of the equation \eqref{tem2},
we observe that they are equal and the claimed result holds.
Regarding the inverse time-Laplace transform of \eqref{ltf},
we report that it can be obtained through algebraic manipulations
\cite[pp. 1021--1022]{mirko} and by recalling the Laplace transform
formulas for the Mittag--Leffler functions.
\end{proof}
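More explicitly, the inversion mentioned at the end of the proof can be carried out as follows:
since $r_1r_2=\theta(\xi)$ and $r_1+r_2=-2k$, the denominator of \eqref{ltf} factorizes as
$s^{2\beta}+2k s^{\beta}+\theta(\xi)=(s^{\beta}-r_1)(s^{\beta}-r_2)$, and a partial-fraction
decomposition in the variable $s^{\beta}$ yields
\begin{equation*}
\widehat{\widetilde{w}}(\xi,s)
= \frac{1}{2}\left(1+\frac{k}{\sqrt{k^2-\theta(\xi)}}\right)\frac{s^{\beta-1}}{s^{\beta}-r_1}
+ \frac{1}{2}\left(1-\frac{k}{\sqrt{k^2-\theta(\xi)}}\right)\frac{s^{\beta-1}}{s^{\beta}-r_2} \,,
\end{equation*}
so that the stated expression for $\widehat{u}(\xi,t)$ follows from the Laplace transform pair
$\mathfrak{L}\{E_{\beta,1}(r\,t^{\beta});s\}=s^{\beta-1}/(s^{\beta}-r)$.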
We observe that the process $X(t):=B(\mathcal{T}_{\lambda,\alpha}(t))$
is indeed a L\'evy process with L\'evy exponent
\begin{align}
\nonumber \psi(\xi) &= -\frac{1}{t}\ln \mathbb{E}[e^{i\xi X(t)}]\\
\nonumber & = -\frac{1}{t}\ln \mathbb{E}[\mathbb{E}[e^{i\xi B(\mathcal{T}(t))}|\mathcal{T}(t)]] = -\frac{1}{t}\ln\mathbb{E}\, e^{-\xi^2\mathcal{T}(t)/2}\\
\nonumber & = - \frac{1}{t} \ln \bigg[\int_0^\infty e^{-\frac{\xi^2}{2}x} f_{\lambda, \alpha}(x,t)dx\bigg]
\,,
\end{align}
and by using equation \eqref{1.3} we have that
\begin{align}
\nonumber
\psi(\xi) &= -\frac{1}{t}\ln \bigg[\int_0^\infty e^{\lambda^\alpha t-\lambda x-\frac{\xi^2}{2}x} h_{\alpha}(x,t) \, dx\bigg]\\
&= -\frac{1}{t}\ln[e^{\lambda^\alpha t-(\frac{\xi^2}{2}+\lambda)^\alpha t}] =
\bigg[(\frac{\xi^2}{2}+\lambda)^\alpha-\lambda^\alpha\bigg]
\,.
\end{align}
Since the Laplace exponent of the sum of stable subordinators
$H_1^{2\beta}(t)+(2k)^{1/\beta}H_2^\beta(t)$ is given by
\begin{equation}
\phi(s) = (s^{2\beta}+2k s^\beta) \,,
\end{equation}
then the mean of the inverse process $\mathcal{L}^\beta(t)$
has Laplace transform given by
\begin{equation}
\mathfrak{L}\bigg\{\mathbb{E}\mathcal{L}^\beta(t);s\bigg\} = \frac{1}{s\phi(s)} = \frac{1}{s^{2\beta+1}+2k s^{\beta+1}}
\,,
\end{equation}
whose inverse Laplace transform, namely $U(t)$, is given by
\begin{equation}
U(t):= t^{2\beta}E_{\beta,2\beta+1}(-2kt^\beta)
\,.
\end{equation}
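The expression of $U(t)$ can be verified by means of the well-known Laplace transform pair
$\mathfrak{L}\{t^{\gamma-1}E_{\beta,\gamma}(-a\,t^{\beta});s\}=s^{\beta-\gamma}/(s^{\beta}+a)$:
choosing $\gamma=2\beta+1$ and $a=2k$ gives
\begin{equation*}
\mathfrak{L}\{t^{2\beta}E_{\beta,2\beta+1}(-2k\,t^{\beta});s\}
= \frac{s^{-\beta-1}}{s^{\beta}+2k}
= \frac{1}{s^{2\beta+1}+2k\,s^{\beta+1}} \,,
\end{equation*}
in agreement with the formula for $\mathfrak{L}\{\mathbb{E}\mathcal{L}^\beta(t);s\}$ above.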
We now recall \cite[Theorem 2.1]{Leo}
that for a time-changed L\'evy process $X(Y(t))$,
with $X(t)$ a homogeneous L\'evy process and
$Y(t)$ a non-decreasing process independent of $X$,
we have
\begin{equation}
\mathbb{E}X(Y(t)) = U(t) \mathbb{E}X(1) \,,
\end{equation}
and
\begin{equation}
Var X(Y(t)) = \mathbb{E}[X(1)]^2 Var[Y(t)]+U(t)Var[X(1)]
\,,
\end{equation}
where $U(t) = \mathbb{E}Y(t)$.
In our case, since the mean value of $X(t)$ is null for all $t>0$,
we have that
\begin{equation}
\mathbb{E}X(\mathcal{L}^\beta(t)) = 0 \,, \quad \text{for all} \quad t >0 \,,
\end{equation}
and
\begin{equation}
Var X(\mathcal{L}^\beta(t)) =
t^{2\beta}E_{\beta,2\beta+1}(-2kt^\beta)Var[X(1)]
\,.
\end{equation}
We observe that
\begin{align}
\nonumber Var[X(1)] &=\mathbb{E}\bigg[\mathbb{E}[B(\mathcal{T}_{\lambda,\alpha}(1))^2|\mathcal{T}_{\lambda,\alpha}(1)]\bigg]= \mathbb{E}\bigg[\mathcal{T}(1)^2\bigg]\\
\nonumber& = \int_0^{+\infty} z^2 e^{-\lambda z+\lambda^\alpha}h_\alpha(z,1) \, dz= e^{\lambda^\alpha}\frac{d^2}{d\lambda^2}\int_0^{+\infty}e^{-\lambda z} h_\alpha(z,1) \, dz\\
& = e^{\lambda^\alpha}\frac{d^2}{d\lambda^2}e^{-\lambda^\alpha}= \alpha \lambda^{\alpha-2}[1-\alpha+\alpha\lambda^\alpha]
\end{align}
and we conclude that
\begin{equation}
Var X(\mathcal{L}^\beta(t)) = \alpha \lambda^{\alpha-2}[1-\alpha+\alpha\lambda^\alpha]t^{2\beta}E_{\beta,2\beta+1}(-2kt^\beta) \,.
\end{equation}
We recover, for $\lambda = 0$,
the result for the space-time fractional (non-tempered)
Cattaneo process
\cite[Theorem 4.1]{mirko}
\begin{equation}
Var X(\mathcal{L}^\beta(t)) =
\alpha [1-\alpha]t^{2\beta}E_{\beta,2\beta+1}(-2kt^\beta)
\,.
\end{equation}
\begin{os}
We observe that the probabilistic interpretation given by the time-changed process \eqref{tc} holds only for $\beta\in (0,1/2)$, in which case \eqref{tem2} is a sort of multi-term time-fractional diffusion equation with tempered space derivatives. On the other hand, the analytical representation of the solution is correct also for $1/2\!< \beta\!<\!1$, under the additional constraint
$\partial_t u\big|_{t = 0}=0$. Moreover, we recover the Fourier transform of the fundamental solution that was
originally found by Beghin and Orsingher \cite{Enzo}.
\end{os}
We can also consider the space-Laplace transform of the solution for the more general case $\alpha \in (0,1)$, even if in this case we lose the probabilistic representation, which is valid only for $\beta\in (0,1/2)$.
For the particular case $\beta = 1$, we have the following result.
\begin{prop}
The space-Laplace transform of the solution for the fractional problem \eqref{tem2}, under the conditions
$u(x,0) = \delta(x)$ and $\partial_t u(x,t)\bigg|_{t=0}=0$, is given by
\begin{align}\label{sol0}
\tilde{u}(s,t) = \frac{e^{-kt}}{2}&\bigg[\left(1+\frac{k}{\sqrt{k^2-\psi(s)}}\right)e^{t\sqrt{k^2-\psi(s)}}\\
\nonumber &+\left(1-\frac{k}{\sqrt{k^2-\psi(s)}}\right) e^{-t\sqrt{k^2-\psi(s)}}\bigg] \,,
\label{cf}
\end{align}
where
\begin{equation}
\psi(s) = (s+\lambda)^\alpha-\lambda^\alpha \,.
\end{equation}
\end{prop}
\begin{proof}
We take the space-Laplace transform and, by using \eqref{lt},
we have that
\begin{equation}
\frac{\partial^2 \widetilde{u}}{\partial t^2}+
2 k\frac{\partial \widetilde{u}}{\partial t} =
\lambda^\alpha \widetilde{u}-(s+\lambda)^\alpha \widetilde{u} =
-\psi(s) \widetilde{u}
\,,
\end{equation}
whose solution, under the given conditions, is given by \eqref{sol0}.
\end{proof}
To conclude, we consider the following Dirichlet problem
\begin{equation}\label{dr}
\begin{cases}
& \displaystyle \frac{\partial^2 u}{\partial t^2}+
2 k\frac{\partial u}{\partial t} =
\frac{\partial^{\lambda, \alpha}u}{\partial x^{\lambda, \alpha}} \,,
\quad x \geq 0 \,,\\
& u(x,0) = 0 \,, \quad u(0,t) = \phi(t) \,,\\
& \displaystyle \frac{\partial u}{\partial t}\bigg|_{t=0} = 0 \,,
\end{cases}
\end{equation}
We have the following result.
\begin{prop}
The time-Laplace transform of the solution for the Dirichlet problem
\eqref{dr} is given by
\begin{equation}
\widetilde{u}(x,s) = \widetilde{\phi}(s) e^{-\lambda x}
E_{\alpha,1}\left[-
(s^2+2k s+\lambda^\alpha) x^\alpha\right] \,.
\label{dirichletsol}
\end{equation}
\end{prop}
\begin{proof}
By taking the time-Laplace transform of \eqref{dr}, we have that
\begin{equation}
s^2\tilde{u}+ 2 ks \tilde{u} =
e^{-\lambda x} D_x^\alpha [e^{\lambda x}\tilde{u}]-\lambda^\alpha \tilde{u} \,.
\end{equation}
Therefore, we have that
\begin{equation}
e^{-\lambda x} D_x^\alpha [e^{\lambda x}\tilde{u}] = \left(s^2+2ks+\lambda^\alpha\right)\tilde{u} \,,
\end{equation}
whose solution,
according to the boundary condition and by recalling that the one-parameter
Mittag--Leffler function is an eigenfunction of the
Caputo fractional derivative $D_x^\alpha$,
is given by (\ref{dirichletsol}).
\end{proof}
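For completeness, we recall here the eigenfunction property invoked in the last step of the proof:
for any constant $c$ and $\alpha\in(0,1)$,
\begin{equation*}
D_x^{\alpha}\, E_{\alpha,1}(c\,x^{\alpha}) = c\, E_{\alpha,1}(c\,x^{\alpha}) \,, \quad x>0 \,,
\end{equation*}
so that $e^{\lambda x}\widetilde{u}(x,s)$ is, up to the multiplicative factor $\widetilde{\phi}(s)$
fixed by the boundary condition at $x=0$, a Mittag--Leffler eigenfunction of the Caputo
derivative $D_x^\alpha$.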
We now consider the special case $k=\lambda ^{\alpha /2}$;
then (\ref{dirichletsol}) can be rewritten as
\begin{equation}
\widetilde{u}(x,s)=\widetilde{\phi }(s)e^{-\lambda x}E_{\alpha
,1}\left[-(s+\lambda ^{\alpha /2})^{2}x^{\alpha }\right] \,,
\end{equation}
and the solution $u(x,t)$ can be explicitly derived.
We start by considering that
\begin{equation}
\int_{0}^{+\infty }e^{-\eta x}E_{\alpha ,1}(-\theta ^{2}x^{\alpha })dx=\frac{%
\eta ^{\alpha -1}}{\eta ^{\alpha }+\theta ^{2}} \,,
\end{equation}
and we observe that its inverse Laplace transform
with respect to $\theta $ is given by
\begin{equation}
\mathcal{L}^{-1}\left\{ \frac{\eta ^{\alpha -1}}{\eta ^{\alpha }+\theta ^{2}}%
;t\right\} =\eta ^{\alpha -1}tE_{2,2}(-\eta ^{\alpha }t^{2})
\,.
\end{equation}
Now we invert the Laplace transform with respect to $\eta $,
by considering the following representation of
the Mittag--Leffler function as H function
\cite[formula (1.136)]{mathai_etal-2010}:
\begin{equation}
E_{\alpha ,\beta }(x)=H_{1,2}^{1,1}\left[ \left. -x\right\vert
\begin{array}{cc}
(0,1) & \; \\
(0,1) & (1-\beta ,\alpha)
\end{array}
\right] \,.
\end{equation}
We then apply the inverse transformation
\cite[formula (2.21)]{mathai_etal-2010}
(after checking that the conditions
are satisfied for $\sigma =\alpha $ and $\rho =1-\alpha $), as follows%
\begin{eqnarray*}
\mathcal{L}^{-1}\left\{ \eta ^{\alpha -1}t \,
E_{2,2} (- \eta^\alpha t^2) ; x\right\}
&=&tx^{-\alpha }H_{2,2}^{1,1}\left[ \left. \frac{t^{2}}{x^{\alpha }}%
\right\vert
\begin{array}{cc}
(0,1) & (1-\alpha ,\alpha ) \\
(0,1) & (-1,2)%
\end{array}%
\right] \\
&=&\cite[{\rm formula} \, (1.60)]{mathai_etal-2010} \\
&=&\frac{1}{t}H_{2,2}^{1,1}\left[ \left. \frac{t^{2}}{x^{\alpha }}%
\right\vert
\begin{array}{cc}
(1,1) & (1,\alpha ) \\
(1,1) & (1,2)%
\end{array}%
\right] \\
&=&\cite[{\rm formula} \, (1.58)]{mathai_etal-2010} \\
&=&\frac{1}{t}H_{2,2}^{1,1}\left[ \left. \frac{x^{\alpha }}{t^{2}}%
\right\vert
\begin{array}{cc}
(0,1) & (0,2) \\
(0,1) & (0,\alpha )%
\end{array}%
\right] \\
&=&\frac{1}{t}\frac{1}{2\pi i}\int_{L}\left( \frac{x^{\alpha }}{t^{2}}%
\right) ^{-w}\frac{\Gamma (w)\Gamma (1-w)}{\Gamma (2w)\Gamma (1-\alpha w)}
\, dw
\\
&=&[\text{by the duplication property of the Gamma function}] \\
&=&\frac{2\sqrt{\pi }}{t}\frac{1}{2\pi i}\int_{L}\left( \frac{x^{\alpha }}{%
4t^{2}}\right) ^{-w}\frac{\Gamma (1-w)}{\Gamma (\frac{1}{2}+w)\Gamma
(1-\alpha w)} \, dw \\
&=&\frac{2\sqrt{\pi }}{t}H_{2,1}^{0,1}\left[ \left. \frac{x^{\alpha }}{4t^{2}%
}\right\vert
\begin{array}{cc}
(0,1) & (1/2,1) \\
(0,\alpha ) & \;%
\end{array}%
\right] \,,
\end{eqnarray*}%
where $L$ is the loop beginning and ending at $+\infty $,
denoted by $L_{+\infty }$ in the treatise
\cite[point ii), p. 3]{mathai_etal-2010},
since, in this case, $\mu =\alpha -2<0$.
As a consequence of the previous steps we can write the solution
$u(x,t)$ as follows
\begin{equation}
u(x,t)=2\sqrt{\pi }e^{-\lambda x}\int_{0}^{t}\phi (t-z)\frac{e^{-\lambda
^{\alpha /2}z}}{z}H_{2,1}^{0,1}\left[ \left. \frac{x^{\alpha }}{4z^{2}}%
\right\vert
\begin{array}{cc}
(0,1) & (1/2,1) \\
(0,\alpha ) & \;%
\end{array}%
\right] \, dz \,.
\end{equation}
\section*{Acknowledgments}
GP is supported by the Basque Government through the 2022--2025 programs
and by the Ministry of Science, Innovation and Universities:
BCAM Severo Ochoa accreditation SEV-2017-0718.
The research was carried out under the auspices of
INDAM-GNFM
(the National Group of Mathematical Physics of
the Italian National Institute of High Mathematics).
\section{Introduction}
A key requirement for deploying autonomous agents in many real-world domains is the ability to perform multiple novel potential tasks on demand. These tasks typically share components like the objects and the trajectory segments involved, which creates the opportunity to reuse knowledge across tasks \cite{Taylor09}. For example, a service robot on the factory floor might have to fetch the same set of components but in different orders depending on the product being assembled, in which case it should only need to learn to fetch a component once.
Linear temporal logic (LTL) \cite{pnueli1977temporal} is becoming a popular means of specifying an objective for a reinforcement learning agent \cite{littman2017environment, tor-etal-aamas18, camacho2019ltl}.
Its compositional grammar reflects the compositional nature of most tasks. However, most prior approaches to reinforcement learning for LTL specifications restart learning from scratch for each LTL formula.
We propose LTL-Transfer, a novel algorithm that exploits the compositionality inherent to LTL task specifications to enable an agent to maximally reuse policies learned in prior LTL formulas to satisfy new, unseen specifications without additional training.
For example, a robot that has learned to fetch a set of components on the factory floor should be able to fetch them in any order. LTL-Transfer also ensures that transferred subpolicies do not violate any safety constraints.
We demonstrate the efficacy of LTL-Transfer in a Minecraft-inspired domain, where the agent can complete over 90\% of 500 new task specifications by training on only 50 specifications.
Further, we demonstrate that it is possible to transfer satisfying policies with as few as 5 training specifications for certain classes of LTL formulas.
We then deployed LTL-Transfer on a quadruped mobile manipulator to show its zero-shot transfer ability in a real-world household environment when performing fetch-and-delivery tasks.
\section{Preliminaries}
\textbf{Linear temporal logic (LTL) for task specification: } LTL is a promising alternative to a numerical reward function as a means of expressing task specifications. An LTL formula $\varphi$ is a boolean function that determines whether a given trajectory has satisfied the objective expressed by the formula. \citet{littman2017environment} argue that such task specifications are more natural than numerical reward functions, and they have subsequently been used as a target language for acquiring task specifications in several settings, including from natural language \cite{roma} and learning from demonstration \cite{shah2018bayesian}. Formally, an LTL formula is interpreted over traces of Boolean propositions over discrete time, and is defined through the following recursive syntax:
\begin{equation}
\varphi := p \mid \neg \varphi \mid \varphi_1 \vee \varphi_2 \mid \mathbf{X} \varphi \mid \varphi_1 ~\mathbf{U} ~ \varphi_2
\end{equation}
Here $p$ represents an atomic proposition, mapping a state to a Boolean value; $\varphi,~ \varphi_1,~ \varphi_2$ are any valid LTL formulas. The operator $\mathbf{X}$ (next) is used to define a property $\mathbf{X} \varphi$ that holds if $\varphi$ holds at the next time step. The binary operator $\mathbf{U}$ (until) is used to specify ordering constraints. The formula $\varphi_1~ \mathbf{U} ~\varphi_2$ holds if $\varphi_2$ holds at some time point in the future, and $\varphi_1$ holds until $\varphi_2$ first holds. The operators $\neg$ (not) and $\vee$ (or) are identical to the propositional logic operators.
We also utilize the following abbreviated operators: $\wedge$ (and), $\mathbf{F}$ (eventually), and $\mathbf{G}$ (globally or always). $\mathbf{F} \varphi$ specifies that the formula $\varphi$ must hold at least once in the future, while $\mathbf{G} \varphi$ specifies that $\varphi$ must always hold in the future.
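The abbreviated operators can be defined from the base syntax in the standard way, namely
\begin{equation}
\varphi_1 \wedge \varphi_2 := \neg(\neg\varphi_1 \vee \neg\varphi_2) \,, \qquad
\mathbf{F}\varphi := \top~\mathbf{U}~\varphi \,, \qquad
\mathbf{G}\varphi := \neg\mathbf{F}\neg\varphi \,,
\end{equation}
where $\top$ denotes the proposition that holds in every state.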
Consider the Minecraft map depicted in Figure \ref{fig:adverserial}. The task of collecting both $wood$ and $axe$ is represented by the LTL formula $\mathbf{F} axe~ \wedge ~\mathbf{F} wood$.
The task of collecting $wood$ after collecting $axe$ is represented by the formula $\mathbf{F}(axe ~\wedge~ \mathbf{F} wood) $.
Similarly, the task of collecting $wood$ only once $axe$ has been collected is represented by the formula $\mathbf{F} wood ~\wedge ~ \neg wood~\mathbf{U}~ axe$.
Every LTL formula can be represented as a B\"uchi automaton \cite{vardi1996automata, gerth1995simple} interpreted over an infinite trace of truth values of the propositions used to construct the formula, thus providing an automated translation of a specification into a transition-based representation. We restrict ourselves to the co-safe \cite{kupferman2001model, pnueli1990hierarchy} fragment of LTL that consists of formulas that can be verified by a finite length trace, thus making it ideal for episodic tasks. \citet{camacho2019ltl} showed that each co-safe LTL formula can be translated into an equivalent reward machine \cite{icarte2022reward, icarte2018using} $\mathcal{M}_\varphi = \tuple{ \mathcal{Q}_\varphi, q_{0,\varphi},\mathcal{Q}_{term, \varphi}, \varphi, T_\varphi, R_\varphi}$; where $\mathcal{Q}_\varphi$ is the finite set of states, $q_{0,\varphi}$ is the initial state, $\mathcal{Q}_{term, \varphi}$ is the set of terminal states; $T_\varphi: \mathcal{Q}_{\varphi} \times 2^{\mathcal{P}} \rightarrow \mathcal{Q}_{\varphi}$ is the deterministic transition function; and $R_\varphi: \mathcal{Q_\varphi} \rightarrow \mathbb{R}$ represents the reward accumulated by entering a given state. LTL-Transfer, our proposed algorithm for transferring learned policies to novel LTL specifications, is compatible with all algorithms that generate policies by solving a product MDP of the reward machine $\mathcal{M}_\varphi$ and the task environment.
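As a concrete illustration, consider $\varphi = \mathbf{F} wood ~\wedge~ \neg wood~\mathbf{U}~axe$ over $\mathcal{P}=\{axe, wood\}$. One valid reward machine for $\varphi$ (up to the particular translation procedure used) has an initial state $q_0$, an intermediate state $q_1$, an accepting state $q^{\top}$, and a failure state $q^{\bot}$: the self edge of $q_0$ is $\neg axe \wedge \neg wood$, and its outgoing edges are $axe \wedge \neg wood$ (to $q_1$), $axe \wedge wood$ (to $q^{\top}$), and $wood \wedge \neg axe$ (to $q^{\bot}$); state $q_1$ has self edge $\neg wood$ and a single outgoing edge $wood$ leading to $q^{\top}$. Only entering $q^{\top}$ yields a positive reward.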
\textbf{Options framework: } \citet{sutton_option99} introduced a framework for incorporating temporally-extended actions, called options, into reinforcement learning.
An option $o = \tuple{\mathcal{I}, \beta, \pi}$ is defined using the initiation set $\mathcal{I}$, which determines the states where the option can be executed; the termination condition $\beta$, which determines when option execution ends; and the option policy $\pi$. We utilize the options framework to define the task-agnostic skills learned by LTL-Transfer.
\section{Related work}
Most approaches aimed at extending the reinforcement learning paradigm to temporal tasks rely on the automaton equivalent of the LTL formula to augment the state space and generate an equivalent product MDP. Q-learning for reward machines (Q-RM) \cite{camacho2019ltl, icarte2022reward, icarte2018using}, geometric-LTL (G-LTL) \cite{littman2017environment}, LPOPL \cite{tor-etal-aamas18} are examples of approaches that extend the environment state-space with the automaton equivalent to the LTL specification. Notably, \citet{jothimurugan2021compositional} proposed DiRL, an algorithm that interleaves graph-based planning on the automaton with hierarchical reinforcement learning to bias exploration towards trajectories that lead to the successful completion of the LTL specification. However, while these approaches exploit the compositional structure of LTL to speed up learning, they do not exploit the compositionality to transfer to novel task specifications. The policy to satisfy a novel LTL formula must be learned from scratch.
A common approach towards generalization in a temporal task setting has been to learn independent policies for each subtask \cite{leon2020systematic, leon2021nutshell, araki_lof_icml21, andreas2017modular} an agent might perform in the environment. When given a new specification, the agent sequentially composes these policies in an admissible order.
Consider the Minecraft-inspired grid world depicted in Figure \ref{fig:adverserial} containing $wood$ and $axe$ objects. The subtask-based approaches would train policies to complete subtasks involving reaching each of these objects.
In the case of being tasked with the specification $\varphi_{test} = \mathbf{F} wood ~\wedge~ (\neg wood ~\mathbf{U} ~axe)$ (i.e. collect $wood$, but do not collect $wood$ until $axe$ is collected), agents trained with the subtask-based approaches would violate the ordering constraint by reaching $axe$ through the grid cells containing $wood$.
These approaches rely on additional fine-tuning to correctly satisfy the target task.
We propose a general framework for transferring learned policies to novel specifications in a zero-shot setting while preserving the ability to not violate safety constraints.
Our approach draws inspiration from prior works on learning portable skills in Markov domains \cite{konidaris2007building, james2020learning, bagaria2019option, bagaria2021robustly}. These approaches rely on learning a task-agnostic representation of preconditions, constraints, and effects of a skill based on the options framework \cite{sutton_option99}.
We apply this paradigm towards learning portable skills requisite for satisfying temporal specifications.
\citet{kuo2020encoding} proposed learning a modular policy network by composing subnetworks for each proposition and operator. The final policy network for a new task specification is created by composing the corresponding subnetwork modules.
\citet{vaezipoor2021ltl2action} propose learning a latent embedding over LTL formulas using a graph neural network to tackle novel LTL formulas.
In contrast, our approach utilizes symbolic methods to identify subpolicies best suited for transfer, thus requiring training on orders of magnitude fewer specifications to achieve comparable results.
Finally, \citet{xu2019transfer} considered transfer learning between pairs of source and target tasks, while our approach envisions training on a collection of task specifications rather than pairs of source and target tasks.
\section{Problem definition}
\label{sec:def}
Consider the environment map depicted in Figure \ref{fig:adverserial}b. Assume that the agent has trained to complete the specifications to individually collect $axe$ ($\mathbf{F} axe$) and $wood$ ($\mathbf{F} wood$).
Now the agent must complete the specification $\varphi_1 = \mathbf{F} (axe ~\wedge ~ \mathbf{F} ~wood)$, i.e. first collect $axe$, then $wood$. Here the agent should identify that sequentially composing the policies for $\mathbf{F} axe$ and $\mathbf{F} wood$ completes the new task (as depicted in blue).
Now consider a different test specification $\varphi_2 = \mathbf{F} wood ~\wedge~ \neg wood ~\mathbf{U}~ axe$, i.e. collect $wood$, but avoid visiting $wood$ until $axe$ is collected. Here the agent must realize that the policy for $\mathbf{F} axe$ does not guarantee that $wood$ is not visited. Therefore it must not start the task execution using only these learned skills, so as to not accidentally violate the ordering constraint. We develop LTL-Transfer to generate such behavior when transferring learned policies to novel LTL tasks. We begin by formally describing the problem setting.
\begin{figure}[!!!ht]
\centering
\includegraphics[width=0.85\textwidth]{figures/Adverserial2.png}
\caption{An example 5$\times$5 map in a Minecraft-like grid world. The agent is assumed to have trained on the two training specifications, and is expected to satisfy $\varphi_1$ and $\varphi_2$.
Figure 1a depicts the trajectories adopted by an agent using a subtask-based algorithm (blue for $\varphi_1$, red for $\varphi_2$).
Figure 1b depicts the trajectories followed by LTL-Transfer, our proposed algorithm. Note that LTL-Transfer does not start the task execution for $\varphi_2$, as the training task policies do not guarantee the preservation of the ordering constraint. Figure 1c depicts the optimal trajectories for $\varphi_1$ and $\varphi_2$.}
\label{fig:adverserial}
\end{figure}
We represent the environment as an MDP without the reward function $\mathcal{M}_\mathcal{S} = \langle \mathcal{S}, \mathcal{A}, T_{\mathcal{S}} \rangle$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, and $T_{\mathcal{S}}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$ represents the transition dynamics of the environment.
We assume that the learning agent does not have access to the transition dynamics.
Further, a set $\mathcal{P}$ of Boolean propositions $\boldsymbol{\alpha}$ represents the facts about the environment, and a labeling function $L: \mathcal{S} \rightarrow 2^{\mathcal{P}}$ maps the state to these Boolean propositions.
These Boolean propositions are the compositional building blocks for defining the tasks that can be performed within the environment $\mathcal{M}_\mathcal{S}$.
We assume that a task within the environment $\mathcal{M}_\mathcal{S}$ is defined by a linear temporal logic (LTL) formula $\varphi$, and that the agent is trained on a set of training tasks $\varPhi_{train} = \{ \varphi_1, \varphi_2, \ldots , \varphi_n\}$.
We further assume that these policies were learned using a class of reinforcement learning algorithms that operate on a product MDP composed of the environment $\mathcal{M}_\mathcal{S}$, and the automaton representing the non-Markov LTL task specification.
Q-RM \cite{camacho2019ltl, icarte2022reward, icarte2018using}, G-LTL \cite{littman2017environment}, LPOPL \cite{tor-etal-aamas18} are examples of such algorithms. LPOPL explicitly allows for sharing policies for specifications that share progression states; therefore, as the baseline best suited for transfer in a zero-shot setting, we choose LPOPL as our learning algorithm of choice.
LTL-Transfer operates in two stages. In the first stage, it accepts the set of training tasks $\varPhi_{train}$ and the learned policies, and outputs the set of task-agnostic, portable options $\mathcal{O}_{e}$.
In the second stage, given a novel task specification $\varphi_{test}$ and the set of options $\mathcal{O}_e$, LTL-Transfer identifies and executes a sequence of options to satisfy $\varphi_{test}$.
\section{LTL-Transfer with transition-centric options}
\label{sec:methods}
An LTL specification $\varphi \in \varPhi_{train}$ to be satisfied is represented as the reward machine $\mathcal{M}_\varphi = \tuple{ \mathcal{Q}_\varphi, q_{0,\varphi},\mathcal{Q}_{term, \varphi}, \varphi, T_\varphi, R_\varphi}$. This specification must be satisfied by the agent operating in an environment $\mathcal{M}_\mathcal{S} = \tuple{\mathcal{S}, \mathcal{A}, T_\mathcal{S}}$. The policy learned by LPOPL is Markov with respect to the environment states $\mathcal{S}$ for a given RM state, i.e. the subpolicy to be executed in state $q \in \mathcal{Q}_\varphi$, $\pi^\varphi_q: \mathcal{S} \rightarrow \mathcal{A}$.
An option $o^\varphi_q$ is executed in the reward machine (RM) state $q$. Our insight is that each of these options triggers a transition in the reward machine on a path that leads towards an acceptance state, and these transitions may occur in multiple tasks.
There might be multiple paths through the reward machine to an accepting state; therefore, the target transition of an option $o^\varphi_q$ is conditioned on the environment state where the option execution was initiated.
We propose recompiling each state-centric option into multiple transition-centric options by partitioning the initiation set of the state-centric option based on the transition resulting from the execution of the option policy from the starting state.
Each resulting transition-centric option will maintain truth assignments of $\mathcal{P}$ that ensure the self-transition until it achieves the truth assignments required to trigger the intended RM transition.
These transition-centric options are portable across different formulas. We describe our proposed algorithm in Section \ref{ss:relabel}.
Given a novel task specification $\varphi_{test} \not\in \varPhi_{train}$, the agent first constructs a reward machine representation of the specification, $\mathcal{M}_{\varphi_{test}}$, then identifies a path through the reward machine that can be traversed by a sequential composition of the options from the set of transition-centric options $\mathcal{O}_e$.
A key feature of our transfer algorithm is that it is sound and terminating, i.e. if it returns a solution with success, that task execution will satisfy the task specification. Further, it is guaranteed to terminate in finite time if it does not find a sequence of options that can satisfy a given task. We describe the details of this planning algorithm in Section \ref{ss:transfer}.
The key advantage of this approach is that the compilation of options can be computed offline for any given environment, and the options can then be transferred to novel specifications.
Thus learning to satisfy a limited number of LTL specifications can help satisfy a wide gamut of unseen LTL specifications.
\subsection{Compilation of transition-centric options}
\label{ss:relabel}
The policy learned by LPOPL to satisfy a specification $\varphi$ identifies the current reward machine state $q \in\mathcal{Q}_\varphi$ the task is in and executes a Markov policy $\pi^\varphi_q$ until the state of the reward machine progresses. This subpolicy can be represented as an option $o^\varphi_q = \tuple{\mathcal{S}, \beta_{e^\varphi_{q,q}}, \pi^\varphi_q}$, where the initiation set is the entire state-space of the task environment, and the option terminates when the truth assignments of the propositions $\boldsymbol{\alpha}$ do not satisfy the self-transition, as represented by the Boolean function $\beta_e$ defined as follows,
\begin{equation}
\beta_{e} = \begin{cases}
1, & \text{if } L(s) \nvDash e\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
A transition-centric option, $o_{e_1. e_2}$, executes a Markov policy such that it ensures that the truth assignments of $\mathcal{P}$ satisfy the self transition formula $e_1$ at all time steps until the policy yields a truth assignment that satisfies $e_2$. A transition-centric option is defined by the following tuple:
\begin{equation}
o_{e_1, e_2} = \tuple {\mathcal{S}, \beta_{e_1}, \pi, e_1, e_2, f_{e_2}}.
\end{equation}
Here, the initiation set is the entire environment state-space $\mathcal{S}$; the termination condition is triggered by the dissatisfaction of the self-transition formula $e_1$, as represented by $\beta_{e_1}$; the option executes the Markov policy $\pi:\mathcal{S}\rightarrow \mathcal{A}$; $e_1$ and $e_2$ represent the self-transition and the target edge formulas respectively; and $f_{e_2}: \mathcal{S} \rightarrow [0,1]$ represents the probability of completing the target edge $e_2$ when starting from $s\in \mathcal{S}$.
Algorithm \ref{alg:compile} describes our approach to compiling each state-centric option $o^\varphi_q$ into a set of transition-centric options. If $\mathcal{E}$ is the set of pairs of self and outgoing edge formulas from state $q$ of the reward machine, then executing the option's policy $\pi^{\varphi}_q$ results in a distribution over the outgoing edges $\{e^{\varphi}_{q,q'}: q' \text{ is an out-neighbor of } q \}$ conditioned on the environment state $s\in \mathcal{S}$ where the option execution was initiated.
Thus the distribution $f_{e^{\varphi}_{q,q'}}$ acts as a soft segmenter of the state-space $\mathcal{S}$. $f_{e^{\varphi}_{q,q'}}$ is estimated by sampling rollouts from all possible environment states in discrete domains, or can be learned using sampling-based methods \cite{bagaria2019option, bagaria2021robustly} in continuous domains. Each state-centric option $o_q$ can be compiled into a set of transition-centric options, $\set{o_{e^{\varphi}_{q,q},e^{\varphi}_{q,q'}}: q \in \mathcal{Q}_\varphi, q' \text{ is an out-neighbor of } q}$.
\begin{algorithm*}
\scriptsize
\caption{Compile state-centric options to transition-centric options}
\begin{algorithmic}[1]
\Function{Compile}{$\mathcal{M}_\mathcal{S}, \varPhi_{train}, \mathcal{O}_q$} \Comment{Environment, training specifications, and the learned state-centric options}
\State $\mathcal{O}_e \gets \emptyset$
\For{$\varphi \in \varPhi_{train}$}
\State $\mathcal{M}_{\varphi} \gets$ \Call{GenerateRM}{$\varphi$}
\State $\mathcal{O}^{\varphi}_q \gets \set{o^{\varphi'}_q: \varphi' = \varphi, o^{\varphi'}_q \in \mathcal{O}_q}$ \Comment{All state-centric options associated with task $\varphi$}
\For{$o^{\varphi}_q = \tuple{\mathcal{S}, \beta^\varphi_{e_1}, \pi^\varphi_q} \in \mathcal{O}^{\varphi}_q$}
\State $\mathcal{E} \gets \set{(e^{\varphi}_{q,q}, e^{\varphi}_{q,q'}): e^{\varphi}_{q,q} \text{ is the self edge, } q' \text{ is an out-neighbor of } q}$
\For{$s \in \mathcal{S}$}
\State Generate $N_r$ rollouts from $s$ with $\pi^{\varphi}_q$
\State Record edge transition frequencies $n_s(e_2) ~\forall~(e_1,e_2)\in \mathcal{E}$
\State $f_{e^{\varphi}_{q,q'}}(s) \gets \frac{n_s(e^{\varphi}_{q,q'})}{N_r} ~ \forall ~ q' \in \set{q': q' \text{ is an out-neighbor of } q}$ \label{lin:learn_f}
\EndFor
\State $\mathcal{O}^{q,\varphi}_e \gets \set{o^{q,\varphi}_e = \tuple{\mathcal{S}, \beta_{e^{\varphi}_{q,q}}, \pi^\varphi_q, e^{\varphi}_{q,q}, e^{\varphi}_{q,q'}, f_{e^{\varphi}_{q,q'}}}}$ \Comment{All transition-centric options from state-centric option $o^{\varphi}_q$}
\State $\mathcal{O}_e \gets \mathcal{O}_e \cup \mathcal{O}^{q,\varphi}_e$
\EndFor
\EndFor
\State \Return{$\mathcal{O}_e$}
\EndFunction
\end{algorithmic}
\label{alg:compile}
\end{algorithm*}
\subsection{Transferring to novel task specifications}
\label{ss:transfer}
Our proposed algorithm for composing the transition-centric options in the set $\mathcal{O}_e$ to solve a novel task specification $\varphi_{test}$ is described in Algorithm \ref{alg:transfer}.
Once the reward machine for the test task specification is generated, Line \ref{lin:prune} examines each edge of the reward machine, and identifies the transition-centric options that can achieve the edge transition while maintaining the self transition $e^{\varphi_{test}}_{q',q'}$, where $q'$ is the source node of the edge; if no such option is identified, we remove this edge from the reward machine.
Line \ref{lin:id_paths} identifies all paths in the reward machine from the current node to the accepting node of the RM.
Lines \ref{lin:id_options1} and \ref{lin:id_options2} construct a set of all available options that can potentially achieve an outgoing transition from the current node to a node on one of the feasible paths to the goal state.
The agent then executes the option with the highest probability of achieving the intended edge transition determined by function $f$ (Lines \ref{lin:option} and \ref{lin:execute}).
Note that the termination condition for the option $o^*_{e_1,e_2}$ is satisfied either when the option's self-transition condition is violated, i.e. $L(s) \nvDash e_1$, or when the task progresses to a new state of the reward machine $\mathcal{M}_{\varphi_{test}}$.
If the option fails to progress the reward machine, it is deleted from the set (Line \ref{lin:delete}), and the next option is executed. If at any point, the set of executable options is empty without having reached the accepting state $q^{\top}$, Algorithm \ref{alg:transfer} exits with a failure (Line \ref{lin:fail}). If the reward machine progresses to $q^{\top}$, it exits with success.
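As a concrete example, consider the map of Figure \ref{fig:adverserial} with training tasks $\mathbf{F} axe$ and $\mathbf{F} wood$, and the test specification $\varphi_2 = \mathbf{F} wood ~\wedge~ \neg wood ~\mathbf{U}~ axe$ from Section \ref{sec:def}. Every edge on a path from the initial RM state to $q^{\top}$ can only be traversed by reaching $axe$ while $wood$ has not previously been visited, but neither of the trained options provides a policy whose self transition guarantees $\neg wood$, so under either edge matching criterion of Section \ref{ss:match} these edges are pruned in Line \ref{lin:prune}. No path to $q^{\top}$ remains, and Algorithm \ref{alg:transfer} returns a failure without moving the agent, which is exactly the behavior depicted in Figure \ref{fig:adverserial}b.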
\begin{algorithm*}
\scriptsize
\caption{Zero-shot transfer to test task $\varphi^*$}
\begin{algorithmic}[1]
\Function{Transfer}{$\mathcal{M}_\mathcal{S}$, $\varphi^*$, $\mathcal{O}_e$}
\State $\mathcal{M}_{\varphi^*} \gets$ \Call{GenerateRM}{$\varphi^*$}
\State $\mathcal{M}_{\varphi^*} \gets \Call{Prune}{\mathcal{M}_{\varphi^*}}$ \label{lin:prune}
\State $s \gets$ \Call{Initialize}{$\mathcal{M}_\mathcal{S}$}
\State $q \gets q_{0,\varphi^*}$
\While{$q\neq q^{\top}$} \Comment{$q^{\top} \in \mathcal{Q}_{term,\varphi^*}$ is the accepting state for the underlying task specification.}
\State $P \gets \set{p_i~:~ p_i = [e_0,\ldots e_{n^i}] ~\text{are paths connecting } q \text{ and } q^{\top} \text{ in } \mathcal{M}_{\varphi^*}} $ \label{lin:id_paths}
\State $\forall p \in P: \mathcal{O}_{p[0]} = \set{o_{e_1, e_2}: \Call{MatchEdge}{(e_1, e_2),(e^{\varphi^*}_{q,q}, p[0])}, o_{e_1, e_2} \in \mathcal{O}_e}$ \Comment{Edge options for the first edge in each path} \label{lin:id_options1}
\State $\mathcal{O}_{[0]} = \bigcup_p\mathcal{O}_{p[0]}$ \label{lin:id_options2}
\State $\tuple{s',q'} \gets \tuple{s,q}$
\While{$\mathcal{O}_{[0]} \neq \emptyset$ and $q' = q$}
\State $o^* \gets \argmax_{o_{e_1, e_2}\in \mathcal{O}_{[0]}} f_{e2}(s)$ \label{lin:option} \Comment{Select option most likely to complete the transition}
\State $\tuple{s',q'} \gets$ \Call{Execute}{$\pi^*$} \label{lin:execute}
\Comment{$\pi^*$ is the policy corresponding to the option}
\If{$q' = q$}
\State $\mathcal{O}_{[0]} \gets \mathcal{O}_{[0]}\setminus o^*$ \label{lin:delete} \Comment{If $o^*_e$ does not induce progression, delete it}
\EndIf
\EndWhile
\If{$q' = q$}
\State \Return Failure \label{lin:fail}
\Else
\State $\tuple{s,q} \gets \tuple{s',q'}$
\EndIf
\EndWhile
\State \Return Success
\EndFunction
\end{algorithmic}
\label{alg:transfer}
\end{algorithm*}
\subsection{Matching transition-centric options to reward machine edges}\label{ss:match}
The edge matching conditions identify whether a given transition-centric option can be applied safely to transition along an edge of the reward machine on a feasible path.
Here we propose two edge matching conditions, \textit{constrained} and \textit{relaxed}, that both ensure that the task execution does not fail due to an unsafe transition.
The edge matching conditions are used to prune the reward machine graph to contain only the edges with feasible options available (Line \ref{lin:prune}) and to enumerate feasible options from a given reward machine state (Line \ref{lin:id_options1}). We use a propositional model counting approach \cite{valiant1979complexity} to evaluate the edge matching conditions. We propose the following two edge matching conditions (further details of their implementation are provided in the supplementary material):
\textbf{Constrained}
Given a test specification $\varphi_{test}$, where the task is in the state $q$, the self-transition edge is $e^{\varphi_{test}}_{q,q}$ and the targeted edge transition is $e^{\varphi_{test}}_{q,q'}$, we must determine if the transition-centric option $o_{e_1, e_2}$ matches the required transitions.
The \textit{Constrained} edge matching criterion ensures that every truth assignment that satisfies the outgoing edge of the option, $e_2$, also satisfies the targeted transition for the test specification $e^{\varphi_{test}}_{q,q'}$.
Similarly, every truth assignment that satisfies the self-transition edge of the option $e_1$ must also satisfy the self-transition formula $e^{\varphi_{test}}_{q,q}$. This strict requirement reduces the applicability of the learned options for satisfying novel specifications but ensures that the targeted edge is always achieved.
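In practice, the \textit{Constrained} criterion reduces to two propositional implication checks, which can be evaluated by model counting or, for small proposition sets, by direct enumeration of truth assignments. The following minimal sketch (not the released implementation; the edge formulas, encoded here as Python predicates, are purely illustrative) conveys the idea:
\begin{verbatim}
from itertools import product

def implies(f, g, props):
    # Check that every truth assignment satisfying f also satisfies g.
    for values in product([False, True], repeat=len(props)):
        sigma = dict(zip(props, values))
        if f(sigma) and not g(sigma):
            return False
    return True

def constrained_match(opt_self, opt_target, test_self, test_target, props):
    # Constrained criterion: the option's self edge must imply the test
    # self edge, and the option's outgoing edge must imply the targeted
    # edge of the test reward machine.
    return (implies(opt_self, test_self, props)
            and implies(opt_target, test_target, props))

props = ["axe", "wood"]
opt_self = lambda a: not a["axe"] and not a["wood"]   # option self edge
opt_target = lambda a: a["axe"] and not a["wood"]     # option outgoing edge
test_self = lambda a: not a["wood"]                   # test self edge
test_target = lambda a: a["axe"] and not a["wood"]    # test targeted edge
print(constrained_match(opt_self, opt_target, test_self, test_target, props))
# prints True
\end{verbatim}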
\textbf{Relaxed} For the \textit{Relaxed} edge matching criterion, the self edges $e_1$ and $e^{\varphi_{test}}_{q,q}$ must share satisfying truth assignments, so must the targeted edges $e_2$ and $e^{\varphi_{test}}_{q,q'}$. However, it allows the option to have valid truth assignments that may not satisfy the intended outward transition; yet none of those truth assignments should trigger a transition to an unrecoverable failure state $q^{\bot}$ of the reward machine. Further, all truth assignments that terminate the option must not satisfy the self-transition condition for the test specification.
The \textit{Relaxed} edge matching conditions can retrieve a greater number of eligible options.
\section{Experiments} \label{exp}
We evaluated the LTL-Transfer algorithm in the Minecraft-inspired domains\footnote{We used the version by \citet{tor-etal-aamas18} \url{https://bitbucket.org/RToroIcarte/lpopl}} commonly seen in research into compositional reinforcement learning and integration of temporal logics with reinforcement learning \cite{andreas2017modular, tor-etal-aamas18, jothimurugan2021compositional, araki_lof_icml21}.
In these domains, the task specifications comprise a set of subtasks that the agent must complete and a list of precedence constraints defining the admissible orders in which the subtasks must be executed. These specifications belong to the class of formulas that form the support of the prior distributions proposed by \citet{shah2018bayesian}.
Our experiments are aimed at evaluating the following hypotheses:
\begin{enumerate}
\item \textbf{H1: }Both the \textit{Constrained} and \textit{Relaxed} edge matching conditions should exceed LPOPL's capability to transfer to novel specifications.
Note that while LPOPL was not explicitly developed to transfer to novel specifications in a zero-shot setting, it can satisfy specifications that are a progression of one of the formulas that the agent was trained on.
\item \textbf{H2: }\textit{Relaxed} edge matching criterion will result in a greater success rate than the \textit{Constrained} criterion.
\item \textbf{H3: }It is easier to transfer learned policies for LTL formulas conforming to certain templates (Section \ref{ss:order_type}).
\item \textbf{H4: }Training with formulas conforming to certain formula templates leads to a greater success rate when transferring to all specification types.
\end{enumerate}
\subsection{Task Environment}
We implement LTL-Transfer\footnote{Code: \url{https://github.com/jasonxyliu/ltl_transfer}} within a Minecraft-inspired discrete grid-world domain \cite{andreas2017modular, tor-etal-aamas18}.
Each grid cell can be occupied by one of nine object types, or it may be vacant; note that multiple instances of an object type may occur throughout the map.
The agent can choose to move along any of the four cardinal directions, and the outcome of these actions is deterministic. An invalid action would result in the agent not moving at all.
A given task within this environment involves visiting a specified set of object types in an admissible order determined by ordering constraints.
The different types of ordering constraints are described in Section~\ref{ss:order_type}.
Task environment maps are similar to the $5\times5$ maps depicted in Figure \ref{fig:adverserial}; however, all the maps used for evaluation were $19\times19$.
\subsection{Specification Types}
\label{ss:order_type}
We considered the following three types of ordering constraints for a comprehensive evaluation of transferring learned policies across different LTL specifications. Each constraint is defined on a pair of propositions $a$ and $b$, and without loss of generality, we assume that $a$ should precede $b$. The three types of constraints are as follows (a trace that distinguishes them is discussed after the list):
\begin{enumerate}
\item \textbf{Hard:} Hard orders occur when $b$ must never be true before $a$. In LTL, this property can be expressed through the formula $\neg b ~\mathbf{U} ~a$.
\item \textbf{Soft:} Soft orders allow $b$ to occur before $a$ as long as $b$ happens at least once after $a$ holds for the first time. Soft orders are expressed in LTL through the formula $\mathbf{F}(a ~\wedge~ \mathbf{F} b)$.
\item \textbf{Strictly Soft:} Strictly soft ordering constraints are similar to soft orders; however, $b$ must be true strictly after $a$ first holds. Thus $a$ and $b$ holding simultaneously would not satisfy a strictly soft order.
Strictly soft orders are expressed in LTL through the formula $\mathbf{F}(a ~\wedge~ \mathbf{X}\mathbf{F} b)$.
\end{enumerate}
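To illustrate the difference between the three templates, consider a trace in which $a$ and $b$ first become true simultaneously at some time step, and $b$ never holds again afterwards. The hard order $\neg b~\mathbf{U}~a$ is satisfied, since $b$ never holds strictly before $a$; the soft order $\mathbf{F}(a \wedge \mathbf{F} b)$ is also satisfied, because $\mathbf{F}$ includes the current time step; but the strictly soft order $\mathbf{F}(a \wedge \mathbf{X}\mathbf{F} b)$ is violated, since it requires $b$ to hold at a step strictly later than one where $a$ holds.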
We sampled five training sets: \textit{hard}, \textit{soft}, \textit{strictly soft}, \textit{no-orders}, and \textit{mixed}; with 50 formulas each that represent different specification types.
The sub-tasks to be completed and the ordering constraints were sampled from the priors proposed by \citet{shah2018bayesian}. All ordering constraints within the \textit{hard}, \textit{soft}, and \textit{strictly soft} training sets were expressed through the respective templates described here. There were no ordering constraints to be satisfied in the \textit{no-orders} set.
In the \textit{mixed} training set, each binary precedence constraint was expressed as one of the three ordering types described in Section~\ref{ss:order_type}.
In addition to the training set, we sampled a test set of 100 formulas for each set type. This mimics the real-world scenario where the agent would train on a few specifications but might be expected to satisfy a wide array of specifications during deployment.
\subsection{Experiment configurations}
For each experimental run, we specified the training set type and size and the test set type. All experiments were conducted on four different grid world maps. The evaluation metrics include the success rate on each of the test set specifications. We logged the reason for any failed run.
The precomputations for compiling the set of edge-centric options were computed on a high-performance computing (HPC) cluster hosted by our university. As the compilation of state-centric options into transition-centric options allows for large-scale parallelism with no interdependency, we were able to share the workload among a large number of CPU cores.
\section{Results and Discussion}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_2.jpg}
\caption{}
\label{fig:lpopl_comp}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_3a.jpg}
\caption{}
\label{fig:rigid}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_3b.jpg}
\caption{}
\label{fig:relaxed}
\end{subfigure}
\caption{Figure \ref{fig:lpopl_comp} depicts the success rate on the \textit{mixed} test set after training on the \textit{mixed} training set of various sizes for the LPOPL baseline and for the two edge-matching criteria. Figure \ref{fig:rigid} depicts the success rate of the agent trained on \textit{mixed} training sets of various sizes using LTL-Transfer with the \textit{Constrained} edge-matching criterion when transferring to test sets of various specifications types. Figure \ref{fig:relaxed} depicts the success rates with the \textit{Relaxed} edge-matching criterion. Note that the error bars depict the 95\% credible interval if the successful transfer was modeled as a Bernoulli distribution.}
\label{fig:1}
\end{figure}
\textbf{Comparison with LPOPL}
LPOPL's use of progressions and the multi-task learning framework allow it to handle tasks that lie within the progression set of the training formulas. To compare the performance of LTL-Transfer and LPOPL, we trained each of them on \textit{mixed} training sets of varying sizes and evaluated their success rate on \textit{mixed} test set. Figure \ref{fig:lpopl_comp} depicts the success rate of LTL-Transfer (orange and blue lines) and LPOPL (green line). The error bars represent the 95\% credible interval if the success rate was modeled as the parameter of a Bernoulli distribution with a conjugate beta prior. Note that LTL-Transfer exceeds the performance of LPOPL in zero-shot transfer to novel specifications using both the \textit{Constrained} and \textit{Relaxed} edge matching criteria (Section \ref{ss:match}) thus supporting \textbf{H1}. By training on 50 specifications of the \textit{mixed} type, LTL-Transfer with the \textit{Relaxed} edge matching criterion can complete more than 90\% of unseen task specifications.
\textbf{Effect of edge matching criterion}
Next, we trained our agent on mixed specification types of varying sizes and used LTL-Transfer to transfer the learned policies to complete the specifications in all five test sets. The success rates with the \textit{Constrained} edge matching criterion are depicted in Figure \ref{fig:rigid}, while those for the \textit{Relaxed} edge matching criterion are depicted in Figure \ref{fig:relaxed}.
We note that the \textit{Relaxed} edge matching criterion is capable of successfully transferring to a larger number of novel specifications across all specification types, thus supporting \textbf{H2}.
\textbf{Relative difficulty of specification type} Figure \ref{fig:rigid} indicates that the different specifications are equally difficult to transfer learned policies to when using the \textit{Constrained} edge matching criterion.
However, Figure \ref{fig:relaxed} indicates that with the \textit{Relaxed} edge matching criterion, LTL-Transfer is capable of transferring to novel specifications with Soft or Strictly Soft orders after training on very few specifications.
It also indicates that specifications with Hard orders are the most difficult to transfer to. Therefore \textbf{H3} is supported only for the \textit{Relaxed} edge matching criterion, and not for the \textit{Constrained} criterion.
\textbf{Transferring across specification type} Finally, we evaluate whether certain specification types are more capable of transferring to all specification types by training our agent on different specification types and attempting to transfer those policies to other specification types. Figure \ref{fig:rigid_heatmap} depicts the heatmap of success rates obtained by training the agent on 50 specifications of the type indicated by the row and transferring it to the test set of specification types indicated by the column while using the \textit{Constrained} edge matching criterion. Similarly, Figure \ref{fig:relaxed_heatmap} depicts the success rates using the \textit{Relaxed} edge matching criterion. Note that no single specification type proved to be the best training set, thus providing evidence against \textbf{H4}.
Note that in all of our runs, the agent never violated a constraint leading to an unrecoverable failure, which is crucial in safety-critical applications. The causes for failure to transfer included cases where there was no feasible path to an accepting state with the set of options, or the agent attempted all available options without progressing the task.
\begin{figure}
\centering
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_4A.jpg}
\caption{}
\label{fig:rigid_heatmap}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_4B.jpg}
\caption{}
\label{fig:relaxed_heatmap}
\end{subfigure}
\caption{Figure \ref{fig:rigid_heatmap} depicts the heatmap of success rates with various training and test specification types with the \textit{Constrained} edge matching criterion. Similarly, Figure \ref{fig:relaxed_heatmap} depicts the heatmap with the \textit{Relaxed} edge matching criterion.}
\label{fig:l}
\end{figure}
\section{Robot Demonstrations}
We demonstrate LTL-Transfer on Spot \cite{spot}, a quadruped mobile manipulator, in a household environment where the robot can fetch and deliver objects while navigating through the space. The robot was trained on 2 LTL tasks $\neg desk_a ~\mathbf{U} ~book \wedge \mathbf{F} desk_a$ and $\neg desk_b ~\mathbf{U} ~juice \wedge \mathbf{F} desk_b$, then transferred the learned skills to 8 combinations of soft-ordering tasks $\mathbf{F} (obj_* \wedge \mathbf{F} (desk_* \wedge \mathbf{F}(obj_* \wedge \mathbf{F} desk_*)))$ in a zero-shot fashion.
For the tasks that LTL-Transfer cannot transfer (e.g. $\neg desk_a ~\mathbf{U} ~book ~\wedge~ \neg juice~ \mathbf{U}~ desk_a ~\wedge~ \neg desk_b ~\mathbf{U}~ juice ~\wedge~ \mathbf{F} desk_b$), the robot, as expected, does not start execution, thus avoiding violation of any constraints\footnote{Video: \url{https://youtu.be/FrY7CWgNMBk}}.
Please see the supplementary material for more details about this environment and the training and test tasks.
\section{Conclusion}
We introduced LTL-Transfer, a novel algorithm that leverages the compositionality of linear temporal logic to solve a wide variety of novel, unseen LTL specifications. It segments policies from training tasks into portable, task-agnostic transition-centric options that can be reused for any task. We demonstrate that LTL-Transfer can solve over 90\% of unseen task specifications in our Minecraft-inspired domains while training on only 50 specifications. We further demonstrated that LTL-Transfer never violated any safety constraints and aborted task execution when no feasible solution was found.
LTL-Transfer enables the possibility of maximally transferring the learned policies of the robot to new tasks. We envision further developing LTL-Transfer to incorporate long-term planning and intra-option policy updates to generate not just satisfying but optimal solutions to novel tasks.
\begin{ack}
This work was supported by NSF under grant awards DGE-2040433 and CNS-2038897, and ONR under grant awards N00014-17-1-2699 and N00014-21-1-2584.
\end{ack}
\bibliographystyle{plainnat}
\setcitestyle{square,numbers,comma}
\newpage
\section{Additional Numerical Experiments}
\label{app:expe}
\subsection{Extra Synthetic graphs}
Figure \ref{fig:synthe} summarizes the results of \emph{Muffliato} according to the shortest path length. However, other characteristics of the topology can play a role in the privacy leakage. Thus, we show in Figure~\ref{fig:diffusion} the graph representation of each synthetic graph we considered.
\begin{figure}
\centering\vspace{-9pt}
\hspace{-1em}
\subfigure{
\includegraphics[width=0.22\linewidth]{fig/diff_hyper.pdf}
}
\hspace{-1em}
\subfigure{
\includegraphics[width=0.22\linewidth]{fig/diff_binom.pdf}
}
\subfigure{
\includegraphics[width=0.22\linewidth]{fig/diff_geo.pdf}
}
\subfigure{
\includegraphics[width=0.22\linewidth]{fig/diff_grid.pdf}
}
\caption{Level of the privacy loss for each node (color) with respect to a fixed node in the graph. These graphs correspond to those used in Figure \ref{fig:synthe}: from left to right, exponential graph, Erdos-Renyi graph, geometric random graph and grid.}
\label{fig:diffusion}
\end{figure}
We also report in Figure~\ref{fig:nevo} how privacy guarantees improve as $n$ increases for the exponential graph: the distance between nodes increases, and so does the number of nodes with which the contribution of a specific node is mixed. This is especially significant for pairs of nodes that are not direct neighbors but lie at a short distance from each other.
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{fig/nevo.pdf}
\caption{Privacy loss for the exponential graph with respect to the number of nodes $n$ (following powers of $2$).}
\label{fig:nevo}
\end{figure}
\subsection{Proof of Fixed Privacy Loss for Exponential Graphs}
For an exponential graph, the pairwise privacy loss is fully determined by the shortest path length, i.e., $f(u,v) = g(d(u,v))$ where $d: \mathcal{V}\times\mathcal{V} \rightarrow \mathbb{N}$ is the function that returns the length of the shortest path between $u$ and $v$.
This result is a consequence of the invariance of the hypercube under vertex permutations. For the hypercube with $2^m$ vertices, each vertex can be represented by an $m$-tuple in $\set{0,1}$, and two vertices are joined by an edge if and only if their tuples agree in all coordinates but one. Let us now fix two pairs of vertices $(u,v)$ and $(u', v')$ with the same distance between them. To prove that their privacy loss is the same, it is sufficient to exhibit a graph isomorphism $\Phi$ that sends $(u,v)$ to $(u', v')$.
By construction, $d(u,v)$ corresponds to the number of coordinates in which the tuples of $u$ and $v$ differ, and the same holds for $(u',v')$. The set of equal coordinates $Fix(u,v)$ is thus of the same size $m-d(u,v)$ as $Fix(u', v')$. Hence we can construct a bijection $b$ of the coordinates that maps the set of fixed coordinates onto its counterpart:
\begin{equation*}
b(Fix(u,v)) = Fix(u',v') \quad \text{and} \quad b(\set{1, 2, \dots, m}\setminus Fix(u,v) ) = \set{1, 2, \dots, m}\setminus Fix(u',v')\,.
\end{equation*}
Finally, noting that $0$ and $1$ play symmetric roles, we match each coordinate to the value prescribed by the target pair. We thus define our isomorphism coordinate-wise as $\Phi(w) = (\Phi_1(w), \dots, \Phi_m(w))$ with $\Phi_i(w) = s(w_{b^{-1}(i)})$, where $s$ is the identity function if $u_{b^{-1}(i)} = u'_i$ and the swap function otherwise. This map is clearly an isomorphism: by construction it is a bijection, and two images are adjacent if and only if the original vertices differ in exactly one coordinate. Since $\Phi(u) = u'$ and $\Phi(v) = v'$, the privacy loss is the same for the two pairs of vertices.
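To make the construction concrete, the following minimal Python sketch (an illustration, not part of our experimental code) builds the coordinate bijection $b$ and the map $\Phi$ for bit-tuple vertices, and checks that it preserves adjacency and maps $(u,v)$ to $(u',v')$.
\begin{verbatim}
import itertools

def hypercube_isomorphism(u, v, u2, v2):
    """Build a vertex map Phi of the m-dimensional hypercube with Phi(u) = u2 and
    Phi(v) = v2, assuming the Hamming distances d(u, v) and d(u2, v2) are equal."""
    m = len(u)
    fix  = [i for i in range(m) if u[i] == v[i]]       # Fix(u, v)
    move = [i for i in range(m) if u[i] != v[i]]
    fix2  = [i for i in range(m) if u2[i] == v2[i]]    # Fix(u2, v2)
    move2 = [i for i in range(m) if u2[i] != v2[i]]
    assert len(move) == len(move2), "pairs must be at the same distance"
    # bijection b on coordinates, stable on the sets of fixed coordinates
    b = dict(zip(fix, fix2)) | dict(zip(move, move2))
    def phi(w):
        # coordinate b(i) of Phi(w) is w_i, possibly flipped so that Phi(u) = u2
        out = [0] * m
        for i, j in b.items():
            out[j] = w[i] if u2[j] == u[i] else 1 - w[i]
        return tuple(out)
    return phi

# toy check on the 4-dimensional hypercube
u, v   = (0, 0, 0, 0), (1, 1, 0, 0)
u2, v2 = (1, 0, 1, 0), (1, 1, 1, 1)   # same Hamming distance 2
phi = hypercube_isomorphism(u, v, u2, v2)
assert phi(u) == u2 and phi(v) == v2
for w, w2 in itertools.combinations(itertools.product([0, 1], repeat=4), 2):
    d_orig = sum(a != c for a, c in zip(w, w2))
    d_img  = sum(a != c for a, c in zip(phi(w), phi(w2)))
    assert (d_orig == 1) == (d_img == 1)   # adjacency is preserved
\end{verbatim}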
\subsection{Random Geometric Graphs}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{fig/geoeuclideandistance.png}
\caption{Privacy towards all the nodes as function of the Euclidean distance in a random geometric graph. We see a high level of correlation between the Euclidean distance and the privacy loss.}
\label{fig:geo}
\end{figure}
Geometric graphs are examples of possible use cases of Pairwise Network Differential Privacy. Constructing edges when nodes are at a distance below a given threshold naturally models short-range wireless communications such as Bluetooth. In this situation, the Euclidean distance between nodes is a convenient indicator for setting the privacy loss. Indeed, it is a parameter that we can measure, and it can match the users' expectations in terms of privacy loss. For instance, if direct neighbors in the graph correspond to people within $5$ meters of the sender, some information is bound to be available to them independently of what may be revealed by the communication itself: sensitive attributes such as gender, age, or overall physical fitness are leaked simply from physical proximity. However, the user might have stronger privacy expectations for people far away.
Hence, having privacy guarantees as a function of the Euclidean distance can be particularly interesting.
Our experiments show that the privacy loss is extremely well correlated with the Euclidean distance, as represented in Figure~\ref{fig:geo}. It is thus possible to design algorithms where one could impose Pairwise Network DP for a function $f(u,v) = g(\NRM{z_u - z_v})$, where $g$ is a non-increasing function and $z_u$ and $z_v$ are the geolocations of nodes $u$ and $v$.
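Such a geometric communication graph can be generated as in the following sketch (using networkx for illustration; the radius value is arbitrary), from which the gossip matrix and the pairwise privacy losses can then be computed and correlated with the Euclidean distances, as in Figure~\ref{fig:geo}.
\begin{verbatim}
import numpy as np
import networkx as nx

n, radius = 1024, 0.06
G = nx.random_geometric_graph(n, radius, seed=0)      # nodes uniform in the unit square
pos = np.array([G.nodes[v]["pos"] for v in G.nodes])  # geolocations z_v
# pairwise Euclidean distances, to be correlated with the privacy losses f(u, v)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
print(nx.is_connected(G), dist.shape)
\end{verbatim}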
\subsection{Facebook Ego Graphs}
We report in Figure~\ref{fig:my_label} the results on the other nine graphs of the Facebook Ego dataset, following the same methodology and scale. Across these graphs, the dependence of the privacy losses on the visible communities is consistent across datasets, and becomes more consistent as the number of nodes increases.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/allego.pdf}
\caption{Privacy loss on the 9 other Facebook Ego graphs, following the same methodology as in Figure~\ref{fig:fb}.}
\label{fig:my_label}
\end{figure}
\subsection{Logistic Regression on Houses Dataset}
We report in Table~\ref{table:par} the parameters used in the experiments of Figure~\ref{fig:gd}.
\begin{table}[t]
\caption{Parameters for the logistic regression}
\label{table:par}
\centering
\vspace*{3pt}
\begin{tabular}{lr}
\toprule
\textbf{Parameters} & Value \\
\midrule
\# of trials & $10$\\
Step-size & $0.7$\\
\# of nodes & $2000$ or $4000$\\
probability of edges $q$ & $\log(n)/n$\\
score & Mean accuracy\\
\bottomrule
\end{tabular}
\end{table}
\section{Private Randomized Gossip}
\edwige{Motivation: flexible, faster, cheaper, fully decentralized
Difficulties: randomization, hence worst cases \mathieu{or should this be stated in probability instead?}
Algorithm
Utility
Privacy guarantee
}
\mathieu{Should what follows be written in continuous time instead, or not? For the clarity of the whole, I prefer not to accelerate, but we can (and it is free).}
Randomized gossip consists of adding randomness to communications: for non-negative weights $(p_{\{v,w\}})_{{\{v,w\}}\in\mathcal{E}}$ such that $\sum_{{\{v,w\}}\in\mathcal{E}}p_{\{v,w\}}=1$, at iterations $t=0,1,\ldots$
an edge ${\{v_t,w_t\}}\in\mathcal{E}$ is chosen with probability $p_{\{v_t,w_t\}}$ (independently from the past), and a local averaging is performed \citep{boyd2006randomized}. Denoting $W_{\{v,w\}}=I_n-\frac{1}{2}(e_v-e_w)(e_v-e_w)^\top$, this is equivalent to $x^{t+1}=W_{\{v_t,w_t\}} x^t$.
Randomized gossip \citep{boyd2006gossip} is motivated by the flexibility its asynchronous implementation offers, and by its fast rate of convergence in terms of the number of pairwise communications in the whole graph. We hereby prove that it is also more private than synchronous gossip, by a factor proportional to node degrees. As a consequence, randomized gossip on the complete graph emulates the guarantees of a trusted aggregator, up to logarithmic factors.
We would like to study the utility and DP guarantees of randomized gossip with noisy inputs (\textsc{DP-Randomized Gossip}), namely, the algorithm defined through:
\begin{equation*}
x^{t+1}=W_{\{v_t,w_t\}} x^t\,,\quad x_v^{0}=x_v+\eta_v\,,
\end{equation*}
where $(\eta_v)_v$ are \emph{i.i.d.}~centered gaussian of covariance $\sigma^2I_d$.
For $t\geq 0$, we write \begin{equation*}
W(t)=W_{\set{u_{t-1},v_{t-1}}}\cdots W_{\set{u_0,v_0}}\,,
\end{equation*}
so that $x^t=W(t)x^{0}$.
We also define $\Bar W = \esp{W_{\{v_t,w_t\}}}$ (time independent, since edges are drawn in an \emph{i.i.d.}~fashion). Importantly, notice that $I-\Bar W$ is the Laplacian of graph $G$, with weights $p_{\{u,v\}}$ at edge ${\{u,v\}}$, whose second-smallest eigenvalue is denoted $\lambda_2(p)$.
\mathieu{fix the conflicting notation between the $p$ of the Erdös-Rényi graph and the activation intensities}
\begin{theorem}The iterates $(x^t)_t$ of \textsc{DP-Randomized Gossip} verify, for any $t\geq0$:
\begin{equation*}
\frac{1}{n}\sum_{v\in\mathcal{V}}\esp{\NRM{x^t_v-\Bar{x}}^2}\leq\left( \frac{1}{n}\sum_{v\in\mathcal{V}}\esp{\NRM{x^0_v-\Bar{x}}^2} +\sigma^2\right )e^{-t\lambda_2(p)} + \frac{\sigma^2}{n}\,.
\end{equation*}
\end{theorem}
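As a simple sanity check of this rate, the following minimal Python sketch (the complete graph and all parameter values are arbitrary illustrations, not the setting of our experiments) simulates \textsc{DP-Randomized Gossip} and reports the averaging error, which should approach $\sigma^2/n$.
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, sigma, T = 64, 1.0, 2000
edges = list(combinations(range(n), 2))        # complete graph (illustration only)
probs = np.full(len(edges), 1.0 / len(edges))  # uniform activation intensities

x_true = rng.normal(size=n)                    # private local values
x = x_true + sigma * rng.normal(size=n)        # noisy inputs of DP-Randomized Gossip
x_bar = x_true.mean()

for _ in range(T):
    v, w = edges[rng.choice(len(edges), p=probs)]   # sample an edge i.i.d.
    x[v] = x[w] = 0.5 * (x[v] + x[w])               # local pairwise averaging

# the averaging error decays at rate exp(-t * lambda_2(p)) down to ~ sigma^2 / n
print(np.mean((x - x_bar) ** 2), sigma ** 2 / n)
\end{verbatim}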
For ${\{v,w\}}\in\mathcal{E}$ and $T\geq 0$, let $\mathcal{P}_{\{v,w\}}^T=\set{t< T\,:\,{\{v_t,w_t\}}={\{v,w\}}}$. The privacy analysis of \textsc{DP-Randomized Gossip} yields the following theorem. The first equation is obtained conditionally on the sequence of edges sampled. In Equation~\ref{eq:double_random_mean}, the mean is taken over the edges sampled and the graph generated. We provide in Appendix~\mathieu{?} a version with exact probabilistic bounds.
\begin{theorem} After $T$ iterations of \textsc{DP-Randomized Gossip} and conditionally on the edges sampled, node $u\in\mathcal{V}$ is $(\varepsilon^T_{{u\to v}}(\alpha),\alpha)$-Rényi Differentially private with respect to $v$, with:
\begin{equation*}
\varepsilon^T_{{u\to v}}(\alpha)\leq \frac{\alpha}{2\sigma^2}\sum_{w\sim v}\sum_{t\in\mathcal{P}_{\{v,w\}}^T}\frac{W(t)_{\{u,w\}}^2}{\NRM{W(t)_w}^2}\,.
\end{equation*}
When running \textsc{DP-Randomized Gossip} on an Erdös-Rényi graph with $p=c\frac{\ln(n)}{n}$ for $c>1$, generated as in Algorithm~\ref{algo:ER-dp-gossip}, and if $\sigma^2\geq \frac{\alpha(\alpha-1)}{2}$, we have:
\begin{equation}\label{eq:double_random_mean}
\esp{\varepsilon_{u\to v}^T(\alpha)}\leq \frac{\alpha p}{2\sigma^2} + (1-p) \esp{\frac{2\alpha \pi_v T}{(n-d_v)\sigma^2}}\,,
\end{equation}
where $\pi_v=\sum_{w:{\{v,w\}}\in\mathcal{E}}p_{\{v,w\}}$, and $\varepsilon_{u\to v}^T(\alpha)$ concentrates around its mean (Appendix \mathieu{?}).
\end{theorem}
For our choice of $p=c\frac{\ln(n)}{n}$ and for $p_{\{v,w\}}=1/|\mathcal{E}|$, we have $\pi_v=\frac{d_v}{|\mathcal{E}|}\approx \frac{1}{n}$, and since $d_v$ tightly concentrates around $c\ln(n)$, we have $1/(n-d_v)\sim 1/n$. Consequently, Equation~\ref{eq:double_random_mean} becomes
\begin{equation*}
\esp{\varepsilon_{u\to v}^T(\alpha)}\leq \mathcal{O}\left(\frac{\alpha \ln(n)}{2n\sigma^2}+\frac{\alpha \ln(n)T}{2n^2\sigma^2}\right)\,.
\end{equation*}
For our choice of parameters, $\lambda_2(p)$ concentrates around $1/n$, so that the stopping time $T^{\rm stop}$ is of order $n$ (up to logarithmic factors in $n$ and $\sigma^2$), leading to $\esp{\varepsilon_{u\to v}^T(\alpha)}\leq \tilde\mathcal{O}\left(\frac{\alpha}{2n\sigma^2}\right)$. As a consequence, randomized gossip on an Erdös-Rényi graph emulates the optimal trusted aggregator up to logarithmic factors.
\section{Private Gossip Averaging}
\label{sec:muff}
In this section, we analyze a generic algorithm with arbitrary time-varying communication matrices for averaging.
Then, we instantiate and discuss these results for synchronous communications with a fixed gossip matrix, communications using randomized gossip~\cite{boyd2006gossip}, and with Erdös-Rényi graphs.
\subsection{General Privacy Analysis of Gossip Averaging}
We consider gossip over time-varying graphs $(G_t)_{0\leq t\leq T}$, defined as $G_t=(\mathcal{V},\mathcal{E}_t)$, with corresponding gossip matrices $(W_t)_{0\leq t\leq T}$. The \emph{generic Muffliato}
algorithm $\mathcal{A}^T$ over $T$ iterations for averaging $x=(x_v)_{v\in \mathcal{V}}$ corresponds to an initial noise addition followed by gossip steps. Writing $W_{0:t}=W_{t-1}\ldots W_0$, the iterates of $\mathcal{A}^T$ are thus defined by:
\begin{equation}\label{eq:gossip_general}
\forall v\in\mathcal{V}, x^{0}_v=x_v+\eta_v \text{ with }\eta_v\sim\gau{\sigma^2},\quad \text{and } x^{t+1}=W_t x^t = {W_{0:t+1} }(x+\eta)\,.
\end{equation}
Note that the update rule at node $v\in\mathcal{V}$ writes as $x^{t+1}_v=\sum_{w\in\mathcal{N}_t(v)} (W_t)_{v,w}x^t_w$ where $\mathcal{N}_t(v)$ are the neighbors of $v$ in $G_t$, so for the privacy analysis, the view of a node is:
\begin{equation}
\label{eq:view}
\mathcal{O}_v\big(\mathcal{A}^T(\mathcal{D})\big)=\set{\big({W_{0:t}}(x+\eta)\big)_w\,|\quad {\{v,w\}}\in\mathcal{E}_{t}\,,\quad0\leq t\leq T-1}\cup\set{x_v}\,.
\end{equation}
\begin{theorem}\label{thm:privacy_general}
Let $T\geq 1$ and denote by $\mathcal{P}^T_{\{v,w\}}=\set{s<T\,:\,{\{v,w\}}\in\mathcal{E}_s}$ the set of time-steps with communication along edge~${\{v,w\}}$. Under Assumption~\ref{hyp:sensitivity}, $\mathcal{A}^T$ is $(\alpha, f)$-PNDP with:
\begin{equation}
\label{eq:pairwise-ndp-general}
f(u,v) = \frac{\alpha \Delta^2}{2\sigma^2}\sum_{w\in \mathcal{V}}\sum_{t\in\mathcal{P}^T_{\{v,w\}}}\frac{({W_{0:t}})_{u,w}^2}{\NRM{({W_{0:t}})_w}^2}\,.
\end{equation}
\end{theorem}
This theorem, proved in Appendix~\ref{app:general_formula}, gives a tight expression for the privacy loss between every pair of nodes, which can easily be computed numerically (see Section~\ref{sec:expe}). Since distant nodes correspond to small entries in $W_{0:t}$, Equation~\ref{eq:pairwise-ndp-general} suggests that they reveal less to each other. We will characterize this precisely for the case of a fixed communication graph in the next subsection.
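For illustration, a minimal NumPy sketch evaluating Equation~\ref{eq:pairwise-ndp-general} from a sequence of gossip matrices is given below (not the code of our experiments; names and conventions are illustrative, and the adjacency matrices are assumed to have zero diagonal).
\begin{verbatim}
import numpy as np

def pairwise_privacy_loss(Ws, adjacencies, alpha, sigma2, delta2=1.0):
    """f[u, v]: privacy loss of node u towards observer v after gossiping with the
    matrices Ws = [W_0, ..., W_{T-1}]; adjacencies[t] is the 0/1 adjacency of G_t."""
    n = Ws[0].shape[0]
    f = np.zeros((n, n))
    W_prod = np.eye(n)                                  # W_{0:0} = Id
    for t, W_t in enumerate(Ws):
        # at step t, node v observes (W_{0:t}(x + eta))_w for each neighbor w in G_t
        row_norms = np.sum(W_prod ** 2, axis=1)         # ||(W_{0:t})_w||^2
        for v in range(n):
            for w in np.nonzero(adjacencies[t][v])[0]:
                f[:, v] += W_prod[w, :] ** 2 / row_norms[w]
        W_prod = W_t @ W_prod                           # W_{0:t+1} = W_t W_{0:t}
    return alpha * delta2 / (2.0 * sigma2) * f
\end{verbatim}
For a fixed communication graph, one simply passes Ws = [W] * T together with the constant adjacency matrix of $G$.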
Another way to interpret the result of Theorem~\ref{thm:privacy_general} is to derive the corresponding mean privacy loss:
\[
\overline{\varepsilon}_v = \frac{\alpha \Delta^2 T_v}{2n\sigma^2}\,,
\]
where $T_v$ is the total number of communications node $v$ was involved in up to time $T$. Thus, in comparison with LDP, the mean privacy loss towards $v$ is $n / T_v$ times smaller. In other words, a node learns much less than in LDP as long as it communicates $o(n)$ times.
\begin{figure*}[t]
\begin{minipage}[t]{0.48\linewidth}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwIn{local values $(x_v)_{v\in\mathcal{V}}$ to average, gossip matrix $W$ on a graph $G$, in $T$ iterations, noise variance $\sigma^2$}
$\gamma \leftarrow 2\frac{1-\sqrt{\lambda_W(1-\frac{\lambda_W}{4})}}{(1-\lambda_W/2)^2}$\;
\vspace{.4em}
\For{all nodes $v$ in parallel}{
$x_v^{0} \leftarrow x_v + \eta_v$ where $\eta_v \sim \gau{\sigma^2}$
}
\For{$t = 0$ to $T-1$}{
\For{all nodes $v$ in parallel}{
\For{all neighbors $w$ defined by $W$}{
Send $x^t_v$, receive $x^t_w$\;
}
$x^{t+1}_v \leftarrow (1 - \gamma) x^{t-1}_v + \gamma \sum_{w \in \mathcal{N}_v } W_{v,w} x^t_w$\;
}
}
\caption{\textsc{Muffliato}}
\label{algo:dp-gossip}
\vspace{4.5pt}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwIn{local values $(x_v)_{v\in\mathcal{V}}$ to average, activation intensities $(p_{\{v,w\}})_{{\{v,w\}}\in\mathcal{E}}$, in $T$ iterations, noise variance $\sigma^2$}
\For{all nodes $v$ in parallel}{
$x_v^{0} \leftarrow x_v + \eta_v$ where $\eta_v \sim \gau{\sigma^2}$
}
\For{$t = 0$ to $T-1$}{
Sample ${\{v_t,w_t\}}\in\mathcal{E}$ with probability $p_{\{v_t,w_t\}}$\;
$v_t$ and $w_t$ exchange $x_{v_t}^t$ and $x_{w_t}^t$\;
Local averaging: $x_{v_t}^{t+1}=x_{w_t}^{t+1}=\frac{x_{v_t}^{t}+x_{w_t}^{t}}{2}$
For $v\in\mathcal{V}\setminus{\{v_t,w_t\}}$, $x_v^{t+1}=x_v^t$
}
\caption{\textsc{Randomized Muffliato}}
\label{algo:dp-randomized}
\vspace*{1.5pt}
\end{algorithm}
\end{minipage}
\end{figure*}
\subsection{Private Synchronous \emph{Muffliato}\label{subsec:muff}}
We now consider \emph{Muffliato} over a fixed graph (Algorithm~\ref{algo:dp-gossip}) and start by analyzing its utility. The utility decomposes into an averaging error term vanishing exponentially fast and a \emph{bias} term due to the noise. General convergence rates are given in
Appendix~\ref{sec:muffl_app},
from which we extract the following~result.
\begin{theorem}[Utility analysis]\label{thm:utility-sync}Let $\lambda_W$ be the spectral gap of $W$.
Muffliato (Algorithm~\ref{algo:dp-gossip}) verifies:
\begin{equation*}
\frac{1}{2n}\sum_{v\in\mathcal{V}}\esp{\NRM{ x^{T^{\rm stop}}_v-\Bar{x}}^2}\leq\frac{3\sigma^2}{n}\,,\quad \text{for }~ T^{\rm stop}\leq \frac{1}{\sqrt{\lambda_W}}\ln\bigg(\frac{n}{\sigma^2}\max\Big(\sigma^2,\frac{1}{n}\sum_{v\in\mathcal{V}}\NRM{x_v-\Bar{x}}^2\Big)\bigg)\,.
\end{equation*}
\end{theorem}
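For concreteness, here is a minimal NumPy sketch of Algorithm~\ref{algo:dp-gossip} (an illustration, not our experimental code); it takes $x^{-1}=x^{0}$ in the accelerated recursion and uses one common convention for the spectral gap, both of which are implementation choices not spelled out in the pseudocode.
\begin{verbatim}
import numpy as np

def muffliato(x, W, T, sigma, rng=np.random.default_rng()):
    """Noise injection followed by T accelerated gossip steps (W symmetric)."""
    eigvals = np.linalg.eigvalsh(W)                 # ascending eigenvalues of W
    lam = 1.0 - eigvals[-2]                         # spectral gap (one common convention)
    gamma = 2 * (1 - np.sqrt(lam * (1 - lam / 4))) / (1 - lam / 2) ** 2
    x_prev = x_cur = x + sigma * rng.normal(size=x.shape)   # x^0 = x + eta, x^{-1} := x^0
    for _ in range(T):
        x_prev, x_cur = x_cur, (1 - gamma) * x_prev + gamma * (W @ x_cur)
    return x_cur
\end{verbatim}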
For the privacy guarantees, Theorem~\ref{thm:privacy_general} still holds as accelerated gossip can be seen as a simple post-processing of the non-accelerated version. We can derive a more explicit formula.
\begin{cor}
\label{cor:sync_simplified}
Algorithm~\ref{algo:dp-gossip} satisfies $(\alpha,\varepsilon^T_{{u\to v}}(\alpha))$-PNDP for node $u$ with respect to $v$, with:
\begin{equation*}
\varepsilon^T_{{u\to v}}(\alpha)\leq \frac{\alpha\Delta^2 n}{2\sigma^2}\max_{{\{v,w\}} \in \mathcal{E}}W_{v,w}^{-2}\sum_{t=1}^T\proba{X^t=v|X^0=u}^2\,,
\end{equation*}
where $(X^t)_t$ is the random walk on graph $G$, with transitions $W$.
\end{cor}
This result allows us to directly relate the privacy loss from $u$ to $v$ to the probability that the random walk on $G$ with transition probabilities given by the gossip matrix $W$ goes from $u$ to $v$ in a certain number of steps. It thus captures a notion of distance between nodes in the graph. We also report the utility under fixed mean privacy loss $\overline{\varepsilon} \leq \varepsilon$ in Table~\ref{table:sync} for various graphs, where one can see a utility-privacy trade-off improvement of $n \sqrt{\lambda_W}/d$, where $d$ is the maximum degree, compared to LDP. Using expanders closes the gap with a trusted aggregator up to constant and logarithmic terms. Remarkably, we see that topologies that make gossip averaging efficient (i.e., with large $\sqrt{\lambda_W}/d$), such as exponential graphs or hypercubes \cite{ying2021exponential}, are also the ones that achieve optimal privacy amplification (up to logarithmic factors). In other words, \emph{privacy, utility and scalability are compatible}.
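The bound of Corollary~\ref{cor:sync_simplified} can be evaluated directly from powers of $W$, as in the following sketch (an illustration only, with $\Delta=1$ and assuming $W$ is the transition matrix of the walk).
\begin{verbatim}
import numpy as np

def corollary_bound(W, T, alpha, sigma2, u, v):
    """Upper bound on the privacy loss of u towards v, using
    P(X^t = v | X^0 = u) = (W^t)_{u, v} for the random walk with transitions W."""
    n = W.shape[0]
    w_min = W[(W > 0) & ~np.eye(n, dtype=bool)].min()   # min over edges of W_{v,w}
    Wt, total = np.eye(n), 0.0
    for _ in range(T):
        Wt = Wt @ W                                      # W^t for t = 1, ..., T
        total += Wt[u, v] ** 2
    return alpha * n / (2 * sigma2 * w_min ** 2) * total
\end{verbatim}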
\begin{table}[t]
\caption{Utility of \emph{Muffliato} for several topologies under the constraint $ \overline{\varepsilon} \leq \varepsilon$ for the classic gossip matrix where $W_{v,w}=\min(1/d_v,1/d_w)$ and $d_v$ is the degree of node $v$. Constant and logarithmic factors are hidden. Recall that utility is $\alpha \Delta^2/ n \varepsilon$ for LDP and $\alpha\Delta^2/ n^2 \varepsilon$ for a trusted aggregator.}
\label{table:sync}
\centering
\vspace*{3pt}
\begin{tabular}{lccccc}
\toprule
\textbf{Graph} & Arbitrary & Expander & D-Torus & Complete & Ring \\
\midrule
Algorithm~\ref{algo:dp-gossip} & $\frac{\alpha\Delta^2 d}{n^2 \varepsilon \sqrt{\lambda_W}}$
& $\frac{\alpha\Delta^2}{n^2 \varepsilon}$
& $\frac{\alpha\Delta^2 D}{n^{2-1/D} \varepsilon}$
& $\frac{\alpha\Delta^2}{ n \varepsilon}$
& $\frac{\alpha\Delta^2}{ n \varepsilon}$\\
Algorithm~\ref{algo:dp-randomized} &
$\frac{\alpha\Delta^2}{n^2 \varepsilon \lambda_W}$
& $\frac{\alpha\Delta^2}{n^2 \varepsilon}$
& $\frac{\alpha\Delta^2}{n^{2-2/D} \varepsilon}$
& $\frac{\alpha\Delta^2}{ n^2 \varepsilon}$
& $\frac{\alpha\Delta^2}{ n \varepsilon}$\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Private Randomized \emph{Muffliato}}
Synchronous protocols require global coordination between nodes, which can be costly or even impossible. On the contrary, asynchronous protocols only require separate activations of edges: they are thus more resilient to straggler nodes and faster in practice. In asynchronous gossip, at a given time-step a single edge ${\{u,v\}}$ is activated independently from the past with probability $p_{{\{u,v\}}}$, as described by \citet{boyd2006gossip}. In our setting, randomized \emph{Muffliato} (Algorithm~\ref{algo:dp-randomized}) corresponds to instantiating our general analysis with $W_t = W_{\{v_t,w_t\}}=I_n-(e_{v_t}-e_{w_t})(e_{v_t}-e_{w_t})^\top/2$ if ${\{v_t,w_t\}}$ is sampled at time $t$.
The utility analysis is similar to the synchronous case.
\begin{theorem}[Utility analysis]\label{thm:utility-randomized} Let $\lambda(p)$ be the spectral gap of graph $G$ with weights $(p_{\{v,w\}})_{{\{v,w\}}\in\mathcal{E}}$. Randomized \emph{Muffliato} (Algorithm~\ref{algo:dp-randomized}) verifies
\begin{equation*}
\frac{1}{2n}\sum_{v\in\mathcal{V}}\esp{\NRM{ x^{T^{\rm stop}}_v-\Bar{x}}^2}\leq\frac{2\sigma^2}{n}\,,\quad \text{for }~ T^{\rm stop}\leq \frac{1}{\lambda(p)}\ln\bigg(\frac{n}{\sigma^2}\max\Big(\sigma^2,\frac{1}{n}\sum_{v\in\mathcal{V}}\NRM{x^{0}_v-\Bar{x}}^2\Big)\bigg)\,.
\end{equation*}
\end{theorem}
To compare with synchronous gossip (Algorithm~\ref{algo:dp-gossip}), we note that activation probabilities can be derived from a gossip matrix $W$ by taking $p_{{\{u,v\}}} = 2 W_{{\{u,v\}}}/n$, implying that $\lambda(p)=2\lambda_W / n$ and thus requiring $n$ times more iterations to reach the same utility as applying the matrix $W$ synchronously. However, for a given time horizon $T$ and node $v$, the number of communications involving $v$ can be bounded with high probability by a constant times $T/n$, whereas Algorithm~\ref{algo:dp-gossip} requires $d_vT$ communications. Consequently, as reported in Table~\ref{table:sync}, for a fixed mean privacy loss $\overline{\varepsilon}_v$, Algorithm~\ref{algo:dp-randomized} has the same utility as Algorithm~\ref{algo:dp-gossip}, up to two differences: the degree factor $d_v$ is removed, while $\sqrt{\lambda_W}$ degrades to $\lambda_W$ as we do not accelerate randomized gossip.\footnote{One could also accelerate randomized gossip as described by \citet{even2021continuized}, obtaining $\sqrt{\lambda(p)/|\mathcal{E}|}$ instead of $\lambda(p)$ in all our results.} Randomized gossip can thus achieve an optimal privacy-utility trade-off with large-degree graphs, as long as the spectral gap is small enough.
\subsection{Erdös-Rényi Graphs}
\label{subsec:random}
So far, the graph was considered to be public and the amplification relied only on the secrecy of the messages. In practice, the graph may be sampled randomly and nodes only need to know their direct neighbors. We show that we can leverage this through the weak convexity of Rényi DP to amplify privacy between non-neighboring nodes. We focus on Erdös-Rényi graphs, which can be built without central coordination by picking each edge independently with the same probability $q$. For $q=c\ln(n)/n$ where $c>1$, Erdös-Rényi graphs are good expanders with node degrees $d_v = \mathcal{O}(\log n)$ and $\lambda_W$ concentrating around 1 \citep{ERgaps2019}, and we obtain the following privacy guarantee.
\begin{theorem}[\emph{Muffliato} on a random graph]\label{thm:privacy-random-graph}
Let $\alpha>1$, $T\geq 0$, $\sigma^2\geq \frac{\Delta^2\alpha(\alpha-1)}{2}$ and $q=c\frac{\ln(n)}{n}$ for $c>1$. Let $u,v\in\mathcal{V}$ be distinct nodes.
After running Algorithm~\ref{algo:dp-gossip} with these parameters, node $u$ is ($\alpha,\varepsilon_{u\to v}^T(\alpha))$-PNDP with respect to $v$, with:
\begin{equation*}
\varepsilon_{u\to v}^T(\alpha)\leq \left\{ \begin{aligned} &\quad\quad\frac{\alpha\Delta^2}{2\sigma^2}\quad\quad \text{ with probability }q\,,\\
&\frac{\alpha\Delta^2}{\sigma^2} \frac{Td_v}{n-d_v} \quad \text{ with probability }1 -q\,.\end{aligned}\right.
\end{equation*}
\end{theorem}
This result shows that with probability $q$, $u$ and $v$ are neighbors and there is no amplification compared to LDP. The rest of the time, with probability $1-q$, the privacy matches that of a trusted aggregator up to a degree factor $d_v = \mathcal{O}(\log n)$ and $T=\Tilde\mathcal{O}(1/\sqrt{\lambda_W})=\Tilde\mathcal{O}(1)$ \citep{ERgaps2019}.
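The decentralized graph construction assumed above can be emulated as in the following sketch (networkx is used for illustration and the value of $c$ is arbitrary), which also builds the gossip matrix with $W_{v,w}=\min(1/d_v,1/d_w)$ used in Table~\ref{table:sync}.
\begin{verbatim}
import numpy as np
import networkx as nx

n, c = 2048, 2.0
q = c * np.log(n) / n
G = nx.erdos_renyi_graph(n, q, seed=0)           # each edge kept independently w.p. q
deg = dict(G.degree())
print(nx.is_connected(G), np.mean(list(deg.values())), c * np.log(n))
# gossip matrix with W_{v,w} = min(1/d_v, 1/d_w) and self-weights making rows sum to 1
W = np.zeros((n, n))
for v, w in G.edges():
    W[v, w] = W[w, v] = min(1.0 / deg[v], 1.0 / deg[w])
np.fill_diagonal(W, 1.0 - W.sum(axis=1))
\end{verbatim}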
\section{Broader Impact Assessment}
Our work promotes increased privacy in federated ML. The potential longer-term benefits of our
work in this respect are a wider adoption of privacy-preserving and fully decentralized ML solutions
by service providers thanks to the improved privacy-utility guarantees, as well as better confidence of users
and the general public in the ability of decentralized ML systems to avoid catastrophic data leaks. In particular, our work shows that the advantages of decentralized methods over centralized ones have been overlooked, due to the lack of privacy analysis able to capture the benefits of decentralized algorithms.
Conversely, there are potential risks of accidental or deliberate misuse of our work in the sense that it
could give a false sense of privacy to users if weak privacy parameters are used in deployment. This
applies to all work on differential privacy. More applied research is needed towards developing a
methodology to choose appropriate privacy parameters in a data-driven manner and to reliably assess
the provided protection in practical use-cases.
Our work specifically proposes a varying privacy budget that depends on the relation between users, which might be misused for giving smaller privacy guarantees than the ones that would be designed otherwise. Modularity in privacy guarantees has however already been studied, for instance in \cite{chatzikokolakis:hal-00767210} where the privacy budget is a function of a metric on the input space. Informally, defining what is an acceptable privacy budget based on the context in which some information is revealed is in line with the idea of contextual integrity \cite{a2921ab0e3bc4e2f890006101e85a15f}. According to Helen Nissenbaum's theory, the privacy expectations should take into account five elements: the sender, the receiver, the message, the medium of transmission and the purpose. Mathematically, adapting the privacy guarantee to the receiver and promoting peer-to-peer communications for building a global model thus naturally fits this view. In particular, contextual integrity emphasizes that the privacy budget should not only depend on the information being transmitted, but also on who receives it.
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{} We define our relaxation of Local DP formally in Section~\ref{sec:setting}, the gossip averaging is analyzed in Section~\ref{sec:muff} and show substantial amplification, supported by the experiments in Section~\ref{sec:expe}. We define and prove guarantees for decentralized optimization in Section~\ref{sec:gd}.
\item Did you describe the limitations of your work?
\answerYes{} We discuss the limitations of each theorem in its subsection and show the effective magnitude of the privacy amplification in the experiments (Section~\ref{sec:expe}).
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{} We have
included a broader impact statement at the end of the supplementary.
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{} All our theorems and corollaries have their complete set of assumptions.
\item Did you include complete proofs of all theoretical results? \answerYes{}, except when proving similar results, we do not repeat the full proof but only explain the main differences.
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{}
We provide the code needed to reproduce the results in the supplementary material.
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{} Most of our experiments have no hyperparameters to tune, and we provide the hyperparameters used for Figure~\ref{fig:gd} in the Appendix.
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{} see Figures \ref{fig:gd} and \ref{fig:synthe}.
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerNo{} All of the simulations ran in a few minutes on a regular laptop.
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{} We use Houses Dataset and Facebook Ego dataset and cite them.
\item Did you mention the license of the assets?
\answerYes{} The Houses dataset is in the public domain, as indicated on the link provided, and the Facebook ego dataset is under a BSD license.
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{}We have included our code in the supplementary.
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Conclusion}
We showed that gossip protocols amplify the LDP guarantees provided by local noise injection as values propagate in the graph. Despite the redundancy of gossip that, at first sight, could be seen as an obstacle to privacy, the amplification turns out to be significant: it can nearly match the optimal privacy-utility trade-off of the trusted curator. From the fundamental building block --- noise injection followed by gossip --- that we analyzed under the name \emph{Muffliato}, one can easily extend the analysis to other decentralized algorithms.
Our results are motivated by the typical relation between proximity in the communication graph and lower privacy expectations. Other promising directions are to assume that close people are more similar, which leads to smaller individual privacy accounting \cite{indiv}, or to design new notions of similarity between nodes in graphs that match the privacy loss variations.
\section{Experiments}
\label{sec:expe}
In this section, we show that pairwise network DP provides significant privacy gains in practice, even for moderate-size graphs. We use synthetic and real-world graphs for gossip averaging. For decentralized optimization, we solve a logistic regression problem on real-world data with time-varying Erdos-Renyi graphs, showing in each case clear privacy gains compared to LDP.
\textbf{Synthetic graphs.}
We generate synthetic graphs with $n=2048$ nodes and define the corresponding gossip matrix according to the Hamilton scheme. Note that for a fixed $W$, the privacy guarantees of \emph{Muffliato} are deterministic and given by Equation~\ref{eq:pairwise-ndp-general}. For each graph, we run \emph{Muffliato} for the theoretical number of steps required for convergence, and report in Figure~\ref{fig:synthe} the pairwise privacy guarantees aggregated by shortest path length between nodes, along with the LDP baseline for comparison.
\emph{Exponential graph} (generalized hypercubes): this has been shown to be an efficient topology for decentralized learning \cite{ying2021exponential}. Consistently with our theoretical results, privacy is significantly amplified. The shortest path length completely determines the privacy loss, so there is no variance.
\emph{Erdos-Renyi graph} with $q = c \log n / n$ ($c \geq 1$) \cite{erdos59a}, averaged over 5 runs:
this has nearly the same utility-privacy trade-off as the exponential graph but with significant variance, which motivates the time-evolving version mentioned in Section \ref{sec:gd}.
\emph{Grid:} given its larger mixing time, it is less desirable than the two previous graphs, emphasizing the need for careful design of the communication graph.
\emph{Geometric random graph:} two nodes are connected if and only if their distance is below a given threshold, which models for instance Bluetooth communications (effective only within a certain radius). We sample nodes uniformly at random in the unit square and choose a radius ensuring full connectivity. While the shortest path length is a noisy approximation of the privacy loss, the Euclidean distance is a very good estimator, as shown in Appendix~\ref{app:expe}.
\begin{figure}
\centering\vspace{-9pt}
\hspace{-1em}
\subfigure{
\includegraphics[width=0.3\linewidth]{fig/withlenght25.pdf}
\label{fig:synthe}
}
\hspace{-1em}
\subfigure{
\includegraphics[width=0.4\linewidth]{fig/fb_ex-crop.pdf}
\label{fig:fb}
}
\subfigure{
\includegraphics[width=0.25\linewidth]{fig/bigsummary.pdf}
\label{fig:gd}
}
\caption{(a)~Left: Privacy loss of \emph{Muffliato} in pairwise NDP on synthetic graphs (best, worst and average in error bars over nodes at a given distance), confirming a significant privacy amplification as the distance increases. (b)~Middle: Privacy loss of \emph{Muffliato} from a node chosen at random on a Facebook ego graph, showing that leakage is limited outside the node's own community. (c)~Right: Privacy loss and utility of \emph{Muffliato}-GD compared to a baseline based on a trusted aggregator.
}
\end{figure}
\textbf{Real-world graphs.}
We consider the graphs of the Facebook ego dataset \cite{egofbgraph}, where nodes are the friends of a given user (this central user is not present in the graph) and edges encode the friendship relation between these nodes. Ego graphs typically induce several clusters corresponding to distinct communities: same high school, same university, same hobbies...
For each graph, we extract the giant connected component, choose a user at random and report its privacy loss with respect to other nodes. The privacy loss is often limited to the cluster of direct neighbors and fades quickly in the other communities, as seen in Figure~\ref{fig:fb}. We observe this consistently across other ego graphs (see Appendix~\ref{app:expe}). This is in line with one of our initial motivations: our pairwise guarantees are well suited to situations where nodes want stronger privacy with respect to distant nodes.
\textbf{Logistic regression on real-world data.}
Logistic regression corresponds to minimizing Equation~\ref{eq:erm_obj} with $\ell(\theta;x,y)=\ln(1+\exp(-y\theta^{\top} x))$ where $x\in
\mathbb{R}^d$ and $y\in \{-1,1\}$.
We use a binarized version of UCI Housing dataset.\footnote{\url{https://www.openml.org/d/823}} We standardize the features and
normalize each data point $x$ to have unit $L_2$ norm so that the logistic loss is $1$-Lipschitz for any $(x,y)$.
We split the dataset uniformly at random into a training set (80\%) and a test set and further split the training set across users.
For each gossiping step, we draw at random an Erdos-Renyi graph with the same parameter $q$ and run the theoretical number of steps required for convergence. For each node, we keep track of the privacy loss towards the first node (note that all nodes play the same role). We compute an equivalent baseline in the federated learning setting, where updates are aggregated by a trusted central server with the same parameters, and observe the same behavior (Figure~\ref{fig:gd}). We report the privacy loss per node for $n=2000$ and $n=4000$, showing clear gains over LDP that increase with the number of nodes.
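A minimal sketch of this preprocessing (feature standardization, unit $L_2$ normalization so that the logistic loss is $1$-Lipschitz, and splitting across users) is given below; the function and names are illustrative, not the code used for the experiments.
\begin{verbatim}
import numpy as np

def preprocess(X, y, n_nodes, rng=np.random.default_rng(0)):
    """Standardize features, normalize rows to unit L2 norm, split across users."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # => logistic loss is 1-Lipschitz
    perm = rng.permutation(len(X))
    split = int(0.8 * len(X))                         # 80% train / 20% test
    train, test = perm[:split], perm[split:]
    shards = np.array_split(train, n_nodes)           # one local dataset D_v per node
    return [(X[s], y[s]) for s in shards], (X[test], y[test])
\end{verbatim}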
\section{Private Decentralized Optimization}
\label{sec:gd}
We now build upon \emph{Muffliato} to design decentralized optimization algorithms.
Each node $v\in\mathcal{V}$ possesses a data-dependent function $\phi_v:\mathbb{R}^d\to\mathbb{R}$
and we wish to \emph{privately} minimize the function
\begin{equation}
\label{eq:erm_obj}
\phi(\theta)=\frac{1}{n}\sum_{v\in\mathcal{V}}\phi_v(\theta)\,, \quad \text{with }~ \phi_v(\theta)=\frac{1}{|\mathcal{D}_v|}\sum_{x_v\in\mathcal{D}_v}\ell_v(\theta,x_v)\,,\quad \theta\in\mathbb{R}^d\,,
\end{equation}
where $\mathcal{D}_v$ is the (finite) dataset corresponding to user $v$ for data lying in a space $\mathcal{X}_v$,
and $\ell_v:\mathbb{R}^d\times\mathcal{X}_v\to\mathbb{R}$ a loss function. We assume that $\phi$ is $\mu$-strongly convex, and each $\phi_v$ is $L$-smooth, and denote $\kappa=L/\mu$.
Denoting by $\theta^\star$ the minimizer of $\phi$, for some non-negative $(\zeta_v^2)_{v\in\mathcal{V}}$, $(\rho_v^2)_{v\in\mathcal{V}}$ and all $v\in\mathcal{V}$, we assume:
\begin{equation*}
\NRM{\nabla \phi_v(\theta^\star)-\nabla \phi(\theta^\star)}^2\leq \zeta_v^2\quad,\quad \esp{\NRM{\nabla \ell_v(\theta^\star,x_v)-\nabla \phi(\theta^\star)}^2}\leq \rho_v^2\,,\quad x_v\sim\mathcal{L}_v\,,
\end{equation*}
where $\mathcal{L}_v$ is the uniform distribution over $\mathcal{D}_v$.
We write $\bar \rho ^2 = \frac{1}{n}\sum_{v\in\mathcal{V}}\rho_v^2$
and $\bar\zeta ^2=\frac{1}{n}\sum_{v\in\mathcal{V}}\zeta_v^2$.
We introduce Algorithm~\ref{algo:private-decentralized-sgd}, a private version of the classical decentralized SGD algorithm studied in~\cite{pmlr-v119-koloskova20a}.
Inspired by the optimal algorithm MSDA of~\citet{scaman17optimal} that alternates between $K$ Chebychev gossip communications and expensive dual gradient computations, our Algorithm~\ref{algo:private-decentralized-sgd} alternates between $K$ Chebychev communications and local stochastic gradient steps. This alternation reduces the total number of gradients leaked, a crucial point for achieving good privacy.
Note that in Algorithm~\ref{algo:private-decentralized-sgd}, each communication round uses a potentially different gossip matrix $W_t$. In the results stated below, we fix $W_t = W$ for all $t$ and defer the more general case to Appendix~\ref{app:optim}, where different independent Erdös-Rényi graphs with same parameters are used at each communication round.
\begin{algorithm}[t]
\KwIn{initial points $\theta_i^{0}$, number of iterations $T$, step sizes $\nu>0$, noise variance $\sigma\geq0$, mixing matrices $(W_t)_{t\geq0}$, local functions $\phi_v$, number of communication rounds $K$}
\For{$t = 0$ to $T-1$}{
\For{all nodes $v$ in parallel}{
Compute
$\hat\theta_v^t=\theta_v^t-\nu\nabla_{\theta} \ell_v(\theta_v^t,x_v^t)$ where $x_v^t\sim\mathcal{L}_v$
}
$\theta_v^{t+1}=\textsc{Muffliato}\big((\hat\theta_v^t)_{v\in\mathcal{V}},W_t,K,\nu^2\sigma^2)$
}
\caption{\textsc{Muffliato-SGD} and \textsc{Muffliato-GD}}
\label{algo:private-decentralized-sgd}
\end{algorithm}
\begin{remark}
Our setting encompasses both GD and SGD. \textsc{Muffliato-GD} is obtained by removing the stochasticity, \emph{i.e.}, setting $\ell_v(\cdot)=\phi_v(\cdot)$. In that case, $\bar\rho^2=0$.
\end{remark}
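For concreteness, a minimal sketch of the outer loop of Algorithm~\ref{algo:private-decentralized-sgd} is given below, reusing the muffliato averaging sketch of Section~\ref{subsec:muff}; grad_sample is a placeholder for the user-provided stochastic gradient oracle, and a fixed gossip matrix $W$ is assumed.
\begin{verbatim}
import numpy as np

def muffliato_sgd(theta0, grad_sample, W, T, K, nu, sigma, rng=np.random.default_rng()):
    """theta0: (n, d) initial local models; grad_sample(v, theta) returns a stochastic
    gradient of phi_v at theta. Alternates local SGD steps and K Muffliato gossip rounds."""
    theta = theta0.copy()
    n, d = theta.shape
    for _ in range(T):
        # local stochastic gradient step at every node
        hat = np.stack([theta[v] - nu * grad_sample(v, theta[v]) for v in range(n)])
        # noise with standard deviation nu*sigma (variance nu^2 sigma^2), then K gossip steps
        theta = muffliato(hat, W, K, nu * sigma, rng=rng)
    return theta
\end{verbatim}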
\begin{theorem}[Utility analysis of Algorithm~\ref{algo:private-decentralized-sgd}]\label{thm:utility-sgd}For suitable step-size parameters, for a total number of $T^{\rm stop}$ computations and $T^{\rm stop}K$ communications, with:
\begin{equation*}
T^{\rm stop}=\Tilde{\mathcal{O}}\big(\kappa\big)\,,\quad \text{and} \quad K=\left\lceil\sqrt{\lambda_W}^{-1}\ln\left(\max\big(n,\frac{\bar\zeta^2}{\sigma^2+\bar\rho^2}\big)\right)\right\rceil\,,
\end{equation*}
the iterates $(\theta^t)_{t\geq0}$ generated by Algorithm~\ref{algo:private-decentralized-sgd} verify $\esp{\phi(\Tilde \theta ^{\rm out})-\phi(\theta^\star)}=\tilde\mathcal{O}(\frac{\sigma^2+\bar\rho^2}{\mu T^{\rm stop}})$ where $\Tilde \theta ^{\rm out}$ is a weighted average of the $\bar \theta^t=\frac{1}{n}\sum_{v\in\mathcal{V}}\theta_v^t$ until $T^{\rm stop}$.
\end{theorem}
For the following privacy analysis, we need a bound on the sensitivity of gradients with respect to the data. To this end, we assume that for all $v$ and $x_v$, $\ell_v(\cdot,x_v)$ is $\Delta_{\phi}/2$-Lipschitz\footnote{This assumption can be replaced by the more general Assumption~\ref{hyp:dp_gd} given in Appendix~\ref{app:optim}.}.
\begin{theorem}[Privacy analysis of Algorithm~\ref{algo:private-decentralized-sgd}]\label{thm:privacy-gd}
Let $u$ and $v$ be two distinct nodes in $\mathcal{V}$.
After $T$ iterations of Algorithm~\ref{algo:private-decentralized-sgd} with $K\geq 1$, node $u$ is $(\varepsilon_{u\to v}^T(\alpha),\alpha)$-PNDP with respect to $v$, with:
\begin{equation}
\varepsilon^T_{{u\to v}}(\alpha)\leq \frac{T\Delta_{\phi}^2\alpha}{2\sigma^2}\sum_{k=0}^{K-1}\sum_{ w:\set{v,w}\in \mathcal{E}}\frac{(W^k)_{u,w}^2}{\NRM{(W^k)_w}^2}\,.
\end{equation}
Thus, for any $\varepsilon>0$, running Algorithm~\ref{algo:private-decentralized-sgd} for $T^{\rm stop}(\kappa,\sigma^2,n)$ steps with $K$ as in Theorem~\ref{thm:utility-sgd}, there exists $f$ such that the algorithm is $(\alpha,f)$-pairwise network DP, with:
\begin{equation*}
\forall v \in \mathcal{V}\,,\quad \overline{\varepsilon}_v \leq \varepsilon\quad\text{ and }\quad \esp{\phi(\tilde \theta^{\rm out})-\phi(\theta^\star)}\leq \Tilde{\mathcal{O}}\left(\frac{\alpha \Delta_{\phi}^2 d_v }{n\mu\varepsilon\sqrt{\lambda_W}} + \frac{\bar \rho^2}{nL}\right)\,.
\end{equation*}
\end{theorem}
The term $\frac{\bar \rho^2}{nL}$ above is privacy independent, and typically dominated by the first term.
Comparing Theorem~\ref{thm:privacy-gd} with the privacy guarantees of \emph{Muffliato} (Section~\ref{subsec:muff}), the only difference lies in the factor $\Delta_{\phi}^2/\mu$. While $\Delta_{\phi}^2$ plays the role of the sensitivity $\Delta^2$, $\mu$ is directly related to the complexity of the optimization problem through the condition number $\kappa$: the easier the problem is, the more private our algorithm becomes.
Finally, the same discussion as after Corollary~\ref{cor:sync_simplified} applies here, up to the above optimization-related factors that do not affect the influence of the graph.
\section{Introduction}
Training machine learning models traditionally requires centralizing data in a single server, raising issues of scalability and privacy. An alternative is to use Federated Learning (FL), where each user keeps her data on device \citep{mcmahan2017fl,kairouz_advances_2019}. In \emph{fully decentralized} FL, the common hypothesis of a central server is also removed, letting users, represented as nodes in a graph, train the model via peer-to-peer communications along edges. This approach improves scalability and robustness to central server failures, enabling lower latency, less power consumption and quicker deployment \cite{lopes2007incremental,boyd2006gossip,scaman17optimal, neglia19infocom, alghunaim2019primaldual,Lian2017CanDA,pmlr-v119-koloskova20a}.
Another important dimension is privacy, as a wide range of applications deal with sensitive and personal data. The gold standard to quantify the privacy leakage of algorithms is Differential Privacy (DP) \cite{dwork2013Algorithmic}. DP typically requires to randomly perturb the data-dependent computations to prevent the final model from leaking too much information about any individual data point (e.g., through data memorization). However, decentralized algorithms do not only reveal the final model to the participating nodes, but also the results of some intermediate computations.
A solution is to use Local Differential Privacy (LDP) \citep{Kasiviswanathan2008,d13}, where random perturbations are performed locally by each user, thus protecting against an attacker that would observe everything that users share. This can be easily combined with decentralized algorithms, as done for instance in \cite{Huang2015a, Bellet2018a, leasgd, admm,9524471}. Unfortunately, LDP requires large amounts of noise, and thus provides poor utility.
In this work, we show that the LDP guarantees give a very pessimistic view of the privacy offered by decentralized algorithms. Indeed, there is no central server receiving all messages, and the participating nodes can only observe the messages sent by their neighbors in the graph. So, a given node should intuitively leak less information about its private data to nodes that are far away.
We formally quantify this privacy amplification for the fundamental brick of communication
at the core of decentralized optimization: gossip algorithms. Calling \emph{Muffliato} the combination of local noise injection with a gossip averaging protocol, we precisely track the resulting privacy leakage between each pair of nodes. Through gossiping, the private values and noise terms of various users add up, obfuscating their contribution well beyond baseline LDP guarantees: as their distance in the graph increases, the privacy loss decreases.
We then show that the choice of graph is crucial to enforce a good privacy-utility trade-off while preserving the scalability of gossip algorithms.
Our results are particularly attractive in situations where nodes want stronger guarantees with respect to some (distant) peers. For instance, in social network graphs, users may have lower privacy expectations with respect to close relatives than regarding strangers. In healthcare, a patient might trust her family doctor more than she trusts other doctors, and in turn more than employees of a regional agency and so on, creating a hierarchical level of trust that our algorithms naturally match.
\textbf{Contributions and outline of the paper}
\emph{\textbf{(i)}}
We introduce \emph{pairwise network DP}, a relaxation of Local Differential Privacy inspired by the definitions of \citet{cyffers2020privacy}, which is able to quantify the privacy loss of a decentralized algorithm for each pair of distinct users in a graph.
\emph{\textbf{(ii)}}
We propose \emph{Muffliato}\footnote{The name is borrowed from the Harry Potter series: it designates a ``spell that filled the ears of anyone nearby with an unidentifiable buzzing'', thereby concealing messages from unintended listeners through noise~injection.}, a privacy amplification mechanism composed of local Gaussian noise injection at the node level followed by gossiping for averaging the private values. It offers privacy amplification that increases as the distance between two nodes increases. Informally, the locally differentially private value shared by a node $u$ is mixed with other contributions, to the point that the information that leaks to another node $v$ can have a very small sensitivity to the initial value in comparison to the accumulated noise.
\emph{\textbf{(iii)}}
We analyze both synchronous gossip \citep{dimakis2010synchgossip} and randomized gossip \citep{boyd2006gossip}
under a unified privacy analysis with arbitrary time-varying gossip matrices.
We show that the magnitude of the privacy amplification is significant: the average privacy loss over all the pairs in this setting matches the optimal privacy-utility trade-off of a trusted aggregator, up to a factor $\frac{d}{\sqrt{\lambda_W}}$, where $\lambda_W$ is the weighted graph eigengap and $d$ the maximum degree of the graph. Remarkably, this factor can be of order 1 for expanders, yielding a sweet spot in the privacy-utility-scalability trade-off of gossip algorithms.
Then, we study the case where the graph is itself random and private, and derive stronger privacy guarantees.
\emph{\textbf{(iv)}}
Finally, we develop and analyze differentially private decentralized Gradient Descent (GD) and Stochastic Gradient Descent (SGD) algorithms to minimize a sum of local objective functions.
Building on \emph{Muffliato}, our algorithms alternate between rounds of differentially private gossip communications and local gradient steps. We prove that they enjoy the same privacy amplification described above for averaging, up to factors that depend on the regularity of the global objective.
\emph{\textbf{(v)}} We show the usefulness of our approach and analysis through experiments on synthetic and real-world datasets and network graphs, illustrating how privacy is amplified between nodes in the graph as a function of their distance.
\textbf{Related work}
\emph{Gossip algorithms and decentralized optimization.}
Gossip algorithms \citep{boyd2006randomized,dimakis2010synchgossip} were introduced to compute the global average of local vectors through peer-to-peer communication, and are at the core of many decentralized optimization algorithms.
Classical decentralized optimization algorithms alternate between gossip communications and local gradient steps \citep{nedic2008distributed,koloskova2019decentralized,pmlr-v119-koloskova20a}, or use dual formulations and formulate the consensus constraint using gossip matrices to obtain decentralized dual or primal-dual algorithms \citep{scaman17optimal,hendrikx2019accelerated,even2020asynchrony,even2021continuized,kovalev2021adom, alghunaim2019primaldual}. We refer the reader to~\cite{nedic2018network} for a broader survey on decentralized optimization. Our algorithms are based on the general analysis of decentralized SGD in~\cite{pmlr-v119-koloskova20a}.
\emph{LDP and amplification mechanisms.} Limitations of LDP for computing the average of the private values of $n$ users have been studied, showing
that for a fixed privacy budget, the expected squared error in LDP is $n$ times larger than in central DP
\citep{Chan2012}. More generally, LDP is also known to significantly reduce utility for many learning problems \cite{Zheng2017,Wang2018b}, which motivates the study of intermediate trust models. Cryptographic primitives, such as secure aggregation \citep{Dwork2006ourselves,Shi2011,Bonawitz2017a,ChanSS12,Jayaraman2018,Bell2020,gopa} and secure shuffling \citep{Cheu2019,amp_shuffling,Balle2019b,Ghazi2020,clones}, as well as additional mechanisms such as amplification by subsampling \citep{Balle_subsampling} or amplification by iteration \citep{amp_iter}, can offer better utility for some applications, but cannot be easily applied in a fully decentralized setting, as they require coordination by a central server.
\emph{Amplification through decentralization.}
The idea that decentralized communications can provide differential privacy guarantees was initiated by \cite{Bellet2020a} in the context of rumor spreading. Closer to our work, \cite{cyffers2020privacy} showed privacy amplification for random walk algorithms on complete graphs, where the model is transmitted from one node to another sequentially.
While we build on their notion of Network DP, our work differs from \cite{cyffers2020privacy} in several aspects: (i) our analysis holds for any graph and explicitly quantifies its effect, (ii) instead of worst-case privacy across all pairs of nodes, we prove pairwise guarantees that are stronger for nodes that are far away from each other, and (iii) unlike random walk approaches, gossip algorithms allow parallel computation and thus better scalability.
\section{Setting and Pairwise Network Differential Privacy}
\label{sec:setting}
We study a decentralized model where $n$ nodes (users) hold private datasets and communicate through gossip protocols, which we describe in Section~\ref{subsec:gossip_opt}.
In Section~\ref{subsec:dp}, we recall differential privacy notions and the two natural baselines for our work, central and local DP. Finally, we introduce in Section~\ref{subsec:pairwisedp} the relaxation of local DP used throughout the paper: the \emph{pairwise network DP}.
\subsection{Gossip Algorithms}
\label{subsec:gossip_opt}
We consider a connected graph $G=(\mathcal{V},\mathcal{E})$ on a set $\mathcal{V}$ of $n$ users. An edge ${\{u,v\}}\in\mathcal{E}$ indicates that $u$ and $v$ can communicate (we say they are neighbors). Each user $v\in\mathcal{V}$ holds a local dataset $\mathcal{D}_v$ and we aim at computing averages of private values. This averaging step is a key building block for solving machine learning problems in a decentralized manner, as will be discussed in Section~\ref{sec:gd}.
From a graph, we derive a gossip matrix.
\begin{definition}[Gossip matrix] A gossip matrix over a graph $G$ is a \emph{symmetric} matrix $W\in\mathbb{R}^{\mathcal{V}\times \mathcal{V}}$ with non-negative entries, that satisfies $W\mathds{1}=\mathds{1}$ \emph{i.e.}~$W$ is stochastic ($\mathds{1}\in\mathbb{R}^\mathcal{V}$ is the vector with all entries equal to $1$), and such that for any $u,v\in\mathcal{V}$, $W_{u,v}>0$ implies that ${\{u,v\}}\in\mathcal{E}$ or $u=v$.
\end{definition}
The iterates of synchronous gossip \citep{dimakis2010synchgossip} are generated through a recursion of the form $x^{t+1}=Wx^t$, and converge to the mean of initial values at a linear rate $e^{-t\lambda_W}$, with $\lambda_W$ defined below.
\begin{definition}[Spectral gap]
\label{def:Laplacian}
The spectral gap $\lambda_W$ associated with a gossip matrix $W$ is $\min_{\lambda\in{\rm Sp}(W)\setminus\set{1}}(1-|\lambda|)$, where ${\rm Sp}(W)$ is the spectrum of $W$.
\end{definition}
The inverse of $\lambda_W$ is the relaxation time of the random walk on $G$ with transition probabilities $W$, and is closely related to the connectivity of the graph: adding edges improves mixing properties ($\lambda_W$ increases), but can reduce scalability by increasing node degrees (and thus the per-iteration communication complexity).
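As an illustration of these definitions (not part of the original analysis), the following Python/NumPy sketch builds a gossip matrix for an arbitrary graph using standard max-degree (Metropolis-type) weights, which are symmetric and stochastic, and computes its spectral gap; the weighting rule, the example graph and all function names are our own choices.
\begin{verbatim}
import numpy as np

def metropolis_gossip_matrix(adj):
    """Symmetric, stochastic gossip matrix from a 0/1 adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            if u != v and adj[u, v]:
                W[u, v] = 1.0 / (1 + max(deg[u], deg[v]))
        W[u, u] = 1.0 - W[u].sum()   # remaining mass on the diagonal
    return W

def spectral_gap(W):
    """lambda_W = 1 - second largest |eigenvalue| (connected graph)."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 - eig[1]

# Example: a ring of n = 20 nodes, poorly connected, hence a small gap.
n = 20
adj = np.zeros((n, n), dtype=int)
for u in range(n):
    adj[u, (u + 1) % n] = adj[(u + 1) % n, u] = 1
W = metropolis_gossip_matrix(adj)
print(spectral_gap(W))   # of order 1/n^2 for the ring
\end{verbatim}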
The rate of convergence can be accelerated to $e^{-t\sqrt{\lambda_W}}$
using re-scaled Chebyshev polynomials, leading to iterates of the form $x^t=P_t(W)x^{0}$ \citep{berthier2020jacobi}.
\begin{definition}[Re-scaled Chebyshev polynomials]\label{def:cheby} The re-scaled Chebyshev polynomials $(P_t)_{t\geq 0}$ with scale parameter $\gamma\in[1,2]$ are defined by the second-order linear recursion:
\begin{equation}\label{eq:recursion_cheby}
P_0(X)=1\,,\quad P_1(X)=X\,,\quad P_{t+1}(X)=\gamma XP_t(X) + (1-\gamma)P_{t-1}(X)\,,\,t\geq1\,.
\end{equation}
\end{definition}
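To make the acceleration concrete, here is a minimal NumPy sketch (ours) of gossip iterations driven by the recursion~\eqref{eq:recursion_cheby}, next to plain synchronous gossip; the expression for $\gamma$ follows the one used later in Algorithm~\ref{algo:dp-gossip}, and the spectral gap \texttt{lam} is assumed to be precomputed (e.g., with the previous sketch).
\begin{verbatim}
import numpy as np

def chebyshev_gossip(W, x0, T, lam):
    """Accelerated averaging: returns x^T = P_T(W) x^0 (assumes T >= 1)."""
    gamma = 2 * (1 - np.sqrt(lam * (1 - lam / 4))) / (1 - lam / 2) ** 2
    x_prev, x = x0.copy(), W @ x0              # P_0 = 1, P_1(W) = W
    for _ in range(T - 1):
        x, x_prev = gamma * (W @ x) + (1 - gamma) * x_prev, x
    return x

def plain_gossip(W, x0, T):
    """Non-accelerated baseline: x^T = W^T x^0."""
    x = x0.copy()
    for _ in range(T):
        x = W @ x
    return x
\end{verbatim}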
\subsection{Rényi Differential Privacy \label{subsec:dp}}
Differential Privacy (DP)
quantifies how much the output of an algorithm $\mathcal{A}$ leaks about the dataset taken as input \cite{dwork2013Algorithmic}. DP requires to define an adjacency relation between datasets.
In this work, we adopt a user-level relation \cite{user-dp} which aims to protect the whole dataset $\mathcal{D}_v$ of a given user represented by a node $v\in\mathcal{V}$. Formally, $\mathcal{D} = \cup_{v\in\mathcal{V}} \mathcal{D}_v$
and $\mathcal{D}' = \cup_{v\in\mathcal{V}} \mathcal{D}_v'$ are adjacent datasets, denoted by $\mathcal{D}\sim\mathcal{D}'$, if there exists $v \in \mathcal{V}$ such that only $\mathcal{D}_v$ and $\mathcal{D}_v'$ differ. We use $\mathcal{D}\sim_v\mathcal{D}'$ to denote that $\mathcal{D}$ and $\mathcal{D}'$ differ only in the data of user $v$.
We use Rényi Differential Privacy (RDP) \cite{DBLP:journals/corr/Mironov17} to measure the privacy loss, which allows better and simpler composition than the classical $(\varepsilon,\delta)$-DP. Note that any $(\alpha, \varepsilon)$-RDP algorithm is also $(\varepsilon+\ln(1/\delta)/(\alpha-1),\delta)$-DP for any $0<\delta<1$ \cite{DBLP:journals/corr/Mironov17}.
\begin{definition}[Rényi Differential Privacy]
\label{def:DP}
An algorithm $\mathcal{A}$ satisfies $(\alpha, \varepsilon)$-Rényi Differential Privacy (RDP) for $\alpha>1$ and $\varepsilon>0$ if for all pairs of neighboring datasets $\mathcal{D} \sim \mathcal{D}'$
\begin{equation}
\label{eq:DP}
\divalpha{\mathcal{A}(\mathcal{D})}{\mathcal{A}(\mathcal{D}')} \leq \varepsilon\,,
\end{equation}
where for two random variables $X$ and $Y$, $\divalpha{X}{Y}$ is the \emph{Rényi divergence} between $X$ and $Y$
\begin{equation*}
\textstyle\divalpha{X}{Y}=\frac{1}{\alpha-1}\ln \int \big(\frac{\mu_{X}(z)}{\mu_Y(z)} \big)^{\alpha} \mu_Y (z) dz \,.
\end{equation*}
with $\mu_X$ and $\mu_Y$ the respective densities of $X$ and $Y$.
\end{definition}
Without loss of generality, we consider gossip algorithms with a single real value per node (in that case, $\mathcal{D}_v = \set{x_v}$ for some $x_v\in\mathbb{R}$), and we aim at computing a private estimation of the mean $\Bar{x}=(1/n)\sum_v x_v$.
The generalization to vectors is straightforward, as done subsequently for optimization in Section~\ref{sec:gd}.
In general, the value of a (scalar) function $g$ of the data can be privately released using the Gaussian mechanism \cite{dwork2013Algorithmic,DBLP:journals/corr/Mironov17}, which adds $\eta \sim \gau{ \sigma^2}$ to $g(\mathcal{D})$. It satisfies $(\alpha, \alpha \Delta_g^2/(2 \sigma^2))$-RDP for any $\alpha >1$, where $\Delta_g=\sup_{\mathcal{D}\sim\mathcal{D}'}\|g(\mathcal{D})-g(\mathcal{D}')\|$ is the sensitivity of $g$. We focus on the Gaussian mechanism for its simplicity (similar results could be derived for other DP mechanisms), and thus assume an upper bound on the $L_2$ inputs sensitivity.
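For concreteness, a small illustrative sketch (ours, in Python) of the Gaussian mechanism and of the accounting just recalled: the $(\alpha, \alpha \Delta_g^2/(2\sigma^2))$-RDP guarantee, the noise calibration for a target budget, and the RDP-to-$(\varepsilon,\delta)$-DP conversion stated above.
\begin{verbatim}
import numpy as np

def gaussian_mechanism(value, sigma):
    """Release value + N(0, sigma^2)."""
    return value + sigma * np.random.default_rng().normal()

def rdp_epsilon(alpha, delta_g, sigma):
    """(alpha, eps)-RDP of the Gaussian mechanism, sensitivity delta_g."""
    return alpha * delta_g ** 2 / (2 * sigma ** 2)

def calibrate_sigma(alpha, delta_g, eps):
    """Smallest sigma reaching a target (alpha, eps)-RDP budget."""
    return delta_g * np.sqrt(alpha / (2 * eps))

def rdp_to_dp(alpha, eps, delta):
    """(alpha, eps)-RDP implies (eps', delta)-DP with this eps'."""
    return eps + np.log(1 / delta) / (alpha - 1)
\end{verbatim}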
\begin{hyp}\label{hyp:sensitivity}
There exists some constant $\Delta>0$ such that for all $u\in\mathcal{V}$ and for any adjacent datasets $\mathcal{D}\sim_u\mathcal{D}'$, we have $\NRM{x_u-x_u'}\leq \Delta$.
\end{hyp}
In central DP, a trusted aggregator can first compute the mean $\Bar{x}$ (which has sensitivity $\Delta/n$) and then reveal a noisy version with the Gaussian mechanism.
On the contrary, in local DP where there is no trusted aggregator and everything that a given node reveals can be observed, each node must locally perturb its input (which has sensitivity $\Delta$), deteriorating the privacy-utility trade-off.
Formally, to achieve $(\alpha, \varepsilon)$-DP,
one cannot have better utility than:
\begin{equation*}
\esp{\NRM{x^{\rm out}-\Bar{x}}^2}\leq
\frac{\alpha\Delta^2}{2n\varepsilon} \quad \text{for local DP}\,,\quad \text{and} \quad
\esp{\NRM{x^{\rm out}-\Bar{x}}^2}\leq
\frac{\alpha\Delta^2}{2n^2\varepsilon} \quad \text{for central DP}\,,
\end{equation*}
where $x^{\rm out}$ is the output of the algorithm. This $1/n$ gap motivates the study of relaxations of local~DP.
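A tiny numerical sketch (ours, with arbitrary values) makes this $1/n$ gap explicit:
\begin{verbatim}
alpha, eps, Delta, n = 2, 1.0, 1.0, 1000
local_err   = alpha * Delta**2 / (2 * n * eps)      # 1e-3
central_err = alpha * Delta**2 / (2 * n**2 * eps)   # 1e-6
print(local_err / central_err)                      # ratio = n
\end{verbatim}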
\subsection{Pairwise Network Differential Privacy}\label{subsec:pairwisedp}
We relax local DP to take into account privacy amplification between nodes that are distant from each other in the graph.
We define a decentralized algorithm $\mathcal{A}$ as a randomized mapping that takes as input a dataset $\mathcal{D} = \cup_{v\in\mathcal{V}} (\mathcal{D}_v)$ and outputs the transcript of all messages exchanged between users in the network. A message between neighboring users ${\{u,v\}}\in\mathcal{E}$ at time $t$ is characterized by the tuple $(u,m(t),v)$: user $u$ sent a message with content $m(t)$ to user $v$, and $\mathcal{A}(\mathcal{D})$ is the set of all these messages. Each node $v$ only has a partial knowledge of $\mathcal{A}(\mathcal{D})$, captured by its \emph{view}
\begin{equation*}
\mathcal{O}_v\big(\mathcal{A}(\mathcal{D})\big)=\set{(u,m(t),v)\in\mathcal{A}(\mathcal{D})\quad\text{such that}\quad{\{u,v\}}\in\mathcal{E}}\,.
\end{equation*}
This subset corresponds to direct interactions of $v$ with its neighbors, which provide only indirect information on computations in other parts of the graph.
Thus, we seek to express privacy constraints that are personalized for each pair of nodes. This is captured by our notion of Pairwise Network DP.
\begin{definition}[Pairwise Network DP]
\label{def:indiv_ndp}
For $f: \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}^+$, an algorithm $\mathcal{A}$ satisfies $(\alpha, f)$-Pairwise Network DP (PNDP) if for all pairs of distinct users $u, v \in \mathcal{V}$ and neighboring datasets $\mathcal{D} \sim_u \mathcal{D}'$
\begin{equation}
\label{eq:network-pdp}
\divalpha{\Obs{v}(\mathcal{A}(\mathcal{D}))}{\Obs{v}(\mathcal{A}(\mathcal{D}'))} \leq f(u,v)\,.
\end{equation}
We write $\varepsilon_{{u\to v}} = f(u,v)$ for the privacy loss from $u$ to $v$, and say that $u$ is $(\alpha, \varepsilon_{u\to v})$-PNDP with respect to $v$ if inequality~\eqref{eq:network-pdp} holds for this pair with $f(u,v)=\varepsilon_{u\to v}$.
\end{definition}
By taking $f$ constant in Definition~\ref{def:indiv_ndp}, we recover the definition of Network DP \cite{cyffers2020privacy}.
Our pairwise variant refines Network DP by allowing the privacy guarantee to depend on $u$ and $v$ (typically, on their distance in the graph).
We assume that users are \emph{honest but curious}: they truthfully follow the protocol, but may try to derive as much information as possible from what they observe.
We refer to Appendix~\ref{app:collusions} for an adaptation of our definition and results to the presence of colluding nodes.
In addition to pairwise guarantees, we will use the \emph{mean privacy loss} $\overline{\varepsilon}_v = \frac{1}{n}\sum_{u\in\mathcal{V}\setminus\set{v}} f(u,v)$ to compare with the LDP and trusted-aggregator baselines by enforcing $\overline{\varepsilon} =\max_{v \in \mathcal{V}} \overline{\varepsilon}_v \leq \varepsilon$. The value $\overline{\varepsilon}_v$ is the average privacy loss from all the nodes to $v$ and thus does not correspond to a proper privacy guarantee, but it is a convenient way to summarize our gain: distant nodes, in a sense made precise below, enjoy better privacy guarantees than this average, while the worst cases remain bounded by the baseline LDP guarantee provided by local noise injection.
\section{Private Synchronous Gossip}
\mathieu{Introduce a sensitivity!!!} \mathieu{Assumption~\ref{hyp:sensitivity}}
We present our \textsc{DP-Gossip} algorithm: it takes as inputs node values $(x_v)_{v\in\mathcal{V}}$ to average, a gossip matrix on $G$ and a noise variance $\sigma^2$. Values are initially noised using a Gaussian mechanism, and a Chebyshev acceleration of simple synchronous gossip is then used. We formally introduce this algorithm in Algorithm~\ref{algo:dp-gossip} and provide utility guarantees. We then study the pairwise network differential privacy properties of this algorithm. Throughout the following sections, when studying the averaging problem, we assume that Assumption~\ref{hyp:sensitivity} holds with $\Delta=1$ to ease notation.
\subsection{Private synchronous gossip algorithm and utility analysis}
\begin{algorithm}
\KwIn{local values $(x_v)_{v\in\mathcal{V}}$ to average, gossip matrix $W$ on a graph $G$, number of iterations $T$, noise variance $\sigma^2$}
$\gamma \leftarrow 2\frac{1-\sqrt{\lambda_W(1-\frac{\lambda_W}{4})}}{(1-\lambda_W/2)^2}$\;
\For{all nodes $v$ in parallel}{
$x_v^{0} \leftarrow x_v + \eta_v$ where $\eta_v \sim \gau{\sigma^2}$
}
\For{$t = 0$ to $T-1$}{
\For{all workers $v$ in parallel}{
\For{all neighbors $w$ defined by $W$}{
Send $x^t_v$\;
Receive $x^t_w$\;
}
$x^{t+1}_v \leftarrow (1 - \gamma) x^{t-1}_v + \gamma \sum_{w \in \mathcal{N}_v } W_{vw} x^t_w$\;
}
}
\KwOut{$ x_v^T$ at node $v\in\mathcal{V}$}
\caption{\textsc{DP-Gossip}($(x_v)_{v\in\mathcal{V}},W,T,\sigma^2)$}
\label{algo:dp-gossip}
\end{algorithm}
The iterates generated by Algorithm~\ref{algo:dp-gossip} can be written as:
\begin{equation*}
x^{t}=P_t(W)\big(x^{0}+\gau{\sigma^2I_n}\big)\,,\quad t\geq0\,,
\end{equation*}
where $P_t$ is the re-scaled Chebyshev polynomial of degree $t$ (Definition~\ref{def:cheby}). The utility analysis thus consists of an optimization (or consensus) term vanishing exponentially fast, and a \emph{bias} term due to the initial noisy inputs.
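A compact NumPy sketch (ours) of \textsc{DP-Gossip}, reusing the \texttt{chebyshev\_gossip} routine from the earlier sketch; the interface and the assumption that $\lambda_W$ is known in advance are our own simplifications.
\begin{verbatim}
import numpy as np

def dp_gossip(x, W, T, sigma2, lam):
    """Local Gaussian noise injection followed by accelerated gossip."""
    rng = np.random.default_rng()
    x0 = x + np.sqrt(sigma2) * rng.normal(size=x.shape)
    return chebyshev_gossip(W, x0, T, lam)  # from the earlier sketch
\end{verbatim}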
\begin{theorem}[Utility analysis] For any~$T\geq0$, the iterates $(x^T)_{T\geq 0}$ of \textsc{DP-Gossip} (Algorithm~\ref{algo:dp-gossip}) satisfy:
\begin{equation}\label{eq:dp-gossip-utility}
\frac{1}{2n}\sum_{v\in\mathcal{V}}\esp{\NRM{ x^T_v-\Bar{x}}^2}\leq \left( \frac{1}{n}\sum_{v\in\mathcal{V}}\NRM{x^{0}_v-\Bar{x}}^2 + \sigma^2\right)e^{-T\sqrt{\lambda_W}} + \frac{\sigma^2}{n}\,.
\end{equation}
\end{theorem}
There is thus a trade-off between the variance $\sigma^2$ of the noise initially injected and the precision reached. For a number of iterations $T^{\rm stop}\big(W,(x_v)_{v\in\mathcal{V}},\sigma^2\big)$ given by:
\begin{equation*}
T^{\rm stop}\big(W,(x_v)_{v\in\mathcal{V}},\sigma^2\big)=\left\lceil \sqrt{\lambda_W}^{-1}\ln\left(\frac{n}{\sigma^2}\max\left(\sigma^2,\frac{1}{n}\sum_{v\in\mathcal{V}}\NRM{x^{0}_v-\Bar{x}}^2\right)\right)\right\rceil\,,
\end{equation*}
we have a consensus error $\frac{1}{2n}\sum_{v\in\mathcal{V}}\esp{\NRM{ x^T_v-\Bar{x}}^2}\leq \frac{3\sigma^2}{n}$ with a minimal number of iterations. This property is crucial for the privacy guarantees and the utility-privacy trade-offs we present next.
\subsection{Privacy analysis}
We now turn to the privacy analysis of \textsc{DP-Gossip} (Algorithm~\ref{algo:dp-gossip}), in terms of pairwise network DP (Definition~\ref{def:indiv_ndp}). Let $\mathcal{A}^T$ be Algorithm~\ref{algo:dp-gossip} for $T$ steps. In the gossip averaging problem considered, datasets $\mathcal{D}=(\mathcal{D}_v)_{v\in\mathcal{V}}$ correspond to local values: $\mathcal{D}_v=\set{x_v}$, $v\in\mathcal{V}$.
The \emph{view} of an agent $v\in\mathcal{V}$ thus writes as, for any $T\geq1$:
\begin{equation*}
\mathcal{O}_v\big(\mathcal{A}^T(\mathcal{D})\big)=\set{\big(W^t(x+\eta)\big)_w\,|\quad {\{v,w\}}\in\mathcal{E}\,,\quad0\leq t\leq T-1}\,,
\end{equation*}
where $\eta\sim\gau{\sigma^2I_{n}}$. We further assume that local values $(x_v)_{v\in\mathcal{V}}$ lie in a space of diameter $\Delta>0$, which plays the role of the \emph{sensitivity}.
\begin{theorem}[Pairwise network DP] Let $u,v\in\mathcal{V}$ be two distinct nodes and $\alpha>1$. After $T$ iterations of \textsc{DP-Gossip} (Algorithm~\ref{algo:dp-gossip}), node $u$ is $(\alpha, \varepsilon^T_{{u\to v}}(\alpha))$-PNDP with respect to $v$, with:
\begin{equation}\label{eq:indp}
\varepsilon^T_{{u\to v}}(\alpha)\leq \frac{\alpha}{2\sigma^2}\sum_{t=0}^{T-1}\sum_{w\sim v}\frac{(W^t)_{\{u,w\}}^2}{\NRM{(W^t)_w}^2}\,.
\end{equation}
This formula can be interpreted in terms of random walks, as:
\begin{equation*}
\varepsilon^T_{{u\to v}}(\alpha)\leq \frac{\alpha n}{2\sigma^2}\max_{w\sim v}W_{\{v,w\}}^{-2}\sum_{t=1}^T\proba{X^t=v|X^0=u}^2\,,
\end{equation*}
where $(X^t)_{t\geq 0}$ is the random walk on the graph $G$ with transition matrix $W$.
\end{theorem}
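The bound~\eqref{eq:indp} is easy to evaluate numerically. The following sketch (ours) computes, for every pair $(u,v)$, the right-hand side of~\eqref{eq:indp} from $W$, $T$, $\alpha$ and $\sigma^2$; for simplicity the loop over $w\sim v$ also includes $v$ itself whenever $W_{vv}>0$, which only makes the bound more conservative, and the diagonal entries of the output are meaningless.
\begin{verbatim}
import numpy as np

def pairwise_privacy_bound(W, T, alpha, sigma2):
    """eps[u, v]: bound on the privacy loss from u to v after T steps."""
    n = W.shape[0]
    eps = np.zeros((n, n))
    Wt = np.eye(n)                              # W^0
    for _ in range(T):
        for v in range(n):
            for w in np.flatnonzero(W[v] > 0):  # neighbors w of v
                eps[:, v] += Wt[w] ** 2 / np.sum(Wt[w] ** 2)
        Wt = Wt @ W
    return alpha / (2 * sigma2) * eps
\end{verbatim}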
Since expressing pairwise (\emph{i.e.}~between two fixed agents in the graph) utility-privacy trade-offs directly from the guarantee~\eqref{eq:indp} would not yield an explicit trade-off in terms of well-known graph quantities, we formulate the following corollary in terms of the mean privacy loss, which makes the ratio $d_v/\sqrt{\lambda_W}$ appear.
\begin{cor}[Mean formula and privacy-utility tradeoffs]\label{cor:mean}
Let $\varepsilon>0$ and $\alpha>1$. \textsc{DP-Gossip} after $T$ steps is $(\alpha,f)$-pairwise network DP, for $f:\mathcal{V}\times\mathcal{V}\to \mathbb{R}^+$ satisfying:
\begin{equation*}
\frac{1}{n}\sum_{u\in\mathcal{V}\setminus\set{v}}f(u,v)\leq \varepsilon \,,
\end{equation*}
while having the following utility:
\begin{equation*}
\frac{1}{2n}\sum_{v\in\mathcal{V}}\esp{\NRM{ x^{\rm out}_v-\Bar{x}}^2}\leq \Tilde\mathcal{O}\left( \frac{d_v}{\varepsilon n^2\sqrt{\lambda_W}}\right)\,,
\end{equation*}
where $x^{\rm out}$ is the output of Algorithm~\ref{algo:dp-gossip} after $T^{\rm stop}(x,W,\sigma^2)$ steps for $\sigma^2=\frac{d_v}{2\alpha\varepsilon}$, $d_v$ is the degree of node $v$, and~$\Tilde\mathcal{O}$ hides logarithmic factors in $n$ and $\varepsilon$.
\end{cor}
\mathieu{comments: ratio $d/\sqrt{\lambda_W}$, what this gives on a few graphs: on the line graph and on the complete graph one cannot be sharper; on the grid it is quite good; on good expanders it is excellent.}
\subsection{Adding graph randomness: the case of Erdös-Rényi graphs}
\mathieu{Erdos renyi}
So far, privacy guarantees come from the initial randomness of the noisy inputs. By adding randomness in the graph over which the algorithm is run and using the same arguments together with the quasi-convexity of Rényi divergences, we show that on networks such as Erdös-Rényi graphs we can emulate, up to logarithmic factors, the privacy guarantees of a trusted aggregator. The random graph assumptions required for our analysis are in fact more general than Erdös-Rényi graphs and are presented in Appendix~\mathieu{?}.
We recall that in an Erdös-Rényi graph of parameters $n,p$ over the set of nodes $\mathcal{V}$ (abbreviated $ER(n,p)$ in the sequel), each edge has a fixed probability $p$ of being present in the graph, independently of the others. Algorithm~\ref{algo:ER-dp-gossip} emulates an Erdös-Rényi graph using local operations, before performing the \textsc{DP-Gossip} algorithm.
We further assume that nodes are only aware of their direct neighbors in the graph, which is consistent with local communications and the Erdös-Rényi model.
\begin{algorithm}
\KwIn{local values $(x_v)_{v\in\mathcal{V}}$ to average, parameter $p$, number of iterations $T$, noise variance $\sigma^2$}
Initialize edge set $\mathcal{E}=\emptyset$
\For{$(v,w)\in\mathcal{V}^2$ such that $v\ne w$}{
Add ${\{v,w\}}$ to $\mathcal{E}$ with probability $p$
}
Set $W_{\{v,w\}}=\frac{1}{\max(d_v,d_w)}$ for ${\{v,w\}}\in\mathcal{E}$.
Run \textsc{DP-Gossip}($(x_v)_{v\in\mathcal{V}},W,T,\sigma^2$) to obtain $x^T_v$ at node $v\in\mathcal{V}$
\KwOut{$x^T_v$ at node $v\in\mathcal{V}$}
\caption{\textsc{Random Graph DP-Gossip}($(x_v)_{v\in\mathcal{V}},p,T,\sigma^2)$}
\label{algo:ER-dp-gossip}
\end{algorithm}
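A short NumPy sketch (ours) of the graph sampling and weight assignment used in Algorithm~\ref{algo:ER-dp-gossip}; since the algorithm leaves the diagonal of $W$ unspecified, the sketch fills it so that rows sum to one, which is one natural choice.
\begin{verbatim}
import numpy as np

def erdos_renyi_gossip_matrix(n, p, seed=None):
    """Sample G ~ ER(n, p) and build W as in Algorithm 2 (sketch)."""
    rng = np.random.default_rng(seed)
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, k=1)
    adj = adj | adj.T                         # undirected edges, no self-loops
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for v, w in zip(*np.nonzero(adj)):
        W[v, w] = 1.0 / max(deg[v], deg[w])
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # rows sum to one
    return W
\end{verbatim}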
\begin{theorem}[\textsc{DP-Gossip} on a random graph]\label{thm:privacy-random-graph}
Let $\alpha>1$, $T\geq 0$, $\sigma^2\geq \frac{\alpha(\alpha-1)}{2}$ and $p=c\frac{\ln(n)}{n}$ for $c>1$. Let $u,v\in\mathcal{V}$ be distinct nodes.
After running Algorithm~\ref{algo:ER-dp-gossip} with these parameters, node $u$ is ($\varepsilon_{u\to v}^T(\alpha),\alpha)$-Rényi DP with respect to $v$, with:
\begin{equation*}
\varepsilon_{u\to v}^T(\alpha)\leq \left\{ \begin{aligned} &\quad\quad\frac{\alpha}{2\sigma^2}\quad\quad \text{ with probability }\proba{{\{u,v\}}\in\mathcal{E}}\\
&\frac{\alpha}{\sigma^2} \frac{Td_v}{n-d_v} \quad \text{ with probability }1 -\proba{{\{u,v\}}\in\mathcal{E}}\end{aligned}\right.\,.
\end{equation*}
The probabilities are taken over the graph $G=(\mathcal{V},\mathcal{E})$ on the set of nodes $\mathcal{V}$.
We have $\proba{{\{u,v\}}\in\mathcal{E}}=p$, the eigengap $\lambda_W$ is of order 1 \citep{ERgaps2019} (so that $T^{\rm stop}=\Tilde\mathcal{O}(1)$), and $d_v$ is of order $\ln(n)$.
\end{theorem}
\mathieu{collusions? the average as well?}
\section{Introduction}
Active Galactic Nuclei (AGN) are thought to be powered by accretion onto a super massive black hole (Rees 1984). Seyfert galaxies host AGN, and are commonly divided in Seyfert\,1s and Seyfert\,2s. While Seyfert\,1 galaxies show both broad (FWHM $\geq 1,000 \rm\,km\,s^{-1}$) and narrow (FWHM $\sim 300-1,000 \rm\,km\,s^{-1}$) optical emission lines, Seyfert\,2s show only narrow lines. Seyfert\,1.5s show an optical spectrum intermediate between those of Seyfert\,1s and Seyfert\,2s.
Spectropolarimetric studies of Seyfert\,2s (Miller \& Antonucci 1983) showed the existence of broad emission lines also in some Seyfert\,2s, which led to the formation of the unified model (UM) of AGN (Antonucci \& Miller 1985). According to this model the same engine is at work in all kind of Seyfert galaxies, and differences between Seyfert\,1s and Seyfert\,2s are due to the presence of an anisotropic absorber, which covers the broad line region (BLR) in Seyfert\,2s. This absorber is often associated to a molecular torus. Several other lines of evidence support the UM, like the deficit of ionizing photons in Seyfert\,2 galaxies, which indicates that the ionizing source is hidden from direct view (Schmitt \& Kinney 1996), and the fact that the Narrow Line Regions (NLRs) are seen with conical shapes in Seyfert\,2 galaxies and halo-like shapes in Seyfert\,1s (Pogge 1988).
However, in the last years evidence of differences between Seyfert\,1s and Seyfert\,2s has been discovered. Seyfert\,1s with significant absorption have been found (e.g., Cappi et al. 2006), along with Seyfert\,2s without X-ray absorption (e.g., Bianchi et al. 2008), and Seyferts with significantly clumpy absorbers (e.g., Ricci et al. 2010).
Furthermore spectropolarimetric surveys indicate that only $\sim 30-50\%$ of the Seyfert\,2s show polarized broad lines (PBL), which might imply that not all of them harbor hidden BLRs (Tran 2001,2003).
The hard X-rays are particularly well suited to test the unified model, as at these energies the photons are unaffected by photoelectric absorption and it is possible to have a direct view of the X-ray source, at least up to values of the column densities $N_{\rm \,H}\simeq \sigma_{T}^{-1}$. For $N_{\rm \,H}> \sigma_{T}^{-1}$ Compton scattering starts in fact to play an important role, and only $\lesssim 50\%$ of the incoming photons are detected. Thus, considering only Compton-thin objects, one would expect to detect consistent average spectra for different classes of AGN.
\section{Sample and data analysis}
The sample we used consists of all the 172 Seyfert galaxies detected by {\it INTEGRAL} IBIS/ISGRI (Ubertini et al. 2003) at $z<0.2$ during its first 7 years of observations. Of these, 45 are Seyfert\,1s, 31 Seyfert\,1.5s, 67 Compton-thin Seyfert\,2s, 11 Narrow Line Seyfert\,1s (NLS1s), 11 Compton-thick (CT) Seyfert\,2s and 9 LINERs. The optical classifications were taken from Veron-Cetty \& Veron (2010).
To extract the average spectra of the different AGN samples we followed the procedure adopted by Walter \& Cabral (2009). We created 500$\times$500-pixel mosaic images, modifying the coordinate system of each individual image and setting the coordinates of each source of the sample to an arbitrary fixed position ($\alpha$=0, $\delta$=0). The geometry of the image was also modified to have a consistent PSF whatever the position of the source in the field of view (FOV). These mosaic images provide a stack of all the selected IBIS/ISGRI data for each considered sample. Individual sky images for each pointing were produced in a broad energy band (i.e. 17--250~keV), and divided into 10 bins. The spectra were extracted from the mosaics using {\tt mosaic\_spec}. In Fig.\,\ref{fig:ima_1} we show, as an example, a composite of the central part of the 17--80~keV mosaic image obtained for Seyfert\,1s.
\begin{figure}[t!]
\centering
\includegraphics[width=7cm]{sy1_ima.eps}
\caption{Composite of the central part of the 17-80~keV mosaic image obtained for Seyfert\,1s.}
\label{fig:ima_1}
\end{figure}%
\section{Absorption in the X-rays}
Both photoelectric absorption and Compton scattering must be taken into account when modeling the effects of absorption in the X-rays.
The photoelectric cross section $\sigma_{ph}$ depends strongly on the energy, whereas Compton scattering depends on the Thomson cross section $\sigma_{\rm \,T}$, which is constant with energy, at least up to the Klein-Nishina decline. The ratio between the Klein-Nishina and Thomson cross sections $\sigma_{\rm \,K-\,N}$/$\sigma_{\rm \,T}$ starts to deviate from unity at $E \simeq 20$ keV, and is $\simeq 0.6$ at 200 keV. Compton processes become significant for column densities higher than $\sigma_{\rm \,T}^{-1}=1.5\times 10^{24} \rm \,cm^{-2}$.
The cumulative effect of the two cross sections is given by:
\begin{equation}\label{abs}
M(E)=e^{-\sigma_{ph}(E)N_{\rm \,H}}\times e^{-\sigma_{\rm \,T}N_{\rm \,H}}.
\end{equation}
In Fig.\,\ref{fig:xrayabs} we show the effect of absorption (considering both photoelectric absorption and Compton scattering, as reported in Eq. \ref{abs}) on a power law model with a photon index of $\Gamma=1.95$ in the 0.1--300 keV energy range for different values of $N_{\rm\,H}$. In Fig.\,\ref{fig:escapingflux} we show the fraction of escaping flux for different values of $N_{\rm\,H}$ in the energy band we used. About 60\% of the original emission is unabsorbed for $N_{\rm \,H}=7\times10^{23}\rm \,cm^{-2}$, and we used this value as a threshold between Compton-thin and Compton-thick sources. In our final sample we considered only AGN with a value of the column density $N_{\rm \,H} \leq 7\times10^{23}\rm \,cm^{-2}$. The values of the column densities were taken from soft X-ray ($E<10\rm\,keV$) observations. We excluded from our sample the sources for which no value of $N_{\rm \,H}$ was available in the literature.
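As a quick numerical check (ours, purely illustrative) of the Compton-scattering term in Eq. \ref{abs}, keeping only the energy-independent Thomson part, the following short Python sketch evaluates the escaping fraction $e^{-\sigma_{\rm T} N_{\rm \,H}}$ for the column densities quoted above; at $N_{\rm \,H}=7\times10^{23}\rm \,cm^{-2}$ it returns $\simeq 0.63$, consistent with the $\sim 60\%$ figure used as a threshold.
\begin{verbatim}
import numpy as np

SIGMA_T = 6.652e-25                  # Thomson cross section [cm^2]

def compton_transmission(n_h):
    """Fraction of flux escaping Compton scattering: exp(-sigma_T * N_H)."""
    return np.exp(-SIGMA_T * np.asarray(n_h, dtype=float))

for n_h in [1e23, 7e23, 1.5e24, 5e24]:
    print(f"N_H = {n_h:.1e} cm^-2 -> {compton_transmission(n_h):.2f}")
\end{verbatim}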
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{X_ray_absorption_thomphot.eps}
\caption{Effect of photoelectric absorption and Compton scattering on a power law model with a photon index of $\Gamma=1.95$ in the X-rays.}
\label{fig:xrayabs}
\end{figure}%
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{thomphot_escaping_flux.eps}
\caption{Effect of photoelectric absorption and Compton scattering on a power law spectrum ($\Gamma=1.95$) in the hard X-rays. The upper panel shows the observed spectrum for different values of the hydrogen column density $N_{\rm \,H}$, and the lower panel shows the fraction of escaping flux in the 10 energy bins used in this work. In the lower panel squares represent the case $N_{\rm \,H}=10^{23} \rm \,cm^{-2}$, stars $N_{\rm \,H}=7\times 10^{23} \rm \,cm^{-2}$, diamonds $N_{\rm \,H}=1.5\times 10^{24} \rm \,cm^{-2}$ and triangles $N_{\rm \,H}=5\times10^{24} \rm \,cm^{-2}$.}
\label{fig:escapingflux}
\end{figure}%
\section{Model-independent spectral analysis}
The hard X-ray spectrum of AGN can be well described by a power law continuum with an exponential cut-off, and a reflection hump peaking at E~$\simeq 30$ keV (Magdziarz \& Zdziarski 1995).
One of the drawbacks of a model-dependent analysis is parameter degeneracy, which might not allow several parameters to be well constrained at the same time. An alternative method to characterize differences and similarities between different classes of Seyfert galaxies, independently of their average flux, is a model-independent approach. This has been done by normalizing the flux of the different spectra to the first bin (i.e. 17--22 keV), and then calculating their ratio.
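A minimal sketch (ours, with placeholder arrays) of this normalization-and-ratio procedure; the error propagation is first order and, for simplicity, neglects the correlation introduced by the common normalization bin.
\begin{verbatim}
import numpy as np

def normalized_ratio(flux_a, flux_b, err_a, err_b):
    """Normalize two spectra to their first bin and take the ratio."""
    a, b = flux_a / flux_a[0], flux_b / flux_b[0]
    ratio = a / b
    # first-order error propagation (correlations neglected)
    err = ratio * np.sqrt((err_a / flux_a) ** 2 + (err_b / flux_b) ** 2)
    return ratio, err
\end{verbatim}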
In Fig.\,\ref{fig:modIndependent} we show the spectra and ratios obtained comparing the average hard X-ray spectrum of Seyfert\,1s to those of Seyfert\,1.5s and NLS1s. As can be seen from the figure, the average spectra of both Seyfert\,1.5s and NLS1s are consistent with that of Seyfert\,1s. NLS1s are known to have steeper soft X-ray spectra than Seyfert\,1 and Seyfert\,1.5 galaxies. The fact that our sample of NLS1s is hard X-ray selected probably introduces a bias, since objects with a flatter spectrum are more easily detected than those with a steeper spectrum. Malizia et al. (2008), studying a smaller sample of NLS1s, found that their hard X-ray spectrum is on average steeper than that of Seyfert\,1s. The difference between our results and theirs is probably due to the different sample, lower exposures and smaller energy range used in their work.
Given the similarity of the spectra of Seyfert\,1s and Seyfert\,1.5s, we re-ran the analysis considering all the objects of the two samples, obtaining an average spectrum of Seyfert\,1s and Seyfert\,1.5s.
We compared this spectrum to that of Compton-thin Seyfert\,2s, obtaining a result remarkably different from those obtained for the other classes. As can be seen from Fig.\,\ref{fig:merged}, the average hard X-ray spectrum of Compton-thin Seyfert\,2s is significantly harder in the 22--60 keV band, showing a bump which might be linked to a stronger reflection hump. At energies higher than 60\,keV the ratio of the two spectra is consistent with unity, which implies that, although showing a stronger reflection component, Compton-thin Sy\,2s have the same continuum as Sy\,1s and Sy\,1.5s.
\begin{figure*}[h!]
\centering
\begin{minipage}[!b]{.48\textwidth}
\centering
\includegraphics[width=7.5cm]{ratio_Sy1_SyInt.eps} \end{minipage}
\begin{minipage}[!b]{.48\textwidth}
\centering
\includegraphics[width=7.5cm]{ratio_Sy1_NLS1.eps}\end{minipage}
\hspace{0.05cm}
\begin{minipage}[t]{1\textwidth}
\caption{Spectra ({\it upper panels}) and ratios ({\it lower panels}) between the normalized spectra of Seyfert\,1s and Seyfert\,1.5s ({\it left}), Seyfert\,1s and NLS1s ({\it right}).}
\label{fig:modIndependent}
\end{minipage}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{ratio_merged.eps}
\caption{Ratio between the spectra of Compton-thin Seyfert~2s and the one of Seyfert~1s and Seyfert~1.5s.}
\label{fig:merged}
\end{figure}%
\section{The unified model of AGN}
The continuum emission of different classes of Seyfert galaxies has on average the same characteristics, which confirms the idea that different classes of objects have the same engine, as predicted by the zero-th order unified model of AGN. The stronger reflection observed in the spectrum of Compton-thin Seyfert\,2s cannot be explained in terms of absorption, as the chosen threshold of $N_{\rm\,H}$ should guarantee a transmission-dominated spectrum. Moreover, considering that the average column density of our sample of Seyfert\,2s is $N_{\rm\,H}=1.5\times10^{23}\rm\,cm^{-2}$, one would expect only $\simeq 10\%$ of the flux to be Compton scattered. The excess in the 20--60 keV band might be related either to an intrinsically stronger reflection component of Seyfert\,2s, or to the presence of clumps in the torus, as predicted by Elitzur \& Shlosman (2006).
\section{Conclusions}
We presented a study of the average {\it INTEGRAL} IBIS/ISGRI hard X-ray spectra of different classes of Seyfert galaxies. We found that, as predicted by the UM, the average spectra of Seyfert\,1s and Seyfert\,1.5s are consistent in the 17--250 keV band. NLS1s also show a spectrum similar to that of Seyfert\,1s and Seyfert\,1.5s, probably due to the hard X-ray selected nature of our sample. The average emission of Compton-thin Sy\,2s is harder than that of Sy\,1s and Sy\,1.5s in the 20--60 keV energy band, which might be due either to an intrinsically stronger Compton reflection or to a clumpy torus. The UM is confirmed at the zero-th order, although a homogeneous anisotropic absorber alone cannot easily explain the differences between the spectrum of Seyfert\,2s and that of Seyfert\,1s and Seyfert\,1.5s.
\section{Background}
Recombination plays a critical role in both the production of viable gametes and population genetics processes.
Structurally, chiasmata - the physical manifestations of crossovers - generate the tension between homologs often needed to ensure proper segregation during meiosis I.
The structural role of chiasmata is likely the mechanistic underpinning of the requirement in most species of at least one recombination event per chromosome.
In addition to the structural role of chiasmata, recombination also plays an important role in population genetic processes by generating novel haplotypes within populations.
The production of novel haplotypes entails both the creation and separation of beneficial alleles (or allelic combinations).
The balance between these opposing outcomes shape the adaptive value of recombination, a topic of much research \citep[e.g. ][]{Eshel1970,Feldman1996,Otto2002,Keightley2006,Barton2009}.
Recombination rates vary between species, individuals, and sexes \citep{Bell1982,Trivers1988,Burt1991,Lenormand2003,Lenormand2005,Lorch2005,Coop2007}.
We address the evolution of sex-difference in the recombination rate (i.e. heterochiasmy).
We first characterize general patterns of sex-differences in recombination rates.
After reviewing the major theories invoked to explain the evolution of heterochiasmy, we introduce the major argument of this manuscript - that the common and consistent sex-difference in the operation of meiotic drive can favor the evolution of heterochiasmy.
We conclude by considering the implications of our model in light of our current understanding of the genetic basis of recombination modification.
\subsection{Observations}
Four broad patterns describe sex differences in recombination rates. We describe these patterns and their consistency below.
\emph{The achiasmatic sex is heterogametic (The Haldane-Huxley Rule)}: When recombination is absent in one sex (i.e. it is achiasmatic), that sex is nearly always the heterogametic sex \citep[i.e. the sex bearing heteromorphic sex chromosomes -- ][]{Haldane1922,Huxley1928,Burt1991}. This observation represents more than 25 evolutionary independent origins of sex-specific achiasmy \citep{Burt1991}, and is observed under both male (e.g. \emph{Drosophila}) and female (e.g. \emph{Lepidoptera}) heterogamy, with very few known exceptions \citep{Davies2005}.
\emph{Females recombine more than males}:
When both sexes recombine, the female recombination rate often exceeds the male rate \citep{Bell1982,Trivers1988,Burt1991,Lenormand2003,Lenormand2005,Lorch2005}.
We display this pattern in Figure \ref{MaleVFemaleMaps}A by plotting the sex difference in recombination rates (measured by autosomal map lengths or chiasmata counts) in a number of taxa. We note that the grouping of our taxa is somewhat arbitrary; nonetheless, clear trends across groups in the degree of sex difference in recombination rate are apparent.
The pattern of higher female recombination rates is quite broad, occurring in animal species with XY, ZW, and environmental sex-determination.
However, there are many exceptions (e.g. marsupials, some grasshoppers, and newts) suggesting that the process of recombination is not mechanistically constrained towards higher rates in females and that the ratio of male to female rates can evolve quite rapidly (see \citet{Lenormand2005}).
In plants, there is no clear trend towards higher recombination rates in female meiosis (Figure \ref{MaleVFemaleMaps}A); however, when outbreeding angiosperms are considered separately, recombination rates are on average slightly higher in female meiosis than in male meiosis \citep{Lenormand2005}.
\begin{figure}
\begin{center}
\includegraphics[height=8cm]{Observations.eps}
\end{center}
\caption{{\bf Genome wide and regional sex-differences in recombination rates.}
{\bf A)} The difference between female and male recombination rates ( $\circ$ from linkage maps, excluding known sex-chromosomes, $\times$ from chiasmata counts) divided through by the sex-averaged rates.
Points above the dashed line indicate higher rates of recombination in females than in males.
$^*$ indicates $p < 0.05$ using a two-tailed sign test, without correcting for multiple tests or phylogeny, and ignoring ties.
{\bf B)}. Sex-standardized recombination rates across the human genome.
The sex-standardized rate equals the local recombination rate in a given sex (\textcolor{blue}{male} and \textcolor{red}{female}), divided by the average recombination rate in that sex.
The x-axis indicates the position of the focal genomic region (.2\% of a chromosome arm), divided by the length of the chromosome arm.
Data are presented from all metacentric human autosomes.
Lines represent a lowess smoothing of these points. }
\label{MaleVFemaleMaps}
\end{figure}
\emph{Females recombine at relatively higher rates near centromeres}:
After controlling for the genome-wide sex difference in recombination rate, females recombine more often near centromeres than do males, while males recombine relatively more near telomeres.
This pattern has been noted in fish \citep{Sakamoto2000,Singer2002,Reid2007}, humans \citep{Broman1998,Kong2002,Clark2010}, dogs \citep{Wong2010}, and mice \citep{Shifman2006, Paigen2008}, although there are some exceptions \citep[e.g. opossums][]{Samollow2004,Samollow2007}.
Utilizing data from a recent fine-scale analysis of sex-specific recombination rates in humans \citep{Kong2010}, we display an example of this pattern in Figure~\ref{MaleVFemaleMaps}B.
This pattern is not an obvious consequence of different requirements for segregation in males and females, as it is thought that recombination near centromeres does not contribute to cohesion of homologous chromosomes.
Furthermore, recombination events too close to the centromere can generate problems during meiosis, such as an increase in the rate of precocious separation of sister chromatids, potentially leading to aneuploid gametes \citep[i.e. gametes with an abnormal number of chromosomes -- ][]{Rockmill2006}.
The higher rates of recombination close to centromere in aneuploid transmissions \citep{May1990,Lamb2005} suggests that recombination close to the centromere may actually incur fitness costs due to errors during meiosis.
\emph{Genetic control of the recombination rate is often sex specific:}
The process of gametogenesis is fundamentally different in males and females.
Consequently, the genetic control of many aspects of meiosis differs between the sexes \citep{Morelli2005}.
Perhaps due to sex differences in meiosis, alleles that influence the recombination rate in one sex often have no influence (e.g. the 17q21.31 inversion region in humans), or the opposite effect (e.g. RNF212 in humans) on the recombination rate in the other sex \citep{Kong2004,Kong2008}.
In fact, \citet{FledelAlon2011} show that there is no detectable heritable intersexual correlation in the recombination rate despite additive genetic variance in both sexes.
Although mechanisms governing sex differences in the recombination rate are not well-characterized, one candidate is the length of the synaptonemal complex (SC), a structure composed of cohesins and other proteins involved in meiosis, which stabilizes connections between homologs and facilitates crossing over. SC length is positively associated with the recombination rate, and is longer in females than in males \citep{Lynn2002,Tease2004,Dumont2011}.
Regardless of mechanism, the consequence of this pattern is clear - there is ample opportunity for sex-specific recombination rates in one sex to evolve independently of those of the other sex.
\subsection{Current theories}
Despite the large body of work on the evolution of sex and recombination \citep[for recent reviews see ][]{Otto2002,Barton2009}, the evolutionary forces responsible for the observed patterns of heterochiasmy have received curiously little theoretical attention \citep[but see ][]{Trivers1988,Lenormand2003,Lorch2005}.
Below, we summarize the current theories of the evolution of heterochiasmy and review their plausibility.
We note that these theories are not mutually exclusive, and it is unlikely that any one theory will explain all the observations presented above.
Further, our model (like many of the other theories of the evolution of heterochiasmy) does not argue that selection directly favors the evolution of male and female rates in opposite directions.
Rather, in our model, selection favors a modification in the female rate, with little or no direct selection on the male rate.
In both our model and others, heterochiasmy results from the sex-specificity of recombination modifiers and/or additional selective constraints (such as stabilizing selection on the sex-averaged rate) which are not made explicit.
For example, if too much recombination is generally deleterious due to the costs of ectopic recombination, but circumstances favor an increase in female recombination rates, modifiers that increase female rates but not male rates may be favored, leading to the evolution of heterochiasmy.
\emph{Sex chromosome pleiotropy}: Based on their observations that the achiasmatic sex is heterogametic, \citet{Haldane1922} and \citet{Huxley1928} proposed that heterochiasmy may evolve as a pleiotropic consequence of selection to suppress recombination between heteromorphic sex chromosomes \cite[see also][]{Nei1969}. Although suppression of recombination between sex chromosomes could well explain the Haldane-Huxley rule, it cannot fully explain the quantitative variation in autosomal recombination rates between the sexes, as they span modes of sex determination, nor can this model explain variation across chromosomal regions.
\emph{Maternal aging}: Physical connections between homologous chromosomes are necessary for proper segregation during meiosis. In many species, these physical connections are formed by chiasmata and therefore, one recombination event per chromosome is generally required for proper meiosis.
In species where female meiosis is arrested, physical connections between chromosomes may degrade with time \citep[e.g. ][]{Lamb2005a}, and thus additional chiasmata may function to stabilize chromosomes across the metaphase plate.
According to the maternal aging theory, elevated female recombination rates provide more physical connections between chromosomes, ensuring that oocytes will segregate correctly after years of insults accumulated during meiotic pachytene arrest.
This theory is supported by the finding that women with higher recombination rates have higher fertility \citep{Kong2004} and that the viable gametes of older mothers have a higher number of crossovers, consistent with the idea that selection against those gametes with too few crossovers increases throughout a mother's reproductive life \citep{Kong2004, Coop2008}.
However, this theory cannot easily explain heterochiasmy in organisms in which females create eggs throughout their lives.
Additionally, this theory cannot easily explain the spatial pattern of recombination in females, since elevated female rates are often accomplished by chiasmata placed in locations thought not to stabilize chromosomes.
\emph{Sexual selection}: \citet{Trivers1988} argued that selection to preserve high fitness genotypes favors recombination suppression in the sex with greater variance in fitness.
Both current theory \citep{Lenormand2003} and data \citep{Burt1991,Mank2009} suggest that sexual selection cannot explain the evolution of heterochiasmy.
Using multi-locus population genetic theory, \citet{Lenormand2003} showed that sex-differences in selection on diploid genotypes cannot generally favor the evolution of heterochiasmy. \citet{Burt1991} and \citet{Mank2009} showed that the degree of heterochiasmy decreases with sexual-size dimorphism (a proxy for the strength of sexual selection) - an observation counter to predictions of the sexual selection hypothesis.
\emph{Haploid and psuedo-haploid selection}: \citet{Lenormand2003} showed that simple sex differences in selection during the haploid life stage can favor the evolution of heterochiasmy.
Like the sexual selection theory, the haploid selection model argues that the sex which produces the gamete (or gametophyte) with higher variance in fitness will recombine less.
As sperm and pollen can experience intense competition for fertilization opportunities, \citet{Lenormand2003} argued that males should in general have lower recombination rates.
\citet{Lenormand2003} also proposed that selection on haploid components of diploid genotypes (e.g. selection on epistasis in \emph{cis}, or imprinted loci) can also favor the evolution of heterochiasmy; we call this the pseudo-haploid selection model.
Although theoretically plausible, it is unclear whether the small numbers of imprinted genes \citep{Morison2005} or genes expressed in sperm \citep[e.g. ][]{Joseph2004} are sufficient for the (pseudo-)haploid selection theory to explain heterochiasmy in animals.
Furthermore, a comparative study found no association between heterochiasmy rates and the inferred strength of sperm competition in eutherian mammals \citep{Mank2009}. However, the absence of a haploid stage in the female gametes of animals and occasional haploid expression in sperm makes the haploid selection theory viable.
The situation is more complex in plants, as due to the alternation of generations, there is haploid expression in both the products of male and female meiosis.
Nonetheless, \citet{Lenormand2005} argue that as in the majority of outcrossing plant species, selection is likely strong on male haploid products (due to pollen grain competition), the haploid-selection model could explain the female biased recombination rates in outcrossers.
\emph{Meiotic drive}: Below, we articulate the meiotic drive hypothesis. According to this model, sex differences in the operation of gametic drive presents a sex-specific selective pressure on linked and unlinked modifiers of the recombination rate.
Other authors have proposed that sex differences in meiotic drive may offer an opportunity for the evolution of sex-specific recombination rates \citep{Lenormand2003,Haig2010}. We discuss our work in relation to these models below.
\subsection{Meiotic drive and the evolution of heterochiasmy}
Meiosis provides an opportunity for alternative alleles to compete for representation in the functional gametes of heterozygotes. Alleles that distort meiosis and gametogenesis in their favor (i.e. gametic drivers) often do so at the expense of individual viability or fertility. Therefore, although driving alleles can benefit by distorting meiosis, individual selection generally favors Mendelian segregation (\citealt[e.g. ][]{Eshel1985}, but see \citealt{Ubeda2005,Haig2010}), creating a conflict between drivers and unlinked loci in the genome (i.e. `the parliament of genes,' \citealp{Leigh1971}).
Gametic drivers exploit the system of Mendelian segregation by providing a transmission advantage to their chromosome.
Higher recombination rates make the ultimate chromosomal context of an allele uncertain, which can prevent the evolution of drive systems \citep{Thomson1974,Charlesworth1978,Haig1991}.
It is therefore thought that modification of the recombination rate can evolve as a mechanism to alter the efficiency of gametic drivers.
Conceptually, this model holds for both male and female drive systems.
However, gametic drivers are often sex-limited and display sex differences in the mechanisms by which they operate \citep[see][]{Ubeda2005}.
Currently, the implications of sex differences in meiotic drive for the evolution of sex-specific recombination rates are unclear \citep[but see][ for an hypothesis]{Haig2010}.
Male gametic drivers (e.g. Segregation Distorter in \emph{Drosophila} and the t-haplotype in mice) usually operate after meiosis and are characterized by a two-locus damage-insensitive system \citep{Wu1991}.
When these loci are tightly linked, a damage-insensitive haplotype can increase in frequency, even if this haplotype decreases individual fitness (e.g. \citealp{Prout1973, Charlesworth1978}).
If the drive system imposes a cost, unlinked recombination enhancers can be favored for their ability to disrupt the drive system \citep{Thomson1974,Haig1991}.
However, modifiers of the recombination rate linked to and in phase with driving alleles can increase in frequency if they decrease the recombination rate between components of a drive system \citep{Thomson1974,Charlesworth1978}.
This later idea is supported by the observation that gametic drivers in males are often located in inversions \citep{Wu1991}, which act to suppress recombination locally.
Female meiosis creates a single haploid product, providing an opportunity for alleles to compete for representation in the egg (true meiotic drive \citealp{Sandler1957,Zwick1999, Pardo-ManuelDeVillena2001a, Pardo-ManuelDeVillena2001b}).
Recombination plays a fundamental role in female drive systems, as it determines at which stage of meiosis alleles can compete with each other.
Since non-sister centromeres segregate at meiosis one (hereafter MI), an allele that biases the outcome of MI in favor of its centromere becomes over-represented in oocytes, so long as there is no (or an even number of) recombination (events) between driver and centromere (see Figure~\ref{DriveCartoon}A.1 and A.2).
The best-characterized cases of MI female drive are a subset of Robertsonian translocations in mammals \citep{Pardo-ManuelDeVillena2001c}, and a centromeric allele in \emph{Mimulus guttatus} \citep {Fishman2005,Fishman2008}.
Malik and Henikoff \citep[e.g. ][]{Malik2002,Malik2009,Malik2009a} have argued for a broad role, throughout eukaryotic evolution, of female centromeric drive at MI in driving the rapid evolution of centromeric sequence and the proteins that bind them.
Centromeres cannot drive during meiosis II (hereafter MII), as they are paired with their sister chromosomes.
However, with a single (or any odd number of) recombination event(s) between a focal locus and the centromere, the products of the first meiotic division (dyads) will be variable at that locus, providing an opportunity for MII drive (see Figure \ref{DriveCartoon}).
Examples of MII drivers include the Om locus in mice \citep{Wu2005} and Ab-10 in maize \citep{Rhoades1966}.
The structure of the Ab-10 haplotype (an inversion spanning multiple loci) highlights the importance of recombination in the evolution of female drivers.
A combination of the alleles in the Ab-10 system allows the chromosome to drive during MII \citep{Rhoades1966,Dawe1996}, while a distinct allele at another locus in this complex alters the recombination rate between itself and its centromere \citep{Hiatt2003}, maximizing its ability to drive \citep{Buckler1999}.
In this manuscript we show that sex differences in meiotic drive can favor the evolution of heterochiasmy.
With female meiotic drive, female recombination modifiers (unlinked to drivers) are favored when they decrease the efficacy of female drive.
With segregating MI drivers, this corresponds to a female recombination enhancer, while female recombination suppressors decrease the efficacy of MII drivers.
By contrast, when recombination modifiers and drivers are in one tightly linked haplotype, female recombination modifiers that increase the efficacy of drive are favored (Table~\ref{predictions}).
\subsection{Relation to previous models}
Numerous authors have explored cases in which recombination modifiers are favored for their ability to break up systems of transmission ratio distortion \citep[e.g. ][]{Thomson1974,Charlesworth1978}.
\citet{Haig1991} pointed out that in addition to breaking apart drive systems, recombination can decrease the efficiency of drive by making the identity of an allele's partner in a dyad uncertain.
The potential role of meiotic drive in the evolution of heterochiasmy has received less attention.
\citet{Lenormand2003} and \citet{Lenormand2005} briefly discussed male gametic drive systems, as a special case of the haploid selection model that may favor the evolution of heterochiasmy.
As haploid chromosomes drive, the female meiotic drive model could also be considered as a special instance of the haploid selection model.
However, we present our model as a distinct hypothesis because of its focus on recombination as a mechanism to modify the efficacy of meiotic drivers.
Recently, \citet{Haig2010} found that unlinked modifiers of the female recombination rate increase in frequency when they \emph{enhance the efficacy of drivers} (the opposite of our finding, below). In both our model and Haig's, drivers decrease individual fitness. However, under Haig's model, the cost is reflected in the genetic identity of the products of meiosis, in which the fertility of a female heterozygous for a driver is determined by the genetic composition of her dyads. Haig's model equilibrates when the cost of drive on fertility is balanced by the degree of distortion. At this equilibrium, a modification of the recombination rate which increases female fertility also results in an increase in frequency of the driver, leading to the counterintuitive result that recombination modifiers unlinked to the driver benefit by increasing the efficacy of drive.
In other words, in Haig's model, females produce a higher proportion of viable gametes by creating chromosomal configurations that lessen the ill effects of drive.
Somewhat paradoxically, by doing so, unlinked modifiers act to increase the frequency of drivers, despite the long term population-level fitness consequences of the spread of drivers.
In the model described below, we come to the opposite conclusion.
The discrepancy between our models arises from different conceptions of how drive influences fitness.
In our model an individual's viability depends on its genotype at the drive locus, but the viabilities of all dyads are equivalent.
Therefore, the recombination modifier increases in frequency by decreasing the strength of drive.
By contrast, in Haig's model, recombination directly influences female fertility by influencing the viability of eggs.
Which model is more relevant to the evolution of heterochiasmy will depend on the mechanistic underpinning of the cost of female drive.
\begin{table}
\begin {tabular} { | p{1.cm} | p{1.5cm} | p{2.75cm} | p{2.25cm} |p{7cm} |}
\hline
Case & When & Linkage & Prediction & Reason \\ \hline
1 & MI & Unlinked, or repulsion phase & $r_\Venus > r_\Mars$ & Female recombination enhancers discourage drive and hitchhike with high fitness, non-driving haplotypes. \\ \hline
2 & MI & Coupling phase & $r_\Venus < r_\Mars$ & Female recombination suppressors hitchhike with driving haplotypes. \\ \hline
3 & MII & Unlinked, or repulsion phase & $r_\Venus < r_\Mars$ & Female recombination suppressors discourage drive and hitchhike with high fitness, non-driving haplotypes. \\ \hline
4 & MII & Coupling phase & $r_\Venus > r_\Mars$ & Female recombination enhancers hitchhike with driving haplotypes. \\ \hline
\end {tabular}
\caption{Relative male ($r_\Mars$) to female ($r_\Venus$) recombination rates as predicted by the meiotic drive theory.}
\label{predictions}
\end{table}
\section{The models}
Since neutral and beneficial drivers will quickly sweep through a population, and will not present a genetic conflict, we focus on the evolution of drivers that decrease organismal fitness.
We explore two models of drive.
In the first model, the drive system consists of a single-locus that drives at either MI or MII.
In our second model, drive is achieved by a two-locus system in which the ability of an MI centromeric driver to distort meiosis depends on the genotype at a partially linked drive enhancer locus.
For the single-locus drive model, we only care whether sister alleles are or are not separated during the first meiotic division (i.e. whether an odd ($1,~3,~5,\cdots$) or even ($0,~2,~4,\cdots$) number of recombination events occurs between the centromere and the drive locus) in drive heterozygotes.
In our model, sisters are separated at the first meiotic division with probability $r$, or remain united with probability $1-r$.
With an even number of recombination events between drive and centromeric loci, there will be variation among, but not within, dyads.
This genetic difference among dyads presents an opportunity for drive during MI, but prevents drive during MII.
By contrast, an odd number of recombination events between drive and centromeric loci produces dyads that are each heterogeneous at the drive locus (variation within, but not among, dyads), precluding the possibility of drive during MI, but facilitating MII drive.
Given an opportunity for drive, drivers are represented in $\alpha_{\text{M}i} > 1/2$ of the gametes produced by drive heterozygotes.
Without an opportunity for drive, meiosis is fair.
We then introduce another layer of biological complexity - the dependence of MI drive on the two-locus genotype at centromeric and drive enhancer loci.
In this two-locus system, the genotype at the drive enhancer locus influences the ability of the driving centromeric allele to distort segregation at MI.
Without a drive enhancer, meiosis is fair.
By contrast, the driving centromere is present in $\alpha_1$ of the gametes from double heterozygotes, and in $\alpha_2$ of the gametes from individuals homozygous for the drive enhancer and heterozygous at the centromeric drive locus.
As MII drive is unlikely to depend on the genetic identity of the centromere, we do not explore the case of two-locus drive during meiosis II.
Under this two-locus model, the outcome of multiple recombination events is somewhat complicated.
Therefore, we make the restrictive assumption that only zero or one recombination events occur on a tetrad between the centromere and the drive locus (i.e. complete crossover interference).
Thus, in our two-locus model $r$ is the probability of a single recombination event between the centromere locus and the drive locus, $1-r$ is the probability of no recombination, and we assume no double crossovers.
For each model, we contrast the case of sex-limited drive to results from a model in which both sexes drive.
In these models, we introduce an allele that modifies the recombination rate between driver and centromere, without otherwise influencing organismal fitness.
For models of single-locus drive, we compare the evolution of recombination modifiers in tight linkage with drivers to the evolution of recombination modifiers unlinked to drivers.
\begin{figure}
\begin{center}
\includegraphics{cartoon5.eps}
\end{center}
\caption{{\bf Recombination during oogenesis and the opportunity for meiotic drive.}
Only one of the four products of female meiosis is included in the egg.
Recombination is a critical determinant for the opportunity for drive because it partitions variation within and among the products of the first meiotic division (dyads).
With recombination between the marker and centromere, there is variation within but not among dyads, presenting an opportunity for drive during MII but not MI.
When recombination occurs after the marker, there is variation among but not within dyads, presenting an opportunity for drive during MI but not MII.}
\label{DriveCartoon}
\end{figure}
\section{Single-locus drive}
Consider a biallelic locus at which the driving allele, D, occurs in frequency, $f_{D}$, and the alternative allele occurs in frequency $f_{d} = 1 - f_{D}$.
As we assume that drive has a pleiotropic cost on individual fitness, heterozygotes and drive homozygotes have fitnesses $w_{Dd} \leq 1$ and $w_{DD} <1 $, respectively.
Although we assume that this cost is suffered equally by both sexes, allowing for sex-specific costs does not change the qualitative picture.
The effective strength of drive acting in the $i^\text{th}$ meiotic division, $X_{\text{Mi}}$, is a function of both the recombination rate, $r$, and the strength of distortion, $\alpha_{\text{Mi}}$.
With drive in both sexes, $X_{\text{MI}} = \alpha_{\text{MI}}(1-r)+r/2$, and $X_{\text{MII}} = r \alpha_{\text{MII}} + (1 - r)/2 $, for MI and MII drivers, respectively.
With female-limited drive, the strength of MI and MII drive in oogenesis is
$X_{\text{MI}\Venus} = \alpha_{\text{MI}\Venus}(1-r_\Venus)+r_\Venus/2$, and $X_{\text{MII}\Venus} = r_\Venus \alpha_{\text{MII}\Venus} + (1 - r_\Venus)/2 $, respectively.
Note that the effective strength of MI drive decreases with the recombination rate, while the effective strength of MII drive increases with the recombination rate.
Note further that the effective strength of female drive is independent of the male recombination rate.
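The dependence of the effective strength of drive on the recombination rate can be made concrete with a short numerical sketch (purely illustrative and not part of the original analysis; parameter values are arbitrary):
\begin{verbatim}
# Effective strength of drive as a function of the recombination rate r
# between the centromere and the drive locus, for a fixed distortion
# strength alpha (illustrative values only).

def x_mi(r, alpha):
    # MI drive: the opportunity to drive exists only when sister alleles
    # remain together at MI (probability 1 - r)
    return alpha * (1.0 - r) + r / 2.0

def x_mii(r, alpha):
    # MII drive: the opportunity to drive exists only when sister alleles
    # are separated at MI (probability r)
    return r * alpha + (1.0 - r) / 2.0

alpha = 0.8
for r in (0.0, 0.1, 0.25, 0.5):
    print("r = %.2f   X_MI = %.3f   X_MII = %.3f"
          % (r, x_mi(r, alpha), x_mii(r, alpha)))
# X_MI falls and X_MII rises as r increases, as described above.
\end{verbatim}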
When drive has equivalent fitness effects in males and females, the mean population fitness, $W$, equals
\begin{equation} W = f_{dd} + f_{Dd} w_{Dd} + f_{DD} w_{DD}\label{W}\end{equation}
where $f_\bullet$ denotes genotypic frequencies in newborns.
If drive operates in both sexes, the frequency of a driver after selection and drive is equivalent in sperm and eggs, and equals
\begin{equation} f_{D\mbox{M\emph{i}}}^{\prime} = \frac{f_{DD} w_{DD} + X_{\text{Mi}} f_{Dd} w_{Dd}}{W} \label{DeltaDriver} \tag{2.a} \end{equation}
When drive is female-limited, the recursive equation describing the frequency of drivers in sperm after selection ($f_{D\Mars}^{\prime}$) is
\begin{equation} f_{D\Mars}^{\prime} = \frac{w_{DD} f_{DD} + w_{Dd} f_{Dd} / 2}{W} \label{DeltaDriverSpermFemaleDrive} \tag{2.b} \end{equation}
and the recursive equation describing the frequency of drivers in eggs ($f_{D\Venus}^{\prime}$) is
\begin{equation}
f_{D\Venus\mbox{M\emph{i}}}^{\prime} = \frac{w_{DD} f_{DD} + w_{Dd} f_{Dd} X_{\text{Mi}\Venus} }{W}\label{DeltaDriverEggMIFemaleDrive} \tag{2.c} \end{equation}
where genotypic frequencies equal
$ f_{DD}=f_{D\Venus}f_{D\Mars},
f_{Dd}=f_{D\Venus}f_{d\Mars}+f_{d\Venus}f_{D\Mars}$, and
$f_{dd}=f_{d\Venus}f_{d\Mars}$, and the subscripts, $\Venus$ and $\Mars$, represent allele frequencies in female and male gametes, respectively (i.e. egg and sperm).
With drive in both sexes, allele frequencies are identical in sperm and eggs, and therefore genotypes are in Hardy-Weinberg Equilibrium; however, with female-limited drive, Hardy-Weinberg assumptions are inappropriate.
In our appendix, we derive equilibrium frequencies of drive alleles in the absence of recombination modifiers by solving for $f_D$ when $\Delta f_D = 0$.
Since a protected polymorphism at the drive locus facilitates the evolution of recombination modifiers, the existence of these equilibria is important for the models below.
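As a concrete, purely illustrative check of these recursions, the short sketch below iterates equation \eqref{DeltaDriver} for a driver acting in both sexes, assuming Hardy-Weinberg genotype frequencies; for a complete, recessive-lethal MI driver it settles at the equilibrium frequency $1-r$ derived in the appendix and used later in the text (all parameter values are arbitrary):
\begin{verbatim}
# Iterate the single-locus recursion (equation 2.a) for a driver acting in
# both sexes, assuming random mating (Hardy-Weinberg genotype frequencies).
# Parameter values are illustrative only.

def x_mi(r, alpha):
    return alpha * (1.0 - r) + r / 2.0

def next_freq(f, x, w_Dd, w_DD):
    f_DD, f_Dd, f_dd = f * f, 2 * f * (1 - f), (1 - f) ** 2
    W = f_dd + f_Dd * w_Dd + f_DD * w_DD        # mean fitness
    return (f_DD * w_DD + x * f_Dd * w_Dd) / W  # equation (2.a)

r, alpha = 0.25, 1.0        # complete MI drive
w_Dd, w_DD = 1.0, 0.0       # recessive-lethal driver
f = 0.10
for _ in range(500):
    f = next_freq(f, x_mi(r, alpha), w_Dd, w_DD)
print("equilibrium driver frequency = %.4f (expected 1 - r = %.4f)"
      % (f, 1 - r))
\end{verbatim}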
\subsection{Single-locus drive - An unlinked recombination modifier.}
We now investigate the coevolution of alleles at the drive locus and alleles at a locus which influences the recombination rate.
At the recombination modifier locus, alleles M and m occur in frequencies $f_M$ and $f_m = 1 - f_M$, respectively.
The M allele additively alters the rate of recombination between driver and the centromere, has no direct effect on individual fitness, and is unlinked to the drive locus.
The recombination rate between drive and centromeric loci is $r$, $r+\delta r$, and $r+2 \delta r$ in mm, Mm, and MM individuals, respectively.
In the case of drive in both sexes, we allow M to have equivalent effects in males and females.
With single-locus, female-limited drive we only consider the influence of M on the female recombination rate, since our results are unaffected by $r_\Mars$.
We note that since M and D are unlinked, M does not influence the rate of recombination between itself and the drive locus.
The population consists of four haplotypes (md, Md, mD, and MD). Equations describing haplotype frequencies after selection, recombination, and drive, as well as many other derivations are presented in our appendix.
To find the frequency of the recombination modifier, M, after one generation, we sum the frequencies of both haplotypes containing the M allele.
With drive in both sexes, the change in frequency of M after selection, recombination, and drive equals
\begin{equation}
\Delta f_M = -\frac{\text{LD}}{W} (f_d (1- w_{Dd}) + f_D (w_{Dd} - w_{DD}))
\label{DMBS} \tag{3.a}
\end{equation}
where LD is the linkage disequilibrium between driving and recombination modifier alleles (i.e. LD $= f_{MD}-f_M f_D$),
and is measured in the gametes which united at random to form this generation.
Equation \eqref{DMBS} shows that a genetic association between drive and modifier loci ($LD\neq 0$) is necessary for change in modifier frequency.
More specifically, if we assume that the driver is costly and this cost is less than fully dominant (i.e. $w_{DD}\leq w_{Dd}\leq 1$, and $w_{DD} < 1$), an unlinked recombination modifier increases in frequency when it is underrepresented in drive haplotypes (i.e. $LD < 0$), as it avoids the fitness cost accrued by driving alleles.
Analysis of the case of female-limited drive yields a similar conclusion. With sex-limited drive, the change in modifier frequencies in sperm and eggs equals
\begin{equation} \Delta f_{M\Mars \mbox{MI}} = \Delta f_{M\Mars \mbox{MII}} = \frac{f_{M\Mars}-f_{M\Venus}+z/W}{2} \label{DSperm} \tag{3.b} \end{equation}
\begin{equation} \Delta f_{M\Venus \mbox{MI}} = \Delta f_{M\Venus \mbox{MII}} = \frac{f_{M\Venus}-f_{M\Mars}+z/W}{2} \label{DEggs} \tag{3.c} \end{equation}
respectively, where
\begin{equation} z =
(w_{Dd}-1) (\text{LD}_\Mars f_{d\Venus}+\text{LD}_\Venus f_{d\Mars})
+ ( w_{DD}-w_{Dd})(\text{LD}_\Mars f_{D\Venus}+\text{LD}_\Venus f_{D\Mars}) \label{z} \tag{4} \end{equation}
The first term in equations \eqref{DSperm} and \eqref{DEggs}, (i.e. $f_{M\Mars}-f_{M\Venus}$, and $f_{M\Venus}-f_{M\Mars}$, respectively) captures the role of syngamy in homogenizing allele frequencies across the sexes.
The second term, $z/W$, is the change in modifier frequency due to linked selection on drive alleles.
Since we assume that drive entails a less than dominant fitness cost (i.e. $w_{DD}\leq w_{Dd}\leq 1$, and $w_{DD} < 1$), $z$ will be positive when LD is negative.
Therefore, recombination modifiers will increase in frequency when they are underrepresented on the driving haplotypes, like the case above.
Additionally, since $z/W$ has the same role in changing modifier frequency ($\Delta f_M$) in male and female gametes, this model does not generate a sex difference in modifier frequency.
Since negative LD between recombination modifier and drive alleles is necessary for the adaptive evolution of the recombination rate, the salient question is - does our model generate negative LD?
To address this question, we begin with a population in linkage equilibrium (LD = 0), and investigate the level of LD in gametes after selection, recombination, and drive.
Starting from $LD=0$, the LD generated in a single generation with MI drive in both sexes equals
\begin{equation}
LD^{\prime}_{\mbox{MI}}=\delta r f_D f_d f_M f_m w_{Dd}(1-2\alpha_{\text{MI}}) / W \tag{5} \label{LDA}
\end{equation}
The LD generated with MII drive in both sexes equals
$LD^{\prime}_{\mbox{MII}} = -LD^{\prime}_{\mbox{MI}}$.
The LD generated between a female-limited driver and a recombination modifier after a single generation of selection, recombination, and drive is equivalent to the case described above replacing $\delta r$ and $\alpha$ by $\delta r_{\Venus}$ and $\alpha_{\Venus}$.
Note that with MI drive, negative LD is created between driving and recombination enhancing alleles (Equation \eqref{LDA} is less than zero when $\delta r$ and $\delta r_\Venus$ are positive).
By contrast, with MII drive, negative LD is created between driving and recombination suppressing alleles (when $\delta r$ and $\delta r_\Venus$ are negative).
Since negative LD between driver and modifier results in an increase in modifier frequency (e.g. Equations (3.a) and (4)), recombination enhancers increase in frequency by hitchhiking with high fitness non-driving haplotypes.
Similarly, recombination suppressors are favored with MII drive.
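These signs are easy to verify numerically. The sketch below (added purely for illustration; all parameter values are arbitrary) evaluates equation \eqref{LDA} and then equation \eqref{DMBS} for MI and MII drive:
\begin{verbatim}
# Evaluate the one-generation LD (equation 5) and the resulting change in
# modifier frequency (equation 3.a) for drive in both sexes.  Parameter
# values are illustrative only.

f_D, f_M = 0.3, 0.2
f_d, f_m = 1 - f_D, 1 - f_M
w_Dd, w_DD = 0.95, 0.5
alpha_MI = 0.9
delta_r = 0.05                               # a recombination enhancer

W = f_d**2 + 2*f_D*f_d*w_Dd + f_D**2*w_DD    # mean fitness (Hardy-Weinberg)

# equation (5): LD generated in one generation, starting from LD = 0
LD_MI = delta_r * f_D*f_d*f_M*f_m * w_Dd * (1 - 2*alpha_MI) / W
LD_MII = -LD_MI

def delta_f_M(LD):
    # equation (3.a)
    return -(LD / W) * (f_d*(1 - w_Dd) + f_D*(w_Dd - w_DD))

print("MI : LD' = %+.2e  delta f_M = %+.2e" % (LD_MI, delta_f_M(LD_MI)))
print("MII: LD' = %+.2e  delta f_M = %+.2e" % (LD_MII, delta_f_M(LD_MII)))
# The enhancer (delta_r > 0) gains under MI drive and loses under MII drive.
\end{verbatim}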
We complement our single generation view of the creation of LD by making use of the Quasi Linkage Equilibrium (QLE) method
(e.g. \citealp{Kimura1965,Nagylaki1976,Barton1991,Kirkpatrick2002}). QLE relies on the fact that when linkage is loose, and selection and drive are weak, LD approaches an equilibrium value on a much faster time-scale than the slow change in allele frequencies.
To derive this QLE LD, we solve for the equilibrium level of $\text{LD}^*$ (i.e. the LD for which $\Delta$ LD = 0), while holding allele frequencies constant.
To make this solution analytically tractable, we further assume that the strength of drive ($\alpha -1/2$ and $\alpha_\Venus -1/2$), the strength of selection ($1-w_{Dd}$ and $1-w_{DD}$), and the degree of recombination modification ($\delta r$ and $\delta r_\Venus$) are all of order $\xi$, and we ignore terms of higher order than $\xi^2$. Doing this we find that
\begin{equation}
\text{LD}^* \approx 2 \delta r f_d f_D f_m f_M (1 - 2 \alpha_{\text{MI}}) \tag{6.a} \label{QLE1}
\end{equation}
Comparing equation \eqref{QLE1} to equation \eqref{LDA} shows that much of the equilibrium LD is generated in a single generation. Recalling that by definition, $\alpha_{\text{MI}} > 1/2$, it is clear that a recombination enhancer will be in negative LD with the drive allele, and will therefore be selectively favored. When considering MII drive, the result is conceptually similar but of opposite sign - with MII drive, recombination suppressors, rather than enhancers, generate negative LD with drivers.
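Dividing equation \eqref{LDA} by equation \eqref{QLE1} makes the single-generation comparison quantitative (a rearrangement added here purely for illustration):
\begin{equation*}
\frac{LD^{\prime}_{\mbox{MI}}}{\text{LD}^*} = \frac{w_{Dd}}{2W} \approx \frac{1}{2},
\end{equation*}
so under weak selection roughly half of the QLE disequilibrium is built up in a single generation.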
Employing QLE methods, we can also derive the equilibrium LD between recombination modifiers and female-limited drivers, with the same assumptions as above. In this case, the equilibrium LD in sperm is a function of the LD in eggs, $\text{LD}_\Mars^* = \text{LD}_\Venus^* / 3$.
Combining this result and our finding that $f_{M\Venus}=f_{M\Mars}$, and assuming weak selection, the equilibrium LD in eggs with MI drive is approximately
\begin{equation}
LD_\Venus^* \approx \frac{3 \delta r (1-2 \alpha)f_{\textit{Dd}}f_{\textit{Mm}}} {4 - (1-2 \alpha) (1 - r_\Venus)(f_D - 2 f_{d\Mars} )} \tag{6.b}
\end{equation}
Again, the QLE results show that most of the LD in females is generated in a single generation.
Note that, although selection does not directly generate LD in males (above), the inheritance of haplotypes from females maintains a low equilibrium level of LD in males.
In summary, when unlinked to a female meiotic driver, a female recombination modifier can spread by reducing the ability of the driver to distort meiosis in its favor \citep{Thomson1974, Haig1991}.
By decreasing the strength of drive, the recombination modifier becomes under-transmitted on the drive haplotype, as the driving allele drives less efficiently in the presence of the modifier.
Therefore the modifier hitchhikes with the high fitness, non-driving allele; however, unlike traditional models of hitchhiking, the modifier essentially arranges a ride for itself by increasing the expected transmission of the nondriving allele.
Furthermore, when drive is sex-limited, only recombination in the driving sex can act to decrease the efficiency of the single-locus drivers described above.
\emph{
Therefore, a modifier with equal and opposite effects on male and female recombination rates can spread,
and so our results meet the criteria of \citet{Lenormand2003} for the evolution of heterochiasmy. }
Predictions concerning the evolution of recombination modifiers unlinked to MI and MII drivers are summarized by cases 1 and 3 in Table 1, respectively.
Enhancers of the female recombination rate are favored when unlinked to female-limited MI drivers (case 1), while female recombination suppressors are favored when unlinked to female-limited MII drivers (case 3).
We display the dynamics of the case of the coevolution of a MI driver and an unlinked recombination modifier in Figure \ref{Unlinked}.
\begin{figure}
\includegraphics[width=14cm]{Unlinked.eps}
\caption{{\bf The coevolution of an MI driver and an unlinked recombination enhancer.}
The frequencies of MI drive alleles \textcolor{red}{($f_D$, red)}, and unlinked recombination modifiers {\color{SkyBlue}($f_M$, blue)} across generations.
The correlation between alleles, $\frac{LD}{\sqrt{f_Df_df_Mf_m}}$, is denoted by the \textcolor{red}{red} {\color{SkyBlue} blue} line, and its value is given on the right axis.
Drive is complete and recessive lethal ($w_{Dd}=1$, $w_{DD}=0$).
The initial recombination rate is 1/4, and each copy of M increases the probability of recombining by 0.05.
Initial frequencies of drive and recombination modifier alleles equal $f_{D_0}=0.10$ and $f_{M_0}=0.01$, respectively.
{\bf \ref{Unlinked}A)} Drive in both sexes ($\alpha_{\text{MI}}=1$, $r=1/4$, $\delta r = 0.05$).
{\bf \ref{Unlinked}B)} Female-limited drive ($\alpha_{\text{MI}\Venus}=1$, $\delta r_\Venus = 0.05$).
}
\label{Unlinked}
\end{figure}
\subsection{Single-locus drive - A linked recombination modifier.}
We now explore the fate of a recombination modifier in tight linkage with the drive locus (i.e. there is no recombination between drive and recombination modifier loci).
Intuitively, a modifier linked to a driver faces two opposing pressures - the deleterious effect of drive on individual fitness, and the selfish effect of drive on allelic transmission.
These two components are captured by the equation describing the change in modifier frequency when drive operates in both sexes:
\begin{equation} \Delta f_{M\mbox{MI}} = \frac{-LD(f_d (1- w_{Dd}) + f_D (w_{Dd}-w_{DD}))}{W} +\frac{LD\:w_{Dd} (1 - (r+\delta r))}{W}\label{DMBSlinkedMI} \tag{7.a} \end{equation}
\begin{equation} \Delta f_{M\mbox{MII}} = \frac{-LD(f_d (1- w_{Dd}) + f_D (w_{Dd}-w_{DD}))}{W} +\frac{LD\:w_{Dd} (r +\delta r)}{W}\label{DMBSlinkedMII} \tag{7.b} \end{equation}
The change in frequency of a recombination modifier by individual selection is represented by the first term of equations \eqref{DMBSlinkedMI} and \eqref{DMBSlinkedMII} and is equivalent to the unlinked case (equation \eqref{DMBS}).
As in equation \eqref{DMBS}, this term is positive when LD is negative.
The later term in equations \eqref{DMBSlinkedMI} and \eqref{DMBSlinkedMII} represents the change in frequency of the modifier due to drive, which is positive when LD is positive.
Thus, although individual selection favors linked recombination modifiers in negative LD with drivers, transmission distortion favors modifiers in positive LD with drivers.
With female-limited drive, the intuition is similar; however, the recursive equations are more complex. The change in frequency of a recombination modifier in sperm, $ \Delta f_{M\Mars}$, is determined entirely by individual selection, and equals $ \frac{1}{2}(f_{M\Mars}-f_{M\Venus}+z/W)$, where $z$ retains its value from equation \eqref{z}. The change in the frequency of a recombination modifier in eggs ($\Delta f_{M\Venus}$) equals
\begin{equation} \Delta f_{M\Venus \mbox{MI}} = \frac{1}{2}(f_{M\Venus}-f_{M\Mars} + \frac{z}{W} + \frac{u\:w_{Dd}}{W} (1 - (r_\Venus + \delta r_\Venus))) \label{DMBSlinkedFemaleMI} \tag{7.c} \end{equation}
\begin{equation} \Delta f_{M\Venus \mbox{MII}} = \frac{1}{2}(f_{M\Venus}-f_{M\Mars} + \frac{z}{W} + \frac{u\:w_{Dd}}{W} (r_\Venus +\delta r_\Venus)) \label{DMBSlinkedFemaleMII} \tag{7.d} \end{equation}
where
\begin{equation} u=LD_\Venus+ LD_\Mars + (f_{D\Venus} - f_{D\Mars}) (f_{M\Venus} - f_{M\Mars}) \tag{8} \end{equation}
Therefore, with either sex-limited drive or drive in both sexes, both negative and positive LD between recombination modifier and driver can contribute to an increase in modifier frequency.
Like the unlinked case, selection can generate linkage disequilibrium in a population in which no LD existed previously (see appendix).
However, since a novel mutation must arise on one haplotype, the LD formed by the mutational history of tightly linked loci is more important than is the LD generated by selection.
We display the population genetic dynamics of recombination modifiers linked to MI drivers in Figure \ref{Linked}.
Recombination enhancers increase in frequency when they arise on the non-driving background (Both sexes, Figure \ref{Linked}A. Female-limited, Figure \ref{Linked}B. See also Table 1, case 1).
By contrast, recombination suppressors spread when they arise on the driving background (Both sexes, Figure \ref{Linked}C. Female-limited, Figure \ref{Linked}D. See also Table 1, case 2).
The opposite result holds in the case of MII drive (Table 1, cases 3 and 4).
To provide a stronger intuition of the evolution of a recombination modifier linked to a drive locus, we investigate the special case of a recessive lethal driver which distorts meiosis in both males and females ($w_{Dd}=1$, $w_{DD}=0$, and $\alpha=1$). In this case, Equations \eqref{DMBSlinkedMI} and \eqref{DMBSlinkedMII} become
\begin{equation} \Delta f_{M\mbox{MI}} = LD (1-f_D - (r + \delta r))/W \tag{9.a} \label{reclethalmi} \end{equation}
\begin{equation} \Delta f_{M\mbox{MII}} = LD (r+\delta r -f_D )/W \tag{9.b} \label{reclethalmii} \end{equation}
respectively.
Under the assumption of recessive lethality and complete drive, the equilibrium frequencies of MI and MII drivers are straightforward and equal $f_{D\text{MI}}^* = 1 - r $ and $f_{D\text{MII}}^* = r $, respectively (see appendix).
Plugging these values into equations \eqref{reclethalmi} and \eqref{reclethalmii} we find that the change in frequency of recombination modifiers in tight linkage with MI and MII drivers equals
\begin{equation} \Delta f_{M\mbox{MI}} = - LD \delta r/W \tag{9.c} \label{reclethalmib} \end{equation}
\begin{equation} \Delta f_{M\mbox{MII}} = LD \delta r/W \tag{9.d} \label{reclethalmiib} \end{equation}
respectively.
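This substitution can also be verified symbolically; the following check is included only for completeness and assumes the SymPy library is available:
\begin{verbatim}
# Symbolic check that substituting the recessive-lethal equilibria
# f_D* = 1 - r (MI) and f_D* = r (MII) into equations (9.a) and (9.b)
# recovers equations (9.c) and (9.d).  Requires SymPy.
import sympy as sp

LD, W, r, dr, fD = sp.symbols("LD W r dr f_D")

dM_MI  = LD * (1 - fD - (r + dr)) / W    # equation (9.a)
dM_MII = LD * (r + dr - fD) / W          # equation (9.b)

print(sp.simplify(dM_MI.subs(fD, 1 - r)))   # -LD*dr/W  (equation 9.c)
print(sp.simplify(dM_MII.subs(fD, r)))      #  LD*dr/W  (equation 9.d)
\end{verbatim}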
Equations \eqref{reclethalmib} and \eqref{reclethalmiib} provide a straightforward characterization of the invasion of a rare recombination modifier.
With MI drive and tight linkage (equation \eqref{reclethalmib}) a recombination enhancer will increase in frequency when it arises on the non-driving haplotype $(LD < 0, \delta r > 0$, gives $\Delta f_{M\mbox{MI}} > 0$),
while a recombination suppressor will increase in frequency when it arises on the driving haplotype $(LD > 0, \delta r < 0$, gives $\Delta f_{M\mbox{MI}} > 0$).
The opposite result holds for a recombination modifier tightly linked to an MII driver (Equation \eqref{reclethalmiib}).
We note that equations \eqref{reclethalmib} and \eqref{reclethalmiib} only hold for the first generation of selection.
The analysis of female-limited drive is more complex, but ultimately yields a similar result.
Here, we present our invasibility analysis in which we derive results under female-limited drive.
In this case, female-specific recombination enhancers are favored when in repulsion phase with MI drivers or coupling phase with MII drivers, while the opposite results hold for recombination suppressors (see `Invasibility analysis,' pages 18-19, 21 in our appendix).
For this invasibility analysis (see \citealp{Otto2007}), we write the recursion for haplotype frequencies in sperm and eggs after selection, recombination, and drive in matrix form.
When eigenvalues of the Jacobian of this matrix (evaluated at the viability-drive equilibrium, with the frequency of the recombination modifier set to zero) are greater than one, a rare recombination modifier increases in frequency. For both the cases of MI and MII drive, there are two eigenvalues of interest. One eigenvalue is greater than one when $\delta r_\Venus >0$, the other is greater than one when $\delta r_\Venus <0$ (suggesting that both recombination enhancers and suppressors can increase in frequency). Because the LD generated in these cases is large (in fact, at equilibrium there are only two haplotypes), we do not employ the QLE approach, which works best under loose linkage \citep{Barton1991,Kimura1965,Kirkpatrick2002,Nagylaki1976}.
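The numerical recipe behind such an analysis is generic: linearize the recursion at the modifier-free equilibrium by finite differences and inspect the eigenvalue magnitudes. The sketch below is purely illustrative; it applies this recipe to the simple single-locus recursion of equation \eqref{DeltaDriver} (confirming local stability of the drive polymorphism), whereas the invasibility results quoted here come from applying the same idea to the full haplotype recursion in the appendix:
\begin{verbatim}
# Generic linearization recipe: evaluate the Jacobian of a recursion
# x' = F(x) at an equilibrium by central finite differences and inspect
# the eigenvalues.  A rare type invades when the leading eigenvalue
# exceeds one.  Applied here, for illustration only, to equation (2.a).
import numpy as np

def jacobian(F, x_star, eps=1e-6):
    x_star = np.asarray(x_star, dtype=float)
    n = x_star.size
    J = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (np.asarray(F(x_star + dx))
                   - np.asarray(F(x_star - dx))) / (2 * eps)
    return J

def recursion(x, r=0.25, alpha=1.0, w_Dd=1.0, w_DD=0.0):
    f = x[0]
    x_mi = alpha * (1 - r) + r / 2
    W = (1 - f)**2 + 2*f*(1 - f)*w_Dd + f*f*w_DD
    return [(f*f*w_DD + x_mi * 2*f*(1 - f)*w_Dd) / W]

f_star = np.array([0.75])     # f_D* = 1 - r for these parameter values
lam = np.abs(np.linalg.eigvals(jacobian(recursion, f_star))).max()
print("leading eigenvalue at the drive equilibrium: %.3f" % lam)  # ~0.57 < 1
\end{verbatim}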
Unlike the case of unlinked recombination modifiers, modifiers linked to drive loci do not generally approach fixation. Assuming that the modifier is favored, it rapidly goes to fixation on the background onto which it mutated; however, this haplotype now moves to its new equilibrium frequency determined by the new recombination rate. So long as this new equilibrium is greater than zero and less than one, recombination modifiers will be stably polymorphic (see Figure \ref{Linked}).
In sum, the evolution of recombination modifiers linked to drivers yields a rich and diverse set of predictions. When tightly linked to MI drivers, recombination enhancers in repulsion phase (Table 1, case 1), and recombination suppressors in coupling phase are favored (Table 1, case 2).
When tightly linked to MII drivers, recombination suppressors in repulsion phase (Table 1, case 3), and recombination enhancers in coupling phase are favored (Table 1, case 4).
\emph{Like the unlinked model (above), the fate of a female recombination modifier linked to a female-limited driver is independent of its influence on the male recombination rate.
Thus, a modifier with equal but opposite effect on male and female recombination rates (i.e. no net effect) can spread, facilitating the evolution of heterochiasmy.}
\begin{figure}
\includegraphics[width=14cm]{Linked.eps}
\caption{
{\bf The evolution of drivers and recombination modifiers in tight linkage}
The frequencies of MI drivers \textcolor{red}{($f_D$, red)}, and linked recombination modifiers {\color{SkyBlue}($f_M$, blue)} across generations.
The correlation between alleles is denoted by the \textcolor{red}{red} {\color{SkyBlue}blue} line, and its value is given on the right axis.
Initial frequencies of driver and recombination modifier alleles are $f_{D_0}=0.10$ and $f_{M_0}=0.01$, respectively.
Drive is complete and recessive lethal. Initial recombination rate equals 1/4.
{\bf \ref{Linked}A}) Drivers and recombination enhancement in both sexes ($ \delta r = 0.05$),
M arises on a d chromosome.
{\bf \ref{Linked}B}) Female-limited driver and recombination enhancement. ($ \delta r_\Venus = 0.05$),
M arises on a d chromosome.
{\bf \ref{Linked}C}) Drivers and recombination suppression in both sexes. ($ \delta r = - 0.05$),
M arises on a D chromosome.
{\bf \ref{Linked}D}) Female-limited driver and recombination suppression. ($\delta r_\Venus = -0.05$),
M arises on a D chromosome.
}
\label{Linked}
\end{figure}
\section{Two-locus drive systems}
We now turn our attention to the more complex case of two-locus, MI drive.
In this model the strength of drive by a centromeric variant, C, depends on the genotype at the drive-enhancer locus, D, which is on the same chromosome as the centromeric driver.
Specifically, in Cc heterozygotes, meiosis is fair in a genetic background of d homozygotes, but C is represented in $\alpha_1$ and $\alpha_2$ of gametes from Dd/Cc and DD/Cc individuals, respectively (where $\alpha_2 \geq \alpha_1 \geq 0.5 $ and $\alpha_2>0.5$).
Although it is possible that the drive enhancer will incur an individual fitness cost, we focus on the case in which the drive enhancer is neutral, but the driving centromere is costly.
Imposing a fitness cost on the drive enhancer adds subtle quantitative differences to the results, and a model in which both loci involve costs has been well explored in descriptions of the SD system \citep{Hartl1975,Charlesworth1978,Haig1991}.
Since the genetic identity of a centromere seems unlikely to influence MII drive, we do not pursue a two-locus model of MII drive.
With two-locus, MI drive, a recombination enhancer can increase in frequency, and ultimately approach fixation, as in the case of single-locus MI drive \citep{Thomson1974,Haig1991}.
With drive in both sexes, the change in frequency of a recombination modifier is
\begin{equation}
\Delta f_M = \frac{-\text{LD}_{MC}}{W}(f_c(1-w_{Cc})+f_C(w_{Cc}-w_{CC}))\label{3lm}
\end{equation}
where $\text{LD}_{MC}=f_{MC}-f_M f_C$ is the linkage disequilibrium between centromere and recombination modifier.
As in the case of recombination modifiers unlinked to single-locus MI drivers, the recombination enhancer spreads by becoming underrepresented in the low fitness, centromeric driving (C) genetic background.
Recombination enhancers generate this LD by decreasing the expected transmission of the drive enhancer allele, which allows the recombination enhancer to escape from the driving haplotype (Figure \ref{3Locus}A).
When drive is female-limited, alleles that increase the recombination rate between drive enhancer and centromeric loci in either sex are favored.
However, female-limited recombination enhancers spread much more quickly and have more negative LD with driving centromeres than do male-limited recombination modifiers.
For example, in Figure \ref{3Locus}B it only takes approximately 11,400 generations for a female-limited recombination modifier to rise from a frequency of 0.1\% to 95\%, but it takes more than an order of magnitude longer for a male-limited recombination enhancer to reach this frequency (Note the order of magnitude difference on the x-axis in \ref{3Locus}B and \ref{3Locus}C).
Because female recombination enhancers are more strongly favored than male recombination enhancers in this system, alleles that increase female recombination can increase in frequency, even if they drastically reduce male recombination rates (Figure \ref{3Locus}D). Therefore, our two-locus model also passes the `no-net-effect test' \citep{Lenormand2003} - facilitating the evolution of heterochiasmy.
The intuition behind this numerical result is as follows.
Drive in females generates a positive association between centromeric drivers (C) and linked drive enhancers (D).
With this association, D increases in frequency during female (but not male) meiosis.
Recombination in females decreases both the expected transmission of drive-enhancers, and the co-transmission of D and C alleles.
Since gametes from mothers with higher recombination rates have fewer D alleles than expected, drive in their daughters is less efficient.
Ultimately, the granddaughters of females with higher recombination rates suffer less from the deleterious fitness effects of drive systems \citep[e.g. ][]{Crow1991}.
As male recombination does not directly change the transmission of a female meiotic drive enhancer, the selection on a male recombination modifier is weak.
Nonetheless, elevated male recombination rates do break down the association between D and C alleles, which ultimately allows male recombination enhancers to escape from centromeric drivers, providing a minor boost to male-limited recombination enhancers. However, this effect is meagre compared to the effect of female recombination modification (Figures \ref{3Locus}B-D).
\begin{figure}
\begin{center}
\includegraphics[width=17cm]{3locus.eps}
\end{center}
\label{3Locus}
\end{figure}
\newpage
Figure 5: {\bf The co-evolution of recombination modifiers and a two-locus drive system.}
The frequencies of drive enhancer, centromeric driver, and recombination modifier alleles are displayed in \textcolor{red}{red}, \textcolor{green}{green}, and {\color{SkyBlue}blue}, respectively.
The correlation between recombination modifier and centromeric driver alleles is denoted by the \textcolor{green}{green} {\color{SkyBlue}blue} line, and its value is given on the right axis of Figure 5B (The scale is maintained in Figure 5C).
The driving centromeric allele completely distorts meiosis in DD and Dd genotypes (i.e. $\alpha_1=\alpha_2=1$), and is a recessive lethal.
Neither the drive enhancer nor the recombination modifier directly influences individual fitness.
The initial recombination rate and allele frequencies are $r=0.10$ and $f_{D(0)} = f_{C(0)} = f_{M(0)} =0.001$, respectively.
{\bf A)} Recombination modification in both sexes ($\delta r_\Venus = 0.025$, $\delta r_\Mars=0.025$).
{\bf B)} Female-limited recombination modification ($\delta r_\Venus = 0.025$, $\delta r_\Mars=0$).
{\bf C)} Male-limited recombination modification ($\delta r_\Venus = 0$, $\delta r_\Mars=0.025$).
{\bf D)} The modifier has a distinct influence on male and female recombination rates ($\delta r _\Mars$, and $\delta r _\Venus$, respectively). \textcolor{green}{Green} indicates an increase in modifier frequency, {\color{Purple} purple}
indicates a decrease. Labels above diagonal lines describe the relative change in allele frequencies over 250 generations.
\newpage
\section{Discussion}
Meiosis and recombination are deeply conserved and highly orchestrated processes, in which slight errors can have severe fitness consequences.
Nonetheless, many of the functional components of meiosis and recombination evolve rapidly \citep[e.g. ][]{Malik2002,Anderson2009,Myers2010}.
One explanation for this rapid evolution is that meiosis and gametogenesis offer a number of opportunities for genomic conflict within an individual, generating a pattern of antagonistic coevolution between selfish gametic drivers and suppressors of meiotic drive \citep[see][ for a broad overview]{Burt2006}.
Since the progression of meiosis and gametogenesis is highly sex-specific \citep{Morelli2005}, we expect that forms of conflict and conflict mediation will also be sex-specific.
The asymmetry in meiotic division during oogenesis presents an opportunity for competition between alternative alleles for representation in the egg \citep{Sandler1957,Zwick1999,Pardo-ManuelDeVillena2001b,Pardo-ManuelDeVillena2001a,Malik2009a}.
Because recombination determines whether variation at a locus is partitioned within or among dyads, the female recombination rate governs the ability of a driver to distort meiosis at MI and MII.
We have shown that female meiotic drive can favor changes in the female recombination rate - female recombination modifiers are selected to enhance or suppress drive (see Table 1) with changes in male rate having little or no effect on the efficacy of female drive.
Our understanding of the frequency, severity, and operation of female meiotic drive systems is still in its infancy.
Therefore, we have not based our population genetic analysis on explicit mechanistic details of female drive.
However, our models can be related to biologically plausible mechanisms.
The model of a single-locus MI driver could correspond to an epigenetic modification of a centromere in \emph{cis}, or a structural stretch of DNA that influences the orientation of the centromere in such a way as to increase its probability of inclusion into the secondary oocyte.
In our two-locus MI drive system, the centromeric locus could correspond to a centromeric satellite that increases its probability of inclusion in the primary oocyte through an interaction with the spindle, while alternative alleles at the drive enhancer locus could represent centromeric proteins which interact with the centromeric machinery to enhance or suppress the effect of drive \citep{Malik2002,Malik2009,Malik2009a}.
Our MII model roughly corresponds to neocentromeric drive systems, such as the Ab-10 locus in maize \citep{Rhoades1966}, or telomeres that influence the orientation of meiotic chromosomes \citep[][see \citet{Anderson2008} for a discussion]{Novitski1951}.
We reiterate that our results depend solely on the ability of recombination to modify drive systems, rather than on any specific drive mechanism.
We have shown that female meiotic drive systems create a selective environment that favors the evolutionary modification of the female recombination rate.
However, selection on female recombination rates does not necessarily lead to heterochiasmy.
For example, in the extreme case where alleles that modify the recombination rate in females have equivalent effects on the male rate, selection on the female rate will not generate heterochiasmy.
However, above we show that our model favors modification of the female recombination rate even if modifiers have opposing effects on the male rate - the standard for models of the evolution of heterochiasmy \citep{Lenormand2003}.
While we still know little about recombination modifiers, current evidence suggests that the genetic control of the global recombination rates differs by sex, and therefore selection on the female rate is likely to generate heterochiasmy.
Three distinct lines of evidence support this tentative conclusion.
First, the global control of meiosis and recombination is sexually dimorphic \citep{Morelli2005}.
Second, the few naturally occurring alleles known to modify the total genetic map length in humans do so in a highly sex-specific manner \citep{Kong2004,Kong2008,FledelAlon2011}.
Third, although there is additive genetic variation for the map length in both sexes, no heritable intersexual correlation in map length has been found \citep{FledelAlon2011}.
The predictions of our model are sensitive to biological details such as the linkage association between drivers and recombination modifiers, as well as the timing of drive (MI vs MII).
This makes it hard to generate concrete predictions about whether female drive will select for higher or lower female recombination rates. Indeed, the fact that female rates are not always higher than male rates suggests that the direction of selection may not be constant.
One concrete prediction is that since the centromere holds a special place in female meiotic drive (in both MI and MII systems) a constant influx of new female drivers will systematically select particularly for heterochiasmy in the regions surrounding centromeres.
The observation in many taxa of higher female recombination rates, especially near the centromere is consistent with two different biological scenarios elaborated in our models.
First, elevated female recombination rates, especially near centromeres, may represent the action of unlinked suppressors to prevent the spread of MI female drivers.
Alternatively, this pattern could be explained by the spread of recombination enhancers linked to MII drivers, which increase in frequency because recombination enhancers facilitate MII female drive. Empirical progress in elucidating the genetic basis of local variation in sex-specific recombination rates and female meiotic drive across the genome will shed light on which (if any) of these two models explain this pattern.
In contrast to global modifiers of the recombination rate, we know a little more about local modifiers of recombination rate, though our picture is still far from complete.
One broad class of \emph{local suppressors} are chromosomal inversions, which seem to be a common response to selection for reduced recombination \citep[see ][ for discussion]{Kirkpatrick2010}.
Inversions are {\it a priori} expected to locally suppress recombination similarly in both sexes and this heterozygous effect will be removed when the inversion is eventually lost or fixed within the population.
Therefore, given our current understanding of local recombination modification, we think it is unlikely that selection for linked suppressors of recombination will strongly contribute to heterochiasmy.
Although we focused on female-limited drive, there are well documented cases of male-limited transmission distortion involving multilocus gene complexes (Note that none of these are true meiotic drive, relying instead on sperm death) \citep{Wu1991}.
A model of the coevolution of a two-locus male drive system and a recombination modifier will yield results similar to the case of two-locus MI female drive described above (with slight differences due to the fitness of recombinants in male systems).
That is, with male-limited drive systems, we expect that recombination modifiers in coupling phase would benefit from reducing the male recombination rate, while unlinked modifiers would benefit from increasing the male recombination rate \citep[see also discussion by ][]{Lenormand2005}.
Evidence for the former is bountiful, as most known male transmission ratio distorters are tied together by complex inversions \citep[e.g. ][]{Presgraves2009}.
However, unlike female meiotic drivers, male distortion systems can arise at any chromosomal location irrespective of the distance to the centromere.
Therefore, even if male distortion systems do select for heterochiasmy, a constant influx of new male systems will not systematically select for sex differences on a chromosomal-scale.
More generally, our models may explain other broad patterns associated with recombination.
One outstanding pattern is the observation that variation in the number of chromosome arms is a better predictor of recombination rate variation among mammalian species than is variation in the number of chromosomes, or the physical size of the genome \citep{Pardo-ManuelDeVillena2001d}.
This result is somewhat surprising because only one recombination event per chromosome is required for proper segregation, arms of metacentric chromosomes are often found to be lacking a crossover \citep{Fledel-Alon2009}, and the centromere seems to offer no barrier to interference in many systems \citep{Broman2000,Fledel-Alon2009,Demarest2011}.
The meiotic drive theory can explain these observations by proposing that modifiers of the recombination rate are selected to increase recombination events between centromeres and potential drivers, as both sides of a chromosome present an opportunity for drivers to exploit meiosis.
Another broad pattern is the observation that heterochiasmy is reduced in selfing plants, which \citet{Lenormand2005} saw as support for their hypothesis of a role of haploid selection on male gametes in driving heterochiasmy.
We note that this observation is potentially consistent with the female meiotic drive hypothesis - since most selfing plants are largely homozygous, there is little opportunity for drive.
The alternation of generations in plants provides exciting opportunities for future research on the evolution of heterochiasmy.
The meiotic drive theory could also explain the observation of rapid changes in recombination rates \citep{Coop2007,Dumont2008,Dumont2011}, as the recombination map will constantly be evolving as recombination enhancers and suppressors respond to new drive systems across the genome.
A greater knowledge of medium-scale patterns of turnover in male and female rates would help clarify the plausibility of meiotic drive in explaining this pattern. For example, the meiotic drive theory would be strongly supported if regions close to centromeres show particularly high rates of female recombination evolution, as is observed in the \emph{Drosophila} clade \citep{True1996}.
Further tests of these predictions require the ability to identify both meiotic drivers and modifiers of the recombination rate.
Currently, our knowledge of the distribution of meiotic drivers and their fitness effects is very incomplete, and is likely biased towards the overrepresentation of strong drivers with extreme fitness effects.
However there is mounting evidence supporting the existence of subtle transmission ratio distorters \citep[e.g. ][however, the mechanism of distortion in these cases is often unknown]{Reed2005,Zollner2004,Aparicio2010,Axelsson2010}.
Similarly, although there is ample evidence that the recombination rate, as well as the strength and direction of heterochiasmy varies across species, few allelic variants that influence sex-specific recombination rates have been identified.
We currently know very little about the frequency of female drivers, or about the genetic control of sex differences in the recombination rate, across many taxa. Fortunately, as technological advances make the sequencing of offspring (or gametes) from many meioses more affordable, identification of alleles governing sex-specific transmission and recombination rates will become much easier.
More broadly, the ideas presented here are part of a larger body of theoretical work that highlights the potentially diverse role of genomic conflict in shaping the evolution of recombination, meiosis and gametogenesis (e.g. \citealp{Sandler1957,Haig1991,Haig2010,Hurst1996, Zwick1999,Malik2009,Anderson2009}; \citealp{Burt2006}, Table 12.3).
For example, conflicts between \emph{cis} and \emph{trans} determinants of hotspot localization due to biased gene conversion have been put forward to explain the rapid evolution of mammalian fine-scale recombination rates and their determinants (such as the hotspot binding protein Prdm9 \citep{Baudat2010,Myers2010,Parvanov2010}) \citep{Boulton1997,Coop2007b,Ubeda2011}.
While much of the classic work on the evolution of recombination and meiosis has focused on the benefits of creating adaptive gene combinations, purging deleterious recessive alleles from an adaptive haplotype, or bringing together two beneficial mutations onto one haplotype \citep[e.g. ][]{Eshel1970,Feldman1996,Otto2002,Barton2009}, it seems equally plausible that the short-term evolution of recombination rates may be in response to conflicts created during meiosis.
\newpage
\section{Acknowledgements}
We thank Chuck Langley for discussions that inspired this paper. This paper was greatly improved by comments from David Haig, Chuck Langley, Pat Lorch, Molly Przeworski, Peter Ralph, Alisa Sedghifar, and Michael Turelli. We thank Thomas Lenormand and the other anonymous reviewer for very helpful feedback. This work was made possible by a NSF Bioinformatics postdoctoral fellowship to YB and support to GC from the Sloan Fellowship in Computational and Evolutionary Molecular Biology.
\input{BrandvainandCoop.bbl}
\end{document}
\section{Introduction}\label{intro}
Supernova remnants are a dominant factor in the dynamics of a galaxy given that their ubiquity and size influence all phases of the interstellar medium (ISM). An investigation of these objects is crucial to understand this interaction and the mechanisms involved with transferring energy and matter to the ISM. The Cygnus Loop provides an excellent laboratory for the study of a galactic supernova remnant as it is large, $\sim3^{\circ}\times3^{\circ}$ \citep{L97}, close by, 540 pc \citep{blair05}, and relatively unabsorbed ($E(B-V)=0.08$; \cite{Fesen}). \citet{L97,L98} performed a detailed study of the optical and X-ray emission, and found that the morphology is not indicative of a typical theoretical blast wave in a uniform medium \citep{sedov,spitzer}. Instead, the precursor wind carved out a nearly spherical cavity made of clumps at high density separated by lower density gas, thus providing the ingredients for a cavity explosion \citep{McCraySnow}. Over the history of this remnant the blast wave has propagated through a low density plasma and has relatively recently interacted with the cavity wall leading to limb-brightened emission. The density enhancements decelerate the blast wave and lead to the X-ray emission.
X-ray studies of specific blast wave-cloud interactions give mixed results. Several studies suggest low temperature ($kT\sim$0.1--0.2 keV) equilibrium conditions within these dense clouds with higher temperatures toward the interior \citep{L02,L05,leahy}. Alternatively, \citet{Katsuda08} and \citet{M07} find nearly all of the best fit temperatures above $kT=0.2$ keV with nonequilibrium models. However, the general trend is a multiple temperature component model that is consistent with a lower temperature exterior and higher temperature interior, which may be additionally heated by reverse shocks arising from the cloud interactions.
The XA region is a complicated, convoluted region of shocks located on the eastern side of the Cygnus Loop, on the south end of NGC 6992 \citep{HesterCox}. Previous studies have looked at this region over a range of wavelengths \citep{Sankrit,Danforth01,MT01}. \citet{Sankrit} find complicated ionization structures within UV shocks. \citet{Danforth01} also perform UV emission line studies and formulate a picture of the XA region where a finger of dense gas is protruding into this region. In addition, \citet{MT01} observed this region with the \textit{ASCA} and \textit{ROSAT} observatories. Similar to the above X-ray studies, they find inhomogeneous structure and explain their results with a reflection-shock model resulting from the interaction of the blast wave with a single cavity wall cloud. More recently, \citet{Zhou} analyzed higher resolution \textit{XMM-Newton} data. They find a low temperature exterior ($kT\sim$0.07--0.15 keV) with depleted abundances and high temperature interior ($kT\sim$0.24--0.46 keV). They argue that the region is dense and clumpy, but identify no small scale features.
The goal of this analysis is to exploit the high spatial resolution of \textit{Chandra} to elucidate structure in the XA region and obtain more detail on the interaction of the blast wave with the precursor formed cavity wall. Combined with the ability to extract spectra, it will be possible to probe the XA region for additional complexities and attempt to model the cavity wall morphology and its physical state.
\section{Observations and Data Reduction}\label{obs}
The bright, shock rich region on the eastern edge of the Cygnus Loop, known as the XA region, was observed with the Advanced CCD Imaging Spectrometer on the \textit{Chandra X-ray Observatory} for 75 ks. The observation was made on 2001 July 28 and archived under observation ID 1961. The region was centered on the S3 chip which is back-illuminated and therefore more sensitive to the large amount of soft X-rays being emitted from this region. This paper concentrates on the interaction of the supernova blast wave with the cavity boundary, which is dominated by soft X-rays. For that reason we have decided to concentrate our analysis on data contained within only the S3 chip giving a $8.3' \times 8.3'$ field of view. The observation produced imaging with a spatial resolution of $\sim1''$ and spectral resolution E/$\Delta$E$\sim$5--20 over an energy band of E$\sim$0.5--1.5 keV (flux above 1.5 keV is negligible).
The level 1 data were reprocessed to level 2 with standard processing procedures in the Chandra Interactive Analysis of Observations (CIAO, v4.1.2, \citet{ciao}) software package with current calibration data from the Chandra Calibration Database (CALDB, v.4.1.0). Good time intervals, charge transfer efficiency, and time dependent gain variations were accounted for. The data were background corrected using extractions from eastern areas of the chip that are devoid of emission.
The reprocessed raw data are displayed in figure~\ref{rawdata}. It is evident that this region is extremely complex with many shocks existing down to the smallest scales. The emission appears to be concentrated in clumps or hot spots as opposed to filamentary shock structures. To elucidate the spectral structure within this region, a false color image, created by discriminating the data via energy, is shown in figure~\ref{color}. The raw data are segmented into 0.3--0.5 keV (red), 0.5--0.8 keV (green), and 0.8--1.5 keV (blue) energy bands. The soft, red emission is dominant on the very eastern edge where the blast wave is initially encountering the cavity wall, although some red emission is seen toward the interior, especially on the south end. The harder, green emission is interior to this red emission as is the hardest, blue emission. To gain more insight into the distribution of these energy regions a ratio map of the soft emission (0.3--0.5 keV) to hard (0.8--1.5 keV) emission is displayed in figure~\ref{ratio}. This shows the predominance of soft emission at the eastern edge where the blast wave is interacting with the cavity wall. Also, there is a large, bright, soft region centrally located in declination, along the front, that extends into the remnant considerably with relatively little high energy contribution. Finally, there is a line of soft emission that extends from the shock front westward into the interior of the remnant along a declination of $\sim31^\circ 01'$.
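For readers wishing to reproduce such band images, the following sketch illustrates the basic construction from a filtered event list (file and column names are placeholders; the actual analysis used CIAO tools, and in practice the band images would be binned and smoothed before dividing):
\begin{verbatim}
# Minimal sketch: build soft (0.3-0.5 keV) and hard (0.8-1.5 keV) band
# images from an X-ray event list and form their ratio.  File and column
# names are placeholders; the analysis in the text used CIAO tools on the
# level 2 event file.
import numpy as np
from astropy.io import fits

with fits.open("acis_evt2.fits") as hdul:          # placeholder file name
    ev = hdul["EVENTS"].data
    x, y = ev["x"], ev["y"]                        # sky pixel coordinates
    energy = ev["energy"] / 1000.0                 # eV -> keV

nbins = 256
bins = [np.linspace(x.min(), x.max(), nbins + 1),
        np.linspace(y.min(), y.max(), nbins + 1)]

soft_mask = (energy > 0.3) & (energy < 0.5)
hard_mask = (energy > 0.8) & (energy < 1.5)
soft, _, _ = np.histogram2d(x[soft_mask], y[soft_mask], bins=bins)
hard, _, _ = np.histogram2d(x[hard_mask], y[hard_mask], bins=bins)

ratio = np.where(hard > 0, soft / hard, np.nan)    # soft/hard ratio map
\end{verbatim}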
\section{Spectral Extractions and Fitting}\label{specext}
We extracted data from specific regions in order to perform spectral analyses. These regions were chosen for their apparent physical qualities based on inspection of the false color image and the band ratio map. The combination of these images can be used to elucidate regions with distinct physical characteristics. Spectrally extracted regions are shown in figure~\ref{regions} overlaid on both the false color image and the band ratio map. Spectra from these regions were created using the \textit{specextract} tool in CIAO. This tool also creates specific ancillary response files and redistribution matrices for each region.
The first region of interest is the soft X-ray dominated rim of the remnant. The extraction regions are depicted by the series of rectangles following the rim and labeled B1--B5. Inspection of the color image shows that these regions are dominated by soft emission. In contrast to these B regions, the regions A1 and A2 are dominated by harder emission with A2 located in a veritable void in the ratio map. The next region of interest is the centrally located soft X-ray enhancement, probed by extraction regions C1 and C2. C1 corresponds to a bright knot of emission in the false color image while C2 corresponds to a bright spot in the ratio map. This area of soft X-ray enhancement appears extremely complex with structures appearing more like clumpy hot spots rather than elongated shock filaments. Another region of interest lies along the westward soft X-ray extension along declination $\sim31^\circ 01'$. Regions E1--E3 are used to investigate this emission. Finally, there is an extremely bright knot in the southeast located at $\sim31^\circ$ of declination and right ascension of 20$^h$ 57$^{min}$ 21$^{sec}$. The high count rates allow for several extractions over this region. A grid of extraction boxes is used to investigate the spectral properties of this bright emission.
To analyze the extraction regions we use the Xspec spectral fitting package v12.5.1n \citep{Xspec}. Only data above 0.5 keV are fit due to the uncertainty of the low energy calibration. The data are also binned to $>30$ counts per bin so that Gaussian statistics can be applied. If the X-ray emission arises from supernova blast wave heating of the ISM, then we expect to see cooling emission predominantly in emission lines. Depending on the density of the plasma and the time since it has been shocked, it will either be in collisional ionization equilibrium (high density or large amount of time since heating) or on the way there, which is described by a nonequilibrium ionization model. In Xspec, the appropriate models are \textit{vequil} for equilibrium conditions, and \textit{vnei} for nonequilibrium conditions \citep{bork01,bork94,Hamilton,liedahl95}. These models are multiplied by a photoelectric absorption model, \textit{phabs}, which accounts for interstellar absorption along the line of sight. Dielectronic recombination rates are taken from \citet{mazz} with abundances from \citet{angr} and cross sections from \citet{bcmc}. The line list used is an improved version of the APED line list \citep{aped1}, which has been developed by Kazik Borkowski and includes more inner shell processes especially for Fe-L lines. Additional fitting was performed using a plane parallel shock model, \textit{vpshock} \citep{bork01,sedov}, but this never resulted in better fit statistics in comparison to the \textit{vequil} or \textit{vnei} models.
Investigation of emission line lists for plasmas with $kT\sim$0.1--0.2 keV shows that the emission is dominated by high ionization states of oxygen, nitrogen, and carbon. At temperatures of $kT>0.2$ keV iron and neon have significant contributions, especially above photon energies of 0.7 keV. The fits presented here typically varied the abundances for some subset of these five elements. All non-varying elemental abundances are set to solar values. Allowing $N_H$ to vary causes it to trend toward unphysically low values with large errors in a handful of regions. Therefore, for all fits the column density of intervening material, $N_H$, was set to $8\times10^{20}$ cm$^{-2}$, consistent with other Cygnus Loop studies \citep{McECash08, Zhou, MT01}. Other variables include the temperature of the plasma, $kT$, a normalization parameter which includes the emission measure, $norm$, and the ionization timescale, $\tau$. The ionization timescale is the product of the time since shock heating and the electron density of the shocked material, $\tau=n_{e}t$, with $\gtrsim10^{11}$ cm$^{-3}$ s typically signifying equilibrium conditions. This parameter is only used in the \textit{vnei} models.
A summary of the best fit parameters is shown in Tables~\ref{tbl1}, \ref{tbl2}, and \ref{tbl3}. Quoted errors are 90\% confidence intervals for the parameter of interest. The A regions are best fit with two temperature components; however, A1 requires two equilibrium plasmas while A2 uses two plasmas not yet in collisional ionization equilibrium. Single temperature \textit{vequil} and \textit{vnei} fits were attempted for A1 and A2 but are unable to produce a $\chi^{2}_{red}<2$. Substituting one of the equilibrium plasmas with a nonequilibrium plasma does not improve the $\chi^2$ statistic for A1, and the $\tau$ parameter is driven to its maximum, thus suggesting equilibrium conditions. Region A2 could only be adequately fit ($\chi^{2}_{red}<2$) with nonequilibrium plasmas in its two component model. These fits tie the abundances of C, N, and O together for each component. In addition to a relatively high $kT$ component, A1 shows the presence of a low temperature plasma at $kT=0.09$ keV while A2 is characterized by only high temperature gas. The abundances in all components are reduced from solar.
In the B regions, all extracted spectra are best fit with a two temperature equilibrium model. It is only necessary to vary the abundance of O to obtain an adequate fit. The abundance of O is tied between the two temperature components. Allowing C and N to vary does not improve the fit statistics, and if allowed to vary, their values are typically driven to zero or to unphysically high values, e.g. tens to hundreds. Again, several combinations of models were attempted to fit each B region. Single temperature fits (\textit{vpshock}, \textit{vequil}, and \textit{vnei}) were incapable of fitting the data from any of the regions. In region B1 only a two component \textit{vequil} model is reasonable. Changing one of the components to nonequilibrium conditions does not improve statistics and the $\tau$ parameter pegs high. Including a nonequilibrium component in the B2 fit does produce a reasonable fit ($\chi^{2}_{red}=1.3$), but an F-test marginally favors the two component equilibrium model (F-test value $=3.1$ with probability $P=0.06$, where $P_{crit}=0.05$). B3 is similar to B1 in that the inclusion of nonequilibrium conditions does not improve the fit and the $\tau$ parameter reaches its upper limit. In addition to the two component equilibrium model, the B4 spectrum can be fit by both a \textit{vequil} $+$ \textit{vnei} model and a \textit{vnei} $+$ \textit{vnei} model ($\chi^{2}_{red}=1.6$ for both), but an F-test clearly favors the double equilibrium model (probabilities of 0.34 and 0.45, respectively). Finally, a \textit{vequil} $+$ \textit{vnei} model fits B5, but with only a marginally improved $\chi^{2}_{red}$, leading to an F-test probability of 0.94 and thus negating the addition of the nonequilibrium model. Again, the O abundance is depleted from solar in all regions, with a range of 0.51--0.82 (where an abundance of 1.0 would signify solar). In each region there is a low temperature, soft component with $kT$ ranging from 0.09--0.12 keV, and a harder component with temperatures ranging from 0.26--0.6 keV.
Table~\ref{tbl2} contains the fit parameters for the bright regions encompassed by C1 and C2. In both cases, the given model is the only reasonable fit obtained for these regions. The C2 fit is very similar to the B regions: O is the only varied abundance in a two component equilibrium model consisting of a low T and high T plasma ($kT=0.11$ and $0.37$ keV). The C1 fit is similar to the A2 fit, but at lower temperatures--an average of 0.19 keV versus 0.27 keV. The abundances of C, N, and O were allowed to vary independently, but each was tied between the two temperature components. The abundances of N and O are again depleted, but there is a suspiciously high upper limit on C. Lines from \ion{C}{6} lie below our energy range and would contribute to the shape and trend of the extracted spectrum at low energies. This high C value is a result of a single high value in the lowest energy bin. Maintaining a constant level of interstellar absorption leads to an increase in the C abundance to compensate for this high bin.
The E regions sample the soft X-ray enhancement that extends to the west. The best fit parameters for these regions are given in Table~\ref{tbl3}. In contrast to the other regions, it is necessary to vary the Ne and Fe abundances in E1--E3 to obtain an adequate fit. This leads to depletions in O and Ne (0.12--0.25 and 0.32--0.41, respectively) with Fe close to or slightly higher than solar (0.8--1.6). Region E1 is adequately fit with \textit{vequil} alone as well as with a two component \textit{vequil} $+$ \textit{vnei} model, but F-tests performed between these models and \textit{vnei} favor the single temperature nonequilibrium model. E3 is much different in that only a single temperature equilibrium fit is valid. Attempts at using \textit{vnei}, alone or in combination, drive the ionization parameter to equilibrium values. Finally, the only model capable of fitting region E2 is a two temperature component \textit{vequil} $+$ \textit{vnei} model. The high temperature nonequilibrium component is similar to regions E1 and E3 in temperature and elemental abundances, but the additional equilibrium component is at a lower temperature with solar abundances.
The extracted spectra from each of the regions discussed above are given in figure~\ref{spectra}. In addition to the extracted data, the best fit model is shown for each region as a solid line. Regions B2 and B3 have been chosen as representative spectra from the B regions. Visual inspection of figure~\ref{spectra} shows that the B and C regions are dominated by soft emission below 700 eV. More specifically, other than the strong \ion{O}{7} triplet from 561--574 eV, no significant emission lines stand out. Alternatively, the emission from the A and E regions appears to be harder or at least contains a higher temperature component. First, in addition to the bright \ion{O}{7} triplet, the presence of \ion{O}{8} at 654 eV, 775 eV, and 817 eV can be inferred. This may solely be due to the improved count statistics in these bright regions. More convincingly, there are humps of significant emission present above 700 eV. This emission is most likely from \ion{Fe}{17} lines, which are expected around 725 eV, 727 eV, 739 eV, 812 eV, and 826 eV, as well as the He-like \ion{Ne}{9} triplet around 905--922 eV. The peaks of emission lines are systematically underestimated, which may point to an error or conservatism in the redistribution matrices.
Of final note, the bright knot in the southeast stands out even in this highly complex field. In an attempt to exploit the high spatial resolution of \textit{Chandra}, we extract 42 spectra from this region using a fine grid of boxes, each measuring $27\times22$ pixels (Figure~\ref{grid}). Given a distance of 540 pc, each box probes a region of only $0.034\times0.028$ parsecs at the remnant. The extracted spectra were fit with the same round of models, but only the two temperature component models provide an adequate fit. In general, these regions are well fit with a \textit{vequil} $+$ \textit{vnei} model. Of the 42 regions, 7 were fit statistically better (via F-test) with a double nonequilibrium model. Parameter maps of the best fit parameters for $kT$ and $n_e$ over the grid are shown in figure~\ref{gridparams}, where $n_e$ is calculated from the fit normalization. The lower temperature component corresponds to $kT_1$ and $n_{e1}$ and is typically in equilibrium, while the higher temperature component has $kT_2$ and $n_{e2}$ and is always in nonequilibrium. The average error in the equilibrium temperature is $\sim3\%$ and for the nonequilibrium temperature it is $\sim4\%$. The average errors in density range from 8--11$\%$. The abundances of C, N, and O were allowed to vary but were tied to each other. The abundances were typically depleted relative to solar to a similar degree as in the other regions (0.3--0.8 times solar), with a few grid boxes around solar values. The $\chi^{2}_{red}$ has an average and standard deviation of $1.5\pm0.4$ over the grid.
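For reference, the quoted physical box size follows from small-angle geometry; a short sketch (assuming the nominal ACIS sky-pixel scale of roughly 0.492 arcsec per pixel) is:
\begin{verbatim}
PIXEL_SCALE_ARCSEC = 0.492      # assumed ACIS sky-pixel scale
ARCSEC_PER_RADIAN = 206265.0
DISTANCE_PC = 540.0             # adopted distance to the Cygnus Loop

def box_size_pc(nx_pix, ny_pix):
    """Convert an nx-by-ny pixel extraction box to parsecs at the remnant."""
    to_pc = PIXEL_SCALE_ARCSEC / ARCSEC_PER_RADIAN * DISTANCE_PC
    return nx_pix * to_pc, ny_pix * to_pc

print(box_size_pc(27, 22))      # roughly (0.035, 0.028) pc, as quoted above
\end{verbatim}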
\section{Interpretation}\label{interp}
In general, the extracted spectra are best fit by two temperature component models. A handful of scenarios may be used to explain this phenomenon. First, the region may be merely confused due to multiple, physically distinct plasmas along the line of sight. This effect is expected to become less of an issue toward the rim and more significant for the interior regions. Second, the region may encompass a high density region and a low density region, both of which have been recently shocked. The high density region will equilibrate (larger ionization parameter, $n_et$) and cool more rapidly than the low density region, thus leaving two temperature components. Finally, a similar situation may exist with two distinct density regions, but regions that are causally linked. Such a situation may occur when the shock wave passes out of the precursor formed cavity and into the cavity wall. The high density gas in the cavity wall will again equilibrate more rapidly, but the low density region is located interior to the wall and a reverse shock will reshock and additionally heat this interior gas.
In order to distinguish between these scenarios (or develop a new one) we calculate the densities ($n_e$) for each of the model components in all regions. The results are given in Table~\ref{tbl4}. The density is calculated using the $norm$ parameter within each component where $norm=[10^{-14}/(4 \pi D^{2})] \int n_{e}n_{H}dV$. A distance of 540 pc is used for $D$ and it is assumed that $n_e=n_H$. The volume is calculated using the area of each region multiplied by the depth of the column of remnant gas each region intersects. This depth is found by assuming the remnant is a filled sphere with radius 1.5$^{\circ}$ or 14 pc and calculating a chord length based on the radial position of each region. It is assumed that the B regions sample the outermost radii of the sphere. These assumptions probably lead to the error in volume being the largest contributor to the error in density so that the densities in table~\ref{tbl4} should be treated as approximate. However, since the volume between components within a region remains constant, the relative density between components in a given region is accurate.
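A sketch of this density estimate is given below (the norm, region area, and projected radius used here are placeholders rather than values from Table~\ref{tbl4}):
\begin{verbatim}
import numpy as np

PC_TO_CM = 3.086e18
D_CM = 540.0 * PC_TO_CM          # adopted distance
R_REMNANT_PC = 14.0              # assumed radius of the filled sphere

def chord_length_pc(r_proj_pc, r_pc=R_REMNANT_PC):
    """Line-of-sight depth through a filled sphere at projected radius r."""
    return 2.0 * np.sqrt(max(r_pc**2 - r_proj_pc**2, 0.0))

def electron_density(norm, area_pc2, r_proj_pc):
    """n_e from the Xspec norm, assuming n_e = n_H and a uniform region.

    norm = 1e-14/(4 pi D^2) * integral(n_e n_H dV)
        => n_e = sqrt(norm * 4 pi D^2 * 1e14 / V)
    """
    volume_cm3 = area_pc2 * chord_length_pc(r_proj_pc) * PC_TO_CM**3
    return np.sqrt(norm * 4.0 * np.pi * D_CM**2 * 1.0e14 / volume_cm3)

# Hypothetical example: norm = 1e-3 for a 0.2 pc^2 region near the rim.
print(electron_density(1.0e-3, 0.2, 13.9))
\end{verbatim}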
The most evident trend is in the B regions. In all 5 regions the lower temperature component ($\langle kT \rangle \sim0.10$ keV) exhibits a significant density enhancement: on average, over an order of magnitude higher than the high temperature component ($\langle kT \rangle \sim0.38$ keV). This result, combined with the location of the B regions at the easternmost edge of the X-ray emission, strongly suggests that these B regions trace out the location of the cavity wall. The high density, low temperature component represents the high density enhancements in the cavity wall itself, while the low density, high temperature gas most likely lies just interior to the wall in the precursor formed cavity. The blast wave has shock heated high density clouds in this wall, and reverse shocks from this interaction propagate toward the interior. The high density gas equilibrates rapidly, cools via emission lines (predominantly \ion{O}{7}), and has a lower temperature than the lower density interior, which has been reshocked and takes longer to cool.
It may be expected that the interior lower density gas is in nonequilibrium, but that is only true if it has been very recently shocked, i.e. $t$ and $n_e$ are sufficiently low to give a low ionization parameter. However, in all B regions the high temperature component can never be fit with a nonequilibrium plasma, and any attempts drive the ionization parameter to its upper limit, which signifies equilibrium and therefore a longer timescale since being shocked. Solving the Rankine-Hugoniot relations for a strong shock and $\gamma=5/3$ gives the post-shock temperature as a function of shock velocity, $T=(3/16)\mu m_{\rm H} v^2/k$, with $\mu=1.2$. This can be easily applied to the cavity wall, where the average shock velocity in this high density gas is 209 km/sec. If we then assume a Sedov solution, the time since shock heating can be calculated as $t=(2/5)R/v$, where $R$ is the radial shock location, 14 pc. While this is an inaccurate assumption in the cavity wall, it provides an upper limit to the time since the shock velocity in the dense gas is lower than the Sedov velocity. This upper limit is $\sim$26,000 years. Ideally we could use the shock velocity combined with visible spatial offsets between the shocks in the cloud and the reverse shock to estimate a propagation timescale, but the resolution and/or count rate is not present in this observation. The situation for the interior gas is even more complicated if we assume it has been shocked twice, once by the initial supernova blast wave and again by a reverse shock. The contribution from each of these shocks to the temperature of the gas is uncertain. Therefore, a similar calculation will overestimate the shock velocity. Even so, the high temperature component in these regions is still best fit with an equilibrium plasma, which suggests high densities and/or long timescales since shock heating. Therefore, this crude age estimation, the more precise velocity calculation, and the large ionization timescales suggest that the B regions have equilibrated due to their high density and possibly also due to a significant time since being shocked.
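The velocity and age quoted above follow from these two relations; a minimal sketch is:
\begin{verbatim}
import math

K_B = 1.380649e-16     # erg / K
M_H = 1.6726e-24       # g
KEV_TO_K = 1.1605e7    # temperature equivalent of 1 keV
PC_TO_CM = 3.086e18
MU = 1.2               # mean mass per particle in units of m_H

def shock_velocity(kT_keV):
    """Invert T = (3/16) mu m_H v^2 / k for a strong shock (gamma = 5/3)."""
    T = kT_keV * KEV_TO_K
    return math.sqrt(16.0 * K_B * T / (3.0 * MU * M_H))   # cm / s

def sedov_age_yr(kT_keV, R_pc=14.0):
    """Upper-limit age t = (2/5) R / v assuming a Sedov blast wave."""
    return 0.4 * R_pc * PC_TO_CM / shock_velocity(kT_keV) / 3.156e7

v = shock_velocity(0.10)   # low-temperature component: ~2.1e7 cm/s (~210 km/s)
t = sedov_age_yr(0.10)     # ~2.6e4 yr, the upper limit quoted above
\end{verbatim}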
In other regions the situation is complicated by their interior locations and presumed line of sight confusion. For instance, the A regions exhibit similar properties to the B regions, with high densities typifying the low temperature component. This effect is more pronounced in region A1, given the more distinct temperature difference between components in comparison to region A2. However, given the location of these regions, it is more likely that line of sight confusion leads to multiple components rather than probing a single distinct cavity wall/shock interaction as with the B regions. Inspection of figure~\ref{regions} shows that A1 is dominated by hard emission, but may lie close enough to the rim of the remnant to pick up some low temperature cavity wall emission along the line of sight. This is supported by the presence of a very high density, low temperature component in the A1 fit. Also, the presence of soft X-rays is somewhat evident in both the false color and ratio maps. In region A2, though, there is a distinct absence of soft X-ray emission in figure~\ref{regions}. The A2 line of sight is located interior to A1 and therefore probes through a much larger column of the high temperature gas, which dominates as shown by the best fit temperatures of $kT\sim$0.23--0.30 keV. If we assume that the low temperature component is completely contained within a shell of thickness comparable to a B region, and use a radius of 14 pc for the location of the cavity wall, then we can calculate the relative contributions of the high and low temperature components in each of these A regions. Region A1 probes a total shell volume of 0.015 pc$^3$ and an interior volume of 0.12 pc$^3$, while the A2 line of sight contains a shell volume of 0.011 pc$^3$ and an interior volume of 0.16 pc$^3$. This estimates that region A2 contains approximately twice as much interior, high temperature emission relative to low temperature emission. The lower density, high temperature plasmas of A2 are consistent with nonequilibrium conditions for this interior region. Furthermore, the interior plasmas of these A regions display significant emission from higher energy lines, most likely \ion{Fe}{17} and \ion{Ne}{9}, also consistent with the high temperatures in these regions. Therefore, while the A regions probably encompass some cavity wall emission, some reverse shock emission, some singly shocked low density hot emission, or some combination of all of the above, it is unclear if the temperature/density components are related given that these regions probe an interior sightline that is prone to confusion.
The C regions follow a similar trend as the A regions. C2 shares properties of A1; the line of sight probes both a low temperature, equilibrium plasma near the edge of emission and hotter interior gas as well. The presence of the low temperature component is quite evident upon examination of the ratio map. C2 outlines the brightest region in this map, so it is not surprising that there is a significant soft component. This component is probably due to a density enhancement in the cavity wall ($n_e=5.6$), but possibly located in the foreground, toward the observer in comparison to the B regions, assuming a spherical shape to the remnant. However, the exact location of the soft emission along the line of sight is complicated by the high degree of complexity in this area as seen in the raw data and color image. Region C1 gives no further clues either, given the distinctly different fit parameters in comparison to C2. This particular line of sight probes mostly higher temperature gas. It is quite bright in the raw data, but shows no significant abundance of soft X-ray emission in the ratio map, especially in comparison to C2. The lack of soft X-rays leads to the relatively higher fit temperatures ($kT=0.19$ keV versus 0.10 keV on average for soft X-ray components), while the bright emission is consistent with the high density, i.e. large emission measure, of this region. Region C1 is the only example of a high density, high temperature, nonequilibrium plasma in any of the extracted regions. If this density enhancement is in nonequilibrium, then it must have been recently shocked. Using the ionization parameter and the calculated density shows that this plasma was most likely shocked within the last 1000 years. It is possible that this cloud traces out where the blast wave is interacting with the cavity wall but at a larger radius than the B regions, thus leading to the more recent shock time. If this is the case, then the cloud must be located considerably in the foreground relative to B. Therefore, the C regions are again confused by line of sight projection yet show the presence of probable cavity wall emission in addition to the hotter, lower density, interior emission.
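The shock-timing argument follows directly from $t=\tau/n_e$; a short sketch (the $\tau$ and $n_e$ values below are placeholders, not the fitted C1 values) is:
\begin{verbatim}
SECONDS_PER_YEAR = 3.156e7

def time_since_shock_yr(tau, n_e):
    """t = tau / n_e, with tau the fitted ionization timescale in cm^-3 s."""
    return tau / n_e / SECONDS_PER_YEAR

# Hypothetical example: tau = 3e11 cm^-3 s and n_e = 10 cm^-3 give ~950 yr.
print(time_since_shock_yr(3.0e11, 10.0))
\end{verbatim}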
The picture is not so clear in the E regions. These extractions are even farther toward the interior than the A regions, but seem to have simpler fits. For instance, a single temperature equilibrium plasma is the only reasonable fit obtained for region E3. One might expect that a multiple component fit at high temperature and nonequilibrium would be favored. However, these regions were chosen because they sample the soft X-ray extension that protrudes westward from the edge of the remnant (see Figure~\ref{regions}). It may be the case that region E3 looks directly at a dense cavity wall cloud, which obscures the view into the hotter, low density interior. The calculated density of 3.0 cm$^{-3}$ is not overly dense, but still a few to several times denser than interior densities seen in other regions. Region E2, on the other hand, is well fit with a two component model containing both a low temperature equilibrium plasma and a high temperature nonequilibrium plasma. This again may be indicative of cavity wall emission in a dense medium that has equilibrated and cooled, combined with interior plasma that is at a higher temperature (possibly reshocked) and lower density, leading to nonequilibrium conditions. However, the calculated densities are not as clear cut as they are in previous regions. The low temperature component for E2 is at approximately the same density as the high temperature component, thus lending no resolution to the line of sight confusion. Inspection of the color map shows that while E3 is situated in the midst of soft emission, E2 is sampling a region with a mix of low and higher energy emission with no obvious distinction. Finally, E1 is even further afield from the soft emission of E3 than E2 and appears to be dominated by harder X-rays. However, the best fit spectrum is only marginally hotter than E3 at 0.17 keV and at nearly the same density. The main difference between these regions is that E1 appears to be in nonequilibrium and therefore more recently shocked relative to E3. Perhaps E1 is a medium density cloud that has been reshocked by a reverse shock originating in the denser cavity wall. While the E regions provide interesting hints into the morphology and possible interactions in this section of the Cygnus Loop, it is impossible to state anything definitively given the confusion inherent to these interior regions.
The grid analysis of the bright southeast knot shown in Figure~\ref{gridparams} has presented some intriguing results as well. Inspection of the equilibrium maps of $kT_1$ and $n_{e1}$ reveals that they appear almost as negatives of one another. As expected, the highest densities correspond to areas of bright emission, with density falling off toward the edges of the grid. These dense pockets have slowed the shock considerably more than the surrounding shock regions. The slower shock velocity results in a correspondingly lower post-shock temperature. The densities calculated here are similar to those found in the cavity wall along the B regions. This bright area of emission could be an indicator of a protrusion into the cavity wall, or mark a particularly large, dense cloud along the cavity wall slightly in the foreground relative to the B regions, or a combination of the two. Outside of the bright emission the color map contains harder emission, which is consistent with the higher temperature fits in these grid squares. These regions could be located outside of the dense cloud and probe interior gas in the background, along the line of sight. The brightest knot in the southeast, which displays high density, is located close to the edge of the X-ray emission. In contrast to region B, which appears to follow an $\sim$0.43 pc vertically elongated section of the cavity wall, this particular emission seems to be located in a more discrete cloud. This cloud could possibly be oriented along the line of sight, which may help in explaining the intensity of emission and the apparent indentation in the X-ray emission at this point. Previous studies of the XA region have explained the X-ray emission as arising from a single, large protrusion which is responsible for the entire field of view studied here. However, the fine spatial resolution of \textit{Chandra} has uncovered much smaller blast wave/cloud interactions, possibly as small as a single grid square, 0.035 pc $\times$ 0.028 pc.
\section{Conclusions}\label{conc}
The high spatial resolution of \textit{Chandra} has given us insight into the structure of the complicated XA region. Specifically, the chain of B regions appears to follow the location of the blast wave interaction with the cavity wall. These B regions are located at the very edge of X-ray emission and are dominated by soft X-ray emission. Spectral analysis has shown low temperature gas at densities an order of magnitude higher than more recently shocked high temperature gas. Shock heated gas in the high density cavity wall has equilibrated rapidly and is efficiently cooling through \ion{O}{7} emission. A reverse shock has propagated from the initial supernova blast wave interaction with the cavity wall and has reheated the interior gas. Furthermore, the time since interaction of the reverse shock with the interior gas has been long enough for the plasma to equilibrate as well. Given this interpretation, the precursor wind blew a cavity that is $\sim$14 pc in radius.
The identification of the cavity wall is simpler and more robust than identifying structure interior to the edge. While the color map shows a general trend of soft X-ray emission on the exterior with harder emission toward the interior, this trend is not obvious in discrete extractions. Regions interior to the B regions have line of sight confusion and a significant chord length through the spherically shaped remnant. Some of these regions show strong signs of including the cavity wall (high density, low temperature components), but the interpretation is less obvious given that these regions are not located at the rim of emission. Furthermore, there is a mix of temperatures, densities, and ionization states with no strong trends. However, we feel that this is a significant finding given that previous studies support a much simpler morphology, whether it be a single finger-like protrusion into this region or a few clumps. We would argue that the fine spatial resolution here has uncovered a situation where any region removed from the edge of emission will have significant line of sight confusion and may or may not include cavity wall emission, interior emission, interior reverse shock emission, or multiple components of each. This is not to say that any given region is physically disconnected from another, since it is definitely the case that the X-ray morphology arises from a global interaction of the blast wave with a precursor formed cavity wall. However, the intricate details of the X-ray emission, and therefore the physical parameters of the plasma, can vary over the smallest scales. This effect is evident in the gridded extraction region around the bright knot of emission in the southeast. The calculated densities in the grid show the presence of an isolated blast wave/cloud interaction possibly dominated by only a few grid sections. These data display a significant increase in resolution over the previously highest resolution study \citep{Zhou}, where extraction regions are larger than the entire grid.
The Cygnus Loop is clearly complex both spatially and spectrally. \textit{Chandra} can image fine structures indicative of distinct interaction regions, but the nearly broadband spectra are lagging in quality. The next step in understanding the intricacies inherent to the interaction of this supernova with the ISM is to obtain high quality spectral data. Higher spectral resolution is required to constrain the parameters of the models and test their validity. Grating instruments onboard existing X-ray telescopes are capable of high spectral resolution but are designed for point sources. The diffuse nature of galactic supernova remnants causes current grating observations to be signal limited with blurred spectral resolution. To increase understanding of these objects, future instruments will need to be capable of performing high efficiency and high spectral resolution observations of diffuse soft X-ray sources.
\section{Acknowledgments}\label{ack}
The authors would like to acknowledge internal funding initiatives at the University of Iowa for support of this work. The data used here were obtained from the Chandra Data Archive.
\section{Introduction}
Studies on the breakdown of symmetries and their restorations have been very useful in the analysis of phenomena related to phase transitions. Such studies applied to {\it Quantum Chromodynamics} (QCD) are particularly fascinating since QCD possesses a very complicated phase structure. Among them, the (partial) restoration of chiral symmetry at finite temperature ($T$) and/or quark chemical potential ($\mu$) has been one of the most interesting and stimulating subjects for decades. A great number of theoretical and experimental endeavors have been devoted to this subject.
In particular, the high-$T$ and low-$\mu$ region, which resembles the early universe, shows a notable feature in the restoration pattern.
Namely, there is a crossover phase transition for nonzero current-quark mass ($m_{f}$) and a second-order one for $m_{f}=0$. These distinctive patterns of the phase transition are consistent with the universal-class argument of the three-dimensional Ising model~\cite{de Forcrand:2003hx}, and turn out to be highly nontrivial in QCD~\cite{Aoki:2006we,Endrodi:2011gv}. Moreover, the critical endpoint (CEP)~\cite{Gavai:2004sd,Fujii:2003bz}, the tri-critical point (TCP)~\cite{Schaefer:2006ds}, and the critical chiral phase transition on the $T$-$\mu$ plane have also been attracting much interest.
Recently, with the energetic progress of heavy-ion-collision (HIC) facilities, such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), one can now probe hot and dense QCD matter, i.e. the {\it quark-gluon plasma} (QGP), experimentally.
It has been reported that very strong magnetic fields, on the order of several times $m^{2}_{\pi}\,[\mathrm{GeV}^{2}]$, can be produced in noncentral (peripheral) heavy-ion collision experiments by the STAR collaboration at RHIC~\cite{:2009txa,:2009uh}. Owing to this strong magnetic field and the $CP$-violating domains created inside the QGP, signals for possible $P$ and $CP$ violations were observed as a charge separation along the direction of the external magnetic field, which is perpendicular to the collision plane. Theoretically, this phenomenon is nothing but the axial anomaly of electromagnetic currents~\cite{Kharzeev:2004ey,Voloshin:2004vk,Kharzeev:2007jp,Voloshin:2008jx,:2009txa,:2009uh,Fukushima:2008xe,Fukushima:2009ft,Warringa:2009rw,Asakawa:2010bu}. The charge separation at relatively low $T$ has already been investigated within the instanton-vacuum framework by one of the authors (S.i.N.)~\cite{Nam:2009jb,Nam:2009hq,Nam:2010nk}, in which related works and references can be found. Actually, even before this sort of study attracted much interest due to the energetic progress of HIC physics, QCD under a magnetic field had been an important subject~\cite{Menezes:2008qt,Boomsma:2009yk,Gatto:2010qs,Mizher:2010zb,Nam:2010mh}. Inside QCD matter in the presence of an external magnetic field, the spins of the quarks are aligned along the direction of the induced magnetic field according to their helicities. As a result, the quark-antiquark pair couples strongly, a phenomenon known as {\it magnetic catalysis}~\cite{Boomsma:2009yk,Miransky:2002rp}. Hence, taking into account that the order parameter of the spontaneous breakdown of chiral symmetry (SB$\chi$S) is the chiral (quark) condensate $\langle \bar{q}q\rangle $, one can expect that SB$\chi$S will be enhanced by the presence of the external magnetic field. Accordingly, there appear specific consequences: 1) the enhancement of the critical $T$ for SB$\chi$S, $T_c$, 2) the increase of the constituent-quark mass, and 3) modifications in the low-energy constants (LEC). In the present work, we will address all of these interesting consequences. To date, there have been many related works, for instance, from the Nambu-Jona-Lasinio (NJL) model~\cite{Menezes:2008qt,Boomsma:2009yk}, holographic QCD~\cite{Filev:2009xp,Callebaut:2011uc}, lattice QCD simulations~\cite{D'Elia:2011zu}, the linear sigma model~\cite{Mizher:2011wd}, Polyakov-loop inspired models~\cite{Ruggieri:2011qy}, and so on.
In the present work, we want to investigate the (partial) chiral restoration under a strong external magnetic field in QCD matter. For this purpose, we employ the instanton-liquid model, modified by the Harrington-Shepard caloron solution at finite $T$~\cite{Harrington:1976dj}. Although this approach does not manifest quark confinement, unlike the nontrivial-holonomy caloron, i.e. the Kraan-van Baal-Lee-Lu caloron~\cite{Kraan:1998pm,Lee:1998bb}, it is, as shown in Refs.~\cite{Nam:2009nn,Nam:2009jb,Nam:2009hq,Nam:2010nk,Nam:2010mh}, a useful nonperturbative method to study QCD matter at finite $T$. To include the external magnetic field, we make use of the linear Schwinger method~\cite{Schwinger:1951nm,Nam:2009jb,Nam:2009hq}. Besides, we also take into account the meson-loop corrections (MLC) for the SU(2) light-flavor sector as large-$N_c$ corrections. The MLC are essential to reproduce the correct current-quark mass dependence of relevant physical quantities~\cite{Nam:2008bq} and the universal-class chiral restoration pattern at finite $T$~\cite{Nam:2010mh}.
Our numerical results show that the chiral order parameters, such as the constituent-quark mass and the chiral condensate, are enhanced as $B_0$ increases, owing to the magnetic catalysis effect. There is a region where the $u$- and $d$-quark constituent masses coincide, at $eB_0\approx(7\sim9)\,m^2_\pi$, even in the presence of explicit isospin-symmetry breaking, i.e. $m_u\ne m_d$. The critical $T$ for the chiral restoration, $T_c$, tends to be shifted appreciably higher in the presence of $B_0$ in the chiral limit. On the contrary, $T_{c}$ remains almost stationary for the physical quark mass case.
The strength of the isospin breaking between the $u$ and $d$ quark condensates is also explored in detail by defining the ratio $\mathcal{R}\equiv(\langle iu^\dagger u\rangle-\langle id^\dagger d\rangle)/(\langle iu^\dagger u\rangle+\langle id^\dagger d\rangle)$ as a function of $T$ and $B_0$. Finally, we compute the pion weak-decay constant $F_\pi$ and the pion mass $m_\pi$ below $T_c$ as functions of $T$ and $B_0$, showing correct partial chiral restoration behaviors. Our results also show that the changes of $F_\pi$ and $m_\pi$ due to the magnetic field are relatively small in comparison to those caused by the finite-$T$ effect.
The present work is structured as follows: In Section II, we make a brief introduction of the basic instanton-liquid model in vacuum without MLC and explain typical procedures to compute relevant quantities for further discussions. In Section III, we consider the inclusion of the MLC and the external magnetic field. The $T$-dependent modification of the instanton parameters using the Harrington-Shepard caloron is given in Section IV. Taking into account all the ingredients in the previous Sections, we derive the expressions for the saddle-point equation and the chiral condensate as functions of $T$ and $B_0$ in Section V. Section VI is devoted to presenting numerical results and associated discussions, including the analysis of the pion properties at finite $T$ and $B_0$. Summary and conclusion are given in Section VII.
\section{E$\chi$A in the leading order of the large $N_{c}$ expansion in vacuum}
We start by making a brief introduction to the instanton-liquid model in vacuum. This theoretical framework is characterized by the average inter-(anti)instanton distance $\bar{R}\approx1$ fm and the average (anti)instanton size $\bar{\rho}\approx1/3$ fm~\cite{Diakonov:1985eg,Diakonov:1995qy}. The effective chiral action (E$\chi$A) in the leading order (LO) of the $1/N_{c}$ expansion can be written in Euclidean space as follows:
\begin{eqnarray}
\label{eq:EA1}
\mathcal{S}_{\mathrm{eff}}[m_{f}]&=&\mathcal{C}
+\mathcal{N}\ln\lambda+2\sigma^{2}
-\int\frac{d^4k}{(2\pi)^4}\mathrm{Tr}_{c,f,\gamma}
\ln\left[\frac{\rlap{/}{k}+i[m_{f}+M(k)] }{\rlap{/}{k}+im_{f}}\right],
\end{eqnarray}
where $\mathcal{C}$, $\mathcal{N}$, $\lambda$, $\sigma$, and $m_{f}$ correspond to a constant irrelevant for further discussions, the instanton number density (packing fraction), the Lagrange multiplier to exponentiate the $2N_{f}$-'t Hooft interaction, the saddle-point value of the chiral condensate, and the current-quark mass for flavor $f$ in the SU(2) light-flavor sector, respectively. The $\mathrm{Tr}_{c,f,\gamma}$ indicates the trace over the color, flavor, and Lorentz indices. Detailed explanations of these instanton-related quantities are given in Refs.~\cite{Diakonov:1985eg,Diakonov:1995qy}. In this picture, quarks move inside the (anti)instanton ensemble and flip their helicities. As a result, (anti)quarks dynamically acquire momentum-dependent effective masses, i.e. constituent-quark masses. Assuming that the zero modes dominate the low-energy phenomena, we can write the Dirac equation for a quark in the (anti)instanton background as follows:
\begin{equation}
\label{eq:ZERO}
\left(i\rlap{/}{\partial}+\rlap{\,/}{A}_{I\bar{I}} \right)\Phi_{I\bar{I}}=0,
\end{equation}
where $A_{I\bar{I}}$ and $\Phi_{I\bar{I}}$ denote, respectively, the singular-gauge (anti)instanton solution and the eigenfunction of the Dirac equation in coordinate space~\cite{Diakonov:1985eg}. By performing a Fourier transformation of $\Phi_{I\bar{I}}$, one is led to a momentum-dependent effective quark mass:
\begin{equation}
\label{eq:MDEQM}
M(k)\equiv M_{a}=M_{f}F^{2}(k),\,\,\,\,
F(k)=2t\left[I_{0}(t)K_{1}(t)-I_{1}(t)K_{0}(t)-\frac{1}{t}I_{1}(t)K_{1}(t) \right],\,\,\,\,t\equiv\frac{|k|\bar{\rho}}{2},
\end{equation}
where $M_{f}$ stands for the constituent-quark mass for each flavor $f$, $K_{n}$ and $I_{n}$ are the modified Bessel functions~\cite{Diakonov:1995qy}. Note that the $F(k)$ can be interpreted as a quark distribution and plays the role of a natural UV regulator. Hence, in the instanton approach, UV divergences are regularized by construction without inserting any artificial form factors by hand. In practice, it is much easier to employ a parametrized form of the $F(k)$ as in Refs.~\cite{Nam:2009jb,Nam:2009hq,Nam:2010nk}:
\begin{equation}
\label{eq:FFPARA}
F(k)=\frac{2}{2+k^{2}\bar{\rho}^{2}}.
\end{equation}
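As a quick numerical cross-check, the Bessel-function form in Eq.~(\ref{eq:MDEQM}) and the parametrization in Eq.~(\ref{eq:FFPARA}) can be compared directly; a minimal sketch (with $\bar{\rho}=1/3$ fm) is:
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1, k0, k1

HBARC = 0.1973                 # GeV fm
RHO_BAR = (1.0 / 3.0) / HBARC  # average instanton size in GeV^-1

def F_exact(k):
    """Quark form factor from the instanton zero mode (Bessel-function form)."""
    t = np.abs(k) * RHO_BAR / 2.0
    return 2.0 * t * (i0(t) * k1(t) - i1(t) * k0(t) - i1(t) * k1(t) / t)

def F_param(k):
    """Simple pole-type parametrization of the same form factor."""
    return 2.0 / (2.0 + k**2 * RHO_BAR**2)

k = np.linspace(0.05, 2.0, 40)               # momenta in GeV
rel_diff = (F_exact(k) - F_param(k)) / F_exact(k)
\end{verbatim}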
From the E$\chi$A we derive the following self-consistent (saddle-point) equations. They are used to determine relevant quantities such as the constituent-quark mass $M_{f}$ at zero-momentum transfer $k^2=0$ in Eq.~(\ref{eq:MDEQM}):
\begin{equation}
\label{eq:SCE}
\frac{\partial\mathcal{S}_{\mathrm{eff}}[m_{f}]}{\partial\lambda}=0,
\,\,\,\,
\frac{\partial\mathcal{S}_{\mathrm{eff}}[m_{f}]}{\partial\sigma}=0.
\end{equation}
Similarly the chiral condensate can be computed by differentiating the E$\chi$A with respect to $m_{f}$:
\begin{equation}
\label{eq:CC}
-\frac{1}{N_{f}}
\frac{\partial\mathcal{S}_{\mathrm{eff}}[m_{f}]}{\partial m_{f}}
=\langle iq^{\dagger}q\rangle .
\end{equation}
From the first equation in Eq.~(\ref{eq:SCE}), and Eq.~(\ref{eq:CC}), one obtains the expressions for the LO contributions for the instanton number density and the chiral condensate as functions of $m_{f}$ as follows:
\begin{equation}
\label{eq:LO0}
\mathcal{N}_{\mathrm{LO}}
=2N_{c}N_{f}\int\frac{d^4k}{(2\pi)^4}
\left[\frac{M_{a}\bar{M}_{a}}
{k^{2}+\bar{M}^{2}_{a}}\right],\,\,\,\,
\langle iq^{\dagger}q\rangle_{\mathrm{LO}}
=4N_{c}\int\frac{d^4k}{(2\pi)^4}
\left[\frac{\bar{M}_{a}}{k^{2}+\bar{M}^{2}_{a}}
-\frac{m_{f}}{k^{2}+m^{2}_{f}} \right].
\end{equation}
Here we define $\bar{M}_{a}=m_{f}+M_{a}$. The value of ${\cal N}_{LO}$ is determined from the parameter $\bar{R}$ and its phenomenological value is $ \sim(200\,\mathrm{MeV})^{4}$~\cite{Diakonov:1995qy}.
In the chiral limit $m_f=0$, one solves the first equation of Eq.~(\ref{eq:LO0}) self-consistently. The value of $M_{f}$ turns out to be about $325$ MeV, which is consistent with the constituent-quark mass employed in usual quark models, i.e. $3M_f\approx M_\mathrm{nucleon}$. Inserting this value of $M_{f}$ into the second equation, we have $\langle iq^{\dagger}q\rangle \equiv-\langle \bar{q}q\rangle \approx(235\,\mathrm{MeV})^{3}$ for the SU(2) light-flavor sector. Again, this value of the chiral condensate agrees well with phenomenologically accepted values.
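A minimal numerical sketch of this procedure in the chiral limit (taking $\bar{\rho}\approx1/3$ fm and $\mathcal{N}\approx(200\,\mathrm{MeV})^{4}$ as inputs; the momentum cutoff and bracketing interval are assumptions of the sketch) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 0.1973                # GeV fm
RHO = (1.0 / 3.0) / HBARC     # average instanton size in GeV^-1
N_C, N_F = 3, 2
N_INST = 0.2**4               # packing fraction, (200 MeV)^4 in GeV^4

def packing_fraction(Mf):
    """LO saddle-point integral in the chiral limit m_f = 0."""
    def integrand(k):
        M = Mf * (2.0 / (2.0 + k**2 * RHO**2))**2
        # d^4k/(2 pi)^4 -> k^3 dk / (8 pi^2) after the angular integration
        return k**3 * M**2 / (k**2 + M**2)
    val, _ = quad(integrand, 0.0, 50.0, limit=200)
    return 2.0 * N_C * N_F * val / (8.0 * np.pi**2)

# Solve packing_fraction(Mf) = N_INST for the constituent-quark mass.
Mf = brentq(lambda m: packing_fraction(m) - N_INST, 0.05, 1.0)
print(Mf)   # of order 0.3-0.4 GeV, consistent with the quoted ~325 MeV
\end{verbatim}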
In contrast to these seemingly successful numerical results in the chiral limit, the LO result for $M_{f}$ at finite $m_{f}$ deviates considerably from the available LQCD data~\cite{Nam:2009jb,Nam:2009hq,Nam:2010nk}. In Refs.~\cite{Goeke:2007bj,Nam:2008bq}, it was suggested that the correct $m_f$ dependence of $M_{f}$ can only be achieved by the inclusion of the meson-loop corrections (MLC), which correspond to the next-to-leading order (NLO) of the large-$N_{c}$ corrections. It has also been discussed that these NLO contributions play a critical role in reproducing the appropriate universal-class pattern of the chiral restoration as a function of $T$~\cite{Nam:2010nk}. Consequently, we will discuss the inclusion of the MLC and the magnetic field in the next Section.
\section{E$\chi$A with MLC and $\bm{B}$ field}
Here we use a standard functional method~\cite{Goeke:2007bj,Nam:2008bq}
to tackle the MLC corresponding to the large-$N_{c}$ corrections.
Taking into account the mesonic fluctuations around their saddle-point values, one can write the E$\chi$A as follows:
\begin{eqnarray}
\label{eq:EA2}
\mathcal{S}_{\mathrm{eff}}[m_{f}]&=&\mathcal{C}
+\mathcal{N}\ln\lambda+2\sigma^{2}
-\underbrace{\int\frac{d^4k}{(2\pi)^4}\mathrm{Tr}_{c,f,\gamma}
\ln\left[\frac{\rlap{/}{k}_a+i\bar{M}_{a} }{\rlap{/}{k}_a
+im_{f}}\right]}_\mathrm{LO}
\cr
&+&\underbrace{\frac{1}{2}\sum_{i=1}^{4}\int\frac{d^4q}{(2\pi)^4}
\ln\left\{1-\frac{1}{4\sigma^{2}}
\int\frac{d^4k}{(2\pi)^4}\mathrm{Tr}_{c,f,\gamma}
\left[ \frac{M_{a}}{\rlap{/}{k}_a+i\bar{M}_{a}}\Gamma_{i}
\frac{M_{b}}{\rlap{/}{k}_b+i\bar{M}_{b}}
\Gamma_{i}\right]\right\}}_\mathrm{NLO},
\end{eqnarray}
where $\Gamma_{i}=(1,\gamma_{5},i\bm{\tau},i\bm{\tau}\gamma_{5})$ relates to the fluctuations from the isoscalar-scalar, isoscalar-pseudoscalar, isovector-scalar, and isovector-pseudoscalar mesons, with the Pauli matrices denoted by $\bm{\tau}$. As for the saddle-point values, we integrated out all the meson contributions except for the scalar one, which signals the SB$\chi$S. More details on Eq.~(\ref{eq:EA2}) can be found in Refs.~\cite{Goeke:2007bj,Nam:2008bq}. The $k_a$ and $k_b$ denote $k$ and $k+q$, respectively.
To study the impact of the external EM field on QCD matter, we need to embed the external EM field in the E$\chi$A.
Since we are only interested in the external magnetic field, assuming that it is static and aligned along the $z$ axis as $\bm{B}=B_{0}\hat{z}$, we can choose the EM field configuration as follows:
\begin{equation}
\label{eq:AAA}
A_{\mu}=\left(-\frac{B_{0}}{2}y,\frac{B_{0}}{2}x,0,0 \right).
\end{equation}
The EM field configuration in Eq.~(\ref{eq:AAA}) yields a field-strength tensor satisfying $B_{0}=F_{12}$. Employing the linear Schwinger method~\cite{Schwinger:1951nm,Nam:2009jb,Nam:2009hq}, one can write the E$\chi$A in Eq.~(\ref{eq:EA2}) as a function of the EM field strength:
\begin{eqnarray}
\label{eq:EA3}
\mathcal{S}_{\mathrm{eff}}[m_{f},F_{\mu\nu}]&\approx&\mathcal{C}
+\mathcal{N}\ln\lambda+2\sigma^{2}_{F}
-\int\frac{d^4k}{(2\pi)^4}\mathrm{Tr}_{c,f,\gamma}
\ln\left[\frac{\rlap{\,/}{K}_a+i\bar{M}_{a} }
{\rlap{\,/}{K}_a+im_{f}}\right]
\cr
&+&\frac{1}{2}\sum_{i=1}^{4}\int\frac{d^4q}{(2\pi)^4}
\ln\left\{1-\frac{1}{4\sigma^{2}_{F}}
\int\frac{d^4k}{(2\pi)^4}\mathrm{Tr}_{c,f,\gamma}
\left[ \frac{M_{a}}{\rlap{/}{k}_a+i\bar{M}_{a}}\Gamma_{i}
\frac{M_{b}}{\rlap{/}{k}_b+i\bar{M}_{b}}
\Gamma_{i}\right]\right\}.
\end{eqnarray}
Here, the subscript $F$ denotes the quantities under the external EM field. In Eq.~(\ref{eq:EA3}), we use the notation $K_{\mu}=k_{\mu}+e_{f}A_{\mu}$, where $e_f=Q_f e$ and $e$ is the electric charge of the proton.
Note that we have assumed that the NLO part is not modified by the external EM field, since the EM contribution from the NLO is small in comparison to the leading one. In the presence of the external EM field, the quark propagator is modified and approximated as~\cite{Nam:2009jb,Nam:2009hq,Nam:2008ff}:
\begin{equation}
\label{eq:BPRO}
\frac{1}{\rlap{\,/}{K}_a+i\bar{M}_{a}}
\approx\frac{\rlap{\,/}{K}_a-i[\bar{M}_{a}+e_{f}N_{a}(\sigma\cdot F)]}
{k^{2}_a+\bar{M}^{2}_{a}}+\mathcal{O}(Q^{n\ge3}_f),
\end{equation}
where we have ignored the terms proportional to $\mathcal{O}(Q^{n\ge3}_f)$, taking into account that $(Q_u,Q_d)=(2/3,-1/3)$. The relevant mass functions are also defined by the following expressions:
\begin{equation}
\label{eq:MMMMM}
\bar{M}_{a}=m_{f}+M_{a}
=m_{f}+M_{f}\left(\frac{2}{2+k^{2}_a\bar{\rho}^{2}} \right)^{2},
\,\,\,\,
N_{a}=-\frac{4M_{f}\bar{\rho}^{2}}
{(2+k^{2}_a\bar{\rho}^{2})^{3}}.
\end{equation}
From Eq.~(\ref{eq:EA3}), the LO contributions for the instanton number density $\mathcal{N}$ in Eq.~(\ref{eq:SCE}) and chiral condensate in Eq.~(\ref{eq:CC}), in the presence of the external magnetic field, can be derived up to $\mathcal{O}(Q^{2}_{f})$:
\begin{eqnarray}
\label{eq:LO}
\mathcal{N}_{\mathrm{LO},F}
&=&2N_{c}N_{f}\int\frac{d^4k}{(2\pi)^4}
\left[\frac{M_{a}\bar{M}_{a}}
{k^{2}_a+\bar{M}^{2}_{a}}
+\frac{2N^{2}_{a}\mathcal{B}^{2}_{f}}
{k^{2}_a+\bar{M}^{2}_{a}}\right],
\cr
\langle iq^{\dagger}q\rangle_{\mathrm{LO},F}&=&
4N_{c}\int\frac{d^4k}{(2\pi)^4}
\left[\frac{\bar{M}_{a}}{k^{2}_a+\bar{M}^{2}_{a}}
-\frac{m_{f}}{k^{2}_a+m^{2}_{f}}\right],
\end{eqnarray}
where we assigned $\mathcal{B}_{f}$ as $e_{f}B_{0}=Q_{f}(eB_{0})$ for simplicity, and the terms proportional to $\mathcal{O}(Q_{f})$ do not appear in the $\mathcal{N}_\mathrm{LO,F}$, due to $\mathrm{Tr}_{\gamma}(\sigma\cdot F)=0$. If $B_{0}=0$, the expressions for the $\mathcal{N}$ and the chiral condensate in Eq.~(\ref{eq:LO}) recover those for vacuum given in Section II. Note that, although we do not have explicit terms $\propto\mathcal{B}^2$ for the chiral condensate as long as we employ the quark propagator in Eq.~(\ref{eq:BPRO}), the condensate depends on the magnetic field because $M_f$ itself does implicitly. Moreover, the constituent-quark mass under the external magnetic field, which is determined from the saddle-point equation in the first line of Eq.~(\ref{eq:LO}), becomes different for the two flavors $u$ and $d$, i.e. $M_{u,F}\ne M_{d,F}$. This is because they behave distinctly according to their electric charges. Similarly, the chiral condensate also becomes flavor-dependent, as shown in the second line of Eq.~(\ref{eq:LO}). Here is one caveat: for all the ingredients discussed so far, we have assumed that the instanton-packing fraction $\mathcal{N}$ is immune to the external magnetic field as well as to the flavor degrees of freedom, considering that the (anti)instanton is an electrically neutral and flavorless object.
It is worth mentioning the differences between our theoretical framework and other chiral models. For instance, using the NJL model, Refs.~\cite{Menezes:2008qt,Boomsma:2009yk} obtained the magnetic-field dependent effective action. The most apparent difference between the present approach and theirs is how to regularize the UV divergence appearing in relevant physical quantities. The UV divergence is regularized by the nonlocal quark-instanton interaction in the present approach (see the quark form factors given in Eqs.~(\ref{eq:MDEQM}) and (\ref{eq:FFPARA})), whereas the regularization is achieved by adding and subtracting the lowest-Landau-level (LLL) and the vacuum contributions of the chiral order parameters in the NJL model~\cite{Menezes:2008qt,Boomsma:2009yk}. In this sense, in our approach, all the Landau levels are taken into account by construction in principle. Hence we do not need any specific analytic manipulations. A more detailed discussion of this regularization process based on the Landau levels for the chiral models is given in Ref.~\cite{Gatto:2010pt}, where the highest levels are naturally cut off, since the constituent mass at large momenta is suppressed when the magnetic field becomes strong.
The numerical value for $\sigma_{F}$ in Eq.~(\ref{eq:LO}) is obtained from the relation $\sigma^{2}=\mathcal{N}/2$ in the LO contributions as in Ref.~\cite{Nam:2009nn}. Thus, using the LO part in the right-hand side of Eq.~(\ref{eq:LO}), we can write as follows:
\begin{equation}
\label{eq:SIGMA}
\sigma^{2}_{F}=\frac{\mathcal{N}_{\mathrm{LO},F}}{2}
+\mathrm{NLO\,contributions}\approx\frac{\mathcal{N}_{\mathrm{LO},F}}{2},
\end{equation}
where we have rather safely ignored the NLO ones for the numerical calculations, since the NLO contributions are finite but much smaller in comparison to the LO one. By doing this, one can express the $\sigma^{2}_{F}$ simply as a function of the external magnetic field. Considering all the ingredients discussed so far, we write the $\mathcal{N}$ containing both the LO and NLO (MLC) contributions, using Eq.~(\ref{eq:SIGMA}):
\begin{eqnarray}
\label{eq:NOVMLC}
\mathcal{N}&\approx&
2N_{c}N_{f}\int\frac{d^4k}{(2\pi)^4}[F_1(k)+F_2(k)\mathcal{B}^{2}_{f}]
+\frac{3}{2}\frac{\int\frac{d^4q}{(2\pi)^4}\frac{d^4k}{(2\pi)^4}\,F_3(k,q)}
{\int\frac{d^4k}{(2\pi)^4}[F_1(k)+F_2(k)\mathcal{B}^{2}_{f}]},
\end{eqnarray}
where the reduced functions $F_{1\sim3}$ are defined as
\begin{eqnarray}
\label{eq:FFFFF}
F_1(k)&=&\frac{M_a\bar{M}_a}{D^2_a},\,\,\,\,F_2(k)=\frac{2N^2_a}{D^2_a},\,\,\,\,
F_3(k,q)=\frac{M_aM_b
[k_a\cdot k_b+\bar{M}_{a}\bar{M}_{b}+M_{a}M_{b}+\frac{m_f}{2}(M_{a}+M_{b})]}{D^2_aD^2_b}.
\end{eqnarray}
Here, we have used the notations $D^2_{a,b}=k^2_{a,b}+\bar{M}^2_{a,b}$. Similarly, the chiral condensate can be evaluated from Eq.~(\ref{eq:CC}) as follows:
\begin{eqnarray}
\label{eq:CCMLC}
\langle iq^{\dagger}q\rangle_{\mathrm{LO+NLO},F}\approx
4N_{c}\int\frac{d^4k}{(2\pi)^4}[G_1(k)+G_2(k)]+\frac{3}{2N_f}\frac{\int\frac{d^4q}
{(2\pi)^4}\frac{d^4k}{(2\pi)^4}\,G_3(k,q)}{\int\frac{d^4k}{(2\pi)^4}
[F_1(k)+F_2(k)\mathcal{B}^2_f]}.
\end{eqnarray}
Here, the functions $G_{1\sim3}$ are assigned as
\begin{eqnarray}
\label{eq:GGGG}
G_1(k)&=&\frac{\bar{M}_a}{D^2_a},\,\,\,\,G_2(k)=-\frac{m_f}{D^2_0},\,
G_3(k,q)=\frac{M_aM_b(\bar{M}_a+\bar{M}_b)}{D^2_aD^2_b},
\end{eqnarray}
where $D^2_0=k^2+m^2_f$. It is interesting to see from Eqs.~(\ref{eq:LO}) and (\ref{eq:CCMLC}) that, if the isospin symmetry is almost intact, $m_{u}\approx m_{d}$, the difference between the condensates of the two flavors, i.e. $\langle iu^{\dagger}u\rangle-\langle id^{\dagger}d\rangle$, becomes negligible for the case with $\mathcal{B}=0$. However, as the strength of the magnetic field increases, the difference grows in proportion to $(e^{2}_{u}-e^{2}_{d})B^{2}_{0}$. For a better look at the isospin-breaking effect, we define a quantity indicating its strength as follows:
\begin{equation}
\label{eq:RATIO}
\mathcal{R}\equiv
\frac{\langle iu^{\dagger}u\rangle-\langle id^{\dagger}d\rangle}
{\langle iu^{\dagger}u\rangle+\langle id^{\dagger}d\rangle}.
\end{equation}
We also note that the ratio $\mathcal{R}$ is deeply related to the low-energy constant of the $\chi$PT Lagrangian, $h_{3}$~\cite{Gasser:1983yg,Goeke:2010hm}.
\section{Instanton parameters at finite $T$}
To investigate the physical quantities in hand at finite $T$, we want to discuss briefly how to modify the instanton parameters, $\bar{\rho}$ and $\bar{R}$ at finite $T$. We will follow our previous work~\cite{Nam:2009nn} and Refs.~\cite{Harrington:1976dj,Diakonov:1988my} to this end. Usually, there are two different instanton configurations at finite $T$, being periodic in Euclidean time, with trivial and nontrivial holonomies. They are called the Harrington-Shepard~\cite{Harrington:1976dj} and Kraan-van Baal-Lee-Lu calorons~\cite{Kraan:1998pm,Lee:1998bb}, respectively. The nontrivial holonomy can be identified as the Polyakov line as an order parameter for the confinement-deconfinement transition of QCD. However, since we are not interested in the confinement-deconfinement transition in the present work, we choose the Harrington-Shepard caloron for the parameter modifications at finite $T$. We write the instanton distribution function at finite $T$ with the Harrington-Shepard caloron as follows:
\begin{equation}
\label{eq:IND}
d(\rho,T)=\underbrace{C_{N_c}\,\Lambda^b_{\mathrm{RS}}\,
\hat{\beta}^{N_c}}_\mathcal{C}\,\rho^{b-5}
\exp\left[-(A_{N_c}T^2
+\bar{\beta}\gamma{\cal N}\bar{\rho}^2)\rho^2 \right].
\end{equation}
Here, the abbreviated notations are also given as:
\begin{equation}
\label{eq:para}
\hat{\beta}=-b\ln[\Lambda_\mathrm{RS}\rho_\mathrm{cut}],\,\,\,\,
\bar{\beta}=-b\ln[\Lambda_\mathrm{RS}\langle R\rangle],\,\,\,
C_{N_c}=\frac{4.60\,e^{-1.68\alpha_{\mathrm{RS}} N_c}}{\pi^2(N_c-2)!(N_c-1)!},
\end{equation}
\begin{equation}
\label{eq:AA}
A_{N_c}=\frac{1}{3}\left[\frac{11}{6}N_c-1\right]\pi^2,\,\,\,\,
\gamma=\frac{27}{4}\left[\frac{N_c}{N^2_c-1}\right]\pi^2,\,\,\,\,
b=\frac{11N_c-2N_f}{3},\,\,\,\,{\cal N}=\frac{N}{V}.
\end{equation}
Note that we defined the one-loop inverse charge $\hat{\beta}$ and $\bar{\beta}$ at a certain phenomenological cutoff value $\rho_\mathrm{cut}$ and $\langle R\rangle\approx\bar{R}$. As will be shown, only $\bar{\beta}$ is relevant in the following discussions and will be fixed self-consistently within the present framework. $\Lambda_{\mathrm{RS}}$ stands for a scale depending on a renormalization scheme, whereas $V_3$ stands for the three-dimensional volume. Using the instanton distribution function in Eq.~(\ref{eq:IND}), we can compute the average value of the instanton size, $\bar{\rho}^2$ straightforwardly as follows~\cite{Schafer:1996wv}:
\begin{equation}
\label{eq:rho}
\bar{\rho}^2(T)
=\frac{\int d\rho\,\rho^2 d(\rho,T)}{\int d\rho\,d(\rho,T)}
=\frac{\left[A^2_{N_c}T^4
+4\nu\bar{\beta}\gamma {\cal N}\right]^{\frac{1}{2}}
-A_{N_c}T^2}{2\bar{\beta}\gamma {\cal N}},
\end{equation}
where $\nu=(b-4)/2$. Substituting Eq.~(\ref{eq:rho}) into Eq.~(\ref{eq:IND}), the distribution function can be evaluated further as:
\begin{equation}
\label{eq:dT}
d(\rho,T)=\mathcal{C}\,\rho^{b-5}
\exp\left[-\mathcal{M}(T)\rho^2 \right],\,\,\,\,
\mathcal{M}(T)=\frac{1}{2}A_{N_c}T^2+\left[\frac{1}{4}A^2_{N_c}T^4
+\nu\bar{\beta}\gamma {\cal N}\right]^{\frac{1}{2}}.
\end{equation}
The instanton-number density ${\cal N}$ can be computed self-consistently as a function of $T$, using the following equation:
\begin{equation}
\label{eq:NOVV}
{\cal N}^\frac{1}{\nu}\mathcal{M}(T)=\left[\mathcal{C}\,\Gamma(\nu) \right]^\frac{1}{\nu},
\end{equation}
where we have replaced $NT/V_3\to {\cal N}$, and $\Gamma(\nu)$ indicates a $\Gamma$ function with an argument $\nu$. Note that $\mathcal{C}$ and $\bar{\beta}$ can be determined easily using Eqs.~(\ref{eq:rho}) and (\ref{eq:NOVV}), incorporating the vacuum values of the ${\cal N}$ and $\bar{\rho}$: $\mathcal{C}\approx9.81\times10^{-4}$ and $\bar{\beta}\approx9.19$. At the same time, using these results, we can obtain the average instanton size $\bar{\rho}$ as a function of $T$ with Eq.~(\ref{eq:rho}).
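A minimal sketch of this self-consistent determination (using the quoted values $\mathcal{C}\approx9.81\times10^{-4}$ and $\bar{\beta}\approx9.19$, with all dimensionful quantities in GeV units; the root-bracketing interval is an assumption) is:
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gamma
from scipy.optimize import brentq

N_C, N_F = 3, 2
b = (11 * N_C - 2 * N_F) / 3.0
nu = (b - 4.0) / 2.0
A_NC = (11.0 * N_C / 6.0 - 1.0) * np.pi**2 / 3.0
GAM = 27.0 / 4.0 * N_C / (N_C**2 - 1.0) * np.pi**2
C_INST = 9.81e-4
BETA_BAR = 9.19

def M_of_T(T, n):
    """M(T) of the caloron distribution for a given packing fraction n."""
    return 0.5 * A_NC * T**2 + np.sqrt(0.25 * A_NC**2 * T**4
                                       + nu * BETA_BAR * GAM * n)

def density(T):
    """Solve N^(1/nu) M(T) = [C Gamma(nu)]^(1/nu) for the density N(T)."""
    rhs = (C_INST * Gamma(nu))**(1.0 / nu)
    return brentq(lambda n: n**(1.0 / nu) * M_of_T(T, n) - rhs, 1e-8, 1e-1)

def rho_bar(T):
    """Average instanton size from the distribution-averaged rho^2."""
    n = density(T)
    return np.sqrt((np.sqrt(A_NC**2 * T**4 + 4 * nu * BETA_BAR * GAM * n)
                    - A_NC * T**2) / (2.0 * BETA_BAR * GAM * n))

print(density(0.0), rho_bar(0.0))  # ~1.6e-3 GeV^4 and ~1.7 GeV^-1 (~1/3 fm)
\end{verbatim}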
Finally, in order to estimate the $T$ dependence of the constituent-quark mass $M_{f}$, it is necessary to consider the normalized distribution function, defined as follows:
\begin{equation}
\label{eq:NID}
d_N(\rho,T)=\frac{d(\rho,T)}{\int d\rho\,d(\rho,T)}
=\frac{\rho^{b-5}\mathcal{M}^\nu(T)
\exp\left[-\mathcal{M}(T)\rho^2 \right]}{\Gamma(\nu)}.
\end{equation}
Now, we want to employ the large-$N_c$ limit to simplify the expression of $d_N(\rho,T)$. Since the parameter $b$ is of order $\mathcal{O}(N_c)$ as shown in Eq.~(\ref{eq:AA}), it becomes infinite as $N_c\to\infty$, and the same is true for $\nu$. In this limit, as understood from Eq.~(\ref{eq:NID}), $d_N(\rho,T)$ can be approximated as a $\delta$ function~\cite{Diakonov:1995qy}:
\begin{equation}
\label{eq:NID2}
\lim_{N_c\to\infty}d_N(\rho,T)=\delta({\rho-\bar{\rho}}).
\end{equation}
Remember that the constituent-quark mass can be represented by~\cite{Diakonov:1995qy}
\begin{equation}
\label{eq:M0}
M_{f}\propto\sqrt{\mathcal{N}}
\int d\rho\,\rho^{2}\delta(\rho-\bar{\rho})
=\sqrt{\mathcal{N}}\,\bar{\rho}^{2},
\end{equation}
where $\mathcal{N}$ and $\bar{\rho}$ are functions of $T$ implicitly. We can modify $M_{f}$ as a function of $T$ as follows:
\begin{equation}
\label{eq:momo}
M_{f}\to M_{f}\left(\frac{\sqrt{\mathcal{N}}}
{\sqrt{\mathcal{N}_{0}}}
\frac{\bar{\rho}^2}{\bar{\rho}^2_{0}}\right)\equiv M_{f}(T)
\end{equation}
where $\mathcal{N}_{0}$ and $\bar{\rho}_{0}$ are those at $T=0$. The numerical results for the normalized $\bar{\rho}/\bar{\rho}_{0}$ and ${\cal N}/{\cal N}_{0}$ as functions of $T$ are shown in the left panel of Figure~\ref{FIG1}. As shown there, these quantities decrease with respect to $T$ as expected, reflecting the diminishing instanton effect. However, even beyond $T^{\chi}_{c}\approx\Lambda_{\mathrm{QCD}}\approx200$ MeV, the instanton contribution remains finite. In the right panel of the figure, we draw the quark mass as a function of $T$ and the absolute value of the quark three-momentum $|\bm{k}|$:
\begin{equation}
\label{eq:M00}
M(\bm{k}^2,T)=M_{f}(T)\left[\frac{2}{2+\bar{\rho}^{2}(T)\bm{k}^{2}}\right].
\end{equation}
Note that we have ignored the Euclidean-time component of the four momentum by setting $k_{4}=0$. This tricky treatment simplifies the calculations in hand to a large extent, and we also verified that only a small deviation appears in comparison to the full calculations. Moreover, $\bar{\rho}$ in Eq.~(\ref{eq:M00}) is now a function of $T$ as demonstrated by Eqs.~(\ref{eq:rho}) and (\ref{eq:momo}) previously. As shown in the figure, $M(|\bm{k}|,T)$ is a smoothly decreasing function of $T$ and $|\bm{k}|$, indicating that the effect of the instanton is diminished. Here, we choose $M_f=325$ MeV at $T=0$ in drawing the curve as a trial. For more details, one can refer to the previous work~\cite{Nam:2009nn}.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG1-1.pdf}
\includegraphics[width=9.5cm]{FIG2-1.pdf}
\end{tabular}
\caption{(Color online) Normalized $\bar{\rho}/\bar{\rho}_{0}$ and ${\cal N}/{\cal N}_{0}$ as a function of $T$ for $N_{c}=3$, where $\mathcal{N}\equiv N/V$ (left). The $M(\bm{k}^2,T)$ in Eq.~(\ref{eq:M00}) as a function of $T$ and absolute value of the momentum $|\bm{k}|$ (right).}
\label{FIG1}
\end{figure}
\section{E$\chi$A with MLC and external magnetic field at finite $T$}
In this section we generalize our model to the finite temperature to explore the chiral restoration at finite $T$. For this purpose, we employ the Matsubara formula for the fermions. In this Euclidean-time description, it can be done by replacing the four-dimensional integral measure in the E$\chi$A into a three-dimensional one with a summation over the fermionic Matsubara frequency, $w_{m}=(2m+1)\pi T$:
\begin{equation}
\label{eq:MATS}
\int\frac{d^4k}{(2\pi)^4}\to
T\sum^{\infty}_{m=-\infty}\int\frac{d^3\bm{k}}{(2\pi)^3}.
\end{equation}
Using this simple replacement we can rewrite the saddle-point equation in Eq.~(\ref{eq:NOVMLC}) as follows:
\begin{eqnarray}
\label{eq:NOVT}
\mathcal{N}&\approx&
2N_{c}N_{f}\int\frac{d^3\bm{k}}{(2\pi)^3}
[\mathcal{F}_1(\bm{k})+\mathcal{F}_2(\bm{k})\mathcal{B}^{2}_{f}]
+\frac{3\Lambda}{4\pi}\frac{\int\frac{d^3\bm{q}}{(2\pi)^3}\frac{d^3\bm{k}}{(2\pi)^3}
[\mathcal{F}_3(\bm{k},\bm{q})+\mathcal{F}_4(\bm{k},\bm{q})]}
{\int\frac{d^3\bm{k}}{(2\pi)^3}[\mathcal{F}_1(\bm{k})+\mathcal{F}_2(\bm{k})\mathcal{B}^{2}_{f}]}.
\end{eqnarray}
Summing over the Matsubara frequency, we define functions $\mathcal{F}_i$ as follows:
\begin{equation}
\label{eq:}
\mathcal{F}_i\equiv T\sum_{m} F_i.
\end{equation}
The analytic expressions for those functions are given in the Appendix. Note that the above equation recovers the expression of Eq.~(16) in Ref.~\cite{Nam:2010mh} when $\mathcal{B}=0$. Before going further, it is worth mentioning several assumptions made for deriving Eq.~(\ref{eq:NOVT}):
\begin{itemize}
\item The instanton packing fraction, $\mathcal{N}$ is a decreasing function of $T$ indicated by the previous Section. However, we assume that the $\mathcal{N}$ is not affected by the external magnetic field, since the (anti)instantons are electrically neutral. Moreover, the immunity of $\mathcal{N}$ to flavors are assumed.
\item In the momentum-dependent quark mass $M_{a,b}$, we replace $k^{2}$ into $\bm{k}^{2}$, ignoring the temporal term $\propto k_{4}=(2m+1)\pi T$. We verified that this treatment makes the numerical calculations much simpler, and only small deviation was observed in comparison to the full calculations. Hence, the mass-related functions in Eqs.~(\ref{eq:FFPARA}) and (\ref{eq:MMMMM}) are redefined as follows:
\begin{equation}
\label{eq:MAAA}
M_{a}=\frac{4M_{f}}
{(2+\bm{k}^{2}\bar{\rho}^{2})^{2}},
\,\,\,\,
N_{a}=-\frac{4M_{f}\bar{\rho}^{2}}
{(2+\bm{k}^{2}\bar{\rho}^{2})^{3}}.
\end{equation}
As for the $M_{b}$ and $N_{b}$, we replace $\bm{k}$ with $\bm{k}+\bm{q}$.
\item Similarly, the term $k_a\cdot k_b=k^2+k\cdot q$ in the functions of $F_i$ and $G_i$ is replaced by $w^2_{m}+\bm{k}^2+\bm{k}\cdot\bm{q}$. Moreover, the denominator is also replaced by $D^2_{a,b}=w^2_{m}+\bm{k}^2_{a,b}+\bar{M}^2_{a,b}$.
\item We also replace the integral variable $q_{4}$, which corresponds to the fourth-component of the pion momentum, into an additional parameter $\Lambda$ as in Ref.~\cite{Nam:2008bq}. Since the isovector-pseudoscalar meson, i.e. pion dominates the meson fluctuations, it is reasonable to set the cutoff $\Lambda$ proportional to $m_{\pi}$ as follows:
\begin{equation}
\label{eq:LAMBDA}
\Lambda\approx m_{\pi}\frac{\bar{\rho}_{0}}{\bar{\rho}}.
\end{equation}
Note that, in the above equation, we have multiplied a factor $\bar{\rho}_{0}/\bar{\rho}$ to $m_{\pi}$ in order to include $T$ dependence of the cutoff mass. Moreover, this multiplication factor represents a correct chiral-restoration pattern of $m_{\pi}$, i.e. the mass of the pion, as a Nambu-Goldstone (NG) boson, increases as SB$\chi$S restored partially.
\end{itemize}
Similarly the chiral condensate in Eq.~(\ref{eq:CCMLC}) reads:
\begin{eqnarray}
\label{eq:CCT}
\langle iq^{\dagger}q\rangle\approx
4N_{c}\int\frac{d^3\bm{k}}{(2\pi)^3}
\left[\mathcal{G}_1(\bm{k})+\mathcal{G}_2(\bm{k})\right]+\frac{3\Lambda}{4\pi N_f}
\frac{\int\frac{d^3\bm{q}}{(2\pi)^3}\frac{d^3\bm{k}}{(2\pi)^3}\,
\mathcal{G}_3(\bm{k},\bm{q})}
{\int\frac{d^3\bm{k}}{(2\pi)^3}[\mathcal{F}_1(\bm{k})+\mathcal{F}_2(\bm{k})
\mathcal{B}^2_f]}.
\end{eqnarray}
Again, the relevant functions $\mathcal{G}_i\equiv T\sum_{m}G_{i}$ are defined and given in the Appendix.
\section{Numerical results}
We present and discuss our numerical results in this Section. We choose $\bar{R}\approx1.0$ fm and $\bar{\rho}\approx0.34$ fm which give $M_f\approx315$ MeV in the present framework~\cite{Goeke:2010hm}. The values for the current-quark mass are reported as $m_{u}=(1.7\sim3.3)$ MeV and $m_{d}=(4.1\sim5.8)$ MeV~\cite{Nakamura:2010zzi}. Thus, we take the average values, $(m_{u},m_{d})\approx(2.5,5)$ MeV. For simplicity we introduce an positive integer $n$. The magnetic field is assigned in terms of the pion mass as $eB_{0}=n\,m^{2}_{\pi}$. In the Gauss unit for the magnetic field, we have the convention, $B_0\approx n\,(1.2\times10^{18})\,\mathrm{G}$. For instance, $n\approx1$ corresponds to a neutron star or magnetar with very strong magnetic field. The case with $n\approx10 $ or more may be observed inside the quark-gluon plasm created in the ultra high-energy peripheral heavy-ion collisions, such as RHIC and LHC, as the main source for the nontrivial QCD vacuum effect, i.e. chiral magnetic effect~\cite{Kharzeev:2004ey,Voloshin:2004vk,Kharzeev:2007jp,Voloshin:2008jx,:2009txa,:2009uh,Fukushima:2008xe,Fukushima:2009ft,Warringa:2009rw,Asakawa:2010bu}.
\subsection{Constituent-quark mass for each flavor: $M_f$}
Here we present the numerical result of the $T$ and $B_{0}$ dependencies of the constituent-quark mass $M_f$ which is one of the order parameters for the (partial) chiral restoration. According to the universal class of the restoration pattern, as for the chiral limit $m_{u,d}=0$, the restoration pattern shows the second order, whereas it becomes the crossover for the case with the physical quark mass, i.e. $m_{u,d}=(2.5,5)$ MeV. In our previous work~\cite{Nam:2010mh}, the MLC contributions, as the large-$N_c$ corrections, play a critical role to reproduce the universality in an appropriate manner. Similar observation was also reported in Ref.~\cite{Muller:2010am}. Here we study the impact of the external magnetic field on
the two light-flavor QCD matter at finite $T$.
In Figure~\ref{FIG2}, we depict the results of the $M_f$ at $T=0$ for the chiral limit in the panel (A) and physical quark mass case in the panel (B). In the absence of the magnetic field $(n=0)$, we have $M_{u,d}=315.02$ MeV for the chiral limit, and $M_{u,d}=(316.16,317.04)$ MeV for the physical quark mass. Accounting for $m_u<m_d$ and that the constituent-quark mass becomes $M_{u,d}\sim M_f+m_f$, the observed results can be easily understood. As the external magnetic field emerges the constituent-quark mass becomes heavier. It is due to the magnetic catalysis. Since the effects of the magnetic catalysis is proportional to $e^2_f$ as in Eqs.~(\ref{eq:NOVT}) and (\ref{eq:CCT}), the $M_u$ grows rapidly more than the $M_d$ with respect to the external magnetic field, i.e. $e_u^2>e_d^2$. Beyond $n=(17\sim18)$, the curve for the $M_u$ starts decreasing slightly. It is interesting to see that, for the physical quark mass case in the panel (B), there appears a point at which the $M_u$ and $M_d$ coincide each other ($n\approx$7.5). It is because that the broken isospin symmetry ($m_u\ne m_d$) is compensated by the effect from the external magnetic field. From our results, this interesting phenomena appears at the magnetic field $B_0\approx 10^{19}\,\mathrm{G}$, which can be created at the heavy-ion collision experiments. Beyond this point, the ordering of the $u$- and $d$-quark constituent masses are reversed.
Here, we want to discuss briefly the saturation behavior of the constituent-quark mass observed in the panel (A) and (B) beyond $n\approx15$ for the $M_u$. This can be understood as follows: As discussed in the previous Sections (see Eq.~(\ref{eq:BPRO}) for instance), we have taken into account the contributions up to $\mathcal{O}(Q_f^2)$. Hence, as for the stronger magnetic fields, higher-order contributions which are not included in the present work will be no longer negligible. Accordingly, we verified that this saturation behavior becomes weaker in the presence of higher-order contributions. Nevertheless. the complete and systematic treatment of those higher-order terms is left for the future work.
In the panel (C) and (D) the results for the case at $T=50$ MeV are demonstrated. Note that, for all the cases, the absolute values for the $M_f$ decrease in comparison to those at $T=0$. But the shapes and behaviors of the curves are similar. This decreasing tendency can be understood by the diluting instanton ensemble at finite $T$. For more details on the diluting ensemble in the present framework, one may refer to Refs.~\cite{Nam:2009nn}. It turns out that the decreasing rate of the constituent-quark mass is a few percent from $T=0$ to $T=50$ MeV: $M_{u,d}=(302.96)$ MeV for the chiral limit and $M_{u,d}=(304.44,305.62)$ MeV for physical quark mass, at $n=0$. Similarly to the vacuum case $T=0$, there appears a point $n\approx8$, at which the $u$- and $d$-quark constituent masses are very close to each other. Moreover, the isospin breaking effect $(m_d\ne m_u)$ becomes more obvious at $T=50$ MeV compared to that for $T=0$. It is because of the decreasing dynamically-generated quark mass at finite $T$. All of these observations are all based on a nontrivial competition between the magnetic catalysis and diluting instanton effects at finite $T$: The former tends to enhance the constituent quark mass, whereas the latter suppress it. Hence, the position of the equal mass point is a consequence of this nontrivial competition between the two mechanisms, on top of the explicit isospin symmetry breaking. The reversing order of mass beyond the equal mass point is also observed at finite $T$. In Ref.~\cite{Boomsma:2009yk}, the authors found qualitatively the same results; monotonically increasing curves for the $M_f$ with respect to the external magnetic field was observed and the $M_u$ is more sensitive to it. However, the rate of increasing is much higher than ours.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG2-1.pdf}
\includegraphics[width=8.5cm]{FIG2-2.pdf}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG2-3.pdf}
\includegraphics[width=8.5cm]{FIG2-4.pdf}
\end{tabular}
\caption{Constituent-quark mass ($M_f$) as a function of $n\equiv eB_0/m^2_\pi$ for the chiral limit (left column) and physical quark mass (right column). In the upper and lower rows, we show the numerical results for $T=0$ and $T=50$ MeV, respectively. For more details, see the text.}
\label{FIG2}
\end{figure}
\subsection{Chiral condensate: $\langle iq^\dagger q\rangle$}
Now we are in a position to discuss the (partial) chiral restoration in the presence of the external magnetic field. It is indicated by the chiral order parameter, i.e. quark condensate. In Figure~\ref{FIG3}, we show the numerical results of the chiral condensate for the $d$ (left column) as well as $u$ (right column) quarks, separately.
We observe the second order chiral phase transition for the both flavors in the chiral limit case shown in the panel (A) and (B).
This result is expected from the universal class of the restoration pattern. Note that this correct restoration pattern in the present instanton framework is only achieved by the inclusion of the MLC as the large-$N_c$ corrections~\cite{Nam:2010mh}. Turning on the external magnetic field, one finds that the SB$\chi$S is enhanced, that is, the values for the quark condensate and $T_c$ both increase for the two flavor. Among them
we see that the $u$ quark condensate is more sensitive with respect to the magnetic field. It is
due to the larger quark electric charge of the $u$ quark.
The critical $T$ for the both flavors, $T^{u}_c$ and $T^{d}_c$ are listed in Table~\ref{TABLE1}. We observed that $\langle iu^\dagger u\rangle\approx\langle id^\dagger d\rangle\approx(247\,\mathrm{MeV})^3$ at $T=0$, which is just compatible to the empirical value of the isospin-symmetric quark condensate about $(250\,\mathrm{MeV})^3$.
Considering the physical quark mass case, the chiral phase transition for the two flavors are shown in the panel (C) and (D) of Figure~\ref{FIG3}. Following the universal class of the restoration pattern, the curves represent the crossover. The magnetic field effects are negligible for the $d$-quark condensate in the panel (C), due to the smaller quark electric charge, whereas $u$-quark condensate in the panel (D) shows visible changes in the vicinity of $T\approx180$ MeV with respect to the magnetic field. The $T_c$ can be obtained by computing the inflection point of the curves for the crossover phase transition~\cite{Rossner:2007ik}, resulting in that $T^{d}_c\approx200$ MeV for all the $n$ values and $T^{u}_c=(180\sim200)$ MeV for $n=(0\sim20)$. It is worthy of noting that the changes in the $T_c$ for the physical quark mass, due to the magnetic field, are relatively small in comparison to those for the chiral limit. This tendency is qualitatively consistent with the lattice QCD estimations~\cite{D'Elia:2010nq}. As for the physical quark mass case, the quark-condensate values for the both flavors at $T=0$ are about $(248\,\mathrm{MeV})^3$, which almost coincides with those for the chiral limit.
\begin{table}[h]
\begin{tabular}{c|c|c|c}
&$n=0$&$n=10$&$n=20$\\
\hline
$T^{u}_c$&$170.4$ MeV&$173.9$ MeV&$183.1$ MeV\\
$T^{d}_c$&$170.4$ MeV&$171.3$ MeV&$174.1$ MeV\\
\end{tabular}
\caption{Critical temperature for the $u$ and $d$ flavors, $T^{u}_c$ and $T^{d}_c$, for the chiral limit in the presence of the external magnetic field, $n=eB_0/m^2_\pi$.}
\label{TABLE1}
\end{table}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG3-1.pdf}
\includegraphics[width=8.5cm]{FIG3-2.pdf}
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG3-3.pdf}
\includegraphics[width=8.5cm]{FIG3-4.pdf}
\end{tabular}
\caption{Chiral condensate as a function of $T$. We draw the numerical results for the chiral limit and physical quark mass in the left and right columns. The results for the $d$ and $u$ quarks are given in the upper and lower rows, respectively. For more details, see the text. }
\label{FIG3}
\end{figure}
\subsection{Ratio of the two-flavor quark condensates: $\mathcal{R}$}
In the previous Subsections, we discussed the competition between the magnetic catalysis and diluting instanton effect at finite $T$, on top of the explicit isospin breaking. In the present Subsection, we want to take a more careful look on the isospin breaking of the quark condensates
by defining a quantity as in Eq.~(\ref{eq:RATIO}). We also note that the ratio $\mathcal{R}$ is deeply related to the low-energy constant of the $\chi$PT Lagrangian, $h_{3}$~\cite{Gasser:1983yg,Goeke:2010hm}. In Figure~\ref{FIG4}, we depict the $\mathcal{R}$ for the chiral limit in the panel (A) and physical quark mass case in the panel (B). In the chiral limit without the external magnetic field, the $u$- and $d$-quark condensates are the same so that $\mathcal{R}=0$ for any $T$ values. As the magnetic field increases from $n=0$, the $\mathcal{R}$ becomes a positive and stiffly increasing function. It is because that the $u$-quark condensate increases more rapidly than that for the $d$ quark and the magnetic catalysis effect is proportional to $e^2_f$ as in Eq.~(\ref{eq:CCT}). At the critical $T$, the values for the $\mathcal{R}$ diverges, signaling the second-order chiral phase transition.
The situation becomes quite different for the physical quark mass case in the panel (B). Without the magnetic field the $\mathcal{R}$ decreases then becomes negative beyond $T\approx100$ MeV.
As the magnetic field increases, the curves are shifted to higher $T$, and there appears a bump around $T=180$ MeV for $n=20$.
The apparent difference between the two cases is not hard to understand.
In the chiral limit, there is as a sort of intact degeneracy between $u$ and $d$-quark condensates. Such that
nonzero $\mathcal{R}$ values are only possible in the presence of the finite external magnetic field which breaks this degeneracy~\cite{Gusynin:1994va}. This is also true for the nonzero degenerated quark mass of the flavors, $m_u=m_d\ne0$.
On the contrary, if this degeneracy between the quark condensates is lifted up beyond $T\approx100$ MeV with decreasing nonperturbative effect (instanton), the explicit isospin breaking becomes more pronounced and results in the negative difference of the condensates at the zero magnetic field. It is due to the fact that the heavier quark causes the larger quark condensate in general. On the hand the $u$ quark is more sensitive to the magnetic catalysis because of its larger electric charge. In other words, the explicit isospin breaking effect pushes the $\mathcal{R}$ downward but
the magnetic catalysis pushes the $\mathcal{R}$ upward. This competition also causes a bump around $T=180$ MeV for the $\mathcal{R}$. Nevertheless $\mathcal{R}$ goes down when $T$ increases beyond $T=180$ MeV indicating the explicit isospin breaking effect due to the quark mass difference wins over magnetic catalysis there.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG4-1.pdf}
\includegraphics[width=8.5cm]{FIG4-2.pdf}
\end{tabular}
\caption{$(u,d)$-quark condensate ratio $\mathcal{R}$ in Eq.~(\ref{eq:RATIO}) as function of $T$ for different $n\equiv eB_0/m^2_\pi$ values. We present the numerical results for the chiral limit and physical quark mass in the left and right panels, respectively.}
\label{FIG4}
\end{figure}
\subsection{Pion properties at finite $T$ under the magnetic field: $F_\pi$ and $m_\pi$}
Finally we want to make an analysis on the pion properties such as the pion weak-decay constant and pion mass, at finite $T$ in the presence of the external magnetic field, below the critical $T$. Since in the Nambu-Goldstone phase ($T$ is below $100$ MeV), the isospin symmetry is only slight broken as shown in Figure~\ref{FIG4}. Hence, in what follows, we focus on the properties with the isospin symmetry. For this purpose we employ the Gell-Mann-Oakes-Renner (GOR) relation defined as:
\begin{equation}
\label{eq:GOR}
m^2_\pi=\sum_{f=u,d}\frac{m_f}{F^2_\pi}\langle iq^\dagger_f q_f\rangle\to
\underbrace{\frac{2\bar{m}_f}{F^2_\pi}
\langle iq^\dagger q\rangle_{m_f=0}}_{\mathrm{isospin\,\,symmetric}}.
\end{equation}
Here, we defined $\bar{m}_f=(m_u+m_d)/2$. Since the quark condensates have been already computed as functions of $T$ as well as $B_0$ in the previous Sections, it is enough to calculate the pion-weak decay constant $F_\pi$ in the same framework. For simplicity, we ignore the MLC contribution to compute the $F_\pi$ and the nonlocal contribution for the time being~\cite{Nam:2008xx}. Then the analytical expression for the $F_\pi$ reads in the instanton framework for the vacuum as follows:
\begin{equation}
\label{eq:FPI1}
F^2_\pi\approx4\eta N_c\int\frac{d^4k}{(2\pi)^4}
\frac{M^2_k-k^2M_kN_k}{(k^2+M^2_k)^2},
\end{equation}
where we again have assumed the isospin symmetry. The $\eta$ denotes a correction factor for the case without the nonlocal contribution. From Ref.~\cite{Nam:2008xx}, the value for the $\eta$ can be estimated as about $0.5$ to obtain the empirical value for the pion-weak decay constant, i.e. $F_\pi\approx93$ MeV. Although some dynamical information from the nonlocal contributions are missing by this simplification, it is still useful for a simple and qualitative analysis. If we induce the EM field externally, we can replace the constituent-quark mass squared approximately as $M^2_K\to M^2_k+2N^2_k\mathcal{B}^2_f$, according to Eq.~(\ref{eq:LO}). Moreover, taking into account that the term $M_kN_k$ can be obtained by differentiating $M^2_k/2$, we have $M_KN_K\approx M_kN_k+2N_k(\partial N_k/\partial k)\mathcal{B}^2_f$. Hence, we can write the expression for the $F_\pi$ as a function of $T$ and $B_0$, employing the fermionic Matsubara formula:
\begin{equation}
\label{eq:FPI2}
F^2_\pi\approx4\eta N_cT\sum_m\int\frac{d^3\bm{k}}{(2\pi)^3}
\frac{M^2_a-(\bm{k}^2+w^2_m)M_aN_a
+2N^2_a\bar{\mathcal{B}}^2_f}
{(w^2_m+\bm{k}^2+M^2_{\bm{k}})^2}=4\eta N_c\int\frac{d^3\bm{k}}{(2\pi)^3}
[\mathcal{K}_1+\mathcal{K}_2+\mathcal{K}_3\bar{\mathcal{B}}^2_f].
\end{equation}
Here, we have defined a flavor-averaged external magnetic field, i.e. the $\bar{\mathcal{B}}^2_f\equiv \frac{1}{2}\left(\mathcal{B}^2_u+\mathcal{B}^2_d \right)$, considering the isospin symmetric matter. Analytic expressions for the relevant functions $\mathcal{K}_{1\sim3}$ are given in the Appendix.
In Figure~\ref{FIG5}, we preset the numerical results of the pion weak-decay constant $F_\pi$ (A) and pion mass $m_\pi$ (B) as functions of $T$ and the strength of the magnetic field. In our numerical calculations, we have chosen $2\bar{m}_f\approx10$ MeV in Eq.~(\ref{eq:GOR}) as a trial, although we evaluated the analytical expression for the $F_\pi$ near the chiral limit. In the panel (A), the $F_\pi$ smoothly decreases with respect to $T$ indicating the partial chiral restoration. At $T=100$ MeV, the value of the $F_\pi$ is about $10\%$ reduced.
Increasing the strength of the magnetic field one finds that the value of $F_\pi$ is enhanced by a few percent.
At $T=0$, we have $F_\pi=(93.47,93.81,94.16)$ MeV for $n=(0,10,20)$, respectively. The effect from the magnetic catalysis appears more important in the higher $T$ region.
The numerical results for the $m_\pi$ are given in the panel (B). As a signal for the partial chiral restoration,
the pion mass increases with respect to $T$ but decreases with respect to $B_{0}$.
In other words,
the enhancement of the SB$\chi$S due to the magnetic catalysis is quite small.
Only about $0.5$ MeV decease in the pion mass is observed for $n=(0\to20)$. Again, the magnetic catalysis plays an more significant role in the higher $T$ region.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{FIG5-1.pdf}
\includegraphics[width=8.5cm]{FIG5-2.pdf}
\end{tabular}
\caption{Pion weak-decay constant $F_\pi$ (A) and pion mass $m_\pi$ (B) as a function of $T$, varying the strength of the magnetic field.}
\label{FIG5}
\end{figure}
\section{Summary and conclusion}
In the present work, we have investigated the (partial) chiral restoration at finite $T$ in QCD matter for the SU(2) light-flavor sector, under the strong and static external magnetic field. To this end, we employed the $T$-modified instanton-liquid model, together with the linear Schwinger method and fermionic Matsubara formula. We also took into the meson-loop corrections as the large-$N_c$ corrections to reproduce the correct phase transition pattern.
We then present the numerical results for the constituent-quark mass, chiral condensate, and isospin-symmetry breaking effect as functions of $T$, $B_0$, and flavor degrees of freedom. Below, we list important theoretical observations of the present work:
\begin{itemize}
\item Relevant instanton parameters $\bar{R}$ and $\bar{\rho}$ are modified as functions of $T$, resulting in the diluting instanton ensemble with respect to $T$, i.e. decreasing the SB$\chi$S effects. The external magnetic field enhances the SB$\chi$S in terms of the magnetic catalysis, which is proportional to $(e_fB_0)^2$. Hence, the $u$-quark constituent mass is more sensitive to the magnetic field and increases considerably more with respect to the field strength compared to the $d$ quark.
\item On top of the explicit isospin symmetry breaking, there appears a point at which the constituent quark masses for the $u$ and $d$ quarks coincide each other for the strong magnetic field $eB_0\approx10^{19}$ G. In the chiral limit, we observe the second-order chiral phase transition, as expected from the universal restoration pattern, effected much by the meson-loop corrections. Naturally, the crossover phase transition takes place for the physical quark masses.
\item The effects from the magnetic catalysis becomes more pronounced in the higher $T$ region because as $T$ increases the SB$\chi$S effects generated from the instanton is weakened so that the magnetic catalysis effects becomes relatively more important. The critical $T$, i.e. $T_c$ is shifted to higher $T$ due to the magnetic catalysis, whereas the change of the chiral condensate values is relatively small. $T_{c}$ becomes flavor-dependent because the magnetic catalysis effect depends on the electric charge of the quarks.
\item The isospin breaking between the quark condensates is explored by defining the ratio $\mathcal{R}$ as a function of $T$. As for $m_u=m_d$ case the ratio is zero at $B_0=0$ and monotonically increases with respect to $T$ for the finite magnetic field. For the physical quark mass case, the ratio $\mathcal{R}$ shows nontrivial structures with respect to $T$ and $B_{0}$ due to the complicated competition between the magnetic catalysis and the explicitly isospin breaking effect which becomes more important at higher T because of the decreasing nonperturbative effects.
\item According to our simple and qualitative analysis using the GOR relation, we observe correct partial chiral-restoration and magnetic-catalysis behaviors for the pion-weak decay constant $F_\pi$ and pion mass $m_\pi$. They decreases and increases about $10\%$ at $T\approx100$ MeV in comparison to those at $T=0$, respectively. However, the changes due to the magnetic field are relatively small, just a few percent.
\end{itemize}
Now we obtain an effective chiral action at finite $T$ as well as the magnetic field for the physical quark mass. If the strong magnetic field is created in the peripheral heavy-ion collisions as reported, it is worthy of studying the hadronization processes in the presence of the magnetic field, i.e. the dilepton production via the vector-meson dominance under the magnetic field for instance. Moreover, the QCD phase diagram on the $\mu$-$T$ plane and critical values, such as the critical end point (CEP) and tricritical point (TCP), are also able to be explored in our model in principle. Inclusion of finite $\mu$ to the effective action in the instanton framework is under progress and related works will appear elsewhere.
\section*{Acknowledgment}
The authors are grateful to B.~G.~Yu for fruitful discussions. The work of S.i.N. was supported by the grant NRF-2010-0013279 from National Research Foundation (NRF) of Korea. The work of C.W.K. was supported by the grant NSC 99-2112-M-033-004-MY3 from National Science Council (NSC) of Taiwan. C.W.K has also acknowledged the support of NCTS (North) in Taiwan.
\section*{Appendix}
The relevant functions in Eq.~(\ref{eq:NOVT}) and (\ref{eq:CCT}) are given as follows:
\begin{eqnarray}
\label{eq:FFFF}
\mathcal{F}_1&=&M_a\bar{M}_a\mathcal{H}_2,
\,\,\,\,
\mathcal{F}_2=2N^2_a\mathcal{H}_2,
\,\,\,\,
\mathcal{F}_3=M_aM_b\mathcal{H}_4
+M_aM_b\left[\bm{k}\cdot(\bm{k}+\bm{q})\right]\mathcal{H}_3,
\cr
\mathcal{F}_4&=&M_aM_b
\left[\bar{M}_{a}\bar{M}_{b}+M_{a}M_{b}+\frac{m_f}{2}(M_{a}+M_{b})\right]
\mathcal{H}_3
\cr
\mathcal{G}_1&=&\bar{M}_a\mathcal{H}_2,\,\,\,\,\mathcal{G}_2=-m_f\mathcal{H}_1,
\,\,\,\,
\mathcal{G}_3=M_aM_b(\bar{M}_a\bar{M}_b)\mathcal{H}_3,
\cr
\mathcal{K}_1&=&(M^2_a-\bm{k}^2M_aN_a)\mathcal{H}_5,\,\,\,\,
\mathcal{K}_2=-M_aN_a\mathcal{H}_6,\,\,\,\,
\mathcal{K}_3=2N^2_a\mathcal{H}_5. \nonumber
\end{eqnarray}
$\mathcal{H}_{1\sim 6}$ are explicitly given as follows,
\begin{eqnarray}
\label{eq:RF2}
\mathcal{H}_{1}&=&T\sum_{m}\frac{1}{w^{2}_{m}+E^{2}_{0}}
=\frac{1}{2E_{0}}\mathrm{tanh}\left(\frac{E_{0}}{2T} \right),\,\,\,\,
\mathcal{H}_{2}=T\sum_{m}\frac{1}{w^{2}_{m}+E^{2}_{a}}
=\frac{1}{2E_{a}}\mathrm{tanh}\left(\frac{E_{a}}{2T} \right),
\cr
\mathcal{H}_{3}&=&T\sum_{m}
\frac{1}{(w^{2}_{m}+E^{2}_{a})(w^{2}_{m}+E^{2}_{b})}
=\frac{1}{2E_{a}E_{b}(E^{2}_{a}-E^{2}_{b})}
\left[E_{a}\mathrm{tanh}\left(\frac{E_{b}}{2T} \right)
-E_{b}\mathrm{tanh}\left(\frac{E_{a}}{2T} \right) \right],
\cr
\mathcal{H}_{4}&=&T\sum_{m}
\frac{w^2_n}{(w^{2}_{m}+E^{2}_{a})(w^{2}_{m}+E^{2}_{b})}
=\frac{1}{2(E^{2}_{a}-E^{2}_{b})}
\left[E_{a}\mathrm{tanh}\left(\frac{E_{a}}{2T} \right)
-E_{b}\mathrm{tanh}\left(\frac{E_{b}}{2T} \right) \right],
\cr
\mathcal{H}_5&=&T\sum_{m}\frac{1}{(w^{2}_{m}+E^{2}_{a})^2}
=\frac{1}{8TE^3_a}\mathrm{sech}^2\left(\frac{E_a}{2T} \right)\left[T\,\mathrm{sinh}\left(\frac{E_a}{T} \right)-E_a \right],
\cr
\mathcal{H}_6&=&T\sum_{m}\frac{w^2_n}{(w^{2}_{m}+E^{2}_{a})^2}=
\frac{1}{8TE_a}\mathrm{sech}^2\left(\frac{E_a}{2T} \right)\left[E_a+T\,\mathrm{sinh}\left(\frac{E_a}{T} \right)\right].
\nonumber
\end{eqnarray}
| 2023-04-23T06:40:31.335Z | 2011-04-29T02:00:36.000Z | redpajama/arxiv | arxiv_0001 | 576 | 10,523 |
8ede967912739156032667d2e0aa3b70fdb7450d | \section{Introduction}
\label{intro}
The study of deuterated molecules is an extremely useful probe of the physical conditions in
star-forming regions. Deuterated species are readily produced in molecular
environments characterised by low temperatures ($T\leq 20$ K) and CO depletion (Millar et al.~1989).
These physical/chemical properties are commonly observed in low-mass pre--stellar cores (starless cores
on the verge of forming stars), where the {\it deuterated fraction} (hereafter $D_{\rm frac}$ ) of non-depleted
molecules, defined as the column density ratio of one
species containing deuterium to its counterpart containing hydrogen,
is orders of magnitude larger than the [D/H] interstellar abundance (of the order
of $\sim 10^{-5}$, Oliveira et al.~2003).
Caselli~(2002a) found a theoretical relation between $D_{\rm frac}$\ and the core evolution in the low-mass case.
This relation predicts that $D_{\rm frac}$\ increases when the starless core
evolves towards the onset of gravitational collapse because, as the core density profile becomes
more and more centrally peaked, freeze-out of CO increases in the core centre
and hence the abundance of deuterated molecules is greatly enhanced.
When the young stellar object formed at the core centre begins to heat its surroundings, the
CO evaporated from dust grains starts to destroy the deuterated species and
$D_{\rm frac}$\ decreases.
Observations of both starless cores and cores with already formed protostars confirm the
theoretical predictions in the low-mass regime: the pre--stellar cores closest to gravitational
collapse have the highest $D_{\rm frac}$\ (Crapsi et al.~2005), while $D_{\rm frac}$\ is lower in cores
associated with Class 0/I protostars, and the coldest (i.e. the youngest) objects possess
the largest $D_{\rm frac}$ , again in agreement with the predictions of chemical
models (Emprechtinger et al.~2009, Friesen et al.~\citeyear{friesen}).
On the basis of these results, $D_{\rm frac}$\ can be considered as an evolutionary
tracer of the low-mass star formation process before and after the formation of the protostellar object.
Can this result be applied to the high-mass regime? This question is difficult to answer
because the massive-star formation process is still not well-understood: large distances
($\geq 1$ kpc), high extinction and clustered environments
make observations of the process challenging (Beuther et al~\citeyear{beuther07},
Zinnecker \& Yorke~2007).
Observationally, the study of Pillai et al.~(2007), performed with the Effelsberg
and IRAM-30m telescopes, measured high values of $D_{\rm frac}$\ ($\sim 0.2$) from
deuterated ammonia in infrared dark clouds, which are understood to
represent the earliest stages of massive star and stellar cluster formation.
In more evolved objects, from IRAM-30m observations, Fontani et al.~(2006)
measured smaller values of $D_{\rm frac}$\ ($\sim 10^{-2}$) from the ratio N$_{2}$D$^{+}$ /N$_{2}$H$^{+}$ ,
which are nevertheless much larger than the D/H interstellar abundance.
Despite these efforts,
no systematic study of the [D/H] ratio across all stages of high-mass star formation
has yet been carried out.
In this letter, we present the first study of the relation between deuterated fraction
and evolution in a statistically significant sample of cores embedded in high-mass star
forming regions spanning a wide range of evolutionary stages,
from high-mass starless core candidates (HMSCs) to high-mass protostellar objects (HMPOs) and
ultracompact (UC) HII regions (for a definition of these stages see e.g. Beuther
et al~\citeyear{beuther07}).
This goal was achieved by observing rotational transitions of N$_{2}$H$^{+}$\ and N$_{2}$D$^{+}$\ with
the IRAM-30m telescope. We chose these species because N$_{2}$D$^{+}$\ can be formed from N$_{2}$H$^{+}$\
only in the gas phase, tracing cold and dense regions
more precisely than deuterated NH$_3$ , which can also be formed on dust grains
(e.g. Aikawa et al.~\citeyear{aikawa}) and then evaporates
by heating from nearby active star-formation.
Even though the observations are obtained with low
angular resolution, the objects observed in the survey were carefully selected to
limit as much as possible any emission arising from adjacent
objects.
\section{Source selection and observations}
\label{obs}
The source list is in Table~\ref{tab_sources}, where we give the source
coordinates, the distance, the bolometric luminosity, and the reference
papers. We observed 27 molecular cores divided into: ten HMSCs,
ten HMPOs, and seven UC HII regions.
The source coordinates were centred
towards either (interferometric) infrared/millimeter/centimeter continuum peaks or
high-density gas tracer peaks (NH$_3$\ with VLA, N$_{2}$H$^{+}$\ with CARMA or PdBI) identified
in images with angular resolutions comparable to or better than 6\arcsec ,
either from the literature or from observations not yet published.
In general, we rejected objects whose emission peaks were
separated by less than $\sim 8$$^{\prime\prime}$\
from another peak of a dense molecular gas tracer.
This selection criterion was adopted to avoid or limit as much as possible
the presence of multiple cores within the IRAM-30m beam(s).
The evolutionary stage of each source was established based on a
collection of evidence: HMSCs are massive cores embedded in
infrared dark-clouds or other massive star forming regions not associated
with indicators of ongoing star formation (embedded
infrared sources, outflows, masers); HMPOs are
associated with interferometric powerful outflows, and/or infrared sources, and/or
faint ($S_{\nu}$ at 3.6~cm $< 1 $mJy) radio continuum emission likely tracing a radio-jet;
and UC HIIs must be associated with a stronger radio-continuum ($S_{\nu}$ at 3.6~cm $\geq$ 1 mJy)
that probably traces gas photoionised by a young massive star.
We did not include evolved HII regions that have already dissipated the associated molecular core.
We also limited the sample to sources at distances of less than $\sim 5$ kpc.
We stress that the three categories must be regarded with
caution because it can be difficult to determine the relative evolutionary stage.
This caveat applies especially to HMPOs and UC HII regions,
whose evolutionary distinction is not always a clear cut (see e.g. Beuther et al.~\citeyear{beuther07}).
Among the HMSCs, three sources (AFGL5142- EC, 05358-mm3, and I22134-G)
have been defined as "warm" in Table~\ref{tab_sources}:
we explain the peculiarity of these sources in Sect.~\ref{res}.
The observations of the 27 cores listed in Table~\ref{tab_sources}
were carried out with the IRAM-30m telescope in two main observing runs
(February 2 to 4, June 19 to 21, 2010), and several additional hours
allocated during three Herapool weeks (December 2009, January 2010,
and November 2010).
We observed the N$_{2}$H$^{+}$\ (3--2), N$_{2}$H$^{+}$\ (1--0), and N$_{2}$D$^{+}$\ (2--1) transitions.
The main observational parameters of these lines are given in
Table~\ref{tab_mol}.
The observations were made in wobbler--switching mode. Pointing
was checked every hour. The data were calibrated with the chopper wheel
technique (see Kutner \& Ulich~\citeyear{kutner}), with a calibration
uncertainty of $\sim 20 - 30\%$. The spectra were obtained in
antenna temperature units, $T_{\rm A}^{*}$, and then converted to
main beam brightness temperature, $T_{\rm MB}$, via the relation
$T_{\rm A}^{*}=T_{\rm MB}\,\eta_{\rm MB}$, where
$\eta_{\rm MB}=B_{\rm eff}/F_{\rm eff}$ is 0.74 for N$_{2}$D$^{+}$\ (2--1),
0.53 for N$_{2}$H$^{+}$\ (3--2) and 0.88 for N$_{2}$H$^{+}$\ (1--0) lines, respectively.
All observed transitions possess hyperfine structure. To take this into account,
we fitted the lines using
METHOD HFS of the CLASS program, which is part of the GILDAS
software\footnote{The GILDAS software is available at http://www.iram.fr/IRAMFR/GILDAS}
developed at the IRAM and the Observatoire de Grenoble. This method assumes
that all the hyperfine components have the same excitation temperature
and width, and that their separation is fixed to the laboratory value.
The method also provides an estimate of the optical depth of
the line, based on the intensity ratio of the different hyperfine
components. For the faintest N$_{2}$D$^{+}$\ lines, for which the hfs method gives
poor results, the lines were fitted assuming a Gaussian shape.
\section{Results and discussion: is deuteration an evolutionary indicator of massive star formation?}
\label{res}
The spectra of N$_{2}$D$^{+}$\ (2--1) and N$_{2}$H$^{+}$\ (3--2) for all sources detected in N$_{2}$D$^{+}$\ are
shown in Figures~\ref{spectra1} -- \ref{spectra6}. We detected
N$_{2}$H$^{+}$\ (3--2) emission in all sources. We also found a remarkably high
detection rate in the N$_{2}$D$^{+}$\ (2--1) line: 100$\%$ in HMSCs, 64$\%$ in HMPOs,
and 100$\%$ in UC HII regions.
Such a high detection rate indicates that deuterated gas is present
at every stage of the massive star and star cluster formation process, even in the
surroundings of UC HII regions where the gas is expected to be hotter
and more chemically evolved.
Even though for 12 sources we also observed the N$_{2}$H$^{+}$\ (1--0) transition,
we always computed the column density of N$_{2}$H$^{+}$\ and the deuterated fraction
from the (3--2) line given its smaller telescope beam, to limit the contribution of
nearby sources as much as possible. An overall presentation of the data obtained,
and a deeper analysis of all physical parameters, will be given in a
forthcoming paper.
We derived the N$_{2}$H$^{+}$\ and N$_{2}$D$^{+}$\ column densities, $N({\rm N_2H^+})$ and
$N({\rm N_2D^+})$, from the line integrated intensity following the method described in the
appendix of Caselli et al.~(\citeyear{casellib}). Thanks to the
selection criteria for our sources, for which interferometric maps of
dense gas are available for most of the regions, a first estimate of
the filling factor could be computed. However, because maps of the
two transitions used to derive $D_{\rm frac}$\ have not yet been performed (except for I22134-VLA1),
the source size was determined from interferometric measurements
of NH$_3$ (2,2). This assumption seems reasonable because this line
traces gas with physical conditions similar
to those of N$_{2}$H$^{+}$\ (3--2) and N$_{2}$D$^{+}$\ (2--1). To take into account the
possible effects of the evolutionary stage on the source size, we also computed an
average diameter for each evolutionary group. This turns out to
be: 6.5\arcsec\ for HMSCs, 4.1\arcsec\ for HMPOs, and 5.5\arcsec\ for UC HIIs
(Busquet~2010, Busquet et al.~2011, S\'anchez-Monge~2011,
Palau et al.~2007, 2010). We stress that these angular diameters are
consistent with the (few) N$_{2}$H$^{+}$\ and N$_{2}$D$^{+}$\ interferometric observations
published to date (e.g. see the case of IRAS 05345+3157, Fontani et al. 2008).
The N$_{2}$H$^{+}$\ and N$_{2}$D$^{+}$\ column densities, their ratio ($D_{\rm frac}$ ), as well
as the line parameters used in the derivation
of the column densities, are listed in Table~\ref{tab_res}.
The method assumes a constant excitation temperature, \mbox{$T_{\rm ex}$} .
For the N$_{2}$H$^{+}$\ lines, \mbox{$T_{\rm ex}$}\ was derived directly from the parameters given
by the hyperfine fitting procedure corrected
for the filling factor (see the CLASS user manual for
details: http://iram.fr/IRAMFR/GILDAS/doc/html/class-html/class.html).
The procedure, however, cannot provide good estimates for
optically thin transitions or transitions with opacity ($\tau$) not well-constrained (e.g.
with relative uncertainty larger than $30 \%$). For these, we
were obliged to assume a value for \mbox{$T_{\rm ex}$}\ (for details, see the notes of Table~\ref{tab_res}).
For the N$_{2}$D$^{+}$\ (2--1) lines we were unable to derive \mbox{$T_{\rm ex}$}\ from the fitting procedure for almost all
sources because $\tau$ is either too small or too uncertain.
In 3 cases only was the optical depth of the N$_{2}$D$^{+}$\ (2--1) transition well-determined, and so is \mbox{$T_{\rm ex}$} :
in two of these objects we found a close agreement between the estimates derived from
the N$_{2}$D$^{+}$\ (2--1) and the N$_{2}$H$^{+}$\ (3--2) transitions. Therefore, the N$_{2}$D$^{+}$\
column density of each source was computed assuming the same \mbox{$T_{\rm ex}$}\ as for N$_{2}$H$^{+}$ .
Since N$_{2}$D$^{+}$\ (2--1) and N$_{2}$H$^{+}$\ (3--2) have similar critical densities and we measure similar
\mbox{$T_{\rm ex}$}\ for both transitions, the two lines approximately trace similar material, so that
computing $D_{\rm frac}$\ using them is a reasonable approach.
The N$_{2}$H$^{+}$\ column densities are on average of the order of $10^{13 - 14}$ cm$^{-2}$ , and
the N$_{2}$D$^{+}$\ column densities are of order $10^{12 - 13}$ cm$^{-2}$ .
Both values are consistent with similar observations towards massive star forming
regions (e.g. Fontani et al.~\citeyear{fontani06}). The measured \mbox{$T_{\rm ex}$}\
corrected for filling factor are between $\sim 7$ and $\sim 50$ K and agree, on average,
with the kinetic temperatures measured from ammonia, except for the colder HMSCs
for which they are a factor of $\sim 2$ lower.
\begin{figure}
\centerline{\includegraphics[angle=-90,width=9cm]{nn2dp_nn2hp.eps}}
\caption{N$_{2}$D$^{+}$\ column density versus N$_{2}$H$^{+}$\ column density. Blue symbols correspond to
HMSCs (triangles: "warm" cores, see text); green
squares show HMPOs (open squares are upper limits); black asterisks
correspond to UC HII regions. The two lines indicate the average values of
$D_{\rm frac}$\ for the HMSC group (i.e. 0.26) and that of both the HMPO and
UC HII groups (i.e.~0.04). }
\label{fig_dfrac}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=-90,width=9cm]{Dfrac_plots_all.eps}}%
\caption{Deuterated fraction, $D_{\rm frac}$ = $N$(N$_{2}$D$^{+}$ )$/N$(N$_{2}$H$^{+}$ ), as a function of several parameters:
kinetic temperature (a), N(N$_{2}$H$^{+}$ ) (b), N$_{2}$H$^{+}$\ (3--2) line width (c) and N$_{2}$D$^{+}$\ (2--1) line width (d).
The symbols have the same meaning as in Fig.~\ref{fig_dfrac}. For some sources, the
errorbars are not visible because they are smaller than the symbol size. In panel (a), only the
sources with temperature derived from VLA ammonia observations are plotted.
}
\label{fig_dfrac_etal}
\end{figure}
The deuterated fraction for the three
evolutionary groups is shown in Fig.~\ref{fig_dfrac}, where we plot
$N$(N$_{2}$D$^{+}$ ) against $N$(N$_{2}$H$^{+}$ ). There is a statistically significant
separation between the HMSC group, which has the highest average $D_{\rm frac}$\
(mean value $\sim 0.26$, $\sigma = 0.22$), and the HMPOs and UC HII groups, which
have similar average deuterated fraction: mean $D_{\rm frac}$ $= 0.037$ ($\sigma = 0.017$) for HMPOs,
and mean $D_{\rm frac}$ = 0.044 ($\sigma = 0.024$) for UC HII regions. Both are about an order of
magnitude smaller than that associated with HMSCs.
A closer inspection of the data using the Kolmogorov-Smirnov statistical test
shows that the separation in $D_{\rm frac}$\ between the HMSC group and
that including both HMPOs and UC HII regions is indeed statistically
significant: the test shows that the probability of the
distributions being the same is very low ($P\sim 0.004$). This is
strong evidence that the two groups differ statistically.
Therefore, massive cores without stars have larger abundances of N$_{2}$D$^{+}$\
than cores with already formed massive (proto-)stars or proto-clusters.
The abundance of N$_{2}$D$^{+}$ , however, seems to remain constant, within
the uncertainties, after the formation of the protostellar object until the UC HII region
phase.
That $D_{\rm frac}$\ is of the order of $\sim 0.2- 0.3$, on average, in HMSCs, and then
drops by an order of magnitude after the onset of star formation,
indicates that the physical conditions acting on the abundance
of deuterated species (i.e. density and temperature) evolve similarly along both
the low- and high-mass star formation processes (see e.g.~Crapsi et
al.~\citeyear{crapsi} and Emprechtinger et al.~\citeyear{emprechtinger}).
Another interesting aspect emerging from Fig.~\ref{fig_dfrac} is that the
three HMSCs defined as "warm" in Table~\ref{tab_sources}
(AFGL5142-EC, 05358-mm3, and I22134-G, marked as triangles in
the figure) have $D_{\rm frac}$\ almost an order of magnitude smaller
than the others. These differ from the rest of the sub-sample of HMSCs
because they have temperatures \mbox{$T_{\rm k}$}\ $> 20$ K (see Table~\ref{tab_res} and panel (a) in
Fig.~\ref{fig_dfrac_etal}). High angular resolution studies indicate that they
could be externally heated
(Zhang et al.~\citeyear{zhang02}, Busquet~\citeyear{busquetphd},
S\'anchez-Monge~\citeyear{sanchez11}), so that they are likely to be perturbed by nearby
star formation and we expect their properties to be different from those
of the other, more quiescent cores.
An anticorrelation between $D_{\rm frac}$\ and the distance to heating sources such as
embedded protostars was found in the cluster-forming Ophiuchus-B
clump by Friesen et al~(\citeyear{friesen}). Our study tends to confirm
the Friesen et al.'s finding, even
though the poor statistics does not allow us to drive firm conclusions.
We also point out that the four cores selected from the Butler \& Tan~(2009)
work (G034-G2, G034-F1, G034-F2, G028-C1) have the highest values of
all measured $D_{\rm frac}$\ and lie in infrared-dark regions, away from active star formation.
These four cores are hence very similar to the prototype low-mass
'pre-stellar cores' (e.g. L1544, L694--2, see Crapsi et al.~2005) and we propose
that these are good 'massive pre--stellar core' candidates.
In Fig.~\ref{fig_dfrac_etal}, we plot $D_{\rm frac}$\ as a function of
several parameters: the kinetic temperature, the N$_{2}$H$^{+}$\ column
density, and the
line widths derived from both N$_{2}$H$^{+}$\ and N$_{2}$D$^{+}$ .
To search for possible (anti-)correlations between these parameters, we performed
two statistical tests: the Kendall's $\tau$ and the Spearman's $\rho$ rank correlation tests
\footnote{(http://www.statsoft.com/textbook/nonparametric-statistics/ )}.
For \mbox{$T_{\rm k}$} , the tests were applied to all sources in our survey with
gas temperature derived from VLA interferometric ammonia observations
(see Table~\ref{tab_res}).
As it can be inferred from panel (a) of Fig.~\ref{fig_dfrac_etal},
$D_{\rm frac}$\ and \mbox{$T_{\rm k}$}\ are slightly anti-correlated
($\tau = -0.38$, $\rho$= $-0.50$), and
$D_{\rm frac}$\ is also anti-correlated with the N$_{2}$H$^{+}$\ column density
($\tau =-0.43$, $\rho$= $-0.60$, panel (b) in Fig.~\ref{fig_dfrac_etal}).
We also find a very faint
anticorrelation between $D_{\rm frac}$\ and the N$_{2}$H$^{+}$\ line width ($\tau =-0.17$, $\rho=-0.23$) and
between $D_{\rm frac}$\ and the N$_{2}$D$^{+}$\ line width ($\tau=-0.25$, $\rho=-0.35$)
(panels (c) and (d) in in Fig.~\ref{fig_dfrac_etal}, respectively).
In particular, this latter is difficult to trust being affected by large uncertainties
in the N$_{2}$D$^{+}$\ line widths.
Emprechtinger et al.~(\citeyear{emprechtinger})
suggested that in low-mass star forming cores the deuteration is higher in colder and
more quiescent cores, according to the predictions of theoretical models.
A similar trend was found also in a small sample of seven massive star-forming clumps by
Fontani et al.~(\citeyear{fontani06}) including both HMPOs and UC HII regions but
not HMSCs.
That the warmer sources have lower $D_{\rm frac}$\ is not surprising and can be explained
by the CO freeze-out and the chemical reactions leading to the enhancement
of deuterium abundance being strongly depressed when the temperature
increases (Caselli et al.~\citeyear{caselli08}). The lack of correlation
between deuterated fraction and line widths tells us that the deuterium fractionation
process is independent of the gas turbulence.
This result agrees with high-angular resolution observations
of cluster-forming regions (Fontani et al.
~\citeyear{fontani09}, Busquet et al.~\citeyear{busquet}), but given
the large uncertainties (especially on the N$_{2}$D$^{+}$\ line widths),
the conclusions must be interpreted with caution.
We speculate that the anticorrelation between $D_{\rm frac}$\ and $N$(N$_{2}$H$^{+}$ )
could indicate that, assuming that $D_{\rm frac}$\ decreases in the protostellar
phase, the N$_{2}$H$^{+}$\ column density increases during the younger and most
embedded period of the protostellar phase, as suggested by Busquet~(2010)
for a different sample of sources.
In summary, our findings indicate that the physical conditions acting on the abundance
of deuterated species (i.e. density and temperature) evolve similarly during both
the low- and high-mass star formation process.
To confirm this, several questions however need to be answered:
in HMSCs, do the N$_{2}$D$^{+}$\ and N$_{2}$H$^{+}$\ emission peak at dust emission peak as in low-mass
pre--stellar cores? What is the nature of the N$_{2}$D$^{+}$\ emission
in evolved objects (HMPOs and UC HII regions)? Is the emission extended or fragmented
into several condensations (as found in the few massive star forming regions observed
with interferometers)? To answer these questions, higher
angular resolution observations are necessary.
On the theoretical side, we also need to investigate this proposed evolutionary sequence
using astrochemical models.
{\it Acknowledgments.}
We are grateful to the IRAM-30m telescope staff for their help during the observations.
Many thanks to the anonymous Referee for his/her comments that significantly
improved the work. FF has received funding from
the European Community's Seventh Framework Programme (FP7/2007--2013)
under grant agreement No. 229517.
AP, AS-M and GB are supported by the Spanish MICINN grant AYA2008-06189-C03
(co-funded with FEDER funds). AP is supported by JAEDoc CSIC fellowship
co-funded with the European Social Fund.
GB is funded by the Italian Space Agency (ASI) with contract ASI-I/005/07/1.
MA acknowledges support from the Swiss National Science Foundation
(grants PP002-110504 and PP00P2-130188)
| 2023-04-23T06:40:31.376Z | 2011-03-30T02:02:01.000Z | redpajama/arxiv | arxiv_0001 | 577 | 3,689 |
a008c94ec28ceb83feb141d8b30e4575406db58a | \section{1. Introduction}
Unconventional superfluid pairing such as Fulde-Ferrell-Larkin-Ovchinnikov (FFLO)-states have become of great interest recently, especially in the context of ultracold fermionic quantum gases \cite{koponen:finite:53dfgW}. As is well-known, BCS-, FFLO-, breached-pair-, Sarma- and normal phases can occur in imbalanced two-component Fermi-mixtures, if there is an attractive interspecies $s$-wave interaction, which may be approximated as $U(\textbf{x} - \textbf{y})= U \delta(\textbf{x} - \textbf{y})$ in the low-temperature limit \cite{pethick:bose}. The particle density, population imbalance and temperature are the relevant parameters determining which phase is actually realized, either in model calculations or in nature \cite{koponen:finite:53dfgW}.
From a theoretical point of view, the key concept is the fermionic $s$-wave superfluid order parameter, which is generally described by a non-vanishing pair annihilation function $\Delta (\textbf{x} , \textbf{x}^\prime) \equiv \langle \oper{\psi}{\downarrow}(\textbf{x}) \oper{\psi}{\uparrow}(\textbf{x}^\prime) \rangle$. Here $\psi_\sigma (\textbf{x})$ destroys a fermion at position $\textbf{x}$ in real-space and $\langle \ldots \rangle$ is the thermal average at inverse temperature $\beta \equiv 1/k_{\rm B} T$. In the context of Bose-Einstein condensates, the diagonal terms $\Delta(\textbf{x}) \equiv \Delta(\textbf{x},\textbf{x}^\prime = \textbf{x})$ are interpreted as a bosonic pair annihilation average at position $\textbf{x}$ \cite{pethick:bose}.
The special feature associated with FFLO-states is a spatially varying order parameter, which is either considered to be a spatially varying complex function \cite{koponen:finite:53dfgW} or to have sign changes as a real function of $\textbf{x}$ \cite{iskin:population:0FGw}. A common Ansatz for FFLO-states in translationally invariant systems is $\Delta (\textbf{x}) = |\Delta| \, \exp(i\textbf{q} \cdot \textbf{x})$, where $\textbf{q}\neq {\bf 0}$ is the momentum carried by a Cooper pair in the FFLO-state \cite{koponen:finite:53dfgW,koponen:FFLO:gd78,silva:population:ok09,rombouts:unconventional:9ju,cui:polarized:pob4}. A BCS-state with $s$-wave symmetry in a balanced mixture would be described by $\textbf{q}={\bf 0}$.
FFLO-states may arise in experiments with superimposed optical lattices \cite{koponen:finite:53dfgW}, which would theoretically be described by Hubbard-type lattice models, as well as in continuous systems \cite{machida:generic:IIde8}. In this paper we discuss some general effects, arising from the finite system size, which is an immediate consequence of the presence of a trap potential in ultracold quantum gases. We show that a complex Ansatz for the superfluid order parameter is in general unnecessary in the presence of a confining potential, both for continuous and for discrete systems.
Another major experimental goal, apart from the search for FFLO-states, is presently the demonstration of antiferromagnetism in systems with superimposed optical lattices and repulsive interaction \cite{werner:interaction:5GJk7,koetsier:achieving:ko0,schneider:metallic:vfghi}. By mapping our results for superfluid states in the attractive Hubbard model to the repulsive-$U$ model, we show that the antiferromagnetic order parameter is aligned parallelly to a space-independent vector. This insight may significantly reduce the numerical effort in theoretical studies of antiferromagnetism in ultracold quantum gases.
Furthermore we present numerical results for superfluidity obtained in the saddle-point approximation \cite{andersen:magnetic:f890,iskin:population:0FGw,gottwald:antiferro:jhk6}. With these results we demonstrate, in particular, the importance of the Hartree terms in a trapped system, which in the literature are often neglected. We also present results for a Hubbard model with spin-dependent hopping and show that sign changes in the superfluid order parameter are not generally a feature of imbalanced systems, as seems to be the standard belief today.
This paper is organized as follows. First, in section 2, we discuss the structure of the superfluid order parameter in finite systems, depending on the boundary conditions. By a particle-hole transformation, these results are then, in section 3, related to antiferromagnetic states in the repulsive-$U$ Hubbard model. For the attractive-$U$ Hubbard model, we demonstrate in section 4 the importance of the Hartree terms in systems with non-vanishing magnetization. Excited states containing vortices are studied in section 5, and we present numerical results for spin-dependent hopping in section 6. Finally section 7 contains a summary and our conclusions.
\section{2. Structure of the order parameter}
In order to determine the structure of the superfluid order parameter, we first consider the macroscopic description of superfluidity in continuum systems of finite extension and then discuss the consequences for finite discrete lattice systems.
It is standard knowledge that the macroscopic superfluid velocity and the superfluid current are described by the phase $\varphi (\textbf{x}) \equiv \textrm{arg}[\Delta (\textbf{x})]$ of the order parameter $\Delta (\textbf{x})$ \cite{paananen:superfluid:2set6Z,leggett:quantum:bg09}. Specifically, the superfluid current is defined as ${\bf j}(\textbf{x}) \equiv |\Delta (\textbf{x})|^2 \textbf{v}(\textbf{x})$, where $\textbf{v}(\textbf{x}) \equiv \frac{\hbar}{m} \nabla \varphi (\textbf{x})$ is interpreted as the superfluid velocity. This implies that in FFLO states with a spatially varying complex superfluid order parameter a nonvanishing current occurs. We will now explain why such equilibrium solutions of FFLO form are suppressed in systems possessing a trapping potential, due to the finite size and the nature of the effective boundary conditions of such systems.
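For illustration, the following minimal numerical sketch (in Python, with purely illustrative dimensionless parameters and an FFLO-like profile chosen by us) evaluates the phase $\varphi(\textbf{x})$, the superfluid velocity $\textbf{v}(\textbf{x})$ and the current ${\bf j}(\textbf{x})$ for a given complex order parameter on a one-dimensional grid.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters; units with hbar = m = 1.
hbar, m = 1.0, 1.0
L, N = 20.0, 400                     # system length and number of grid points
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Example FFLO-like profile: modulated amplitude times a plane-wave phase.
q = 0.8                              # pair momentum of the plane-wave Ansatz
Delta = (1.0 + 0.3*np.cos(0.5*x))*np.exp(1j*q*x)

phi = np.unwrap(np.angle(Delta))     # phase phi(x) = arg[Delta(x)]
v = (hbar/m)*np.gradient(phi, dx)    # superfluid velocity v = (hbar/m) dphi/dx
j = np.abs(Delta)**2*v               # current j = |Delta|^2 v

# For a pure plane-wave phase the velocity is constant, v = (hbar/m) q.
print(v[N//2], (hbar/m)*q)           # both close to 0.8
print(j[N//2])                       # local current |Delta|^2 v at the center
\end{verbatim}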
While the continuity equation for the superfluid order parameter, discussed above, can readily be derived for superfluid {\it bosonic\/} systems, the derivation for fermionic systems is more difficult, and the result requires careful interpretation. For fermions, an additional term appears in the continuity equation, which arises from the kinetic term of the Hamiltonian, as follows:
\begin{equation}
\frac{\partial}{\partial t} |\Delta(\textbf{x})|^2 + \nabla \cdot \textbf{j} = \frac{\hbar}{im} \Delta^{*} \, [\nabla_\textbf{x} \cdot \nabla_{\textbf{x}^\prime} \Delta(\textbf{x},\textbf{x}^\prime)]_{\textbf{x}^\prime \rightarrow \textbf{x}} + \textrm{h.c.}
\end{equation}
Since, apparently, both arguments $\textbf{x}$ and $\textbf{x}'$ are required independently to determine the right hand side of the continuity equation, it is clearly more convenient to consider a combined $2d$-dimensional variable $\textbf{y} \equiv (\textbf{x},\textbf{x}^\prime)^T$ and focus on the full correlation function $\Delta(\textbf{x},\textbf{x}^\prime)=\Delta(\textbf{y})$. One then arrives at a continuity equation of the following form:
\begin{equation}\label{vftOO0}
\frac{\partial}{\partial t} |\Delta(\textbf{y})|^2 + \nabla \cdot \textbf{j}_S = \frac{1}{i \hbar} \Delta^* \left\langle \left[ \mathcal{H}_U , \hat{\Delta} (\textbf{y}) \right] \right\rangle +\textrm{h.c.} \; ,
\end{equation}
where we have defined the $2d$-dimensional ``supercurrent'' $\textbf{j}_S$ analogously to the BEC current, but now as a function of the generalized $2d$-dimensional coordinates $\textbf{y}$. The right-hand side of \eqref{vftOO0} is the contribution from the interaction term and has the following explicit form for a contact interaction:
\begin{equation}\label{fg09OpP}
\begin{split}
\frac{U \Delta^*}{i \hbar} \left\langle \operdag{\psi}{\uparrow} (\textbf{x}) \oper{\psi}{\uparrow} (\textbf{x}) \oper{\psi}{\uparrow} (\textbf{x}^\prime) \oper{\psi}{\downarrow} (\textbf{x}) \,- \right.\hspace*{10ex} \\
\hspace*{10ex}\left. \operdag{\psi}{\downarrow} (\textbf{x}^\prime) \oper{\psi}{\downarrow} (\textbf{x}^\prime) \oper{\psi}{\downarrow} (\textbf{x}) \oper{\psi}{\uparrow} (\textbf{x}^\prime) \right\rangle + \textrm{h.c.}
\end{split}
\end{equation}
This contribution from the interaction term in the Hamiltonian is very small (negligible for our purposes) for various reasons. First, in our work we assume (as is customary for dilute BECs) that the interaction $U$ is {\it weak\/} \cite{pethick:bose}, so that this linear contribution in $U$ will tend to be numerically small.
Secondly, and more importantly, for superfluidity the diagonal terms $\Delta(\textbf{x},\textbf{x}^\prime = \textbf{x})$ can be expected to be dominant because of the short-ranged $s$-wave interaction, and for $\textbf{x}^{\prime} = \textbf{x}$ the interaction term in \eqref{fg09OpP} vanishes {\it exactly\/}.
As a result one can to a very good approximation simplify \eqref{vftOO0} to
\begin{equation}\label{vftOO00}
\frac{\partial}{\partial t} |\Delta(\textbf{y})|^2 + \nabla \cdot \textbf{j}_S = 0\; .
\end{equation}
It will become clear below that \eqref{vftOO00}, which does not hold rigorously but is nevertheless very accurate, simplifies the structure of the equilibrium solutions drastically.
We now apply \eqref{vftOO00} to a condensate in equilibrium, i.e., to a time-independent low-temperature system {\it without\/} vortices. Accordingly we assume that the phase $\varphi (\textbf{y})$ of the order parameter has no singularities. From the literature we know that the circulation of the superfluid velocity along a closed contour $C$ is generally quantized in a BEC \cite{pethick:bose}, i.e.,
\begin{equation}\label{bghII93}
\frac{m}{\hbar} \oint_{C} d\textbf{s} \cdot \textbf{v}(\textbf{x}) = 2 \pi n; \; n \in \mathbb{Z} \; .
\end{equation}
For a condensate without any vortices ($n=0$), the superfluid velocity $\textbf{v}(\textbf{x})$ has no singularities and, hence, satisfies $\nabla \times \textbf{v}(\textbf{x})=0$ on account of \eqref{bghII93}.
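The quantization condition \eqref{bghII93} can be checked numerically by accumulating the phase differences of the order parameter along a closed contour. The short sketch below (with illustrative fields constructed by us) returns the winding number $n$ for a single-vortex and for a vortex-free configuration.
\begin{verbatim}
import numpy as np

def winding_number(delta_on_contour):
    """Winding number of the phase of a complex field sampled along a closed
    contour (the last point is assumed to be adjacent to the first one)."""
    phase = np.angle(delta_on_contour)
    dphi = np.diff(np.concatenate([phase, phase[:1]]))
    dphi = (dphi + np.pi) % (2*np.pi) - np.pi   # map each step into [-pi, pi)
    return int(round(dphi.sum()/(2*np.pi)))

# contour: circle of radius 3 around the origin in the (x, y) plane
theta = np.linspace(0.0, 2*np.pi, 200, endpoint=False)
z = 3.0*np.exp(1j*theta)             # contour points encoded as x + i*y

vortex = z/np.abs(z)                 # Delta ~ exp(i*arg(x+iy)): vortex at origin
uniform = np.ones_like(z)            # constant-phase, vortex-free profile

print(winding_number(vortex))        # -> 1
print(winding_number(uniform))       # -> 0
\end{verbatim}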
It is now easy to show from Gau\ss 's theorem that the superfluid order parameter $\Delta(\textbf{y})$ may be chosen to be {\it real\/}, provided that the boundary conditions are {\it fixed\/}, since then $\Delta(\textbf{y})=0$ and $\textbf{j}_S =0$ if $\textbf{x}$ or $\textbf{x}^\prime$ lies on the boundary:
\begin{eqnarray}\label{vfq789}
0 =\int_{\partial V} d{\bf F} \cdot (\varphi \, \textbf{j}_S) &=& \int_{V} dV \, \nabla \cdot \left( \varphi \, \textbf{j}_S \right)
\\ \nonumber
\approx \int_{V} dV \, \nabla \varphi \cdot \textbf{j}_S &=& \frac{m}{\hbar} \int_{V} dV \left| \frac{\textbf{j}_S}{\Delta(\textbf{y})} \right|^2 .
\end{eqnarray}
In the derivation of \eqref{vfq789} we have used \eqref{vftOO0} or \eqref{vftOO00}, i.e., $\nabla \cdot \textbf{j}_S =0 $. It now follows from \eqref{vfq789} that $\textbf{j}_S=0$ for all values of $\textbf{y}$, and we may, therefore, choose $\Delta(\textbf{x},\textbf{x}^\prime) \in \mathbb{R}$ for all $\{\textbf{x}, \textbf{x}^\prime\}$. For periodic boundary conditions this argument fails, since the quantities mentioned above do not vanish at the boundaries.
\subsection{Lattice systems}
The properties of the superfluid order parameter, defined for smooth coordinates $\textbf{x}$, may be mapped onto its discrete analogue $\Delta (\textbf{i})$, defined on a lattice, by the use of wave functions describing localized states $w_{\textbf{i}} (\textbf{x})$ at lattice positions $\textbf{i}$ \cite{paananen:noise:op0e}; if the trapping potential is switched off, the states $w_{\textbf{i}} (\textbf{x})$ would be the Wannier-functions in the lowest band. When an optical lattice is superimposed, the quantum gas is effectively described by a discretized second-quantized Hamiltonian of Hubbard-type form \cite{iskin:population:0FGw},
\begin{eqnarray}\label{chT6a}
\mathcal{H} &=& -t \sum_{( {\bf ij} ), \sigma} \operdag{c}{i \sigma} \oper{c}{j \sigma} + \sum_{\bf i \sigma} \left( V{\bf i}^{2} - \mu_{\sigma} \right) \oper{n}{i \sigma} \\
\nonumber &+& U \sum_{\bf i} \oper{n}{i \uparrow} \oper{n}{i \downarrow} \; .
\end{eqnarray}
Here $t$ is the nearest-neighbor hopping, $V>0$ describes the steepness of the parabolic trap, the parameters $\mu_\sigma$ (with $\sigma =\uparrow ,\downarrow$) are the spin-dependent chemical potentials, and $U$ is the on-site interaction strength, which may be chosen to be attractive \cite{zwierlein:fermionic} or repulsive \cite{schneider:metallic:vfghi}. The center of the harmonic trap is located at $\textbf{i}=0$.
Our arguments imply that, also in this Hubbard-type model, the superfluid order parameter $\Delta(\textbf{i})$ may always be chosen as a real function (provided, of course, that $U$ is attractive), since due to the confining potential $V\,\textbf{i}^{2}$ there is no occupation for large $|\textbf{i}|$, and therefore no superfluid current is present at the ``border'' of the system. Here we have assumed that $w_{\textbf{i}} (\textbf{x}) \in \mathbb{R}$, which is {\em exact} for $V=0$ and should therefore be quite accurate also in the general case.
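The vanishing occupation at the border of the trapped lattice can be verified directly by diagonalizing the non-interacting ($U=0$) part of \eqref{chT6a}. The following sketch (Python, with hypothetical parameter values that are not meant to match the figures below) does this for a single spin species.
\begin{verbatim}
import numpy as np

# Hypothetical parameters in units of the hopping t.
L, t, V, mu, beta = 16, 1.0, 0.05, 0.5, 50.0

# Single-particle Hamiltonian of the U = 0 Hubbard model: nearest-neighbor
# hopping on an L x L lattice with fixed (open) boundaries plus the trap.
sites = [(ix, iy) for ix in range(L) for iy in range(L)]
idx = {s: n for n, s in enumerate(sites)}
H = np.zeros((L*L, L*L))
for (ix, iy), n in idx.items():
    rx, ry = ix - (L - 1)/2, iy - (L - 1)/2      # trap centered on the lattice
    H[n, n] = V*(rx**2 + ry**2) - mu
    for jx, jy in [(ix + 1, iy), (ix, iy + 1)]:  # open boundaries: no wrapping
        if (jx, jy) in idx:
            m = idx[(jx, jy)]
            H[n, m] = H[m, n] = -t

E, W = np.linalg.eigh(H)
f = 0.5*(1.0 - np.tanh(0.5*beta*E))              # Fermi function 1/(e^{beta E}+1)
n_i = (np.abs(W)**2 @ f).reshape(L, L)           # local density n(i)

print(n_i[L//2, L//2])   # sizable occupation in the trap center
print(n_i[0, 0])         # essentially zero occupation at the border
\end{verbatim}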
Iskin and Williams \cite{iskin:population:0FGw} and Chen et al. \cite{chen:exploring:kl4dw} assumed $\Delta (\textbf{i})$ to be {\it real\/} for numerical simplicity. Our arguments show that there is, in fact, no need for a complex superfluid order parameter, if vortices are excluded. On the other hand there is a large body of literature \cite{koponen:finite:53dfgW,silva:population:ok09,rombouts:unconventional:9ju,cui:polarized:pob4,koponen:FFLO:gd78} using various Ans\"atze with complex forms of $\Delta (\textbf{i})$ for trapped quantum gases, especially in the context of the local density approximation (LDA). These results are not consistent with our arguments and should therefore be carefully rechecked.
\begin{figure*}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{MagnVortex.eps}}
(a)
\end{minipage}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{DeltaVortex.eps}}
(b)
\end{minipage}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{PhasVortex.eps}}
(c)
\end{minipage}
\caption{Magnetization (a), $|\Delta(\textbf{i})|$ (b) and $\varphi(\textbf{i})$ (c) in a vortex state with $n=1$ depending on $\textbf{i}$-position on a $32 \times 32$ lattice. While $|\Delta(\textbf{i})|$ depends only on the radial position, the magnetization is maximal in the trap center, where $\varphi$ is singular. In contrast to a vortex-free system, the magnetization profile does not break the $\pi/2$-rotational symmetry of the lattice. The system parameters have been chosen as: $U=-2.5$, $V=0.025$, $\beta=1000$, $\mu \equiv (\mu_\uparrow + \mu_\downarrow)/2=0.5$ and $\Delta \mu =0.4$ in units of $t$.}\label{gt66ui0}
\end{figure*}
Since these analytical results concerning the general structure of the order parameter rely on approximative assumptions, we emphasize that, for the case of the Hubbard model \eqref{chT6a}, these results may also be confirmed numerically (within the unrestricted Hartree-Fock approximation). Starting from periodic boundary conditions one may obtain complex (vortex-free) solutions of the order parameter $\Delta(\textbf{i})$ by using $\Delta_{0} (\textbf{i}) = \Delta \exp(i \textbf{q} \cdot \textbf{i})$ as the starting point for a numerical iteration. After convergence of the iteration process is reached, the sum over the squares of the imaginary parts $\sum_{\textbf{i}} \Im\{\Delta(\textbf{i})\}^2$ may be minimized by a global phase rotation as follows: $\Delta(\textbf{i}) \rightarrow \Delta(\textbf{i}) \exp(i \varphi) \; \forall \, \textbf{i}$. For periodic boundary conditions one finds (after the minimization): $[\sum_{\textbf{i}} \Im\{\Delta(\textbf{i})\}^2]/[\sum_{\textbf{i}} |\Delta(\textbf{i})|^2] \approx 0.3$. If one reduces the hopping amplitudes across the periodic boundaries to a relative strength of 20\% one obtains $[\sum_{\textbf{i}} \Im\{\Delta(\textbf{i})\}^2]/[\sum_{\textbf{i}} |\Delta(\textbf{i})|^2] \approx 0.008$ and if one turns the hopping amplitudes across the periodic boundaries off (fixed boundary conditions) the imaginary part reduces to $[\sum_{\textbf{i}} \Im\{\Delta(\textbf{i})\}^2]/[\sum_{\textbf{i}} |\Delta(\textbf{i})|^2] \approx 10^{-5}$, with otherwise the same parameters. This is in perfect agreement with our previous analytical statement: Also according to our numerical results, complex order parameters (without vortices) may occur for periodic boundary conditions, while, for fixed boundaries, the order parameter can be chosen as a {\em real} function, $\Delta (\textbf{i}) \in \mathbb{R}$.
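The global phase rotation used above has a simple closed-form solution, since $\sum_{\textbf{i}} \Im\{\Delta(\textbf{i})\,e^{i\varphi}\}^2 = \tfrac{1}{2}\big[\sum_{\textbf{i}} |\Delta(\textbf{i})|^2 - {\rm Re}\big(e^{2i\varphi}\sum_{\textbf{i}} \Delta(\textbf{i})^2\big)\big]$, which becomes minimal for $2\varphi=-\arg\sum_{\textbf{i}}\Delta(\textbf{i})^2$. The sketch below (with randomly generated illustrative data) implements this rule and compares it to a brute-force scan.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Illustrative complex order-parameter field (no physical content).
Delta = (rng.normal(size=100) + 1j*rng.normal(size=100))*np.exp(0.4j)

def imag_weight(Delta, phi):
    """sum_i Im[Delta_i exp(i phi)]^2"""
    return np.sum(np.imag(Delta*np.exp(1j*phi))**2)

# Closed-form optimal global phase: 2*phi = -arg(sum_i Delta_i^2).
phi_opt = -0.5*np.angle(np.sum(Delta**2))

# Brute-force comparison on a grid of phases (phi is defined modulo pi here).
phis = np.linspace(0.0, np.pi, 2001)
phi_scan = phis[np.argmin([imag_weight(Delta, p) for p in phis])]

print(imag_weight(Delta, phi_opt), imag_weight(Delta, phi_scan))
print(imag_weight(Delta, phi_opt)/np.sum(np.abs(Delta)**2))  # normalized measure
\end{verbatim}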
\section{3. Connection to the repulsive-$U$ model}
The Hubbard model for attractive interaction is connected to the repulsive model (for bipartite lattices) via a canonical (so-called special particle-hole) transformation \cite{liebWu:HubbardModel}:
\begin{equation}
\oper{c}{i \uparrow} \rightarrow (-1)^{\textbf{i}} \operdag{c}{i \uparrow} \quad , \quad \oper{c}{i \downarrow} \rightarrow \oper{c}{i \downarrow} \; ,
\end{equation}
where $(-1)^{\textbf{i}}$ is $+1$ on the $A$-sublattice and $-1$ on the $B$-sublattice. Under this canonical transformation, the Hubbard Hamiltonian \eqref{chT6a} is transformed into
\begin{eqnarray}\label{durb8H}
\mathcal{H} &\rightarrow& -t \sum_{( {\bf ij} ), \sigma} \operdag{c}{i \sigma} \oper{c}{j \sigma} - \sum_{\bf i \sigma} \left( \sigma V{\bf i}^{2} + \mu^{\prime}_{\sigma} \right) \oper{n}{i \sigma} \\
\nonumber &-& U \sum_{\bf i} \oper{n}{i \uparrow} \oper{n}{i \downarrow} +\textrm{const} \; ,
\end{eqnarray}
while simultaneously the superfluid order parameter $\Delta (\textbf{i}) = \langle \oper{c}{i \uparrow} \oper{c}{i \downarrow} \rangle$ is transformed into a staggered magnetic order parameter in the $xy$-plane (and vice versa):
\begin{equation}
\langle \oper{c}{i \uparrow} \oper{c}{i \downarrow} \rangle \leftrightarrow (-1)^{\textbf{i}} \langle \operdag{c}{i \uparrow} \oper{c}{i \downarrow} \rangle = (-1)^{\textbf{i}} (S_{\textbf{i}, x} + i S_{\textbf{i}, y}) \; .
\end{equation}
Hence, the Hubbard Hamiltonian \eqref{chT6a} with a repulsive $U$ is transformed into a Hamiltonian with an attractive $U$ and a spatially varying Zeeman-term. On account of this Zeeman term, any equilibrium state of \eqref{durb8H} has a vanishing occupation for the ``down''-spin species ($n_{\textbf{i} \downarrow} \simeq 0$) for large $|\textbf{i}|$ and a full occupation for the ``up''-spin species ($n_{\textbf{i} \uparrow} \simeq 1$). As a consequence, in the negative-$U$ model, the superfluid flow vanishes in the large-$|\textbf{i}|$ regime. Hence, the superfluid order parameter $\Delta(\textbf{i})$ may globally be chosen to be {\it real\/} for $U<0$ on the basis of the arguments presented in section 2. Transforming this {\it real\/} order parameter for $U<0$ back to the repulsive-$U$ model demonstrates that, in this case, the $xy$-antiferromagnetic order may always be chosen along the $x$-direction, confirming the numerical results found in \cite{gottwald:antiferro:jhk6} within Hartree-Fock calculations and the results found in \cite{PhysRevB.83.054419} within R-DMFT calculations for particle-imbalanced Fermi-mixtures.
In balanced systems the symmetry of the underlying Hubbard model is larger\cite{gottwald:antiferro:jhk6}, namely $SU(2)$ instead of $U(1)$, and, therefore, antiferromagnetic order is allowed also along the $z$-direction. For a balanced system, our arguments imply that the order parameter has a unique (globally fixed) direction. For instance a spiral structure in the $xz$-plane cannot occur, since, for such a state, one can rotate the system around the $x$-axis [since the balanced Hamiltonian is fully $SU(2)$-invariant] to obtain an incommensurate $xy$-antiferromagnet, which is not allowed on account of our arguments presented above. Hence, also for balanced systems, the direction of the antiferromagnetic order parameter is globally defined. This result depends, of course, critically on the presence of a trap, which determines the boundary conditions. Indeed, in translationally invariant systems away from half-filling, incommensurate states are well-known and have been predicted analytically long ago \cite{schulz:incommensurate:bg89I}.
\section{4. Role of the Hartree terms}
In sections 5 and 6 (see below) we will illustrate the above ideas and present numerical results for the attractive-$U$ Hubbard model in the saddle-point approximation, which takes in general the well-known form
\begin{eqnarray}\label{vGt2ws}
\oper{n}{i \uparrow} \oper{n}{i \downarrow} &\rightarrow& \Delta(\textbf{i}) \left( \oper{c}{i \uparrow} \oper{c}{i \downarrow} + \operdag{c}{i \downarrow} \operdag{c}{i \uparrow} \right) - \Delta^2(\textbf{i}) \\
\nonumber &+& \langle \oper{n}{i \uparrow} \rangle \oper{n}{i \downarrow} + \oper{n}{i \uparrow} \langle \oper{n}{i \downarrow} \rangle - \langle \oper{n}{i \uparrow} \rangle \langle \oper{n}{i \downarrow} \rangle
\end{eqnarray}
and may require the following comment. In several previous publications \cite{iskin:population:0FGw,chen:exploring:kl4dw}, the {\it Hartree terms\/}, which correspond to the second line in Eq.\ \eqref{vGt2ws}, were neglected for numerical simplicity, since they were assumed to constitute an irrelevant deformation of the trapping potential $V\textbf{i}^2$. This assumption, however, is only correct in spatial regions where the magnetization vanishes ($m_\textbf{i} \equiv n_\uparrow - n_\downarrow = 0$). In magnetized spatial regions the Hartree terms create an additional effective space-dependent Zeeman-term, which leads to a decrease of the local magnetization for $U<0$. As a consequence, for the purposes of this paper, we must include the Hartree terms in our calculations, since they are of the same order of magnitude as the superfluidity terms [first line in Eq.\ \eqref{vGt2ws}]. In fact, we find that inclusion of the Hartree terms is physically highly significant and leads to results {\it differing\/} from previous literature \cite{chen:exploring:kl4dw}, in which they were neglected. Specifically, if Hartree terms are neglected one finds parameter regions, in which the superfluid order parameter has sign changes in the tangential direction, while the magnetization displays a rapidly oscillating behavior, as visible, e.g., in Fig.\ 4(g) in Ref.\ \cite{chen:exploring:kl4dw}. However, inclusion of the Hartree terms leads to a simpler radial oscillation, similar to the behavior visible, e.g., in Fig.\ 3(e) in Ref.\ \cite{chen:exploring:kl4dw}. In \cite{chen:exploring:kl4dw} it is shown that, if Hartree terms are neglected, the structure of the order parameter strongly depends on the filling in the center of the trap $n_C$. Including the Hartree terms we find the following results as compared to Ref.\ \cite{chen:exploring:kl4dw}:
\begin{enumerate}
\item In the low-filling region ($n_C<1$) we find qualitatively the same results.
\item In the high-filling regime ($n_C \approx 2$) the magnetization is smeared out by the effective Zeeman term, leading to a significant reduction of the peaks in the magnetization by a factor of 2-3.
\item In the medium-filling regime we find qualitatively different behavior. Instead of the rapidly oscillating phase we find, depending on the filling, essentially the same structure as is found either in the low- or in the high-filling regime.
\end{enumerate}
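For completeness we indicate how the decoupling \eqref{vGt2ws}, including the Hartree terms, enters a numerical self-consistency loop. The following minimal Bogoliubov-de Gennes sketch (Python) is written purely for illustration: the parameter values are hypothetical, a real order parameter is assumed from the outset, and no attempt is made to reproduce the figures of this paper.
\begin{verbatim}
import numpy as np

# Hypothetical parameters in units of t (attractive on-site interaction U < 0).
L, t, V, U, beta = 12, 1.0, 0.06, -2.5, 100.0
mu = {+1: 0.6, -1: 0.2}               # spin-dependent chemical potentials

sites = [(ix, iy) for ix in range(L) for iy in range(L)]
idx = {s: n for n, s in enumerate(sites)}
N = L*L

def kinetic_and_trap():
    """Hopping plus parabolic trap (no chemical potential, no Hartree shift)."""
    H = np.zeros((N, N))
    for (ix, iy), n in idx.items():
        rx, ry = ix - (L - 1)/2, iy - (L - 1)/2
        H[n, n] = V*(rx**2 + ry**2)
        for jx, jy in [(ix + 1, iy), (ix, iy + 1)]:   # fixed (open) boundaries
            if (jx, jy) in idx:
                m = idx[(jx, jy)]
                H[n, m] = H[m, n] = -t
    return H

H0 = kinetic_and_trap()
n_up, n_dn = 0.5*np.ones(N), 0.5*np.ones(N)           # initial guesses
Delta = 0.5*np.ones(N)

for it in range(300):
    # Hartree terms: spin sigma feels the on-site shift U*<n_{-sigma}>.
    H_up = H0 + np.diag(U*n_dn - mu[+1])
    H_dn = H0 + np.diag(U*n_up - mu[-1])
    # BdG matrix in the Nambu basis (c_up, c_dn^dagger); Delta is kept real.
    HBdG = np.block([[H_up, np.diag(Delta)], [np.diag(Delta), -H_dn]])
    E, W = np.linalg.eigh(HBdG)
    u, v = W[:N, :], W[N:, :]
    f = 0.5*(1.0 - np.tanh(0.5*beta*E))               # Fermi function
    # Self-consistent fields; sums run over all 2N quasiparticle states.
    new_n_up = u**2 @ f
    new_n_dn = v**2 @ (1.0 - f)
    new_Delta = U*((u*v) @ f)
    conv = np.max(np.abs(new_Delta - Delta))
    mix = 0.3                                         # linear mixing for stability
    n_up = (1 - mix)*n_up + mix*new_n_up
    n_dn = (1 - mix)*n_dn + mix*new_n_dn
    Delta = (1 - mix)*Delta + mix*new_Delta
    if conv < 1e-6:
        break

m_loc = n_up - n_dn                                   # local magnetization
print(it, np.max(np.abs(Delta)), np.max(np.abs(m_loc)))
\end{verbatim}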
\section{5. Thermodynamically stable vortices}
The derivation of the statement, that the superfluid order parameter can globally be assumed to be a {\it real\/} function, obviously breaks down if the superfluid velocity is singular at any point in real-space, such as happens in the presence of vortices. We will now present numerical results, obtained within the saddle-point approximation, containing such vortices. Note that these solutions do not correspond to the minimum of the grand potential and have, therefore, to be interpreted as excitations, where the superfluid fraction of the system carries a finite angular momentum. Vortices tend to be numerically stable especially in the high-filling regime, in which the order parameter is small in the center of the trap. Results are shown in Figs. \ref{gt66ui0}, \ref{bgderT} and \ref{bkopwW}. In Fig.\ \ref{gt66ui0} we show a solution with vortex-number $n=1$, and, for comparison, we present a configuration lowering the grand potential at the same parameters in Fig.\ \ref{bkopwW}. Note that the magnetization and $\Delta(\textbf{i})$ break the $\pi/2$-rotational lattice symmetry in the energetically lower state, while they do not in the vortex state. In Fig.\ \ref{bgderT} we show a state with vortex-number $n=3$ in a balanced mixture.
These results show that it is possible to obtain numerically stable vortex-solutions in a system with a superimposed strong optical lattice. In contrast to lattice-free systems \cite{zwierlein:fermionic}, the angular motion of the condensate is not free in lattice systems, but is instead caused by nearest-neighbor tunneling processes. In addition, we have been able to determine the dependence of the magnetization distribution on the vortex quantum number $n$.
\begin{figure*}
\begin{minipage}{0.8\columnwidth}
\resizebox{0.95\columnwidth}{!}{\includegraphics{DeltaVortexNoMagn.eps}}
(a)
\end{minipage}
\begin{minipage}{0.8\columnwidth}
\resizebox{0.95\columnwidth}{!}{\includegraphics{PhasVortexNoMagn.eps}}
(b)
\end{minipage}
\caption{$|\Delta(\textbf{i})|$ (a) and $\varphi(\textbf{i})$ (b) in a vortex state with $n=3$ depending on $\textbf{i}$-position on a $32 \times 32$ lattice.
The system is balanced and therefore the magnetization is not shown. The system parameters have been chosen as: $U=-2.5$, $V=0.025$, $\beta=1000$, $\mu \equiv (\mu_\uparrow + \mu_\downarrow)/2=0.5$ and $\Delta \mu =0$ in units of $t$.}\label{bgderT}
\end{figure*}
\section{6. Numerical Results for spin-dependent hopping}
In addition to our analytical results regarding the nature of the superfluid order parameter, we now present some numerical results for a superfluid trapped Fermi-mixture. While we have shown that a real function $\Delta (\textbf{x})$ is sufficient to describe an FFLO-state, sign changes in $\Delta (\textbf{x})$ are commonly understood to characterize FFLO-states in a trapped system. Iskin and Williams \cite{iskin:population:0FGw} and Chen et al. \cite{chen:exploring:kl4dw} find these sign changes to be typical for an imbalanced superfluid mixture and interpret them as an FFLO state. Performing a saddle-point approximation \cite{andersen:magnetic:f890,gottwald:antiferro:jhk6} for a modified attractive Hubbard Hamiltonian \eqref{chT6a} with spin-dependent nearest-neighbor hopping we do not find a sign change in $\Delta (\textbf{x})$ to be typical for an imbalanced mixture. Spin-dependent hopping arises, e.g., if both spin species have different masses \cite{wang:quantum:lf5gtS}
\begin{equation}
\frac{t_\uparrow}{t_\downarrow} \approx \frac{m_\downarrow}{m_\uparrow} \; .
\end{equation}
If hopping is spin-dependent ($t_\uparrow \neq t_\downarrow$), an imbalanced mixture arises naturally at $\mu_\uparrow = \mu_\downarrow$, since the free bandwidths of both fermion species differ, in contrast to the case of spin-independent hopping ($t_\uparrow = t_\downarrow \equiv t$).
Results for superfluid mixtures with asymmetric hopping ($t_\uparrow \neq t_\downarrow$) are shown in Fig.\ \ref{bg98Io}. As one can see, the balanced mixture has a sign-change in the superfluid order parameter, while the imbalanced mixture has virtually no sign change ($\textrm{min} \{\Delta\} \approx -0.009$, while $\textrm{max} \{\Delta\} \approx 0.273$). We conclude, therefore, that, for general hopping amplitudes $t_\sigma$, sign changes in $\Delta (\textbf{i})$ arise from a difference in the chemical potentials $\mu_\downarrow \neq \mu_\uparrow$ and not from an imbalance itself. Furthermore we find that the local magnetization is not generally maximal at sites where the superfluid order parameter has its sign change.
\begin{figure*}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{MagnSpher.eps}}
(a)
\end{minipage}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{DeltaSpher.eps}}
(b)
\end{minipage}
\begin{minipage}{0.66\columnwidth}
\resizebox{1.0\columnwidth}{!}{\includegraphics[clip=true]{PhasSpher.eps}}
(c)
\end{minipage}
\caption{Magnetization (a), $|\Delta(\textbf{i})|$ (b) and $\varphi(\textbf{i})$ (c) in the energetically lowest state with the same parameters as in Fig. \ref{gt66ui0}. $\Delta(\textbf{i})$ is a real function. The magnetization breaks the $\pi/2$-rotational symmetry of the lattice.}\label{bkopwW}
\end{figure*}
\section{7. Summary and Conclusions}
We investigated the general structure of an $s$-wave pairing order parameter relevant for BCS- and FFLO-states. We have shown that, due to the finite size and the absence of vortices in the ground state, the superfluid order parameter may always be chosen as a {\it real\/} function, calling into question the complex Ans\"atze that are widespread in the literature. Furthermore we have investigated the role of the (often neglected) Hartree terms and showed how they qualitatively influence the structure of the superfluid order parameter. Finally we demonstrated that sign changes in the order parameter are not a general feature of particle-imbalanced systems but rather a consequence of the grand-canonical parameter difference $\mu_\uparrow - \mu_\downarrow$, again clarifying and generalizing previous findings.
\begin{figure*}
\begin{minipage}{0.8\columnwidth}
\resizebox{0.95\columnwidth}{!}{\includegraphics[clip=true]{Plot_Bal.eps}}
(a)
\end{minipage}
\begin{minipage}{0.8\columnwidth}
\resizebox{0.95\columnwidth}{!}{\includegraphics[clip=true]{Plot_UnBal.eps}}
(b)
\end{minipage}
\caption{Particle density $n \equiv (n_\uparrow + n_\downarrow)/2$ (blue/dark line), population imbalance $m \equiv (n_\uparrow - n_\downarrow)/2$ (orange/grey line) and superfluid order parameter $\Delta$ (green/light grey line) on a $30 \times 30$ square lattice depending on $\textbf{i}_x$-position at $\textbf{i}_{y}=0$ (slice through the trap center). The parameters are chosen as $U=-2.4$, $V=0.04$, $\beta=50$, $\mu \equiv (\mu_\uparrow + \mu_\downarrow)/2=-0.5$, $t_\uparrow=0.5$ and $t_\downarrow=1$ in both (a) and (b). In the balanced case (a) we have chosen $\Delta \mu = 0.275$ in order to compensate the asymmetry arising from the unequal hopping terms, while in (b) we have $\Delta \mu =0$. The resulting particle numbers are (a) $n_\downarrow \approx n_\uparrow \approx 65$ and (b) $n_\downarrow \approx 56$, $n_\uparrow \approx 71$. $\Delta(\textbf{i})$ has virtually no sign change as a function of real-space in the imbalanced case (b), while the sign changes in the balanced case (a).}\label{bg98Io}
\end{figure*}
\section{Introduction}
\label{introduction}
After the initial discovery of a new state of matter in high-energy
nuclear collisions at the Relativistic Heavy Ion Collider (RHIC),
the focus is now shifting to quantifying the properties of what is
believed to constitute a strongly coupled Quark-Gluon Plasma (sQGP).
Basic quantities characterizing the medium are its thermal spectral
functions and transport properties. In heavy-ion collisions (HICs),
the former are most directly studied in the electromagnetic (vector)
channel via the thermal emission of lepton pairs
(cf.~Ref.~\cite{Rapp:2009yu} for a recent review). The latter,
however, are best studied using observables with small but
controlled deviations from thermal equilibrium. Thus, heavy quarks
(charm ($c$) and bottom ($b$)), whose thermal equilibration time is
expected to be of the order of the QGP lifetime in HICs, are a
promising tool to quantify flavor transport, and eventually deduce
general properties of the QGP as formed in these
reactions~\cite{Rapp:2009my}.
The large masses of $c$ and $b$ quarks ($m_{c,b}$) enable us to
assess the modifications of their momentum spectra in HICs via a
diffusion process in an evolving background medium as formulated,
e.g., within a Fokker-Planck equation~\cite{Svetitsky:1987gq}
(typical early temperatures of the medium produced at RHIC,
$T\simeq250$\,MeV~\cite{Adare:2008fqa}, are well below
$m_{c,b}\simeq 1.5, 4.5$\,GeV). A reliable determination of the
heavy-quark (HQ) transport coefficients in the QGP depends on
several components. First and foremost these are microscopic
calculations of the thermal HQ relaxation rate in the
QGP~\cite{Svetitsky:1987gq,vanHees:2004gq,Moore:2004tg,Mustafa:2004dr,
vanHees:2005wb,vanHees:2007me,Gubser:2006bz,CasalderreySolana:2006rq,
Peshier:2008bg,CaronHuot:2009uh,Riek:2010fk} (see
Ref.~\cite{Rapp:2009my} for a review). Second, the coefficients need
to be implemented into a realistic bulk medium evolution (see, e.g.,
Ref.~\cite{Gossiaux:2011ea} for a recent discussion). Third,
heavy-flavor (HF) interactions in evolution phases other than the
QGP have to be evaluated, i.e., in the so-called pre-equilibrium
phase as well as in the hadronic phase. The former is of a
relatively short duration, $\Delta\tau_{\rm pre} \lesssim 1$\,fm/$c$,
and is sometimes mimicked by reducing the formation time of the QGP.
The duration of the hadronic phase is substantially longer,
$\Delta\tau_{\rm had}$$\simeq$\,5-10\,fm/$c$. Its relevance for HF
phenomenology is further augmented by the fact that the hadronic
medium inherits the full momentum anisotropy from the QGP, believed
to be close to the finally observed one. Thus, even a rather weak
coupling of HF hadrons to hadronic matter can lead to noticeable
contributions to their elliptic flow. Furthermore, if the QGP
realizes a minimum in its viscosity-to-entropy-density ratio,
$\eta/s$, close to the (pseudo-) critical temperature,
$T_c\simeq170$\,MeV, a hadronic liquid close to $T_c$ should possess
similar properties. This is usually referred to as a ``quark-hadron
duality", as suggested, e.g., in calculations of thermal dilepton
emission rates~\cite{Rapp:2009yu}.
Charm diffusion in hadronic matter has received little attention to date
(see Ref.~\cite{Laine:2011is} for a very recent estimate using heavy-meson
chiral perturbation theory). Its potential relevance has been noted in
Ref.~\cite{Rapp:2009my} based on calculations of $D$-meson spectral
functions in nuclear matter using effective hadron
Lagrangians~\cite{Lutz:2005vx,Tolos:2007vh}, as well as in a hot pion
gas~\cite{Fuchs:2004fh}. In the present paper, we augment these works to
evaluate charm diffusion in hadronic matter. Since the latter features
many resonances at temperatures approaching $T_c$, we not only utilize
$D\pi$ and $DN$ interactions but also scattering amplitudes off excited
hadrons ($K$, $\eta$, $\rho$, $\omega$, $K^*$, $\Delta$), as constructed
in the literature using effective Lagrangians and constrained by
charm-resonance spectroscopy. In this sense we provide a lower estimate
of the diffusion coefficient, based on existing elastic $D$-hadron
amplitudes.
Our paper is organized as follows. In Sec.~\ref{sec_Dint} we
``reconstruct" microscopic models for $D\pi$ scattering in a hot
pion gas (via $s$-channel resonances; Sec.~\ref{ssec_Dpi}), for $D$
scattering off strange and vector mesons (Sec.~\ref{ssec_DM}), and
off baryons (Sec.~\ref{ssec_DB}), by parameterizing pertinent
scattering amplitudes. In Sec.~\ref{sec_trans} these are applied to
calculate thermal $D$-meson relaxation rates and diffusion
coefficients in hot hadronic matter at vanishing chemical potential,
first in a pion gas (Sec.~\ref{ssec_Api}) and then in a resonance
gas (Sec.~\ref{ssec_Ahg}), and finally including chemical potentials
as appropriate for heavy-ion collisions at RHIC
(Sec.~\ref{ssec_Arhic}). Conclusions are given in
Sec.~\ref{sec_concl}.
\section{$D$-Meson Scattering and Width in Hot Hadronic Matter}
\label{sec_Dint}
In this section we recapitulate basic elements of the $D$-hadron
scattering amplitudes and apply them to calculate pertinent $D$-meson
widths in hadronic matter. We only employ known amplitudes from the
literature for the most abundant hadrons in a hot medium and combine
them into a lower-limit estimate for the $D$-meson width (and $D$-meson
relaxation rate in Sec.~\ref{sec_trans}). We first focus on pion-gas
effects, followed by interactions with strange and vector mesons, as
well as hot nuclear matter.
\subsection{Hot Pion Gas}
\label{ssec_Dpi}
Following Ref.~\cite{Fuchs:2004fh} $D$ interactions in the pion gas
are dominated by its chiral partner, $D_0^*(2308)$, by the vector
$D^*(2010)$ and by the tensor meson $D_2^*(2460)$.
Phenomenological control over their interaction vertices has become
possible due to new observations of $D$-meson resonances by the
BELLE Collaboration~\cite{Abe:2003zm}.
Especially, the large width of the $D_0^*(2308)$,
$\Gamma_0^{\rm tot}=276\pm99$\,MeV,
attributed to $S$-wave pion decay, leads to a large $D\pi D_0^*$
coupling constant. With the constituent-quark model assignment of
isospin $I$=1/2 for $D$-states, the pertinent forward scattering
amplitudes have been parameterized in Breit-Wigner form as
\begin{eqnarray}
\mathcal{M}(s,\Theta=0)=\sum_{j=0}^2 \frac{8\pi\sqrt{s}}{k}
\frac{(2j+1) \sqrt{s}\Gamma_j^{D\pi}}{s-M^2_j+i\sqrt{s}\Gamma_j^{\rm{tot}}}
\ , \label{MDpi}
\end{eqnarray}
where $\sqrt{s}$ and $k$ denote the center-of-mass energy and
3-momentum in the $D\pi$ collision, respectively, and
$j$ is the resonance spin.
The total resonance decay width $\Gamma_j^{\rm tot}$ is assumed to be
saturated by the partial one, $\Gamma_{0,1}^{D\pi}$, for $D^*$ and $D^*_0$,
while for the narrow state $D_2^*$ ($\Gamma_2^{\rm tot}=45.6\pm12.5$\,MeV)
a branching fraction estimated by the Particle Data Group is
employed~\cite{pdg2010}. The resulting total $D\pi$ cross section is
displayed in the upper panel of Fig.~\ref{fig_D-mes}.
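Although we make no attempt here to reproduce the curves of Fig.~\ref{fig_D-mes}, the size of the resonant $S$-wave $D\pi$ cross section can be estimated from the standard elastic Breit-Wigner form corresponding to Eq.~(\ref{MDpi}), $\sigma_j=(2j+1)\,(4\pi/k^2)\,s\,(\Gamma_j^{D\pi})^2/[(s-M_j^2)^2+s\,(\Gamma_j^{\rm tot})^2]$, where constant widths are used and isospin averaging factors are ignored. The short sketch below (Python) evaluates this expression for the dominant $D_0^*(2308)$; the resulting numbers are rough estimates only.
\begin{verbatim}
import numpy as np

GEV2_TO_MB = 0.3894               # (hbar*c)^2 in mb*GeV^2
mD, mpi = 1.867, 0.138            # D and pion masses (GeV)
M0, G0 = 2.308, 0.276             # D0*(2308) mass and total width (GeV)

def cm_momentum(rs):
    """Center-of-mass momentum k for D-pi at sqrt(s) = rs (GeV)."""
    s = rs**2
    return np.sqrt((s - (mD + mpi)**2)*(s - (mD - mpi)**2))/(2.0*rs)

def sigma_Dpi_S(rs, Gel=G0, Gtot=G0, j=0):
    """Elastic Breit-Wigner cross section (mb) for a single partial wave,
    with constant widths and without isospin averaging factors."""
    s, k = rs**2, cm_momentum(rs)
    bw = s*Gel**2/((s - M0**2)**2 + s*Gtot**2)
    return (2*j + 1)*(4.0*np.pi/k**2)*bw*GEV2_TO_MB

for rs in [2.308, 2.5, 2.8]:
    print(rs, round(sigma_Dpi_S(rs), 1), "mb")
# on resonance the result is close to the S-wave unitarity limit 4*pi/k^2
\end{verbatim}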
The $D$-meson self-energy in a pion gas can now be obtained by
the standard procedure of closing the in- and outgoing pion
lines of the forward $D\pi$ amplitude with a thermal
pion propagator. In the narrow-width approximation for the pion
propagator, the $D$-meson self-energy takes the form
\begin{equation}
\Pi(p_D;T)=\int\frac{d^3p_{\pi}}{(2\pi)^32E_{\pi}}f_B(E_{\pi};T)
\mathcal{M}(s,\Theta=0),
\label{Pi}
\end{equation}
where $f_B$ is the thermal Bose function and $E_{\pi}$ the on-shell pion
energy. Alternatively, one can obtain the collision rate (or on-shell
width), $\Gamma$=$-{\rm Im}\Pi(p_D^2$=$m_D^2,T)/m_D$ from the Boltzmann
equation as
\begin{eqnarray}
\Gamma(p_D;T)=\frac{\gamma_{\pi}}{2E_D(p_D)}\int\frac{1}{(2\pi)^9}
\frac{d^3q}{2E_\pi(q)} \frac{d^3p'}{2E_D(p')}
\frac{d^3q'}{2E_\pi(q')}
\nonumber\\
\times f_B(E_\pi(q);T) \ \overline{|\mathcal{M}|^2}\
(2\pi)^4\delta^4(p+q-p'-q'),
\label{Gamma2}
\end{eqnarray}
where $\gamma_{\pi}$=3 denotes the spin-isospin degeneracy factor of
the in-medium particle (pion), and $p,q$ and $p',q'$ are the momenta of
in- and outgoing particles, respectively. Equations~(\ref{Pi}) and
(\ref{Gamma2}) are related via the optical theorem (we have checked
their consistency). In the following we employ the latter since it is
close to the form of the thermal relaxation rate discussed in
Sec.~\ref{sec_trans} below.
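For a $D$-meson at rest the final-state integrations in Eq.~(\ref{Gamma2}) can be carried out once $\overline{|\mathcal{M}|^2}$ is expressed through the elastic cross section, leaving the one-dimensional form $\Gamma(T)=\gamma_\pi\int\!\frac{d^3q}{(2\pi)^3}\,f_B(E_\pi;T)\,\sigma(s)\,q/E_\pi$. The sketch below (Python) evaluates this integral with a constant cross section as a placeholder; it merely illustrates the numerical procedure and is not meant to reproduce Fig.~\ref{fig_D-mes}.
\begin{verbatim}
import numpy as np

HBARC = 0.1973            # GeV*fm
mD, mpi = 1.867, 0.138    # GeV
gamma_pi = 3.0            # pion spin-isospin degeneracy

def width_D_at_rest(T, sigma_mb=10.0, qmax=3.0, nq=3000):
    """Collision rate (GeV) of a D-meson at rest in a thermal pion gas for a
    constant elastic cross section sigma_mb (in millibarn)."""
    sigma = sigma_mb*0.1/HBARC**2            # mb -> fm^2 -> GeV^-2
    q = np.linspace(qmax/nq, qmax, nq)       # pion momentum grid (GeV)
    dq = q[1] - q[0]
    E = np.sqrt(q**2 + mpi**2)
    fB = 1.0/(np.exp(E/T) - 1.0)             # thermal Bose function
    # Gamma = gamma_pi/(2 pi^2) int dq q^2 f_B(E) sigma q/E
    return gamma_pi/(2.0*np.pi**2)*np.sum(q**3*fB*sigma/E)*dq

for T in [0.100, 0.150, 0.180]:
    G = width_D_at_rest(T)
    print(T, round(G*1e3, 1), "MeV =", round(G/HBARC, 3), "fm^-1")
\end{verbatim}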
\begin{figure}[!t]
\includegraphics[width=1\columnwidth]{Dpion_K_rho_cross_section.eps}
\includegraphics[width=1\columnwidth]{D_width_pion_rho.eps}\caption{(a) Total elastic cross sections for
$D$-$\pi$ (solid line), $S$-wave $D$-$K$ (dashed line) and
$D$-$\rho$ (dash-dotted line) scattering based on the invariant
amplitudes as discussed in the text; vertical lines indicate the
respective thresholds. (b) Total $D$-meson scattering width in a
meson gas (dash double dotted line), consisting of contributions
from Goldstone bosons and vector mesons.} \label{fig_D-mes}
\end{figure}
The scattering width of a $D$-meson at rest in a pion gas is displayed
in the lower panel of Fig.~\ref{fig_D-mes}. For $T$=150-180\,MeV we find
$\Gamma_D$=20-40\,MeV,
where the chiral partner
of the $D$ provides the largest contribution through $S$-wave $D\pi$
scattering. For constant resonance widths, we find close agreement with
the results plotted in Fig.~2 of Ref.~\cite{Fuchs:2004fh}, while for
energy-momentum dependent widths (as quoted in Ref.~\cite{Fuchs:2004fh})
our results displayed in Fig.~\ref{fig_D-mes} turn out to be slightly
smaller (by ca.~20\%).
Chiral effective Lagrangians have been applied to $D$-meson
interactions with Goldstone bosons in
Refs.~\cite{Kolomeitsev:2003ac,Guo:2006fu,Gamermann:2006nm}. Once
the parameters are constrained by empirical information, the
resulting scattering amplitudes are very similar and closely agree
with the resonance ansatz of Ref.~\cite{Fuchs:2004fh} as employed
here. Chiral Lagrangians also predict non-resonant interactions. In
the (repulsive) $S$-wave $I$=3/2 $D\pi$ channel the scattering
length has been computed as $a_{D\pi}^{I=3/2}$$\equiv$$-f(s_{\rm
thr})$=$-$0.1\,fm in unitarized chiral perturbation theory
($\chi$PT)~\cite{Guo:2009ct} vs. $a_{D\pi}^{I=3/2}$=$-$0.15\,fm in
$\chi$PT to NNLO~\cite{Geng:2010vw}. We employ the former to
construct the $I$=3/2 scattering amplitude as
$\mathcal{M}(s)=8\pi\sqrt{s}f(s)$.
\subsection{Strange and Vector Mesons}
\label{ssec_DM}
In a hot meson gas, the next most abundant species after the pions
are the strangeness carrying Goldstone bosons and the light and
strange vector mesons.
For $D(K,\bar K)$ interactions we directly employ the results of the
(unitarized) chiral effective theory of
Ref.~\cite{Kolomeitsev:2003ac} (by parameterizing the amplitudes in
Figs.~1 and 2 therein). In the $S$-wave $DK$ channel, with
isospin-strangeness $(I,S)$=(0,1), the results are constrained by
the (loosely bound) $D_{s0}^*(2317)$ state (the analogue of
$D_0^*(2308)$ in $D\pi$), and again closely agree with
Refs.~\cite{Guo:2006fu,Gamermann:2006nm}. The application to the
$S$-wave $DK$ $(I,S)$=(1,1) and $D\bar K$ $(I,S)$=(0,$-$1) produces
Feshbach-type resonances (tetra-quarks) right at threshold
($E_{DK}^{\rm thr}$=2360\,MeV).
For $D\bar K$ in the $(I,S)$=(1,$-$1) channel the Born amplitude is
predicted to be repulsive (analogous to $I$=3/2 $D\pi$); the
scattering length has been calculated as $a_{\rm D\bar
K}^{I=1}$=$-$0.22\,fm in unitarized $\chi$PT~\cite{Guo:2009ct} vs.
$a_{\rm D\bar K}^{I=1}$=$-$0.33\,fm in $\chi$PT at
NNLO~\cite{Geng:2010vw}. Analogous to $I$=3/2 $D\pi$, we employ the
former to construct the pertinent scattering amplitude.
For $D\eta$ scattering we also adopt Ref.~\cite{Kolomeitsev:2003ac},
where the $S$-wave $(I,S)$=(1/2,0) amplitude is governed by a narrow
state of mass 2413\,MeV just below threshold.
The evaluation of $DV$ scattering ($V$=$\rho$, $\omega$, $K^*$)
requires going beyond the chiral Lagrangian. This has been done in
Ref.~\cite{Gamermann:2007fi} starting from $SU(4)$ flavor symmetry
and then implementing chiral breaking terms. This framework,
properly unitarized, recovers the resonance poles computed with the
chiral Lagrangians, but extends to axialvector states coupling to
$S$-wave $DV$ interactions (in particular $D_1(2420)$, $D_1'(2427)$,
$D_{s1}(2460)$, $D_{s1}(2536)$). A convenient Breit-Wigner
parametrization of the elastic coupling of $DV$ to the dynamically
generated resonances has been quoted as
\begin{equation}
\mathcal{M}(s)=\frac{|g_{DV}|^2}{s-s_R} ,
\label{DV}
\end{equation}
where $g_{DV}$ is the dimensionful coupling constant and $s_R$ the
complex resonance-pole position. We include the three $I$=1/2
resonance couplings to $D\rho$ and $D\omega$ from Tab.~7 in
Ref.~\cite{Gamermann:2007fi}, two $I$=0 and two $I$=1 resonances
with $DK^*$ coupling (Tabs.~5 and 4 in \cite{Gamermann:2007fi},
respectively) and one $I$=0 state coupling to $D\bar{K}^*$ (Tab.~8
in \cite{Gamermann:2007fi}). As a representative, the isospin
$I$=1/2 $D\rho$ cross section is shown in the upper panel of
Fig.~\ref{fig_D-mes}.
The lower panel of Fig.~\ref{fig_D-mes} shows the temperature
dependence of mesonic contributions to the $D$-meson scattering
width as calculated from the above amplitudes. The width from
anti-/kaons is the next largest contribution after the pion. The
effect of vector mesons is smaller but significant, especially for
the $K^*$. The total $D$ width in a hot meson gas reaches
$\sim$80\,MeV around $T$$\simeq$180\,MeV, which should be a lower
limit since several channels have still been neglected, e.g., higher
partial waves (except for pions) and inelastic channels (e.g.
$DK^*\leftrightarrow D_s\pi$).
\subsection{Hot Nuclear Matter}
\label{ssec_DB}
To evaluate $D$ scattering off baryons we follow the same
strategy as for mesons, employing vacuum scattering amplitudes. More
elaborate many-body calculations for $D$-mesons in nuclear matter are
available~\cite{Lutz:2005vx,Tolos:2010rn}, but our procedure keeps
consistency with the mesonic sector and allows for an estimate of the
systematic error due to in-medium effects (e.g., selfconsistency
of selfenergy and in-medium scattering amplitudes).
We start from Ref.~\cite{Lutz:2005vx} where the $T$-matrix results of an
effective $DN$ interaction with coupled channels have been parameterized
in analogy to Eq.~(\ref{DV}) as
\begin{equation}
T(\sqrt{s})=\frac{|g|^2}{\sqrt{s}-m_R+ i\Gamma_R/2} \ ,
\label{DB}
\end{equation}
where $m_R$ is the resonance mass and $\Gamma_R$ the width. The
$T$-matrix, Eq.~(\ref{DB}), is related to the invariant scattering
amplitude through $\mathcal{M}(s)=N(s)T(s)$ with
$N(s)$=$(s+m_N^2-m_D^2+2m_N\sqrt{s})/(2\sqrt{s})$. The key states
are the dynamically generated $I$=0 $\Lambda_c(2595)$ and $I$=1
$\Sigma_c(2620)$ $S$-wave bound states. The former is experimentally
well established, while the latter is ca.~180\,MeV too deep compared
to the empirical $\Sigma_c(2800)$ state. However, the $I$=1 $DN$
cross section above threshold (shown in the upper panel of
Fig.~\ref{fig_DB}) is quite comparable to the results of a recent
meson-exchange model calculation~\cite{Haidenbauer:2010ch} in which
the $\Sigma_c$ is generated at its experimental mass (in fact, at
threshold the $I$=1 scattering length in
Ref.~\cite{Haidenbauer:2010ch} is significantly larger than for the
$\Sigma_c(2620)$ state of Ref.~\cite{Lutz:2005vx}). In the $I$=0
channel, the scattering lengths of Refs.~\cite{Lutz:2005vx} and
\cite{Haidenbauer:2010ch} agree well. The corresponding cross
section is also shown in the upper panel of Fig.~\ref{fig_DB}.
\begin{figure}[!t]
\hspace{4mm}
\includegraphics[width=\columnwidth]{DN_Delta_cross_section.eps}
\includegraphics[width=\columnwidth]{D_width_N_Delta.eps} \caption{(a) Elastic cross sections for
$S$-wave $D$-$N$ scattering in the $I$=1 (solid line) and $I$=0
(dotted line) channel, and $D$-$\Delta$ scattering (dashed line,
$I$=1; vertical line: $D\Delta$ threshold) based on the invariant
amplitudes as discussed in the text; (b) $D$-meson widths from
$D$-$N$ scattering (solid line, including both $I=1$ and $I=0$
channels) and from $D$-$\Delta$ scattering (dashed line) at
vanishing chemical potential.} \label{fig_DB}
\end{figure}
The $D\bar N$ scattering amplitude can be inferred from $\bar D N$ due
to $C$-symmetry of strong interactions. We have found no evidence in the
literature for (multi-quark) resonances in this system and adopt the
$D^-N$ elastic $S$-wave amplitude calculated in Ref.~\cite{Lutz:2005vx}.
The pertinent scattering and relaxation rates are about a factor of
$\sim$3 smaller than from $DN$ scattering.
The only available calculation of $D\Delta$ we are aware of has been
conducted in Ref.~\cite{Hofmann:2006qx}, within the same framework
as our $DN$ amplitudes are based on. In the only available $I$=1
$S$-wave channel the parametrization, Eq.~(\ref{DB}), reflects a
rather deep bound state ($m_R$=2613\,MeV, $\Gamma_R$$\simeq$0,
$g$=8.6). The cross section is shown in the upper panel of
Fig.~\ref{fig_DB}. Unlike the $D\bar N$ case, the $D\bar\Delta$
system is predicted to support a shallow $I$=1 bound state
($m_R$=2867\,MeV, $\Gamma_R$$\simeq$0, $g$=5.8). As a result, the
contribution of $D\bar\Delta$ scattering to the $D$-meson width and
relaxation rate is about half of that from $D\Delta$ scattering.
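As a rough illustration of how such pole parametrizations translate into cross sections, the sketch below evaluates Eq.~(\ref{DB}) with the $D\Delta$ parameters quoted above and converts the amplitude via $\sigma=|\mathcal{M}|^2/(16\pi s)$. This assumes an isotropic ($S$-wave) amplitude, takes $N(s)$ with $m_N\to m_\Delta$, and ignores spin-isospin averaging factors, so the numbers are order-of-magnitude estimates only.
\begin{verbatim}
import numpy as np

GEV2_TO_MB = 0.3894                  # (hbar*c)^2 in mb*GeV^2
mD, mDel = 1.867, 1.232              # D and Delta masses (GeV)
mR, GR, g = 2.613, 0.0, 8.6          # pole parameters quoted in the text

def M_DDelta(rs):
    """Invariant amplitude from T = |g|^2/(sqrt(s) - m_R + i Gamma_R/2),
    converted with N(s) taken as in the DN case but with m_N -> m_Delta."""
    s = rs**2
    T = g**2/(rs - mR + 0.5j*GR)
    N = (s + mDel**2 - mD**2 + 2.0*mDel*rs)/(2.0*rs)
    return N*T

def sigma_DDelta(rs):
    """Rough elastic cross section (mb), assuming an isotropic amplitude."""
    s = rs**2
    return abs(M_DDelta(rs))**2/(16.0*np.pi*s)*GEV2_TO_MB

for rs in [3.11, 3.2, 3.4]:          # sqrt(s) above the D-Delta threshold
    print(rs, round(sigma_DDelta(rs), 1), "mb")
\end{verbatim}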
The lower panel of Fig.~\ref{fig_DB} shows that the width of a
$D$-meson at rest from scattering off thermally excited nucleons and
$\Delta$'s at vanishing chemical potential ($\mu_B$=0) is comparable
to that of light vector mesons (cf.~lower panel of
Fig.~\ref{fig_D-mes}). When adding $\bar N$ and $\bar\Delta$
contributions, the baryon-induced $D$-meson width computed here
amounts to ca.~15\,MeV at $T$$\simeq$180\,MeV. Note again that we
have neglected higher partial waves as well as higher excited
resonances including strange anti-/baryons.
It is instructive to compare our nucleon-induced width to a selfconsistent
many-body calculation~\cite{Tolos:2007vh}. From Fig.~6 in
Ref.~\cite{Tolos:2007vh}
we read off $\Gamma_D$$\simeq$100(80)\,MeV at $T$=100(150)\,MeV and
$\varrho_N$=$\varrho_0$, vs.~$\Gamma_D$$\simeq$75(65)\,MeV in our approach.
This indicates that neglecting in-medium effects does not lead to an
overestimate in our calculation.
\section{Thermal Relaxation in Hadronic Matter}
\label{sec_trans}
The standard expression for the thermal relaxation rate of
a particle ($D$) in a heat bath in terms of its scattering
amplitude on medium particles ($h$) reads~\cite{Svetitsky:1987gq}
\begin{eqnarray}
A(p,T)&=&\frac{\gamma_{h}}{2E_D}\int\frac{1}{(2\pi)^9}
\frac{d^3q}{2E_h} \ \frac{d^3p'}{2E_D'} \ \frac{d^3q'}{2E_h'} f^h(E_h;T)
\nonumber\\
&& \hspace{-0.5cm}
\times \ \overline{|\mathcal{M}_{Dh}|^2}\
(2\pi)^4\delta^{(4)}(p+q-p'-q') (1-\frac{\vec p\cdot\vec p\,'}{\vec p^2})
\nonumber\\
&\equiv& \langle\langle 1-\frac{\vec p\cdot\vec p\,'}{\vec
p^2}\rangle\rangle \label{A}
\end{eqnarray}
with $\vec p$ ($\vec q$) and $\vec p\,'$ ($\vec q\,'$) being the
$D$-meson ($h$) momentum before and after the interaction,
respectively. The form of this expression is very similar to the one
for the total width, Eq.~(\ref{Gamma2}). The latter can be expressed
in terms of the average defined above as $\Gamma = \langle\langle 1
\rangle\rangle$. In Sec.~\ref{ssec_Api} we evaluate $A(p,T)$ for
$D$-mesons in a pion gas ($h$=$\pi$) and in Sec.~\ref{ssec_Ahg} for
all other hadrons whose amplitudes have been constructed in the
previous section.
In Sec.~\ref{ssec_Arhic} we evaluate the relaxation rate at
finite baryon and meson chemical potentials as applicable to the
hadronic phase below the chemical freeze-out temperature in heavy-ion
collisions at RHIC.
\subsection{Pion Gas}
\label{ssec_Api}
\begin{figure}[!t]
\hspace{4mm}
\includegraphics[width=1.1\columnwidth]{Dpion_K_N_relaxation_rate_bands.eps} \caption{Thermal relaxation
rate of $D$-mesons at momentum $p_D$=0.1\,GeV in a gas of pions
(solid line), anti-/kaons (dash-double-dotted line), nucleons
(dash-dotted line) and antinucleons (dotted line), as a function of
temperature at vanishing chemical potentials using the elastic
scattering amplitudes constructed in Sec.~\ref{sec_Dint}. The bands
represent results obtained with constant $S$-wave $D\pi$, $DK$ and
$DN$ cross sections of 7-10\,mb and 10-15\,mb, respectively.}
\label{fig_ApiKN}
\end{figure}
The thermal relaxation rate for a $D$-meson at rest due to
scattering off pions in a thermal gas in chemical equilibrium
($\mu_\pi$=0), using the scattering amplitude of
Eq.~(\ref{MDpi}), is displayed in Fig.~\ref{fig_ApiKN} as a function
of temperature. From $T$=100\,MeV to 180\,MeV the rate increases by
about a factor of 7, basically following the increase in pion
density from 0.2 to 1.4$\varrho_0$. Its magnitude at $T$=180\,MeV,
$A\simeq(25\,{\rm fm})^{-1}$, is small but not negligible.
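The quoted rise of the pion density is easily verified from $n_\pi(T)=\frac{\gamma_\pi}{2\pi^2}\int_0^\infty dq\,q^2\,[\exp(E_\pi/T)-1]^{-1}$; the short numerical check below (assuming $\varrho_0=0.16\,$fm$^{-3}$) indeed gives about 0.2$\varrho_0$ and 1.4$\varrho_0$ at $T$=100 and 180\,MeV, respectively.
\begin{verbatim}
import numpy as np

HBARC = 0.1973     # GeV*fm
mpi = 0.138        # GeV
rho0 = 0.16        # nuclear saturation density (fm^-3) used as reference

def pion_density(T, qmax=3.0, nq=6000):
    """Thermal pion density (fm^-3) at mu_pi = 0 with degeneracy gamma_pi = 3."""
    q = np.linspace(qmax/nq, qmax, nq)
    dq = q[1] - q[0]
    E = np.sqrt(q**2 + mpi**2)
    n = 3.0/(2.0*np.pi**2)*np.sum(q**2/(np.exp(E/T) - 1.0))*dq   # GeV^3
    return n/HBARC**3

for T in [0.100, 0.180]:
    print(T, round(pion_density(T)/rho0, 2), "rho_0")   # -> about 0.2 and 1.4
\end{verbatim}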
When replacing the $D\pi$ amplitude in Eq.~(\ref{Gamma2}) by one
yielding a constant $S$-wave cross section of
$\sigma_{D\pi}^S$=7-10\,mb, the pertinent band for the relaxation
rate essentially covers the result of our microscopic calculations.
The latter are closer to the upper end of the band at lower $T$ but
to the lower end at higher $T\gsim150$\,MeV. This reflects the
increased thermal motion of pions probing higher $\sqrt{s}$ in the
amplitudes where the latter decrease.
Compared to the recent work of Ref.~\cite{Laine:2011is}, where
$D$-meson diffusion in a lukewarm pion gas has been evaluated in
heavy-meson chiral perturbation theory, our value for the friction
coefficient is much smaller; e.g., at $T$=50(100)\,MeV,
Ref.~\cite{Laine:2011is} finds $A=\kappa/(2m_DT)\simeq 0.00055(0.05)$/fm,
about a factor of $\sim$4(10) larger than our pion-gas results,
$A\simeq0.00015(0.005)$/fm.
\subsection{Hadron Resonance Gas}
\label{ssec_Ahg}
\begin{figure}[!t]
\hspace{4mm}
\includegraphics[width=1.1\columnwidth]{D_total_rate_p=0.1GeV_vs_T.eps}
\caption{Thermal relaxation rate of $p_D$=0.1\,GeV $D$-mesons using
empirical amplitudes in a hadron gas at vanishing (solid line) and
finite (dash-dotted line with squares) chemical potentials, as well as
in a pion gas (dotted line). The dashed line corresponds to isotropic
$D$-meson cross sections with mesons (7\,mb) and baryons (10\,mb).
The filled box at the upper right indicates
charm-quark relaxation rates in a QGP at 1.2\,$T_c$ from
in-medium $T$-matrix calculations~\cite{Riek:2010fk}.}
\label{fig_Ahrg}
\end{figure}
A hot hadron gas in equilibrium is characterized
by an increasing abundance of resonances with rising temperature.
For example, at $T$=180\,MeV, the density of baryons plus antibaryons
is above $\varrho_0$ and that of mesons with masses below 2\,GeV is
above 3$\varrho_0$. To improve the estimate of $D$-meson diffusion
in a pion gas for a more realistic hadron-resonance gas, we
include rescattering on all particles which at $T$=180\,MeV and
$\mu_h$=0 have a density at least 0.1$\varrho_0$, i.e., $\pi$, $K$,
$\eta$, $\rho$, $\omega$ and $K^*(892)$ in the meson sector and
anti-/nucleons, and $\Delta(1232)$, $\bar\Delta(1232)$ in the
anti-/baryon sector, using all scattering amplitudes of
Sec.~\ref{sec_Dint}.
The resulting $D$-meson friction coefficient in hadronic matter
at vanishing chemical potentials increases substantially over the
pion-gas result, by about a factor of $\sim$2(3) at $T$=150(180)\,MeV,
see Fig.~\ref{fig_Ahrg}. The individual contributions of $K+\bar K$
and $N$, $\bar N$ are compared to constant-cross-section calculations
in Fig.~\ref{fig_ApiKN}, indicating that a ``constituent'' light-quark
cross section of 3-4\,mb
is compatible with our lower-limit estimates. A quantitative
decomposition of the individual hadron contributions to
kinetic $D$-meson relaxation at $T$=180\,MeV is given in Tab.~\ref{tab_A}.
Anti-/kaons provide the next-to-leading contributions after the pions,
while vector mesons, nucleons and Deltas play a smaller but non-negligible
role.
\begin{table}[!b]
\begin{tabular}{|c||c|c|}
\hline
Hadrons & $L_{I,2J}$ & $A$~[fm$^{-1}$] \\
\hline\hline
$\pi$ & $S_{1/2,0}$,~$P_{1/2,2}$,~$D_{1/2,4}$,~$S_{3/2,0}$ & $0.0371$\\
\hline
$K+\eta$ & $S_{0,0}$,~$S_{1,0}$ & $0.0236$\\
\hline
$\rho+\omega+K^*$ & $S_{1/2,2}$,~$S_{0,2}$,~$S_{1,2}$ & $0.0129$\\
\hline
$N+{\bar N}$ & $S_{0,1}$,~$S_{1,1}$ & $0.0128$\\
\hline
$\Delta+{\bar \Delta}$ & $S_{1,3}$ & $0.0144$\\
\hline
\end{tabular}
\caption{Contributions to the thermal $D$-meson relaxation
rate at $T$=180\,MeV indicating the quantum numbers of the included
scattering channels with $L$: partial wave, $I$: isospin and
$J$: total angular momentum.}
\label{tab_A}
\end{table}
As an estimate of medium effects in our vacuum $Dh$ amplitudes
we have performed a calculation for $A$ where we have introduced an
in-medium broadening of $\Gamma_R^{\rm med}$=200\,MeV into the Breit-Wigner
parameterizations. The final result for $A$ changes by less than 5\%.
Quantitatively, a value of $A$$\simeq$0.1/fm translates into a thermal
relaxation time of $\tau_D$$\simeq$10\,fm. This time scale is quite
comparable to lifetimes of the hadronic phase in Au-Au collisions at
RHIC. It is also similar to non-perturbative calculations of charm-quark
relaxation in the QGP using in-medium $T$-matrix
interactions~\cite{vanHees:2007me,Riek:2010fk}. This is encouraging
both from a conceptual (in the sense of a quark-hadron continuity
through $T_c$) and a practical point of view (relaxation rates of
this magnitude currently provide a fair phenomenology of current
heavy-flavor data at RHIC~\cite{vanHees:2005wb,vanHees:2007me}).
\begin{figure}[!t]
\hspace{4mm}
\includegraphics[width=1.1\columnwidth]{2piTD_s_vs_T.eps}
\caption{Spatial $D$-meson diffusion coefficient
in units of the medium's thermal wavelength, $1/(2\pi T)$. The
filled box at the lower right indicates the range of values
calculated for charm quarks in the QGP at 1.2\,$T_c$ within an
in-medium $T$-matrix approach~\cite{Riek:2010fk}.}
\label{fig_Ds}
\end{figure}
In Fig.~\ref{fig_Ds} we display the spatial $D$-meson
diffusion coefficient, $D_s=T/(m_DA(p=0,T))$, in hadronic matter.
When normalized to the thermal wavelength, 1/(2$\pi T$), this
quantity decreases with $T$, reaching a value of $\sim$5 at
$T$=180\,MeV. Again, this is surprisingly close to $T$-matrix
results for charm quarks in the QGP~\cite{Riek:2010fk}, and,
together with those results, suggests a minimum across the
hadron-to-quark transition.
\subsection{RHIC Conditions}
\label{ssec_Arhic}
In relativistic HICs the chemical freeze-out
of hadron ratios~\cite{BraunMunzinger:2003zd} at a temperature of
$T_{\rm chem}$$\simeq$170\,MeV is significantly earlier than thermal
freeze-out of the light hadrons at $T_{\rm fo}$$\simeq$100\,MeV.
Therefore, to conserve the observed particle ratios in the hadronic
evolution, effective chemical potentials are required, reaching
appreciable values even at RHIC energies~\cite{Rapp:2002fc}, e.g.,
$\mu_\pi$($T$=100\,MeV)$\simeq$80\,MeV. We implement the chemical
potentials into the thermal hadron distribution functions and
recalculate the $D$-meson equilibration rate, Eq.~(\ref{A}). As a
result, the latter is enhanced at temperatures below $T_{\rm chem}$,
staying above 1/(25\,fm) for $T$$\ge$130\,MeV (cf.~Fig.~\ref{fig_Ahrg}),
implying noticeable modifications of $D$-meson spectra in the hadronic
phase of nuclear collisions at RHIC. For example, if the hadronic evolution
lasts for $\Delta\tau_{\rm had}$$\simeq$5\,fm, the expected modification
amounts to ca.~$(1-\exp[-A\,\Delta\tau_{\rm had}])\simeq20\%$.
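The simple numbers quoted above follow directly from $\tau_D=1/A$, $D_s=T/(m_D A)$ and the modification factor $1-\exp[-A\,\Delta\tau_{\rm had}]$, as the short check below (Python) illustrates.
\begin{verbatim}
import math

HBARC = 0.1973                       # GeV*fm
mD = 1.867                           # GeV

# Near T_c: A ~ 0.1/fm at T = 180 MeV (value quoted in the text)
T, A = 0.180, 0.1                    # GeV and fm^-1
tau_D = 1.0/A                        # thermal relaxation time ~ 10 fm
D_s = T/(mD*A)                       # spatial diffusion coefficient in fm
print(tau_D, round(2.0*math.pi*(T/HBARC)*D_s, 1))    # -> 10.0  5.5 (i.e. ~5)

# RHIC estimate: A >~ 1/(25 fm) over a hadronic lifetime of ~5 fm
A_rhic, dtau = 1.0/25.0, 5.0         # fm^-1 and fm
print(round(1.0 - math.exp(-A_rhic*dtau), 2))         # -> 0.18, i.e. ca. 20%
\end{verbatim}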
\section{Summary and Conclusion}
\label{sec_concl}
We have evaluated kinetic transport of $D$-mesons in hot hadronic
matter by elastic scattering off the 10 most abundant hadron
species. The interaction strength with mesons and baryons has been
estimated from existing microscopic models for $D$-hadron
scattering, constrained by chiral symmetry and vacuum spectroscopy.
In a pion gas at $T$=100\,MeV, $D\pi$ resonance scattering leads to
a relaxation rate which is substantially smaller than what has been
found in a recent calculation using heavy-meson chiral perturbation
theory. Yet, when extrapolating our full results to temperatures in
the vicinity of $T_c$, the relaxation rate reaches ca.~0.1/fm,
translating into a spatial diffusion coefficient of
$D_s\simeq5/(2\pi T)$. This is comparable to non-perturbative
$T$-matrix calculations of charm-quark relaxation in the QGP. On the
one hand, this suggests a rather smooth evolution of charm transport
through $T_c$, i.e., a kind of ``duality" of hadronic and
quark-based calculations, reminiscent of what has been found for
dilepton emission rates. On the other hand, it implies that
quantitative calculations of $D$-meson spectra in heavy-ion
collisions have to account for hadronic diffusion. This insight is
reinforced once chemical freeze-out is implemented into the
evolution of the hadronic phase (via effective chemical potentials),
with an estimated modification of $D$-meson observables of at least
20\%. The apparent agreement of hadron- and quark-based approaches,
when extrapolated to around $T_c$, is encouraging, especially since
the magnitude of the transport coefficient is compatible with the
phenomenology of current heavy-flavor observables at RHIC. Our
findings thus pave the way for an improved theoretical accuracy
which will be needed to take advantage of upcoming precision
measurements at RHIC and LHC.
{\bf Note added.} Two subsequently submitted papers have also
addressed hadronic $D$-meson diffusion. In Ref.~\cite{Abreu:2011ic}
the use of unitarized chiral effective $D\pi$ interactions leads to
relaxation rates in a pion gas in close agreement with our results.
In Ref.~\cite{Ghosh:2011bw} $D$-hadron interactions have been
evaluated using Born amplitudes, leading to relaxation rates
significantly larger than our results.
\acknowledgments We thank L.~Tolos for informative discussions. This
work has been supported by U.S. National Science Foundation (NSF)
CAREER Award PHY-0847538, by NSF grant PHY-0969394, by the
A.-v.-Humboldt Foundation (Germany), by the RIKEN/BNL Research
Center and DOE grant DE-AC02-98CH10886, and the JET Collaboration
and DOE grant DE-FG02-10ER41682.
\section{Introduction}
Recent observations indicate that the expansion of the universe is accelerating and the data is compatible with a cosmological constant, $\Lambda$, as the responsible actor. The value of $\Lambda$ turns out to be many orders of magnitude below the canonical estimate from quantum theoretical considerations. If one restricts the study to non-quantum approaches $\Lambda$ could be seen just as another fundamental constant of physics. Surely even the non-quantum cosmology is beset with fine tuning problems such as the cosmic coincidence of the onset of acceleration
\footnote{ {A name better suited for this "Why Now?" question, in view of its intriguing nature, is {\em cosmic scandal} as coined in \cite{carroll1}. We also refer to this article for arguably more functional names for {\em dark energy.}}}.
Nevertheless this is a problem somewhat unrelated to the {\em ease} with which the cosmological constant accommodates the observations. Such could be the view of a pragmatist who ignores aesthetics.
However even the pragmatist would feel a rather pronounced unease in trying to reconcile the acceleration of observed space with the assumption of extra dimensions, since the naive introduction of $\Lambda$ to the d-dimensional Einstein-Hilbert action mandates a time dependent compactification radius \footnote{ {In fact if a cosmological constant is the only actor then extra dimensions expand and accelerate in the same way as the large dimensions; an expected fact in view of the isotropy of a cosmological constant.}} in clear conflict with stringent bounds on the cosmological evolution of fundamental constants.
An important ingredient of string inspired extra dimensional theories is string/brane gas cosmology\footnote{A possibly incomplete list of references for the literature is given in \cite{BV}-\cite{sgc36}.}. This framework is rather successful for cosmology of the very early universe and can even be a candidate to replace the inflationary paradigm in that it also solves the problems of standard cosmology and yields the same type of spectrum for density perturbations \cite{rev3,scale}. To raise an intriguing point, let us ignore acceleration for a moment and assume that the universe expands as if it is dominated by objects which exert no pressure along observed dimensions. In such a scenario of the late universe, string/brane gas cosmology is as successful as it is for early times, if not aesthetically more appealing. In fact it can not
only accommodate a static radius for extra dimensions
and constant dilaton \footnote{ {Dilaton stabilization is slightly more involved than that of the radion since for the latter T duality provides a natural mechanism. Nevertheless this can be achieved in a multitude of ways. Simplest one being a dilaton potential. On the other hand a single agent can also achieve both radion and dilaton stabilization by invoking the use of S duality as for instance done in \cite{sgc22.5} via $(m,n)$ strings. At any rate a lagrangian needs some sort of dilaton potential for stabilization.}}
but also in doing so has a working idea to explain the number of observed dimensions
\footnote{The seed of this idea is of course related to the work of Brandenberger and Vafa \cite{BV}. Nevertheless it was shown in \cite{sgc25} that stabilization of radion and dilaton also fixes the dimensionality of the observed space assuming some dimensions got large {\em somehow}. The ideas that resulted in \cite{sgc25} were slowly taking shape in earlier works \cite{sgc19}-\cite{sgc24}.} and has, for instance, arguments on the possibility of dark matter of string/brane origin \cite{rev2}-\cite{dark3},\cite{sgc16}. However the acceleration of the observed dimensions poses a challenge to string/brane gas cosmology in view of the fact that its constituents generally behave like pressureless dust along observed dimensions at late times. Thus it is of crucial importance to find an element compatible with string/brane gas cosmology that would stabilize extra dimensions and the dilaton while allowing our observed universe to expand in an accelerating fashion, desirably commensurable with cosmological constant dynamics from a four dimensional point of view\footnote{This means that the observed dimensions expand exponentially. While, as things stand for now, this is not a must, it is possibly the simplest scenario, invoking Occam's razor.}. Surely one can also contemplate extra dimensional theories not motivated by strings and these are not free of the mentioned challenge. In this paper we follow a phenomenological approach: we assume the desired cosmological evolutions of fields and from that we infer conditions on the parameters of a general dilatonic extra-dimensional theory enriched with dilaton coupling to fields other than gravity.
\section{A Simple Observation}{\label{secII}}
Let us assume the following action which can also be motivated by the low energy limit of string theory;
\begin{equation}\label{eq1}
S=\int\;dx^{d}\sqrt{-g}\;e^{-2\phi}\left[R+4(\nabla\phi)^{2}+e^{a\phi}\mathcal{L}\right]\;.
\end{equation}
\noindent Within a cosmological scenario the metric can be chosen to be,
\begin{equation}
ds^{2}=-dt^{2}+e^{2B(t)}d{\mathcal{O}_{m}}^2+e^{2C(t)}d{\mathcal{E}_{p}}^{2}\;,
\end{equation}
\noindent where $d{\mathcal{O}_{m}}^2$ and $d{\mathcal{E}_{p}}^{2}$ represent the line elements of the $m$ and $p$ dimensional observed and extra dimensional manifolds ${\mathcal{M_{O}}}$ and ${\mathcal{M_{E}}}$ respectively. The quantities $B(t)$ and $C(t)$ are the corresponding scale factors and we have $d=1+m+p$. For string theory applications one would take $d=10$. Unless otherwise stated we will, in this work, assume that ${\mathcal{M_{O}}}$ and ${\mathcal{M_{E}}}$ are flat. In such a cosmological approach one can also assume a perfect fluid form
\begin{equation}{\label{eqeben}}
\mathcal{L}=-2\rho\;\;.
\end{equation}
\noindent Here $-2\sqrt{-g}\rho$ is assumed to yield a conserved energy-momentum tensor via
the usual construction
\begin{equation}
T_{\mu\nu}=-\frac{1}{\sqrt{-g}}\frac{\partial \sqrt{-g}\mathcal{L}}{\partial g^{\mu\nu}}\;,
\end{equation}
\noindent and thus will yield
\begin{equation}
\rho=\rho_{i}\exp\left[-(1+\omega)mB-(1+\nu)pC\right]\;\;.
\end{equation}
\noindent Here $\omega$ and $\nu$ are the pressure coefficients along ${\mathcal{M_{O}}}$ and ${\mathcal{M_{E}}}$ respectively. The equations of motion for this model are therefore\footnote{ {The model given by the lagrangian (\ref{eq1}) along with the choice (\ref{eqeben}) has also been studied, as presented here, in \cite{sgc22}. Later it was generalized in \cite{sgc27}.}}
\begin{subequations}
\begin{eqnarray}
\ddot{B}+\dot{k}\dot{B}&=&e^{a\phi}\left(\omega-\tau\right)\rho\;,\label{bieq}\\
\ddot{C}+\dot{k}\dot{C}&=&e^{a\phi}\left(\nu-\tau\right)\rho\;,\label{cieq}\\
\ddot{\phi}+\dot{k}\dot{\phi}&=&\frac{1}{2}e^{a\phi}\left[T-(d-2)\tau\right]\rho\;,\label{phieq}\\
\dot{k}^{2}&=&m\dot{B}^{2}+p\dot{C}^{2} + 2 e^{a\phi}\rho\;,\label{refk1} \\
\dot{k}&\equiv& m\dot{B}+p\dot{C}-2\dot{\phi}\;.\label{refk2}
\end{eqnarray}
\end{subequations}
\noindent where $T=-1+m\omega+p\nu$ is the trace of the energy-momentum tensor divided by $\rho$, and $\tau=(a-2)/2$. One can show that the sign of $\dot{k}$ is a constant of motion for positive $\rho$, and that for $\dot{k}<0$ the solutions become singular in finite proper time, as was also argued in \cite{sgc22}; thus one should confine the study to $\dot{k}>0$.
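For completeness, the scaling of $\rho$ quoted above follows in one line from covariant conservation of the energy-momentum tensor on this background:
\begin{equation}
\dot{\rho}+\left[(1+\omega)m\dot{B}+(1+\nu)p\dot{C}\right]\rho=0
\quad\Longrightarrow\quad
\rho=\rho_{i}\,e^{-(1+\omega)mB-(1+\nu)pC}\;.
\end{equation}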
Clearly if we would like to have a constant dilaton and radion, the right hand sides of (\ref{cieq}) and (\ref{phieq}) must vanish. Now assuming the observed dimensions expand in an accelerated fashion as $B(t)=Ht+B_{o}$, mimicking a four-dimensional cosmological constant and hence yielding $\omega=-1$, it is a simple exercise to show that one must have
\begin{subequations}{\label{hemhem}}
\begin{eqnarray}
\nu &=& -\frac{m+1}{m-1}\;,\\
a&=&-\frac{4}{m-1}\;.
\end{eqnarray}
\end{subequations}
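To see this explicitly (a short check using only the definitions above): with $\omega=-1$ and constant $C$ and $\phi$, the right hand sides of (\ref{cieq}) and (\ref{phieq}) vanish provided
\begin{equation}
\nu=\tau\;,\qquad T-(d-2)\tau=0\;.
\end{equation}
Since $T=-1-m+p\nu$ for $\omega=-1$ and $d-2=m+p-1$, the second condition combined with $\nu=\tau$ gives $-(1+m)=(m-1)\nu$, i.e. $\nu=-(m+1)/(m-1)$, and then $a=2\tau+2=2\nu+2=-4/(m-1)$.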
In a pure Einstein gravity context, that $\nu=-2$ for $m=3$ was shown in \cite{eksi2ben} for flat ${\mathcal{M_{O}}}$ and ${\mathcal{M_{E}}}$. There it was also shown that dynamical stabilization of the radion could be established via a curvature term for ${\mathcal{M_E}}$ next to our $\rho$. A curvature term for extra dimensions is a plausible companion to $\rho$ since in a perfect fluid approach it represents a term with $\omega=-1$. In short what was shown in \cite{eksi2ben} was that ${\mathcal{M_{E}}}$ has to have negative curvature and $\nu\leq -2$ to have stabilization in the true sense of the word. But this is not much in accord with the expectations of string theory; for instance with the flatness of Calabi-Yau manifolds, which are of crucial phenomenological importance. On the other hand, if another perfect fluid term, now not a curvature term for ${\mathcal{M_{E}}}$, again with $\omega=-1$ but with yet another $\nu$, accompanies our $\rho$, the conclusion is unchanged: one pressure coefficient along extra dimensions has to be less than $-2$. After \cite{eksi2ben} appeared, Greene and Levin \cite{yavuzgl} reemphasized the need for $\nu=-2$ for a constant radion and argued that dynamical stabilization can be achieved via the Casimir effect along extra dimensions, including massive contributions. At any rate, here the impact of our phenomenological approach is clear; the constraint on $\nu$ remains the same and as a side nuisance we have the above condition on $a$.
The result $\nu=-2$ in the analysis of this section, and the bound $\nu\leq -2$ of \cite{eksi2ben} for that matter, are in close relation and accord with the recently proved no-go theorems \cite{theo1}-\cite{theo3}. A general result of the mentioned theorems is that for a constant radion (and dilaton, with a slight modification of the arguments) and observed acceleration one has to allow for violations of the null energy condition, that this violation has to occur generally along extra dimensions, and that it has to be {\em strong}.
So one has to come up with objects in string theory satisfying the amendments on $\nu$ {\em and} $a$. We would like to contrast this to the {\em ease} with which a simple cosmological constant is able to accommodate the observational requirements of four-dimensional cosmology.
We would also like to point out as a side remark that $\nu=-2$ along with $a=-2$ would mean that the lagrangian in (\ref{eq1}) along with the particular choice in (\ref{eqeben}) is invariant under both S and T duality transformations for $d=10$ and $m=3$; an observation that follows from the application of the general findings of \cite{sgc27} to the particulars of this work.
\section{A note on dilaton coupling to conserved energy-momentum tensors}
At this point we would like to emphasize a detail about the dilaton coupling to sources that yield a conserved energy momentum tensor. For concreteness
let us assume $\mathcal{L}$ in (\ref{eq1}) is not specified. Using the equations of motion arising from varying (\ref{eq1}) with respect to the metric and $\phi$, and the fact that the energy-momentum tensor originating from $\sqrt{-g}\mathcal{L}$ is conserved, one can arrive at the following by invoking the contracted Bianchi identity, as was done in \cite{sgc22},
\begin{equation}{\label{cons1}}
\tau e^{a\phi}\left(\mathcal{L}\delta^{\mu}_{\nu}-2T^{\mu}_{\nu}\right)\nabla^{\nu}\phi=0\;.
\end{equation}
One can read the implications of this equation in various ways\footnote{ {One must however keep in mind that this results from the Bianchi identities and hence must be automatically satisfied via the equations of motion of the fields in $\mathcal{L}$ if we knew its form. Nevertheless the implications of (\ref{cons1}) are rather illuminating.}}. If one has $\tau=0$, this is a quite general resolution. This implies $a=2$ meaning that the fields in $\mathcal{L}$ couple minimally to the dilatonic-gravitational part of the lagrangian in (\ref{eq1}); the principle of equivalence is obeyed. We would like to point out however the following fact; let us assume $\mathcal{L}$ represents everything else, then if the dilaton couples to all of them with the same $a$ one can argue that there is still a zest of equivalence principle at work in view of this universality, albeit this would not be the usual one; it would be one generalized with the presence of the dilaton\footnote{If one insists on the usual equivalence principle one must take $a=2$. But for instance, D-branes are known to have $a=1$.}. Nevertheless $a\neq 2$ has rather non-trivial consequences even in the absence of knowledge about the exact form of the lagrangian. So let us assume it for the sake of argument. Then, (\ref{cons1}) represents an interesting constraint on $\nabla^{\nu}\phi$ in that if it is not identically zero it has to be a vector in the null-space of the matrix
\begin{equation}
{\rm M}_{\mu\nu}\equiv \left(\mathcal{L}g_{\mu\nu}-2T_{\mu\nu}\right)=-2\frac{\partial{\mathcal{L}}}{\partial g^{\mu\nu}}
\end{equation}
One can also approach (\ref{cons1}) in another way. One could fix the dependence of $\phi$ on the metric co-ordinates and digress on the form of $\mathcal{L}$. As pointed out in \cite{sgc22} if the dilaton depends only on time and $T_{\mu\nu}$ is diagonal we must have $\mathcal{L}=2T^{0}_{0}=-2\rho$. Elaborating on this observation still assuming that the energy-momentum tensor is of perfect fluid form we can see that if the dilaton depends on any other co-ordinate along with time, the respective pressure coefficient has to be $-1$. For instance if the dilaton is to depend on time and on the co-ordinates of extra-dimensions\footnote{ {This is still commensurable with a cosmology which is isotropic and homogeneous along the observed dimensions.}}, it must have pressure $-1$ along them, as long as $a\neq 2$. Furthermore, again in a perfect fluid approach, if the dilaton is to have non-trivial dependence on all co-ordinates we end up with $\omega=-1$ and $\nu=-1$, compatible with a d-dimensional cosmological constant or a pure dilaton potential. It is tempting to speculate that these observations are somehow related to the previously mentioned no-go theorems presented in \cite{theo1}-\cite{theo3}.
\section{A Generalization}
The analysis of section \ref{secII} does not yield a true stabilization of the radion and the dilaton; it simply studies the constraints on the parameters needed to have a constant dilaton and radion of unspecified value. In fact if one performs a linear stability analysis around such solutions one will find that the perturbations of both radion and dilaton have zero mass. This is quite expected as a consequence of the fact that our toy model so far depends on only one $\rho$; if the right-hand sides of (\ref{cieq}) and (\ref{phieq}) are to vanish, any further derivative of these terms with respect to $C$ or $\phi$ also vanishes at the solution. A massless excitation is not phenomenologically favoured; thus we need to have true stabilization with positive masses for the radion and dilaton perturbations.
As a general rule of thumb we need to have at least two sources to have dynamical stabilization. Both of these sources must have the same\footnote{ {Otherwise one will redshift faster than the other and we will eventually end up with a single source.}} $\omega$, which should be $-1$ to yield $B(t)=Ht+B_{o}$. So for example we can pick one of them to be like (\ref{eqeben}) and another to be a pure dilaton potential. Let us therefore assume (\ref{eq1}) still applies but now with
\begin{equation}
e^{a\phi}\mathcal{L}=-2 e^{a\phi}\rho- 2V(\phi)\;.
\end{equation}
This generalization will yield the following equations of motion
\begin{subequations}
\begin{eqnarray}
\ddot{B}+\dot{k}\dot{B}&=&-e^{a\phi}\left(1+\tau\right)\rho\;-\frac{1}{2}V'\;,\label{bieq2}\\
\ddot{C}+\dot{k}\dot{C}&=&e^{a\phi}\left(\nu-\tau\right)\rho\;-\frac{1}{2}V'\;,\label{cieq2}\\
\ddot{\phi}+\dot{k}\dot{\phi}&=&\frac{1}{2}e^{a\phi}\left[T-(d-2)\tau\right]\rho\;-V-\frac{d-2}{4}V',\label{phieq2}\\
\dot{k}^{2}&=&m\dot{B}^{2}+p\dot{C}^{2} + 2 e^{a\phi}\rho + 2 V\;,\label{refk12}
\end{eqnarray}
\end{subequations}
The conditions for the existence of an extremum and that $B=Ht+B_{o}$ with $H>0$ are simply given as
\begin{subequations}
\begin{eqnarray}
0&<&-(1+\tau)U_{o}-\frac{1}{2}V'_{o}\label{sol1}\\
0&=&(\nu-\tau)U_{o}-\frac{1}{2}V'_{o}\label{sol2}\\
0&=&\frac{1}{2}\left[T-(d-2)\tau\right]U_{o}-V_{o}-\frac{d-2}{4}V'_{o}\label{sol3}
\end{eqnarray}
\end{subequations}
\noindent where the subscripts $o$ refer to the values at the extrema. We have also defined $U_{o}=\rho_{i}e^{a\phi_{o}-(1+\nu)pC_{o}}$ to have compact expressions. A linear stability analysis around this solution will yield the following
\begin{equation}
\delta{\ddot{X}}=-F\delta{\dot{X}}-\Sigma \delta{X}
\end{equation}
\noindent with $\delta{X}^{T}=\left(\delta B,\delta C,\delta\phi\right)$. Also $F$ is the friction matrix and has the following form
\begin{center}
\begin{equation}
F=\left(\begin{tabular}{lll}
$2mH\;$ & $pH$ & $-2H$ \\
$\;\;0$ & $mH\;\;$ & $\;\;0$ \\
$\;\;0$ & $\;0$ & $mH$
\end{tabular}
\right)\end{equation}
\end{center}
\noindent which clearly enforces damping on all equations. On the other hand the matrix $\Sigma$ responsible for the frequencies of oscillations around the extrema has the form
\begin{center}
\begin{equation}{\label{eqsigma}}
\Sigma=\left(\begin{tabular}{lll}
$\;0\;$ & $\Sigma_{BC}\;$ & $\Sigma_{B\phi}$ \\
$\;0\;$ & $\Sigma_{CC}\;$ & $\Sigma_{C\phi}$ \\
$\;0\;$ & $\Sigma_{\phi C}\;$ & $\Sigma_{\phi\phi}$
\end{tabular}
\right)\end{equation}
\end{center}
\noindent where we have
\begin{subequations}
\begin{eqnarray}
\Sigma_{BC}&=&-(1+\nu)(1+\tau)pU_{o}\;,\\
\Sigma_{B\phi}&=&\frac{1}{2}V''_{o}+a(1+\tau)U_{o}\;,\\
\Sigma_{CC}&=&(1+\nu)(\nu-\tau)pU_{o}\;,\\
\Sigma_{C\phi}&=&\frac{1}{2}V''_{o}-a(\nu-\tau)U_{o}\;,\\
\Sigma_{\phi C}&=&\frac{1}{2}(1+\nu)\left[T-(d-2)\tau\right]pU_{o}\;,\\
\Sigma_{\phi\phi}&=&-\frac{1}{2}a\left[T-(d-2)\tau\right]U_{o}+V'_{0}+\frac{d-2}{4}V''_{o}\;.
\end{eqnarray}
\end{subequations}
The existence of a vanishing column in (\ref{eqsigma}) is simply a consequence of the fact that $\omega=-1$; the derivatives of the right-hand sides of all of (\ref{bieq2})-(\ref{phieq2}) with respect to $B$ identically vanish. In order to have stabilization we would require positive eigenvalues for $\Sigma$. But this is not the whole issue; as it stands $\Sigma$ also describes a general mixing between the perturbations, which can be detrimental since it implies mixing of $\delta C$ and of $\delta\phi$ into $\delta B$. A simple and somewhat elegant way of getting around this obstacle is to assume $\Sigma_{\phi C}=0$. Requiring positive eigenvalues along with this simplifying restriction will yield
\begin{subequations}
\begin{eqnarray}
0&<&(1+\nu)(\nu-\tau)\label{sol4}\\
0&<&-\frac{1}{2}a(T-(d-2)\tau)U_{0}+V'_{o}+\frac{d-2}{4}V''_{o}\label{sol5}\\
0&=&(1+\nu)(T-(d-2)\tau)\label{sol6}
\end{eqnarray}
\end{subequations}
Analysing (\ref{sol6}) we immediately see that for these equations to be consistent one has, as before, $T-(d-2)\tau=0$, since the other solution is $\nu=-1$ and this is in conflict with (\ref{sol1}) used along with (\ref{sol2}). In fact from these considerations we have $\nu<-1$. Therefore we again see that for $d=10$ the $\rho$ contribution to the lagrangian is S dual\footnote{ {In this case, as opposed to the example of section II, the $\rho$ term is not T duality invariant. It better not be, because the $V'$ term also contributes to the right-hand side of the $C$ equation.}}. Having picked $\Sigma_{\phi C}=0$ we have achieved the following: the equation for $\delta\phi$ completely decouples and its solutions are simply damped oscillations, since $F_{\phi\phi}>0$ and $\Sigma_{\phi\phi}>0$. As a result one can take this solution and insert it into the $\delta C$ equation, where it will act as a source term: a source which asymptotically vanishes as a result of the damping. The solution of $\delta C$ thus obtained can be used in the $\delta B$ equation, again as a source. Consequently the evils of mixing are somewhat circumvented. One could, along with $\Sigma_{\phi C}=0$, also take $\Sigma_{C\phi}=0$, which completely decouples the radion and the dilaton, but this is not necessary for this simple example. However, we would like to stress again the fact that taking $\Sigma_{\phi C}=0$ is synonymous with the S duality of the $\rho$ term, at least for ten dimensions. Using the above intermediate result along with (\ref{sol4}) we arrive at the following
\begin{subequations} \label{hemhem2}
\begin{eqnarray}\label{eqson}
\nu &<& -\frac{m+1}{m-1}\\
a&=&\frac{p-2+p\nu}{4}
\end{eqnarray}
\end{subequations}
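Spelled out (a short derivation of these bounds): eq.~(\ref{sol2}) gives $\frac{1}{2}V'_{o}=(\nu-\tau)U_{o}$, and inserting this into (\ref{sol1}) yields $0<-(1+\nu)U_{o}$, i.e. $\nu<-1$; then (\ref{sol4}) forces $\nu<\tau$. Using $T-(d-2)\tau=0$ with $\omega=-1$, i.e. $\tau=(p\nu-1-m)/(d-2)$, the condition $\nu<\tau$ becomes $(m-1)\nu<-(m+1)$, the first relation of (\ref{hemhem2}), while
\begin{equation}
a=2\tau+2=\frac{2\left(p\nu+p-2\right)}{d-2}\;,
\end{equation}
which reduces to $(p-2+p\nu)/4$ for $d=10$.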
\noindent For say $m=3$ and $d=10$ it is clear that $\nu<-2$ and $a<-2$. We have thus established that true stabilization can be achieved and that the stringent constraints on $\nu$ and $a$ remain as upper bounds. There are further consistency conditions on $V$ given as
\begin{subequations}
\begin{eqnarray}
V'_{o}&<&0\\
V_{o}&=&-\frac{d-2}{4}V'_{o}\\
V''_{o}&>&-\frac{4}{d-2}V'_{o}
\end{eqnarray}
\end{subequations}
\noindent These constraints on $V$ are not too illuminating and can possibly be satisfied somewhat easily for a wide range of models.
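As an illustration of these conditions (an example of ours, not one considered above): for a single exponential $V=V_{0}e^{-\lambda\phi}$ with $V_{0}>0$ the first condition is automatic, the second fixes $\lambda=4/(d-2)$, and then
\begin{equation}
V''_{o}=\lambda^{2}V_{o}=-\frac{4}{d-2}V'_{o}\;,
\end{equation}
so the third condition is only saturated and the dilaton perturbation remains massless; a strict inequality requires a potential that is not a pure exponential.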
\subsection{Digression on further generalizations}
In this section we have presented this simple generalization as an example evidencing that the constraints of section II are somewhat robust. There can be aesthetic objections to our approach, since the extremum condition on (\ref{cieq2}) in essence assumes a fine tuning between the $\rho$ and $V$ terms; two contributions that possibly have nothing to do with each other.
But again, our emphasis was not on the precise way dilaton and radion stabilization along with accelerating observed dimensions is achieved, it was on making the case for the necessity for rather exotic sources to achieve it in general.
Nevertheless this aesthetic objection will generally be present whenever we want to achieve both radion and dilaton stabilization with few sources having $\omega=-1$. Such terms are not exactly in abundance; a pure dilaton potential, a $\rho$ of the type we have studied and a curvature term for the extra dimensional manifold are the simplest ones that come to mind. A deeper reason for the mentioned fine tuning between sources that are at face value unrelated could be the fact that these sources become overworked, in that we require both radion and dilaton stabilization from them. In reality what we truly need is the consistent presence of only one such term in the equation for the scale factor of observed space to ensure $B=Ht+B_{o}$. Thus it is an intriguing possibility to check for sources with different pressure coefficients along observed dimensions, such that the responsibilities of having accelerated observed space and of stabilizing the radion and dilaton are separated.
As a simple counterexample to this somewhat attractive possibility we would like to digress on the impact of $(m,n)$ strings as studied in \cite{sgc22.5}. In that work there are two contributions to the energy-momentum tensor; the winding and momentum modes of strings. Both of these sources have zero pressure along observed dimensions. On the other hand they also bring about a potential term for the dilaton equations. It can be shown that in this picture both the dilaton and the radion can be stabilized. The crucial point is that these sources will force the observed dimensions to grow as pressureless ordinary matter would; in a decelerating way. So what we need is a source with $\omega=-1$ that does not contribute to the $C$ and $\phi$ equations. The resolution is very simple: on top of the winding and momentum mode contributions of $(m,n)$ strings, add a source which satisfies (\ref{hemhem}). This should work since we have already seen that these conditions mean that the mentioned source is invariant under T and S dualities and thus will not contribute to the right hand sides of the dilaton and radion equations but will have an effect on the $B$ equations. The situation will be described by the following
equations
\begin{subequations}
\begin{eqnarray}
\ddot{B}+\dot{k}\dot{B}&=&e^{-mB}S_{B}(\phi,C)-\left(1+\tau\right)e^{a\phi}\rho\;\\
\ddot{C}+\dot{k}\dot{C}&=&e^{-mB}S_{C}(\phi,C)\\
\ddot{\phi}+\dot{k}\dot{\phi}&=&e^{-mB}S_{\phi}(\phi,C)
\end{eqnarray}
\end{subequations}
\noindent The first term in the $B$ equation becomes less and less relevant, since it is suppressed by $e^{-mB}$ while the $\rho$ term does not depend on $B$. So in time we will have $B=Ht+B_{o}$. The terms $S_{C}$ and $S_{\phi}$ represent contributions from the winding and momentum modes of $(m,n)$ strings, and for their explicit expressions we refer the reader to \cite{sgc22.5}. Consequently we will have a dynamically stabilized radion and dilaton along with accelerated observed dimensions. However the price we pay is that the masses of the excitations around the minima become exponentially small in time, even though they are always positive, since the right hand sides of both the $C$ and $\phi$ equations above are multiplied by $e^{-mB}=e^{-mHt}$. Furthermore this multiplicative factor cannot change the location of the minima in $(C,\phi)$ space. Now since $H$ is rather small one can possibly argue in favour of such a scenario at least for now, but as time evolves we would have less and less massive excitations, and this is undoubtedly a source of stringent constraints on such an approach.
We can thus conclude that we typically need sources with the same $\omega$ achieving both dilaton and radion stabilization and that the aesthetical objection raised at the beginning of this subsection is, even though still standing, a bit too restrictive.
Another way to achieve stabilization without a fine-tuning between radion dependent objects and a pure dilaton potential is to include a curvature term for extra dimensions along with our $\rho$. This is possible but it will, along with (\ref{hemhem2}), require a $\mathcal{M_{E}}$ which is negatively curved. A situation that, as we have stressed before, is not compatible with the need of string theory for Calabi-Yau manifolds.
\section{Conclusion}
We have presented stringent constraints on parameters of theories yielding accelerated observed space as well as providing a stable radion and dilaton. The constraints on the dilaton couplings are just as strong as those on the pressure coefficients along extra dimensions. This can possibly be understood via the fact that the dilaton can be seen as the scale factor of a compactified eleventh dimension. The observations we have presented are related to and in accord with the rather strong no-go theorems on a marriage between dark energy and extra dimensional models \cite{theo1}-\cite{theo3}, in that the sources are shown to very strongly violate the null energy condition along extra dimensions. In fact the theorems mentioned not only require strong NEC violation along extra dimensions but also that this violation has to be time dependent to allow for the observed cosmological history of the universe. Such time dependence is still a possibility within string/brane gas cosmology; nevertheless we have only worked in the regime where a pure de Sitter phase has already settled in the past.
Perhaps a stronger result of these theorems is that they require a non-trivial distribution of density and pressure along extra dimensions. Since we have worked with homogeneous quantities, it seems the simple approach we have presented here violates this fact. Nevertheless, in view of this, the constraints we have presented could be seen as averages over extra dimensions and would still be operational. This means, for instance, that the average pressure coefficient along extra dimensions should be $-2$, and thus there must be regions where it is considerably less if the pressure, and hence the energy density, is inhomogeneously distributed.
On the other hand, dark energy is not the only source to put stringent constraints on multidimensional theories. Recently Eingorn and Zhuk have shown that
if one assumes point-like sources, toroidal extra dimensions are incompatible with classical tests such as the perihelion advance of Mercury and the gravitational frequency shift \cite{zhuk1}-\cite{zhuk3}. Even though there the sources cannot provide NEC violation, they are inhomogeneously distributed along extra dimensions. So it is tempting to speculate that an interplay with the theorems in \cite{theo1}-\cite{theo3} could provide an avenue for even stronger constraints on extra dimensional theories.
To conclude we reemphasize the need for a source satisfying (\ref{hemhem}) to provide for stable radion and dilaton and allowing de Sitter type expansion for observed dimensions. As we have stated, in the absence of the dilaton one can argue that these constraints can be accommodated by Casimir effect along extra dimensions \cite{yavuzgl} but it is not clear how this can be extended to dilaton gravity. Nevertheless recent research \cite{gaugino} shows that supersymmetry breaking via gaugino condensation in string gas cosmology can be the responsible actor for dilaton stabilization. It is tempting to expect that this approach, since it introduces a dilaton potential, could provide a resolution.
\section{Acknowledgements}
This work was supported by an NSF GRFP Fellowship for Y.S.E.
\begin{figure}
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{sal_nbs_crossover_v3_color.pdf}}
\caption{\label{fig:crossover} Transport properties as a function of $T_{\mathrm{g}}/T$ for two typical supercooled liquids. Black circles in Panels \emph{A} and \emph{B} represent experimental data considered in \cite{Corr_1}. Labeling here is consistent with \cite{Corr_1}; that is to say, Sal-2 and NBS refer to the same experimental measurements and fit parameters as in Table 1 of \cite{Corr_1}. The red dashed line is the parabolic fit form for $T<T_{\mathrm{o}}$, as in \cite{Corr_1}. The blue dashed line represents the Arrhenius fit to the lowest $T$ points \cite{Mallamace2010}. Panel \emph{A}: relaxation time $\tau$ of Salol, where $T_{g}$ = 221 K is the glass transition temperature at which $\log (\tau_\mathrm{g}/\text{s}) = 2$. Panel \emph{B}: viscosity $\eta$ of NBS, where $T_{g}$ = 708 K is the glass transition temperature at which $\log (\eta_\mathrm{g}/\text{Poise}) = 13$. It is generally assumed that $\tau \propto \eta$, and, with this assumption, Panel \emph{A} includes data used in Ref. \cite{Mallamace2010} (triangles).
}
\end{center}
\end{figure}
\section{\label{introduction}Introduction and Summary}
The possibility that the universe may contain large, and possibly infinite, spatial dimensions beyond the three we commonly perceive has opened up entirely new avenues to address fundamental questions posed by particle physics and by cosmology. The precise manner in which the dynamics of the higher-dimensional space manifests itself in the four dimensional world depends on the geometry and topology of the extra-dimensional manifold, and the matter content and action chosen. At low enough energies, the relevant physics is then captured by a four-dimensional effective field theory with properties inherited from the specific higher-dimensional model under consideration. The simplest example of this is the Kaluza-Klein tower -- the hierarchy of higher mass states that accompany zero mass particles when compactifying a five-dimensional theory on a circle. There are, however, much more exotic possibilities. Many of these describe viable higher-dimensional theories, while others are merely mathematical tools with which to construct interesting physical four-dimensional effective field theories.
A particularly interesting and well studied example of a higher-dimensional model is the Dvali-Gabadadze-Porrati (DGP) model \cite{Dvali:2000hr}, for which the ambient space is a flat $5$-dimensional spacetime in which a Minkowski $3$-brane floats, subject to an action consisting merely of two separate Einstein-Hilbert terms -- one in $5$d, and the other only on the brane, constructed from the induced metric there. In an appropriate limit, the resulting four-dimensional effective field theory describes gravity plus a scalar degree of freedom parametrizing the bending of the brane in the extra dimension \cite{Luty:2003vm,Nicolis:2004qq}. The specific form of the four dimensional action for the scalar inherits a symmetry from a combination of five dimensional Poincar\'e invariance and brane reparametrization invariance. In the small field limit this symmetry takes a rather simple form and has been called the {\it Galilean} symmetry, with the associated scalar becoming the {\it Galileon} \cite{Nicolis:2008in}.
Abstracting from DGP, a four dimensional field theory with this Galilean symmetry is interesting in its own right. It turns out that there are a finite number of terms, the {\it Galileon terms}, that have fewer numbers of derivatives per field than the infinity of competing terms with the same symmetries. These terms have the surprising property that, despite the presence of higher derivatives in the actions, the equations of motion are second order, so that no extra degrees of freedom are propagated around any background. Much has been revealed about the Galileon terms, including such useful properties as a non-renormalization theorem \cite{Luty:2003vm,Hinterbichler:2010xn,Burrage:2010cu}, and applications in cosmology \cite{Agarwal:2011mg,Burrage:2010cu,Creminelli:2010ba,Creminelli:2010qf,DeFelice:2010as,Deffayet:2010qz,Kobayashi:2011pc,Mota:2010bs,Wyman:2011mp}. The Galileons have been covariantized \cite{Deffayet:2009mn,Deffayet:2009wt,Deffayet:2011gz}, extended to p-forms \cite{Deffayet:2010zh}, and supersymmetrized \cite{Khoury:2011da}. Further, it was recently shown that the general structure of Galileon field theories can be extended to multiple fields, finding their origins in braneworld constructions with more than one codimension \cite{Hinterbichler:2010xn,Padilla:2010de,Padilla:2010ir,Padilla:2010tj,Zhou:2010di}. If some of the resulting symmetries of the four dimensional effective field theory are broken, then they are related to low energy descriptions of cascading gravity models in which a sequence of higher dimensional branes are embedded within one another \cite{deRham:2007rw,deRham:2007xp,Agarwal:2011mg,Agarwal:2009gy}.
If our universe really is a brane world, then theories of this sort are generic, since they share, in a certain limit, the symmetries of the Dirac-Born-Infeld (DBI) action. The DBI action encodes the lowest order dynamics of a brane embedded in higher dimensions, and provides an
important arena within which to study inflation~\cite{Silverstein:2003hf,Alishahiha:2004eh}, late-time cosmic acceleration~\cite{Ahn:2009xd}, tunneling~\cite{Brown:2007zzh}, and exotic topological defects~\cite{Andrews:2010eh,Babichev:2006cy,Sarangi:2007mj,Bazeia:2007df,Babichev:2008qv}. The Galileon terms can be thought of as a subset of the higher order terms expected to be present in any effective field theory of the brane, and which will be suppressed by powers of some cutoff scale. The Galileons are a special subset in the class of all possible higher order terms because they contain fewer derivatives per field than competing terms with the same symmetries, and because they yield second order equations. Crucially, there can exist regimes in which only a finite number of Galileon terms are important, and the infinity of other possible terms within the effective field theory are not (see section II of \cite{Hinterbichler:2010xn}, as well as \cite{Nicolis:2004qq,Endlich:2010zj}, for more on this and for examples of such regimes.) This fact, coupled with a non-renormalization theorem for Galileons and the fact that there are a finite number of such terms, holds out the hope of computing non-linear facts about the world which are exact quantum mechanically. Finally, it should be remembered that even if our universe is not a brane world, the same conclusions follow if one postulates the existence of symmetries of the same form as those of a brane world.
In this paper, we construct a general class of four-dimensional effective field theories by writing an action on a 3-brane probing a higher dimensional bulk, of which the Galileon theory and DBI scalars are special cases. This extends the construction of~\cite{deRham:2010eu} to its most general form. We observe that the symmetries inherited by scalar fields in the $4$d theory are determined by isometries of the bulk metric, and are present if and only if the bulk has isometries. The precise manner in which the symmetries are realized is determined by the choice of gauge, or foliation, against which brane fluctuations are measured. We derive in general the symmetries of these effective field theories, and classify the examples that result when embedding a maximally symmetric brane in a maximally symmetric background. This approach yields a set of new Galileon-like theories which live on $4$d curved space but retain the same number of non-linear shift-like symmetries as the flat-space Galileons or DBI theories.
These theories have their own unique properties. For example, in curved space the field acquires a potential which is fixed by the symmetries -- something that is not allowed for the flat space Galileons. In particular, the scalars acquire a mass of order the inverse radius of the background, and the value of the mass is fixed by the nonlinear symmetries. Although not addressed in detail here, allowing for de Sitter solutions on the brane opens up the possibility of adapting these new effective field theories to cosmological applications such as inflation or late time cosmic acceleration in such a way that their symmetries ensure technical naturalness.
The paper is structured as follows. In the next section we discuss general brane actions and symmetries, and the ways in which these symmetries may be inherited by a four-dimensional effective field theory. In section~\ref{sec:ghostfreeactions} we then consider constructing actions with second order equations and explicitly derive all possible terms in such theories. We then provide six separate examples, exhausting all the maximally symmetric possibilities: a $4$d Minkowski brane embedded in a Minkowski bulk; a $4$d Minkowski brane embedded in $AdS_5$; a $4$d de Sitter brane embedded in a Minkowski bulk; a $4$d de Sitter brane embedded in $dS_5$; a $4$d de Sitter brane embedded in $AdS_5$; and a $4$d Anti-de Sitter brane embedded in $AdS_5$. In each case, we describe the resulting $4$d effective field theories and comment on their structure. In section~\ref{sec:smallfieldlimits} we take the small field limits to obtain Galileon-like theories, discuss their stability, and compare and contrast these theories with the special case of the original Galileon, before concluding.
{\bf Conventions and notation}:
We use the mostly plus metric signature convention. The 3-brane worldvolume coordinates are $x^\mu$, $\mu=0,1,2,3$, and the bulk coordinates are $X^A$, $A=0,1,2,3,5$. Occasionally we use 6-dimensional cartesian coordinates $Y^\mathcal{A}$, $\mathcal{A}=0,1,2,3,4,5$, for constructing five dimensional $AdS_5$ and $dS_5$ as embeddings. Tensors are symmetrized and anti-symmetrized with unit weight, i.e. $T_{(\mu\nu)}=\frac{1}{2} \left(T_{\mu\nu}+T_{\nu\mu}\right)$, $T_{[\mu\nu]}=\frac{1}{2} \left(T_{\mu\nu}-T_{\nu\mu}\right)$.
When writing actions for a scalar field $\pi$ in curved space with metric $g_{\mu\nu}$ and covariant derivative $\nabla_\mu$, we use the notation $\Pi$ for the matrix of second derivatives $\Pi_{\mu\nu}\equiv\nabla_{\mu}\nabla_\nu\pi$. For traces of powers of $\Pi$ we write $[\Pi^n]\equiv Tr(\Pi^n)$, e.g. $[\Pi]=\nabla_\mu\nabla^\mu\pi$, $[\Pi^2]=\nabla_\mu\nabla_\nu\pi\nabla^\mu\nabla^\nu\pi$, where all indices are raised with respect to $g^{\mu\nu}$. We also define the contractions of powers of $\Pi$ with $\nabla\pi$ using the notation $[\pi^n]\equiv \nabla\pi\cdot\Pi^{n-2}\cdot\nabla\pi$, e.g. $[\pi^2]=\nabla_\mu\pi\nabla^\mu\pi$, $[\pi^3]=\nabla_\mu\pi\nabla^\mu\nabla^\nu\pi\nabla_\nu\pi$, where again all indices are raised with respect to $g^{\mu\nu}$.
\section{General brane actions and symmetries}
We begin with a completely general case - the theory of a dynamical 3-brane moving in a fixed but arbitrary (4+1)-dimensional background.
The dynamical variables are the brane embedding $X^A(x)$, five functions of the world-volume coordinates $x^\mu$.
The bulk has a fixed background metric $G_{AB}(X)$. From this and the $X^A$, we may construct the induced metric $\bar g_{\mu\nu}(x)$ and the extrinsic curvature $K_{\mu\nu}(x)$, via
\begin{eqnarray}
\bar g_{\mu\nu}&=&e^A_{\ \mu}e^B_{\ \nu} G_{AB}(X), \\
K_{\mu\nu}&=&e^A_{\ \mu}e^B_{\ \nu}\nabla_A n_B \ .
\end{eqnarray}
Here $e^A_{\ \mu}= {\partial X^A\over\partial x^\mu}$ are the tangent vectors to the brane, and $n^A$ is the normal vector, defined uniquely (up to a sign) by the properties that it is orthogonal to the tangent vectors $e^A_{\ \mu}n^BG_{AB}=0$, and normalized to unity $n^An^BG_{AB}=1$. (Note that the extrinsic curvature can be written $K_{\mu\nu}=e^B_{\ \nu}\partial_\mu n_B-e^A_{\ \mu}e^B_{\ \nu}\Gamma^C_{AB}n_C$, demonstrating that it depends only on quantities defined directly on the brane and their tangential derivatives.)
We require the world-volume action to be gauge invariant under reparametrizations of the brane,
\begin{equation}
\label{gaugetransformations}
\delta_g X^A=\xi^\mu\partial_\mu X^A \ ,
\end{equation}
where $\xi^\mu(x)$ is the gauge parameter. This requires that the action be written as a diffeomorphism scalar, $F$, of $\bar g_{\mu\nu}$ and $K_{\mu\nu}$ as well as the covariant derivative $\bar\nabla_\mu$ and curvature $\bar R^\alpha_{\ \beta\mu\nu}$ constructed from $\bar g_{\mu\nu}$,
\begin{equation}
\label{generalaction}
S= \int d^4x\ \sqrt{-\bar g}F\left(\bar g_{\mu\nu},\bar\nabla_\mu,\bar R^{\alpha}_{\ \beta\mu\nu},K_{\mu\nu}\right) \ .
\end{equation}
This action will have global symmetries only if the bulk metric has Killing symmetries. If the bulk metric has a Killing vector $K^A(X)$, i.e. a vector satisfying the Killing equation
\begin{equation} \label{killingequation}
K^C\partial_C G_{AB}+\partial_AK^CG_{CB}+\partial_BK^CG_{AC}=0 \ ,
\end{equation}
then the action will have the following global symmetry under which the $X^A$ shift,
\begin{equation}
\label{generalsym}
\delta_K X^A=K^A(X) \ .
\end{equation}
It is straightforward to see that the induced metric and extrinsic curvature, and hence the action~(\ref{generalaction}), are invariant under~(\ref{generalsym}).
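For the induced metric this can be seen in one line (the extrinsic curvature works analogously): using $\delta_K e^A_{\ \mu}=\partial_\mu K^A=e^C_{\ \mu}\partial_C K^A$, one finds
\begin{equation}
\delta_K \bar g_{\mu\nu}=e^A_{\ \mu}e^B_{\ \nu}\left(K^C\partial_C G_{AB}+\partial_AK^CG_{CB}+\partial_BK^CG_{AC}\right)=0 \ ,
\end{equation}
by the Killing equation~(\ref{killingequation}).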
We are interested in creating non-gauge theories with global symmetries from the transverse fluctuations of the brane, so we now fix all the gauge symmetry of the action. We accomplish this by first choosing a foliation of the bulk by time-like slices. We then choose bulk coordinates such that the foliation is given by the surfaces $X^5= {\rm constant}$. The remaining coordinates $X^\mu$ can be chosen arbitrarily and parametrize the leaves of the foliation. The gauge we choose is
\begin{equation}
\label{physgauge}
X^\mu(x)=x^\mu, \ \ \ X^5(x)\equiv \pi(x) \ .
\end{equation}
In this gauge, the world-volume coordinates of the brane are fixed to the bulk coordinates of the foliation. We call the remaining unfixed coordinate $\pi(x)$, which measures the transverse position of the brane relative to the foliation (see Figure~\ref{foliation}). This completely fixes the gauge freedom. The resulting gauge fixed action is then an action solely for $\pi$,
\begin{equation}
\label{gaugefixedaction}
S= \int d^4x\ \left. \sqrt{-\bar g}F\left(\bar g_{\mu\nu},\bar\nabla_\mu,\bar R^{\alpha}_{\ \beta\mu\nu},K_{\mu\nu}\right)\right|_{X^\mu=x^\mu,\ X^5=\pi} \ .
\end{equation}
\begin{figure}
\centering
\includegraphics[width=4.0in]{foliation}
\caption{The field $\pi$ measures the brane position with respect to some chosen foliation.}
\label{foliation}
\end{figure}
Global symmetries are physical symmetries that cannot be altered by the unphysical act of gauge fixing. Thus, if the original action~(\ref{generalaction}) possesses a global symmetry (\ref{generalsym}), generated by a Killing vector $K^A$, then the gauge fixed action~(\ref{gaugefixedaction}) must also have this symmetry. However, the form of the symmetry will be different because the gauge choice will not generally be preserved by the global symmetry. The change induced by $K^A$ is
\begin{equation}
\delta_ Kx^\mu=K^\mu(x,\pi),\ \ \ \delta_K\pi=K^5(x,\pi) \ .
\end{equation}
To re-fix the gauge to~(\ref{physgauge}), it is necessary to simultaneously perform a compensating gauge transformation with gauge parameter
\begin{equation}
\xi_{\rm comp}^{\mu}=-K^\mu(x,\pi) \ .
\end{equation}
The combined symmetry acting on $\pi$,
\begin{equation}
\label{gaugefixsym}
(\delta_K+\delta_{g,{\rm comp}})\pi=-K^\mu(x,\pi)\partial_\mu\pi+K^5(x,\pi) \ ,
\end{equation}
is then a symmetry of the gauge fixed action~(\ref{gaugefixedaction}).
\subsection{\label{maxsymsubsection} A special case}
We now specialize to a case which includes all the maximally symmetric examples of interest to us in this paper. This is the case where the foliation is Gaussian normal with respect to the metric $G_{AB}$, and the extrinsic curvature on each of the leaves of the foliation is proportional to the induced metric. With these restrictions, the metric takes the form
\begin{equation}
\label{metricform}
G_{AB}dX^AdX^B=d\rho^2+f(\rho)^2g_{\mu\nu}(x)dx^\mu dx^\nu \ ,
\end{equation}
where $X^5=\rho$ denotes the Gaussian normal transverse coordinate, and $g_{\mu\nu}(x)$ is an arbitrary brane metric. Recall that in the physical gauge (\ref{physgauge}), the transverse coordinate of the brane is set equal to the scalar field, $\rho(x)=\pi(x)$.
Working in the gauge (\ref{physgauge}), the induced metric is
\begin{equation}
\bar g_{\mu\nu}=f(\pi)^2g_{\mu\nu}+\nabla_\mu\pi\nabla_\nu\pi \ .
\end{equation}
Defining the quantity
\begin{equation}
\gamma={1\over \sqrt{1+{1\over f^2}(\nabla\pi)^2}} \ ,
\end{equation}
the square root of the determinant and the inverse metric may then be expressed as
\begin{equation}
\sqrt{-\bar g}=\sqrt{-g}f^4\sqrt{1+{1\over f^2}(\nabla\pi)^2}=\sqrt{-g}f^4{1\over \gamma},
\end{equation}
and
\begin{equation}
\bar g^{\mu\nu}={1\over f^2}\left(g^{\mu\nu}-\gamma^2{\nabla^\mu\pi\nabla^\nu\pi\over f^2}\right) \ .
\end{equation}
The tangent vectors are
\begin{equation}
e^A_{\ \mu}={\partial X^A\over \partial x^\mu}=\begin{cases}\delta^\nu_\mu & A=\nu \\ \nabla_\mu \pi & A=5\end{cases} \ .
\end{equation}
To find the normal vector $n^A$ we solve the two equations
\begin{eqnarray} 0&=&e^A_{\ \mu}n^BG_{AB}=f^2n^\nu g_{\mu\nu}+n^5\partial_\mu\pi, \\
1&=&n^An^BG_{AB}={1\over f^2}g^{\mu\nu}\partial_\mu\pi\partial_\nu\pi(n^5)^2+(n^5)^2 \ ,
\end{eqnarray}
to obtain
\begin{equation}
n^A=\begin{cases} -{1\over f^2}\gamma\nabla^\mu\pi & A=\mu \\ \gamma & A=5\end{cases},\ \ \ \ n_A=\begin{cases} -\gamma\nabla_\mu\pi & A=\mu \\ \gamma & A=5\end{cases} \ .
\end{equation}
Using the non-vanishing Christoffel symbols $\Gamma^\lambda_{\mu\nu}=\Gamma^\lambda_{\mu\nu}(g)$, $\Gamma^5_{\mu\nu}=-f f' g_{\mu\nu}$, $\Gamma^\mu_{\nu 5}= \delta^\mu_\nu {f'\over f}$, the extrinsic curvature
is then
\begin{equation}
K_{\mu\nu}=\gamma\left(-\nabla_\mu\nabla_\nu\pi+f f'g_{\mu\nu}+2{f'\over f}\nabla_\mu\pi\nabla_\nu\pi\right) \ .
\end{equation}
Note that when the $4$d coordinates have dimensions of length, $\pi$ has mass dimension $-1$ and $f$ is dimensionless.
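As a quick check of conventions (a special case only): setting $f=1$, appropriate to a flat bulk foliated by flat slices, these expressions reduce to the familiar DBI ones,
\begin{equation}
\bar g_{\mu\nu}=g_{\mu\nu}+\nabla_\mu\pi\nabla_\nu\pi\,,\qquad
\sqrt{-\bar g}=\sqrt{-g}\sqrt{1+(\nabla\pi)^2}\,,\qquad
K_{\mu\nu}=-\gamma\,\nabla_\mu\nabla_\nu\pi\,,
\end{equation}
with $\gamma=1/\sqrt{1+(\nabla\pi)^2}$.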
The algebra of Killing vectors of $G_{AB}$ contains a natural subalgebra consisting of the Killing vectors
for which $K^5=0$. This is the subalgebra of Killing vectors that are parallel to the foliation of constant $\rho$ surfaces, and it generates the subgroup of isometries which preserve the foliation. We choose a basis of this subalgebra and index the basis elements by $i$,
\begin{equation}
K_i^A(X)=\begin{cases} K_i^\mu(x) & A=\mu \\ 0 & A=5\end{cases} \ ,
\end{equation}
where we have written $K_i^\mu(x)$ for the $A=\mu$ components, indicating that these components are independent of $\rho$. To see that this is the case, note that,
for those vectors with $K^5=0$, the $\mu5$ Killing equations (\ref{killingequation}) tell us that $K_i^\mu(x)$ is independent of $\rho$. Furthermore, the $\mu\nu$ Killing equations tell us that $K_i^\mu(x)$ is a Killing vector of $g_{\mu\nu}$.
We now extend our basis of this subalgebra to a basis of the algebra of all Killing vectors by appending a suitably chosen set of linearly independent Killing vectors with non-vanishing $K^5$. We index these with $I$, so that $(K_i,K_I)$ is a basis of the full algebra of Killing vectors. From the $55$ component of Killing's equation, we see that $K^5$ must be independent of $\rho$, so we may write $K^5(x)$.
A general global symmetry transformation thus reads
\begin{equation}
\delta_KX^A=a^i K_i^A(X)+a^I K_I^A(X) \ ,
\end{equation}
where $a^i$ and $a^I$ are arbitrary constant coefficients of the transformation. In the gauge (\ref{physgauge}), the transformations become, from~(\ref{gaugefixsym}),
\begin{equation}
\label{specialcasesym}
(\delta_K+\delta_{g,{\rm comp}})\pi=-a^i K_i^\mu(x)\partial_\mu\pi+a^I K_I^5(x)-a^I K_I^\mu(x,\pi)\partial_\mu\pi \ .
\end{equation}
From this, we see that the $K_i$ symmetries are linearly realized, whereas the $K_I$ are realized nonlinearly. Thus, the algebra of all Killing vectors is spontaneously broken to the subalgebra of Killing vectors preserving the foliation.
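A simple illustration (anticipating the flat case treated among the examples below): for a Minkowski bulk foliated by Minkowski slices, $f=1$, the unbroken $K_i$ are the four-dimensional Poincar\'e transformations, while among the broken generators are the transverse translation, $K^5=1$, and the rotations/boosts mixing $X^5$ with $X^\mu$, $K^5=b_\mu X^\mu$, $K^\mu=-b^\mu X^5$. Inserted into (\ref{specialcasesym}) these give
\begin{equation}
\delta\pi=\epsilon+b_\mu x^\mu+b^\mu\,\pi\,\partial_\mu\pi \ ,
\end{equation}
which is the nonlinearly realized DBI symmetry (up to conventions for the parameters); dropping the field-dependent piece yields the Galilean shift symmetry of the small field limit.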
\subsection{\label{maxsymsubsection2} Maximally symmetric cases}
In this paper, we will focus on the case in which the 5d background metric has 15 global symmetries, the maximal number. Thus, the bulk is either $5$d anti-de Sitter space $AdS_5$ with isometry algebra $so(4,2)$, 5d de-Sitter space $dS_5$ with isometry algebra $so(5,1)$, or flat 5d Minkowski space $M_5$ with isometry algebra the five dimensional Poincare algebra $p(4,1)$. In addition, we focus on the case where the brane metric $g_{\mu\nu}$, and hence the extrinsic curvature, are maximally symmetric, so that the unbroken subalgebra has the maximal number of generators, 10. This means that the leaves of the foliation are either $4$d anti-de Sitter space $AdS_4$ with isometry algebra $so(3,2)$, 4d de-Sitter space $dS_4$ with isometry algebra $so(4,1)$, or flat 4d Minkowski space $M_4$ with isometry algebra the four dimensional Poincare algebra $p(3,1)$. In fact, there are only 6 such possible foliations of $5$d maximally symmetric spaces by $4$d maximally symmetric time-like slices, such that the metric takes the form~(\ref{metricform}). Flat $M_5$ can be foliated by flat $M_4$ slices or by $dS_4$ slices; $dS_5$ can be foliated by flat $M_4$ slices, $dS_4$ slices, or $AdS_4$ slices; and $AdS_5$ can only be foliated by $AdS_4$ slices. Each of these 6 foliations, through the construction leading to~(\ref{gaugefixedaction}), will generate a class of theories living on an $AdS_4$, $M_4$ or $dS_4$ background and having 15 global symmetries broken to the 10 isometries of the brane. These possibilities are summarized in Figure \ref{types}.
\begin{figure}
\centering
\includegraphics[width=5.0in]{types}
\caption{Types of maximally symmetric embedded brane effective field theories, their symmetry breaking patterns, and functions $f(\pi)$. The relationships
to the Galileon and DBI theories are also noted.}
\label{types}
\end{figure}
It should be noted that the missing squares in Figure \ref{types} may be filled in if we are willing to consider a bulk which has more than one time direction\footnote{We thank Sergei Dubovsky for pointing this out.}. For example, it is possible to embed $AdS_4$ into a five-dimensional Minkowski space with two times (indeed, this is the standard way of constructing $AdS$ spaces). From the point of view that the bulk is physical, and hence should be thought of as dynamical, these possibilities may be unacceptable on physical grounds. However, if one thinks of the bulk as merely a mathematical device for constructing novel four-dimensional effective theories, then there is nothing a priori to rule out these possibilities. In this paper, we focus on those cases in which the bulk has only one time dimension. The construction in the other cases will, however, follow the same pattern.
Finally, note that the only invariant data that go into constructing a brane theory are the background metric and the action. Theories with the same background metric and the same action are isomorphic, regardless of the choice of foliation (which is merely a choice of gauge). For example, given the same action among the theories listed in Figure \ref{types}, the three that have an $AdS_5$ background, namely the conformal DBI Galileons, the $AdS_4$ DBI Galileons, and the type III $dS_4$ DBI Galileons, are really the same theory. They are related by choosing a different foliation (gauge), shuffling the background $\pi$ configuration into the background metric.
\section{Actions with second order equations of motion}
\label{sec:ghostfreeactions}
Up until now we have discussed the degrees of freedom and their symmetries, but it is the choice of action that defines the dynamics. A general choice for the function $F$ in~(\ref{gaugefixedaction}) will lead to scalar field equations for $\pi$ which are higher than second order in derivatives. When this is the case, the scalar will generally propagate extra degrees of freedom which are ghost-like~\cite{Ostrogradski,deUrries:1998bi}. The presence of such ghosts signifies that either the theory is unstable, or the cutoff must be lowered so as to exclude the ghosts. Neither of these options is particularly attractive, and so it is desirable to avoid ghosts altogether. It is the Galileon terms which are special because they lead to equations of at most second order. Furthermore, as mentioned in the introduction, there can exist regimes in which the Galileon terms dominate over all others, so we will be interested only in these terms.
A key insight of de Rham and Tolley~\cite{deRham:2010eu} is that there are a finite number of actions of the type (\ref{gaugefixedaction}), the Lovelock terms and their boundary terms, that do in fact lead to second order equations for $\pi$ and become the Galileon terms.
The possible
extensions of Einstein gravity which remain second order are given by Lovelock terms~\cite{Lovelock:1971yv}. These terms are specific combinations of powers of the Riemann tensor which are topological (i.e. total derivatives) in some specific home dimension, but in lower dimensions have the property that equations of motions derived from them are second order. (For a short summary of some properties of these terms, see Appendix B of~\cite{Hinterbichler:2010xn}.) The Lovelock terms come with boundary terms. It is well known that, when a brane is present, bulk gravity described by the Einstein-Hilbert Lagrangian should be supplemented by the Gibbons-Hawking-York boundary term~\cite{Gibbons:1976ue,York:1972sj}
\begin{equation}
\label{e:EHGH}
S = \int_M d^5 X \ \sqrt{-G}R[G] \ + 2 \int \ d^4 x\sqrt{-\bar g}K \ .
\end{equation}
Similarly, Lovelock gravity in the bulk must be supplemented by brane terms which depend on the intrinsic and extrinsic curvature of the
brane (the so-called Myers terms ~\cite{Myers:1987yn,Miskovic:2007mg}), which are needed in order to make the variational problem for the brane/bulk system well posed~\cite{Dyer:2008hb}.
Of course we are not considering bulk gravity to be dynamical, but the point here is that these boundary terms also yield second order equations of motion for $\pi$ in the construction leading to~(\ref{gaugefixedaction}).
The prescription of~\cite{deRham:2010eu} is then as follows: on the 4-dimensional brane, we may add the first two Lovelock terms, namely the cosmological constant term $\sim \sqrt{-\bar g}$ and the Einstein-Hilbert term $\sim \sqrt{-\bar g}R[\bar g]$. (The higher Lovelock terms are total derivatives in 4-dimensions.) We may also add the boundary term corresponding to a bulk Einstein-Hilbert term, $\sqrt{-\bar g}K$, and the boundary term ${\cal L}_{\rm GB}$ corresponding to the Gauss-Bonnet Lovelock invariant $R^2 - 4 R_{\mu\nu} R^{\mu\nu}+ R_{\mu\nu\alpha\beta} R^{\mu\nu\alpha\beta}$ in the bulk. The zero order cosmological constant Lovelock term in the bulk has no boundary term (although as we will see, we may construct a fifth term, the tadpole term, from it) and the higher order bulk Lovelock terms vanish identically. Therefore, in total, for a 3-brane there are four possible terms (five including the tadpole) which
lead to second order equations. These are the terms we focus on.
\subsection{The tadpole term}
As mentioned, there is one term that contains no derivatives of $\pi$ and is not of the form~(\ref{gaugefixedaction}). This Lagrangian is called the tadpole term, denoted by ${\cal A}(\pi)$. The value of the tadpole action is the proper 5-volume between some $\rho= {\rm constant}$ surface and the position of the brane,
\begin{equation}
\label{tadpoleterm}
S_1=\int d^4x \int^\pi d\pi' \sqrt{-G}=\int d^4x\sqrt{-g}\int^\pi d\pi' f(\pi')^4,
\end{equation}
so that
\begin{equation}
{\cal L}_1=\sqrt{-g}{\cal A}(\pi),\ \ \ \ {\cal A}(\pi)=\int^\pi d\pi' f(\pi')^4.
\end{equation}
Note that ${\cal A}'(\pi)=f(\pi)^4$.
Under a general nonlinear symmetry $\delta_{\rm K}\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ of the type~(\ref{specialcasesym}), its change is
\begin{equation}
\delta_{K}{\cal L}_1=\sqrt{-g}{\cal A}'(\pi)\delta_{K}\pi=\sqrt{-g}f^4\left(K^5(x)-K^\mu(x,\pi)\partial_\mu\pi\right) \ .
\end{equation}
Using the Killing equation (\ref{killingequation}), it is straightforward to check directly that the Euler-Lagrange variation of the right-hand side vanishes identically, demonstrating that the change in the tadpole term under the symmetry transformation is a total derivative. Thus the tadpole term has the same symmetries as the other terms.
\subsection{Explicit expressions for the terms}
Including the tadpole term there are thus five terms that lead to second order equations for $\pi$,
\begin{eqnarray}
{\cal L}_1&=&\sqrt{-g}\int^\pi d\pi' f(\pi')^4,\nonumber\\
{\cal L}_2&=&- \sqrt{-\bar g} \ ,\nonumber\\
{\cal L}_3&=& \sqrt{-\bar g}K \ ,\nonumber\\
{\cal L}_4&=& -\sqrt{-\bar g}\bar R \ ,\nonumber\\
{\cal L}_5&=&{3\over 2}\sqrt{-\bar g} {\cal K}_{\rm GB} \ ,
\label{ghostfreegenterms}
\end{eqnarray}
where the explicit form of the Gauss-Bonnet boundary term is
\begin{equation}
{\cal K}_{\rm GB}=-{1\over3}K^3+K_{\mu\nu}^2K-{2\over 3}K_{\mu\nu}^3-2\left(\bar R_{\mu\nu}-\frac{1}{2} \bar R \bar g_{\mu\nu}\right)K^{\mu\nu} \ .
\end{equation}
Indices are raised and traces are taken with $\bar g^{\mu\nu}$. At this stage, each of these terms would appear in a general Lagrangian with an arbitrary coefficient. As we will see later, requiring stability will, however, force certain choices on us in specific examples.
En route to presenting specific examples of our new theories, we now evaluate these terms on the special case metric~(\ref{metricform}). We make use of formulae catalogued in Appendix \ref{appendix1}. Our strategy is to collect coefficients of $f''$, $f'$, $f'^2$ and $f'^3$, eliminate everywhere $(\partial\pi)^2$ in favor of $\gamma={1\over \sqrt{1+{1\over f^2}(\partial\pi)^2}}$, and then to group like terms by powers of $\gamma$. A lengthy calculation yields
\begin{eqnarray}
{\cal L}_1&=&\sqrt{-g}\int^\pi d\pi' f(\pi')^4,\nonumber\\
{\cal L}_2&=&-\sqrt{-g}f^4\sqrt{1+{1\over f^2}(\partial\pi)^2},\nonumber\\
{\cal L}_3&=&\sqrt{-g}\left[f^3f'(5-\gamma^2)-f^2[\Pi]+\gamma^2[\pi^3]\right],\nonumber\\
{\cal L}_4&=& -\sqrt{-g}\left\{{1\over\gamma}f^2R-2{\gamma}R_{\mu\nu}\nabla^\mu\pi\nabla^\nu\pi\right. \nonumber\\
&&+\gamma\left[[\Pi]^2-[\Pi^2]+2{\gamma^2\over f^2}\left(-[\Pi][\pi^3]+[\pi^4]\right)\right]+6{f^3f''\over \gamma}\left(-1+\gamma^2\right) \nonumber\\
&&\left.+2\gamma ff'\left[-4[\Pi]+{\gamma^2\over f^2}\left(f^2[\Pi]+4[\pi^3]\right)\right]-6{f^2f'^2\over \gamma}\left(1-2\gamma^2+\gamma^4\right) \right\},\nonumber\\
{\cal L}_5&=&{3\over 2}\sqrt{-g}\left\{ R\left[3ff'-[\Pi]+{\gamma^2\over f^2}\left(-f^3f'+[\pi^3]\right)\right]-2{\gamma^2\over f^2}R^{\mu\nu\alpha\beta}\nabla_\mu\pi\nabla_\alpha\pi\Pi_{\nu\beta} \right.\nonumber \\
&& +2R^{\mu\nu}\left[\Pi_{\mu\nu}+{\gamma^2\over f^2}\left((-3ff'+[\Pi])\nabla_\mu\pi\nabla_\nu\pi-2\Pi_{\alpha(\mu}\nabla_{\nu)}\pi\nabla^\alpha\pi\right)\right] \nonumber\\
&&-{\gamma^2\over f^2}\left[{2\over 3}\left([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\right)+2{\gamma^2\over f^2}\left(-[\pi^3]([\Pi]^2-[\Pi^2])+2[\Pi][\pi^4]-2[\pi^5]\right)\right]\nonumber\\
&& +4ff''\left[-3ff'+[\Pi]+{\gamma^2\over f^2}\left(3f^3f'-f^2[\Pi]-[\pi^3]\right)\right]-2ff'^3\left(9-11\gamma^2+6\gamma^4\right)\nonumber\\
&&+2f'^2\left[[\Pi]-{\gamma^2\over f^2}\left(8f^2[\Pi]+[\pi^3]\right)+2{\gamma^4\over f^2}\left(2f^2[\Pi]+5[\pi^3]\right)\right] \nonumber\\
&& \left.+2\gamma^2{f'\over f}\left[3\left([\Pi]^2-[\Pi^2]\right)-{\gamma^2\over f^2}\left(f^2([\Pi]^2-[\Pi^2])+6([\Pi][\pi^3]-[\pi^4])\right)\right] \right\} \ . \nonumber\\
\label{generalterms}
\end{eqnarray}
The quantities $[\Pi^n]$ and $[\pi^n]$ are various contractions of derivatives of the $\pi$ field, and the notation is explained in the conventions at the end of Section \ref{introduction}. In these expressions, all curvatures are those of the metric $g_{\mu\nu}$, and all derivatives are covariant derivatives with respect to $g_{\mu\nu}$. We point out that no integrations by parts have been performed in obtaining these expressions.
The equations of motion derived from any of these five terms will
contain no more than two derivatives on each field, ensuring that no extra degrees of freedom propagate around any background. After suitable integrations by parts, these actions should therefore conform to the general structure presented in \cite{Deffayet:2011gz} for actions of a single scalar with second order equations (see also the Euler hierarchy constructions \cite{Fairlie:1991qe,Fairlie:1992nb,Fairlie:1992yy,Fairlie:2011md}). In the above construction, however, we can immediately identify the nonlinear symmetries by reading them off from the isometries of the bulk.
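
This second-order property is easy to exhibit concretely. As a minimal illustration (assuming the \texttt{sympy} Python library is available; the reduction to $1+1$ dimensions and the variable names are ours, purely for brevity), the following sketch computes the Euler-Lagrange derivative of the cubic term $-\frac{1}{2}(\partial\pi)^2[\Pi]$, which reappears below as the flat-space small-field limit of ${\cal L}_3$, and confirms that no derivatives higher than second order survive:
\begin{verbatim}
# Sketch: check that the cubic Galileon has a second-order field equation (1+1 toy).
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
pi = sp.Function('pi')(t, x)
coords = (t, x)
eta = sp.diag(-1, 1)                      # 1+1 Minkowski metric

dpi2 = sum(eta[i, i]*sp.diff(pi, coords[i])**2 for i in range(2))   # (d pi)^2
box  = sum(eta[i, i]*sp.diff(pi, coords[i], 2) for i in range(2))   # [Pi]
L3 = -sp.Rational(1, 2)*dpi2*box          # cubic Galileon

eom = sp.expand(euler_equations(L3, [pi], coords)[0].lhs)
orders = [sum(n for _, n in d.variable_count) for d in eom.atoms(sp.Derivative)]
print(max(orders))                        # prints 2: third derivatives cancel identically
\end{verbatim}
The higher-derivative terms generated by the individual pieces of the Euler-Lagrange derivative cancel among themselves, leaving an equation that is algebraic in second derivatives of $\pi$.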
Finally, we note that by keeping the metric $g_{\mu\nu}$ in (\ref{metricform}) arbitrary rather than fixing it to the foliation, we can automatically obtain the covariantization of these various Galileon actions, including the non-minimal curvature terms required to keep the equations of motion second order, the same terms obtained by purely 4-d methods in \cite{Deffayet:2009mn,Deffayet:2009wt,Deffayet:2011gz}. Of course, this in general ruins the symmetries we are interested in considering. But from this point of view, we can see exactly when such symmetries will be present. The symmetries will only be present if the $g_{\mu\nu}$ which is used to covariantly couple is such that the full metric (\ref{metricform}) has isometries.
\section{Maximally Symmetric Examples}
We now proceed to construct explicitly the maximally symmetric examples catalogued in Section \ref{maxsymsubsection2} and Figure \ref{types}. The construction starts by finding coordinates which are adapted to the desired foliation, so that the metric in the bulk takes the form (\ref{metricform}), allowing us to read off the function $f(\pi)$. Plugging into (\ref{generalterms}) then gives us the explicit Lagrangians. To find the form of the global symmetries, we must write the explicit Killing vectors in the bulk, and identify those which are parallel and not parallel to the foliation. We may then read off the symmetries from (\ref{specialcasesym}).
The construction for each case is similar, and some of the results are related by analytic continuation, but there are enough differences in the forms of the embeddings and the Killing vectors that we thought it worthwhile to display each case explicitly. The reader interested only in a given case may skip directly to it.
\subsection{A Minkowski brane in a Minkowski bulk: $M_4$ in $M_5$ -- DBI Galileons}
Choosing cartesian coordinates $(x^\mu,\rho)$ on $M_5$, the foliation of $M_5$ by $M_4$ is simply given by $\rho={\rm constant}$ slices, and the metric takes the form
\begin{equation}
ds^2=(d\rho)^2+\eta_{\mu\nu}dx^\mu dx^\nu \ .
\end{equation}
Comparing this to~(\ref{metricform}), we obtain
\begin{equation}
{f(\pi)=1,\ \ \ g_{\mu\nu}=\eta_{\mu\nu},}
\end{equation}
and the terms~(\ref{generalterms}) become (again, without integration by parts)
\begin{align}
\mathcal{L}_{1}&=\pi ,\nonumber \\
\mathcal{L}_{2}&=-\sqrt{1+(\partial \pi)^{2}} \ , \nonumber \\
\mathcal{L}_{3}&=-\left [\Pi\right ]+\gamma^{2}\left [\pi^3\right ] \ ,\nonumber \\
\mathcal{L}_{4}& =-\gamma \left (\left [\Pi \right ]^{2} -\left [\Pi^{2}\right ]\right )-2\gamma^{3}\left (\left [\pi^{4}\right ]-\left [\Pi\right ]\left [\pi^3\right ]\right ) \ ,\nonumber \\
\mathcal{L}_{5}& =-\gamma^{2}\left (\left [\Pi\right ]^{3}+2\left [\Pi^{3}\right ]-3\left [\Pi\right ]\left [\Pi^{2}\right ]\right )-\gamma^{4}\left (6\left [\Pi\right ]\left [\pi^{4}\right ]-6\left [\pi^{5}\right ]-3\left (\left [\Pi\right ]^{2}-\left [\Pi^{2}\right ]\right )\left [\pi^3\right ]\right ) \ ,\nonumber \\
\label{DBIGalileonterms}
\end{align}
where $\gamma={1\over \sqrt{1+(\partial\pi)^2}}$.
These are the DBI Galileon terms, first written down in \cite{deRham:2010eu} and further studied in \cite{Goon:2010xh}.
\subsubsection{Killing vectors and symmetries}
The Killing vectors of $5$d Minkowski space are the 10 Lorentz transformations $L_{AB}=X_A\partial_B-X_B\partial_A$, and the 5 translations $P_A=-\partial_A$. The 6 Lorentz transformations $J_{\mu\nu}$ and the 4 translations $P_\mu$ are parallel to the foliation and form the unbroken $p(3,1)$ symmetries of $M_{4}$. The 5 broken generators are
\begin{eqnarray}
&&K\equiv-P_5=\partial_\rho, \\
&& K_\mu\equiv L_{\mu 5}=x_\mu\partial_\rho-\rho\partial_\mu \ .
\end{eqnarray}
Using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we obtain the transformation rules
\begin{eqnarray}
&&\delta\pi=1, \nonumber \\
&& \delta_\mu \pi=x_\mu+\pi\partial_\mu\pi \ , \label{DBIGalileontrans}
\end{eqnarray}
under which the terms (\ref{DBIGalileonterms}) are each invariant up to a total derivative. The symmetry breaking pattern is
\begin{equation}
p(4,1)\rightarrow p(3,1) \ .
\end{equation}
\subsection{A Minkowski brane in an anti-de Sitter bulk: $M_4$ in $AdS_5$ -- Conformal Galileons\label{MinAdSsection}}
In this section, indices $\mathcal{A},\mathcal{B},\cdots$ run over six values $0,1,2,3,4,5$ and $Y^\mathcal{A}$ are cartesian coordinates in an ambient $6$d two-time Minkowski space with metric $\eta_{\mathcal{A}\mathcal{B}}={\rm diag}(-1,-1,1,1,1,1)$, which we call $M_{4,2}$.
Five dimensional anti-de Sitter space $AdS_5$ (more precisely, a quotient thereof) can be described as the subset of points $(Y^0,Y^1,Y^2\ldots,Y^{5})\in M_{4,2}$ on the hyperboloid of one sheet satisfying
\begin{equation}
\eta_{\mathcal{A}\mathcal{B}}Y^\mathcal{A} Y^\mathcal{B}=-(Y^0)^2-(Y^1)^2+(Y^2)^2+\cdots+(Y^{5})^2=-{\cal R}^2 \ ,
\end{equation}
with ${\cal R}>0$ the radius of curvature of $AdS_5$, and where the metric is induced from the flat metric on $M_{4,2}$. This space is not simply connected, but its universal cover is $AdS_5$. The scalar curvature $R$ and cosmological constant $\Lambda$ are given by $R=-{20\over {\cal R}^2},\ \Lambda=-{6\over{\cal R}^2}.$
We use Poincare coordinates $(\rho, x^\mu)$ on $AdS_5$ which cover the region $Y^0+Y^2>0$,
\begin{eqnarray}
\nonumber Y^0&=&\mathcal{R} \cosh\left(\rho\over\mathcal{R}\right)+{1\over 2\mathcal{R}}e^{-\rho/\mathcal{R}}x^2 \ ,\\ \nonumber
Y^1&=&e^{-\rho/\mathcal{R}}x^0 \ ,\\ \nonumber
Y^2&=&-\mathcal{R} \sinh\left(\rho\over\mathcal{R}\right)-{1\over 2\mathcal{R}}e^{-\rho/\mathcal{R}}x^2 \ ,\\
Y^{i+2}&=&e^{-\rho/\mathcal{R}}x^i \ ,\ \ i=1,2,3 \ ,
\end{eqnarray}
where $x^2\equiv\eta_{\mu\nu}x^\mu x^\nu$, and $\eta_{\mu\nu}={\rm diag}(-1,1,1,1)$ is the Minkowski 4-metric. The coordinates $\rho$ and $x^\mu$ all take values in $(-\infty,\infty)$. Surfaces of constant $\rho$ foliate the Poincare patch of $AdS_5$ with Minkowski $M_4$ time-like slices, given by intersecting the planes $Y^0+Y^2={\rm constant}$ with the hyperboloid.
The induced metric is
\begin{equation}
ds^2=d\rho^2+e^{-2\rho/\mathcal{R}}\eta_{\mu\nu}dx^\mu dx^\nu \ .
\end{equation}
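
Both the embedding condition and this form of the induced metric are straightforward to confirm symbolically. A minimal sketch (assuming the \texttt{sympy} Python library; the variable names are purely illustrative) is the following:
\begin{verbatim}
# Sketch: Poincare embedding of AdS_5 in M_{4,2} and its induced metric.
import sympy as sp

R = sp.symbols('R', positive=True)
rho = sp.symbols('rho', real=True)
x = sp.symbols('x0 x1 x2 x3', real=True)
eta4 = sp.diag(-1, 1, 1, 1)
eta6 = sp.diag(-1, -1, 1, 1, 1, 1)
x2 = sum(eta4[m, m]*x[m]**2 for m in range(4))

Y = [R*sp.cosh(rho/R) + sp.exp(-rho/R)*x2/(2*R),
     sp.exp(-rho/R)*x[0],
     -R*sp.sinh(rho/R) - sp.exp(-rho/R)*x2/(2*R),
     sp.exp(-rho/R)*x[1], sp.exp(-rho/R)*x[2], sp.exp(-rho/R)*x[3]]

constraint = sum(eta6[A, A]*Y[A]**2 for A in range(6))
print(sp.simplify(sp.expand(constraint.rewrite(sp.exp))))       # prints -R**2

coords = (rho,) + x
J = sp.Matrix([[sp.diff(Y[A], c) for c in coords] for A in range(6)])
g = J.T*eta6*J                                                  # induced 5d metric
expected = sp.diag(1, *[sp.exp(-2*rho/R)*eta4[m, m] for m in range(4)])
print((g - expected).applyfunc(lambda e: sp.simplify(sp.expand(e.rewrite(sp.exp)))))  # zero matrix
\end{verbatim}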
Comparing this with~(\ref{metricform}) we obtain
\begin{equation}
{f(\pi)=e^{-\pi/\mathcal{R}},\ \ \ \ g_{\mu\nu}=\eta_{\mu\nu}} \ ,
\end{equation}
and the terms~(\ref{generalterms}) become (without integration by parts)
\begin{eqnarray}
{\cal L}_1&=& -{\mathcal{R}\over 4}e^{-4\pi/ \mathcal{R}} \ ,\nonumber\\
{\cal L}_2&=& -e^{-4\pi/ \mathcal{R}}\sqrt{1+e^{2\pi/ \mathcal{R}} (\partial\pi)^2} \ ,\nonumber \\
{\cal L}_3&=& \gamma^2[\pi^3]-e^{-2\pi/ \mathcal{R}}[\Pi]+{1\over \mathcal{R}}e^{-4\pi/ \mathcal{R}}(\gamma^2-5) \ ,\nonumber \\
{\cal L}_4&=&
-\gamma([\Pi]^2-[\Pi^2])-2\gamma^3 e^{2\pi/\mathcal{R}}([\pi^4]-[\Pi][\pi^3])\nonumber\\
&&+\frac{6}{\mathcal{R} ^2}e^{-4\pi/\mathcal{R}}{1\over \gamma}\(2-3\gamma^2+\gamma^4\)+\frac 8 \mathcal{R}\gamma^3 [\pi^3]
-\frac 2 \mathcal{R} e^{-2\pi/\mathcal{R}}\gamma\(4-\gamma^2\)[\Pi]
\, ,\nonumber \\
{\cal L}_5&=&
-\gamma^2 e^{2\pi/ \mathcal{R}}\([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\) \nonumber \\
\hspace{-5pt}&&\hspace{-5pt}
-3 \gamma^4e^{4\pi/ \mathcal{R}}\left[2([\Pi][\pi^4]-[\pi^5])-([\Pi]^2-[\Pi^2])[\pi^3]\right]\nonumber\\
\hspace{-5pt}&&\hspace{-5pt} +\frac{18}{ \mathcal{R}}e^{2\pi/ \mathcal{R}}\gamma^4([\Pi][\pi^3]-[\pi^4])-\frac 3 \mathcal{R} {\gamma^2}(3-\gamma^2)([\Pi]^2-[\Pi^2])\nonumber\\
\hspace{-5pt}&&\hspace{-5pt}-\frac{3}{ \mathcal{R}^2}{ \gamma^2}(3-10\gamma^2)[\pi^3]
-\frac{3}{ \mathcal{R}^2}e^{-2\pi/ \mathcal{R}}(-3+10\gamma^2-4\gamma^4)[\Pi]\nonumber \hspace{-100pt}\nonumber \\
\hspace{-5pt}&&\hspace{-5pt}+\frac{3}{ \mathcal{R}^3}e^{-4\pi/ \mathcal{R}}(15-17\gamma^2+6\gamma^4)
\ , \nonumber \\ \label{conformalDBIGalileonterms}
\end{eqnarray}
where
\begin{equation}
\gamma={1\over \sqrt{1+e^{2\pi/ \mathcal{R}} (\partial\pi)^2}} \ .
\end{equation}
These are the conformal DBI Galileons, first written down in \cite{deRham:2010eu}.
\subsubsection{Killing vectors and symmetries}
The 15 Lorentz generators of $M_{4,2}$, $M_{ \mathcal{A} \mathcal{B} }=Y_ \mathcal{A} \bar{\partial}_ \mathcal{B} -Y_ \mathcal{B} \bar{\partial}_ \mathcal{A} $ (here $\bar{\partial}_ \mathcal{A} $ are the coordinate basis vectors in the ambient space $M_{4,2}$, and indices are lowered with the $M_{4,2}$ flat metric $\eta_{ \mathcal{A} \mathcal{B} }$), are all tangent to the $AdS_5$ hyperboloid, and become the 15 isometries of the $so(4,2)$ isometry algebra of $AdS_5$. Of these, 10 have no $\partial_\rho$ components and are parallel to the $M_4$ foliation. These form the unbroken $p(3,1)$ isometry algebra of the $M_4$ slices.
First we have
\begin{eqnarray}
Y^{i+2}\bar{\partial}_1+Y^1\bar{\partial}_{i+2}&\rightarrow &x^i\partial_0+x^0\partial_i ,\ \ \ i=1,2,3,\\
Y^{i+2}\bar{\partial}_{j+2}-Y^{j+2}\bar{\partial}_{i+2}&\rightarrow & x_i\partial_j-x_j\partial_i, \ \ \ i,j=1,2,3,
\end{eqnarray}
which taken together are the 6 Lorentz transformations $L_{\mu\nu}=x_\mu\partial_\nu-x_\nu\partial_\mu$ of the $x^{\mu}$.
For the remaining 4, we focus on
\begin{eqnarray}
-Y^{1}\bar{\partial}_0+Y^0\bar{\partial}_{1}&\rightarrow &x^0\partial_\rho+\left[{\mathcal{R}\over 2}\left(1+e^{2\rho\over \mathcal{R}}\right)+{1\over 2\mathcal{R}}x^2\right]\partial_0+{1\over \mathcal{R}}x^0x^\mu\partial_\mu \ ,\nonumber \\
-Y^{i+2}\bar{\partial}_0-Y^0\bar{\partial}_{i+2}&\rightarrow &x^i\partial_\rho+\left[-{\mathcal{R}\over 2}\left(1+e^{2\rho\over \mathcal{R}}\right)-{1\over 2\mathcal{R}}x^2\right]\partial_i+{1\over \mathcal{R}}x^ix^\mu\partial_\mu \ ,\ \ \ \ i=1,2,3 \ ,\nonumber \\
-Y^{2}\bar{\partial}_1-Y^1\bar{\partial}_{2}&\rightarrow &x^0\partial_\rho+\left[-{\mathcal{R}\over 2}\left(1-e^{2\rho\over \mathcal{R}}\right)+{1\over2 \mathcal{R}}x^2\right]\partial_0+{1\over \mathcal{R}}x^0x^\mu\partial_\mu \ ,\nonumber \\
-Y^{i+2}\bar{\partial}_2+Y^2\bar{\partial}_{i+2}&\rightarrow &x^i\partial_\rho+\left[{\mathcal{R}\over 2}\left(1-e^{2\rho\over \mathcal{R}}\right)-{1\over2 \mathcal{R}}x^2\right]\partial_i+{1\over \mathcal{R}}x^ix^\mu\partial_\mu \ ,\ \ \ \ i=1,2,3 \ ,
\end{eqnarray}
which may be grouped as
\begin{eqnarray}
V_\mu &= &x_\mu\partial_\rho+\left[-{\mathcal{R}\over 2}\left(1+e^{2\rho\over \mathcal{R}}\right)-{1\over 2\mathcal{R}}x^2\right]\partial_\mu+{1\over \mathcal{R}}x_\mu x^\nu\partial_\nu \ ,\ \ \ \ \mu=0,1,2,3\nonumber \\
V'_\mu &= &x_\mu\partial_\rho+\left[{\mathcal{R}\over 2}\left(1-e^{2\rho\over \mathcal{R}}\right)-{1\over2 \mathcal{R}}x^2\right]\partial_\mu+{1\over \mathcal{R}}x_\mu x^\nu\partial_\nu,\ \ \ \ \mu=0,1,2,3 \ . \nonumber\\
\end{eqnarray}
If we now take the following linear combinations,
\begin{eqnarray}
P_\mu &= & {1\over \mathcal{R}}(V_\mu-V'_\mu)=-\partial_\mu \ ,\\
K_\mu &= &(V_\mu+V'_\mu)=2x_\mu\partial_\rho-\left[{\mathcal{R}}e^{2\rho\over \mathcal{R}}+{1\over \mathcal{R}}x^2\right]\partial_\mu+{2\over \mathcal{R}}x_\mu x^\nu\partial_\nu \ ,
\end{eqnarray}
the $P_\mu$ are the translations on the $x^{\mu}$, the remaining 4 unbroken vectors.
The $K_\mu$ are broken generators and, in addition, there is one more broken vector,
\begin{equation}
-Y^2\bar{\partial}_0-Y^0\bar{\partial}_2=\mathcal{R}\partial_\rho+x^\mu\partial_\mu \ .
\end{equation}
Using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we obtain the transformation rules for the $\pi$ field from this and from the $K_\mu$ as
\begin{eqnarray}
\delta\pi&=&\mathcal{R}-x^\mu\partial_\mu\pi,\nonumber\\
\delta_\mu\pi&=&2x_\mu+\left[{\mathcal{R}}e^{2\pi\over \mathcal{R}}+{1\over \mathcal{R}}x^2\right]\partial_\mu\pi-{2\over \mathcal{R}}x_\mu x^\nu\partial_\nu\pi \ . \label{MinAdStrans}
\end{eqnarray}
The terms~(\ref{conformalDBIGalileonterms}) are each invariant up to a total derivative under these transformations, and the symmetry breaking pattern is
\begin{equation}
so(4,2)\rightarrow p(3,1) \ .
\end{equation}
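
As a cross-check on the identification of the broken generators, one can verify that the ambient vector $-Y^2\bar{\partial}_0-Y^0\bar{\partial}_2$ is indeed the push-forward of $\mathcal{R}\partial_\rho+x^\mu\partial_\mu$ through the Poincare embedding given above. A sketch of this check (same assumptions as before: the \texttt{sympy} library, with illustrative variable names) is:
\begin{verbatim}
# Sketch: push-forward of R d_rho + x^mu d_mu reproduces -Y^2 dbar_0 - Y^0 dbar_2.
import sympy as sp

R = sp.symbols('R', positive=True)
rho = sp.symbols('rho', real=True)
x = sp.symbols('x0 x1 x2 x3', real=True)
eta4 = sp.diag(-1, 1, 1, 1)
x2 = sum(eta4[m, m]*x[m]**2 for m in range(4))

Y = [R*sp.cosh(rho/R) + sp.exp(-rho/R)*x2/(2*R),
     sp.exp(-rho/R)*x[0],
     -R*sp.sinh(rho/R) - sp.exp(-rho/R)*x2/(2*R),
     sp.exp(-rho/R)*x[1], sp.exp(-rho/R)*x[2], sp.exp(-rho/R)*x[3]]

coords = (rho,) + x
v = sp.Matrix([R, x[0], x[1], x[2], x[3]])        # components (v^rho, v^mu) of the generator
J = sp.Matrix([[sp.diff(Y[A], c) for c in coords] for A in range(6)])
pushforward = J*v                                  # ambient components, via the chain rule
target = sp.Matrix([-Y[2], 0, -Y[0], 0, 0, 0])
print((pushforward - target).applyfunc(sp.expand)) # prints the zero vector
\end{verbatim}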
\subsection{A de Sitter brane in a Minkowski bulk: $dS_4$ in $M_5$}
We describe the Minkowski bulk with the usual metric in cartesian coordinates
\begin{equation}
{ds^2=\eta_{AB}dX^AdX^B=-(dX^0)^2+(dX^1)^2+(dX^2)^2+(dX^3)^2+(dX^4)^2} \ .
\end{equation}
The region $\eta_{AB}X^AX^B>0$ (i.e. outside the lightcone) can be foliated by de Sitter slices. To see this, we use Rindler coordinates which cover this region,
\begin{eqnarray}
\nonumber X^0&=&\rho\sinh\tau, \\ \nonumber
X^1&=&\rho \cosh \tau\ \cos\theta_1 \ , \\ \nonumber
X^2&=& \rho\cosh \tau\ \sin\theta_1\cos\theta_2 \ , \\ \nonumber
X^3&=& \rho\cosh \tau\ \sin\theta_1\sin\theta_2\cos\theta_3 \ , \\
X^4&=& \rho\cosh \tau\ \sin\theta_1\sin\theta_2\sin\theta_3 \ ,
\end{eqnarray}
where $\rho\in(0,\infty)$, $\tau\in(-\infty,\infty)$, and the $\theta_i$ ($i=1,2,3$) parametrize a $3$-sphere. The metric in Rindler coordinates is then
\begin{equation}
\label{rindlermetric}
{ds^2=d \rho ^2+ \rho ^2\left[-d\tau^2+\cosh^2\tau \ d\Omega^2_{(3)}\right]} \ .
\end{equation}
This metric is $ds^2=d\rho^2+\rho^2 ds^2_{dS_{4}},$ where $ds^2_{dS_{4}}$ is the global metric on a unit radius $4$d de Sitter space. The foliation by $dS^{4}$ thus corresponds to $\rho= {\rm constant}$ surfaces (or to $-(X^0)^2+(X^i)^2= {\rm constant} >0$ in cartesian coordinates).
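
This form of the induced metric is again easy to confirm symbolically from the embedding; a minimal sketch (assuming \texttt{sympy}, with purely illustrative variable names) is:
\begin{verbatim}
# Sketch: the Rindler embedding induces ds^2 = drho^2 + rho^2 [-dtau^2 + cosh^2(tau) dOmega_3^2].
import sympy as sp

rho, tau, t1, t2, t3 = sp.symbols('rho tau theta1 theta2 theta3', real=True)

X = [rho*sp.sinh(tau),
     rho*sp.cosh(tau)*sp.cos(t1),
     rho*sp.cosh(tau)*sp.sin(t1)*sp.cos(t2),
     rho*sp.cosh(tau)*sp.sin(t1)*sp.sin(t2)*sp.cos(t3),
     rho*sp.cosh(tau)*sp.sin(t1)*sp.sin(t2)*sp.sin(t3)]
eta5 = sp.diag(-1, 1, 1, 1, 1)
coords = (rho, tau, t1, t2, t3)

J = sp.Matrix([[sp.diff(XA, c) for c in coords] for XA in X])
g = (J.T*eta5*J).applyfunc(lambda e: sp.simplify(sp.trigsimp(e)))

dOmega3 = [1, sp.sin(t1)**2, sp.sin(t1)**2*sp.sin(t2)**2]          # unit 3-sphere metric
expected = sp.diag(1, -rho**2, *[rho**2*sp.cosh(tau)**2*w for w in dOmega3])
print((g - expected).applyfunc(sp.simplify))                        # prints the zero matrix
\end{verbatim}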
Comparing this with~(\ref{metricform}), we obtain
\begin{equation}
{f(\pi)=\pi,}\ \ \ g_{\mu\nu}=g_{\mu\nu}^{(dS_4)} \ ,
\end{equation}
and the terms~(\ref{generalterms}) become (without any integrations by parts)
\begin{eqnarray}
{\cal L}_1&=&{1\over 5}\sqrt{-g}{\pi^5} \ ,\nonumber\\
{\cal L}_2&=&-\sqrt{-g}\pi^4\sqrt{1+{1\over \pi^2}(\partial\pi)^2} \ ,\nonumber \\
{\cal L}_3&=&\sqrt{-g}\left[\pi^3(5-\gamma^2)-\pi^2[\Pi]+\gamma^2[\pi^3]\right] \ ,\nonumber \\
{\cal L}_4&=&\sqrt{-g}\ \gamma\left[-[\Pi]^2+[\Pi^2]+8\pi[\Pi]-18\pi^2-2{\gamma^2\over \pi^2}\left([\pi^4]+4\pi[\pi^3]-3\pi^4-[\Pi][\pi^3]+\pi^3[\Pi]\right)\right] \ ,\nonumber \\
{\cal L}_5&=&\sqrt{-g}\ {\gamma^2\over \pi^2}\Bigg[ -[\Pi]^3+3[\Pi][\Pi^2]-2[\Pi^3]+9\pi([\Pi]^2-[\Pi^2])+42\pi^3-30\pi^2[\Pi] \nonumber\\
&&+3{\gamma^2\over \pi^2}\left(([\Pi]^2-[\Pi^2])[\pi^3]+2[\pi^5]+6\pi[\pi^4]+10\pi^2[\pi^3]-\pi^3([\Pi]^2-[\Pi^2]) \right. \nonumber\\
&&\left. -6\pi^5-2[\Pi]([\pi^4]+3\pi[\pi^3]-2\pi^4)]\right)\Bigg] \ , \label{dSinMDBI}
\end{eqnarray}
where the background metric and covariant derivatives are those of unit-radius $4$d de Sitter space, and
\begin{equation}
\gamma={1\over \sqrt{1+{1\over \pi^2}(\partial\pi)^2}} \ .
\end{equation}
Note that, since we have chosen the $4$d space to be a unit-radius $dS_4$ with dimensionless coordinates, $\pi$ and $f$ have mass dimension $-1$. In evaluating (\ref{dSinMDBI}), we have used that the scalar curvature and cosmological constant of this space are $R=12$ and $\Lambda=3$ respectively, and used the relations $R_{\mu\nu\alpha\beta}={R\over 12}\(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha}\)$, and $R_{\mu\nu}={R\over 4}g_{\mu\nu}$, valid for a maximally symmetric space. It is possible, of course, to rescale the coordinates, canonically normalize the field, and/or rescale $f$ to bring these quantities to their usual dimensions. Given a suitable combination of these Lagrangians so that a constant field $\pi(x)=\pi_0={\rm constant}$ is a solution to the equations of motion, $\pi_0$ sets the radius of the de Sitter brane in its ground state.
We call these Type II de Sitter DBI Galileons (see Figure \ref{types}), and they are our first example of Galileons that live on curved space yet still retain the same number of shift-like symmetries as their flat space counterparts.
\subsubsection{Killing vectors and symmetries}
The 10 Lorentz transformations of $M_5$ are parallel to the de Sitter slices and become the unbroken $so(4,1)$ isometries of $dS_4$. The 5 translations are not parallel and will be nonlinearly realized.
With a future application to cosmology in mind, we will calculate the transformation laws explicitly using conformal inflationary coordinates $(u,y^i)$ on the de Sitter slices, even though these coordinates only cover half of each de Sitter slice. The embedding becomes
\begin{eqnarray}
X^0&=&{\rho\over 2u}\left(1-u^2+y^2\right) \ ,\nonumber \\
X^1&=&{\rho\over 2u}\left(1+u^2-y^2\right) \ ,\nonumber \\
X^{i+1}&=&{\rho y^i\over u},\ \ \ i=1,2,3 \ ,
\label{dsMinflationaryembedding}
\end{eqnarray}
where $y^2\equiv \delta_{ij}y^iy^j$, and the coordinate ranges are $\rho\in (0,\infty)$, $u\in(0,\infty)$, $y^i\in(-\infty,\infty)$. The metric takes the form
\begin{equation}
ds^2=d\rho^2+\rho^2\left[{1\over u^2}\left(-du^2+dy^2\right)\right] \ ,
\end{equation}
so that the $dS_4$ slices have conformal inflationary coordinates, with $u$ the conformal time.
We are interested in the form of the nonlinear symmetries stemming from the broken translation generators of $M_5$. In the coordinates~(\ref{dsMinflationaryembedding}), the broken Killing vectors $\bar{\partial}_A$ are
\begin{eqnarray}
\bar{\partial}_0&=&{1\over 2u}\left(-1+u^2-y^2\right)\partial_\rho-{1\over 2\rho}\left(1+u^2+y^2\right)\partial_u-{u\over \rho}y^i\partial_i \ ,\\
\bar{\partial}_1&=&{1\over 2u}\left(1+u^2-y^2\right)\partial_\rho-{1\over 2\rho}\left(-1+u^2+y^2\right)\partial_u-{u\over \rho}y^i\partial_i \ ,\\
\bar{\partial}_{i}&=&{y_i\over u}\partial_\rho+{y_i\over\rho}\partial_u+{u\over \rho}\partial_i,\ \ i=1,2,3 \ .
\end{eqnarray}
Taking the following linear combinations
\begin{eqnarray}
K_+&=&\bar{\partial}_0+\bar{\partial}_1={1\over u}\left(u^2-y^2\right)\partial_\rho-{1\over \rho}\left(u^2+y^2\right)\partial_u-{2u\over \rho}y^i\partial_i \ ,\\
K_-&=&\bar{\partial}_0-\bar{\partial}_1=-{1\over u}\partial_\rho-{1\over \rho}\partial_u \ ,\\
K_i&=& \bar{\partial}_{i}={y_i\over u}\partial_\rho+{y_i\over\rho}\partial_u+{u\over \rho}\partial_i \ ,
\end{eqnarray}
and using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we then obtain the transformation rules
\begin{eqnarray}
\delta_+\pi&=&{1\over u}\left(u^2-y^2\right)+{1\over \pi}\left(u^2+y^2\right)\pi'+{2u\over \pi}y^i\partial_i\pi \ ,\nonumber\\
\delta_-\pi&=&-{1\over u}+{1\over \pi}\pi' \ ,\nonumber \\
\delta_i\pi &=& {y_i\over u}-{y_i\over\pi}\pi'-{u\over \pi}\partial_i\pi \ ,
\label{dSinMtrans}
\end{eqnarray}
where $\pi'\equiv \partial_u\pi$.
The terms~(\ref{dSinMDBI}) are each invariant up to a total derivative under these transformations, and the symmetry breaking pattern is
\begin{equation}
p(4,1)\rightarrow so(4,1) \ .
\end{equation}
\subsection{A de Sitter brane in a de Sitter bulk: $dS_4$ in $dS_5$}
In this section, indices $\mathcal{A},\mathcal{B},\cdots$ run over six values $0,1,2,3,4,5$ and $Y^\mathcal{A}$ are coordinates in an ambient $6$d Minkowski space with metric $\eta_{\mathcal{A}\mathcal{B}}={\rm diag}(-1,1,1,1,1,1)$, which we call $M_{6}$.
Five-dimensional de Sitter space $dS_5$ can be described as the subset of points $(Y^0,Y^1,Y^2\ldots,Y^{5})\in M_{6}$ on the hyperboloid of one sheet satisfying
\begin{equation}
\eta_{\mathcal{A}\mathcal{B}}Y^\mathcal{A} Y^\mathcal{B}=-(Y^0)^2+(Y^1)^2+(Y^2)^2+\cdots+(Y^{5})^2={\cal R}^2 \ ,
\end{equation}
with the metric induced from the metric on $M_{6}$, for some constant ${\cal R}>0$, the radius of curvature of the $dS_5$. The scalar curvature $R$ and cosmological constant $\Lambda$ are given by $R=20/ {\cal R}^2$ and $\Lambda=6/{\cal R}^2$, respectively.
We use coordinates in which the constant $\rho$ surfaces are the intersections of the planes $Y^1={\rm constant}$ with the hyperbola, and are themselves four-dimensional de Sitter spaces $dS^{4}$,
\begin{eqnarray}
Y^0&=&{\cal R}\sin\rho\sinh\tau \ , \\
Y^1&=&{\cal R}\cos \rho \ , \\
Y^2&=&{\cal R}\cosh\tau\sin\rho\ \cos\theta_1 \ ,\\
Y^3&=&{\cal R}\cosh\tau\sin\rho\ \sin\theta_1\cos\theta_2 \ ,\\
Y^4&=&{\cal R}\cosh\tau\sin\rho\ \sin\theta_1\sin\theta_2\cos\theta_3 \ ,\\
Y^5&=&{\cal R}\cosh\tau\sin\rho\ \sin\theta_1\sin\theta_2\sin\theta_3 \ .
\end{eqnarray}
Here $\tau\in(-\infty,\infty)$, $\rho\in(0,\pi)$
and $\theta_i$, $i=1,2,3$ parametrize a $3$-sphere. These coordinates cover the region $0<Y^1<{\cal R}$, $0<Y^2<{\cal R}$.
The metric is
\begin{equation}
\label{hyperbolicdesitterinside}
ds^2={\cal R}^2\left[d\rho^2+\sin^2\rho\left(-d\tau^2+\cosh^2\tau\ d\Omega^2_{(3)}\right)\right] \ .
\end{equation}
Scaling $\rho$ so that it lies in the range $(0,\pi \mathcal{R})$, the metric becomes $ds^2=d\rho^2+\mathcal{R}^2\sin^2\(\rho\over \mathcal{R}\) ds^2_{dS_{4}}$, where $ds^2_{dS_{4}}$ is the global metric on a four-dimensional de Sitter space $dS_4$ of unit radius. The foliation by $dS^{4}$ thus corresponds to $\rho={\rm constant}$ surfaces. These slices are given by intersecting the planes $Y^1={\rm constant}$ with the hyperboloid, for values $0<Y^1<{\cal R}$. (By taking $\rho<0$ we cover instead $-{\cal R}<Y^2<0$. This is the maximum extent to which we may extend the foliation.)
Comparing this with~(\ref{metricform}), we obtain
\begin{equation}
{f(\pi)=\mathcal{R}\sin (\pi/{\cal R}), \ \ \ g_{\mu\nu}=g_{\mu\nu}^{(dS_4)}} \ ,
\end{equation}
and the terms~(\ref{generalterms}) become (using no integrations by parts)
\begin{eqnarray}
{\cal L}_1&=& \sqrt{-g}{\mathcal{R}^4\over 32}\( 12\ \pi-8\mathcal{R} \sin\(2\pi\over \mathcal{R}\) +\mathcal{R} \sin\(4\pi\over \mathcal{R}\)\) \ ,\\
{\cal L}_2&=&-\sqrt{-g}{\mathcal{R}^4\over \gamma} \sin^4\(\pi\over \mathcal{R}\) \ ,\\
{\cal L}_3&=&\sqrt{-g}\left[\gamma^2 [\pi^3]-\mathcal{R}^2[\Pi] \sin^2\(\pi\over \mathcal{R}\)+\mathcal{R}^3(5-\gamma^2) \sin^3\(\pi\over \mathcal{R}\) \cos\(\pi\over \mathcal{R}\)\right] \ , \\
{\cal L}_4&=&\sqrt{-g}\Bigg[ {2\gamma^3\over \mathcal{R}^2}\([\Pi][\pi^3]-[\pi^4]\)\csc^2\left(\pi\over {\cal R}\right)-{\gamma}\([\Pi]^2-[\Pi^2]+{8\gamma^2\over \mathcal{R}}[\pi^3]\cot\left(\pi\over {\cal R}\right)\) \\
&&+\mathcal{R}\gamma(4-\gamma^2)[\Pi]\sin\left(2\pi\over {\cal R}\right)+{3\mathcal{R}^2\over \gamma}\sin^2\left(\pi\over {\cal R}\right) \(-2-3\gamma^2+\gamma^4+(2-3\gamma^2+\gamma^4)\cos\left(2\pi\over {\cal R}\right)\)\Bigg] \ , \nonumber \\
{\cal L}_5&=&\sqrt{-g}\Bigg[ {3\gamma^4\over \mathcal{R}^4}\(2([\pi^5]-[\Pi][\pi^4])+[\pi^3]([\Pi]^2-[\Pi^2])\)\csc^4\left(\pi\over {\cal R}\right) \\
&&-{18\gamma^4\over \mathcal{R}^3}\([\Pi][\pi^3]-[\pi^4]\)\csc^2\left(\pi\over {\cal R}\right)\cot\left(\pi\over {\cal R}\right) \nonumber\\
&&-{\gamma^2\over \mathcal{R}^2}\csc^2\left(\pi\over {\cal R}\right)\([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]-{3\over 2}(3+10\gamma^2)[\pi^3]+{3\over 2}(3-10\gamma^2)[\pi^3]\cos\left(2\pi\over {\cal R}\right) \) \nonumber \\
&&+{3\gamma^2\over \mathcal{R}}(3-\gamma^2)([\Pi]^2-[\Pi^2])\cot\left(\pi\over {\cal R}\right)+{3\over 2}[\Pi] \(-3-10\gamma^2+4\gamma^4+(3-10\gamma^2+4\gamma^4)\cos\left(2\pi\over {\cal R}\right)\) \nonumber \\
&& -{3\mathcal{R} \over 4} \(-15-11\gamma^2+6\gamma^4+(15-17\gamma^2+6\gamma^4)\cos\left(2\pi\over {\cal R}\right)\) \sin\left(2\pi\over {\cal R}\right) \Bigg] \ ,
\label{dSindSDBI}
\end{eqnarray}
where the background metric and covariant derivatives are those of the unit-radius $4$d de Sitter space, and
\begin{equation}
\gamma={1\over \sqrt{1+{(\partial\pi)^2\over \mathcal{R}^2 \sin^2\(\pi\over \mathcal{R}\)}}} \ .
\end{equation}
Since we have chosen the $4$d space to have unit radius in dimensionless coordinates, $\pi$ and $f$ have mass dimension $-1$. In evaluating~(\ref{dSindSDBI}), we have used the fact that the scalar curvature and cosmological constant of this space are $R=12$ and
$\Lambda=3$ respectively, and the relations $R_{\mu\nu\alpha\beta}={R\over 12}\(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha}\)$ and $R_{\mu\nu}={R\over 4}g_{\mu\nu}$ valid for a maximally symmetric space.
Given a suitable combination of these Lagrangians so that a constant field $\pi(x)=\pi_0=const.$ is a solution to the equations of motion, $f(\pi_0)= \mathcal{R} \sin\(\pi_0\over \mathcal{R}\)$ sets the radius of the de Sitter brane. We call these Type I de Sitter DBI Galileons (see Figure \ref{types}).
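
A quick consistency check on the tadpole term is that the expression multiplying $\sqrt{-g}$ in ${\cal L}_1$ is an antiderivative of $f(\pi)^4$, as required by ${\cal A}'(\pi)=f(\pi)^4$. The following sketch (assuming \texttt{sympy}) verifies this for $f(\pi)=\mathcal{R}\sin(\pi/\mathcal{R})$, as well as for the hyperbolic cases $f(\pi)=\mathcal{R}\sinh(\pi/\mathcal{R})$ and $f(\pi)=\mathcal{R}\cosh(\pi/\mathcal{R})$ appearing in the next two subsections:
\begin{verbatim}
# Sketch: d/dpi of the tadpole bracket equals f(pi)^4 for the three curved-bulk cases.
import sympy as sp

p, R = sp.symbols('pi R', positive=True)    # p plays the role of the field pi

cases = [
    (R*sp.sin(p/R),  R**4/32*(12*p - 8*R*sp.sin(2*p/R)  + R*sp.sin(4*p/R))),   # dS_4 in dS_5
    (R*sp.sinh(p/R), R**4/32*(12*p - 8*R*sp.sinh(2*p/R) + R*sp.sinh(4*p/R))),  # dS_4 in AdS_5
    (R*sp.cosh(p/R), R**4/32*(12*p + 8*R*sp.sinh(2*p/R) + R*sp.sinh(4*p/R))),  # AdS_4 in AdS_5
]
for f, A in cases:
    print(sp.expand((sp.diff(A, p) - f**4).rewrite(sp.exp)))   # prints 0 three times
\end{verbatim}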
\subsubsection{Killing vectors and symmetries}
Once again, we calculate the transformation laws using conformal inflationary coordinates $(u,y^i)$ on the de Sitter slices, even though they only cover half of each de Sitter slice. The embedding becomes
\begin{eqnarray}
Y^0&=&\mathcal{R} \sin\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1-u^2+y^2\right) \ ,\\
Y^1&=&\mathcal{R} \cos\left(\rho\over \mathcal{R}\right) \ ,\\
Y^2&=&\mathcal{R} \sin\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1+u^2-y^2\right) \ ,\\
Y^{i+2}&=&\mathcal{R} \sin\left(\rho\over \mathcal{R}\right){y^i\over u},\ \ \ i=1,2,3 \ .
\end{eqnarray}
The coordinate ranges are $\rho\in (0,\pi\mathcal{R})$, $u\in(0,\infty)$ and $y^i\in(-\infty,\infty)$, and the induced metric then becomes
\begin{equation}
ds^2=d\rho^2+\mathcal{R}^2 \sin^2\left(\rho\over \mathcal{R}\right)\left[{1\over u^2}\left(-du^2+dy^2\right)\right] \ .
\end{equation}
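
As in the earlier cases, both the hyperboloid condition and this induced metric can be confirmed symbolically; a minimal sketch (assuming \texttt{sympy}; variable names are illustrative) is:
\begin{verbatim}
# Sketch: dS_4 slicing of dS_5 in conformal inflationary coordinates.
import sympy as sp

R, u = sp.symbols('R u', positive=True)
rho = sp.symbols('rho', real=True)
y = sp.symbols('y1 y2 y3', real=True)
y2 = sum(yi**2 for yi in y)
s = R*sp.sin(rho/R)

Y = [s*(1 - u**2 + y2)/(2*u),
     R*sp.cos(rho/R),
     s*(1 + u**2 - y2)/(2*u),
     s*y[0]/u, s*y[1]/u, s*y[2]/u]
eta6 = sp.diag(-1, 1, 1, 1, 1, 1)

constraint = sum(eta6[A, A]*Y[A]**2 for A in range(6))
print(sp.simplify(sp.cancel(sp.expand(constraint))))                 # prints R**2

coords = (rho, u) + y
J = sp.Matrix([[sp.diff(Y[A], c) for c in coords] for A in range(6)])
g = J.T*eta6*J
expected = sp.diag(1, -s**2/u**2, s**2/u**2, s**2/u**2, s**2/u**2)
print((g - expected).applyfunc(lambda e: sp.simplify(sp.cancel(sp.expand(e)))))  # zero matrix
\end{verbatim}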
The 15 Lorentz generators of $M_6$
are all tangent to the $dS_5$ hyperboloid, and become the 15 isometries of its $so(5,1)$ isometry algebra. Of these, 10 have no $\partial_\rho$ components and are parallel to the $dS_4$ foliation: these form the $so(4,1)$ isometry algebra of the $dS_4$ slices,
\begin{eqnarray}
-Y^2\bar{\partial}_0-Y^0\bar{\partial}_2&\rightarrow & d=u\partial_u+y^i\partial_i \ ,\\
-Y^{i+2}\bar{\partial}_0-Y^0\bar{\partial}_{i+2}&\rightarrow &j_i^+=uy_i\partial_u+\frac{1}{2}\left(-1+u^2-y^2\right)\partial_i+y_iy^j\partial_j, \ \ \ i=1,2,3,\\
-Y^{i+2}\bar{\partial}_2+Y^2\bar{\partial}_{i+2}&\rightarrow& j_i^-=uy_i\partial_u + \frac{1}{2}\left(1+u^2-y^2\right)\partial_i+y_iy^j\partial_j, \ \ \ i=1,2,3,\\
Y^{i+2}\bar{\partial}_{j+2}-Y^{j+2}\bar{\partial}_{i+2}&\rightarrow &j_{ij}=y_i\partial_j-y_j\partial_i,\ \ \ \ i,j=1,2,3 .
\end{eqnarray}
Taking the combinations
\begin{eqnarray}
p_i&=&j_i^+-j_i^-=-\partial_i \ ,\\
k_i&=& j_i^++j_i^-=2uy_i\partial_u+(u^2-y^2)\partial_i+2y_iy^j\partial_j \ ,
\end{eqnarray}
we then recognize $p_i$ and $j_{ij}$ as translations and rotations on the $y$ plane, while $d$ and $k_i$ fill out the $so(4,1)$ algebra.
The remaining 5 Killing vectors do have a $\partial_\rho$ component,
\begin{eqnarray}
-Y^1\bar{\partial}_0-Y^0\bar{\partial}_1&\rightarrow & K={\mathcal{R}\over 2u}\left(1-u^2+y^2\right)\partial_\rho+\frac{1}{2}\left(1+u^2+y^2\right)\cot\left(\rho\over \mathcal{R}\right)\partial_u+u\cot\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
-Y^2\bar{\partial}_1+Y^1\bar{\partial}_2&\rightarrow & K'={\mathcal{R}\over 2u}\left(1+u^2-y^2\right)\partial_\rho+\frac{1}{2}\left(1-u^2-y^2\right)\cot\left(\rho\over \mathcal{R}\right)\partial_u-u\cot\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
-Y^{i+2}\bar{\partial}_1+Y^1\bar{\partial}_{i+2}&\rightarrow & K_i={\mathcal{R}\over u}y_i\partial_\rho +y_i \cot\left(\rho\over \mathcal{R}\right)\partial_u+u\cot\left(\rho\over \mathcal{R}\right)\partial_i ,\ \ \ \ i=1,2,3.
\end{eqnarray}
Defining the following linear combinations,
\begin{eqnarray}
K_+&=&K+K'={\mathcal{R}\over u}\partial_\rho+ \cot\left(\rho\over \mathcal{R}\right)\partial_u \ ,\nonumber \\
K_-&=&K-K'={\mathcal{R}\over u}\left(-u^2+y^2\right)\partial_\rho+\left(u^2+y^2\right)\cot\left(\rho\over \mathcal{R}\right)\partial_u+2u\cot\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
K_i&=&{\mathcal{R}\over u}y_i\partial_\rho +y_i \cot\left(\rho\over \mathcal{R}\right)\partial_u+u\cot\left(\rho\over \mathcal{R}\right)\partial_i \ ,
\end{eqnarray}
and using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we obtain the transformation rules
\begin{eqnarray}
\delta_+\pi&=&{\mathcal{R}\over u}- \cot\left(\pi\over \mathcal{R}\right)\pi' \ ,\nonumber \\
\delta_-\pi&=&{\mathcal{R}\over u}\left(-u^2+y^2\right)-\left(u^2+y^2\right)\cot\left(\pi\over \mathcal{R}\right)\pi'-2u\cot\left(\pi\over \mathcal{R}\right)y^i\partial_i\pi \ ,\nonumber \\
\delta_i\pi&=&{\mathcal{R}\over u}y_i-y_i \cot\left(\pi\over \mathcal{R}\right)\pi'-u\cot\left(\pi\over \mathcal{R}\right)\partial_i\pi \ ,
\end{eqnarray}
where $\pi'\equiv\partial_u\pi$. The terms~(\ref{dSindSDBI}) are each invariant up to a total derivative under these transformations, and the symmetry breaking pattern is
\begin{equation}
so(5,1)\rightarrow so(4,1) \ .
\end{equation}
\subsection{A de Sitter brane in an anti-de Sitter bulk: $dS_4$ in $AdS_5$}
Using the description and notation for the $AdS_5$ embedding in section~\ref{MinAdSsection}, the following coordinates cover the intersection of the $AdS_5$ hyperboloid with the region $Y^0>{\cal R}$,
\begin{eqnarray}
\nonumber Y^0&=&{\cal R}\cosh \rho \ , \\ \nonumber
Y^1&=&{\cal R}\sinh \rho \sinh \tau \ , \\ \nonumber
Y^2&=&{\cal R}\sinh \rho \cosh \tau\cos\theta_1 \ , \\ \nonumber
Y^3&=&{\cal R}\sinh \rho \cosh \tau\sin\theta_1\cos\theta_2 \ , \\
Y^4&=&{\cal R}\sinh \rho \cosh \tau\sin\theta_1\sin\theta_2\cos\theta_3 \ , \\
Y^5&=&{\cal R}\sinh \rho \cosh \tau\sin\theta_1\sin\theta_2 \sin\theta_3 \ ,
\end{eqnarray}
where $\tau\in (-\infty,\infty)$, $\rho\in(0,\infty)$, and $\theta_i$, $i=1,2,3$ parametrize a $3$-sphere.
The metric reads
\begin{equation}
\label{adssynchronous2}
{ ds^2={\cal R}^2\left[d\rho^2+\sinh^2\rho\left(-d\tau^2+\cosh^2\tau\ d\Omega^2_{(3)}\right)\right]} \ .
\end{equation}
Scaling $\rho$, the metric becomes $ds^2=d\rho^2+\mathcal{R}^2\sinh^2\(\rho\over \mathcal{R}\) ds^2_{dS_{4}}$, where $ds^2_{dS_{4}}$ is the global metric on a four-dimensional de Sitter space $dS_4$ of unit radius. The foliation by $dS_{4}$ thus corresponds to $\rho={\rm constant}$ surfaces. These slices are given by intersecting the planes $Y^0={\rm constant}$ with the hyperboloid in the region $Y^0>{\cal R}$. (If we map $Y^0\rightarrow -Y^0$ then the coordinates cover the region $Y^0<-{\cal R}$ and the metric remains identical to~(\ref{adssynchronous2}); this is the maximum extent to which we can extend the foliation.)
Comparing this with~(\ref{metricform}), we obtain
\begin{equation}
{f(\pi)=\mathcal{R}\sinh (\pi/{\cal R}), \ \ \ g_{\mu\nu}=g_{\mu\nu}^{(dS_4)}} \ ,
\end{equation}
and the terms~(\ref{generalterms}) become (without integration by parts)
\begin{eqnarray}
{\cal L}_1&=& \sqrt{-g}{\mathcal{R}^4\over 32}\( 12\ \pi-8\mathcal{R} \sinh\(2\pi\over \mathcal{R}\) +\mathcal{R} \sinh\(4\pi\over \mathcal{R}\)\) \ ,\\
{\cal L}_2&=&-\sqrt{-g}{\mathcal{R}^4\over \gamma} \sinh^4\(\pi\over \mathcal{R}\) \ ,\\
{\cal L}_3&=&\sqrt{-g}\left[\gamma^2 [\pi^3]-\mathcal{R}^2[\Pi] \sinh^2\(\pi\over \mathcal{R}\)+\mathcal{R}^3(5-\gamma^2) \sinh^3\(\pi\over \mathcal{R}\) \cosh\(\pi\over \mathcal{R}\)\right] \ , \\
{\cal L}_4&=&\sqrt{-g}\Bigg[ {2\gamma^3\over \mathcal{R}^2}\([\Pi][\pi^3]-[\pi^4]\)\rm csch^2\left(\pi\over {\cal R}\right)-{\gamma}\([\Pi]^2-[\Pi^2]+{8\gamma^2\over \mathcal{R}}[\pi^3]\coth\left(\pi\over {\cal R}\right)\) \\
&&+\mathcal{R}\gamma(4-\gamma^2)[\Pi]\sinh\left(2\pi\over {\cal R}\right)+{3\mathcal{R}^2\over \gamma}\sinh^2\left(\pi\over {\cal R}\right) \(-2-3\gamma^2+\gamma^4+(2-3\gamma^2+\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\)\Bigg] \ , \nonumber \\
{\cal L}_5&=&\sqrt{-g}\Bigg[ {3\gamma^4\over \mathcal{R}^4}\(2([\pi^5]-[\Pi][\pi^4])+[\pi^3]([\Pi]^2-[\Pi^2])\)\rm csch^4\left(\pi\over {\cal R}\right) \nonumber\\
&&-{18\gamma^4\over \mathcal{R}^3}\([\Pi][\pi^3]-[\pi^4]\)\rm csch^2\left(\pi\over {\cal R}\right)\coth\left(\pi\over {\cal R}\right) \nonumber\\
&&-{\gamma^2\over \mathcal{R}^2}\rm csch^2\left(\pi\over {\cal R}\right)\([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]-{3\over 2}(3+10\gamma^2)[\pi^3]+{3\over 2}(3-10\gamma^2)[\pi^3]\cosh\left(2\pi\over {\cal R}\right) \) \nonumber \\
&&+{3\gamma^2\over \mathcal{R}}(3-\gamma^2)([\Pi]^2-[\Pi^2])\coth\left(\pi\over {\cal R}\right)+{3\over 2}[\Pi] \(-3-10\gamma^2+4\gamma^4+(3-10\gamma^2+4\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\) \nonumber \\
&& -{3\mathcal{R} \over 4} \(-15-11\gamma^2+6\gamma^4+(15-17\gamma^2+6\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\) \sinh\left(2\pi\over {\cal R}\right) \Bigg] \ ,
\label{dSinAdSDBI}
\end{eqnarray}
where the background metric and covariant derivatives are those of the unit-radius $4$d de Sitter space, and
\begin{equation}
\gamma={1\over \sqrt{1+{(\partial\pi)^2\over \mathcal{R}^2 \sinh^2\(\pi\over \mathcal{R}\)}}} \ .
\end{equation}
Given suitable combinations of these Lagrangians so that a constant field $\pi(x)=\pi_0={\rm constant}$ is a solution to the equations of motion, $f(\pi_0)= \mathcal{R} \sinh\(\pi_0\over \mathcal{R}\)$ sets the radius of the de Sitter brane.
We call these Type III de Sitter DBI Galileons (see Figure \ref{types}).
\subsubsection{Killing vectors and symmetries}
Once again we use conformal inflationary coordinates on the $dS_4$ slices. The embedding becomes,
\begin{eqnarray}
Y^0&=&\mathcal{R} \cosh\left(\rho\over \mathcal{R}\right) \ ,\\
Y^1&=& \mathcal{R} \sinh\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1-u^2+y^2\right) \ ,\\
Y^2&=&\mathcal{R} \sinh\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1+u^2-y^2\right) \ ,\\
Y^{i+2}&=&\mathcal{R} \sinh\left(\rho\over \mathcal{R}\right){y^i\over u},\ \ \ i=1,2,3 \ ,
\end{eqnarray}
where the coordinate ranges are $\rho\in (0,\infty)$, $u\in(0,\infty)$, $y^i\in(-\infty,\infty)$, and the induced metric is
\begin{equation}
ds^2=d\rho^2+\mathcal{R}^2 \sinh^2\left(\rho\over \mathcal{R}\right)\left[{1\over u^2}\left(-du^2+dy^2\right)\right] \ .
\end{equation}
The 15 Lorentz generators of $M_{4,2}$, $M_{AB}=Y_A\bar{\partial}_B-Y_B\bar{\partial}_A$, are all tangent to the $AdS_5$ hyperboloid, and become the 15 isometries of the $so(4,2)$ isometry algebra of $AdS_5$. Of these, 10 have no $\partial_\rho$ components and are parallel to the $dS_4$ foliation. These form the $so(4,1)$ isometry algebra of the $dS_4$ slices
\begin{eqnarray}
-Y^2\bar{\partial}_1-Y^1\bar{\partial}_2&\rightarrow & d=u\partial_u+y^i\partial_i \ ,\\
-Y^{i+2}\bar{\partial}_1-Y^1\bar{\partial}_{i+2}&\rightarrow &j_i^+=uy_i\partial_u+\frac{1}{2}\left(-1+u^2-y^2\right)\partial_i+y_iy^j\partial_j, \ \ \ i=1,2,3, \\
-Y^{i+2}\bar{\partial}_2+Y^2\bar{\partial}_{i+2}&\rightarrow& j_i^-=uy_i\partial_u + \frac{1}{2}\left(1+u^2-y^2\right)\partial_i+y_iy^j\partial_j, \ \ \ i=1,2,3,\\
Y^{i+2}\bar{\partial}_{j+2}-Y^{j+2}\bar{\partial}_{i+2}&\rightarrow &j_{ij}=y_i\partial_j-y_j\partial_i , \ \ \ \ i,j=1,2,3.
\end{eqnarray}
Taking the combinations
\begin{eqnarray}
p_i&=&j_i^+-j_i^-=-\partial_i \ ,\\
k_i&=& j_i^++j_i^-=2uy_i\partial_u+(u^2-y^2)\partial_i+2y_iy^j\partial_j \ ,
\end{eqnarray}
we recognize $p_i$ and $j_{ij}$ as translations and rotations on the $y$ plane, with $d$ and $k_i$ filling out the rest of the $so(4,1)$ algebra.
The remaining 5 Killing vectors do have a $\partial_\rho$ component,
\begin{eqnarray}
Y^1\bar{\partial}_0-Y^0\bar{\partial}_1&\rightarrow & K={\mathcal{R}\over 2u}\left(1-u^2+y^2\right)\partial_\rho+\frac{1}{2}\left(1+u^2+y^2\right)\coth\left(\rho\over \mathcal{R}\right)\partial_u+u\coth\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
Y^2\bar{\partial}_0+Y^0\bar{\partial}_2&\rightarrow & K'={\mathcal{R}\over 2u}\left(1+u^2-y^2\right)\partial_\rho+\frac{1}{2}\left(1-u^2-y^2\right)\coth\left(\rho\over \mathcal{R}\right)\partial_u-u\coth\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
Y^{i+2}\bar{\partial}_0+Y^0\bar{\partial}_{i+2}&\rightarrow & K_i={\mathcal{R}\over u}y_i\partial_\rho +y_i \coth\left(\rho\over \mathcal{R}\right)\partial_u+u\coth\left(\rho\over \mathcal{R}\right)\partial_i, \ \ \ i=1,2,3. \nonumber
\end{eqnarray}
Taking the following linear combinations
\begin{eqnarray}
K_+&=&K+K'={\mathcal{R}\over u}\partial_\rho+ \coth\left(\rho\over \mathcal{R}\right)\partial_u \ ,\nonumber \\
K_-&=&K-K'={\mathcal{R}\over u}\left(-u^2+y^2\right)\partial_\rho+\left(u^2+y^2\right)\coth\left(\rho\over \mathcal{R}\right)\partial_u+2u\coth\left(\rho\over \mathcal{R}\right)y^i\partial_i \ ,\nonumber \\
K_i&=&{\mathcal{R}\over u}y_i\partial_\rho +y_i \coth\left(\rho\over \mathcal{R}\right)\partial_u+u\coth\left(\rho\over \mathcal{R}\right)\partial_i \ ,
\end{eqnarray}
and using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we obtain the transformation rules
\begin{eqnarray}
\delta_+\pi&=&{\mathcal{R}\over u}- \coth\left(\pi\over \mathcal{R}\right)\pi' \ ,\nonumber \\
\delta_-\pi&=&{\mathcal{R}\over u}\left(-u^2+y^2\right)-\left(u^2+y^2\right)\coth\left(\pi\over \mathcal{R}\right)\pi'-2u\coth\left(\pi\over \mathcal{R}\right)y^i\partial_i\pi \ ,\nonumber \\
\delta_i\pi&=&{\mathcal{R}\over u}y_i-y_i \coth\left(\pi\over \mathcal{R}\right)\pi'-u\coth\left(\pi\over \mathcal{R}\right)\partial_i\pi \ ,
\end{eqnarray}
where $\pi'\equiv\partial_u\pi$.
The terms~(\ref{dSinAdSDBI}) are each invariant up to a total derivative under these transformations, and the symmetry breaking pattern is
\begin{equation}
so(4,2)\rightarrow so(4,1) \ .
\end{equation}
\subsection{An anti-de Sitter brane in an anti-de Sitter bulk: $AdS_4$ in $AdS_5$}
Using the description and notation for the $AdS_5$ embedding from section~\ref{MinAdSsection}, hyperbolic coordinates on $AdS_5$ are
\begin{eqnarray}
\nonumber Y^0&=&{\cal R}\cos \tau \cosh \rho\cosh\psi \ , \\ \nonumber
Y^1&=&{\cal R}\sin \tau \cosh \rho\cosh\psi \ , \\ \nonumber
Y^2&=&{\cal R}\sinh\rho \ , \\ \nonumber
Y^3&=&{\cal R}\cosh \rho\sinh\psi\cos\theta_1 \ , \\ \nonumber
Y^4&=&{\cal R}\cosh \rho\sinh\psi\sin\theta_1\cos\theta_2 \ , \\
Y^5&=&{\cal R}\cosh \rho\sinh\psi\sin\theta_1\sin\theta_2 \ ,
\end{eqnarray}
where $\tau\in(-\pi,\pi)$ (the universal cover is obtained by extending this to $\tau\in(-\infty,\infty)$), $\rho\in(-\infty,\infty)$, $\psi\in(0,\infty)$, and $\theta_1,\theta_2$ parametrize a $2$-sphere. These coordinates cover the entire $AdS_5$ hyperboloid, and after extending $\tau$, the whole of $AdS_5$.
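
That these coordinates lie on the $AdS_5$ hyperboloid can be checked with the same kind of symbolic sketch as before (assuming \texttt{sympy}):
\begin{verbatim}
# Sketch: the hyperbolic coordinates satisfy eta_AB Y^A Y^B = -R^2.
import sympy as sp

R = sp.symbols('R', positive=True)
tau, rho, psi, t1, t2 = sp.symbols('tau rho psi theta1 theta2', real=True)

Y = [R*sp.cos(tau)*sp.cosh(rho)*sp.cosh(psi),
     R*sp.sin(tau)*sp.cosh(rho)*sp.cosh(psi),
     R*sp.sinh(rho),
     R*sp.cosh(rho)*sp.sinh(psi)*sp.cos(t1),
     R*sp.cosh(rho)*sp.sinh(psi)*sp.sin(t1)*sp.cos(t2),
     R*sp.cosh(rho)*sp.sinh(psi)*sp.sin(t1)*sp.sin(t2)]
eta6 = sp.diag(-1, -1, 1, 1, 1, 1)

print(sp.simplify(sum(eta6[A, A]*Y[A]**2 for A in range(6))))   # prints -R**2
\end{verbatim}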
The metric reads
\begin{equation}
\label{adshyperbolic}
{ ds^2={\cal R}^2\left[d\rho^2+\cosh^2\rho\left(-\cosh^2\psi\ d\tau^2+d\psi^2+\sinh^2\psi\ d\Omega^2_{(2)}\right)\right]} \ ,
\end{equation}
and after scaling $\rho$, this becomes $ds^2=d\rho^2+\mathcal{R}^2\cosh^2\(\rho\over \mathcal{R}\) ds^2_{AdS_{4}},$ where $ds^2_{AdS_{4}}$ is the global metric on an anti-de Sitter space $AdS_4$ of unit radius. The foliation by $AdS_{4}$ thus corresponds to $\rho={\rm constant}$ surfaces, and these slices are given by intersecting the planes $Y^2={\rm constant}$ with the hyperboloid. This foliation covers the entire $AdS_5$ space.
Comparing this with~(\ref{metricform}), we obtain
\begin{equation}
{f(\pi)=\mathcal{R}\cosh (\pi/{\cal R}), \ \ \ g_{\mu\nu}=g_{\mu\nu}^{(AdS_4)}} \ ,
\end{equation}
and the terms~(\ref{generalterms}) become (without any integrations by parts)
\begin{eqnarray}
{\cal L}_1&=&\sqrt{-g}{\mathcal{R}^4\over 32}\( 12\ \pi+8\mathcal{R} \sinh\(2\pi\over \mathcal{R}\) +\mathcal{R} \sinh\(4\pi\over \mathcal{R}\)\) \ ,\\
{\cal L}_2&=&-\sqrt{-g}{\mathcal{R}^4\over \gamma} \cosh^4\(\pi\over \mathcal{R}\) \ ,\\
{\cal L}_3&=&\sqrt{-g}\left[\gamma^2 [\pi^3]-\mathcal{R}^2[\Pi] \cosh^2\(\pi\over \mathcal{R}\)+\mathcal{R}^3(5-\gamma^2) \cosh^3\(\pi\over \mathcal{R}\) \sinh\(\pi\over \mathcal{R}\)\right] \ , \\
{\cal L}_4&=&\sqrt{-g}\Bigg[ {2\gamma^3\over \mathcal{R}^2}\([\Pi][\pi^3]-[\pi^4]\)\rm sech^2\left(\pi\over {\cal R}\right)-{\gamma}\([\Pi]^2-[\Pi^2]+{8\gamma^2\over \mathcal{R}}[\pi^3]\tanh\left(\pi\over {\cal R}\right)\) \\
&&+\mathcal{R}\gamma(4-\gamma^2)[\Pi]\sinh\left(2\pi\over {\cal R}\right)+{3\mathcal{R}^2\over \gamma}\cosh^2\left(\pi\over {\cal R}\right) \(2+3\gamma^2-\gamma^4+(2-3\gamma^2+\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\)\Bigg] \ , \nonumber \\
{\cal L}_5&=&\sqrt{-g}\Bigg[ {3\gamma^4\over \mathcal{R}^4}\(2([\pi^5]-[\Pi][\pi^4])+[\pi^3]([\Pi]^2-[\Pi^2])\)\rm sech^4\left(\pi\over {\cal R}\right) \nonumber\\
&&-{18\gamma^4\over \mathcal{R}^3}\([\Pi][\pi^3]-[\pi^4]\)\rm sech^2\left(\pi\over {\cal R}\right)\tanh\left(\pi\over {\cal R}\right) \nonumber\\
&&-{\gamma^2\over \mathcal{R}^2}\rm sech^2\left(\pi\over {\cal R}\right)\([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]+{3\over 2}(3+10\gamma^2)[\pi^3]+{3\over 2}(3-10\gamma^2)[\pi^3]\cosh\left(2\pi\over {\cal R}\right) \) \nonumber \\
&&+{3\gamma^2\over \mathcal{R}}(3-\gamma^2)([\Pi]^2-[\Pi^2])\tanh\left(\pi\over {\cal R}\right)+{3\over 2}[\Pi] \(3+10\gamma^2-4\gamma^4+(3-10\gamma^2+4\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\) \nonumber \\
&& -{3\mathcal{R} \over 4} \(15+11\gamma^2-6\gamma^4+(15-17\gamma^2+6\gamma^4)\cosh\left(2\pi\over {\cal R}\right)\) \sinh\left(2\pi\over {\cal R}\right) \Bigg] \ ,
\label{AdSinAdSDBI}
\end{eqnarray}
where the background metric and covariant derivatives are those of a unit-radius $AdS_4$, and
\begin{equation}
\gamma={1\over \sqrt{1+{(\partial\pi)^2\over \mathcal{R}^2 \cosh^2\(\pi\over \mathcal{R}\)}}} \ .
\end{equation}
In evaluating~(\ref{AdSinAdSDBI}), we have used the fact that the scalar curvature and cosmological constant of the unit-radius $AdS_4$ are
$R=-12$ and $\Lambda=-3$ respectively, as well as the relations $R_{\mu\nu\alpha\beta}={R\over 12}\(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha}\)$, $R_{\mu\nu}={R\over 4}g_{\mu\nu}$ valid for a maximally symmetric space. Given suitable combinations of these Lagrangians so that a constant field $\pi(x)=\pi_0={\rm constant}$ is a solution to the equations of motion, $f(\pi_0)= \mathcal{R} \cosh\(\pi_0\over \mathcal{R}\)$ sets the radius of the anti-de Sitter brane. We call these anti-de Sitter DBI Galileons (see Figure \ref{types}).
\subsubsection{Killing vectors and symmetries}
We use Poincare coordinates $(u,x^0,x^1,x^2)$ on the $AdS_4$ slices. The embedding becomes,
\begin{eqnarray}
Y^0&=&\mathcal{R} \cosh\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1+u^2+x^2\right) \ ,\\
Y^1&=& \mathcal{R} \cosh\left(\rho\over \mathcal{R}\right){x^0\over u} \ ,\\
Y^2&=&\mathcal{R} \sinh\left(\rho\over \mathcal{R}\right) \ ,\\
Y^3&=& \mathcal{R} \cosh\left(\rho\over \mathcal{R}\right){1\over 2u}\left(1-u^2-x^2\right) \ ,\\
Y^{i+3}&=&\mathcal{R} \cosh\left(\rho\over \mathcal{R}\right){x^i\over u},\ \ \ i=1,2 \ .
\end{eqnarray}
Here $x^2\equiv \eta_{ij}x^ix^j$, where $\eta_{ij}={\rm diag}(-1,1,1)$ is the Minkowski 3-metric. The coordinate ranges are $\rho\in (0,\infty)$, $u\in(0,\infty)$ and $x^i\in(-\infty,\infty)$, and the induced metric is
\begin{equation}
ds^2=d\rho^2+\mathcal{R}^2 \cosh^2\left(\rho\over \mathcal{R}\right)\left[{1\over u^2}\left(du^2+\eta_{ij}dx^idx^j\right)\right] \ .
\end{equation}
The 15 Lorentz generators of $M_{4,2}$
are all tangent to the $AdS_5$ hyperboloid, and become the 15 isometries of the $so(4,2)$ isometry algebra of $AdS_5$. Of these, 10 have no $\partial_\rho$ components and are parallel to the $AdS_4$ foliation: these form the $so(3,2)$ isometry algebra of the $AdS_4$ slices,
\begin{eqnarray}
-Y^3\bar{\partial}_0-Y^0\bar{\partial}_3&\rightarrow & u\partial_u+x^i\partial_i \ ,\nonumber \\
-Y^1\bar{\partial}_0+Y^0\bar{\partial}_1&\rightarrow & u x^0\partial_u+\frac{1}{2}\left(1+u^2+x^2\right)\partial_0+x^0x^j\partial_j \ ,\nonumber \\
-Y^{i+3}\bar{\partial}_0-Y^0\bar{\partial}_{i+3}&\rightarrow & u x_i\partial_u-\frac{1}{2}\left(1+u^2+x^2\right)\partial_i+x_ix^j\partial_j,\ \ \ i=1,2\nonumber \\
-Y^3\bar{\partial}_1-Y^1\bar{\partial}_3&\rightarrow & u x^0\partial_u+\frac{1}{2}\left(-1+u^2+x^2\right)\partial_0+x^0x^j\partial_j \ ,\nonumber \\
-Y^{i+3}\bar{\partial}_3+Y^3\bar{\partial}_{i+3}&\rightarrow & u x_i\partial_u-\frac{1}{2}\left(-1+u^2+x^2\right)\partial_i+x_ix^j\partial_j,\ \ \ i=1,2 \nonumber \\
Y^{i+3}\bar{\partial}_1+Y^1\bar{\partial}_{i+3}&\rightarrow &x^i\partial_0+x^0\partial_i,\ \ \ \ i=1,2 \nonumber \\
Y^{5}\bar{\partial}_4+Y^4\bar{\partial}_{5}&\rightarrow & x^2\partial_1-x^1\partial_2 \ ,
\end{eqnarray}
where the sums are over $j=0,1,2$, and indices are raised and lowered with $\eta_{ij}$. These may be grouped as
\begin{eqnarray}
d&=&u\partial_u+x^i\partial_i \ ,\\
j_i^+&=&ux_i\partial_u-\frac{1}{2}\left(1+u^2+x^2\right)\partial_i+x_ix^j\partial_j,\ \ \ \ i=0,1,2\\
j_i^-&=&ux_i\partial_u - \frac{1}{2}\left(-1+u^2+x^2\right)\partial_i+x_ix^j\partial_j,\ \ \ \ i=0,1,2\\
j_{ij}&=&x_i\partial_j-x_j\partial_i, \ \ \ \ i,j=0,1,2 \ ,
\end{eqnarray}
and by taking the combinations
\begin{eqnarray}
p_i&=&j_i^+-j_i^-=-\partial_i \ ,\\
k_i&=& j_i^++j_i^-=2ux_i\partial_u-(u^2+x^2)\partial_i+2x_ix^j\partial_j \ ,
\end{eqnarray}
we recognize $p_i$ and $j_{ij}$ as translations and rotations on the $x$-space, with $d$ and $k_i$ filling out the rest of the $so(3,2)$ algebra.
The remaining 5 Killing vectors do have a $\partial_\rho$ component,
\begin{eqnarray}
Y^2\bar{\partial}_0+Y^0\bar{\partial}_2&\rightarrow & K={\mathcal{R}\over 2u}\left(1+u^2+x^2\right)\partial_\rho+\frac{1}{2}\left(1-u^2+x^2\right)\tanh\left(\rho\over \mathcal{R}\right)\partial_u-u\tanh\left(\rho\over \mathcal{R}\right)x^i\partial_i \ ,\nonumber \\
Y^3\bar{\partial}_2-Y^2\bar{\partial}_3&\rightarrow & K'={\mathcal{R}\over 2u}\left(1-u^2-x^2\right)\partial_\rho+\frac{1}{2}\left(1+u^2-x^2\right)\tanh\left(\rho\over \mathcal{R}\right)\partial_u+u\tanh\left(\rho\over \mathcal{R}\right)x^i\partial_i \ ,\nonumber \\
Y^2\bar{\partial}_1+Y^1\bar{\partial}_2&\rightarrow & {\mathcal{R}\over u}x^0\partial_\rho+x^0\tanh\left(\rho\over \mathcal{R}\right)\partial_u+u\tanh\left(\rho\over \mathcal{R}\right)\partial_0 \ ,\nonumber \\
Y^{i+3}\bar{\partial}_2-Y^2\bar{\partial}_{i+3}&\rightarrow & {\mathcal{R}\over u}x^i\partial_\rho+x^i\tanh\left(\rho\over \mathcal{R}\right)\partial_u-u\tanh\left(\rho\over \mathcal{R}\right)\partial_i,\ \ \ i=1,2 \ ,\nonumber
\end{eqnarray}
which may be combined to form
\begin{eqnarray}
K_+&=&K+K'={\mathcal{R}\over u}\partial_\rho+ \tanh\left(\rho\over \mathcal{R}\right)\partial_u \ ,\nonumber \\
K_-&=&K-K'={\mathcal{R}\over u}\left(u^2+x^2\right)\partial_\rho+\left(-u^2+x^2\right)\tanh\left(\rho\over \mathcal{R}\right)\partial_u-2u\tanh\left(\rho\over \mathcal{R}\right)x^i\partial_i \ ,\nonumber \\
K_i&=&{\mathcal{R}\over u}x_i\partial_\rho +x_i \tanh\left(\rho\over \mathcal{R}\right)\partial_u-u\tanh\left(\rho\over \mathcal{R}\right)\partial_i,\ \ \ i=0,1,2.
\end{eqnarray}
Using the relation $\delta_K\pi=K^5(x)-K^\mu(x,\pi)\partial_\mu\pi$ from~(\ref{specialcasesym}), we obtain the transformation rules
\begin{eqnarray}
\delta_+\pi&=&{\mathcal{R}\over u}- \tanh\left(\pi\over \mathcal{R}\right)\pi' \ , \\
\delta_-\pi&=&{\mathcal{R}\over u}\left(u^2+x^2\right)-\left(-u^2+x^2\right)\tanh\left(\pi\over \mathcal{R}\right)\pi'+2u\tanh\left(\pi\over \mathcal{R}\right)x^i\partial_i\pi \ , \\
\delta_i\pi&=&{\mathcal{R}\over u}x_i-x_i \tanh\left(\pi\over \mathcal{R}\right)\pi'+u\tanh\left(\pi\over \mathcal{R}\right)\partial_i\pi,\ \ \ i=0,1,2 \ ,
\label{AdSinAdStrans}
\end{eqnarray}
where $\pi'\equiv\partial_u\pi$.
The terms~(\ref{AdSinAdSDBI}) are each invariant up to a total derivative under these transformations, and the symmetry breaking pattern is
\begin{equation}
so(4,2)\rightarrow so(3,2) \ .
\end{equation}
\section{Small field limits: the analogues of Galileons}
\label{sec:smallfieldlimits}
The Lagrangians we have uncovered have a fairly complicated, non-polynomial form. We know in the Minkowski case that
the special case of the Galileon symmetry arises in a particular limit \cite{deRham:2010eu}, and that this limit greatly simplifies the actions. In this section, we consider similar limits for the general theories we have constructed.
Consider a Lagrangian ${\cal L}$ that may be expanded in some formal series in a parameter $\lambda$ as
\begin{equation}
{\cal L}=\lambda^n\left({\cal L}_{(0)}+\lambda {\cal L}_{(1)}+\lambda^2{\cal L}_{(2)}+\cdots\right) \ ,
\end{equation}
where $n$ is an integer, indicating that the series need not start at order $\lambda^0$. Suppose ${\cal L}$ possesses a symmetry that may also be expanded in such a series
\begin{equation}
\delta\pi=\lambda^m\left(\delta_{(0)}\pi+\lambda \delta_{(1)}\pi+\lambda^2 \delta_{(2)}\pi+\cdots\right) \ ,
\end{equation}
where $m$ is another integer, again indicating that this series also need not start at order $\lambda^0$. The statement that $\delta\pi$ is a symmetry of ${\cal L}$ is
\begin{equation}
\label{invarianceeq} {\delta^{EL}{\cal L}\over \delta \pi}\delta\pi \simeq 0 \ ,
\end{equation}
where ${\delta^{EL}{\cal L}\over \delta \pi}$ is the Euler-Lagrange derivative and $\simeq$ indicates equality up to a total derivative.
Expanding~(\ref{invarianceeq}) in powers of $\lambda$ yields a series of equations
\begin{eqnarray}
&& {\delta^{EL}{\cal L}_{(0)}\over \delta \pi}\delta_{(0)} \pi\simeq 0 \ , \\
&& {\delta^{EL}{\cal L}_{(1)}\over \delta \pi}\delta_{(0)}\pi+ {\delta^{EL}{\cal L}_{(0)}\over \delta \pi}\delta_{(1)}\pi \simeq 0 \ , \nonumber\\
&&\vdots
\end{eqnarray}
with the first of these indicating that $\delta_{(0)}$ is a symmetry of ${\cal L}_{(0)}$. Our goal in this section is to seek expansions of this form for the various examples we have constructed, in order to find simpler, but still non-trivial, theories with the same number of symmetries.
The expansion we choose is one in powers of the field $\pi$ around some background. We expand $\pi$ around a constant background value $\pi_0$ and let $\lambda$ count powers of the deviation from this background; i.e. we make the replacement
\begin{equation}
\pi\rightarrow \pi_0+\lambda\pi \ ,
\end{equation}
and then expand the Lagrangians and symmetries in powers of $\lambda$.
Applying this small field limit to the DBI Galileons~(\ref{DBIGalileonterms}) gives rise to the original Galileons first studied in~\cite{Nicolis:2008in}. These are, up to total derivatives,
\begin{align}
\mathcal{L}_{1}&=\pi ,\nonumber \\
\mathcal{L}_{2}&=-{1\over 2}(\partial\pi)^2 \ , \nonumber \\
\mathcal{L}_{3}&=-{1\over 2}(\partial\pi)^2[\Pi] \ ,\nonumber \\
\mathcal{L}_{4}& =-{1\over 2}(\partial\pi)^2\left([\Pi]^2-[\Pi^2]\right) \ ,\nonumber \\
\mathcal{L}_{5}& =-{1\over 2}(\partial\pi)^2\left([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\right)\ .
\label{normalGalileons}
\end{align}
Note that lower order terms in the expansion are total derivatives. For example, in the expansion of ${\cal L}_4$ there exists an ${\cal O}\(\pi^2\)$ piece, but this is a total derivative in Minkowski space, and the first non-trivial term is the ${\cal O}\(\pi^4\)$ piece shown above.
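
The pattern is easiest to see for ${\cal L}_2$ of~(\ref{DBIGalileonterms}): after the shift $\pi\rightarrow\pi_0+\lambda\pi$ the quantity $(\partial\pi)^2$ acquires a factor of $\lambda^2$, and the DBI square root expands into a constant, the Galileon kinetic term, and corrections suppressed by further powers of $\lambda$. A one-line sketch (assuming \texttt{sympy}; the symbol \texttt{X} simply stands in for $(\partial\pi)^2$ of the fluctuation) is:
\begin{verbatim}
# Sketch: small-field expansion of the DBI term L_2 = -sqrt(1 + (d pi)^2), f = 1.
import sympy as sp

lam, X = sp.symbols('lambda X')        # X stands in for (partial pi)^2 of the fluctuation
print(sp.series(-sp.sqrt(1 + lam**2*X), lam, 0, 6))
# -1 - lambda**2*X/2 + lambda**4*X**2/8 + O(lambda**6)
\end{verbatim}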
Applying the small field limit to the transformation laws~(\ref{DBIGalileontrans}) yields
\begin{eqnarray}
&&\delta\pi=1 \ , \nonumber \\
&& \delta_\mu \pi=x_\mu \ ,
\label{normalGalileontrans}
\end{eqnarray}
under which the terms~(\ref{normalGalileons}) are invariant. This is the original Galilean symmetry considered in \cite{Nicolis:2008in}. The small field limit can also be applied to the case of a flat brane embedded in an $AdS_5$ bulk~(\ref{conformalDBIGalileonterms}), but the resulting actions and transformation laws are identical to those of~(\ref{normalGalileons}),~(\ref{normalGalileontrans}).
Applying this technique to a de Sitter brane embedded in a flat bulk, we expand~(\ref{dSinMDBI}) around some constant background. The following linear combinations allow us to successively cancel the lowest order terms in $\lambda$ up to total derivatives on $dS_4$, yielding terms which start at order $\lambda$, $\lambda^2$, etc.
\begin{eqnarray}
\bar{\cal L}_1&=&{1\over \pi_0^4}{\cal L}_1=\sqrt{-g}\pi \ , \nonumber\\
\bar{\cal L}_2&=&{1\over \pi_0^2}\left( {\cal L}_2+{4\over\pi_0}{\cal L}_1\right)=-\frac{1}{2}\sqrt{-g} \left((\partial\pi)^2-4 \pi^2\right) \ ,\nonumber \\
\bar{\cal L}_3&=& {\cal L}_3+{6\over\pi_0}{\cal L}_2+{12\over\pi_0^2}{\cal L}_1=\sqrt{-g}\left(-{1\over 2}(\partial\pi)^2[\Pi]-3 (\partial\pi)^2\pi+4\pi^3\right) \ ,\nonumber\\
\bar{\cal L}_4&=& \pi_0^2\left( {\cal L}_4+{6\over\pi_0}{\cal L}_3+{18\over\pi_0^2}{\cal L}_2+{24\over\pi_0^3}{\cal L}_1\right) \nonumber \\
&=&\sqrt{-g}\left[-\frac{1}{2}(\partial\pi)^2\left([\Pi]^2-[\Pi^2]+\frac{1}{2}(\partial\pi)^2+6\pi[\Pi]+18\pi^2\right)+6\pi^4\right] \ , \nonumber \\
\bar{\cal L}_5&=& \pi_0^4\left( {\cal L}_5+{4\over\pi_0}{\cal L}_4+{12\over\pi_0^2}{\cal L}_3+{24\over\pi_0^3}{\cal L}_2+{24\over\pi_0^4}{\cal L}_1\right) \nonumber \\
&=& \sqrt{-g}\left[-\frac{1}{2}\left((\partial\pi)^2+{1\over 5}\pi^2\right)\left([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\right)\right. \nonumber \\
&&\left.-{12\over 5}\pi(\partial\pi)^2\left([\Pi]^2-[\Pi^2]+{27\over 12}[\Pi]\pi+5\pi^2\right)+{24\over 5}\pi^5\right] \ .
\end{eqnarray}
Scaling the coordinates to $(\hat u,\hat y^i)\equiv(L u, Ly^i)$ so that they carry dimensions of length, the $dS_4$ curvature becomes $R={12\over L^2}$, and canonically normalizing the field to $\hat\pi={1\over L^2}\pi$, we then obtain
\begin{eqnarray}
\hat{\cal L}_1&=&\sqrt{-g}\hat\pi \ , \nonumber\\
\hat{\cal L}_2&=&-\frac{1}{2}\sqrt{-g} \left((\partial\hat\pi)^2-{4\over L^2}\hat \pi^2\right) \ ,\nonumber \\
\hat{\cal L}_3&=& \sqrt{-g}\left(-{1\over 2}(\partial\hat\pi)^2[\hat\Pi]-{3\over L^2} (\partial\hat\pi)^2\hat\pi+{4\over L^4}\hat\pi^3\right) \ ,\nonumber\\
\hat{\cal L}_4&=&\sqrt{-g}\left[-\frac{1}{2}(\partial\hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]+{1\over 2L^2}(\partial\hat\pi)^2+{6\over L^2}\hat\pi[\hat\Pi]+{18\over L^4}\hat\pi^2\right)+{6\over L^6}\hat\pi^4\right] \ , \nonumber \\
\hat{\cal L}_5&=& \sqrt{-g}\left[-\frac{1}{2}\left((\partial\hat\pi)^2+{1\over 5L^2} \hat\pi^2\right)\left([\hat\Pi]^3-3[\hat\Pi][\hat\Pi^2]+2[\hat\Pi^3]\right)\right. \nonumber \\
&&\left.-{12\over 5L^2} \hat\pi(\partial \hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]+{27\over 12L^2}[\hat\Pi] \hat\pi+{5\over L^4} \hat\pi^2\right)+{24\over 5L^8} \hat\pi^5\right] \ ,
\label{dsGalileonsscaled}
\end{eqnarray}
where $\hat{\cal L}_n={1\over L^{4n+2}}\bar{\cal L}_n$.
These expressions are invariant under the lowest order symmetry transformations obtained by taking the small field limit of~(\ref{dSinMtrans}),
\begin{eqnarray}
\delta_{+}\hat\pi&=&{1\over u}\left(u^2-y^2\right) \ ,\nonumber\\
\delta_{-} \hat\pi&=&-{1\over u} \ ,\nonumber\\
\label{dSGalileontrans}
\delta_{i} \hat\pi &=& {y_i\over u} \ .
\end{eqnarray}
The terms~(\ref{dsGalileonsscaled}) are Galileons which naturally live in de Sitter space, and become the original Galileons in the limit where the $dS_4$ radius goes to infinity. They have the same number of nonlinear shift-like symmetries as the original flat space Galileons, despite the fact that they live on a curved space. As such, we anticipate them being naturally suited to models of inflation and dark energy.
Another fascinating new feature that is not shared by the original Galileons is the existence of a potential. In particular, the quadratic term $\hat{\cal L}_2$ comes with a mass of order the inverse $4$d de Sitter radius. The symmetries (\ref{dSGalileontrans}) fix the value of the mass (in fact, each of the symmetries in (\ref{dSGalileontrans}) is alone sufficient to fix the mass). If the coefficient of $\hat{\cal L}_2$ is chosen to be positive, so that the scalar field is not a ghost, then this mass is tachyonic. However, this instability is not necessarily worrisome because its timescale is of order the de Sitter time. Furthermore, this small mass should not be renormalized, because its value is protected by symmetry. The higher terms also come with cubic, quartic, and quintic terms in the potential, with values tied to the kinetic structure by the symmetries.
The small field limit may also be applied to the examples of a de Sitter brane embedded in either a de Sitter~(\ref{dSindSDBI}) or anti-de Sitter~(\ref{dSinAdSDBI}) bulk. The resulting actions and transformation laws are identical to those of~(\ref{dsGalileonsscaled}) and~(\ref{dSGalileontrans}).
Finally, we apply the small field expansion to the case of an anti-de Sitter brane embedded in an anti-de Sitter bulk, by expanding the terms~(\ref{AdSinAdSDBI}) around a constant background $\pi_0$. In a similar manner to the previous case, the following linear combinations yield terms which start at order $\lambda$, $\lambda^2$, etc. up to total derivatives.
\begin{eqnarray}
\bar{\cal L}_1&=&{1\over L^4}{\cal L}_1=\sqrt{-g}\pi \ ,\nonumber \\
\bar{\cal L}_2&=&{1\over L^2}\left[ {\cal L}_2+{4\over\mathcal{R}}\tanh\left(\pi_0\over \mathcal{R}\right){\cal L}_1\right]=-\frac{1}{2}\sqrt{-g}\left((\partial\pi)^2+4 \pi^2\right) \ ,\nonumber \\
\bar{\cal L}_3&=& {\cal L}_3+{6\over\mathcal{R}}\tanh\left(\pi_0\over \mathcal{R}\right){\cal L}_2+{4\over \mathcal{R}^2}\left(2-3\ \text{sech}^2\left(\frac{\pi_0}{\mathcal{R}}\right)\right){\cal L}_1=\sqrt{-g}\left(-{1\over 2}(\partial\pi)^2[\Pi]+3 (\partial\pi)^2\pi+4\pi^3\right) \ ,\nonumber \\
\bar{\cal L}_4&=&L^2\left[ {\cal L}_4+ {6\over\mathcal{R}}\tanh\left(\pi_0\over \mathcal{R}\right){\cal L}_3+ {6\over \mathcal{R}^2}\left(4-3\ \text{sech}^2\left(\frac{\pi_0}{\mathcal{R}}\right)\right){\cal L}_2-{24\over \mathcal{R}^3} \text{sech}^2\left(\frac{\pi_0}{\mathcal{R}}\right)\tanh\left(\pi_0\over \mathcal{R}\right){\cal L}_1\right]\nonumber \\
&=& \sqrt{-g}\left[-\frac{1}{2}(\partial\pi)^2\left([\Pi]^2-[\Pi^2]-\frac{1}{2}(\partial\pi)^2-6\pi[\Pi]+18\pi^2\right)-6\pi^4\right] \ ,\nonumber \\
\bar{\cal L}_5&=& L^4\left[ {\cal L}_5+ {4\over\mathcal{R}}\tanh\left(\pi_0\over \mathcal{R}\right){\cal L}_4+ {3\over \mathcal{R}^2}\left(5-4\ \text{sech}^2\left(\frac{\pi_0}{\mathcal{R}}\right)\right){\cal L}_3\right. \nonumber\\
&& \left.+{12\over \mathcal{R}^3} \text{sech}^3\left(\frac{\pi_0}{\mathcal{R}}\right)\left(\sinh\left(3\pi_0\over \mathcal{R}\right)-\sinh\left(\pi_0\over \mathcal{R}\right)\right){\cal L}_2+{24\over \mathcal{R}^4} \text{sech}^4\left(\frac{\pi_0}{\mathcal{R}}\right){\cal L}_1\right]\nonumber \\
&=& \sqrt{-g}\left[-\frac{1}{2}\left((\partial\pi)^2-{1\over 5}\pi^2\right)\left([\Pi]^3-3[\Pi][\Pi^2]+2[\Pi^3]\right)\right. \nonumber \\
&&\left.+{12\over 5}\pi(\partial\pi)^2\left([\Pi]^2-[\Pi^2]-{27\over 12}[\Pi]\pi+5\pi^2\right)+{24\over 5}\pi^5\right] \ ,
\end{eqnarray}
where $L=\mathcal{R}\cosh^4\left(\pi_0\over \mathcal{R}\right)$ is the $AdS_{3,1}$ radius.
Scaling the coordinates to $(\hat u,\hat x^i)\equiv(L u, Lx^i)$ so that they carry dimensions of length, the $AdS_4$ curvature becomes $R=-{12\over L^2}$, and canonically normalizing the field to $\hat\pi={1\over L^2}\pi$, we then obtain
\begin{eqnarray}
\hat{\cal L}_1&=&\sqrt{-g}\hat\pi \ , \nonumber\\
\hat{\cal L}_2&=&-\frac{1}{2}\sqrt{-g} \left((\partial\hat\pi)^2+{4\over L^2}\hat \pi^2\right) \ ,\nonumber \\
\hat{\cal L}_3&=& \sqrt{-g}\left(-{1\over 2}(\partial\hat\pi)^2[\hat\Pi]+{3\over L^2} (\partial\hat\pi)^2\hat\pi+{4\over L^4}\hat\pi^3\right) \ ,\nonumber\\
\hat{\cal L}_4&=&\sqrt{-g}\left[-\frac{1}{2}(\partial\hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]-{1\over 2L^2}(\partial\hat\pi)^2-{6\over L^2}\hat\pi[\hat\Pi]+{18\over L^4}\hat\pi^2\right)-{6\over L^6}\hat\pi^4\right] \ , \nonumber \\
\hat{\cal L}_5&=& \sqrt{-g}\left[-\frac{1}{2}\left((\partial\hat\pi)^2-{1\over 5L^2} \hat\pi^2\right)\left([\hat\Pi]^3-3[\hat\Pi][\hat\Pi^2]+2[\hat\Pi^3]\right)\right. \nonumber \\
&&\left.+{12\over 5L^2} \hat\pi(\partial \hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]-{27\over 12L^2}[\hat\Pi] \hat\pi+{5\over L^4} \hat\pi^2\right)+{24\over 5L^8} \hat\pi^5\right] \ ,
\label{AdSGalileonsscaled}
\end{eqnarray}
where $\hat{\cal L}_n={1\over L^{4n+2}}\bar{\cal L}_n$.
These terms are invariant under the lowest order symmetry transformations obtained by taking the small field limit of~(\ref{AdSinAdStrans})
\begin{eqnarray}
\delta_{+(0)}\hat\pi&=&{\mathcal{R}\over u} \ ,\nonumber \\
\delta_{-(0)}\hat\pi&=&{\mathcal{R}\over u}\left(u^2+x^2\right) \ ,\nonumber \\
\delta_{i(0)}\hat\pi&=&{\mathcal{R}\over u}x_i,\ \ \ i=0,1,2 \ .
\end{eqnarray}
These are Galileons that live on anti-de Sitter space. In this case, the quadratic term comes with a non-tachyonic mass of order the inverse $AdS_4$ radius.
While we have focused on the construction of new effective field theories through the small field expansion of embedded brane models, it is important to note that there may well exist other expansions that lead to different theories in the limit. For the example of a flat brane embedded in an anti-de Sitter bulk (\ref{conformalDBIGalileonterms}), the theory admits an expansion in powers of derivatives. Up to total derivatives, the derivative expansion yields
\begin{eqnarray}
\bar{\cal L}_1&=&{1\over \mathcal{R}}{\cal L}_1=-{1\over 4}e^{-4\hat\pi} \ , \nonumber\\
\bar{\cal L}_2&=&{1\over \mathcal{R}^2}\left( {\cal L}_2-{4\over \mathcal{R}}{\cal L}_1\right)=-\frac{1}{2} e^{-2\hat\pi}(\partial\hat\pi)^2 \ ,\nonumber \\
\bar{\cal L}_3&=&{1\over \mathcal{R}^3}\left( {\cal L}_3-{6\over \mathcal{R}}{\cal L}_2+{8\over \mathcal{R}^2}{\cal L}_1\right)=-\frac{1}{2}(\partial \hat{\pi})^2 \Box \hat{\pi}+\frac{1}{4} (\partial \hat{\pi})^4 \ ,\nonumber\\
\bar{\cal L}_4&=& {1\over \mathcal{R}^4}\left( {\cal L}_4-{6\over \mathcal{R}}{\cal L}_3+{24\over \mathcal{R}^2}{\cal L}_2\right) \nonumber \\
&=&- \frac{1}{2}e^{2\hat \pi}(\partial \hat \pi)^2
\([\hat \Pi]^2-[\hat \Pi^2]+{2\over 5}((\partial \hat \pi)^2 \Box \hat \pi-[\hat \pi^3])+{3\over 10}(\partial \hat \pi)^4\) \ , \nonumber \\
\bar{\cal L}_5&=& {1\over \mathcal{R}^5}\left( {\cal L}_5-{4\over \mathcal{R}}{\cal L}_4+{15\over \mathcal{R}^2}{\cal L}_3-{48\over \mathcal{R}^3}{\cal L}_2\right) \nonumber \\
&=& -\frac{1}{2} e^{4\hat \pi} (\partial \hat \pi)^2
\Big[[\hat \Pi]^3-3[\hat \Pi][\hat \Pi^2]+2[\hat \Pi^3]+3(\partial \hat \pi)^2([\hat \Pi]^2-[\hat \Pi^2])\nonumber\\
&&+\frac{30}{7}(\partial \hat \pi)^2((\partial \hat \pi)^2[\hat \Pi]-[\hat \pi^3])-\frac{3}{28}(\partial \hat \pi)^6\Big] \ ,\label{DBIgalderivexpan}
\end{eqnarray}
where $\hat\pi\equiv\pi/\mathcal{R}$. These are the conformal Galileons~\cite{Nicolis:2008in,deRham:2010eu,Khoury:2011da}. Their transformation laws come from applying the derivative expansion to the transformation laws (\ref{MinAdStrans}),
\begin{eqnarray}
\delta\hat\pi&=&1-x^\mu\partial_\mu\hat\pi,\nonumber\\
\delta_\mu\hat\pi&=&2x_\mu+x^2\partial_\mu\hat\pi-2x_\mu x^\nu\partial_\nu\hat\pi \ . \label{MinAdStranslim}
\end{eqnarray}
In taking the limit in powers of derivatives, we must remember that the explicit factors of the coordinates in the transformation laws are assigned a power of inverse derivatives. The terms (\ref{DBIgalderivexpan}) are each invariant up to a total derivative under (\ref{MinAdStranslim}). As mentioned in \cite{deRham:2010eu}, it is remarkable that this limit does not alter the commutation relations of the symmetries, so that the algebra remains $so(4,2)$.
The derivative expansion can also be applied to the DBI Galileons (\ref{DBIGalileonterms}). The result is identical to the small field limit, since the powers of $\pi$ and powers of $\partial$ within each limiting Lagrangian are identical.
A derivative expansion does not, however, seem applicable in general. To see the problem, consider attempting to construct a term of fourth order in derivatives from the general Lagrangians in (\ref{generalterms}). It is necessary to find a constant $A$ such that the two-derivative part of the expression ${\cal L}_3+A{\cal L}_2$ is a total derivative. The two-derivative part reads $\sqrt{-g}\(3ff'-{A\over 2}f^2\)(\partial\pi)^2$, up to a total derivative, and for this to vanish we must have $f\propto e^{A\pi/6}$. The only cases of ours that conform to this are the conformal DBI Galileons ($A\not= 0$) and the ordinary DBI Galileons ($A=0$).
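The exponential form of $f$ follows from a one-line separable ODE; a quick symbolic check (purely illustrative) reads
\begin{verbatim}
import sympy as sp

p, A = sp.symbols('pi A')
f = sp.Function('f')
# require the two-derivative coefficient 3 f f' - (A/2) f^2 to vanish
print(sp.dsolve(sp.Eq(3*f(p)*f(p).diff(p) - sp.Rational(1, 2)*A*f(p)**2, 0), f(p)))
# -> f(pi) = C1*exp(A*pi/6)
\end{verbatim}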
\subsection{Symmetry breaking and ghosts}
By writing the actions of the previous section in terms of the scalar curvature, $R={12\over L^2}$ for $dS_4$, $R=-{12\over L^2}$ for $AdS_4$, and $R=0$ for $M_4$, it is possible to combine the $dS_4$ Galileons (\ref{dsGalileonsscaled}), the $AdS_4$ Galileons (\ref{AdSGalileonsscaled}) and the flat space Galileons (\ref{normalGalileons}) into the single set of expressions
\begin{eqnarray}
\hat{\cal L}_1&=&\sqrt{-g}\hat\pi \ , \nonumber\\
\hat{\cal L}_2&=&-\frac{1}{2}\sqrt{-g} \left((\partial\hat\pi)^2-{R\over 3}\hat \pi^2\right) \ ,\nonumber \\
\hat{\cal L}_3&=& \sqrt{-g}\left(-{1\over 2}(\partial\hat\pi)^2[\hat\Pi]-{R\over 4} (\partial\hat\pi)^2\hat\pi+{R^2\over 36}\hat\pi^3\right) \ ,\nonumber\\
\hat{\cal L}_4&=&\sqrt{-g}\left[-\frac{1}{2}(\partial\hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]+{R\over 24}(\partial\hat\pi)^2+{R\over 2}\hat\pi[\hat\Pi]+{R^2\over 8}\hat\pi^2\right)+{R^3\over 288}\hat\pi^4\right] \ , \nonumber \\
\hat{\cal L}_5&=& \sqrt{-g}\left[-\frac{1}{2}\left((\partial\hat\pi)^2+{R\over 60} \hat\pi^2\right)\left([\hat\Pi]^3-3[\hat\Pi][\hat\Pi^2]+2[\hat\Pi^3]\right)\right. \nonumber \\
&&\left.-{R\over 5} \hat\pi(\partial \hat\pi)^2\left([\hat\Pi]^2-[\hat\Pi^2]+{3R\over 16}[\hat\Pi] \hat\pi+{5R^2\over 144} \hat\pi^2\right)+{R^4\over 4320} \hat\pi^5\right] \ .
\label{singlesetGalileons}
\end{eqnarray}
Focusing on $\hat{\cal L}_2$, we note that the non-linear symmetries fix the sign of the mass term relative to that of the kinetic term. Therefore, in de Sitter space, where $R$ is positive, the scalar is either a tachyon or a ghost, depending on the overall sign of $\hat{\cal L}_2$. In $AdS$ on the other hand, where $R<0$, the scalar can be stable and ghost free if the sign of $\hat{\cal L}_2$ is chosen to be positive\footnote{A scalar in $AdS$ can tolerate a slightly negative mass squared without instability. Any mass squared larger than the Breitenlohner-Freedman bound $m^2\geq -{9\over 4L^2}={3\over 16}R$ is stable~\cite{Breitenlohner:1982bm}. However, we cannot make use of this in any way, since the $AdS$ scalar is ghostlike whenever its mass squared is negative.}.
The presence of a tachyon suggests spontaneous symmetry breaking, as there may be higher order terms in the potential which stabilize it. In this section, we explore the possibility of using the tachyon of the de Sitter Galileons to induce spontaneous symmetry breaking. More specifically, consider imposing a $Z_2$ symmetry $\pi\rightarrow -\pi$, which forbids the odd terms $\hat{\cal L}_3$ and $\hat{\cal L}_5$.\footnote{This is interesting in its own right. Imposing this symmetry on the original Galileons gives an interacting scalar field theory which in suitable regimes has only one possible interaction term $\hat{\cal L}_4$, which furthermore is not renormalized. This is the co-dimension one version of introducing an internal $so(N)$ symmetry in a theory with a multiplet of $N$ Galileons, which also yields a single possible interaction term~\cite{Hinterbichler:2010xn}.} In the $dS$ case and $AdS$ case respectively, a symmetry breaking potential can be achieved by choosing
\begin{eqnarray}
\hat{\cal L}_2-a \hat{\cal L}_4,\ \ \ dS \ ,\\
-\hat{\cal L}_2+a \hat{\cal L}_4.\ \ \ AdS \ ,
\end{eqnarray}
with coupling constant $a>0$. In both cases, the potential is
\begin{equation}
V(\pi)={ |R|\over 288}\(-48\pi^2+aR^2\pi^4\) \ .
\end{equation}
This has a $Z_2$ preserving vacuum at $\pi=0$ and $Z_2$ breaking vacua at $\pi=\pm\sqrt {24\over a}{1\over |R|}$.
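As a quick symbolic sanity check of these statements (illustrative only; here the symbol $R$ stands for $|R|$),
\begin{verbatim}
import sympy as sp

p, a, R = sp.symbols('pi a R', positive=True)    # R stands for |R|
V = (R/288) * (-48*p**2 + a*R**2*p**4)
p0 = sp.sqrt(24/a)/R                             # claimed symmetry-breaking vacuum
print(sp.simplify(sp.diff(V, p).subs(p, p0)))    # -> 0       (critical point)
print(sp.simplify(sp.diff(V, p, 2).subs(p, p0))) # -> 2*R/3   (positive: a minimum)
\end{verbatim}
confirming that the $Z_2$ breaking points are genuine minima of the potential.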
None of these vacua alter any of the Galilean symmetries of these models. Thus, expanding around one of the minima (the positive one, say), we obtain a Lagrangian which is also a combination of the terms~(\ref{singlesetGalileons}), with coefficients depending only on the original coefficient $a$,
\begin{eqnarray}
-2 \hat{\cal L}_2-\sqrt{6a} \hat{\cal L}_3-a \hat{\cal L}_4,\ \ \ dS \ ,\\
2 \hat{\cal L}_2-\sqrt{6a} \hat{\cal L}_3+a \hat{\cal L}_4.\ \ \ AdS \ .
\end{eqnarray}
In the $dS$ case, the field has a normal sign kinetic term around the tachyonic $\pi=0$ solution, and a ghostly kinetic term around the symmetry breaking vacuum. In the $AdS$ case, the field is a ghost around the tachyonic $\pi=0$ solution, and is ghost-free around the symmetry breaking vacuum. In this case we see a version of ghost condensation along with the usual tachyon condensation. See figure \ref{potential}.
\begin{figure}
\centering
\includegraphics[width=4.0in]{potential}
\caption{$Z_2$ symmetry breaking for the $dS/AdS$ Galileons.}
\label{potential}
\end{figure}
\section{Conclusions}
The DGP model has led to a fertile area of research, both in five dimensional braneworld models and in the $4$d effective field theories to which they lead. The Galileon theory exhibits a fascinating structure, terminating after a finite number of terms and obeying a nontrivial symmetry group arising from combinations of the higher dimensional symmetries. Perhaps most interestingly, the Galileon theories admit a non-renormalization theorem, which some authors have suggested makes them well-suited for applications to inflation and the late-time acceleration of the universe.
In this paper we have shown that the Galileon theory is a special case of a class of effective field theories that may be identified by embedding a brane solution with a general set of symmetries in a bulk of a similarly general structure. The theories obtained in this way may be interesting as examples of higher dimensional gravitating theories, or may merely provide new nontrivial examples of $4$d effective field theories.
We have derived the general conditions for the brane constructions and for obtaining the associated four-dimensional effective theories. We have then applied this construction comprehensively to all possible special cases in which both the brane and the bulk are maximally symmetric spaces in their respective dimensionalities (with the bulk metric having only a single time direction). The results are new classes of effective field theories, sharing the important properties of the Galileons, while exhibiting distinctive new features, such as the existence of potentials, with masses fixed by symmetries.
These potentials open up the possibility of new natural implementations of accelerating cosmological solutions in theories naturally having a de Sitter solution.
Furthermore, in some cases the potentials allow both spontaneous symmetry breaking and ghost condensation at the same time. This may lead to other new consequences, including the possibility of novel topological defects in these theories. We are currently studying these implications~\cite{GHT-newpaper}.
\bigskip
\goodbreak
\centerline{\bf Acknowledgements}
\noindent
\\
The authors are grateful to Sergei Dubovsky and Justin Khoury for discussions and comments. This work is supported in part by NASA ATP grant NNX08AH27G, NSF grant PHY-0930521, and by Department of Energy grant DE-FG05-95ER40893-A020. MT is also supported by the Fay R. and Eugene L. Langberg chair.
\section{Background}
\label{sec:background}
\subsection{Basic notions}
\paragraph*{Manifolds.}
We model a $3$D shape as a two-dimensional compact Riemannian manifold (surface) $X$, possibly with boundary $\partial X$.
Let $T_x X$ denote the \emph{tangent plane} at $x$, modeling the surface locally as a Euclidean space, $TX$ denote the \emph{tangent bundle}, and let
$\text{exp}_x \colon T_x X \to X$ be the \emph{exponential map}, mapping tangent vectors onto the surface.
A \emph{Riemannian metric} is an inner product $\langle \cdot,\cdot \rangle_{T_x X} \colon T_x X \times T_x X \to \mathbb{R}$ on the tangent plane, depending smoothly on $x$. The Riemannian metric is represented as a $2\times 2$ matrix referred to as the {\em first fundamental form}.
Quantities which are expressible entirely in terms of the Riemannian metric, and therefore independent of the way the surface is embedded, are called \emph{intrinsic}. Such quantities are invariant to isometric (metric-preserving) deformations.
\paragraph*{Curvature.}
Given an embedding of the surface, the \emph{second fundamental form}, represented as a $2\times 2$ matrix at each point, describes how the surface locally differs from a plane.
The eigenvalues $\kappa_m, \kappa_M$ of the second fundamental form are called the \emph{principal curvatures}; the corresponding eigenvectors $\vct{v}_m, \vct{v}_M$, called the \emph{principal curvature directions}, form an orthonormal basis on the tangent plane.
\paragraph*{Differential operators on manifolds.}
Let $f \colon X \to \mathbb{R}$ be a smooth scalar field on the surface. The \emph{intrinsic gradient} is defined as
\[
\nabla_X f(x) = \nabla (f \circ \exp_x)(\mathbf{0}),
\]
where $\nabla$ denotes the standard Euclidean gradient acting in the tangent plane.
The intrinsic gradient can be interpreted as the direction (tangent vector on $T_x X$) in which $f$ changes the most at point $x$.
First-order Taylor expansion takes the form $(f \circ \exp_x)(\mathbf{v}) \approx f(x) + \langle \nabla_X f(x), \mathbf{v} \rangle_{T_x X}$, where the second term is the {\em directional derivative} of $f$ along the tangent vector $\mathbf{v} \in T_x X$.
Given a smooth vector field $\mathbf{v} \colon X \to TX$, the \emph{intrinsic divergence} is an operator acting on vector fields producing scalar fields, defined as the negative adjoint of the intrinsic gradient operator,
\begin{equation}
\int_X \langle \nabla_X f(x), \mathbf{v}(x) \rangle_{T_x X} dx = - \int_X f(x) \text{div}_X \mathbf{v}(x) dx,
\end{equation}
where the area element $dx$ is induced by the Riemannian metric.
Combining the two, we define the \emph{Laplace-Beltrami operator} as
\begin{equation}\label{eq:LBO}
\Delta_X f(x) = - \text{div}_X ( \nabla_X f(x) ).
\end{equation}
\subsection{Spectral analysis on manifolds}
\paragraph*{Laplacian eigenfunctions and eigenvalues.}
$\Delta_X$ is a positive-semidefinite operator, admitting real eigendecomposition
\begin{eqnarray}
\Delta_{X} \phi_i(x) = \lambda_i \phi_i(x) & \quad & x \in \mathrm{int}(X) \\
\langle \nabla_{X} \phi_i(x) , \hat{n}(x) \rangle = 0 & \quad & x \in \partial X,
\label{eq:neumann}
\end{eqnarray}
with homogeneous Neumann boundary conditions~(\ref{eq:neumann}) if $X$ has a boundary (here $\hat{n}$ denotes the normal vector to the boundary), where $0 = \lambda_0 <\lambda_1 \leq \hdots$ are eigenvalues and $\phi_0, \phi_1, \hdots$ are the corresponding eigenfunctions.
The eigenfunctions are orthonormal w.r.t. the standard inner product $\langle \phi_i, \phi_j \rangle_X = \int_X \phi_i(x) \phi_j(x) dx = \delta_{ij}$ and form an orthonormal basis for the functional space $L^2(X) = \{ f: X \rightarrow \mathbb{R} : \langle f, f \rangle_X < \infty \}$.
The Laplacian eigenfunctions are a generalization of the classical Fourier basis to non-Euclidean domains:
a function $f\in L^2(X)$ can be represented as the {\em Fourier series}
\begin{eqnarray}
f(x) &=& \sum_{k\geq 0} \underbrace{\langle f, \phi_k \rangle_X}_{\hat{f}_k} \phi_k(x),
\label{eq:fourier}
\end{eqnarray}
where
the eigenvalues $\{ \lambda_k \}_{k\geq 0}$ play the role of frequencies (the first eigenvalue $\lambda_0 =0$ corresponds to a constant `DC' eigenfunction) and the Fourier coefficients $\{\hat{f}_k\}_{k\geq 0}$ can be interpreted as the Fourier transform of $f$.
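Anticipating the discretization of Section~\ref{sec:discrete}, where the eigenfunctions are computed as columns of a matrix $\pmb{\Phi}$ orthonormal w.r.t. the lumped area weights, the Fourier analysis and a band-limited synthesis amount to two matrix products. A minimal sketch (assuming the area weights are stored as a vector) is:
\begin{verbatim}
import numpy as np

def fourier_coeffs(Phi, area, f):
    # Phi: (n, k) Laplacian eigenfunctions (columns); area: (n,) lumped area
    # weights; f: (n,) sampled function.  Discrete version of <f, phi_k>_X.
    return Phi.T @ (area * f)

def truncated_synthesis(Phi, f_hat):
    # Band-limited reconstruction  sum_k f_hat[k] * phi_k(x).
    return Phi @ f_hat
\end{verbatim}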
\paragraph*{Heat diffusion on manifolds}
is governed by the {\em heat equation}, which, in the simplest case of {\em homogeneous} and {\em isotropic} heat conductivity properties of the surface, takes the form
\begin{equation}\label{eq:heat_eq}
f_t(x,t) = -\Delta_X f(x,t),
\end{equation}
where $f(x,t)$ denotes the temperature at point $x$ at time $t$, and appropriate boundary conditions are applied if necessary.
Given some initial heat distribution $f_0(x) = f(x,0)$, the solution of heat equation~(\ref{eq:heat_eq}) at time $t$ is obtained by applying the {\em heat operator} $H^t = e^{-t \Delta_X}$ to $f_0$,
\begin{equation}
\label{eq:heatop}
f(x,t) = H^t f_0(x) = \int_X f_0(\xi) h_t(x,\xi)\;d\xi\,,
\end{equation}
where $h_t(x,\xi)$ is called the \emph{heat kernel}, and the above equation can be interpreted as a non-shift-invariant version of convolution.\footnote{In the Euclidean case, the heat kernel has the form $h_t(x-\xi)$ and the solution is given as $f = f_0 \ast h_t$. In signal processing terms, the heat kernel in the Euclidean case is the impulse response of a linear shift-invariant system.}
In the spectral domain, the heat kernel is expressed as
\begin{equation}\label{eq:heat_kernel}
h_t(x,\xi) = \sum_{k \ge 0} e^{-t \lambda_k} \phi_k(x) \phi_k(\xi).
\end{equation}
Appealing again to the signal processing intuition,
$e^{-t \lambda}$ acts as a low-pass filter (larger $t$ corresponding to longer diffusion results in a filter with a narrower pass band).
\paragraph*{Spectral descriptors.}
The diagonal of the heat kernel $h_t(x,x)$, also known as {\em autodiffusivity}, for a range of values $t$, was used in \cite{HKS1,HKS2} as a local intrinsic shape descriptor referred to as the Heat Kernel Signature (HKS). The Wave Kernel Signature (WKS) \cite{WKS} uses another set of band-pass filters instead of the low-pass filters $e^{-t \lambda_k}$.
The Optimal Spectral Descriptors (OSD) \cite{OSD} approach suggested learning a set of optimal task-specific filters instead of the ``handcrafted'' low- or band-pass filters.
\subsection{Anisotropic heat kernels}
\paragraph*{Anisotropic diffusion.}
In a more general setting, the heat equation has the form
\begin{equation}\label{eq:heat_eqa}
f_t(x,t) = -\text{div}_X(\mathbf{D}(x) \nabla_X f(x,t) ),
\end{equation}
where $\mathbf{D}(x)$ is the {\em thermal conductivity tensor} ($2\times 2$ matrix) applied to the intrinsic gradient in the tangent plane. This formulation allows modeling heat flow that is position- and direction-dependent ({\em anisotropic}).
The special case of equation~(\ref{eq:heat_eq}) assumes $\mathbf{D}(x) = \mathbf{I}$.
Andreux \emph{et al.~} \cite{albo} considered anisotropic diffusion driven by the surface curvature. Boscaini \emph{et al.~} \cite{ADD}, assuming that at each point $x$ the tangent vectors are expressed w.r.t. the orthogonal basis $\mathbf{v}_m, \mathbf{v}_M$ of principal curvature directions, used a thermal conductivity tensor of the form
\begin{equation}
\label{eq:aniso_tensor}
\mathbf{D}_{\alpha \theta}(x) = \mathbf{R}_\theta(x)
\begin{bmatrix}
\alpha & \\
& 1
\end{bmatrix}
\mathbf{R}^\top_\theta(x),
\end{equation}
where the $2\times 2$ matrix $\mathbf{R}_\theta(x)$ performs a rotation by $\theta$ w.r.t. the maximum curvature direction $\mathbf{v}_M(x)$, and
$\alpha > 0$ is a parameter controlling the degree of anisotropy ($\alpha = 1$ corresponds to the classical isotropic case).
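Written out in the principal curvature basis, $\mathbf{D}_{\alpha\theta}$ is simply a rotated diagonal matrix; a minimal sketch (with the basis ordering of equation~(\ref{eq:aniso_tensor})) is:
\begin{verbatim}
import numpy as np

def aniso_tensor(alpha, theta):
    # 2x2 thermal conductivity tensor D = R_theta diag(alpha, 1) R_theta^T,
    # expressed in the orthonormal basis of principal curvature directions.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([alpha, 1.0]) @ R.T
\end{verbatim}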
\paragraph*{Anisotropic Laplacian.}
We refer to the operator
\[
\Delta_{\alpha\theta}f(x) = -\text{div}_X (\mathbf{D}_{\alpha \theta}(x) \nabla_X f(x))
\]
as the {\em anisotropic Laplacian}, and denote by $\{ \phi_{\alpha\theta i}, \lambda_{\alpha\theta i} \}_{i\geq 0}$ its eigenfunctions and eigenvalues (computed, if applicable, with the appropriate boundary conditions).
Analogously to equation~(\ref{eq:heat_kernel}), the {\em anisotropic heat kernel} is given by
\begin{equation}
\label{eq:ahks}
h_{\alpha\theta t}(x,\xi) = \sum_{k\geq 0} e^{-t \lambda_{\alpha\theta k}} \phi_{\alpha\theta k}(x) \phi_{\alpha\theta k}(\xi).
\end{equation}
This construction was used in Anisotropic Diffusion Descriptors (ADD) \cite{ADD} to generalize the OSD approach using anisotropic heat kernels (considering the diagonal $h_{\alpha\theta t}(x,x)$ and learning a set of optimal task-specific spectral filters replacing the low-pass filters $e^{-t \lambda_{\alpha\theta k}}$).
\section{Conclusions}
\label{sec:concl}
We presented Anisotropic CNN, a new framework generalizing convolutional neural networks to non-Euclidean domains and allowing deep learning to be performed on geometric data. Our work follows the very recent trend in bringing machine learning methods to computer graphics and geometry processing applications, and is currently the most generic intrinsic CNN model.
Our experiments show that ACNN outperforms previously proposed intrinsic CNN models, as well as other state-of-the-art methods, on the shape correspondence problem in challenging settings.
Being a generic model, ACNN can be used for many other applications.
We believe it would be useful for the computer graphics and geometry processing community, and will release the code.
\section{Intrinsic deep learning}
\label{sec:deeplearn}
This paper deals with the extension of convolutional neural networks (CNN) to non-Euclidean domains.
CNNs \cite{lecun1989backpropagation} have recently become extremely popular in the computer vision community due to a series of successful applications in many classically difficult problems in that domain.
%
A typical convolutional neural network architecture is hierarchical, composed of
alternating convolutional, pooling (i.e. averaging), linear, and non-linear layers. The parameters of different layers are learned by minimizing some task-specific cost function. The key feature of CNNs is the convolutional layer, implementing the idea of ``weight sharing'', wherein a small set of templates (filters) is applied to different parts of the data.
In image analysis applications, the input into the CNN is a function representing pixel values given on a Euclidean domain (plane); due to shift-invariance the convolution can be thought of as passing a template across the plane and recording the correlation of the template with the function at that location.
One of the major problems in applying the CNN paradigm to non-Euclidean domains is the lack of shift-invariance, making it impossible to think of convolution as correlation with a fixed template: the template now has to be location-dependent.
There have recently been several attempts to develop intrinsic CNNs on non-Euclidean domains, which we overview below. The advantage of intrinsic CNN models over descriptor learning frameworks such as OSD \cite{OSD} or ADD \cite{ADD} is that they accept as input any information on the surface, which can represent photometric properties (texture), some geometric descriptor, a motion field, etc. Conversely, learnable spectral descriptors try to learn the best spectral kernel acting on the Laplacian eigenvalues, and are thus limited to the geometric data of the manifold only.
\paragraph*{Geodesic CNN (GCNN)}
was introduced by Masci \emph{et al.~} \cite{masci2015shapenet} as a generalization of CNN to triangular meshes based on geodesic local patches.
The core of this method is the construction of local geodesic polar coordinates
using a procedure previously employed for intrinsic shape context descriptors \cite{ISC}. The {\em patch operator} $(D(x)f)(\theta,\rho)$ in GCNN maps the values of the function $f$ around vertex $x$ into the local polar coordinates $\theta, \rho$, leading to the definition of the {\em geodesic convolution}
\begin{eqnarray}
\label{eq:geoconv}
(f \ast a)(x) &=& \max_{\Delta\theta \in [0, 2\pi)} \int a(\theta + \Delta\theta,\rho) (D(x)f)(\theta,\rho) d\rho d\theta,
\end{eqnarray}
which follows the idea of multiplication by template, but is defined up to arbitrary rotation $\Delta\theta \in [0, 2\pi)$ due to the ambiguity in the selection of the origin of the angular coordinate.
Taking the maximum over all possible rotations of the template $a(\rho,\theta)$ is necessary to remove this ambiguity.
Here, and in the following, $f$ is some feature vector that is defined on the surface (e.g. texture, geometric descriptors, etc.)
There are several drawbacks to this construction. First, the charting method relies on a fast marching-like procedure requiring a triangular mesh. The method is relatively insensitive to the triangulation \cite{ISC}, but may fail if the mesh is very irregular.
Second, the radius of the geodesic patches must be sufficiently small compared to the injectivity radius of the shape, otherwise the resulting patch is not guaranteed to be a topological disk. In practice, this limits the size of the patches one can safely use, or requires an adaptive radius selection mechanism.
\paragraph*{Spectral CNN (SCNN). }
Bruna \emph{et al.~} \cite{Bruna} defined a generalization of convolution in the spectral domain, appealing to the Convolution Theorem, stating that in the Euclidean case, the convolution operator is diagonalized in the Fourier basis. This allows defining a non-shift-invariant convolution as the inverse Fourier transform of the product of two Fourier transforms,
\begin{eqnarray}
\label{eq:specconv}
(f \ast a)(x) &=& \sum_{k\geq 0} \langle f, \phi_k \rangle_X \underbrace{\langle a, \phi_k \rangle_X}_{\hat{a}_k} \phi_k(x),
\end{eqnarray}
where the Fourier transform is understood as inner products of the function with the Laplace-Beltrami orthogonal eigenfunctions.
The filter in this formulation is represented in the frequency domain by the set of Fourier coefficients $\{\hat{a}_k\}_{k\geq 0}$. The SCNN is essentially a classical CNN where standard convolutions are replaced by definition~(\ref{eq:specconv}) and the frequency representations of the filters are learned.
The key drawback of this approach is the lack of generalizability, due to the fact that the filter coefficients $\{\hat{a}_k\}_{k\geq 0}$ depend on the basis $\{\phi_k(x)\}_{k\geq 0}$; as a result, applying a filter on two different shapes may produce two different results.
Therefore, SCNN can be used to learn on a non-Euclidean domain, but not across different domains.
Second, the filters defined in the frequency domain lack a clear geometric interpretation. Third, the filters are not guaranteed to be localized in the spatial domain.
\paragraph*{Localized Spectral CNN (LSCNN).}
Boscaini \emph{et al.~} \cite{WFT2015} proposed an approach that combines the ideas of GCNN (spatial ``patch operator'') and SCNN (frequency-domain filters). The key concept is based on the Windowed Fourier Transform (WFT) \cite{Shumann}, a generalization to manifolds of a classical signal processing tool in which frequency analysis is applied in a small window.
The WFT boils down to projecting the function $f$ on a set of modulated local windows (``atoms'')
$g_{\xi,k}(x) = \phi_k(x) \sum_{l\geq 0} \hat{g}_l \phi_l(\xi) \phi_l(x)$,
\begin{eqnarray}
\label{eq:wft}
(S(x) f)_k &=& \langle f, g_{x,k}\rangle_X = \sum_{l\geq 0} \hat{g}_l \phi_l(x) \langle f, \phi_l \phi_k \rangle_X,
\end{eqnarray}
where $\{\hat{g}_k\}_{k\geq 0}$ are the Fourier coefficients of the window.
The WFT can be regarded as a ``patch operator'', representing the local values of $f$ around a point $x$ in the frequency domain.
The main advantage of this approach is that, being a spectral construction, it is easily applicable to any representation of the shape (mesh, point cloud, etc.), provided an appropriate discretization of the Laplace-Beltrami operator.
Since the patch operator is constructed in the frequency domain using the WFT, there is also no issue related to the topology of the patch like in the GCNN.
Yet, unlike the geodesic patches used in GCNN, LSCNN lacks oriented structure, which tends to be important in capturing the local context.
From the computational standpoint, a notable disadvantage of the WFT-based construction is the need to explicitly produce each window, which results in high memory and computational requirements.
\section{Numerical implementation}
\label{sec:discrete}
\definecolor{mygray}{HTML}{6195C8}
\begin{figure}[t!]
\centering
\begin{overpic}
[trim=0cm 0cm 0cm 0cm,width=0.97\linewidth]{./cotans1.pdf}
\put(89,6){\footnotesize $\alpha_{ij}$}
\put(6,18){\footnotesize $\beta_{ij}$}
\put(73,19){\footnotesize $\color{red}\theta$}
%
\put(40.2,5){\footnotesize $i$}
\put(58,44){\footnotesize $j$}
\put(100,2){\footnotesize $k$}
\put(-2,18){\footnotesize $h$}
%
\put(52,48){\footnotesize $\color{red}\mathbf{R}_\theta\hat{\vct{u}}_m$}
\put(93,29){\footnotesize $\color{red}\mathbf{R}_\theta\hat{\vct{u}}_M$}
%
\put(70,49){\footnotesize $\hat{\vct{u}}_m$}
\put(94.5,14){\footnotesize $\hat{\vct{u}}_M$}
\put(78,47){\footnotesize $\color{gray}\hat{\vct{n}}$}
%
\put(75,28){\footnotesize $\color{mygray}\hat{\vct{e}}_{kj}$}
\put(65,2.5){\footnotesize $\color{mygray}\hat{\vct{e}}_{ki}$}
%
\put(21,9){\footnotesize $\color{mygray}\hat{\vct{e}}_{hi}$}
\put(25.5,33){\footnotesize $\color{mygray}\hat{\vct{e}}_{hj}$}
\end{overpic}
\caption{\label{fig:cotans}Discretization of the anisotropic Laplace-Beltrami operators on a triangular mesh. The orthogonal basis vectors $\hat{\vct{u}}_M,\hat{\vct{u}}_m$, as well as their rotated counterparts (in red), lie in the plane of the respective triangle (reproduced from \cite{ADD}).
}
\end{figure}
\subsection{Anisotropic Laplacian discretization}
In the discrete setting, the surface $X$ is sampled at $n$ points $V = \{ \mathbf{x}_1,\dots, \mathbf{x}_n \}$. The points are connected by edges $E$ and faces $F$, forming a manifold triangular mesh $(V,E,F)$.
To each triangle $ijk \in F$, we attach an orthonormal reference frame $\vct{U}_{ijk} = (\hat{\vct{u}}_M,\hat{\vct{u}}_m,\hat{\vct{n}})$, where $\hat{\vct{n}}$ is the unit normal vector to the triangle and $\hat{\vct{u}}_M,\hat{\vct{u}}_m\in\mathbb{R}^3$ are the directions of principal curvature, computed using the method of \cite{cohen03}.
The thermal conductivity tensor for the triangle $ijk$ operating on tangent vectors is expressed w.r.t. $\vct{U}_{ijk}$ as a $3\times 3$ matrix $\begin{psmallmatrix}\alpha& &\\ & 1 & \\ & &1\end{psmallmatrix}$.
We first derive the case $\theta = 0$.
Let $\hat{\vct{e}}_{ab} \in \mathbb{R}^3$ denote the oriented edge pointing from vertex $a$ to vertex $b$, normalized to unit length, and consider the triangle $ijk$ as in Figure~\ref{fig:cotans}. We define the $\mathbf{H}$-weighted inner product between edges $\hat{\vct{e}}_{kj}$ and $\hat{\vct{e}}_{ki}$ as
\begin{align}
\label{eq:shear}
\langle \hat{\vct{e}}_{kj} , \hat{\vct{e}}_{ki}\rangle_{\mathbf{H}}
=\hat{\vct{e}}_{kj}\ensuremath{^\top} \underbrace{\vct{U}_{ijk} \begin{psmallmatrix}\alpha& &\\ & 1 & \\ & &1\end{psmallmatrix} \vct{U}_{ijk}\ensuremath{^\top}}_\vct{H} \hat{\vct{e}}_{ki}\,,
\end{align}
where the {\em shear matrix} $\vct{H}$ encodes the anisotropic scaling up to an orthogonal basis change. Note that in the isotropic case ($\alpha = 1$) we have $\vct{H}=\mathbf{I}$, such that the $\mathbf{H}$-weighted inner product simplifies to the standard inner product $\langle \hat{\vct{e}}_{kj} , \hat{\vct{e}}_{ki}\rangle_{\mathbf{H}} = \cos \alpha_{ij}$.
The discretization of the anisotropic Laplacian takes the form of an $n \times n$ sparse matrix $\vct{L} = -\vct{S}^{-1} \vct{W}$.
The {\em mass matrix} $\vct{S}$ is a diagonal matrix of area elements $s_i = \frac{1}{3}\sum_{jk:ijk\in F} A_{ijk}$, where $A_{ijk}$ denotes the area of triangle $ijk$. The {\em stiffness matrix} $\vct{W}$ is composed of weights
\begin{eqnarray}\label{eq:acotan}
w_{ij} & = & \left\{
\begin{array}{ll}
\frac{1}{2}\left(\frac{\langle \hat{\vct{e}}_{kj} , \hat{\vct{e}}_{ki}\rangle_{\mathbf{H}}}{\sin \alpha_{ij}} + \frac{\langle \hat{\vct{e}}_{hj} , \hat{\vct{e}}_{hi}\rangle_{\mathbf{H}}}{\sin \beta_{ij}}\right) & (i,j) \in E_\mathrm{int}; \\
%
\frac{1}{2}\frac{\langle \hat{\vct{e}}_{kj} , \hat{\vct{e}}_{ki}\rangle_{\mathbf{H}}}{\sin \alpha_{ij}} & (i,j) \in E_{\partial}; \\
%
-\sum_{k\neq i} w_{ik} & i = j; \\
%
0 & \mathrm{else}\,,
\end{array}
\right.
\end{eqnarray}
where the notation is according to Figure~\ref{fig:cotans} and $E_\mathrm{int}, E_{\partial}$ denote interior and boundary edges, respectively.
In the isotropic case,
$\frac{\langle \hat{\vct{e}}_{kj},\hat{\vct{e}}_{ki}\rangle_{\mathbf{H}}}{\sin \alpha_{ij}} = \frac{\cos \alpha_{ij}}{\sin \alpha_{ij}}=\cot \alpha_{ij}$, thus reducing equation~\eqref{eq:acotan} to the classical cotangent formula \cite{macneal1949solution,duffin1959distributed,Pinkall1993,meyer2003:ddg}.
To obtain the general case $\theta \neq 0$, it is sufficient to rotate the basis vectors $\vct{U}_{ijk}$ on each triangle around the respective normal $\hat{\vct{n}}$ by the angle $\theta$, equal for all triangles (see Figure~\ref{fig:cotans}, red). Denoting by $\vct{R}_\theta$ the corresponding $3\times 3$ rotation matrix, this is equivalent to modifying the $\mathbf{H}$-weighted inner product with the directed shear matrix $\vct{H}_\theta = \vct{R}_\theta \vct{H}\vct{R}\ensuremath{^\top}_\theta$. The resulting weights $w_{ij}$ in equation \eqref{eq:acotan} are thus obtained by using the inner products $\langle \hat{\vct{e}}_{kj} , \hat{\vct{e}}_{ki}\rangle_{\mathbf{H}_\theta} = \hat{\vct{e}}_{kj}\ensuremath{^\top} \vct{H}_\theta \hat{\vct{e}}_{ki}$.
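To make the assembly concrete, the following Python sketch (illustrative only, not reference code) builds $\vct{W}$ and $\vct{S}$ from a triangle mesh, assuming the per-triangle principal curvature directions and unit normals have been precomputed, e.g. with the method of \cite{cohen03}. Boundary edges are handled automatically, since they receive a single triangle contribution as in the second case of equation~\eqref{eq:acotan}.
\begin{verbatim}
import numpy as np
from scipy import sparse

def aniso_laplacian(V, F, U_M, U_m, N, alpha=1.0, theta=0.0):
    # V: (n,3) vertices; F: (m,3) triangles; U_M, U_m, N: (m,3) per-triangle
    # principal directions and unit normals.  Returns the stiffness matrix W
    # and lumped mass matrix S; the discrete operator is then L = -S^{-1} W.
    n = len(V)
    rows, cols, vals = [], [], []
    areas = np.zeros(n)
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    for t, (i, j, k) in enumerate(F):
        U = np.stack([U_M[t], U_m[t], N[t]], axis=1)        # columns: local frame
        # directed shear H_theta = R_theta H R_theta^T, written via the frame
        H = U @ Rz @ np.diag([alpha, 1.0, 1.0]) @ Rz.T @ U.T
        for a, b, apex in ((i, j, k), (j, k, i), (k, i, j)):
            e1 = V[a] - V[apex]; e1 = e1 / np.linalg.norm(e1)
            e2 = V[b] - V[apex]; e2 = e2 / np.linalg.norm(e2)
            cos_ang = np.clip(e1 @ e2, -1.0, 1.0)
            sin_ang = np.sqrt(1.0 - cos_ang**2) + 1e-12
            w = 0.5 * (e1 @ H @ e2) / sin_ang               # H-weighted "cotangent"
            rows += [a, b]; cols += [b, a]; vals += [w, w]
        A = 0.5 * np.linalg.norm(np.cross(V[j] - V[i], V[k] - V[i]))
        areas[[i, j, k]] += A / 3.0
    W = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    W = W - sparse.diags(np.asarray(W.sum(axis=1)).ravel()) # w_ii = -sum_k w_ik
    return W, sparse.diags(areas)
\end{verbatim}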
\subsection{Heat kernels}
The computation of heat kernels is performed in the frequency domain, using a truncation of formula~(\ref{eq:heat_kernel}).
The first $k$ eigenfunctions and eigenvalues of the Laplacian are computed by performing the generalized eigen-decomposition $\mathbf{W} \pmb{\Phi} = \mathbf{S}\pmb{\Phi}\pmb{\Lambda}$, where $\pmb{\Phi} = (\pmb{\phi}_1, \hdots, \pmb{\phi}_k)$ is an $n\times k$ matrix containing as columns the discretized eigenfunctions and $\pmb{\Lambda} = \mathrm{diag}(\lambda_1, \hdots, \lambda_k)$ is the diagonal matrix of the corresponding eigenvalues.
The heat operator is given by $\pmb{\Phi} e^{-t\pmb{\Lambda}} \pmb{\Phi}^\top$; the $i$th row/column represents the values of the heat kernel at point $\mathbf{x}_i$.
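A possible realization with SciPy is sketched below; note that with the sign convention of equation~\eqref{eq:acotan} the matrix $-\vct{W}$ is positive semi-definite, so we solve $-\vct{W}\pmb{\Phi} = \vct{S}\pmb{\Phi}\pmb{\Lambda}$ to obtain non-negative frequencies (a sketch, not reference code).
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import eigsh

def heat_kernel_matrix(W, S, k=100, t=1.0):
    # k smallest generalized eigenpairs; the small negative shift avoids
    # factorizing the singular matrix at lambda = 0.
    lam, Phi = eigsh(-W, k=k, M=S, sigma=-1e-6, which='LM')
    # Truncated heat operator Phi exp(-t Lambda) Phi^T; entry (i,j) ~ h_t(x_i, x_j).
    K = (Phi * np.exp(-t * lam)) @ Phi.T
    # Applying it to a sampled function uses the area weights: f_t = K @ (S @ f0).
    return K
\end{verbatim}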
\section{Introduction}
In geometry processing, computer graphics, and vision, finding intrinsic correspondence between 3D shapes affected by different transformations is one of the fundamental problems with a wide spectrum of applications ranging from texture mapping to animation \cite{kaick2010survey}.
Of particular interest is the setting in which the shapes are allowed to deform non-rigidly. Recently, the emergence of 3D sensing technology has brought the need to deal with acquisition artifacts, such as missing parts and geometric and topological noise, as well as the need to match 3D shapes in different representations, such as meshes and point clouds.
The main topic of this paper is establishing dense intrinsic correspondence between non-rigid shapes in such challenging settings.
\subsection{Related works}
\paragraph*{Correspondence.}
Traditional correspondence approaches try to find a {\em point-wise} matching between (a subset of) the points on two or more shapes.
\emph{Minimum-distortion methods} establish the matching by minimizing some structure distortion, which can include similarity of local features \cite{OvMe*10,WKS,ZaBo*09}, geodesic \cite{Memoli:2005,bro:bro:kim:PNAS,koltun} or diffusion distances \cite{lafon:05:LOCAL}, or a combination thereof \cite{torresani2008feature}, or higher-order structures \cite{zeng2010dense}.
Typically, the computational complexity of such methods is high, and there have been several attempts to alleviate it using hierarchical \cite{sahillioglu2012} or subsampling \cite{tevs2011intrinsic} methods.
Several approaches formulate the correspondence problem as quadratic assignment and employ different relaxations thereof \cite{umeyama1988eigendecomposition,leordeanu2005spectral,rodola2012game,koltun,lipman15}.
\emph{Embedding methods} try to exploit some assumption on the shapes (e.g. approximate isometry) in order to parametrize the correspondence problem with a small number of degrees of freedom.
Elad and Kimmel \cite{ela:kim:FLATTEN} used multi-dimensional scaling to embed the geodesic metric of the matched shapes into a low-dimensional Euclidean space, where alignment of the resulting ``canonical forms'' is then performed by simple rigid matching (ICP) \cite{ChenMedioni:91:ICP,bes:mck:SURFACEMATCH}.
The works of \cite{Mateus08,shtern2013matching} used the eigenfunctions of the Laplace-Beltrami operator as embedding coordinates and performed matching in the eigenspace.
Lipman \emph{et al.~} \cite{Lipman2011,KimLCF10,kim2011blended} used conformal embeddings into disks and spheres to parametrize correspondences between homeomorphic surfaces as M{\"o}bius transformations.
As opposed to point-wise correspondence methods, \emph{soft correspondence} approaches assign a point on one shape to more than one point on the other.
Several methods formulated soft correspondence as a mass-transportation problem \cite{Me11,solomon2012soft}.
Ovsjanikov \emph{et al.~} \cite{ovsjanikov2012functional} introduced the \emph{functional correspondence} framework, modeling the correspondence as a linear operator between spaces of functions on two shapes, which has an efficient representation in the Laplacian eigenbases.
This approach was extended in several follow-up works \cite{pokrass2013sparse,kovnatsky15,SGMDS,PFM2016}.
In the past year, we have witnessed the emergence of \emph{learning-based approaches} for 3D shape correspondence.
The dramatic success of deep learning (in particular, convolutional neural networks \cite{fukushima1980neocognitron,lecun1989backpropagation}) in computer vision \cite{Krizhevsky:2012} has led to a recent keen interest in the geometry processing and graphics communities to apply such methodologies to geometric problems \cite{masci2015shapenet,kalogerakis2015,Wu,WFT2015,Li}.
\begin{figure}[t]
\centering
\begin{overpic}
[width=1\linewidth]{./extvsint.pdf}
\end{overpic}
\caption{
Illustration of the difference between extrinsic and intrinsic deep learning methods on geometric data.
Left: extrinsic methods such as volumetric CNNs treat 3D geometric data in its Euclidean representation. Such a representation is not invariant to deformations (e.g., in the shown example, the filter that responds to features on a straight cylinder would not respond to a bent one). Right: in an intrinsic representation, the filter is applied to some data on the surface itself, thus being invariant to deformations.
}
\label{fig:extvsint}
\end{figure}
\paragraph*{Extrinsic deep learning.}
Many machine learning techniques successfully working on images were tried ``as is'' on 3D geometric data, represented for this purpose in some way ``digestible'' by standard frameworks.
Su \emph{et al.~} \cite{kalogerakis2015} used CNNs applied to range images obtained from multiple views of 3D objects for retrieval and classification tasks.
Wei \emph{et al.~} \cite{Li} used view-based representation to find correspondence between non-rigid shapes.
Wu \emph{et al.~} \cite{Wu} used volumetric CNNs applied to rasterized volumetric representation of 3D shapes.
The main drawback of such approaches is their treatment of geometric data as Euclidean structures (see Figure~\ref{fig:extvsint}). First, for complex 3D objects, Euclidean representations such as depth images or voxels may lose significant parts of the object or its fine details, or even break its topological structure (in particular, due to computational reasons, the volumetric CNNs \cite{Wu} used a $64\times 64\times 64$ cube, allowing only a very coarse representation of 3D geometry). Second, Euclidean representations are not intrinsic, and vary as the result of pose or deformation of the object. Achieving invariance to shape deformations, a common requirement in many applications, is extremely hard with the aforementioned methods and requires complex models and huge training sets due to the large number of degrees of freedom involved in describing non-rigid deformations.
\paragraph*{Intrinsic deep learning}
approaches try to apply learning techniques to geometric data by generalizing the main ingredients such as convolutions to non-Euclidean domains.
One of the first attempts to learn spectral kernels for shape recognition was made in \cite{Aflalo}. Litman and Bronstein \cite{OSD} learned optimal spectral descriptors that generalize the popular ``handcrafted'' heat- \cite{HKS1,HKS2} and wave-kernel signatures \cite{WKS}, showing performance superior to both.
Their construction was recently extended in \cite{ADD} using anisotropic spectral kernels, referred to as Anisotropic Diffusion Descriptors (ADD), based on the anisotropic Laplace-Beltrami operator \cite{albo}. A key advantage of the resulting approach is the ability to disambiguate intrinsic symmetries \cite{ovsjanikov2008global}, to which most of the standard spectral descriptors are agnostic.
Corman \emph{et al.~} \cite{corman2014supervised} used descriptor learning in the functional maps framework.
Rodol{\`a} \emph{et al.~} \cite{rodoladense} proposed learning correspondences between non-rigid shapes using random forests applied to WKS descriptors.
The first intrinsic convolutional neural network architecture (Geodesic CNN) was presented in \cite{masci2015shapenet}. GCNN is based on a local intrinsic charting procedure from \cite{ISC},
and while producing impressive results on several shape correspondence and retrieval benchmarks, has a number of significant drawbacks. First, the charting procedure is limited to meshes, and second, there is no guarantee that the chart is always topologically meaningful.
Another intrinsic CNN construction (Localized Spectral CNN) using an alternative charting technique based on the windowed Fourier transform \cite{Shumann} was proposed in \cite{WFT2015}. This method is a generalization of a previous work \cite{Bruna} on spectral deep learning on graphs. One of the key advantages of LSCNN is that the same framework can be applied to different shape representations, in particular, meshes and point clouds.
A drawback of this approach is its memory and computation requirements, as each window needs to be explicitly produced.
\subsection{Main contributions}
In this paper, we present Anisotropic Convolutional Neural Networks (ACNN), a method for intrinsic deep learning on non-Euclidean domains. Though it is a generic framework that can be used to handle different tasks, we focus here on learning correspondence between shapes.
Our approach is related to two previous methods for deep learning on manifolds, GCNN \cite{masci2015shapenet} and ADD \cite{ADD}.
Compared to \cite{ADD}, where a learned spectral filter is applied to the eigenvalues of the anisotropic Laplace-Beltrami operator, we use anisotropic heat kernels as spatial weighting functions, allowing us to extract a local intrinsic representation of a function defined on the manifold. Unlike ADD, our ACNN is a convolutional neural network architecture.
Compared to GCNN, our construction of the ``patch operator'' is much simpler, does not depend on the injectivity radius of the manifold, and is not limited to triangular meshes.
Overall, ACNN combines all the best properties of the previous approaches without inheriting their drawbacks.
We show that the proposed framework beats GCNN, ADD, and other state-of-the-art approaches on challenging correspondence benchmarks.
The rest of the paper is organized as follows.
In Section~\ref{sec:background} we overview the main notions related to spectral analysis on manifolds and define anisotropic Laplacians and heat kernels.
In Section~\ref{sec:deeplearn} we briefly discuss previous approaches to intrinsic deep learning on manifolds and their drawbacks.
Section~\ref{sec:our} describes the proposed ACNN construction for learning intrinsic dense correspondence between shapes.
Section~\ref{sec:discrete} discusses the discretization and numerical implementation.
In Section~\ref{sec:results} we evaluate the proposed approach on standard benchmarks and compare it to previous state-of-the-art methods.
Finally, Section~\ref{sec:concl} concludes the paper.
\section*{Acknowledgment}
This research was supported by the ERC Starting Grant No. 307047 (COMET), a Google Faculty Research Award, and Nvidia equipment grant.
\bibliographystyle{eg-alpha-doi}
\section{Anisotropic convolutional neural networks}
\label{sec:our}
The construction presented in this paper aims at benefiting from all the advantages of the aforementioned intrinsic CNN approaches, without inheriting their drawbacks.
\paragraph*{Intrinsic convolution.}
The key idea of the Anisotropic CNN presented in this paper is the construction of a patch operator using anisotropic heat kernels. We interpret heat kernels as local weighting functions and construct
\begin{equation}
\label{eq:acnn}
(D(x) f)(\theta, t) =\frac{ \int_X h_{\alpha\theta t}(x,\xi) f(\xi) d\xi }{ \int_X h_{\alpha\theta t}(x,\xi) d\xi},
\end{equation}
for some anisotropy level $\alpha > 1$. This way, the values of $f$ around point $x$ are mapped to a local system of coordinates $(\theta, t)$ that behaves like a polar system (here $t$ denotes the scale of the heat kernel and $\theta$ is its orientation).
We define {\em intrinsic convolution} as
\begin{eqnarray}
\label{eq:intconv}
(f \ast a)(x) &=& \int a(\theta,t) (D(x)f)(\theta,t) dt d\theta .
\end{eqnarray}
Note that unlike the arbitrarily oriented geodesic patches in GCNN, which necessitate taking a maximum over all template rotations in~(\ref{eq:geoconv}), in our construction it is natural to use the principal curvature direction as the reference $\theta = 0$.
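To fix ideas, a vectorized sketch of the discrete patch operator and intrinsic convolution is given below. It is illustrative only: it stores the full $n\times n$ kernel matrices (e.g. as produced by the spectral computation of Section~\ref{sec:discrete}), which is wasteful but makes the structure of equations~(\ref{eq:acnn}) and~(\ref{eq:intconv}) explicit.
\begin{verbatim}
import numpy as np

def patch_operator(K, f):
    # K: (n_theta, n_t, n, n) anisotropic heat kernels h_{alpha,theta,t};
    # f: (n,) input function.  Returns (n, n_theta, n_t), the discrete
    # analogue of (D(x)f)(theta, t): normalized kernel-weighted averages.
    num = np.einsum('otij,j->iot', K, f)
    den = K.sum(axis=-1).transpose(2, 0, 1)
    return num / den

def intrinsic_conv(patches, a):
    # patches: (n, n_theta, n_t); a: (n_theta, n_t) filter coefficients.
    # Discrete analogue of (f * a)(x): a sum over the (theta, t) grid.
    return np.einsum('iot,ot->i', patches, a)
\end{verbatim}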
Such an approach has a few major advantages compared to previous intrinsic CNN models.
First, being a spectral construction, our patch operator can be applied to any shape representation (like LSCNN and unlike GCNN).
Second, being defined in the spatial domain, the patches and the resulting filters have a clear geometric interpretation (unlike LSCNN).
Third, our construction accounts for local directional patterns (like GCNN and unlike LSCNN).
Fourth, the heat kernels are always well defined independently of the injectivity radius of the manifold (unlike GCNN).
We summarize the comparative advantages in Table~\ref{tab:cnns}.
\begin{table*}
\centering
\begin{tabular}{ l c c c c c c c}
Method & Representation & Input & Generalizable & Filters & Context & Directional & Task\\
\hline
OSD \cite{OSD} & Any & Geometry & Yes & Spectral & No & No & Descriptor\\
ADD \cite{ADD} & Any & Geometry & Yes & Spectral & No & Yes & Any\\
RF \cite{rodoladense} & Any & Any & Yes & Spectral & No & No & Correspondence\\
GCNN \cite{masci2015shapenet} & Mesh & Any & Yes & Spatial & Yes & Yes & Any\\
SCNN \cite{Bruna} & Any & Any & No & Spectral & Yes & No & Any\\
LSCNN \cite{WFT2015} & Any & Any & Yes & Spectral & Yes & No & Any\\
Proposed ACNN & Any & Any & Yes & Spatial & Yes & Yes & Any\\
\hline
\end{tabular}
\caption{\label{tab:cnns}Comparison of different intrinsic learning models. Our ACNN model combines all the best properties of the other models.
Note that OSD and ADD are local spectral descriptors operating with intrinsic geometric information of the shape and cannot be applied to arbitrary input, unlike the Random Forest (RF) and convolutional models.
}
\end{table*}
\subsection{ACNN architecture}
Similarly to Euclidean CNNs, our ACNN consists of several layers that are applied subsequently, i.e. the output of the previous layer is used as the input into the subsequent one. The network is called {\em deep} if many layers are employed.
ACNN is applied in a point-wise manner on a function defined on the manifold, producing a point-wise output that is interpreted as soft correspondence, as described below.
We distinguish between the following types of layers:
\paragraph*{Fully connected (FC$Q$)} layer
typically follows the input layer and precedes the output layer to adjust the
input and output dimensions by
means of a linear combination. Given a $P$-dimensional input $\mathbf{f}^\mathrm{in}(x) = ( f_1^\mathrm{in}(x), \hdots, f_P^\mathrm{in}(x))$, the FC layer produces a $Q$-dimensional output $\mathbf{f}^\mathrm{out}(x) = ( f_1^\mathrm{out}(x), \hdots, f_Q^\mathrm{out}(x))$ as a linear combination of the input components with learnable weights $w$,
\begin{equation}
f^\text{out}_{q}(x) =
\eta\left( \sum_{p=1}^P w_{qp} f^\text{in}_p(x) \right);
\quad q = 1,\hdots, Q,
\label{eq:cnn_fc}
\end{equation}
The output of the FC layer is optionally passed through a non-linear function such as the ReLU~\cite{nair2010}, $\eta(t) = \max\{0,t\}$.
\paragraph*{Intrinsic convolution (IC$Q$)} layer
replaces the convolutional layer used in classical Euclidean CNNs with the construction~(\ref{eq:intconv}).
The IC layer contains $PQ$ filters arranged in banks ($P$ filters in $Q$ banks); each bank corresponds to an output dimension. The filters are applied to the input as follows,
\begin{equation}
\label{eq:gcnn}
f^\mathrm{out}_{q}(x) =
\sum_{p=1}^P (f^\text{in}_p \star a_{qp})(x),
\quad q = 1,\hdots, Q,
\end{equation}
where $a_{qp}(\theta,t)$ are the learnable coefficients of the $p$th filter in the $q$th filter bank.
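In the vectorized notation of the sketch following equation~(\ref{eq:intconv}), an IC layer is a single contraction over the input channels and the $(\theta,t)$ grid (illustrative only):
\begin{verbatim}
import numpy as np

def ic_layer(patches, A):
    # patches: (n, P, n_theta, n_t) patch-operator outputs of the P input maps;
    # A: (Q, P, n_theta, n_t) learnable filter banks.  Returns (n, Q).
    return np.einsum('npot,qpot->nq', patches, A)
\end{verbatim}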
\paragraph*{Softmax} layer is used as the output layer in a particular architecture employed for learning correspondence; it applies the softmax function
\begin{equation}
\label{eq:softmax}
f^\mathrm{out}_{p}(x) =
\sigma(f^\mathrm{in}_{p}(x)) =
\frac{e^{f^\mathrm{in}_{p}(x)}}{\sum_{p=1}^P e^{f^\mathrm{in}_{p}(x)}}
\end{equation}
to the $P$-dimensional input. The result is a vector that can be interpreted as a probability distribution.
\paragraph*{Dropout($\pi$)} layer~\cite{hinton2012dropout} is a fixed layer that injects binomial noise into each of the computational units of the network; it has been shown to be an effective remedy against overfitting.
During training, an i.i.d. binary mask $m_p \sim \mathrm{Bernoulli}(1 - \pi_\mathrm{drop})$ is generated for each input dimension, i.e. each element is $1$ with probability $1 - \pi_\mathrm{drop}$,
\begin{equation}
\label{eq:dropout_train}
f^\mathrm{out}_{p}(x) =
m_p f^\mathrm{in}_{p}(x).
\end{equation}
At test time, exact inference would require integrating over all possible binary masks.
However, it has been shown that rescaling the input by the keep probability of the layer,
\begin{equation}
\label{eq:dropout_test}
f^\mathrm{out}_{p}(x) =
(1-\pi_\mathrm{drop}) f^\mathrm{in}_{p}(x),
\end{equation}
is a good approximation in practice.
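The two regimes can be sketched as follows (a hedged NumPy illustration of the mask-at-training / rescale-at-test formulation above):
\begin{verbatim}
import numpy as np

def dropout(F_in, pi_drop, training, seed=0):
    if training:
        rng = np.random.default_rng(seed)
        mask = rng.random(F_in.shape[1]) >= pi_drop  # keep with prob. 1 - pi_drop
        return F_in * mask
    return (1.0 - pi_drop) * F_in                    # test-time rescaling

F = np.random.randn(6890, 512)
F_train = dropout(F, pi_drop=0.5, training=True)
F_test  = dropout(F, pi_drop=0.5, training=False)
\end{verbatim}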
\paragraph*{Batch normalization} layer is another fixed layer recently introduced in~\cite{ioffe2015} to reduce training times of very large CNN models.
It normalizes each mini-batch during stochastic optimization to have zero mean and unit variance, and then performs a linear transformation of the form
\begin{equation}
\label{eq:batch_norm}
f^\mathrm{out}_{p}(x) = \frac{f^\mathrm{in}_{p}(x) - \mu}{\sqrt{\sigma^2 + \epsilon}} \gamma + \beta
\end{equation}
where $\mu$ and $\sigma^2$ are, respectively, the mean and the variance of the data estimated on the training set using exponential moving average; $\epsilon$ is a small positive constant to avoid numerical errors.
After training, one can re-estimate the statistics on the test set or simply keep the training set estimates.
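A minimal NumPy sketch of the training-time computation (per-dimension batch statistics with learnable $\gamma$ and $\beta$; the running averages kept for test time are omitted) reads:
\begin{verbatim}
import numpy as np

def batch_norm(F_in, gamma, beta, eps=1e-5):
    mu  = F_in.mean(axis=0)                    # per-dimension batch mean
    var = F_in.var(axis=0)                     # per-dimension batch variance
    return gamma * (F_in - mu) / np.sqrt(var + eps) + beta

batch = np.random.randn(256, 128)              # a mini-batch of 256 samples
out = batch_norm(batch, gamma=np.ones(128), beta=np.zeros(128))
\end{verbatim}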
Overall, the ACNN architecture, combining several layers of different types, acts as a non-linear parametric mapping of the form $\mathbf{f}_\Theta(x)$ at each point $x$ of the shape, where $\Theta$ denotes the set of all learnable parameters of the network.
The choice of the parameters is done by an optimization process, minimizing a task-specific cost.
Here, we focus on learning shape correspondence.
\subsection{Learning correspondence}
Finding the correspondence in a collection of shapes can be posed as a labelling problem, where one tries to label each vertex of a given {\em query} shape $X$ with the index of a corresponding point on some common {\em reference} shape $Y$ \cite{rodoladense}. Let $n$ and $m$ denote the number of vertices in $X$ and $Y$, respectively.
For a point $x$ on a query shape, the output of ACNN $\mathbf{f}_\Theta(x)$ is $m$-dimensional and is interpreted as a probability distribution (`soft correspondence') on $Y$.
The output of the network at all the points of the query shape can be arranged as an $n\times m$ matrix with elements of the form $f_\Theta(x,y)$, representing the probability of $x$ mapped to $y$.
Let us denote by $y^*(x)$ the ground-truth correspondence of $x$ on the reference shape. We assume to be provided with examples of points from shapes across the collection and their ground-truth correspondence, $\mathcal{T} = \{ (x, y^*(x)) \}$.
The optimal parameters of the network are found by minimizing the {\em multinomial regression loss}
\begin{eqnarray}
\label{eq:corresp}
\ell_{\mathrm{reg}}(\boldsymbol{\Theta})
&=& -\sum_{(x, y^*(x))\in \mathcal{T}} \log \boldsymbol{f}_{\Theta}(x,y^*(x)),
\end{eqnarray}
which represents the Kullback-Leibler divergence between the probability distribution produced by the network and the groundtruth distribution $\delta_{y^*(x)}$.
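For concreteness, the loss can be written in a few lines of NumPy (a sketch, not our training code; \texttt{logits} denotes the network outputs before the softmax layer):
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def correspondence_loss(logits, y_true):
    # logits: (n, m) outputs for n training points,
    # y_true: (n,) ground-truth indices on the reference shape
    P = softmax(logits)                        # soft correspondence f_Theta(x, .)
    return -np.log(P[np.arange(len(y_true)), y_true] + 1e-12).sum()

loss = correspondence_loss(np.random.randn(80, 6890),
                           np.random.randint(0, 6890, size=80))
\end{verbatim}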
\subsection{Correspondence refinement}
\paragraph*{Full correspondence. }
The most straightforward way to convert the soft correspondence $f(x,y)$ produced by ACNN into a point-wise correspondence is by assigning $x$ to
\begin{equation}
\hat{y}(x) = \mathrm{arg}\max_{y\in Y} f(x,y).
\end{equation}
The value $c(x) = \max_{y \in Y} f(x,y) \in [0,1]$ can be interpreted as the {\em confidence} of the prediction: the closer the distribution produced by the network is to a delta-function (in which case $c=1$), the better it is.
In this paper, we use a slightly more elaborate scheme to refine the soft correspondences produced by ACNN. First, we select a subset of points $I = \{x : c(x) > \tau_\mathrm{th}\}$ at which the confidence of the predicted correspondence exceeds some threshold $\tau_\mathrm{th}$.
Second, we use this subset of corresponding points
to find a functional map \cite{ovsjanikov2012functional} between $L^2(X)$ and $L^2(Y)$ by solving the linear system of $|I|k$ equations in $k^2$ variables,
\begin{eqnarray}
\label{eq:fm}
\pmb{\Phi}_I\mathbf{C} &=& \pmb{\Psi}_I,
\end{eqnarray}
where
\begin{eqnarray}
\pmb{\Phi}_I &=& (\phi_1(x), \hdots, \phi_k(x)) \,\,\,\,\, x\in I, \\
\pmb{\Psi}_I &=& (\psi_1(\hat{y}(x)), \hdots, \psi_k(\hat{y}(x))) \,\,\,\,\, x\in I,
\end{eqnarray}
are the first $k$ Laplace-Beltrami eigenfunctions of shapes $X$ and $Y$, respectively, sampled at the subset of corresponding points (represented as $|I| \times k$ matrices).
The $k\times k$ matrix $\mathbf{C}$ represents the functional correspondence between $L^2(X)$ and $L^2(Y)$ in the frequency domain.
The parameters $\tau_\mathrm{th}$ and $k$ must be chosen in such a way that the system is over-determined, i.e. $|I| > k$.
Third, after having found $\mathbf{C}^*$ by solving~(\ref{eq:fm}) in the least-squares sense, we produce a new point-wise correspondence by matching $\pmb{\Phi}\mathbf{C}^*$ and $\pmb{\Psi}$ in the $k$-dimensional eigenspace,
\begin{equation}
y(x) = \mathrm{arg}\min_{y\in Y} \| (\phi_1(x), \hdots, \phi_k(x))\mathbf{C}^* - (\psi_1(y), \hdots, \psi_k(y)) \|_2.
\end{equation}
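The whole refinement step can be sketched as follows (a NumPy illustration under the assumptions above; for large meshes the final nearest-neighbor search should be chunked or done with a spatial index):
\begin{verbatim}
import numpy as np

def refine(F_soft, Phi, Psi, k=30, tau_th=0.5):
    # F_soft: (n, m) soft correspondence;
    # Phi: (n, >=k), Psi: (m, >=k) Laplace-Beltrami eigenfunctions
    y_hat = F_soft.argmax(axis=1)                        # hard matches
    conf  = F_soft.max(axis=1)                           # confidence c(x)
    I = conf > tau_th                                    # confident subset, |I| > k
    C, *_ = np.linalg.lstsq(Phi[I, :k], Psi[y_hat[I], :k], rcond=None)
    emb_X = Phi[:, :k] @ C                               # X expressed in Y's basis
    d = ((emb_X[:, None, :] - Psi[None, :, :k]) ** 2).sum(-1)
    return d.argmin(axis=1)                              # refined correspondence

n, m, kk = 200, 180, 40                                  # toy sizes
y = refine(np.random.dirichlet(np.ones(m), size=n),
           np.random.randn(n, kk), np.random.randn(m, kk), k=30, tau_th=0.0)
\end{verbatim}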
\paragraph*{Partial correspondence. }
A similar procedure is employed in the setting of partial correspondence, where instead of the computation of a functional map, we use the recently introduced {\em partial functional map} \cite{PFM2016}.
\section{Results}
\label{sec:results}
In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches on the FAUST \cite{FAUST} and SHREC'16 Partial Correspondence \cite{SHREC2016p} benchmarks.
\subsection{Settings}
\paragraph*{Implementation.}
Isotropic Laplacians were computed using the cotangent formula \cite{macneal1949solution,duffin1959distributed,Pinkall1993,meyer2003:ddg}; anisotropic Laplacians were computed according to~(\ref{eq:acotan}).
Heat kernels were computed in the frequency domain using all the eigenpairs.
In all experiments, we used $L=16$ orientations and the anisotropy parameter $\alpha = 100$.
Neural networks were implemented in Theano~\cite{bergstra2010theano}.
The ADAM~\cite{kingma2014} stochastic optimization algorithm was used with
initial learning rate of $10^{-3}$, $\beta_1=0.9$, and $\beta_2=0.999$.
As the input to the networks, we used the local SHOT descriptor \cite{SHOT} with $544$ dimensions and using default parameters.
For all experiments, training was done by minimizing the loss~(\ref{eq:corresp}).
The code to reproduce all the experiments in this paper, and the full framework will be released upon publication.
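For reference, a single ADAM update with these hyper-parameters can be sketched as follows (a generic NumPy illustration of the update rule, not our Theano code):
\begin{verbatim}
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

theta, m, v = np.zeros(10), np.zeros(10), np.zeros(10)
theta, m, v = adam_step(theta, np.random.randn(10), m, v, t=1)
\end{verbatim}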
\paragraph*{Timing.}
The following are typical timings for FAUST shapes with 6.9K vertices. Laplacian computation and eigendecomposition took $1$ second
and $4$ seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K CPU.
Forward propagation of the trained model takes approximately $0.5$ sec to produce the dense soft correspondence for all the vertices.
\begin{figure}[t!]
\begin{overpic}
[width=1\linewidth]{./filters.pdf}
\end{overpic}
\centering
\caption{ \label{fig:filters} Examples of filters in the first IC layer learned by the ACNN (hot and cold colors represent positive and negative values, respectively).
}
\vspace{-3mm}
\end{figure}
\subsection{Full mesh correspondence}
\paragraph*{Data.}
In the first experiment, we used the FAUST humans dataset \cite{FAUST}, containing $100$ meshes of $10$ scanned subjects, each in $10$ different poses. The shapes in the collection manifest strong non-isometric deformations. Vertex-wise groundtruth correspondence is known between all the shapes.
The zeroth FAUST shape containing $6890$ vertices was used as reference; for each
point on the query shape, the output of the network represents the soft
correspondence as a $6890$-dimensional vector which was then converted to point correspondence
with the technique explained in Section~\ref{sec:our}. The first $80$ shapes were used for training and the remaining $20$ for testing, following verbatim the settings of~\cite{masci2015shapenet}.
Batch normalization~\cite{ioffe2015} was used to speed up the training; we did not experience any noticeable difference in raw performance of the produced soft correspondence compared to un-normalized setting.
\begin{figure*}[t!]
\begin{overpic}
[width=0.975\linewidth]{./faust_geoerr.pdf}
\put(45,-2){\footnotesize Anisotropic CNN}
\put(44.5,29.1){\footnotesize Geodesic CNN}
\put(42,62){\footnotesize Blended Intrinsic Map}
\put(98.3,64){\tiny 0}
\put(98.3,77){\tiny 0.1}
\end{overpic}
\centering
\vspace{3mm}
\caption{ \label{fig:faust_geoerr} Pointwise geodesic error (in $\%$ of geodesic diameter) of different correspondence methods (top to bottom: Blended intrinsic maps, GCNN, ACNN) on the FAUST dataset. For visualization clarity, the error values are saturated at $10\%$ of the geodesic diameter. Hot colors correspond to large errors.
Note the different behavior of different approaches: BIM produces large distortions with very few accurate matches; GCNN produces many near-perfect matches but also many matches with large distortion; ACNN produces very few matches with large distortion and many near-perfect matches.
}
\end{figure*}
\paragraph*{Methods.}
Batch normalization allows us to effectively train larger and deeper networks; for this experiment
we adopted the following architecture inspired by GCNN~\cite{masci2015shapenet}: FC$64$+IC$64$+IC$128$+IC$256$+FC$1024$+FC$512$+Softmax.
Additionally, we compared our method to Random Forests (RF) \cite{rodoladense}, Blended Intrinsic Maps (BIM) \cite{kim2011blended}, Localized Spectral CNN (LSCNN) \cite{WFT2015}, and Anisotropic Diffusion Descriptors (ADD) \cite{ADD}.
\paragraph*{Results.}
Figure~\ref{fig:faust_plot} shows the performance of different methods. The performance was evaluated using the Princeton protocol \cite{kim2011blended}, plotting
the percentage of matches that are at most $r$-geodesically distant from the groundtruth correspondence on the reference shape. Two versions of the protocol consider intrinsically symmetric matches as correct (symmetric setting) or wrong (asymmetric, more challenging setting). Some methods based on intrinsic structures (e.g. LSCNN or RF applied on WKS descriptors) are invariant under intrinsic symmetries and thus cannot distinguish between symmetric points.
The proposed ACNN method clearly outperforms all the compared approaches and also perfectly distinguishes symmetric points.
Figure~\ref{fig:teaser} visualizes some correspondences obtained by ACNN using texture mapping. The correspondences show almost no noticeable artifacts.
Figure~\ref{fig:faust_geoerr} shows the pointwise geodesic error of different correspondence methods (distance of the correspondence at a point from the groundtruth). ACNN shows dramatically smaller distortions compared to other methods. Over $60\%$ of matches are exact (zero geodesic error), while only a few points have geodesic error larger than $10\%$ of the geodesic diameter of the shape.
\begin{figure}[t!]
\setlength\figureheight{5cm}
\setlength\figurewidth{\linewidth}
\input{./kim_FAUST.tikz}
\caption{\label{fig:faust_plot} Performance of different correspondence methods on FAUST meshes. Evaluation of the correspondence was done using the symmetric (solid) and asymmetric (dashed) Princeton protocol. }
\end{figure}
\subsection{Partial correspondence}
\paragraph*{Data.}
In the second experiment, we used the recent very challenging SHREC'16 Partial Correspondence benchmark \cite{SHREC2016p}, consisting of nearly-isometrically deformed shapes from eight classes, with different parts removed. Two types of partiality in the benchmark are {\em cuts} (removal of a few large parts) and {\em holes} (removal of many small parts). In each class, the vertex-wise groundtruth correspondence between the full shape and its partial versions is given.
The dataset was split into training and testing disjoint sets. For cuts, training was done on 15 shapes per class; for holes, training was done on 10 shapes per class.
\paragraph*{Methods.}
We used the following ACNN architecture: IC$32$+FC$1024$+DO($0.5$)+FC$2048$+DO($0.5$)+Softmax. The dropout regularization, with $\pi_\mathrm{drop} = 0.5$, was crucial to avoid overfitting on such a small training set.
We compared ACNN to RF \cite{rodoladense} and Partial Functional Maps (PFM) \cite{PFM2016}.
For the evaluation, we used the protocol of \cite{SHREC2016p}, which closely follows the Princeton benchmark.
\paragraph*{Cuts.}
Figure~\ref{fig:shrec_cuts} compares the performance of different partial matching methods on the SHREC'16 Partial (cuts) dataset. ACNN outperforms other approaches with a significant margin.
Figure~\ref{fig:horse_partial} (top) shows examples of partial correspondence on the horse shape as well as the pointwise geodesic error (bottom). We observe that the proposed approach produces high-quality correspondences even in such a challenging setting.
\begin{figure}[t!]
\setlength\figureheight{5cm}
\setlength\figurewidth{\linewidth}
\input{./kim_cuts.tikz}
\caption{\label{fig:shrec_cuts}Performance of different correspondence methods on SHREC'16 Partial (cuts) meshes. Evaluation of the correspondence was done using the symmetric Princeton protocol. }
\end{figure}
\begin{figure}[t!]
\setlength\figureheight{5cm}
\setlength\figurewidth{\linewidth}
\input{./kim_holes.tikz}
\caption{\label{fig:shrec_holes}Performance of different correspondence methods on SHREC'16 Partial (holes) meshes. Evaluation of the correspondence was done using the symmetric Princeton protocol. }
\end{figure}
\begin{figure*}[t!]
\vspace{-1mm}
\begin{overpic}
[width=1\linewidth]{./horse_partial.pdf}
\put(50,-1){\footnotesize Random Forest}
\put(49,26){\footnotesize Anisotropic CNN}
\put(98.1,28){\tiny 0}
\put(98.1,38){\tiny 0.1}
\end{overpic}
\centering
\vspace{1.5mm}
\caption{ \label{fig:horse_partial}Examples of partial correspondence on the horse shape from the SHREC'16 Partial (cuts) dataset. First row: correspondence produced by ACNN. Corresponding points are shown in similar color. Reference shape is shown on the left.
Second and third rows: pointwise geodesic error (in $\%$ of geodesic diameter) of the ACNN and RF correspondence, respectively. For visualization clarity, the error values are saturated at $10\%$ of the geodesic diameter. Hot colors correspond to large errors.}
\end{figure*}
\paragraph*{Holes.}
Figure~\ref{fig:shrec_holes} compares the performance of different partial matching methods on the SHREC'16 Partial (holes) dataset. In this setting as well, ACNN outperforms other approaches with a significant margin.
Figure~\ref{fig:dog_holes} (top) shows examples of partial correspondence on the dog shape as well as the pointwise geodesic error (bottom).
\begin{figure*}[t!]
\vspace{-1mm}
\begin{overpic}
[width=1\linewidth]{./dog_holes.pdf}
\put(50,-2.25){\footnotesize Random Forest}
\put(49,20.5){\footnotesize Anisotropic CNN}
\put(98.8,23.75){\tiny 0}
\put(98.8,34){\tiny 0.1}
\end{overpic}
\centering
\vspace{1.5mm}
\caption{ \label{fig:dog_holes}Examples of partial correspondence on the dog shape from the SHREC'16 Partial (holes) dataset. First row: correspondence produced by ACNN. Corresponding points are shown in similar color. Reference shape is shown on the left.
Second and third rows: pointwise geodesic error (in $\%$ of geodesic diameter) of the ACNN and RF correspondence, respectively. For visualization clarity, the error values are saturated at $10\%$ of the geodesic diameter. Hot colors correspond to large errors.}
\end{figure*}
\section{The model and the physical picture} \label{modelphyspicture}
\noindent The FMO protein complex consists of seven
chlorophyll molecules in a protein cavity.
There is an eighth bacteriochlorophyll molecule outside the cavity
whose interaction with the other seven molecules can be neglected.
The first two laser pulses in the 2D spectroscopic experiment, delayed by only 20 fs from each other,
excite two excitons and lead to their coherent superposition, which results
in the beating signal.
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.4}{\includegraphics*{fig1-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Model for studying the quantum coherent behaviour of a complex of bacteriochlorophylls (FMO complex)
in a protein scaffold in the green sulfur bacteria. The protein scaffold provides
the carbon atom in the carbonyl $CO$ fragment to capture the nucleophile, the exciton electron,
in a transient negative ion resonance which relaxes and couples with phonon modes in the protein
and excites local rocking and wagging motion of the carbonyl $CO$ fragment.
The entanglement with gravonons is effective within the NIR, but as it develops
slowly compared to the coupling with the phonons,
its effect during the tiny burst, induced by the double laser pulse, is neglected in the calculations.
In this schematic drawing the gravonon spectrum is plotted on a scale
magnified by a factor $10^8$ and differs from the scale of
the other spectra.
\label{twooscillators2}} \end{figure}
The time between the first two laser pulses,
i.e. the coherence time $\tau$, in different 2D spectroscopic experiments
is of the order of 20 to 500 fs
and the duration of each pulse is 15-40 fs.
These times are very short compared
to the time needed to establish the entanglement with the gravonons
(representing local version of the gravitons),
the environmental excitations
we take into account in addition to the excitations
considered in other theories like phonons and electron-hole pairs.
Therefore for the
purposes of our theory the first two laser pulses are regarded as one double pulse.
The third laser pulse is applied after a waiting time varying from 0 fs to 1000-1500 fs
and serves to stimulate the emission of the photon, which is recorded
as the photon echo after the rephasing time $t$. \\
\noindent The physical picture we describe is the following.
The exciton electron wave packet is diffuse, penetrating in the protein scaffold,
where, as a nucleophile in a nucleophilic-like reaction, the exciton electron attacks
the positively polarized carbon atom of the carbonyl $CO$ fragment.
The electron can be transiently accommodated in the $3s$-affinity like orbital on the carbonyl carbon atom
creating a transient negative ion resonance (NIR).
Thus the electron penetrates in the carbon core region where it interacts gravitationally with the atom core.
At interatomic distances the strength of the gravitational interaction is enhanced
by many orders of magnitude, if hidden extra dimensions exist.
In the spirit of Einstein's general relativity
the gravitational interaction between masses deforms spacetime and we introduce
a basis state called warp resonance for the electron (fig. \ref{twooscillators2}),
where it entangles with the gravitons
(the messenger particles of the gravitational field).
Locally a soft gravonon structure is generated, which is constructed as a
correlated motion of 3-4 atoms leading to the warping.
The soft gravonons in our theory are the solution of interacting quantum harmonic oscillators
centered on the 3-4 atoms close to the NIR. For more details on the theory
concerning the generation of the gravonons see refs. \cite{dice2014,foundpaper}.\\
\noindent The electromagnetic field acts as an external force on the carbon atom
and, together with the thermal fluctuations, shifts it slightly into a different position,
creating a local deformation in the protein chain.
This is no wonder since
the local vibrations of the atoms in the carbonyl $CO$ fragment are different if the carbon atom has
captured transiently the electron or not. The local deformation of the
protein chain occurs around the NIR, it is associated with slight increase in energy, and is
described by introducing a basis state called deformation resonance for the core motion of
the carbonyl fragment. Furthermore excitation of
vibrational modes like the rocking and wagging motion of the
carbonyl $CO$ fragment is possible which we describe by coupling the deformation resonance to
two local harmonic oscillators, i.e. to local vibrational modes (fig. \ref{twooscillators2}). \\
\noindent In the deformation resonance coupling to the extended phonons of the protein backbone
is also effective, leading to phonon excitation and energy dissipation from the local region
(fig. \ref{twooscillators2}).
If phonons are excited, the exciton electron can decay from its localized state in the NIR into delocalized
electron states, called 'escape states', because the total energy has to be conserved.
Electron states delocalized over the chlorophyll assembly with
the $3s$-affinity like orbital of the carbonyl carbon atom as component play this role.
It is within the deformation resonance that coupling
with the phonons of the protein backbone is taken into account and energy dissipation
from the local region in the protein backbone via phonon excitation may occur.
However, due to the interaction between the
deformation resonance and the local rocking and wagging modes of the carbonyl $CO$ fragment,
reflection to and fro between the local states occurs. Therefore, despite the dissipation
into the protein backbone phonons, there remains a local non-zero electron component
in the total wave packet, which explains the finite non-zero value
of the photon signal in the 2D spectroscopic experiment when the deexcitation of the exciton electron is
stimulated by the third laser pulse.\\
\noindent
The gravitational interaction of the exciton electron with the carbon atom core
has to be in high spacetime dimensions ($11D$) as string theory
assumes. Only then gravity is strong enough at small distances and decays fast with distance
($r^{-8}$ power law and nearly 33 orders of magnitude higher value of the gravitational constant in $11D$).
The additional 7 spatial dimensions are compactified and are hidden,
so that there is no discrepancy with Newton's gravitational law at distances where it was proved valid.
Within the warped space the time development of the
entanglement of the electron with gravonons
leads to the beables.\\
\noindent
Beables are mathematically precisely defined as a set of configurations
in the expansion of the wave function containing a localized matter field
and excited local gravonons.
Loosely speaking beables
are matter fields (e.g. atoms or electrons) in $3D$ space, entangled
with gravonons, which interact with
other atoms gravitationally and generate the gravonon structure in high
dimensional spacetime.
Therefore localized atoms, molecules or electrons in $3D$ space
entangled with gravonons in high dimensions in the form of beables
exist as long as the beables and the entanglement with the gravonons
exist.\\
\noindent
The word "beable" has been coined by John Bell \cite{bell}
as a terminology against the word observable: "The concept of
'observable' lends itself to very precise mathematics when identified
with 'self-adjoint operator'. But physically, it is a rather woolly
concept. It is not easy to identify precisely which physical processes
are to be given the status of 'observations' and which are to be
relegated to the limbo between one observation and another.'' \\
\noindent
We use the expression beable in the sense of John Bell's {\em local}
beable. According to our proposal
signals in experiments in $3D$ space can be received only via beables
and hence measurements are tied to beables \cite{foundpaper}.
Expressed with John Bell's words: "One of the apparent non-localities
of quantum mechanics is the instantaneous, over all space, 'collapse
of the wave function' on 'measurement'. But this does not bother us if
we do not grant beable status to the wave function."\\
\noindent
Already in 1927 it was revealed that an interference pattern shows up
on a photographic plate only when the number of photons falling on the
plate is very large \cite{dempter-batho}.
The history of photon detection is nicely reviewed in ref. \cite{dempkhalili}.
Here the results of Dempster and Batho are summarized in such a way that
when, during
the so called 'collapse of the wave function', the photon is destroyed,
there appears somewhere on the photographic plate an atom of elemental silver
which will act as an embryo from which, by photographic development, a small
seed of silver will grow. The silver embryo is much smaller than the electromagnetic
wavelength and constitutes the beable in our picture.\\
\noindent
Applied to the spectroscopic method used in the bacteria experiment investigated here
this means that the photon of the laser pulse occupies during the interference
process at least the whole volume of the
cavity embraced by the protein scaffold, but if the photon is measured and hence destroyed,
there appears a beable (embryo) formed in the detection devices of the experimenters.\\
\noindent
The described
mechanism of the so called collapse of the wave function has by now
also been established for matter fields \cite{arndt1}-\cite{arndt3}. For an electron in
an excitonic state of the chlorophyll it means that the wave function
of this electron occupies during the interference process the whole volume
of the chlorophyll, but when it hits the protein scaffold,
which acts as a kind of screen, the electron becomes localized in a tiny volume
of the size of the silver embryo, this being the carbon affinity level of the
carbonyl group in the model developed here. \\
\noindent
Before the laser pulse is sent in, the electron will almost certainly be
in such a beable. The laser pulse then initiates an interference process
where the many particle wave function describing photon, electron,
phonon etc. extends over the whole volume embraced by the protein scaffold.
This many particle wave function is not a beable, but, in John Bell's
terminology, rather a 'limbo state'. As time passes on a matter field will
somewhere entangle with gravonons and form a beable which
destroys the photon. If beforehand the photon 'localizes' in a beable
in the detection devices of the experimenters, it will do so with a probability proportional
to the light intensity in the system.
\subsection {Chooser mechanism, beables created with the chooser mechanism}
\noindent The question is: why does the exciton electron become
localized on a single carbonyl carbon atom of a single carbonyl $CO$ fragment in the protein backbone?
Quantum particles, e.g. the exciton electron,
may become localized via the entanglement with gravonons in the form of beables.
However, as the gravitational interaction even in high dimensional
spacetime is weak, entanglement with gravonons can be effective on the energy shell alone.
In the present model the exciton electron is localized
because via the entanglement to gravonons a beable is generated
by the local and strongly distant dependent interaction with the
gravonons. In the case of on-shell coupling even a very weak coupling
with the gravonons is effective. Just a few or a single carbonyl carbon atom in the protein
chain provide the condition for on-shell coupling with the initial exciton electron wave packet.
This is the chooser mechanism for the
site of most favourable entanglement with gravonons.
The chooser mechanism is responsible for the appearance of stuck
particles on single sites on the detection screen in double slit experiments \cite{doubleslit}.\\
\noindent The exciton is a diffuse object. Its overlap with all carbonyl $CO$ fragments
in the protein backbone would mean that the electron might be thought
to reside on all carbonyl carbon atoms in $CO$ sites at the same time and
would couple to all degrees of freedom all over the protein chain.
Then decay in the environmental degrees of freedom
resembles decoherence and is commonly predicted to be very fast.
This is, however, not the case in our theory
because the chooser mechanism selects a single carbonyl carbon atom on a single $CO$ site where
the condition for degeneracy coupling with the gravonons is fulfilled,
i.e. where the chooser mechanism leading to the transient localization of the electron works.
The electron can go to only a single carbonyl carbon atom in a single $CO$ site,
chosen by the degeneracy coupling criterion.\\
\noindent Beables are destroyed when the entanglement with the gravonons
is changed or truncated. This occurs as a tiny burst in the time development of the beable
caused for instance by the laser pulses
(fig. \ref{beable}). In the tiny burst the quantum particles (atoms, molecules, electrons)
are released from the entanglement with the gravonons and they burst out of the beable into $3D$ space
in a 'limbo state'. All states where particles are not entangled to gravonons
we call limbo states. \\
\noindent
Within the beable the electron is localized in
the warp resonance and can be measured.
As the lifetime of the gravonons in the
high hidden dimensions expires, the entanglement
of the localized electron with the gravonons is truncated and the electron is released in
$3D$ space. The electron bursts out of the entanglement with the
gravonons. \\
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.4}{\includegraphics*{fig2-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Time development of the entanglement of a quantum particle, e.g.
the exciton electron, with the gravonons,
generating the beable. Within the beable the electron is localized in
the warp resonance and can interact with the photon field, i.e. it can be measured.
As soon as the lifetime of the gravonons in the
high hidden dimensions expires, the electron bursts out of the entanglement with the
gravonons and is released in $3D$ space. This is the tiny burst in the figure.
It exists for a short time before the entanglement with the gravonons is
reestablished, which takes $5\cdot 10^{-9}-10^{-8}$ s. The inset shows the
bursting of the exciton electron out of the beable in three dimensional space
(in the warp resonance and the excitons in the cavity) during the tiny burst.
\label{beable}} \end{figure}
\noindent To demonstrate the destruction and re-creation of the beable
the Hamiltonian, described in section \ref{totalhamilt}, is diagonalized
without the term $H_{el-grav}$. Then this term is transformed to the
new diagonalized basis and the time dependent Schr\"odinger
equation (TDSE) is solved as described in section \ref{totalhamilt}.
The tiny burst can be seen in fig. \ref{beable}.
It exists for a short time before the entanglement with the gravonons is
reestablished which takes $5\cdot 10^{-9}-10^{-8}$ s.
Within the tiny burst experimentally nothing can be recorded
about the electron. This situation also arises in other experiments:
in double slit diffraction experiments with electrons and molecular beams
we know nothing about the quantum particles during their time of flight between the
source and the detection screen; in laser-induced desorption we know
nothing about the adsorbate for the time of flight between the
state with the adsorbate as a beable on the substrate surface and in the mass spectrometer,
where a new beable is created.\\
\noindent The effect of the double laser pulse in experiment
is to induce a tiny burst in the time development
since the laser pulse creates an electromagnetic field,
which exerts a force on the localized particle within a beable.
Slight shifts of the atom positions mean changed local
gravonon structure and destroyed coupling to the gravonons
(very short ranged), hence the beable is destroyed.
For instance, the double laser pulse in the 2D spectroscopic experiment
interacting with the electric dipole due to the localized electron on the carbon atom,
induces a slight shift of the carbon atom
and rocking and wagging motion of the carbonyl $CO^-$ fragment in the protein. This would cause
such a drastic change in the gravonon structure, typical for the beable
configuration existing before the laser induced tiny burst, that the entanglement
with the gravonons is destroyed. This is how the
gravonon structure and the beable are destroyed by the laser pulses.
As the entanglement with the gravonons develops in a period of time of the order of $10^{-8}$ s,
in the laser-pulse-induced tiny burst the electron is released
from the entanglement with the gravonons, however, its
coupling with other excitations, living in 4 dimensional spacetime
like phonons and the vibrations of the local oscillators, gains weight.
Although experimentally an electron, a photon or a molecule
released from a beable cannot be directly detected during the tiny burst, we can
calculate the time development further with the time dependent Schr\"odinger equation.
This is where in our theory we start a new time development: the electron is localized
in a transient NIR on a carbonyl carbon atom in a single carbonyl
$CO$ fragment, the photon is in the protein cavity, no gravonons are involved.\\
\noindent During the tiny burst
the exciton electron is no more entangled with the gravonons, it is
released in a limbo state in $3D$ space,
therefore cannot be directly accessed experimentally.
But everything in the 2D spectroscopic experiment
occurs during the tiny burst induced by the double laser pulse.
This situation lasts until the entanglement with gravonons can develop
again, giving rise to the beables. During the lifetime of the beable
the exciton electron, which initially is a delocalized electron wave packet, remains localized
transiently on the carbonyl carbon atom.
Within this physical picture all measurable phenomena
originate effectively from a very local region which then expands in $3D$ space.
The chooser mechanism for the generation of the beables
and the concept of the tiny burst due to the laser pulse
indicate that what is measured in experiment has to be interpreted in a local picture
concerning just a single
carbonyl $CO$ fragment in the protein backbone, which couples to no more than
two local harmonic oscillators and the phonon excitations in the protein backbone. If more
local harmonic oscillators were excited, which is possible only in the absence of the
chooser mechanism, the result would be indeed fast
decoherence of the photon beating signal.
This is, however, not measured in experiment. \\
\section{ The Hamiltonian} \label{totalhamilt}
\noindent The quantum fields in the model are:
(i) electron;
(ii) photon;
(iii) gravonons;
(iv) core movement field of the carbonyl $CO$ fragment;
(v) phonons.
The total Hamiltonian, the gravonons taken into account as well, includes terms describing
these fields and the many-particle interactions with the electron field:
\begin{eqnarray}
H_{FMO}&=&H_{el}+H_{phot}+H_{grav}+H_{CO}+H_{phon} \nonumber \\
&+&H_{el-phot}+H_{el-grav}+H_{el-CO}+H_{el-phon}.
\end{eqnarray}
The Hamiltonian for the electron includes single electron terms
for exciton 1 within the chlorophyll assembly $\mid e \rangle$,
exciton 2 in the chlorophyll assembly $\mid e' \rangle$,
exciton 1 in the warp resonance $\mid w \rangle$,
electron in the deformation resonance $\mid v \rangle$,
electron in delocalized 'escape states' $\{ \mid k\rangle \}$,
the interaction of exciton 1 with the warp resonance and the interaction between the two excitons:
\begin{eqnarray} \label{electronhamilt}
H_{el}&=&E_ec_e^+c_e+E_{e'}c_{e'}^+c_{e'} + E_{w}c_{w}^+c_{w}
+E_vc_v^+c_v+\sum_k E_kc_k^+c_k \nonumber \\
&+& W_{e}(c_w^+c_{e}+c_{e}^+c_w) + W_{ee'}(c_e^+c_{e'}+c_{e'}^+c_e).
\end{eqnarray}
The Hamiltonian for the photon is:
\begin{equation}
H_{phot}=\omega_{phot}a^+a
\end{equation}
and for the non-perturbed gravonons:
\begin{equation}
H_{grav}=\sum_{\kappa}^{}\varepsilon_{\kappa}\zeta^+_{\kappa}\zeta_{\kappa}.
\end{equation}
The Hamiltonian for the core movement of the carbonyl $CO$ fragment in the deformation resonance
(initial state and excited states) includes also the two local oscillators, describing the
wagging and rocking motion of the carbonyl group, and their interaction:
\begin{eqnarray}
H_{CO}&=&\epsilon_od_o^+d_o+ E_dd^+d
+\sum_{i=1,2;n_i=0}^{\infty} n_i\omega_{i} d_{n_i}^+d_{n_i} \nonumber \\
&+&W_d \sum_{i=1,2;n_i=0}^{\infty}(d^+d_{n_i}+d_{n_i}^+d).
\end{eqnarray}
The Hamiltonian for the phonons is:
\begin{equation}
H_{phon}=\sum_p\omega_{p}b^+_p b_p.
\end{equation}
The many particle electron - photon interaction, giving rise to an exciton polariton, is
described by the Hamiltonian:
\begin{equation}
H_{el-phot}=V_g(c_e^+c_{e'}+c_{e'}^+c_e)(a^++a)
\end{equation}
and the many particle electron - gravonon interaction is described by the Hamiltonian:
\begin{equation}
H_{el-grav}= Y_wc_w^+c_w \sum_{\kappa}(\zeta_{\kappa}^+ \zeta_o+\zeta_o^+\zeta_{\kappa}).
\end{equation}
The many particle interaction between the electron in the deformation resonance and
the core movement of the carbonyl group includes interaction
terms with the core movement states in the deformation resonance and in the local oscillators:
\begin{eqnarray} \label{h-el-co}
H_{el-CO}&=& V_{core-def}(c_w^+c_v +c_v^+c_w)(d_o^+d+d^+d_o) \nonumber \\
&+&V_{core-loc}c_v^+c_v\left( \sum_{i=1,2;n_i=0}^{\infty}d^+d_{n_i}+
\sum_{i=1,2;n_i=0}^{\infty}d_{n_i}^+d\right),
\end{eqnarray}
giving rise to the vibronic states.
The many particle electron - phonon interactions are contained in the term:
\begin{equation} \label{h-el-phonon}
H_{el-phon}= Y_p \sum_{k}(c_v^+c_k +c_k^+c_v)\sum_{p}(b_p^++b_p).
\end{equation}
\noindent The meaning of the symbols in eqs. (\ref{electronhamilt}-\ref{h-el-phonon}) is
as follows:\\
\noindent
$E_e, E_{e'}$ are single particle energies of exciton 1 and exciton 2 in the protein cavity and
$E_{w}, E_v, E_k$ denote the electron energy in the warp resonance $\mid w \rangle$
(localized at a single carbonyl carbon atom in one carbonyl $CO$ fragment in the protein backbone),
in the deformation resonance $\mid v \rangle$
and in the escape states $\{ \mid k \rangle \}$.
\noindent
$c_e^+,c_e,c_{e'}^+,c_{e'},c_w^+,c_w,c_v^+,c_v,c_k^+,c_k$ are the creation and destruction operators
for the electron in the respective single electron states. \\
\noindent
$W_e$ is the interaction strength between exciton 1 and the warp resonance;
$W_{ee'}$ is the interaction strength between exciton 1 and exciton 2.\\
\noindent
$\omega_{phot},a^+,a$ are used for the photon energy and photon creation and annihilation operators. \\
\noindent
$\varepsilon_{\kappa},\zeta^+_{\kappa},\zeta_{\kappa}$ are the energy and
creation and annihilation operators
for the gravonons, which are treated in the quantum harmonic approximation.
$\omega_{grav}(\beta^+\beta + \frac{1}{2})\equiv \sum_\kappa \varepsilon_{\kappa}\zeta_{\kappa}^+\zeta_{\kappa}$
with $\varepsilon_{\kappa}=\omega_{grav}(n_{\kappa}+\frac{1}{2})$ is the parabola
representation of the gravonons. \\
\noindent
$\epsilon_o, d^+_o,d_o$ are used for the energy, the
creation and annihilation operators for the initial core movement state
$\mid d_o \rangle$ of the carbonyl $CO$ fragment in the deformation resonance.
$E_d, d^+,d$ refer to the energy, creation and annihilation operators for the excited core movement state
$\mid d \rangle$ of the carbonyl $CO$ fragment in the deformation resonance.\\
\noindent
$n_i$ denotes the number of the energy level in the $i$-th local oscillator ($i=1,2$) with energy
$\omega_{i}$ and creation and annihilation
operators $d^+_{n_i},d_{n_i}$ of vibrations in the $n$-th energy level of the $i$-th local oscillator. \\
\noindent
$W_d$ is the coupling strength between the vibrations of the carbonyl group
$CO$ in the deformation resonance and the local oscillators.\\
\noindent
$\omega_{p}$ is the energy of the non-perturbed phonons in the protein backbone,
with $b^+_p,b_p$ the creation and annihilation operators for phonons in the protein backbone.\\
\noindent
$V_g$ is the electron-photon coupling strength in the cavity, creating the polariton. \\
\noindent
$Y_{w}$ is the coupling strength between the electron in the warp resonance $\mid w \rangle$ and
the gravonons $\{ \mid \kappa \rangle \}$.\\
\noindent
With $V_{core-def}$ the interaction strength between the electron and the
core movement state of the carbonyl group $CO$
in the deformation resonance is denoted. \\
\noindent
$V_{core-loc}$ is the interaction strength between the electron in the deformation resonance
and the core movement states of the carbonyl group $CO$ in the local oscillators.\\
\noindent
$Y_{p}$ refers to the coupling strength between the electron and the phonons
$\{ \mid p \rangle \}$ in the protein backbone.\\
\noindent
The last term eq. (\ref{h-el-phonon}) shows that when a phonon is excited in the protein backbone
the electron has to decay from a local state in the deformation resonance
in a delocalized electron escape state. This warrants the energy conservation
required by Schr\"odinger's equation. \\
\noindent The second term in eq. (\ref{h-el-co}) and the term in eq. (\ref{h-el-phonon})
are responsible for the quantum beats, their exponential damping
and the non-zero finite value of the photon signal
produced by the stimulated electron deexcitation.
The second term in $H_{el-CO}$ (eq. \ref{h-el-co}) describes
the interaction between
the electron in the deformation resonance of the carbonyl $CO$ fragment and
the local oscillators. The term eq. (\ref{h-el-phonon})
is the interaction of the electron with the delocalized phonons, excited in the protein backbone.
The second interaction term in eq. (\ref{h-el-co})
takes care of preserving the weight of local configurations
with the electron involved and is responsible for the finite final amplitude
of the photon signal, resulting from electron
deexcitation, stimulated by the 3rd laser pulse.
The interaction term eq. (\ref{h-el-phonon}) takes care
of the entanglement with the protein phonons. This is the dissipative term,
which leads to the attenuation of the weight of electron configurations in the
local region and causes the exponential damping of the amplitude of the
stimulated photon emission signal with time. \\
\noindent The method of solving Schr\"odinger's equation for
the time development of the total wave packet
has been described in ref. \cite{foundpaper}.
It uses a basis consisting of many particle configurations (the electron, the photon,
gravonons, core movement of the carbonyl $CO$ fragment, protein phonons)
in a configuration interaction (CI) approximation.
The local many particle basis includes
notations for all fields (with gravonons and phonons in their initial states). For instance:
$\mid e,1,0,0,0\rangle$, $\mid e',0,0,0,0\rangle$, $\mid w,1,0,0,0\rangle$, $\mid v,1,0,d,0\rangle$,
where in the first position the electron basis function is denoted,
in the second place the photon state,
in the third place: the gravonon state,
in the fourth place: core movement state of the carbonyl $CO$ fragment and
in the fifth place: the protein backbone phonon state.\\
\noindent The specific feature of the theoretical approach
applying to the green sulfur bacteria is the choice of the matrix elements.
The energy difference between exciton 1 and exciton 2 equals 22 meV
and is of the order of magnitude determined experimentally \cite{caram}
and used in other approaches \cite{ishizaki2}. For the interaction matrix element between the two
excitons we use 20 meV, and 12 meV for the coupling between the exciton and
the warp resonance. \\
\noindent After diagonalizing to first order in the basis the time
dependent Schr\"odinger equation is solved:
\begin{equation} \label{psioft}
\mid\Psi (t)\rangle = e^{-iH_{FMO}t}\mid \Psi(0)\rangle.
\end{equation}
As described in section \ref{modelphyspicture} we solve the TDSE twice. In the first calculation
the initial state $\mid \Psi(0)\rangle$ is an extended exciton state
which develops into a localized beable. In the second calculation
the initial state $\mid \Psi(0)\rangle$ has the exciton electron in the beable state
($3s$-affinity orbital of the carbonyl carbon atom) as described in section \ref{modelphyspicture}. \\
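\noindent For illustration, eq. (\ref{psioft}) can be evaluated by diagonalizing the CI Hamiltonian
once and re-assembling the phase factors in the configuration basis. The following Python sketch
uses a random placeholder Hamiltonian (energies in meV, times in fs) and serves only to make the
propagation procedure explicit; it is not the production code used for the figures:
\begin{verbatim}
import numpy as np
HBAR = 658.2119569                      # meV * fs

def propagate(H, psi0, times):
    # H: (N, N) Hermitian CI Hamiltonian, psi0: (N,) initial configuration weights
    E, V = np.linalg.eigh(H)            # diagonalize once
    c0 = V.conj().T @ psi0              # expand psi0 in the eigenbasis
    phases = np.exp(-1j * np.outer(times, E) / HBAR)
    return (phases * c0) @ V.T          # psi(t) for every requested time

N = 50                                  # toy number of configurations
A = np.random.randn(N, N); H = (A + A.T) / 2
psi0 = np.zeros(N); psi0[0] = 1.0       # start in a single configuration
psi_t = propagate(H, psi0, np.linspace(0.0, 1000.0, 200))
\end{verbatim}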
\noindent The single particle basis states are:\\
\noindent (i) the electron in the excitons $\mid e \rangle$,
$\mid e' \rangle$ and in the warp resonance $\mid w \rangle$,
in the deformation resonance $\mid v \rangle$ and in the escape states $\{ \mid k \rangle\}$;\\
\noindent (ii) the photon: 1: with photon in the protein cavity, 0: no photon;\\
\noindent (iii) gravonons: 0: for initial state of the gravonon continuum, $\{ \kappa \}$ for excited gravonons; \\
\noindent (iv) the carbonyl $CO$ group in its vibrational state in the deformation resonance:
(0 for initial vibrational state; $d$ in an excited vibrational state) and
the carbonyl $CO$ group in the wagging or rocking vibrational state,
described by the local oscillators
($\omega_i$ is the energy of the $i$-th local oscillator in its $n$-th state $\mid d_{n_i} \rangle$);\\
\noindent (v) the protein backbone phonon state: 0 for protein phonons in their initial state,
$\{ p \}$ for excited protein phonons. \\
\subsection {Vibrational energy in the local oscillators}
\noindent An estimate of the vibrational excitation energy of the rocking and wagging motions of the carbonyl
group in the protein backbone of the order of $\omega\approx 5$ meV
is based on the experimental data in ref. \cite{engeletal}.
With this estimate at thermal energy
of approximately $k_BT=11$ meV at $T=125$ K two local
vibrations can be excited each with $\omega_1=\omega_2 =\omega \approx 5$ meV,
so that $2\omega \approx 10$ meV corresponds approximately to the thermal energy at 125 K.
The rocking and wagging motions of the carbonyl group are much softer
than its vibration with respect to the protein chain, hence 5 meV for the quantum of
these vibrations appears physically plausible. \\
\noindent The energy of a local vibration can be neither 4 meV nor 6 meV because these values
contradict the experimental curves for 77 K, 125 K and 150 K.
The estimate for the energy of the local mode
$\omega \approx 5$ meV, based on the 2D Fourier transform spectroscopy experiment, corresponds to
the thermal energy $k_BT$ at $T\approx 58$ K.
At $T=125$ K and $T=150$ K the two experimental curves coincide. If at
$T=125$ K two quanta are excited, then at $T=150$ K it is not possible to excite 3 quanta;
hence 2 excited quanta would explain the coinciding curves in experiment.
The two quanta $2\omega$ should correspond to $T=125$ K and
$k_BT\approx 11$ meV, hence $\omega \approx 5$ meV.
$\omega$ cannot be 4 meV ($T\approx 46$ K) because then at $T=150$ K three
vibrational quanta will be excited, whereas at $T=125$ K only two can be excited
and the two curves will not coincide.
$\omega$ cannot be 6 meV ($T\approx 70$ K) either because at $T=125$ K, $k_BT\approx 11$ meV,
just a single local vibration will be excited, exactly as at $T=77$ K. The two
curves at $T=77$ K and $T=125$ K would coincide. This is, however, not observed experimentally. \\
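\noindent This counting argument can be checked numerically with a few lines of Python
(a hedged illustration using $k_B \approx 0.086$ meV/K; the candidate quanta 4, 5 and 6 meV
are those discussed above):
\begin{verbatim}
K_B = 0.08617                                   # meV / K
for omega in (4.0, 5.0, 6.0):
    quanta = {T: int(K_B * T // omega) for T in (77, 125, 150)}
    print("omega =", omega, "meV -> excitable quanta:", quanta)
# only omega = 5 meV gives one quantum at 77 K and two quanta at both
# 125 K and 150 K, i.e. coinciding curves at 125 K and 150 K only
\end{verbatim}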
\subsection {Protein phonon band}
\noindent The protein phonon band width is of the order of 0.03 eV.
This follows from the following argument. Assume a periodic constant of
a simple one dimensional protein of the order of $a\approx 10$ bohr, then the maximal
wave vector $k_{max}$
is of the order of $k_{max}=\frac{\pi}{a}\approx 0.3$ bohr$^{-1}$. The energy corresponding to
the maximal $k$-vector is $\omega_{p,max}=vk_{max}=0.027$ eV
(using for the sound velocity in dry air $v=330$ m/s$=6.6 \times 10^{12}$ bohr/s).
This estimate refers to the transversal phonon modes. \\
\section {Results for the time development of the tiny burst} \label{results}
\noindent We assume that the exciton electron is in a beable before
the laser pulses occur, because the bacterium exists in reality and therefore
cannot be in a limbo state. Reality consists of beables only.\\
\noindent The results of the calculation leading to the beable have been
presented in fig. \ref{beable}. We now discuss what happens after the
beable has been destroyed by the laser pulse.\\
\noindent The first laser pulse in the 2D spectroscopic experiment excites
the FMO complex electronically producing the first exciton delocalized in the protein cavity.
The second laser pulse excites the second exciton and a coherent
superposition between the two \cite{fleming}.
If no interaction with the environment exists the coherent oscillations between
exciton 1 and exciton 2 will continue, the amplitude of each state changing
sinusoidally with time (fig. \ref{g1-e0}).\\
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.4}{\includegraphics*{fig3-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Rabi oscillations between the two exciton states
isolated from the environment.
\label{g1-e0}} \end{figure}
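\noindent The oscillations in fig. \ref{g1-e0} can be reproduced with a minimal two-level sketch
(Python; the 22 meV splitting and 20 meV coupling are the matrix elements quoted in
section \ref{totalhamilt}, all other details are illustrative):
\begin{verbatim}
import numpy as np
HBAR = 658.2119569                              # meV * fs
H = np.array([[ 0.0, 20.0],                     # [[E_e,   W_ee'],
              [20.0, 22.0]])                    #  [W_ee',  E_e']]
E, V = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0])                     # start in exciton 1
t = np.linspace(0.0, 400.0, 400)                # fs
psi_t = (np.exp(-1j * np.outer(t, E) / HBAR) * (V.T @ psi0)) @ V.T
pop1, pop2 = np.abs(psi_t[:, 0])**2, np.abs(psi_t[:, 1])**2   # sinusoidal beats
\end{verbatim}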
\noindent However, for the FMO-protein complex we have the protein cavity
and the chlorophyll molecules within the cavity. Before the tiny burst the exciton
electron is transiently localized in a single warp resonance on a carbon atom
at a single carbonyl $CO$ fragment chosen
by degeneracy with the gravonons. The photon is in the cavity. The localized electron
exercises a force on the carbon atom, exciting two local oscillators with the wagging
and rocking movements of the carbonyl $CO$ fragment in the deformation resonance.
In addition to coupling to the local oscillators, coupling to the extended
protein phonons within the deformation resonance allows
relaxation and dissipation in delocalized phonons away from the local site.
In the 2D Fourier transform electronic spectroscopy experiments
these components are not observed, the measurement provides just the
amplitude of the photon echo, which scales with the weight of the local configurations involving
the exciton electron, as a function of time. But we can calculate
the time development of the wave packet during the period of the tiny burst and it
shows a strong similarity with the variations with time of the photon amplitude
measured experimentally (fig. \ref{temper-2osc}).\\
\noindent In experiment the third laser pulse after a waiting time $T$ stimulates
the photon emission. At time $T$ the photon is emitted with the amplitude,
with which the electron to be deexcited is represented in the local components of the wave packet
(warp resonance, deformation resonance, local oscillators) just before the third
laser pulse. Our theory ends at the moment just before the third laser pulse is shot.
The third laser pulse will cause stimulated photon emission
with the amplitude, with which the exciton electron is represented in all configurations involving the local
states: the warp resonance, the deformation resonance and the local oscillators.
The sum of the squared amplitudes of these configurations is plotted
as a function of time in fig. \ref{temper-2osc}.
The amplitude of the emitted photon signal scales with the weight of
all local configurations involving the exciton electron.
In experiment choosing the photon in the detector via a new beable happens
with an amplitude equal to the sum of the squared coefficients of those configurations, which have
the exciton electron, whose deexcitation is stimulated by the third laser pulse.
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.35}{\includegraphics*{fig4-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Time development of the amplitude of the photon signal during
the tiny burst, induced by the double laser pulse, for the model in fig. \ref{twooscillators2}
at three different temperatures of the environment. The photon signal scales
with the summed weight of the field configurations with
local components in the wave packet. These
are configurations with the exciton electron in the warp resonance
(i.e. in the beable), in the deformation
resonance and the local oscillators. The initial state is the beable
described in the text.
\label{temper-2osc}} \end{figure}
\subsection{Temperature effects}
\noindent For the solution of the time dependent
Schr\"odinger equation at different temperatures
we explicitly need the dependence of the accessible vibrational
configurations on temperature, i.e.
counting of the vibrational configurations in the deformation resonance,
and the temperature dependence of the accessible on-shell phonon configurations.
The assumption is that the deformation resonance is
as highly excited as the external temperature allows. This energy can then be
redistributed between the local channels and the delocalized phonons.\\
\noindent We assume that within the deformation resonance
coupling to two local quantum harmonic oscillators is effective,
associated with the rocking and wagging motions of the carbonyl $CO$ fragment.
Einstein's model is assumed (non-interacting oscillators
with the same frequency).
One local oscillator is not enough to reproduce the temperature dependence of the signal;
it is not able to compete with the dissipative effect of the protein phonons.
If it were just one local harmonic oscillator available for relaxation
(one relaxed vibrational mode) we would get fast decay of the photon amplitude and no temperature
dependence since one oscillator provides one local vibrational
configuration for all experimental temperatures.
If the number of coupled local oscillators and
local vibrational configurations were
large, i.e. the exciton electron resides transiently
on many carbonyl carbon atoms in many carbonyl $CO$ fragments,
we would get no temperature dependence of the decay of the photon
amplitude either. It would immediately drop to zero as decoherence theory suggests.
No chooser effect and fast decay in the protein phonons all over the protein chain would
be the result. This is
a situation when the local structure is lost.
Hence, the result would be fast exponential decay of the photon signal as
anticipated by decoherence theory.
However, in our model, starting from a delocalized exciton electron,
with the chooser mechanism due to on-shell entanglement with gravonons,
a localized electron in the form of a beable is generated.
The local nature of the interactions in this many particle system prevents
the excitation of many harmonic oscillators over the protein chain
and the total decay of the photon signal. \\
\noindent With two local oscillators we get the exponential damping
of the photon amplitude with time,
the beating of the photon signal and the finite non-zero value of the photon
signal at long time as shown in fig. \ref{temper-2osc}. The temperature dependence is also
in satisfactory agreement with the experimental observation.\\
\noindent The dependence of the photon signal on the temperature is reflected by changes in the
vibrational structure of the local oscillators and the accessibility of the protein phonon configurations.
The temperature dependence of the accessible
vibrational configurations due to the local oscillators and the delocalized phonon configurations
in the protein is treated in the following way.
Figure \ref{phonon-counting} illustrates how the local oscillator vibrational structure changes with
temperature if we restrict the model to two local oscillators.
The wagging and the rocking modes are assumed to have the same energy
$\omega=5$ meV. This energy corresponds roughly to thermal
energy $k_BT$ at $T=58$ K. The energy is provided by the external heating
and is available within the deformation resonance.
So if the temperature in the 2D Fourier transform spectroscopy experiment equals 77 K, it means that
either one or the other local oscillator can be excited, i.e. there are two
degenerate vibrational configurations due to the local oscillators involved in the interaction
with the deformation resonance (the first line
in fig. \ref{phonon-counting}). At $T=125$ K and $T=150$ K
the available thermal energy is sufficient to excite just 2 local vibrations
which makes three degenerate local vibrational configurations
(the second line in fig. \ref{phonon-counting}) and so on.
Thus, when the temperature rises the local vibrational structure becomes more
complex. As the temperature rises 2, 3, 4 ... local vibrational configurations,
which are energetically accessible and are on the energy shell
with respect to the energy of the exciton wave packet, will be involved.
More and more on-shell local channels get open for the wave packet when the
temperature rises. This is the decisive feature of the model, which
prevents the destruction of the coherent time development by decoherence
and the total decay in the protein phonons. \\
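\noindent The counting of fig. \ref{phonon-counting} can be made explicit with a short Python
sketch (a hedged illustration for two local oscillators with $\omega = 5$ meV; distributing $n$
thermal quanta over the two oscillators gives $n+1$ degenerate configurations):
\begin{verbatim}
K_B, OMEGA = 0.08617, 5.0                       # meV/K, meV

def local_configurations(T):
    n = int(K_B * T // OMEGA)                   # quanta available at temperature T
    return [(n1, n - n1) for n1 in range(n + 1)]   # (wagging, rocking) occupations

for T in (77, 125, 150):
    print(T, "K:", local_configurations(T))
# 77 K: two configurations; 125 K and 150 K: three configurations each
\end{verbatim}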
\noindent Because we solve Schr\"odinger's equation with a procedure similar to CI,
we see that the number of the configurations involved increases with the
increasing energy, i.e. with increasing temperature.\\
\noindent The channels for decay into delocalized protein phonons also
increase: the available density of phonon configurations
on the energy shell of the initial wave packet increases linearly with temperature.
This is so because we assume a one-dimensional protein chain with
constant density of states which provides a linear dependence on the temperature of the
on-shell density of phonon configurations.
If we took the $3D$ Debye model for the phonons, the density of on-shell phonon configurations
would vary with $T^3$, which would result in fast dissipation in phonons,
accompanied by the decay of the electron in the delocalized escape states. \\
\noindent The higher the temperature, the more protein phonon configurations are
accessible on the energy shell.
Assume the energy in the deformation resonance is 11 meV corresponding
to a temperature of 125 K. The phonon configurations in the protein backbone
which preserve the energy of the total wave packet
can vary their energy up to 11 meV. Assume now that the energy
in the deformation resonance is 22 meV, corresponding to 250 K.
The number of phonon configurations, which can preserve the
total energy of the wave packet and thus leave it on the energy shell,
is much larger.\\
\noindent The two channels, governed by the terms in the Hamiltonian:
\begin{eqnarray}
H_{el-CO}&=& ...
+V_{core-loc}c_v^+c_v\left( \sum_{i=1,2;n_i=0}^{\infty}d^+d_{n_i}+
\sum_{i=1,2;n_i=0}^{\infty}d_{n_i}^+d\right) \nonumber
\end{eqnarray}
and
\begin{eqnarray}
H_{el-phon}&=& Y_p \sum_{k}(c_v^+c_k +c_k^+c_v)\sum_{p}(b_p^++b_p) \nonumber
\end{eqnarray}
compete. The first term has the effect of conserving the amplitude of the wave packet in the local
region, whereas the second leads to its decay out of the local region into the
delocalized protein phonons, which are not measured in the experiment, causing
the exponential damping of the photon signal at short time.
As time goes by the two competing effects balance each other
and the photon signal gets stabilized at a finite value.\\
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.37}{\includegraphics*{fig5-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Local vibrational configurations on the energy shell of the initial wave packet
as a function of the excitation energy, hence as a function
of the temperature, available within the deformation resonance.
\label{phonon-counting}} \end{figure}
\noindent As the local structure in the model comprises more and more
configurations with rising temperature, it has
a counterbalancing effect on the decay due to coupling to the delocalized phonons
and ensures that the weight of local configurations with the exciton
electron is retained locally. Then the third
laser pulse in the experiment can interact with the localized electron in the $3s$ orbital,
the deformation resonance and the local oscillators
and stimulate the photon emission with the amplitude retained in the local
structure.\\
\noindent
For our result the localized initial beable state is decisive.
In our theory we start from a local beable and follow its time development
during the tiny burst. Since the local structure expands
in number of configurations with rising temperature, the wave packet does
not delocalize completely and remains local. This is qualitatively reminiscent of the
quantum Zeno effect, i.e. collapse after each interaction event with the environment.
However, the time development of the total system in high dimensional space
is completely coherent. The stochastic approaches try to obtain similar
effects by an empirical special choice and tuning of the phonon noise
such that the experimental beating signal is reproduced \cite{shabani}. \\
\section {Discussion}
\noindent Since with this simple model we reproduce
the experimental dependence of the photon amplitude on the waiting time and the
temperature, as observed in 2D Fourier transform electronic
spectroscopy, we can suggest answers to the questions
raised in the introduction.\\
\noindent No isotope effect on the coherences in FMO is established experimentally.
In our theory it is of no significance
if $^1H$ is replaced by $^2H$ or $^{12}C$ is exchanged with $^{13}C$. The local vibrational spectrum changes,
but we only need the spectrum on the energy shell of the wave packet.
The local picture explains why a change 10 {\rm \AA} away from
the transient negative ion resonance,
for instance an exchange of hydrogen isotopes, does not affect the
coherent behaviour of the NIR observed in experiment. \\
\noindent Using an argument from perturbation theory, the first order
coupling between two configurations scales with the inverse of their energy difference.
The closer the energies of the interacting configurations,
the stronger the interaction between them, and the first order term
provides the major contribution.
Hence, the on-shell contribution to the coupling dominates. Therefore the many-particle
configurations we take into account are nearly on-shell with the initial
wave packet, since they provide the major contributions to the coupling.\\
\noindent Recalling the many-particle character of the field configurations
in our theory, we necessarily have to conclude that the nature of the
states involved in the superposition leading to the quantum beats
is vibronic, as has also been suggested by experimentalists.\\
\noindent {\bf Why does conventional decoherence not prevail?}
The initial state in a conventional decoherence approach is the superposition of exciton
states delocalized over the protein cavity.
The environment consists of the protein phonons; the coupling
of the excitons to the environment leads to phonon excitation. Energy
conservation requires that the excitons are de-excited into the electronic ground
state. No oscillations between the electronic ground and exciton
states can occur. With the argument used by decoherence theory,
the reduced density matrix then has a single
non-zero diagonal matrix element, equal to 1, for the electronic
ground state of the excitons. \\
\noindent This would happen if we neglected the chooser
mechanism, which localizes the electron in the $3s$-affinity orbital
of the carbonyl carbon atom. In an attempt to simulate the result of decoherence theory, let us
omit the local oscillators due to the carbonyl $CO$ fragment,
as in decoherence theory
they play no role. The initial state is also changed: the electron is no
longer 'chosen' in the warp resonance, but is assumed to be
in the somewhat more diffuse deformation resonance.
If in our model we disregard the chooser mechanism and
the coupling to the local $CO$ oscillators and allow the exciton
electron to couple to phonon continua all over
the protein chain, the result is similar to what is expected from decoherence theory
(fig. \ref{engel-decoherence}).
The two curves in fig. \ref{engel-decoherence} are evaluated with different initial
states, of course. Whereas for the upper curve the exciton electron in the initial wave packet
is localized via the chooser mechanism in the beable, for the lower curve
where no chooser mechanism is operating, the initial wave packet has the electron
in the deformation resonance. Ignoring the chooser mechanism leads to
an exponential decay of the photon amplitude, without oscillations, to zero,
completely and irreversibly.
Everything is lost in the protein phonons (cf. the dashed curve in the inset in fig. \ref{engel-decoherence}),
and this is not measured in the experiment.
This resembles the prediction of conventional decoherence theory.
In contrast, the chooser mechanism due to degeneracy entanglement with gravonons
determines the transient
localization of the electron wave packet
in a beable and precludes its total decay in the environmental phonon
continuum, which is the reason for retaining finite photon amplitude
for long time, at least within the lifetime of the tiny burst. \\
\begin{figure}{} \begin{center} \begin{minipage}{15cm}
\scalebox{0.425}{\includegraphics*{fig6-drakdoy.eps}}
\end{minipage}
\end{center}
\caption{Time development of the photon amplitude in a model neglecting
the coupling of the exciton electron to the local oscillators with dominating coupling to the phonons
(decaying dashed curve), corresponding to temperature 300 K, compared to the result
with the local structure included (oscillating full curve). Inset: development with
time of the weight of field configurations involving the protein phonons
with the local structure included (full curve)
and excluded (dashed curve).
\label{engel-decoherence}} \end{figure}
\noindent The curve, which reproduces the experimental result, starts with the exciton
electron in a beable (which is destroyed by the
photon-induced tiny burst) localized in the NIR via
the chooser mechanism due to entanglement with the gravonons.
The exponentially decaying photon amplitude curve is evaluated by neglecting the
chooser mechanism: the electron is in the deformation resonance, where it couples with
the dissipative phonon environment all over the protein backbone. \\
\noindent {\bf The role of the gravonons}, although they are not involved
during the time of the tiny burst, is to create the beable via the chooser mechanism.
The transient trapping of the exciton electron in the $3s$-affinity orbital on the carbonyl carbon atom
due to the entanglement
with the degenerate gravonons occurs before the double laser pulse
and the tiny burst. The laser pulse generates the tiny burst and causes
the bursting of the $3D$ components of the total wave packet in $3D$ space
and the destruction of the entanglement with the gravonons
(cf. fig. \ref{beable} and the inset showing how $3D$ components
of the wave packet gain weight during the tiny burst). \\
\noindent From our theory and the measurements it appears that {\bf decoherence is an illusion}.
If it did exist it would have led to the total decay of the photon signal within
femtoseconds, as the decaying dashed curve in fig. \ref{engel-decoherence} shows.
Some obvious arguments can be summarized: \\
\noindent (i) Although in the present theory standard decoherence due to coupling
to environmental degrees of freedom is {\bf not} invoked, all experimental observations are
reproduced in a coherent picture by solving the TDSE. The reduced
density operator and matrix, which form the basis for the conclusions
of decoherence theory, are neither needed nor used.
The chooser mechanism localizes the exciton electron within the beable,
allowing it to interact with the local core movement states of the carbonyl group in
addition to the dissipative interaction with the phonons in the protein backbone.
In our theory these are physical phenomena within a completely coherent picture.\\
\noindent (ii) Decoherence theory misses the localization in a beable.
It starts from an initial state with a delocalized
exciton electron. In contrast,
in the present theory we reproduce and explain the experiments on the coherent behaviour of
the protein-FMO complex as effects of the local structure and its temperature dependence.
The generation of a local beable via entanglement
of the exciton electron with gravonons is the clue to the understanding of the experiment.
In contrast, decoherence theory has no local beables, therefore it cannot provide
the explanation of the long-living coherence of FMO.\\
\noindent (iii) In the stochastic approaches, based on the reduced density
matrix and hence decoherence, special constructions of environmental excitations, in
particular special phonons (colour-noise), are suggested as a decohering environment
to be able to reproduce the experimental curves.
Arguments are suggested concerning the efficiency of
electron transfer from the FMO complex to the reaction center.
But experiment shows coherent behaviour
and {\bf no} dependence of this behaviour on the protein phonon structure.\\
\noindent (iv) Stochastic approaches, missing localization in beables,
cannot provide an explanation of the non-zero
value of the photon echo amplitude at long time. No explanation is
offered as to why the phonons should not be able
to reduce the photon echo amplitude to zero.
If decoherence were active, the large number of environmental degrees
of freedom involved should lead to complete decay of the
photon echo signal.
\section { Conclusion}
\noindent We can understand the experimental results
of long time coherence in the FMO protein complex without the far-fetched
assumption of special phonons involved in the approach based on stochastic quantum master equations.
Our understanding is based on the localization of the exciton electron
due to a localization chooser mechanism using gravitation in high dimensional
spacetime. Released from the entanglement with the gravonons
by the laser pulses, the
electron can couple with local vibrations of the carbonyl $CO$ group
and the delocalized phonons in the protein backbone.
This determines the vibronic character of the many-particle configurations
leading to the beating of the photon echo signal.
The two channels lead to preservation of a local non-vanishing
weight of the exciton electron and to the beating photon echo signal and its exponential damping.
Conventional decoherence theory would in fact
predict a total exponential decay of the signal
and much faster total loss of coherence compared to the
experimentally established coherence times.\\
\noindent In order to understand the finite final value of the photon echo signal in 2D
Fourier transform electronic spectroscopy
at long time we involve a very local structure and then
a temperature-dependent accessibility to the local structure.
The starting point in our theory on the coherent behaviour of chlorophyll assemblies is the localized exciton
electron as a beable which allows the interaction of this localized charge
with the photon field. The further time development of the wave packet
follows from the initial localization of the electron. \\
\noindent The stepwise bunching of photon-amplitude curves at different temperatures
(fig. 3 for $T=125$ K and $T=150$ K in ref. \cite{engeletal}) is only possible if we have a local picture with
excitation of only a few local oscillators. This point needs to
be further investigated experimentally since the error bars in the experimental figure
in ref. \cite{engeletal} are larger than the expected differences between the curves.\\
\noindent The observation that isotope substitution does not change the
experimentally measured photon echo amplitude is explained in our theory.
No special phonons ('colour noise', special spectral function)
are needed to explain the coherent behaviour and the
damped amplitude of the photon beating signal with time.\\
\noindent We explain the magic non-appearance of decoherence in the
2D Fourier transform electronic spectroscopic experiment. \\
\noindent To finalize, we claim that quantum mechanics needs a chooser mechanism for particle
localization. Experiments can be explained within quantum mechanics only
if the incoming delocalized particle wave gets localized on a local site.
The solution of the time dependent Schr\"odinger equation in high dimensional spacetime including
the entanglement to gravonons is the theory of the chooser mechanism.\\
\noindent Local beables are the fundament of measurement. The measurement
gives data on local features and this is what Schr\"odinger's equation with entanglement with the
gravonons in high dimensional spacetime ($11D$) provides. All measured particles
are local; they are local only when they entangle with gravonons as beables.\\
\noindent {\bf Acknowledgment:} We gratefully acknowledge useful discussions
and comments by G. Engel at the Gordon Center for Integrative Science,
University of Chicago.
\section{Introduction}
\label{intro}
Unsupervised nonlinear feature learning, or unsupervised
representation learning, is one of the biggest challenges facing
machine learning.
Various approaches have been proposed, many of them in the deep learning framework.
Some of the most popular methods are multi-layer belief nets and
Restricted Boltzmann Machines \cite{Hinton07} as well as autoencoders
\cite{Hinton94,vincent2010stacked,Kingma14}, which form the
basis for the ladder networks \cite{Valpola15}.
While some success has been obtained, the general consensus is that
the existing methods are lacking in scalability, theoretical justification, or both;
more work is urgently needed to make machine learning applicable to
big unlabeled data.
Better methods may be found by using the temporal structure in time series data.
One approach which has shown
a great promise recently is based on a set of methods variously
called temporal coherence \cite{Hyvarinen09nis} or slow feature
analysis \cite{Wiskott02}.
The idea is to find features
which change as slowly as possible, originally proposed in
\cite{Foldiak91}. Kernel-based methods \cite{Harmeling03,Sprekeler14} and deep learning methods \cite{mobahi2009deep,springenberg2012learning,goroshin2015unsupervised} have been developed to extend this principle to the general nonlinear case.
However, it is not clear how one
should optimally define the temporal stability criterion;
these methods typically use heuristic criteria and are not based on generative models.
In fact, the most satisfactory solution for unsupervised deep learning would arguably be based on estimation of probabilistic generative models, because probabilistic theory often gives optimal objectives for learning. This has been possible in linear unsupervised learning, where sparse coding and independent component analysis (ICA) use independent, typically sparse, latent variables that generate the data via a linear mixing. Unfortunately,
at least without temporal structure, the nonlinear ICA model is seriously unidentifiable \cite{Hyva99NN}, which means that the original sources cannot be found. In spite of years of research \cite{Jutten10}, no generally applicable identifiability conditions have been found. Nevertheless, practical algorithms have been proposed
\cite{Tan01,Almeida03,Dinh15} with the hope that some kind of useful solution can still be found even for i.i.d.\ data.
Here, we combine a new heuristic principle for analysing temporal structure with a rigorous treatment of a nonlinear ICA model, leading to a new identifiability proof.
The structure of our theory is illustrated in Figure~\ref{fig:method}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\textwidth]{fig1.pdf}
\end{center}
\caption{An illustration of how we combine a new generative nonlinear ICA model with the new learning principle called time-contrastive learning (TCL).
%
(A)~The probabilistic generative model of nonlinear ICA, where the observed signals are given by a nonlinear transformation of source signals, which are mutually independent, and have segment-wise nonstationarity. (B)~In TCL we train a feature extractor sensitive to the nonstationarity of the data by using a multinomial logistic regression which attempts to discriminate between the segments, labelling each data point with the segment label $1,\ldots,T$. The feature extractor and logistic regression together can be implemented by a conventional multi-layer perceptron.}
\label{fig:method}
\end{figure}
First, we propose to learn features using the (temporal) nonstationarity of the data. The idea is that the learned features should enable discrimination between different time windows; in other words, we search for features that provide maximal information on which part of the time series a given data point comes from. This provides a new, intuitively appealing method for feature extraction, which we call time-contrastive learning (TCL).
Second, we formulate a generative model in which independent components have different distributions in different time windows, and we observe nonlinear mixtures of the components. While a special case of this principle, using nonstationary variances, has been very successfully used in linear ICA \cite{Matsuoka95}, our extension to the nonlinear case is completely new. Such nonstationarity of variances seems to be prominent in many kinds of data, for example EEG/MEG \cite{Brookes2011}, natural video \cite{Hyvarinen09nis}, and closely related to changes in volatility in financial time series; but we further generalize the nonstationarity to modulated exponential families.
Finally, we show that as a special case, TCL estimates the nonlinear part of the nonlinear ICA model, leaving only a simple linear mixing to be determined by linear ICA, and a final indeterminacy in terms of a component-wise nonlinearity similar to squaring. For modulated Gaussian sources, even the squaring can be removed and we have ``full'' identifiability. This gives the very first identifiability proof for a high-dimensional, nonlinear, ICA mixing model --- together with a practical method for its estimation.
\section{Time-contrastive learning}
\label{sec:tcl}
TCL is a method to train a feature extractor
by using a multinomial logistic regression (MLR) classifier which aims to
discriminate all segments (time windows) in a time series, given the segment indices as the labels of the data points.
In more detail, TCL proceeds as follows:
\begin{enumerate}
\setlength{\itemsep}{-0.5mm}
\item Divide a multivariate time series ${\ve x}_t$ into segments, i.e.\ time windows, indexed by~$\tau=1,\ldots,T$. Any temporal segmentation method can be used, e.g.\ simple equal-sized bins.
\item Associate each data point with the corresponding segment index
$\tau$ in which the data point is contained;
i.e. the data points in the segment $\tau$ are all given the same segment label $\tau$.
\item Learn a feature extractor ${\ve h}({\ve x}_t;\boldsymbol{\theta})$ together with an MLR with a linear regression function $\ve w_\tau^T {\ve h}({\ve x}_t; \boldsymbol{\theta})+b_\tau$ to classify
all data points with the corresponding segment labels $\tau$ defined above used as class labels $C_t$. (For example, by ordinary deep learning with ${\ve h}({\ve x}_t;\boldsymbol{\theta})$ being outputs in the last hidden layer and $\boldsymbol{\theta}$ being network weights.)
\end{enumerate}
The purpose of the feature extractor is to extract a feature vector
that enables the MLR to discriminate the segments. Therefore,
it seems intuitively clear that the feature extractor needs to learn a
useful representation of the temporal structure of the data, in
particular the differences of the distributions across segments.
Thus, we are effectively using a classification method (MLR) to accomplish unsupervised learning. Methods such as noise-contrastive estimation \cite{Gutmann12JMLR} and generative adversarial nets \cite{goodfellow2014generative}, see also \cite{gutmann2014likelihood}, are similar in spirit, but clearly distinct from TCL which uses the \textit{temporal} structure of the data by contrasting different time segments.
In practice, the feature extractor needs to be capable of
approximating a general nonlinear relationship between the data points and
the log-odds of the classes, and it must be easy to learn from data simultaneously with the MLR. To
satisfy these requirements, we use here a multilayer perceptron (MLP)
as the feature extractor. Essentially, we use ordinary MLP/MLR training
according to very well-known neural network theory, with the last
hidden layer working as the feature extractor.
Note that the MLR is only used as an instrument for training the
feature extractor, and has no practical meaning after the training.
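The following sketch (ours, not the authors' implementation; layer sizes, optimizer and number of epochs are arbitrary placeholders) illustrates the three steps above with a PyTorch MLP as feature extractor and an MLR head on top:
\begin{verbatim}
# Minimal TCL sketch (illustrative only; architecture and hyperparameters
# are placeholders and not those used in the experiments reported below).
import torch
import torch.nn as nn

def tcl_train(x, n_segments, n_features=20, n_epochs=200):
    """x: (n_samples, n_dims) time-ordered data tensor."""
    # Steps 1-2: equal-sized segments; each point gets its segment index as label.
    seg_len = x.shape[0] // n_segments
    labels = torch.arange(n_segments).repeat_interleave(seg_len)
    x = x[: seg_len * n_segments]

    # Step 3: feature extractor h(x; theta) with a multinomial logistic
    # regression head w_tau^T h + b_tau on top.
    feature_extractor = nn.Sequential(
        nn.Linear(x.shape[1], 64), nn.ReLU(),
        nn.Linear(64, n_features), nn.ReLU(),
    )
    mlr_head = nn.Linear(n_features, n_segments)
    params = list(feature_extractor.parameters()) + list(mlr_head.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(mlr_head(feature_extractor(x)), labels)
        loss.backward()
        optimizer.step()
    # The MLR head is discarded; only the trained feature extractor is kept.
    return feature_extractor
\end{verbatim}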
\section{TCL as approximator of log-pdf ratios}
\label{sec:opttcl}
We next show how the combination of
the optimally discriminative feature extractor
and MLR learns to model the nonstationary log-pdf's of the data.
The posterior over classes for one data point ${\ve x}_t$ in the multinomial logistic regression of TCL is given by well-known theory as
\begin{align}
p(C_t=\tau | {\ve x}_t; \boldsymbol{\theta}, \m W, \ve b)=
\frac{\exp(\ve w_\tau^T {\ve h}({\ve x}_t; \boldsymbol{\theta})+b_\tau)}
{1+\sum_{j=2}^{T} \exp(\ve w_j^T {\ve h}({\ve x}_t; \boldsymbol{\theta})+b_j)} \label{eq:ell2}
\end{align}
where $C_t$ is a class label of the data at time $t$,
${\ve x}_t$ is the $n$-dimensional data point at time $t$,
$\boldsymbol{\theta}$ is the parameter vector of the $m$-dimensional feature extractor (neural network) ${\ve h}$,
$\m W = [\ve w_1, \ldots, \ve w_T] \in \mathbb R^{m \times T}$,
and $\ve b = [b_1, \ldots, b_T]^T$ are the weight and bias parameters of the MLR.
We fixed the elements of $\ve w_1$ and $b_1$ to zero to avoid
the well-known indeterminacy of the softmax function.
On the other hand, the true posteriors of the segment labels
can be written, by the Bayes rule, as
\begin{equation}
p(C_t=\tau | {\ve x}_t) = \frac{p_\tau({\ve x}_t)p(C_t = \tau)}
{\sum_{j=1}^{T} p_j({\ve x}_t)p(C_t = j)},
\label{eq:pc_i}
\end{equation}
where $p(C_t = \tau)$ is a prior distribution of the segment label $\tau$, and $p_\tau({\ve x}_t)=p({\ve x}_t|C_t=\tau)$.
Assume that the feature extractor has a universal approximation capacity, and that the amount of data is infinite, so that the MLR converges to the optimal classifier.
Then,
we will have equality between the model posterior Eq.~\eqref{eq:ell2} and the true posterior in Eq.~\eqref{eq:pc_i} for all $\tau$. Well-known developments, intuitively based on equating the numerators in those equations and taking the pivot into account, lead to the relationship
\begin{equation}
\ve w_\tau^T {\ve h}({\ve x}_t; \boldsymbol{\theta}) + b_\tau
= \log p_\tau({\ve x}_t) - \log p_1({\ve x}_t) + \log \frac{p(C_t = \tau)}{p(C_t = 1)} ,
\label{eq:relation}
\end{equation}
where the last term on the right-hand side is zero if the segments have equal prior probability (i.e.\ equal length).
In other words, what the feature extractor computes after TCL training (under optimal conditions) is the log-pdf of the data point in each segment (relative to that in the first segment which was chosen as pivot above). This gives a clear probabilistic interpretation of the intuitive principle of TCL, and will be used below to show its connection to nonlinear ICA.
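As a toy illustration of this relationship (our own example, with assumed Gaussian segment densities): for one-dimensional segments with $p_\tau=\mathcal N(0,\sigma_\tau^2)$, the log-pdf ratio in Eq.~\eqref{eq:relation} is an affine function of the single feature $h(x)=x^2$, which is exactly the form the optimal regression function must take.
\begin{verbatim}
# Toy symbolic check (illustrative; Gaussian segment densities are assumed):
# for p_tau = N(0, sigma_tau^2), the log-pdf ratio above is affine in the
# single feature h(x) = x**2, so one feature suffices for the optimal MLR.
import sympy as sp

x = sp.symbols('x', real=True)
s1, st = sp.symbols('sigma_1 sigma_tau', positive=True)
log_p = lambda s: -x**2 / (2 * s**2) - sp.log(s * sp.sqrt(2 * sp.pi))

ratio = sp.expand(log_p(st) - log_p(s1))   # log p_tau(x) - log p_1(x)
print(sp.simplify(ratio.coeff(x, 2)))      # weight on the feature h(x) = x**2
print(sp.simplify(ratio.coeff(x, 0)))      # x-independent offset b_tau
\end{verbatim}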
\section{Nonlinear nonstationary ICA model}
\label{sec:datamodel}
In this section, seemingly unrelated to the preceding section, we define a probabilistic generative model; the connection will be explained in the next section. We assume, as typical in nonlinear ICA, that the observed multivariate time series ${\ve x}_t$ is a smooth and invertible nonlinear mixture
of a vector of source signals ${\ve s}_t=(s_1(t),\ldots,s_n(t))$; in other words:
\begin{equation}
{\ve x}_t = \ve f({\ve s}_t).
\label{eq:xfs}
\end{equation}
The components $s_i(t)$ in ${\ve s}_t$ are assumed mutually independent over $i$ (but not over time $t$).
The crucial question is how to define a suitable model for the sources, which is general enough while allowing strong identifiability results.
Here, we start with the fundamental principle that the source signals $s_i(t)$ are \textit{nonstationary}. For example, the variances (or similar scaling coefficients) could be changing as proposed earlier in the linear case \cite{Matsuoka95,Pham01,hyvarinen01}.
We generalize that idea and propose a generative model for nonstationary sources based on the exponential family.
Merely for mathematical convenience, we assume that the
nonstationarity is much slower than the sampling rate, so the time series can be divided into segments
in each of which the distribution is approximately constant (but the distribution is different in different segments).
The probability density function (pdf) of the source signal with index $i$ in the segment $\tau$ is then defined as:
\begin{equation} \label{eq:p_tau}
\log p_\tau(s_i)= q_{i,0}(s_i) + \sum_{v=1}^V \lambda_{i,v}(\tau) q_{i,v}(s_i) - \log Z( \lambda_{i,1}(\tau),\ldots,\lambda_{i,v}(\tau))
\end{equation}
where $q_{i,0}$ is a ``stationary baseline'' log-pdf of the source, and the $q_{i,v}, v\geq 1$ are nonlinear scalar functions defining the exponential family for source $i$. The essential point is that the parameters $\lambda_{i,v}(\tau)$ of the source $i$ depend on the segment index $\tau$, which creates nonstationarity. The normalization constant $Z$ is needed in principle although it disappears in all our proofs below.
A simple example would be obtained by setting $q_{i,0}=0,V=1$, i.e., using a single modulated function $q_{i,1}$ with $q_{i,1}(s_i) = -s_i^2/2$ which means that the variance of a Gaussian source is modulated, or $q_{i,1}(s_i) = -|s_i|$, a modulated Laplacian source. Another interesting option might be to use two ReLU-like nonlinearities $q_{i,1}(s_i) = \max(s_i,0)$ and $q_{i,2}(s_i) = \max(-s_i,0)$ to model both changes in scale (variance) and location (mean). Yet another option is to use a Gaussian baseline $q_{i,0}(s_i)=-s_i^2/2$ with a nonquadratic function $q_{i,1}$.
Our definition thus generalizes the linear model \cite{Matsuoka95,Pham01,hyvarinen01} to the nonlinear case, as well as to very general modulated non-Gaussian densities by allowing $q_{i,v}$ to be non-quadratic and using more than one $q_{i,v}$ per source (i.e.\ we can have $V>1$).
Note that our principle of nonstationarity is clearly distinct from the principle of linear autocorrelations previously used in the nonlinear case \cite{Harmeling03,Sprekeler14}; also, some authors prefer to use the term blind source separation (BSS) for generative models with temporal structure.
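For concreteness, one possible way to sample from the simplest instance of this model (scale-modulated Laplacian sources, i.e.\ $q_{i,0}=0$, $V=1$, $q_{i,1}(s)=-|s|$) is sketched below; segment length and the range of the modulation parameters are placeholders.
\begin{verbatim}
# Sketch of nonstationary source generation (illustrative). Here p_tau(s_i)
# is proportional to exp(-lambda_{i,1}(tau)*|s_i|), i.e. a Laplacian whose
# scale is modulated segment-wise.
import numpy as np

def generate_sources(n_sources=20, n_segments=256, seg_len=512, seed=0):
    rng = np.random.default_rng(seed)
    # lambda_{i,1}(tau): one modulation parameter per segment and source.
    lambdas = rng.uniform(0.0, 1.0, size=(n_segments, n_sources))
    segments = []
    for tau in range(n_segments):
        scale = 1.0 / (lambdas[tau] + 1e-2)   # small offset avoids division by 0
        segments.append(rng.laplace(scale=scale, size=(seg_len, n_sources)))
    return np.concatenate(segments, axis=0), lambdas
\end{verbatim}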
\section{Solving nonlinear ICA by TCL}
\label{sec:tcl_ica}
Now we consider the case where TCL as defined in Section~\ref{sec:tcl} is applied on data generated by the nonlinear ICA model in Section~\ref{sec:datamodel}. We refer again to Figure~\ref{fig:method} which illustrates the total system. For simplicity, we consider the case $q_{i,0}=0,V=1$, i.e.\ the exponential family has a single modulated function $q_{i,1}$ per source, and this function is the same for all sources; we will discuss the general case separately below.
The modulated function will be simply denoted by $q:=q_{i,1}$ in the following.
First, we show that the nonlinear functions
$q(s_i), i=1,\ldots,n,$ of the sources can be obtained as unknown linear transformations of the outputs of the
feature extractor $h_i$ trained by the TCL:
\begin{Theorem}
Assume the following:\vspace*{-2mm}
\begin{enumerate}\renewcommand{\labelenumi}{A\theenumi.}
\item We observe data which is obtained by generating sources according to (\ref{eq:p_tau}),
and mixing them as in (\ref{eq:xfs}) with a smooth invertible $\ve f$. For simplicity, we assume only a single function defining the exponential family, i.e.\ $q_{i,0}=0,V=1$ and $q:=q_{i,1}$ as explained above.
\item We apply TCL on the data so that the dimension of the feature extractor ${\ve h}$
is equal to the dimension of the data vector ${\ve x}_t$, i.e., $m=n$.
\item The modulation parameter matrix $\m L$ with elements $[\m L]_{\tau,i}= \lambda_{i,1}(\tau)- \lambda_{i,1}(1), \tau=1,\ldots,T; i=1,\ldots,n$
has full column rank $n$. (Intuitively speaking, the variances of
the independent components are modulated sufficiently independently of each other.)
\end{enumerate}\vspace*{-2mm}
Then, after learning the parameter vector $\boldsymbol{\theta}$, the outputs of the feature extractor are equal to $q({\ve s})=(q(s_1),q(s_2),\ldots,q(s_n))^T$
up to an invertible linear transformation. In other words,
\begin{equation}
q({\ve s}_t) = \m A {\ve h}({\ve x}_t;\boldsymbol{\theta}) + \ve d
\label{eq:q=Ah+d}
\end{equation}
for some constant invertible matrix $\m A \in \mathbb R^{n \times n}$ and a
constant vector $\ve d \in \mathbb R^n$.
\end{Theorem}
\textit{Sketch of proof}: (see supplementary material for full proof) The basic idea is that after convergence we must have equality between the model of the log-pdf in each segment given by TCL in Eq.~(\ref{eq:relation}) and that given by nonlinear ICA, obtained by summing the RHS of Eq.~(\ref{eq:p_tau}) over $i$:
\begin{equation}
\ve w_\tau^T {\ve h}({\ve x}_t; \boldsymbol{\theta}) - k_1({\ve x}_t)=
\sum_{i=1}^n
\lambda_{i,1}(\tau) q(s_i) - k_2(\tau)
\end{equation}
where $k_1$ does not depend on $\tau$, and $k_2(\tau)$ does not depend on ${\ve x}$ or ${\ve s}$.
We see that the functions $h_i({\ve x})$ and $q(s_i)$ must span
the same linear subspace. (TCL looks at differences of log-pdf's, introducing $ k_1({\ve x}_t)$, but this does not actually change the subspace). This implies that the $q(s_i)$ must be equal to some invertible linear transformation of ${\ve h}({\ve x};\boldsymbol{\theta})$ and a constant bias term, which gives (\ref{eq:q=Ah+d}). \qed
To further estimate the linear transformation $\m A$ in (\ref{eq:q=Ah+d}), we can simply use linear ICA:
\begin{Corollary}
The estimation (identification) of the $q(s_i)$ can be performed by first performing TCL, and then linear ICA on the hidden representation ${\ve h}({\ve x})$.
\end{Corollary}
\textit{Proof:}
We only need to combine the well-known identifiability proof of linear ICA \cite{Comon94} with Theorem~1, noting that the quantities $q(s_i)$ are independent, and since $q$ has a strict upper bound (which is necessary for integrability), $q(s_i)$ must be non-Gaussian. \qed
In general, TCL followed by linear ICA does not allow us to exactly recover the independent components because the function $q(\cdot)$ can hardly be invertible, typically being something like squaring or absolute values. However,
for a specific class of $q$ including the modulated Gaussian family, we can prove a stricter form of identifiability. Slightly counterintuitively, we can recover the signs of the $s_i$, since we also know the corresponding ${\ve x}$ and the transformation is invertible:
\begin{Corollary}
Assume $q(s)$ is a strictly monotonic function of $|s|$. Then, we can further identify the original $s_i$, up to strictly monotonic transformations of each source.
\end{Corollary}
\textit{Proof:}
To make $p_\tau(s)$ integrable, necessarily $q(s)\rightarrow -\infty$ when $|s|\rightarrow \infty$, and $q(s)$ must have a finite maximum, which we can set to zero without restricting generality.
For each fixed $i$, consider the manifold defined by $q(g_i({\ve x}))=0$, where ${\ve g}:=\ve f^{-1}$. By invertibility of ${\ve g}$, this divides the space of ${\ve x}$ into two halves. In one half, define $\tilde{s}_i=q(s_i)$, and in the other, $\tilde{s}_i=-q(s_i)$. With such $\tilde{s}_i$, we have thus recovered the original sources, up to the strictly monotonic transformation $\tilde{s}_i=c\,\text{sign}(s_i)q(s_i)$, where $c$ is either $+1$ or $-1$.
(Note that in general, the $s_i$ are meaningfully defined only up to a strictly monotonic transformation, analogue to multiplication by an arbitrary constant in the linear case \cite{Comon94}.)
\qed
\paragraph{Summary of Theory} What we have proven is that in the special case of a single $q(s)$ which is a monotonic function of $|s|$, our nonlinear ICA model is identifiable, up to inevitable component-wise monotonic transformations. We also provided a practical method for the estimation of the nonlinear transformations $q(s_i)$ for any general $q$, given by TCL followed by linear ICA. (The method provided in the proof of Corollary 2 may be very difficult to implement in practice.)
\paragraph{Extension 1: Combining ICA with dimension reduction} In practice we may want to set the feature extractor dimension $m$ to be smaller than $n$, to accomplish dimension reduction.
It is in fact simple to modify the generative model and the theorem so that a dimension reduction similar to nonlinear PCA can be included, and performed by TCL. It is enough to assume that while in the nonlinear mixing (\ref{eq:xfs}) we have the same number of dimensions for both ${\ve x}$ and ${\ve s}$, in fact some of the components $s_i$ are stationary, i.e.\ for them, $\lambda_{i,v}(\tau)$ do not depend on $\tau$.
The nonstationary components $s_1(t),\ldots,s_m(t)$ will then be identified as in the Theorem, using TCL.
\paragraph{Extension 2: General case with many nonlinearities} With many $q_{i,v}$ ($V>1$), the left-hand-side of (\ref{eq:q=Ah+d}) will have $V n$ entries given by all the possible $q_{i,v}(s_i)$, and the dimension of the feature extractor must be equally increased; the condition of full rank on $\m L$ is likewise more complicated. Corollary~1 must then consider an independent subspace model, but it can still be proven in the same way.
\section{Simulation on artificial data}
\label{sec:simulation}
\paragraph{Data generation}
We created data from the nonlinear ICA model in Section~\ref{sec:datamodel}, using the simplified case of the Theorem as follows.
Nonstationary source signals ($n=20$, segment length 512) were randomly generated by modulating Laplacian sources
by $\lambda_{i,1}(\tau)$ randomly drawn from a uniform distribution in $[0, 1]$.
As the nonlinear mixing function $\ve f({\ve s})$, we used an MLP (``mixing-MLP'').
%
%
In order to guarantee that the mixing-MLP is invertible, we used leaky ReLU units and the same number of units in all layers.
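One possible construction of such a mixing-MLP (our sketch; the exact weight distribution used in the experiments is not specified beyond the invertibility requirement) is:
\begin{verbatim}
# Sketch of an invertible leaky-ReLU mixing-MLP f(s) (illustrative): square
# weight matrices and strictly monotonic activations keep f invertible.
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def make_mixing_mlp(n_dims, n_layers, rng):
    weights = []
    for _ in range(n_layers):
        w = rng.standard_normal((n_dims, n_dims))
        while abs(np.linalg.det(w)) < 1e-3:    # re-draw near-singular matrices
            w = rng.standard_normal((n_dims, n_dims))
        weights.append(w)
    def f(s):
        x = s
        for i, w in enumerate(weights):
            x = x @ w.T
            if i < n_layers - 1:               # nonlinearity between layers only
                x = leaky_relu(x)
        return x
    return f

# Usage (assuming `sources` is an (n_samples, 20) array of nonstationary sources):
#   f = make_mixing_mlp(n_dims=20, n_layers=3, rng=np.random.default_rng(0))
#   observed = f(sources)
\end{verbatim}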
\paragraph{TCL settings, training, and final linear ICA}
As the feature extractor to be trained by the TCL,
we adopted an MLP (``feature-MLP'').
The segmentation in TCL was the same as in the data generation, and the number of layers $L$ was the same in the mixing-MLP and the feature-MLP.
Note that when $L=1$,
both the mixing-MLP and the feature-MLP are one-layer models, and then
the observed signals are simply linear mixtures of the source signals as in
a linear ICA model.
As in the Theorem, we set $m=n$.
As the activation function in the hidden layers, we used a ``maxout''
unit, constructed by taking the maximum across $G=2$ affine fully
connected weight groups.
However, the output layer has ``absolute value'' activation units exclusively.
This is because the output of the feature-MLP (i.e., ${\ve h}({\ve x};\boldsymbol{\theta})$)
should resemble $q(s)$, based on Theorem~1,
and here we used the Laplacian distribution for the sources.
The initial weights of each layer were randomly drawn from
a uniform distribution, scaled as in \cite{Glorot10}.
To train the MLP, we used back-propagation with a momentum term.
%
To avoid overfitting, we used $\ell_2$ regularization for
the feature-MLP and MLR.
According to the Corollary above, after TCL
we further applied linear ICA (FastICA, \cite{Hyvarinen99}) to the
${\ve h}({\ve x};\boldsymbol{\theta})$, and used its outputs as the final estimates of $q(s_i)$.
To evaluate the performance of source recovery, we computed the mean correlation coefficients between the true $q(s_i)$ and their estimates.
For comparison, we also applied a linear ICA method based on
nonstationarity of variance (NSVICA) \cite{hyvarinen01},
a kernel-based nonlinear ICA method (kTDSEP) \cite{Harmeling03},
and a denoising autoencoder (DAE) \cite{vincent2010stacked} to the observed data.
We took absolute values of the estimated sources
to make a fair comparison with TCL.
In kTDSEP, we selected the 20 estimated components with the highest correlations with the source signals.
We initialized the DAE by the stacked
DAE scheme \cite{vincent2010stacked}, and sigmoidal units were used in the hidden layers;
we omitted the case $L>3$ because of instability of training.
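Given the feature-MLP outputs and the true $q(s_i)$ as arrays, the final linear ICA step and the evaluation metric just described can be sketched as follows (ours; all hyperparameters left at library defaults):
\begin{verbatim}
# Sketch of the final linear ICA step and the evaluation metric (illustrative).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

def mean_abs_correlation(h, q_true):
    """h: (n_samples, n) learned features; q_true: (n_samples, n) true q(s_i)."""
    n = q_true.shape[1]
    q_est = FastICA(n_components=n).fit_transform(h)
    # Absolute correlations between every estimated and every true component.
    corr = np.abs(np.corrcoef(q_est.T, q_true.T)[:n, n:])
    # Resolve the permutation indeterminacy of ICA by optimal matching.
    rows, cols = linear_sum_assignment(-corr)
    return corr[rows, cols].mean()
\end{verbatim}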
\paragraph{Results}
\label{subsec:simunonlin1}
Figure~\ref{fig:sim1}a) shows that after training the feature-MLP by TCL,
the MLR achieved higher classification accuracies
than chance level which implies that the feature-MLP was able to learn a representation of the data nonstationarity.
(Here, chance level denotes the performance of the MLP with a randomly
initialized feature-MLP.)
We can see that the larger the number of layers is
(which means that the nonlinearity in the mixing-MLP is stronger),
the more difficult it is to train the feature-MLP and the MLR.
The classification accuracy also goes down when the number of segments increases, since when there are more and more classes, some of them will inevitably have very similar distributions and are
thus difficult to discriminate; this is why we computed the chance level as above.
Figure~\ref{fig:sim1}b) shows
that the TCL method could reconstruct
the $q(s_i)$ reasonably well even for the nonlinear mixture case ($L > 1$), while all other methods failed (NSVICA obviously performed very well in the linear case).
The figure also shows that (1)~the larger the number of segments (amount of data) is,
the higher the performance of the TCL method is (i.e.\ the method seems to converge), and
(2)~again, more layers makes learning more difficult.
To summarize, this simulation confirms that TCL is able to estimate the nonlinear ICA model based on nonstationarity. Using more data increases performance, perhaps obviously, while making the mixing more nonlinear decreases performance.
\begin{figure}[tb]
\centering
\raisebox{3.5cm}{\textsf{a)}}
\includegraphics[width=0.46\columnwidth]{fig2.pdf}
\raisebox{3.5cm}{\textsf{b)}}
\includegraphics[width=0.46\columnwidth]{fig3.pdf}
\caption{
Simulation on artificial data. \textit{a)} Mean classification accuracies
of the MLR
simultaneously trained with the feature-MLP to implement TCL,
with different settings of the number of layers $L$ and segments.
Note that chance levels (dotted lines) change as a function of the number of segments (see text).
The MLR achieved higher accuracy than chance level.
\textit{b)}~Mean absolute correlation coefficients
between the true $q(s)$ and the features learned by TCL (solid line)
and, for comparison:
nonstationarity-of-variance-based linear ICA (NSVICA, dashed line),
kernel-based nonlinear ICA (kTDSEP, dotted line),
and denoising autoencoder (DAE, dash-dot line).
TCL has much higher correlations than DAE or kTDSEP, and in the nonlinear case ($L > 1$), higher than NSVICA.
}
\label{fig:sim1}
\end{figure}
\section{Experiments on real brain imaging data}
\label{sec:real}
To evaluate the applicability of the TCL method to real data,
we applied it on magnetoencephalography (MEG),
i.e.\ measurements of the electrical activity in the human brain.
In particular, we used data measured in a resting-state session,
during which the subjects did not have any task nor were receiving any particular stimulation.
In recent years, many studies have shown the existence of networks of
brain activity in the resting state, with MEG as well
\cite{Brookes2011, Pasquale2012}. Such networks mean that the data is
nonstationary, and thus this data provides an excellent target for TCL.
\paragraph{Data and preprocessing}
We used MEG data from an earlier neuroimaging study
\cite{Ramkumar12}, graciously provided by P.~Ramkumar. MEG signals were measured from nine
healthy volunteers by a Vectorview helmet-shaped neuromagnetometer
at a sampling rate of 600~Hz with
306 channels.
The experiment consisted of two kinds of sessions, i.e.,
resting sessions (2 sessions of 10 min) and task sessions (2 sessions of 12 min).
In the task sessions, the subjects were exposed to a sequence of 6--33~s blocks of
auditory, visual and tactile stimuli, which were interleaved with 15-s rest periods.
We exclusively used the resting-session data for the training of the network,
and task-session data was only used in the evaluation. The modality of the sensory stimulation (incl.\ no stimulation, i.e.\ rest) provided a class label that we used in the evaluation, giving in total four classes.
We preprocessed the MEG signals by Morlet filtering around the alpha frequency band.
\paragraph{TCL settings}
We used segments of equal size, of length 12.5~s or 625 data points (downsampling to 50~Hz).
The number of layers takes the values $L \in \{1, 2, 3, 4\}$, and
the number of nodes of each hidden layer was a function of $L$:
we always fixed the number of output-layer nodes to 10, and gradually increased the number of nodes towards the earlier layers, as $L=1$: 10, $L=2$: 20--10, $L=3$: 40--20--10, and $L=4$: 80--40--20--10. We used ReLU units in the middle layers,
and adaptive units $\phi(x)=\max(x,ax)$ exclusively for the output layer,
which is more flexible than the ``absolute value'' unit used in the simulation.
In order to prevent overfitting, we applied dropout \cite{Srivastava2014} to inputs, and batch normalization \cite{DBLP:journals/corr/IoffeS15} to hidden layers.
Since different subjects and sessions are likely to have artefactual
differences, we used a multi-task learning scheme, with a separate top-layer MLR
classifier for each measurement session and subject, but a shared feature-MLP. (In fact, if we use the MLR to discriminate all segments of all sessions, it
tends to mainly learn the artifactual differences across sessions.)
Otherwise, all the settings and comparisons were as in Section~\ref{sec:simulation}.
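A minimal sketch of this multi-task architecture (ours, not the original implementation; the number of sessions and of segments per session are derived from the session lengths quoted above and should be read as assumptions) is:
\begin{verbatim}
# Sketch of the multi-task TCL set-up for MEG (illustrative): one shared
# feature-MLP, one separate MLR head per measurement session and subject.
# Sizes are assumptions: 306 channels, 9 subjects x 2 resting sessions,
# and 10 min / 12.5 s = 48 segments per session; output units are plain
# linear here instead of the adaptive max(x, a*x) units described above.
import torch.nn as nn

n_channels, n_sessions, n_segments = 306, 18, 48
shared_feature_mlp = nn.Sequential(            # L = 4 scheme: 80-40-20-10
    nn.Linear(n_channels, 80), nn.ReLU(),
    nn.Linear(80, 40), nn.ReLU(),
    nn.Linear(40, 20), nn.ReLU(),
    nn.Linear(20, 10),
)
session_heads = nn.ModuleList(
    [nn.Linear(10, n_segments) for _ in range(n_sessions)]
)
# A batch from session k is scored as session_heads[k](shared_feature_mlp(x));
# after training only the shared trunk is used to extract features.
\end{verbatim}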
\paragraph{Evaluation methods}
To evaluate the obtained features,
we performed classification of the sensory stimulation categories (modalities)
by applying feature extractors trained with (unlabeled) resting-session data
to (labeled) task-session data.
Classification was performed using a linear support vector machine (SVM) classifier trained on the stimulation modality labels,
and its performance was evaluated by a session-average of
session-wise one-block-out cross-validation (CV) accuracies.
The hyperparameters of the SVM were determined by nested CV without using the test data.
The average activities of the feature extractor during each block were used as feature
vectors in the evaluation of TCL features. However, we used log-power activities for the other (baseline) methods
because the average activities had much lower performance with those methods.
We balanced the number of blocks between the four categories.
We measured the CV accuracy 10 times by changing the initial values of
the feature extractor training, and showed their average performance.
We also visualized the spatial activity patterns obtained by TCL,
using weighted-averaged sensor signals;
i.e., the sensor signals are averaged while weighted by the activities
of the feature extractor.
\paragraph{Results}
Figure~\ref{fig:real}a) shows the comparison of classification accuracies
between the different methods, for different numbers of layers $L=\{1,2,3,4\}$.
The classification accuracies by the TCL method were consistently higher than
those by the other (baseline) methods.\footnote{Note that the classification using the final linear ICA is equivalent to using whitening since ICA only makes a further orthogonal rotation, and could be replaced by whitening without affecting classification accuracy.}
We can also see a superior performance of multi-layer networks ($L \geq 3$)
compared with that of the linear case ($L=1$),
which indicates the importance of nonlinear demixing in the TCL method.
Figure~\ref{fig:real}b) shows an example of spatial patterns learned by the TCL method.
For simplicity of visualization,
we plotted spatial patterns for the three-layer model.
We manually picked one out of the ten hidden nodes from the third layer,
and plotted its weighted-averaged sensor signals (Figure~\ref{fig:real}b, L3). We also visualized the most strongly contributing second- and first-layer nodes.
We see progressive pooling of L1 units to form left temporal, right temporal, and occipito-parietal patterns in L2, which are then all pooled together in L3, resulting in a bilateral temporal pattern with a negative contribution from the occipito-parietal region.
Most of the spatial patterns in the third layer (not shown) are actually similar to those previously reported using functional magnetic resonance imaging (fMRI), and MEG \cite{Brookes2011, Pasquale2012}. Interestingly, none of the hidden units seems to represent artefacts, in contrast to ICA.
\begin{figure}[tb]
\parbox{0.05\textwidth}{\raisebox{2.5cm}{\textsf{a)}}}
\parbox{0.35\textwidth}{\includegraphics[width=0.32\textwidth]{accu.pdf}\hspace*{5mm}}
\parbox{0.05\textwidth}{ \raisebox{2.5cm}{\textsf{b)}}}
\parbox{0.55\textwidth}{
\raisebox{0.4cm}{\textsf{L3}} \hspace*{18mm}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=2.5cm]{L3-1.png}\\
\raisebox{0.0cm}{\textsf{L2}} \hspace*{1mm}
\parbox{2cm}{
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.75cm]{L2-11.png}}
\parbox{2cm}{
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.75cm]{L2-14.png}}
\parbox{2cm}{
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.75cm]{L2-16.png}}\\
\raisebox{0.0cm}{\textsf{L1}}\hspace*{0.8mm}
\parbox{2cm}{
\begin{center}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-38.png}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-39.png}
\end{center}
}
\parbox{2cm}{
\begin{center}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-33.png}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-25.png}
\end{center}
}
\parbox{2cm}{
\begin{center}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-8.png}
\includegraphics[trim= 0mm 25mm 0mm 30mm,clip,width=1.25cm]{L1-6.png}
\end{center}
}
}
\caption{
Real MEG data. \textit{a)} Classification accuracies of
linear SVMs newly trained with task-session data
to predict stimulation labels in task-sessions, with feature extractors trained in advance with resting-session data.
Error bars give standard errors of the mean
across ten repetitions. For TCL
and DAE, accuracies are given for different numbers of layers $L$. Horizontal line shows the
chance level (25\%).
\textit{b)} Example of spatial patterns of nonstationary components learned by TCL.
Each small panel corresponds to one spatial pattern with the measurement helmet seen from three different angles (left, back, right); red/yellow is positive and blue is negative.
%
``L3'' shows approximate total spatial pattern of one selected third-layer unit. ``L2'' shows the patterns of the three second-layer units maximally contributing to this L3 unit. ``L1'' shows, for each L2 unit, the two most strongly contributing first-layer units.
}
\label{fig:real}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We proposed a new learning principle for unsupervised feature (representation) learning. It is based on analyzing nonstationarity in temporal data by discriminating between time segments. The ensuing ``time-contrastive learning'' is easy to implement since it only uses ordinary neural network training: a multi-layer perceptron with logistic regression.
However, we showed that, surprisingly, it can estimate independent components in a nonlinear mixing model up to certain indeterminacies, assuming that the independent components are nonstationary in a suitable way. The indeterminacies include a linear mixing (which can be resolved by a further linear ICA step), and component-wise nonlinearities, such as squares or absolute values.
TCL also avoids the computation of the gradient of the Jacobian, which is a major problem with maximum likelihood estimation \cite{Dinh15}.
Our developments also give by far the strongest identifiability proof of nonlinear ICA in the literature. The indeterminacies actually reduce to just inevitable monotonic component-wise transformations in the case of modulated Gaussian sources. Thus, our results pave the way for further developments in nonlinear ICA, which has so far seriously suffered from the lack of almost any identifiability theory.
Experiments on real MEG found neuroscientifically interesting networks. Other promising future application domains include video data, econometric data, and biomedical data such as EMG and ECG, in which nonstationary variances seem to play a major role.
\clearpage
\small
\let\oldthebibliography=\thebibliography
\let\endoldthebibliography=\endthebibliography
\renewenvironment{thebibliography}[1]{\begin{oldthebibliography}{#1} \setlength{\itemsep}{.2ex}} {\end{oldthebibliography}}
\section{A family of subcritical Caffarelli-Kohn-Nirenberg interpolation inequalities}\label{Sec:Intro}
With the norms
\[
\nrm w{q,\gamma}:=\(\ird{|w|^q\,|x|^{-\gamma}}\)^{1/q}\,,\quad\nrm wq:=\nrm w{q,0}\,,
\]
let us define $\mathrm L^{q,\gamma}({\mathbb R}^d)$ as the space of all measurable functions $w$ such that $\nrm w{q,\gamma}$ is finite. Our functional framework is a space $\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)$ of functions $w\in\mathrm L^{p+1,\gamma}({\mathbb R}^d)$ such that $\nabla w\in\mathrm L^{2,\beta}({\mathbb R}^d)$, which is defined as the completion of the space $\mathcal D({\mathbb R}^d\setminus\{0\})$ of the smooth functions on ${\mathbb R}^d$ with compact support in~${\mathbb R}^d\setminus\{0\}$, with respect to the norm given by $\|w\|^2:=(p_\star-p)\,\nrm w{p+1,\gamma}^2+\nrm{\nabla w}{2,\beta}^2$.
Now consider the family of \emph{Caffarelli-Kohn-Nirenberg interpolation inequalities} given by\par\smallskip
\be{CKN}
\nrm w{2p,\gamma}\le{\mathsf C}_{\beta,\gamma,p}\,\nrm{\nabla w}{2,\beta}^\vartheta\,\nrm w{p+1,\gamma}^{1-\vartheta}\quad\forall\,w\in\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)\,.
\end{equation}
\par\medskip\noindent Here the parameters $\beta$, $\gamma$ and $p$ are subject to the restrictions
\be{parameters}
d\ge2\,,\quad\gamma-2<\beta<\frac{d-2}d\,\gamma\,,\quad\gamma\in(-\infty,d)\,,\quad p\in\(1,p_\star\right]\quad\mbox{with}\quad p_\star:=\frac{d-\gamma}{d-\beta-2}
\end{equation}
and the exponent $\vartheta$ is determined by the scaling invariance, \emph{i.e.},
\[\label{theta}
\vartheta=\frac{(d-\gamma)\,(p-1)}{p\,\big(d+\beta+2-2\,\gamma-p\,(d-\beta-2)\big)}\,.
\]
These inequalities have been introduced, among others, by L.~Caffarelli, R.~Kohn and L.~Nirenberg in~\cite{Caffarelli-Kohn-Nirenberg-84}. We observe that $\vartheta=1$ if $p=p_\star$, a case which has been dealt with in~\cite{DEL2015}, and we shall focus on the sub-critical case $p<p_\star$. Throughout this paper, ${\mathsf C}_{\beta,\gamma,p}$ denotes the optimal constant in \eqref{CKN}. We shall say that a function $w\in\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)$ is an \emph{extremal function} for~\eqref{CKN} if equality holds in the inequality.
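\medskip The value of $\vartheta$ is dictated by the behaviour of the three norms in~\eqref{CKN} under the scaling $w\mapsto w(\lambda\,\cdot)$; for the reader's convenience, this elementary computation can be checked symbolically with the short script below (ours, included only as an illustration, not part of any proof).
\begin{verbatim}
# Symbolic check that the exponent theta quoted above is the unique value
# making (CKN) invariant under the scaling w -> w(lambda x).
import sympy as sp

d, beta, gamma, p, theta = sp.symbols('d beta gamma p theta')

# Scaling exponents of the three norms under w_lambda(x) = w(lambda*x):
#   ||w_lambda||_{2p,gamma}    = lambda**((gamma-d)/(2*p)) * ||w||_{2p,gamma}
#   ||grad w_lambda||_{2,beta} = lambda**((2+beta-d)/2)    * ||grad w||_{2,beta}
#   ||w_lambda||_{p+1,gamma}   = lambda**((gamma-d)/(p+1)) * ||w||_{p+1,gamma}
lhs = (gamma - d) / (2 * p)
rhs = theta * (2 + beta - d) / 2 + (1 - theta) * (gamma - d) / (p + 1)

theta_scaling = sp.solve(sp.Eq(lhs, rhs), theta)[0]
theta_text = (d - gamma) * (p - 1) / (p * (d + beta + 2 - 2 * gamma - p * (d - beta - 2)))
print(sp.simplify(theta_scaling - theta_text))   # 0
\end{verbatim}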
\medskip\emph{Symmetry} in~\eqref{CKN} means that the equality case is achieved by Aubin-Talenti type functions
\[
w_\star(x)=\(1+|x|^{2+\beta-\gamma}\)^{-1/(p-1)}\quad\forall\,x\in{\mathbb R}^d\,.
\]
On the contrary, there is \emph{symmetry breaking} if this is not the case, because the equality case is then achieved by a non-radial extremal function. It has been proved in~\cite{2016arXiv160208319B} that \emph{symmetry breaking} holds in~\eqref{CKN}~if
\be{set-symm-breaking}
\gamma<0\quad\mbox{and}\quad\beta_{\rm FS}(\gamma)<\beta<\frac{d-2}d\,\gamma
\end{equation}
where
\[
\beta_{\rm FS}(\gamma):=d-2-\sqrt{(\gamma-d)^2-4\,(d-1)}\,.
\]
For completeness, we will give a short proof of this result in Section~\ref{Sec:SB}. Our main result shows that, under Condition~\eqref{parameters}, \emph{symmetry} holds in the complement of the set defined by~\eqref{set-symm-breaking}, which means that~\eqref{set-symm-breaking} is the sharp condition for \emph{symmetry breaking}. See Fig.~\ref{Fig}.
\par\smallskip\begin{theorem}\label{Thm:Main}{\sl Assume that~\eqref{parameters} holds and that
\be{Symmetry condition}
\beta\le\beta_{\rm FS}(\gamma)\quad\mbox{if}\quad\gamma<0\,.
\end{equation}
Then the extremal functions for~\eqref{CKN} are radially symmetric and, up to a scaling and a multiplication by a constant, equal to $w_\star$.}\end{theorem}\par\smallskip
\begin{figure}[ht]\begin{center}
\includegraphics[width=10cm]{Figure.pdf}
\caption{\label{Fig}\emph{In dimension $d=4$, with $p=1.2$, the grey area corresponds to the cone determined by $d-2+(\gamma-d)/p\le\beta<(d-2)\,\gamma/d$ and $\gamma\in(-\infty,d)$ in~\eqref{parameters}. The light grey area is the region of symmetry, while the dark grey area is the region of symmetry breaking. The threshold is determined by the hyperbola $(d-\gamma)^2-(\beta-d+2)^2-4\,(d-1)=0$ or, equivalently $\beta=\beta_{\rm FS}(\gamma)$. Notice that the condition $p\le p_\star$ induces the restriction $\beta\ge d-2+(\gamma-d)/p$, so that the region of symmetry is bounded. The largest possible cone is achieved as $p\to1$ and is limited from below by the condition $\beta>\gamma-2$.}}
\end{center}\end{figure}
The result is slightly stronger than just characterizing the range of $(\beta,\gamma)$ for which equality in~\eqref{CKN} is achieved by radial functions. Actually our method of proof allows us to analyze the symmetry properties not only of extremal functions of~\eqref{CKN}, but also of all positive solutions in $\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)$ of the corresponding Euler-Lagrange equations, that is, up to a multiplication by a constant and a dilation, of
\be{ELeq}
-\,\mbox{div}\,\big(|x|^{-\beta}\,\nabla w\big)=|x|^{-\gamma}\,\big(w^{2p-1}-\,w^p\big)\quad\mbox{in} \quad {\mathbb R}^d \setminus \{ 0 \}\,.
\end{equation}
\par\smallskip\begin{theorem}\label{Thm:Rigidity}{\sl Assume that~\eqref{parameters} and~\eqref{Symmetry condition} hold. Then all positive solutions to~\eqref{ELeq} in $\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)$ are radially symmetric and, up to a scaling and a multiplication by a constant, equal to $w_\star$.}\end{theorem}\par\smallskip
Up to a multiplication by a constant, we know that all non-trivial extremal functions for~\eqref{CKN} are non-negative solutions to~\eqref{ELeq}. Non-negative solutions to~\eqref{ELeq} are actually positive by the standard Strong Maximum principle. Theorem~\ref{Thm:Main} is therefore a consequence of Theorem~\ref{Thm:Rigidity}. In the particular case when $\beta=0$, the condition~\eqref{parameters} amounts to $d\ge2$, $\gamma\in (0,2)$, $p\in\big(1,(d-\gamma)/(d-2)\big]$, and~\eqref{CKN} can be written as
\[\label{CKN-beta0}
\nrm w{2p,\gamma}\le{\mathsf C}_{0,\gamma,p}\,\nrm{\nabla w}2^\vartheta\,\nrm w{p+1,\gamma}^{1-\vartheta}\quad\forall\,w\in\mathrm H^p_{0,\gamma}({\mathbb R}^d)\,.
\]
In this case, we deduce from Theorem~\ref{Thm:Main} that symmetry always holds. This is consistent with a previous result ($\beta=0$ and $\gamma>0$, close to $0$) obtained in~\cite{DMN2015}. A few other cases were already known. The Caffarelli-Kohn-Nirenberg inequalities that were discussed in~\cite{DEL2015} correspond to the critical case $\vartheta=1$, $p=p_\star$ or, equivalently, $\beta=d-2+(\gamma-d)/p$. Here by \emph{critical} we simply mean that $\nrm w{2p,\gamma}$ scales like $\nrm{\nabla w}{2,\beta}$. The limit case $\beta=\gamma-2$ and $p=1$, which is an endpoint for~\eqref{parameters}, corresponds to Hardy-type inequalities: there is no extremal function, but optimality is achieved among radial functions; see~\cite{0902}. The other endpoint is $\beta=(d-2)\,\gamma/d$, in which case $p_\star=d/(d-2)$. The results of Theorem~\ref{Thm:Main} also hold in that case with $p=p_\star=d/(d-2)$, up to existence issues: according to~\cite{Catrina-Wang-01}, either $\gamma\ge0$, symmetry holds and there exists a symmetric extremal function, or $\gamma<0$, and then symmetry is broken but there is no optimal function.
\medskip Inequality~\eqref{CKN} can be rewritten as an interpolation inequality with same weights on both sides using a change of variables. Here we follow the computations in~\cite{2016arXiv160208319B} (also see~\cite{DEL2015,dolbeault:hal-01286546}). Written in spherical coordinates for a function
\[
\widetilde w(r,\omega)=w(x)\,,\quad\mbox{with}\quad r=|x|\quad\mbox{and}\quad\omega=\frac x{|x|}\,,
\]
inequality~\eqref{CKN} becomes
\[
\(\irdsph r{|\widetilde w|^{2p}}{d-\gamma}\)^\frac1{2p}\le{\mathsf C}_{\beta,\gamma,p}\(\irdsph r{\left|\nabla \widetilde w\right|^2}{d-\beta}\)^\frac\vartheta2\(\irdsph r{|\widetilde w|^{p+1}}{d-\gamma}\)^\frac{1-\vartheta}{p+1}
\]
where $\left|\nabla\widetilde w\right|^2=\big|\tfrac{\partial\widetilde w}{\partial r}\big|^2+\tfrac1{r^2}\,\left|\nabla_{\kern-2pt\omega}\widetilde w\right|^2$ and $\nabla_{\kern-2pt\omega}\widetilde w$ denotes the gradient of $\widetilde w$ with respect to the angular variable $\omega\in\S^{d-1}$. Next we consider the change of variables $r\mapsto s=r^\alpha$,
\be{wv}
\widetilde w(r,\omega)=v(s,\omega)\quad\forall\,(r,\omega)\in{\mathbb R}^+\times\S^{d-1}
\end{equation}
where $\alpha$ and $n$ are two parameters such that
\[
n=\frac{d-\beta-2}\alpha+2=\frac{d-\gamma}\alpha\,.
\]
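These two relations determine $\alpha$ and $n$: subtracting one from the other gives $2\,\alpha=(d-\gamma)-(d-\beta-2)=\beta+2-\gamma$, so that
\[
\alpha=1+\frac{\beta-\gamma}2\quad\mbox{and}\quad n=\frac{d-\gamma}\alpha=2\,\frac{d-\gamma}{\beta+2-\gamma}\,,
\]
as will be used below. Under Condition~\eqref{parameters}, $\alpha$ is positive and $n>d$, the latter condition being equivalent to $\beta<\frac{d-2}d\,\gamma$.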
Our inequality can therefore be rewritten as
\begin{multline*}
\(\irdsph s{|v|^{2p}}n\)^\frac1{2p}\\
\le\mathsf K_{\alpha,n,p}\(\irdsph s{\(\alpha^2\,\big|\tfrac{\partial v}{\partial s}\big|^2+\tfrac1{s^2}\,|\nabla_{\kern-2pt\omega} v|^2\)}n\)^\frac\vartheta2\(\irdsph s{|v|^{p+1}}n\)^\frac{1-\vartheta}{p+1}\,,
\end{multline*}
\[
\mbox{with}\quad{\mathsf C}_{\beta,\gamma,p}=\alpha^\zeta\,\mathsf K_{\alpha,n,p}\quad\mbox{and}\quad\zeta:=\frac\vartheta2+\frac{1-\vartheta}{p+1}-\frac1{2\,p}=\frac{(\beta+2-\gamma)\,(p-1)}{2\,p\,\big(d+\beta+2-2\,\gamma-p\,(d-\beta-2)\big)}\,.
\]
Using the notation
\[
\DD v=\(\alpha\,\frac{\partial v}{\partial s},\frac1s\,\nabla_{\kern-2pt\omega}v\)\,,
\]
with
\[\label{Eqn:alpha-n}
\alpha=1+\frac{\beta-\gamma}2\quad\mbox{and}\quad n=2\,\frac{d-\gamma}{\beta+2-\gamma}\,,
\]
inequality~\eqref{CKN} is equivalent to a Gagliardo-Nirenberg type inequality corresponding to an artificial dimension~$n$ or, to be precise, to a Caffarelli-Kohn-Nirenberg inequality with weight $|x|^{n-d}$ in all terms. Notice that
\[
p_\star=\frac n{n-2}\,.
\]
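Indeed, $n-2=2\,\frac{d-\beta-2}{\beta+2-\gamma}$, so that
\[
\frac n{n-2}=\frac{d-\gamma}{d-\beta-2}=p_\star\,,
\]
consistently with the fact that the critical case $p=p_\star$ corresponds to $\beta=d-2+(\gamma-d)/p$.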
\par\smallskip\begin{corollary}\label{Cor:Main}{\sl Assume that $\alpha$, $n$ and $p$ are such that
\[\label{parameters1}
d\ge2\,,\quad\alpha>0\,,\quad n>d\quad\mbox{and}\quad p\in\(1,p_\star\right]\,.
\]
Then the inequality
\be{CKN1}
\nrm v{2p,d-n}\le\mathsf K_{\alpha,n,p}\,\nrm{\DD v}{2,d-n}^\vartheta\,\nrm v{p+1,d-n}^{1-\vartheta}\quad\forall\,v\in\mathrm H^p_{d-n,d-n}({\mathbb R}^d)\,,
\end{equation}
holds with optimal constant $\mathsf K_{\alpha,n,p}=\alpha^{-\kern0.5pt\zeta}\,{\mathsf C}_{\beta,\gamma,p}$ as above and optimality is achieved among radial functions if and only if
\be{SymmetryCondition1}
\alpha\le\alpha_{\rm FS}\quad\mbox{with}\quad\alpha_{\rm FS}:=\sqrt{\frac{d-1}{n-1}}\,.
\end{equation}
When symmetry holds, optimal functions are equal, up to a scaling and a multiplication by a constant, to
\[
v_\star(x):=\(1+|x|^2\)^{-1/(p-1)}\quad\forall\,x\in{\mathbb R}^d\,.
\]
}\end{corollary}\par\smallskip
We may notice that neither $\alpha_{\rm FS}$ nor $\beta_{\rm FS}$ depends on $p$ and that the curve $\alpha=\alpha_{\rm FS}$ determines the same threshold for the symmetry breaking region as in the critical case $p=p_\star$. In the case $p=p_\star$, this curve was found by V.~Felli and M.~Schneider, who proved in~\cite{Felli-Schneider-03} the linear instability of all radial critical points if $\alpha>\alpha_{\rm FS}$. When $p=p_\star$, symmetry holds under Condition~\eqref{SymmetryCondition1} as was proved in~\cite{DEL2015}. Our goal is to extend this last result to the subcritical regime $p\in(1,p_\star)$.
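For the reader's convenience, let us check that Condition~\eqref{SymmetryCondition1} is indeed the counterpart of~\eqref{Symmetry condition} in the original variables. With $\alpha=1+\frac{\beta-\gamma}2$ and $n=2\,\frac{d-\gamma}{\beta+2-\gamma}$, a direct computation gives
\[
\alpha^2\,(n-1)=\tfrac14\,(\beta+2-\gamma)\,(2\,d-2-\beta-\gamma)=\tfrac14\left[(d-\gamma)^2-(\beta-d+2)^2\right],
\]
so that $\alpha\le\alpha_{\rm FS}$ holds if and only if $(d-\gamma)^2-(\beta-d+2)^2\le4\,(d-1)$. When $\gamma<0$, this is precisely $\beta\le\beta_{\rm FS}(\gamma)$, while under Condition~\eqref{parameters} with $\gamma\ge0$ it is automatically satisfied.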
\medskip The change of variables $s=r^\alpha$ is an important intermediate step, because it allows us to recast the problem as a more standard interpolation inequality in which the \emph{dimension} $n$ is, however, not necessarily an integer. Actually $n$ plays the role of a dimension in view of the scaling properties of the inequalities and, with respect to this \emph{dimension}, they are critical if $p=p_\star$ and sub-critical otherwise. The critical case $p=p_\star$ has been studied in~\cite{DEL2015} using tools of entropy methods, a critical fast diffusion flow and, in particular, a reformulation in terms of a \emph{generalized Fisher information}. In the subcritical range, we shall replace the entropy by a \emph{R\'enyi entropy power} as in~\cite{MR3200617,1501}, and make use of the corresponding fast diffusion flow. As in~\cite{DEL2015}, the flow is used only at a heuristic level in order to produce a well-adapted test function. The core of the method is based on the Bakry-Emery computation, also known as the \emph{carr\'e du champ method}, which is well adapted to optimal interpolation inequalities: see for instance~\cite{MR3155209} for a general exposition of the method and~\cite{MR3229793,dolbeault:hal-01206975} for its use in the presence of nonlinear flows. Also see~\cite{MR1853037} for earlier considerations on the Bakry-Emery method applied to nonlinear flows and related functional inequalities in unbounded domains. However, in non-compact manifolds and in the presence of weights, integrations by parts have to be justified. In the critical case, one can rely on an additional invariance to use an Emden-Fowler transformation and rewrite the problem as an autonomous equation on a cylinder, which greatly simplifies the estimates. In the subcritical regime, estimates have to be adapted since, after the Emden-Fowler transformation,
the problem in the cylinder is no longer autonomous.
This paper is organized as follows. We recall the computations which characterize the linear instability of radially symmetric minimizers in Section~\ref{Sec:SB}. In Section~\ref{RenyiStrategy}, we expose the strategy for proving symmetry in the subcritical regime when there are no weights. Section~\ref{Sec:BE} is devoted to the Bakry-Emery computation applied to R\'enyi entropy powers, in presence of weights. This provides a proof of our main results, if we admit that no boundary term appears in the integrations by parts in Section~\ref{Sec:BE}. To prove this last result, regularity and decay estimates of positive solutions to~\eqref{ELeq} are established in Section~\ref{Sec:RegDecay}, which indeed show that no boundary term has to be taken into account (see Proposition~\ref{Prop:b}).
\section{Symmetry breaking}\label{Sec:SB}
For completeness, we summarize known results on symmetry breaking for~\eqref{CKN}. Details can be found in~\cite{2016arXiv160208319B}. With the notations of Corollary~\ref{Cor:Main}, let us define the functional
\[
\mathcal J[v]:=\vartheta\,\log\(\nrm{\DD v}{2,d-n}\)+(1-\vartheta)\,\log\(\nrm v{p+1,d-n}\)+\log\mathsf K_{\alpha,n,p}-\log\(\nrm v{2p,d-n}\)
\]
obtained by taking the difference of the logarithm of the two terms in~\eqref{CKN1}. Let us define $d\mu_\delta:=\mu_\delta(x)\,dx$, where
\[
\mu_\delta(x):=\frac1{(1+|x|^2)^\delta}\,.
\]
Since~$v_\star$ as defined in Corollary~\ref{Cor:Main} is a critical point of $\mathcal J$, a Taylor expansion at order $\varepsilon^2$ shows that
\[
\nrm{\DD v_\star}{2,d-n}^2\,\mathcal J\big[v_\star+\varepsilon\,\mu_{\delta/2}\,f\big]=\tfrac12\,\varepsilon^2\,\vartheta\,\mathcal Q[f]+o(\varepsilon^2)
\]
with $\delta=\frac{2\,p}{p-1}$ and
\[
\mathcal Q[f]=\irdmu{|\DD f|^2\,|x|^{n-d}}_\delta-\,\frac{4\,p\,\alpha^2}{p-1}\irdmu{|f|^2\,|x|^{n-d}}_{\delta+1}\,.
\]
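Note that, with $\delta=\frac{2\,p}{p-1}$, the condition $\delta\ge n$ required in Proposition~\ref{Prop:SpectralGap} below is satisfied in our setting: indeed
\[
\frac{2\,p}{p-1}\ge n\quad\Longleftrightarrow\quad p\le\frac n{n-2}=p_\star\,.
\]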
The following \emph{Hardy-Poincar\'e inequality} has been established in~\cite{2016arXiv160208319B}.
\par\smallskip\begin{proposition}\label{Prop:SpectralGap}{\sl Let $d\ge2$, $\alpha\in(0,+\infty)$, $n>d$ and $\delta\ge n$. Then
\be{Hardy-Poincare1}
\irdmu{|\DD f|^2\,|x|^{n-d}}_\delta\ge\Lambda\irdmu{|f|^2\,|x|^{n-d}}_{\delta+1}
\end{equation}
holds for any $f\in\mathrm L^2({\mathbb R}^d,|x|^{n-d}\,d\mu_{\delta+1})$, with $ \DD f \in \mathrm L^2({\mathbb R}^d,|x|^{n-d}\,d\mu_{\delta}) $, such that $\irdmu{f\,|x|^{n-d}}_{\delta+1}=0$, with an optimal constant $\Lambda$ given by
\[\label{Eqn:SpectralGap}
\Lambda=\left\{\begin{array}{rl}
2\,\alpha^2\,(2\,\delta-n)\quad&\mbox{if}\quad0<\alpha^2\le\frac{(d-1)\,\delta^2}{n\,(2\,\delta-n)\,(\delta-1)}\,,\\[6pt]
2\,\alpha^2\,\delta\,\eta\quad&\mbox{if}\quad\alpha^2>\frac{(d-1)\,\delta^2}{n\,(2\,\delta-n)\,(\delta-1)}\,,
\end{array}
\right.
\]
where $\eta$ is the unique positive solution to
\[
\eta\,(\eta+n-2)=\frac{d-1}{\alpha^2}\,.
\]
Moreover, $\Lambda$ is achieved by a non-trivial eigenfunction corresponding to the equality in~\eqref{Hardy-Poincare1}. If $\alpha^2>\frac{(d-1)\,\delta^2}{n\,(2\,\delta-n)\,(\delta-1)}$, the eigenspace is generated by $\varphi_i(s,\omega)=s^\eta\,\omega_i$, with $i=1$, $2$, \ldots, $d$, and the eigenfunctions are not radially symmetric, while in the other case the eigenspace is generated by the radially symmetric eigenfunction $\varphi_0(s,\omega)=s^2-\frac n{2\,\delta-n}$.}\end{proposition}\par\smallskip
As a consequence, $\mathcal Q$ is a nonnegative quadratic form if and only if $\frac{4\,p\,\alpha^2}{p-1}\le\Lambda$. Otherwise, $\mathcal Q$ takes negative values, and a careful analysis shows that symmetry breaking occurs in~\eqref{CKN} if
\[
2\,\alpha^2\,\delta\,\eta<\frac{4\,p\,\alpha^2}{p-1}\quad\Longleftrightarrow\quad\eta<1\,,
\]
which means
\[
\frac{d-1}{\alpha^2}=\eta\,(\eta+n-2)<n-1\,,
\]
and this is equivalent to $\alpha>\alpha_{\rm FS}$.
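Here we use that $\eta\mapsto\eta\,(\eta+n-2)$ is increasing on $(0,+\infty)$, so that $\eta<1$ is indeed equivalent to $\eta\,(\eta+n-2)<n-1$. Moreover, $\eta$ is explicitly given by
\[
\eta=\frac12\(\sqrt{(n-2)^2+\frac{4\,(d-1)}{\alpha^2}}-(n-2)\)\,.
\]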
\section{The strategy for proving symmetry without weights}\label{RenyiStrategy}
Before going into the details of the proof we explain the strategy for the case of the Gagliardo-Nirenberg inequalities without weights. There are several ways to compute the optimizers, and the relevant papers are~\cite{MR1940370,MR1777035,MR1986060,MR1853037,MR3155209,1501} (also see additional references therein). The inequality is of the form
\be{ordinary}
\nrm w{2p}\le{\mathsf C}_{0,0,p}\,\nrm{\nabla w}2^\vartheta\,\nrm w{p+1}^{1-\vartheta}\quad\mbox{with}\quad1<p<\frac d{d-2}
\end{equation}
and
\[
\vartheta=\frac{d\,(p-1)}{p\,\big(d+2-p\,(d-2)\big)}\,.
\]
It is known through the work in~\cite{MR1940370} that the optimizers of this inequality are, up to multiplications by a constant, scalings and translations, given by
\[
w_\star(x)=\(1+|x|^2\)^{-\frac1{p-1}}\quad\forall\,x\in{\mathbb R}^d\,.
\]
In our perspective, the idea is to use a version of the \emph{carr\'e du champ} or \emph{Bakry-Emery method} introduced in~\cite{Bakry-Emery85}: by differentiating a relevant quantity along the flow, we recover the inequality in a form which turns out to be sharp. The version of the \emph{carr\'e du champ} we shall use is based on the \emph{R\'enyi entropy powers} whose concavity as a function of $t$ has been studied by M.~Costa in~\cite{MR823597} in the case of linear diffusions (see~\cite{MR3200617} and references therein for more recent papers). In~\cite{MR1768665}, C.~Villani observed that the \emph{carr\'e du champ} method gives a proof of the logarithmic Sobolev inequality in the Blachman-Stam form, also known as the Weissler form: see~\cite{MR0188004,MR479373}. G.~Savar\'e and G.~Toscani observed in~\cite{MR3200617} that the concavity also holds in the nonlinear case, which has been used in~\cite{1501} to give an alternative proof of the Gagliardo-Nirenberg inequalities, that we are now going to sketch.
The first step consists in reformulating the inequality in new variables. We set
\[
u=w^{2p}\,,
\]
which is equivalent to $w=u^{m-1/2}$, and consider the flow given by
\be{ordinaryflow}
\frac{\partial u}{\partial t}=\Delta u^m\,,
\end{equation}
where $m$ is related to $p$ by
\[
p=\frac1{2\,m-1}\,.
\]
The inequalities $1<p<\frac d{d-2}$ imply that
\be{ordinaryrange}
1-\frac1d<m<1\,.
\end{equation}
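Indeed, inverting the relation between $p$ and $m$ gives
\[
m=\frac{p+1}{2\,p}=\frac12+\frac1{2\,p}\,,
\]
so that $p>1$ amounts to $m<1$, while $p<\frac d{d-2}$ amounts to $m>\frac12+\frac{d-2}{2\,d}=1-\frac1d$.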
For some positive constant $\kappa>0$, one easily finds that the so-called Barenblatt-Pattle functions
\[\label{ordinaryselfsimilar}
u_\star(t,x)=\kappa^d\,t^{-\frac d{d\,m-d+2}}\,w_\star^{2p}\(\kappa\,t^{-\frac1{d\,m-d+2}}\,x\)=\(a+b\,|x|^2\)^{-\frac1{1-m}}
\]
are self-similar solutions of \eqref{ordinaryflow}, where $a=a(t)$ and $b=b(t)$ are explicit. Thus $u_\star^{m-1/2}$ coincides with $w_\star$ up to a scaling and a multiplication by a constant, hence is an optimizer for~\eqref{ordinary} for all $t$, and it makes sense to rewrite~\eqref{ordinary} in terms of the function $u$. Straightforward computations show that~\eqref{ordinary} can be brought into the form
\be{ordinaryu}
\(\ird u\)^{(\sigma+1)\,m-1}\le C\,\mathcal E^{\sigma-1}\,\mathcal I\quad\mbox{where}\quad\sigma=\frac2{d\,(1-m)}-1
\end{equation}
for some constant $C$ which does not depend on $u$, where
\[
\mathcal E:=\ird{u^m}
\]
is a \emph{generalized Ralston-Newman entropy}, also known in the literature as \emph{Tsallis entropy}, and
\[
\mathcal I:=\ird{u\,|\nabla\mathsf P|^2}
\]
is the corresponding \emph{generalized Fisher information}. Here we have introduced the \emph{pressure variable}
\[
\mathsf P=\frac m{1-m}\,u^{m-1}\,.
\]
The \emph{R\'enyi entropy power} is defined by
\[
\mathcal F:=\mathcal E^\sigma
\]
as in~\cite{MR3200617,1501}. With the above choice of $\sigma$, $\mathcal F$ is an affine function of $t$ if $u=u_\star$. For an arbitrary solution of~\eqref{ordinaryflow}, we aim at proving that $\mathcal F$ is a concave function of $t$ and that it is affine if and only if $u=u_\star$. For further references on related issues see~\cite{MR1940370,MR2282669}. Note that one of the motivations for choosing the variable $\mathsf P$ is that it has a particularly simple form for the self-similar solutions, namely
\[
\mathsf P_{\kern -1pt\star}=\frac m{1-m}\(a+b\,|x|^2\)\,.
\]
Differentiating $\mathcal E$ along the flow~\eqref{ordinaryflow} yields
\[
\mathcal E'=(1-m)\,\mathcal I\,,
\]
so that
\[
\mathcal F'=\sigma\,(1-m)\,\mathcal G\quad\mbox{with}\quad\mathcal G:=\mathcal E^{\sigma-1}\,\mathcal I\,.
\]
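At a formal level, disregarding boundary terms in the integrations by parts, the identity $\mathcal E'=(1-m)\,\mathcal I$ simply follows from~\eqref{ordinaryflow} and $\nabla\mathsf P=-\,m\,u^{m-2}\,\nabla u$:
\[
\mathcal E'=\ird{m\,u^{m-1}\,\Delta u^m}=-\,m\,(m-1)\ird{u^{m-2}\,\nabla u\cdot\nabla u^m}=(1-m)\ird{u\,|\nabla\mathsf P|^2}=(1-m)\,\mathcal I\,.
\]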
More complicated is the derivative for the Fisher information:
\[
\mathcal I'=-\,2\ird{u^m\,\left[{\rm Tr}\(\(\mathrm{Hess}\,\mathsf P-\tfrac1d\,\Delta\mathsf P\,{\rm Id}\)^2\)+\(m-1+\tfrac1d\)(\Delta\mathsf P)^2\right]}\,.
\]
Here $\mathrm{Hess}\,\mathsf P$ and $\mathrm{Id}$ are respectively the Hessian of $\mathsf P$ and the $(d\times d)$ identity matrix. The computation can be found in~\cite{1501}. Next we compute the second derivative of the \emph{R\'enyi entropy power} $\mathcal F$ with respect to $t$:
\[
\frac{(\mathcal F)''}{\sigma\,\mathcal E^\sigma}=(\sigma-1)\,\frac{\mathcal E'^2}{\mathcal E^2}+\frac{\mathcal E''}{\mathcal E}=(\sigma-1)\,(1-m)^2\,\frac{\mathcal I^2}{\mathcal E^2}+(1-m)\,\frac{\mathcal I'}{\mathcal E}=:(1-m)\,\mathcal H\,.
\]
With $\sigma=\frac2d\,\frac1{1-m}-1$, we obtain
\be{ordinaryexpression}
\mathcal H=-\,2\,\left\langle{\rm Tr}\(\(\mathrm{Hess}\,\mathsf P-\tfrac1d\,\Delta\mathsf P\,{\rm Id}\)^2\)\right\rangle+(1-m)\,(1-\sigma)\,\left\langle\(\Delta\mathsf P-\langle\Delta\mathsf P\rangle\)^2\right\rangle
\end{equation}
where we have used the notation
\[
\langle A\rangle:=\frac{\ird{u^m\,A}}{\ird{u^m}}\,.
\]
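In~\eqref{ordinaryexpression} we have also used, again up to boundary terms, the elementary identity
\[
\mathcal I=\ird{u\,|\nabla\mathsf P|^2}=-\ird{\nabla(u^m)\cdot\nabla\mathsf P}=\ird{u^m\,\Delta\mathsf P}=\mathcal E\,\langle\Delta\mathsf P\rangle\,,
\]
which allows us to replace $\mathcal I/\mathcal E$ by $\langle\Delta\mathsf P\rangle$ in the first term of $\mathcal H$.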
Note that by~\eqref{ordinaryrange}, we have that $\sigma >1$ and hence we find that $\mathcal F''=(\mathcal E^\sigma)''\le0$, which also means that $\mathcal G=\mathcal E^{\sigma-1}\,\mathcal I$ is a non-increasing function. In fact it is strictly decreasing unless $\mathsf P$ is a polynomial function of order two in~$x$ and it is easy to see that the expression~\eqref{ordinaryexpression} vanishes precisely when $\mathsf P$ is of the form $a+b\,|x-x_0|^2$, where $a$, $b\in{\mathbb R}$, $x_0\in{\mathbb R}^d$ are constants (but $a$ and $b$ may still depend on $t$).
Thus, while the left-hand side of~\eqref{ordinaryu} stays constant along the flow, the right-hand side decreases. In~\cite{1501} it was shown that the right-hand side decreases towards the value given by the self-similar solutions~$u_\star$, which proves~\eqref{ordinary} in sharp form. In our work we pursue a different tactic. The variational equation for the optimizers of~\eqref{ordinary} is given~by
\[
-\Delta w=a\,w^{2\,p-1}-b\,w^p\,.
\]
A straightforward computation shows that this can be written in the form
\[
2\,m\,u^{m-2}\,{\rm div}\big(u\,\nabla\mathsf P\big)+|\nabla\mathsf P|^2+c_1\,u^{m-1}=c_2
\]
for some constants $c_1$, $c_2$ whose precise values are explicit. This equation can also be interpreted as the variational equation for the sharp constant in~\eqref{ordinaryu}. Hence, multiplying the above equation by $\Delta u^m$ and integrating yields
\[
\ird{\left[2\,m\,u^{m-2}\,{\rm div}\big(u\,\nabla\mathsf P\big)+|\nabla\mathsf P|^2\right]\,\Delta u^m}+c_1\ird{u^{m-1}\,\Delta u^m}=c_2\ird{\Delta u^m}=0\,.
\]
We recover the fact that, in the flow picture, $\mathcal H$ is, up to a positive factor, the derivative of $\mathcal G$ and hence vanishes. From the observations made above we conclude that $\mathsf P$ must be a polynomial function of order two in~$x$. In this fashion one obtains more than just the optimizers, namely a classification of all positive solutions of the variational equation. The main technical problem with this method is the justification of the integrations by parts, which in the case at hand, without any weight, does not offer great difficulties: see for instance~\cite{MR1853037}. This strategy can also be used to treat the problem with weights, which will be explained next. Dealing with weights, however, requires some special care as we shall see.
\section{The Bakry-Emery computation and R\'enyi entropy powers in the weighted case}\label{Sec:BE}
Let us adapt the above strategy to the case where there are weights in all integrals entering into the inequality, that is, let us deal with inequality~\eqref{CKN1} instead of inequality~\eqref{ordinary}. In order to define a new, well-adapted fast diffusion flow, we introduce the diffusion operator $\L=-\,\mathsf D_\alpha^*\,\DD$, which is given in spherical coordinates~by
\[
\L u=\alpha^2\(u''+\frac{n-1}s\,u'\)+\frac1{s^2}\,\Delta_\omega\,u
\]
where $\Delta_\omega$ denotes the Laplace-Beltrami operator acting on the $(d-1)$-dimensional sphere $\S^{d-1}$ of the angular variables, and $'$ here denotes the derivative with respect to $s$. Consider the fast diffusion equation
\be{FDE}
\frac{\partial u}{\partial t}=\L u^m
\end{equation}
in the subcritical range $1-\frac1n<m=1-\frac1\nu<1$. The exponents $m$ in~\eqref{FDE} and $p$ in~\eqref{CKN1} are related as in Section~\ref{RenyiStrategy} by
\[
p=\frac1{2\,m-1}\quad\Longleftrightarrow\quad m=\frac{p+1}{2\,p}
\]
and $\nu$ is defined by
\[
\nu:=\frac1{1-m}\,.
\]
We consider the \emph{Fisher information} defined as
\[
\mathcal I[\mathsf P]:=\irdmu{u\,|\DD\mathsf P|^2}\quad\mbox{with}\quad\mathsf P=\frac m{1-m}\,u^{m-1}\quad\mbox{and}\quad d\mu=s^{n-1}\,ds\,d\omega=s^{n-d}\,dx\,.
\]
Here $\mathsf P$ is the \emph{pressure variable}. Our goal is to prove that $\mathsf P$ takes the form $a+b\,s^2$, as in Section~\ref{RenyiStrategy}. It is useful to observe that~\eqref{FDE} can be rewritten as
\[
\frac{\partial u}{\partial t}=\mathsf D_\alpha^*\(u\,\DD\mathsf P\)
\]
and, in order to compute $\frac{d\mathcal I}{d\kern-0.5pt t}$, we will also use the fact that $\mathsf P$ solves
\be{Eqn:p2}
\frac{\partial\mathsf P}{\partial t}=(1-m)\,\mathsf P\,\L\mathsf P-\,|\DD\mathsf P|^2\,.
\end{equation}
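Identity~\eqref{Eqn:p2} follows from~\eqref{FDE} by a direct computation. Since $\mathsf P=\frac m{1-m}\,u^{m-1}$, we have $\frac{\partial\mathsf P}{\partial t}=-\,m\,u^{m-2}\,\L u^m$ and $\DD\mathsf P=-\,m\,u^{m-2}\,\DD u$, so that, using the chain rule $\L\,f(u)=f'(u)\,\L u+f''(u)\,|\DD u|^2$,
\[
\frac{\partial\mathsf P}{\partial t}=-\,m^2\,u^{2\,m-3}\,\L u-\,m^2\,(m-1)\,u^{2\,m-4}\,|\DD u|^2=(1-m)\,\mathsf P\,\L\mathsf P-|\DD\mathsf P|^2\,,
\]
as can be checked by expanding the right-hand side in the same way.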
\subsection{First step: computation of~$\frac{d\kern-0.5pt\mathcal I}{d\kern-0.5pt t}$} Let us define
\[
\mathcal K[\mathsf P]:=\mathcal A[\mathsf P]-(1-m)\,(\L\mathsf P)^2\quad\mbox{where}\quad\mathcal A[\mathsf P]:=\frac12\,\L\,|\DD\mathsf P|^2-\,\DD\mathsf P\cdot\DD\L\mathsf P
\]
and, on the boundary of the centered ball $B_s$ of radius $s$, the boundary term
\begin{multline}\label{b}
\mathsf b(s):=\idB s{\Big(\tfrac\partial{\partial s}\(\mathsf P^{\frac m{m-1}}\,|\DD\mathsf P|^2\)-\,2\,(1-m)\,\mathsf P^{\frac m{m-1}}\,\mathsf P'\,\L\mathsf P\Big)}\\
=\isphone s{\Big(\tfrac\partial{\partial s}\(\mathsf P^{\frac m{m-1}}\,|\DD\mathsf P|^2\)-\,2\,(1-m)\,\mathsf P^\frac m{m-1}\,\mathsf P'\,\L\mathsf P\Big)}\,,
\end{multline}
where by $ d\varsigma = s^{n-1}\,d\omega $ we denote the standard Hausdorff measure on $ \partial B_s $.
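Notice that $u^m=\big(\tfrac{1-m}m\big)^{\frac m{m-1}}\,\mathsf P^{\frac m{m-1}}$, so that $\mathsf b(s)$ coincides, up to a positive multiplicative constant, with the boundary terms which appear in the proof of Lemma~\ref{Lem:DerivFisherL} below.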
\par\smallskip\begin{lemma}\label{Lem:DerivFisherL}{\sl If $u$ solves~\eqref{FDE} and if
\be{FisherDerBC}
\lim_{s\to0_+}\mathsf b(s)=\lim_{S\to+\infty}\mathsf b(S)=0\,,
\end{equation}
then,
\be{BLWL}
\frac d{d\kern-0.5pt t}\,\mathcal I[\mathsf P]=-\,2\irdmu{\mathcal K[\mathsf P]\,u^m}\,.
\end{equation}
}\end{lemma}\par\smallskip
\begin{proof} For $0<s<S<+\infty$, let us consider the set $A_{(s,S)}:=\left\{x\in{\mathbb R}^d\,:\,s<|x|<S\right\}$, so that $\partial A_{(s,S)}=\partial B_s\cup\partial B_S$. Using~\eqref{FDE} and~\eqref{Eqn:p2}, we can compute
\begin{eqnarray*}
\hspace*{2cm}&&\hspace*{-2cm}\frac d{d\kern-0.5pt t}\iring{u\,|\DD\mathsf P|^2}\\
&=&\iring{\frac{\partial u}{\partial t}\,|\DD\mathsf P|^2}+\,2\iring{u\,\DD\mathsf P\cdot\DD\frac{\partial\mathsf P}{\partial t}}\\
&=&\iring{\L(u^m)\,|\DD\mathsf P|^2}+\,2\iring{u\,\DD\mathsf P\cdot\DD\Big((1-m)\,\mathsf P\,\L\mathsf P-|\DD\mathsf P|^2\Big)}\\
&=&\iring{u^m\,\L|\DD\mathsf P|^2}+\,2\,(1-m)\iring{u\,\mathsf P\,\DD\mathsf P\cdot\DD\L\mathsf P}\\
&&\hspace*{1cm}+\,2\,(1-m)\iring{u\,\DD\mathsf P\cdot\DD\mathsf P\,\L\mathsf P}-\,2\iring{u\,\DD\mathsf P\cdot\DD|\DD\mathsf P|^2}\\
&&+\,\alpha^2\,\idB S{\((u^m)'\,|\DD\mathsf P|^2 - u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)\)}\\
&&\quad-\,\alpha^2\,\idB s{\((u^m)'\,|\DD\mathsf P|^2-u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)\)}\\
&=&-\iring{u^m\,\L|\DD\mathsf P|^2}+\,2\,(1-m)\iring{u\,\mathsf P\,\DD\mathsf P\cdot\DD\L\mathsf P}\\
&&+\,2\,(1-m)\iring{u\,\DD\mathsf P\cdot\DD\mathsf P\,\L\mathsf P}\\
&&+\,\alpha^2\,\idB S{\((u^m)'\,|\DD\mathsf P|^2+u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)\)}\\
&&\quad-\,\alpha^2\,\idB s{\((u^m)'\,|\DD\mathsf P|^2+u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)\)}\,,
\end{eqnarray*}
where the last line is given by an integration by parts, upon exploiting the identity $u\,\DD\mathsf P=-\,\DD(u^m)$:
\begin{multline*}
\iring{u\,\DD\mathsf P\cdot\DD|\DD\mathsf P|^2}=-\iring{\DD(u^m)\cdot\DD|\DD\mathsf P|^2}\\
=\iring{u^m\,\L|\DD\mathsf P|^2}
-\,\alpha^2\,\idB S{u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)}
+\alpha^2\,\idB s{u^m\,\tfrac\partial{\partial s}\,(|\DD\mathsf P|^2)}\,.
\end{multline*}
1) Using the definition of $\mathcal A[\mathsf P]$, we get that
\be{Id1}
-\iring{u^m\,\L|\DD\mathsf P|^2}=-\,2\iring{u^m\,\mathcal A[\mathsf P]}-\,2\iring{u^m\,\DD\mathsf P\cdot\DD\L\mathsf P}\,.
\end{equation}
2) Taking advantage again of $u\,\DD\mathsf P=-\,\DD(u^m)$, an integration by parts gives
\begin{multline*}
\hspace*{-12pt}\iring{u\,\DD\mathsf P\cdot\DD\mathsf P\,\L\mathsf P}=-\iring{\DD(u^m)\cdot\DD\mathsf P\,\L\mathsf P}\\
=\iring{u^m\,(\L\mathsf P)^2}\,+\iring{u^m\,\DD\mathsf P\cdot\DD\L\mathsf P}\\
-\,\alpha^2\,\idB S{u^m\,\mathsf P'\L\mathsf P}+\alpha^2\,\idB s{u^m\,\mathsf P'\L\mathsf P}\,.
\end{multline*}
and, with $u\,\mathsf P=\frac m{1-m}\,u^m$, we find that
\begin{multline}
2\,(1-m)\iring{u\,\mathsf P\,\DD\mathsf P\cdot\DD\L\mathsf P}+\,2\,(1-m)\iring{u\,\DD\mathsf P\cdot\DD\mathsf P\,\L\mathsf P}\\
=\,2\,(1-m)\iring{u^m\,(\L\mathsf P)^2}+2\iring{u^m\,\DD\mathsf P\cdot\DD\L\mathsf P}\hspace*{4cm}\\
-\,2\,(1-m)\,\alpha^2\,\idB S{u^m\,\mathsf P'\L\mathsf P}+2\,(1-m)\,\alpha^2\,\idB s{u^m\,\mathsf P'\L\mathsf P}\,.\label{Id2}
\end{multline}
Summing~\eqref{Id1} and~\eqref{Id2}, using~\eqref{b} and passing to the limits as $s\to0_+$, $S\to+\infty$, establishes~\eqref{BLWL}.\end{proof}
\subsection{Second step: two remarkable identities.} Let us define
\[
\mathsf k[\mathsf P]:=\tfrac12\,\Delta_\omega\,|\nabla_\omega\mathsf P|^2-\nabla_\omega\mathsf P\cdot\nabla_\omega\Delta_\omega\,\mathsf P-\tfrac1{n-1}\,(\Delta_\omega\,\mathsf P)^2-(n-2)\,\alpha^2\,|\nabla_\omega\mathsf P|^2
\]
and
\[
\mathcal R[\mathsf P]:=\mathcal K[\mathsf P]-\(\frac1n-(1-m)\)\(\L\mathsf P\)^2\,.
\]
We observe that
\[
\mathcal R[\mathsf P]=\frac12\,\L\,|\DD\mathsf P|^2-\,\DD\mathsf P\cdot\DD\L\mathsf P-\frac1n\,(\L\mathsf P)^2
\]
is independent of $m$. We recall the result of~\cite[Lemma~5.1]{DEL2015} and give its proof for completeness.
\par\smallskip\begin{lemma}\label{Lem:Derivmatrixform1}{\sl Let $d\in{\mathbb N}$, $n\in{\mathbb R}$ such that $n>d\ge2$, and consider a function $\mathsf P\in C^3({\mathbb R}^d\setminus\{0\})$. Then,
\[
\mathcal R[\mathsf P]=\alpha^4\(1-\frac1n\)\left[\mathsf P''-\frac{\mathsf P'}s-\frac{\Delta_\omega\,\mathsf P}{\alpha^2\,(n-1)\,s^2}\right]^2+\frac{2\,\alpha^2}{s^2}\left|\nabla_\omega\mathsf P'-\frac{\nabla_\omega\mathsf P}s \right|^2+\frac{\mathsf k[\mathsf P]}{s^4}\,.
\]}\end{lemma}\par\smallskip
\begin{proof} By definition of $\mathcal R[\mathsf P]$, we have
\begin{eqnarray*}
\mathcal R[\mathsf P]&=&\frac{\alpha^2}2\left[\alpha^2\,\mathsf P'^2+\frac{|\nabla_\omega\mathsf P|^2}{s^2}\right]''+\frac{\alpha^2}2\frac{n-1}s
\left[\alpha^2\,\mathsf P'^2+\frac{|\nabla_\omega\mathsf P|^2}{s^2}\right]'+\frac1{2\,s^2}\,\Delta_\omega\left[\alpha^2\,\mathsf P'^2+\frac{|\nabla_\omega\mathsf P|^2}{s^2}\right]\\
&&-\,\alpha^2\,\mathsf P'\(\alpha^2\,\mathsf P''+\alpha^2\,\frac{n-1}s\,\mathsf P'+\frac{\Delta_\omega\,\mathsf P}{s^2}\)'-\frac1{s^2}
\nabla_\omega\mathsf P\cdot\nabla_\omega\(\alpha^2\,\mathsf P''+\alpha^2\,\frac{n-1}s\,\mathsf P'+\frac{\Delta_\omega\,\mathsf P}{s^2}\)\\
&&-\,\frac 1n\(\alpha^2\,\mathsf P''+\alpha^2\,\frac{n-1}s\,\mathsf P'+\frac{\Delta_\omega\,\mathsf P}{s^2}\)^2\,,
\end{eqnarray*}
which can be expanded as
\begin{eqnarray*}
\mathcal R[\mathsf P]&=&\frac{\alpha^2}2\left[ 2\,\alpha^2\,\mathsf P''^2+2\,\alpha^2\,\mathsf P'\,\mathsf P'''+2\,\frac{|\nabla_\omega\mathsf P'|^2+\nabla_\omega\mathsf P\cdot\nabla_\omega\mathsf P''}{s^2}
-8\,\frac{\nabla_\omega\mathsf P\cdot\nabla_\omega\mathsf P'}{s^3}+6\,\frac{|\nabla_\omega\mathsf P|^2}{s^4}\right]\\
&&+\,\alpha^2\,\frac{n-1}s\left[\alpha^2\,\mathsf P'\,\mathsf P''+\frac{\nabla_\omega\mathsf P\cdot\nabla_\omega\mathsf P'}{s^2}-\frac{|\nabla_\omega\mathsf P|^2}{s^3}\right]+\frac1{s^2}\left[\alpha^2\,\mathsf P'\Delta_\omega\,\mathsf P'+\alpha^2\,|\nabla_\omega\mathsf P'|^2+\frac{\Delta_\omega\,|\nabla_\omega\mathsf P|^2}{2\,s^2}\right]\\
&&-\,\alpha^2\,\mathsf P'\(\alpha^2\,\mathsf P'''+\alpha^2\,\frac{n-1}s\,\mathsf P''-\,\alpha^2\,\frac{n-1}{s^2}\mathsf P'-2\,\frac{\Delta_\omega\,\mathsf P}{s^3}+\frac{\Delta_\omega\,\mathsf P'}{s^2}\)\\
&&\hspace*{2cm}-\frac1{s^2}
\(\alpha^2\,\nabla_\omega\mathsf P\cdot\nabla_\omega\mathsf P''+\alpha^2\,\frac{n-1}s\nabla_\omega\mathsf P\cdot\nabla_\omega\mathsf P'+\frac{\nabla_\omega\mathsf P\cdot\nabla_\omega\Delta_\omega\,\mathsf P}{s^2}\)\\
&&-\,\frac 1n\left[\alpha^4\,\mathsf P''^2+\alpha^4\,\frac{(n-1)^2}{s^2}\,\mathsf P'^2+\frac{(\Delta_\omega\,\mathsf P)^2}{s^4}+2\,\alpha^4\,\frac{n-1}s\,\mathsf P'\,\mathsf P''+2\,\alpha^2\,\frac{\mathsf P''\Delta_\omega\,\mathsf P}{s^2}+2\,\alpha^2\,\frac{n-1}{s^3}\mathsf P'\Delta_\omega\,\mathsf P\right]\,.
\end{eqnarray*}
Collecting terms proves the result.\end{proof}
\medskip Now let us study the quantity $\mathsf k[\mathsf P]$ which appears in the statement of Lemma~\ref{Lem:Derivmatrixform1}.
The following computations are adapted from~\cite{MR3229793} and~\cite[Section~5]{DEL2015}. For completeness, we give a simplified proof in the special case of the sphere $(\S^{d-1},g)$ considered as a Riemannian manifold with standard metric~$g$. We denote by $\mathrm Hf$ the \emph{Hessian} of~$f$, which is seen as a $(d-1)\times(d-1)$ matrix, identify its trace with the Laplace-Beltrami operator on $\S^{d-1}$ and use the notation $\|\mathrm A\|^2:=\mathrm A:\mathrm A$ for the sum of the squares of the coefficients of the matrix~$\mathrm A$. It is convenient to define the \emph{trace free Hessian}, the tensor $\mathrm Zf$ and its trace free counterpart respectively~by
\[
\mathrm Lf:=\mathrm Hf-\frac1{d-1}\,(\Delta_\omega f)\,g\,,\quad\mathrm Zf:=\frac{\nabla_\omega f\otimes\nabla_\omega f}f\quad\mbox{and}\quad\mathrm Mf:=\mathrm Zf-\frac1{d-1}\,\frac{|\nabla_\omega f|^2}f\,g
\]
whenever $f\neq0$. Elementary computations show that
\be{TraceFree}
\|\mathrm Lf\|^2=\|\mathrm Hf\|^2-\frac1{d-1}\,(\Delta_\omega f)^2\quad\mbox{and}\quad\|\mathrm Mf\|^2=\|\mathrm Zf\|^2-\frac1{d-1}\,\frac{|\nabla_\omega f|^4}{f^2}=\frac{d-2}{d-1}\,\frac{|\nabla_\omega f|^4}{f^2}\,.
\end{equation}
The Bochner-Lichnerowicz-Weitzenb\"ock formula on $\S^{d-1}$ takes the simple form
\be{BLW}
\tfrac12\,\Delta_\omega\,(|\nabla_\omega f|^2)=\|\mathrm Hf\|^2+\nabla_\omega(\Delta_\omega f)\cdot\nabla_\omega f+(d-2)\,|\nabla_\omega f|^2
\end{equation}
where the last term, \emph{i.e.}, $ \mathrm{Ric}(\nabla_\omega f,\nabla_\omega f)=(d-2)\,|\nabla_\omega f|^2$, accounts for the Ricci curvature tensor contracted with \hbox{$\nabla_\omega f \otimes\nabla_\omega f$}.
We recall that $\alpha_{\rm FS}:=\sqrt{\frac{d-1}{n-1}}$ and $\nu=1/(1-m)$. Let us introduce the notations
\[
\delta:=\frac1{d-1}-\frac1{n-1}
\]
and
\[
\mathcal B[\mathsf P]:=\isph{\(\tfrac12\,\Delta_\omega(|\nabla_\omega\mathsf P|^2)-\nabla_\omega(\Delta_\omega\mathsf P)\cdot\nabla_\omega\mathsf P-\tfrac1{n-1}\,(\Delta_\omega\mathsf P)^2\)\,\mathsf P^{1-\nu}}\,,
\]
so that
\[
\isph{\mathsf k[\mathsf P]\,\mathsf P^{1-\nu}}=\mathcal B[\mathsf P]-(n-2)\,\alpha^2\isph{|\nabla_\omega\mathsf P|^2\,\mathsf P^{1-\nu}}\,.
\]
\par\smallskip\begin{lemma}\label{kappapositive}{\sl Assume that $d\ge2$ and $1/(1-m)=\nu>n>d$. There exists a positive constant $c(n,m,d)$ such that, for any positive function $\mathsf P\in C^3({\S^{d-1}})$,
\[
\isph{\mathsf k[\mathsf P]\,\mathsf P^{1-\nu}}\ge(n-2)\,\big(\alpha_{\rm FS}^2-\,\alpha^2\big)\isph{|\nabla_\omega\mathsf P|^2\,\mathsf P^{1-\nu}}+c(n,m,d)\isph{\frac{|\nabla_\omega\mathsf P|^4}{\mathsf P^2}\,\mathsf P^{1-\nu}}\,.
\]}\end{lemma}\par\smallskip
\begin{proof} If $d=2$, we identify $\S^1$ with $[0,2\pi)\ni\theta$ and denote by $\mathsf P_\theta$ and $\mathsf P_{\theta\theta}$ the first and second derivatives of $\mathsf P$ with respect to $\theta$. As in~\cite[Lemma~5.3]{DEL2015}, a direct computation shows that
\[
\mathsf k[\mathsf P]=\frac{n-2}{n-1}\,|\mathsf P_{\theta\theta}|^2-(n-2)\,\alpha^2\,|\mathsf P_\theta|^2=(n-2)\,\(\alpha_{\rm FS}^2\,|\mathsf P_{\theta\theta}|^2-\,\alpha^2\,\,|\mathsf P_\theta|^2\)\,.
\]
By the Poincar\'e inequality, we have
\[
\icircle{\left|\frac\partial{\partial\theta}\(\mathsf P^\frac{1-\nu}2\,\mathsf P_\theta\)\right|^2}\ge\icircle{\left|\mathsf P^\frac{1-\nu}2\,\mathsf P_\theta\right|^2}\,.
\]
On the other hand, an integration by parts shows that
\[
\icircle{\mathsf P_{\theta\theta}\,\frac{|\mathsf P_\theta|^2}{\mathsf P}\,\mathsf P^{1-\nu}}=\frac 13\icircle{\frac\partial{\partial\theta}\(|\mathsf P_\theta|^2\,\mathsf P_\theta\)\,\mathsf P^{-\nu}}=\frac\nu3\icircle{\frac{|\mathsf P_\theta|^4}{\mathsf P^2}\,\mathsf P^{1-\nu}}
\]
and, as a consequence, by expanding the square, we obtain
\[
\icircle{\left|\frac\partial{\partial\theta}\(\mathsf P^\frac{1-\nu}2\,\mathsf P_\theta\)\right|^2}=\icircle{\left|\mathsf P_{\theta\theta}+\frac{1-\nu}2\,\frac{|\mathsf P_\theta|^2}{\mathsf P}\right|^2\,\mathsf P^{1-\nu}}=\icircle{|\mathsf P_{\theta\theta}|^2\,\mathsf P^{1-\nu}}-\,\frac{(\nu-1)\,(\nu+3)}{12}\icircle{\frac{|\mathsf P_\theta|^4}{\mathsf P^2}\,\mathsf P^{1-\nu}}\,.
\]
The result follows with $c(n,m,2)=\frac{n-2}{n-1}\,\frac1{12}\,(\nu-1)\,(\nu+3)=\frac{n-2}{n-1}\,\frac{m\,(4-3\,m)}{12\,(1-m)^2}$ from
\[\label{Pincd2}
\icircle{|\mathsf P_{\theta\theta}|^2\,\mathsf P^{1-\nu}}\ge\icircle{|\mathsf P_\theta|^2\,\mathsf P^{1-\nu}}+\frac{(\nu-1)\,(\nu+3)}{12}\icircle{\frac{|\mathsf P_\theta|^4}{\mathsf P^2}\,\mathsf P^{1-\nu}}\,.
\]
Assume next that $d\ge3$. We follow the method of~\cite[Lemma~5.2]{DEL2015}. Applying~\eqref{BLW} with $f=\mathsf P$ and multiplying by $\mathsf P^{1-\nu}$ yields, after an integration on $\S^{d-1}$, that $\mathcal B[\mathsf P]$ can also be written as
\[
\mathcal B[\mathsf P]=\isph{ \(\|\mathrm H\mathsf P\|^2+(d-2)\,|\nabla_\omega\mathsf P|^2-\tfrac1{n-1}\,(\Delta_\omega\mathsf P)^2\)\,\mathsf P^{1-\nu}}\,.
\]
We recall that $n>d\ge3$ and set $\mathsf P=f^\beta$ with $\beta=\frac2{3-\nu}$ (in this proof only, $\beta$ denotes this exponent and not the weight appearing in~\eqref{CKN}). A straightforward computation shows that $\mathrm H f^\beta=\beta\,f^{\beta-1}\,\big(\mathrm H f+(\beta-1)\,\mathrm Z f\big)$ and hence
\begin{multline*}
\mathcal B[\mathsf P]=\beta^2 \isph{ \(\|\mathrm H f+(\beta-1)\,\mathrm Z f\|^2+(d-2)\,|\nabla_\omega f|^2-\tfrac1{n-1}\,\big(\mathrm{Tr}\,(\mathrm H f+(\beta-1)\,\mathrm Z f)\big)^2\)}\\
=\beta^2 \isph{ \(\|\mathrm L f+(\beta-1)\,\mathrm M f\|^2+(d-2)\,|\nabla_\omega f|^2+\delta\,\big(\mathrm{Tr}\,(\mathrm H f+(\beta-1)\,\mathrm Z f)\big)^2\)}\,.
\end{multline*}
Using~\eqref{TraceFree}, we deduce from
\begin{multline*}
\isph{\Delta_\omega\,f\,\frac{|\nabla_\omega f|^2}{f}}=\isph{ \frac{|\nabla_\omega f|^4}{f^2} }-2 \isph{\mathrm H f:\mathrm Z f}\\
=\frac{d-1}{d-2}\,\isph{\|\mathrm Mf\|^2}-2 \isph{\mathrm L f:\mathrm Z f}-\frac{2}{d-1} \isph{ \Delta_\omega\,f\,\frac{|\nabla_\omega f|^2}f}
\end{multline*}
that
\begin{multline*}
\isph{\Delta_\omega\,f\,\frac{|\nabla_\omega f|^2}{f}}=\frac{d-1}{d+1}\left[\isph{\frac{d-1}{d-2}\,\|\mathrm Mf\|^2}-2 \isph{\mathrm L f:\mathrm Z f} \right]\\
=\frac{d-1}{d+1}\left[\isph{\frac{d-1}{d-2}\,\|\mathrm Mf\|^2}-2 \isph{\mathrm L f:\mathrm M f} \right]
\end{multline*}
on the one hand, and from~\eqref{BLW} integrated on $\S^{d-1}$ that
\[
\isph{(\Delta_\omega\,f)^2}=\frac{d-1}{d-2} \isph{\|\mathrm L f\|^2}+(d-1)\isph{|\nabla_\omega f|^2}
\]
on the other hand. Hence we find that
\begin{multline*}
\isph{\big(\mathrm{Tr}\,(\mathrm H f+(\beta-1)\,\mathrm Z f)\big)^2}=\isph{\((\Delta_\omega\,f)^2+2\,(\beta-1)\,\Delta_\omega\,f\,\frac{|\nabla_\omega f|^2}{f}+(\beta-1)^2\,\frac{|\nabla_\omega f|^4}{f^2}\)}\\
\hspace*{1cm}=\frac{d-1}{d-2} \isph{\|\mathrm L f\|^2}+(d-1)\isph{|\nabla_\omega f|^2}\\
\hspace*{6cm}+2\,(\beta-1)\,\frac{d-1}{d+1}\left[\isph{\frac{d-1}{d-2}\,\|\mathrm Mf\|^2}-2 \isph{\mathrm L f:\mathrm M f} \right]\\
+(\beta-1)^2\,\frac{d-1}{d-2}\isph{\|\mathrm Mf\|^2}\,.
\end{multline*}
Altogether, we obtain
\[
\mathcal B[\mathsf P]=\beta^2\isph{\Big(\mathsf a\,\|\mathrm L f\|^2+\,2\,\mathsf b\,\mathrm L f:\mathrm M f+\,\mathsf c\,\|\mathrm M f\|^2\Big)}+\beta^2\,\big(d-2+\delta\,(d-1)\big)\isph{|\nabla_\omega f|^2}
\]
where
\[
\mathsf a=1+\delta\,\frac{d-1}{d-2}\,,\quad\mathsf b=(\beta-1)\,\(1-\,2\,\delta\,\frac{d-1}{d+1}\)\quad\mbox{and}\quad\mathsf c=(\beta-1)^2\,\(1+\delta\,\frac{d-1}{d-2}\)+2\,(\beta-1)\,\frac{\delta\,(d-1)^2}{(d+1)\,(d-2)}\,.
\]
A tedious but elementary computation shows that
\[
\mathcal B[\mathsf P]=\mathsf a\,\beta^2\isph{\left\|\mathrm L f+\tfrac{\mathsf b}{\mathsf a}\;\mathrm M f\right\|^2}+\big(\mathsf c-\tfrac{\mathsf b^2}{\mathsf a}\big)\,\beta^2\isph{\left\|\mathrm M\,f\right\|^2}+\beta^2\,(n-2)\,\alpha_{\rm FS}^2\isph{|\nabla_\omega f|^2}
\]
can be written in terms of $ \mathsf P$ as
\[
\mathcal B[\mathsf P]=\isph{\mathrm Q[\mathsf P]\,\mathsf P^{1-\nu}}+(n-2)\,\alpha_{\rm FS}^2\isph{|\nabla_\omega\mathsf P|^2\,\mathsf P^{1-\nu}}
\]
where
\[
\mathrm Q[\mathsf P]:=\alpha_{\rm FS}^2\,\frac{n-2}{d-2}\,{\left\|\mathrm L\mathsf P+\tfrac{3\,(\nu-1)\,(n-d)}{(d+1)\,(n-2)\,(\nu-3)}\;\mathrm M\mathsf P\right\|^2}+\tfrac{(d-1)\,(\nu-1)\,(n-d)\,[((4\,d-5)\,n+d-8)\,\nu+9\,(n-d)]}{(d-2)\,(d+1)^2\,(\nu-3)^2\,(n-2)\,(n-1)}\,\|\mathrm M\mathsf P\|^2
\]
is positive definite. This concludes the proof in the case $d\ge3$ with $c(n,m,d)=\frac{m\,(n-d)\,[4\,(d+1)\,(n-2)-9\,m\,(n-d)]}{(d+1)^2\,(3\,m-2)^2\,(n-2)\,(n-1)}$.\end{proof}
Let us recall that
\[
\mathcal K[\mathsf P]=\mathcal R[\mathsf P]+\(\frac1n-(1-m)\)\(\L\mathsf P\)^2\,.
\]
We can collect the two results of Lemmas~\ref{Lem:Derivmatrixform1} and~\ref{kappapositive} as follows.
\par\smallskip\begin{corollary}\label{Cor:TwoIdentities}{\sl Let $d\in{\mathbb N}$, $n\in{\mathbb R}$ be such that $n>d\ge2$, and consider a positive function $\mathsf P\in C^3({\mathbb R}^d\setminus\{0\})$. If $u$ is related to $\mathsf P$ by $\mathsf P=\frac m{1-m}\,u^{m-1}$ for some $m\in(1-\frac1n,1)$, then there exists a positive constant $c(n,m,d)$ such that
\begin{multline*}
\irdmu{\mathcal R[\mathsf P]\,u^m}\ge\alpha^4\(1-\frac1n\)\irdmu{\left[\mathsf P''-\frac{\mathsf P'}s-\frac{\Delta_\omega\,\mathsf P}{\alpha^2\,(n-1)\,s^2}\right]^2\,u^m}+2\,\alpha^2\irdmu{\frac1{s^2}\,\left|\nabla_\omega\mathsf P'-\frac{\nabla_\omega\mathsf P}s \right|^2\,u^m}\\
+(n-2)\,\big(\alpha_{\rm FS}^2-\,\alpha^2\big)\irdmu{\frac1{s^4}\,|\nabla_\omega\mathsf P|^2\,u^m} + c(n,m,d)\irdmu{\frac1{s^4}\,\frac{|\nabla_\omega\mathsf P|^4}{\mathsf P^2}\,u^m} \,.
\end{multline*}}\end{corollary}\par\smallskip
\subsection{Third step: concavity of the R\'enyi entropy powers and consequences}
We keep investigating the properties of the flow defined by~\eqref{FDE}. Let us define the \emph{entropy} as
\[
\mathcal E:=\irdmu{u^m}
\]
and observe that
\[
\mathcal E'=(1-m)\,\mathcal I
\]
if $u$ solves~\eqref{FDE}, after integrating by parts. The fact that boundary terms do not contribute, \emph{i.e.},
\be{FisherBC}
\lim_{s\to0_+}\idB s{u^m\,\mathsf P'}=\lim_{S\to+\infty}\idB S{u^m\,\mathsf P'}=0
\end{equation}
will be justified in Section~\ref{Sec:RegDecay}: see Proposition~\ref{Prop:b}. Note that we use $ ' $ both for differentiation with respect to~$t$ and with respect to~$s$, at least when this does not create any ambiguity. As in Section~\ref{RenyiStrategy}, we introduce the \emph{R\'enyi entropy power}
\[
\mathcal F:=\mathcal E^\sigma
\]
for some exponent $\sigma$ to be chosen later, and find that $\mathcal F'=\sigma\,(1-m)\,\mathcal G$ where $\mathcal G:=\mathcal E^{\sigma-1}\,\mathcal I$. With $\mathcal H:=\mathcal E^{-\sigma}\,\mathcal G'$, by using Lemma~\ref{Lem:DerivFisherL}, we also find that $\mathcal E^{-\sigma}\,\mathcal F''=\sigma\,(1-m)\,\mathcal H$ where
\begin{multline*}
\mathcal E^2\,\mathcal H=\mathcal E^{2-\sigma}\,\mathcal G'=\frac1{\sigma\,(1-m)}\,\mathcal E^{2-\sigma}\,\mathcal F''=(1-m)\,(\sigma-1)\(\irdmu{u\,|\DD\mathsf P|^2}\)^2-\,2\irdmu{u^m}\irdmu{\mathcal K[\mathsf P]\,u^m}\\
=(1-m)\,(\sigma-1)\(\irdmu{u\,|\DD\mathsf P|^2}\)^2-\,2\,\(\frac1n-(1-m)\)\irdmu{u^m}\irdmu{\(\L\mathsf P\)^2\,u^m}\hspace*{-2cm}\\
-\,2\irdmu{u^m}\irdmu{\mathcal R[\mathsf P]\,u^m}
\end{multline*}
if $\lim_{s\to0_+}\mathsf b(s)=\lim_{S\to+\infty}\mathsf b(S)=0$. Using $u\,\DD\mathsf P=-\,\DD(u^m)$, we know that
\[
\irdmu{u\,|\DD\mathsf P|^2}=-\irdmu{\DD(u^m)\cdot\DD\mathsf P}=\irdmu{u^m\,\L\mathsf P}
\]
and so, with the choice
\[
\sigma=\frac2n\,\frac1{1-m}-1\,,
\]
we may argue as in Section~\ref{RenyiStrategy} and get that
\[
\mathcal E^2\,\mathcal H+(1-m)\,(\sigma-1)\,\mathcal E \irdmu{u^m\,\left|\L\mathsf P - \frac{\irdmu{u\,|\DD\mathsf P|^2}}{\irdmu{u^m}}\right|^2}+\,2\,\mathcal E \irdmu{\mathcal R[\mathsf P]\,u^m}=0
\]
if $\lim_{s\to0_+}\mathsf b(s)=\lim_{S\to+\infty}\mathsf b(S)=0$. So, if $\alpha\le\alpha_{\rm FS}$ and $\mathsf P$ is of class $C^3$, by Corollary~\ref{Cor:TwoIdentities}, as a function of $t$, $\mathcal F$ is concave, that is, $\mathcal G=\mathcal E^{\sigma-1}\,\mathcal I$ is non-increasing in $t$. Formally, $\mathcal G$ converges towards a minimum, for which necessarily $\L\mathsf P$ is a constant and $\mathcal R[\mathsf P]=0$, which proves that $\mathsf P(x)=\mathsf a+\mathsf b\,|x|^2$ for some real constants $\mathsf a$ and~$\mathsf b$, according to Corollary~\ref{Cor:TwoIdentities}. Since $\frac{2\,(1-\vartheta)}{\vartheta\,(p+1)}=\sigma-1$, the minimization of $\mathcal G$ under the mass constraint $\irdmu u=\irdmu{v^{2p}}$ is equivalent to the \emph{Caffarelli-Kohn-Nirenberg interpolation inequalities}~\eqref{CKN}, since for some constant~$\kappa$,
\[
\mathcal G=\mathcal E^{\sigma-1}\,\mathcal I=\kappa\,\(\irdmu{v^{p+1}}\)^{\sigma-1}\,\irdmu{|\DD v|^2}\quad\mbox{with}\quad v=u^{m-1/2}\,.
\]
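Indeed, $u=v^{2\,p}$ and $2\,p\,m=p+1$, so that $u^m=v^{p+1}$, while
\[
u\,|\DD\mathsf P|^2=m^2\,u^{2\,m-3}\,|\DD u|^2=\Big(\tfrac{2\,m}{2\,m-1}\Big)^2|\DD v|^2=(p+1)^2\,|\DD v|^2\,,
\]
so that one can take $\kappa=(p+1)^2$.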
We emphasize that~\eqref{FDE} preserves mass, that is, $\frac d{d\kern-0.5pt t}\irdmu{v^{2p}}=\frac d{d\kern-0.5pt t}\irdmu u=\irdmu{\L u^m}=0$ because, as we shall see in Proposition~\ref{Prop:b}, no boundary terms appear when integrating by parts if $v$ is an extremal function associated with~\eqref{CKN1}. In particular, for mass conservation we need
\be{MassBC}
\lim_{s\to0_+}\idB s{u\,\mathsf P'}=\lim_{S\to+\infty}\idB S{u\,\mathsf P'}=0 \, .
\end{equation}
The above remarks on the monotonicity of $\mathcal G$ and the symmetry properties of its minimizers can in fact be extended to the analysis of the symmetry properties of all critical points of $\mathcal G$. This is actually the contents of Theorem~\ref{Thm:Rigidity}.
\medskip\noindent\emph{Proof of Theorem~\ref{Thm:Rigidity}.} Let $w$ be a positive solution of equation~\eqref{ELeq}. As pointed out above, by choosing
\[
w(x)=u^{m-1/2}(r^\alpha,\omega)\,,
\]
we know that $u$ is a critical point of $\mathcal G$ under a mass constraint on $\irdmu u$, so that we can write the corresponding Euler-Lagrange equation as $\mathrm d\mathcal G[u]=C$, for some constant $C$. That is, $\irdmu{\mathrm d\mathcal G[u]\cdot\L u^m }= C\,\irdmu{\L u^m }=0$ thanks to~\eqref{MassBC}. Using $\L u^m$ as a test function amounts to applying the flow of~\eqref{FDE} to $\mathcal G$ with initial datum $u$ and computing the derivative with respect to $t$ at $t=0$. This means
\begin{multline*}
0=\irdmu{\mathrm d\mathcal G[u]\cdot\L u^m}=\mathcal E^\sigma\,\mathcal H\\
=-\,(1-m)\,(\sigma-1)\,\mathcal E^{\sigma-1}\irdmu{u^m\,\left|\L\mathsf P-\frac{\irdmu{u\,|\DD\mathsf P|^2}}{\irdmu{u^m}}\right|^2}-\,2\,\mathcal E^{\sigma-1}\irdmu{\mathcal R[\mathsf P]\,u^m}
\end{multline*}
if $\lim_{s\to0_+}\mathsf b(s)=\lim_{S\to+\infty}\mathsf b(S)=0$ and~\eqref{FisherBC} holds. Here we have used Lemma~\ref{Lem:DerivFisherL}. We emphasize that this proof is purely variational and does not rely on the properties of the solutions to~\eqref{FDE}, although using the flow was very useful to explain our strategy. All we need is that no boundary term appears in the integrations by parts. Hence, in order to obtain a complete proof, we have to prove that~\eqref{FisherDerBC},~\eqref{FisherBC} and~\eqref{MassBC} hold with $\mathsf b$ defined by~\eqref{b}, whenever $u$ is a critical point of $\mathcal G$ under mass constraint. This will be done in Proposition~\ref{Prop:b}. Using Corollary~\ref{Cor:TwoIdentities}, we know that $\mathcal R[\mathsf P]=0$, $\nabla_\omega\mathsf P=0$ a.e.~in~${\mathbb R}^d$ and $\L\mathsf P=\frac{\irdmu{u\,|\DD\mathsf P|^2}}{\irdmu{u^m}}$ a.e.~in~${\mathbb R}^d$, with $\mathsf P=\frac m{1-m}\,u^{m-1}$. We conclude as in~\cite[Corollary 5.5]{DEL2015} that $\mathsf P$ is an affine function of~$s^2$.
\hfill\ \qed
\section{Regularity and decay estimates}\label{Sec:RegDecay}
In this last section we prove the regularity and decay estimates on $w$ (or on $\mathsf P$ or $u$) that are necessary to establish the absence of boundary terms in the integrations by parts of Section~\ref{Sec:BE}.
\par\smallskip\begin{proposition}\label{Prop:b}{\sl Under Condition~\eqref{parameters}, if $w$ is a positive solution in $\mathrm H^p_{\beta,\gamma}({\mathbb R}^d)$ of~\eqref{ELeq}, then~\eqref{FisherDerBC},~\eqref{FisherBC} and~\eqref{MassBC} hold with $\mathsf b$ as defined by~\eqref{b}, $u=v^{2p}$ and $v$ given by~\eqref{wv}.}\end{proposition}\par\smallskip
To prove this result, we split the proof into several steps: we will first establish a uniform bound and a decay rate for $w$, inspired by~\cite{DMN2015}, in Lemmas~\ref{Lem:estimates1} and~\ref{Lem:estimates2}, and then follow the methodology of~\cite[Appendix]{DEL2015} in the subsequent Lemma~\ref{Lem:decayinRd}.
\par\smallskip\begin{lemma}\label{Lem:estimates1}{\sl Let $\beta$, $\gamma$ and $p$ satisfy the relations~\eqref{parameters}. Any positive solution $w$ of~\eqref{ELeq} such that
\be{energyintegrals}
\nrm w{2p,\gamma}+\nrm{\nabla w}{2,\beta}+\nrm w{p+1,\gamma}^{1-\vartheta}<+\infty
\end{equation}
is uniformly bounded and tends to $0$ at infinity, uniformly in $|x|$.}\end{lemma}\par\smallskip
\begin{proof} The strategy of the first part of the proof is similar to the one in~\cite[Lemma 3.1]{DMN2015}, which was restricted to the case $\beta=0$.
Let us set $\delta_0:=2\,(p_\star-p)$. For any $A>0$, we multiply~\eqref{ELeq} by $(w\wedge A)^{1+\delta_0}$ and integrate by parts (or, equivalently, plug it into the weak formulation of~\eqref{ELeq}): we point out that the latter is indeed an admissible test function since $ w \in \mathrm H^p_{\beta,\gamma}({\mathbb R}^d) $. In that way, by letting $ A \to+\infty $, we obtain the identity
\[\label{eq: prima-stima-H1}
\frac{4\,(1+\delta_0)}{(2+\delta_0)^2}\ird{\left|\nabla{w^{1+\delta_0/2}}\right|^2\,|x|^{-\beta}}+\ird{w^{p+1+\delta_0}\,|x|^{-\gamma}}=\ird{w^{2p+\delta_0}\,|x|^{-\gamma}}\,.
\]
By applying~\eqref{CKN} with $p=p_\star$ (so that $\vartheta=1$) to the function $w^{1+{\delta_0}/2}$, we deduce that
\[
\nrm{w}{2p+\delta_1,\gamma}^{2+\delta_0}\le\frac{(2+\delta_0)^2}{4\,(1+\delta_0)}\,{\mathsf C}_{\beta,\gamma,p_\star}^2\,\nrm{w}{2p+\delta_0,\gamma}^{2p+\delta_0}
\]
with $2\,p+\delta_1=p_\star\,(2+\delta_0)$. Let us define the sequence $\{\delta_n\}$ by the induction relation $\delta_{n+1}:=p_\star\,(2+\delta_n)-2\,p$ for any $n\in{\mathbb N}$, that is,
\[\label{eq: stima-Lq-weighted-rec-solved}
\textstyle\delta_n=2\,\frac{p_\star-p}{p_\star-1}\(p_\star^{n+1}-1\)\quad\forall\,n\in{\mathbb N}\,,
\]
and take $q_n=2\,p+\delta_n$. If we repeat the above estimates with $\delta_0$ replaced by $\delta_n$ and $\delta_1$ replaced by $\delta_{n+1}$, we get
\[
\nrm{w}{q_{n+1},\gamma}^{2+\delta_n}\le\frac{(2+\delta_n)^2}{4\,(1+\delta_n)}\,{\mathsf C}_{\beta,\gamma,p_\star}^2\,\nrm{w}{q_n,\gamma}^{q_n}\,.
\]
By iterating this estimate, we obtain the estimate
\[
\nrm{w}{q_n,\gamma}\le C_n\,\nrm{w}{2p_\star,\gamma}^{\zeta_n}\quad\mbox{with}\quad\zeta_n=\frac{(p_\star-1)\,p_\star^n}{p-1+(p_\star-p)\,p_\star^n}
\]
where the sequence $\{C_n\}$ is defined by $C_0=1$ and
\[
C_{n+1}^{2+\delta_n}=\frac{(2+\delta_n)^2}{4\,(1+\delta_n)}\,{\mathsf C}_{\beta,\gamma,p_\star}^2\,C_n^{q_n}\quad\forall\,n\in{\mathbb N}\,.
\]
The sequence $\{C_n\}$ converges to a finite limit $C_\infty$. Letting $n\to\infty$ we obtain the uniform bound
\[\label{eq: stima-infty-indip}
\nrm{w}\infty\le C_\infty\,\nrm{w}{2p_\star,\gamma}^{\zeta_\infty}\le C_\infty\({\mathsf C}_{\beta,\gamma,p_\star}\,\nrm{\nabla w}{2,\beta}\)^{\zeta_\infty}\le C_\infty\({\mathsf C}_{\beta,\gamma,p_\star}\,\nrm w{2p,\gamma}^p\)^{\zeta_\infty}
\]
where $\zeta_\infty=\frac{p_\star-1}{p_\star-p}=\lim_{n\to\infty}\zeta_n$.
\medskip In order to prove that $\lim_{|x|\to+\infty}w(x)=0$, we can suitably adapt the above strategy. We shall do it as follows: we truncate the solution so that the truncated function is supported outside of a ball of radius $R_0$ and apply the iteration scheme. Up to an enlargement of the ball, that is, outside of a ball of radius $R_\infty=a\,R_0$ for some fixed numerical constant $a>1$, we get that $\left\| w \right\|_{\mathrm L^{\infty}(B_{R_\infty}^c)}$ is bounded by the energy localized in $B_{R_0}^c$. The conclusion will hold by letting $ R_0\to+\infty$. Let us give some details.
Let $ \xi\in C^\infty({\mathbb R}^+) $ be a cut-off function such that $ 0 \le \xi \le 1 $, $ \xi \equiv 0 $ in $ [0,1) $ and $ \xi \equiv 1 $ in $ (2,+\infty) $. Given $ R_0 \ge 1 $, consider the sequence of radii defined by
\[
R_{n+1}=\( 1+\frac1{n^2} \) R_n \quad \forall\,n\in \mathbb{N}\,.
\]
By taking logarithms, it is immediate to deduce that $ \{ R_n \} $ is monotone increasing and that there exists $ a>1 $ such that
\[\label{lim-Rn}
R_\infty:=\lim_{n \to \infty} R_n = a\,R_0\,.
\]
Let us then define the sequence of radial cut-off functions $ \{ \xi_n \} $ by
\[
\xi_n(x) := \xi^2\!\( \frac{|x|-R_n}{R_{n+1}-R_n}+1 \) \quad \forall\,x \in{\mathbb R}^d\,.
\]
Direct computations show that there exists some constant $c>0$, which is independent of $n$ and $R_0$, such that
\be{estimates-cutoff}
\left| \nabla \xi_n(x) \right| \le c\,\frac{n^2}{R_n}\,\chi_{B_{R_{n+1}} \setminus B_{R_{n}}}\,, \quad \left| \nabla \xi_n^{1/2}(x) \right| \le c\,\frac{n^2}{R_n}\,\chi_{B_{R_{n+1}} \setminus B_{R_{n}}}\,, \quad \left| \Delta \xi_n(x) \right| \le c\,\frac{n^4}{R_n^2}\,\chi_{B_{R_{n+1}} \setminus B_{R_{n}}} \quad \forall\,x \in{\mathbb R}^d\,.
\end{equation}
{}From here on we denote by $c$, $c'$, \emph{etc.} positive constants which are all independent of $n$ and $R_0$. We now introduce the analogue of the sequence $ \{ \delta_n \} $ above, which we relabel $ \{ \sigma_n \} $ to avoid confusion. Namely, we set $ \sigma_0:=2\,p-2 $ and $ \sigma_{n+1}=p_\star\,(2+\sigma_n)-2 $, so that $ \sigma_n=2\,(p\,p_\star^{n}-1) $. If we multiply~\eqref{ELeq} by $ \xi_n w^{1+\sigma_n} $ and integrate by parts, we obtain:
\[
\ird{\nabla{\( \xi_n\,w^{1+\sigma_n} \)} \cdot \nabla{w}\,|x|^{-\beta}}+\ird{\xi_n\,w^{p+1+\sigma_n}\,|x|^{-\gamma}} =\ird{\xi_n\,w^{2p+\sigma_n}\,|x|^{-\gamma}}\,,
\]
whence
\[\label{estimates-cutoff-1}
\frac{4\,(1+\sigma_n)}{(2+\sigma_n)^2}\ird{\xi_n\,\left| \nabla w^{1+\sigma_n/2} \right|^2\,|x|^{-\beta}}+\frac1{2+\sigma_n}\ird{\nabla{\xi_n} \cdot \nabla{w^{2+\sigma_n}}\,|x|^{-\beta}} \le\int_{B_{R_n}^c} w^{2p+\sigma_n}\,|x|^{-\gamma}\,dx\,.
\]
By integrating by parts the second term in the l.h.s. and combining this estimate with
\[
\ird{\left| \nabla\( \xi_n^{1/2}\,w^{1+\sigma_n/2} \) \right|^2\,|x|^{-\beta}} \le 2\ird{\xi_n\,\left| \nabla w^{1+\sigma_n/2} \right|^2\,|x|^{-\beta}}+2\ird{\left| \nabla \xi_n^{1/2}\right|^2 w^{2+\sigma_n}\,|x|^{-\beta}}\,,
\]
we end up with
\[\label{estimates-cutoff-2}
\begin{aligned}
\frac{2\,(1+\sigma_n)}{(2+\sigma_n)^2}\,\ird{\left| \nabla\( \xi_n^{1/2}\,w^{1+\sigma_n/2} \) \right|^2\,|x|^{-\beta}} - \frac{4\,(1+\sigma_n)}{(2+\sigma_n)^2}\,\ird{\left| \nabla \xi_n^{1/2} \right|^2 w^{2+\sigma_n}\,|x|^{-\beta}} & \\
-\,\frac1{2+\sigma_n}\,\ird{\( |x|^{-\beta} \Delta \xi_n - \beta\,|x|^{-\beta-2} x \cdot \nabla{\xi_n} \) w^{2+\sigma_n}} & \le\int_{B_{R_n}^c} w^{2p+\sigma_n}\,|x|^{-\gamma}\,dx\,.
\end{aligned}
\]
Thanks to~\eqref{estimates-cutoff}, we can deduce that
\begin{multline*}
\ird{\left|\nabla\(\xi_n^{1/2}\,w^{1+\sigma_n/2}\)\right|^2\,|x|^{-\beta}}\le\int_{B_{R_{n+1}}\setminus B_{R_{n}}}\(\frac{2\,c^2+c}{R_n^2}\,n^4+\frac{\beta\,c}{R_n}\,n^2\,|x|^{-1}\)w^{2+\sigma_n}\,|x|^{-\beta}\,dx\\
+\frac{(2+\sigma_n)^2}{2\,(1+\sigma_n)}\,\int_{B_{R_n}^c} w^{2p+\sigma_n}\,|x|^{-\gamma}\,dx\,.
\end{multline*}
In particular,
\[\label{estimates-cutoff-4}
\ird{\left| \nabla\( \xi_n^{1/2}\,w^{1+\sigma_n/2} \) \right|^2\,|x|^{-\beta}} \le c^\prime n^4\,\int_{B_{R_n}^c} w^{2+\sigma_n}\,|x|^{-\beta-2}\,dx+\frac{(2+\sigma_n)^2}{2\,(1+\sigma_n)}\,\| w \|_\infty^{2p-2}\,\int_{B_{R_n}^c} w^{2+\sigma_n}\,|x|^{-\gamma}\,dx\,.
\]
Since~\eqref{parameters} implies that $ \beta+2>\gamma $, by exploiting the explicit expression of $\sigma_n$ and applying~\eqref{CKN} with $p=p_\star$ (and $\vartheta=1$) to the function $ \xi_n^{1/2}\,w^{1+\sigma_n/2} $, we can rewrite our estimate as
\[\label{estimates-cutoff-5}
\left\| w \right\|^{2+\sigma_n}_{\mathrm L^{2+\sigma_{n+1},\gamma}(B_{R_{n+1}}^c)} \le c^{\prime\prime} p_\star^n \left\| w \right\|^{2+\sigma_n}_{\mathrm L^{2+\sigma_n,\gamma}(B_{R_n}^c)}\,.
\]
After iterating the scheme and letting $ n \to \infty $ we end up with
\[
\left\| w \right\|_{\mathrm L^{\infty}(B_{R_\infty }^c)} \le c^{\prime\prime\prime} \left\| w \right\|_{\mathrm L^{2p,\gamma}(B_{R_0}^c)}\,.
\]
Since $w$ is bounded in $\mathrm L^{2p,\gamma}({\mathbb R}^d) $, in order to prove the claim it is enough to let $ R_0 \to+\infty $.
\end{proof}
\par\smallskip\begin{lemma}\label{Lem:estimates2}{\sl Let $\beta$, $\gamma$ and $p$ satisfy the relations~\eqref{parameters}. Any positive solution $w$ of~\eqref{ELeq} satisfying
\eqref{energyintegrals} is such that $w\in C^\infty({\mathbb R}^d\setminus\{0\})$ and there exist two positive constants, $C_1$ and $C_2$ with $C_1<C_2$, such that for $|x|$ large enough,
\[
C_1\,|x|^{(\gamma-2-\beta)/(p-1)}\le w(x)\le C_2\,|x|^{(\gamma-2-\beta)/(p-1)}\,.
\]}\end{lemma}\par\smallskip
\begin{proof} By Lemma~\ref{Lem:estimates1} and elliptic bootstrapping methods we know that $w\in C^\infty({\mathbb R}^d\setminus\{0\})$. Let us now consider the function $h(x):=C\,|x|^{(\gamma-2-\beta)/(p-1)}$, which satisfies the differential inequality
\[
-\,\mbox{div}\,\(|x|^{-\beta}\,\nabla h\)+(1-\varepsilon)\,|x|^{-\gamma}\,h^p\ge0\quad\forall\,x\in{\mathbb R}^d\setminus\{0\}
\]
for any $\varepsilon\in(0,1)$ and $C$ such that $C^{p-1}>\frac{2-\gamma+\beta}{1-\varepsilon}\,\frac{d-\gamma-p\,(d-2-\beta)}{(p-1)^2}$. On the other hand, by Lemma~\ref{Lem:estimates1}, $w^{2p-1}$ is negligible compared to $w^p$ as $|x|\to\infty$ and, as a consequence, for any $\varepsilon>0$ small enough, there is an $R_\varepsilon>0$ such that
\[
-\,\mbox{div}\,\(|x|^{-\beta}\,\nabla w\)+(1-\varepsilon)\,|x|^{-\gamma}\,w^p\le0\quad\mbox{if}\quad|x|\ge R_\varepsilon\,.
\]
With $q:=(1-\varepsilon)\,|x|^{-\gamma}\,\frac{h^p-w^p}{h-w}\ge0$, it follows that
\[
-\,\mbox{div}\,\(|x|^{-\beta}\,\nabla (h-w)\)+q\,(h-w)\ge0\quad\mbox{if}\quad|x|\ge R_\varepsilon\,.
\]
Hence, for $C$ large enough, we know that $h(x)\ge w(x)$ for any $x\in{\mathbb R}^d$ such that $|x|=R_\varepsilon$, and we also have that $\lim_{|x|\to+\infty}\big(h(x)-w(x)\big)=0$. Using the Maximum Principle, we conclude that $0\le w(x)\le h(x)$ for any $x\in{\mathbb R}^d$ such that $|x|\ge R_\varepsilon$. The lower bound uses a similar comparison argument. Indeed, since
\[
-\,\mbox{div}\,\(|x|^{-\beta}\,\nabla w\)+|x|^{-\gamma}\,w^p\ge0\quad\forall\,x\in{\mathbb R}^d\setminus\{0\}
\]
and
\[
-\,\mbox{div}\,\(|x|^{-\beta}\,\nabla h\)+|x|^{-\gamma}\,h^p\le0\quad\forall\,x\in{\mathbb R}^d\setminus\{0\}
\]
if we choose $C$ such that $C^{p-1}\le(2-\gamma+\beta)\,\frac{d-\gamma-p\,(d-2-\beta)}{(p-1)^2}$, we easily see that
\[
w(x)\ge \( \min_{|x|=1}w(x) \wedge C \) |x|^{(\gamma-2-\beta)/(p-1)} \quad \forall x \in \mathbb{R}^d \setminus {B}_1 \, .
\]
This concludes the proof.
\end{proof}
\medskip Our next goal is to obtain growth and decay estimates, respectively, on the functions $\mathsf P$ and $u$ as they appear in the proof of Theorem~\ref{Thm:Rigidity} in Section~\ref{Sec:BE}, in order to prove Proposition~\ref{Prop:b}. We also need to estimate their derivatives near the origin and at infinity. Let us start by recalling the change of variables~\eqref{wv}, which in particular, by Lemma~\ref{Lem:estimates2}, implies that for some positive constants $C_1$ and $C_2$,
\[\label{asympv}
C_1\,s^{2/(1-p)}\le v(s,\omega)\le C_2\,s^{2/(1-p)}\quad\mbox{as}\; s\to+\infty\,.
\]
Then we perform the Emden-Fowler transformation
\be{Emden-Fowler}
v(s,\omega)=s^a\,\varphi(z,\omega)\quad\mbox{with}\quad z=-\log s\,,\quad a=\frac{2-n}2\,,
\end{equation}
and see that $\varphi$ satisfies the equation
\be{varphieq}
-\,\alpha^2\,\varphi''-\Delta_\omega\,\varphi+a^2\alpha^2\varphi = e^{((n-2)\,p-n)\,z}\,\varphi^{2p-1} - e^{((n-2)\,p-n-2)\,z/2}\,\varphi^p=:h\quad\mbox{in}\;\mathcal C:={\mathbb R}\times\S^{d-1}\ni(z,\omega)\,.
\end{equation}
{}From here on we shall denote by $'$ the derivative with respect either to $z$ or to $s$, depending on the argument. By definition of $\varphi$ and using Lemma~\ref{Lem:estimates2}, we obtain that
\[\label{asympvarphi}
\varphi(z,\omega)\sim e^{\big(\frac{2-n}2+\frac2{p-1}\big)\,z}\quad\mbox{as}\; z\to -\infty\,,
\]
where we say that $f(z,\omega)\sim g(z,\omega)$ as $z\to+\infty$ (resp.~$z\to-\infty$) if the ratio $f/g$ is bounded from above and from below by positive constants, independently of $\omega$, and for $z$ (resp.~$-z$) large enough. Concerning $z\to+\infty$, we first note that Lemma~\ref{Lem:estimates1} and~\eqref{Emden-Fowler} show that $\varphi(z,\omega)\le O(e^{a\,z})$. The lower bound can be established by a comparison argument as in~\cite[Proposition~A.1]{DEL2015}, after noticing that $|h(z,\omega)|\le O(e^{(a-2)z})$. Hence we obtain that
\[
\varphi(z,\omega)\sim e^{a\,z}=e^{\frac{2-n}2\,z}\quad\mbox{as}\; z\to+\infty\,.
\]
Moreover, uniformly in $\omega$, we have that
\[
|h(z,\omega)| \le O\big( e^{-\frac{n+2}2\,z} \big) \quad\mbox{as}\; z\to+\infty\,,\quad |h(z,\omega)|\sim e^{\(-\frac{n+2}2+\frac{2\,p}{p-1}\)z}\quad\mbox{as}\; z\to -\infty\,,
\]
which in particular implies
\[\label{littleo}
|h(z,\omega)|=o\big(\varphi(z,\omega)\big)\quad\mbox{as}\; z\to+\infty\quad\mbox{and}\quad |h(z,\omega)|\sim\varphi(z,\omega)\quad\mbox{as}\; z\to-\infty\,.
\]
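Let us briefly indicate the exponent count behind the above estimates on $h$. Since $a=\frac{2-n}2$, one checks that $(n-2)\,p-n+(2\,p-1)\,a=-\frac{n+2}2$ and $\frac{(n-2)\,p-n-2}2+p\,a=-\frac{n+2}2$, so that, as $z\to+\infty$ where $\varphi(z,\omega)\le O(e^{a\,z})$, both terms of $h$ are $O\big(e^{-\frac{n+2}2\,z}\big)$. As $z\to-\infty$, where $\varphi(z,\omega)\sim e^{\big(a+\frac2{p-1}\big)\,z}$, the same count gives the exponent $-\frac{n+2}2+\frac{2\,p}{p-1}$ for the second term of $h$, while the first term is smaller by a factor $e^{2\,z}$, whence the stated behaviour.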
Finally, using~\cite[Theorem~8.32, p.~210]{MR1814364} on local $C^{1,\delta}$ estimates, as $|z|\to+\infty$ we see that all first derivatives of $\varphi$ converge to~$0$ at least with the same rate as $\varphi$. Next,~\cite[Theorem~8.10, p.~186]{MR1814364} provides local $\mathrm W^{k+2,2}$ estimates which, together with~\cite[Corollary~7.11, Theorem~8.10, and Corollary~8.11]{MR1814364}, up to choosing $k$ large enough, prove that
\be{asympt-beh-ders2}
|\varphi'(z,\omega)|\kern1.2pt,\;|\varphi''(z,\omega)|\kern1.2pt,\;|\nabla_\omega\varphi(z,\omega)|\kern1.2pt,\;|\nabla_\omega\varphi'(z,\omega)|\kern1.2pt,\;|\nabla_\omega\varphi''(z,\omega)|\kern1.2pt,\;|\Delta_\omega\,\varphi(z,\omega)|\le O(\varphi(z,\omega))\,,
\end{equation}
uniformly in $\omega$. Here we denote by $\nabla_\omega$ the differentiation with respect to $\omega$. As a consequence, we have, uniformly in $\omega$, and for $\ell\in\{0,1,2\},\; t\in \{0,1\}$,
\be{asymph}
|\partial_z^\ell \nabla_\omega^t h(z,\omega)| \le O\big( e^{-\frac{n+2}2\,z} \big) \quad\mbox{as}\; z\to+\infty\,,\quad |\partial_z^\ell \nabla_\omega^t h(z,\omega)|\le O( e^{\(-\frac{n+2}2+\frac{2\,p}{p-1}\)z})\quad\mbox{as}\; z\to -\infty\,,
\end{equation}
\be{asymphother}
|\Delta_\omega h(z,\omega)| \le O\big( e^{-\frac{n+2}2\,z} \big) \quad\mbox{as}\; z\to+\infty\,,\quad |\Delta_\omega h(z,\omega)|\le O( e^{\(-\frac{n+2}2+\frac{2\,p}{p-1}\)z})\quad\mbox{as}\; z\to -\infty\,.
\end{equation}
\par\smallskip\begin{lemma}\label{Lem:decayinRd}{\sl Let $\beta$, $\gamma$ and $p$ satisfy the relations~\eqref{parameters} and assume $\alpha\le\alpha_{\rm FS}$. For any positive solution $w$ of~\eqref{ELeq} satisfying~\eqref{energyintegrals}, the \emph{pressure function} $\mathsf P=\frac m{1-m}\,u^{m-1}$ is such that $\mathsf P''$, $\mathsf P'/s$, $\mathsf P/s^2$, $\nabla_\omega\mathsf P'/s$, $\nabla_\omega\mathsf P/s^2$ and $\L\mathsf P$ are of class $C^\infty$ and bounded as $s\to+\infty$. On the other hand, as $s\to0_+$ we have
\begin{enumerate}
\item[(i)] $\isph{|\mathsf P'(s,\omega)|^2}\le O(1)$,
\item[(ii)] $\isph{|\nabla_\omega\mathsf P(s,\omega)|^2}\le O(s^2)$,
\item[(iii)] $\isph{|\mathsf P''(s,\omega)|^2}\le O(1/s^2)$,
\item[(iv)] $\isph{\left|\nabla_\omega\mathsf P'(s,\omega)-\tfrac1s\,\nabla_\omega\mathsf P(s,\omega)\right|^2}\le O(1)$,
\item[(v)] $\isph{\left| \frac{1}{s^2}\,\Delta_\omega \mathsf P(s,\omega)\right|^2}\le O(1/s^2)$.
\end{enumerate}}
\end{lemma}\par\smallskip
\begin{proof} By using the change of variables~\eqref{Emden-Fowler}, we see that
\[\label{Pvarphi}
\mathsf P(s,\omega)= \tfrac{p+1}{p-1}\,e^{-\frac12\,(n-2)\,(p-1)\,z}\,\varphi^{1-p}(z,\omega)\,,\quad z=-\log s\,.
\]
{}From~\eqref{asympt-beh-ders2} we easily deduce that uniformly in $\omega$, $\mathsf P''$, $\mathsf P'/s$, $\mathsf P/s^2$, $\nabla_\omega\mathsf P'/s$, $\nabla_\omega\mathsf P/s^2$ and $\L\mathsf P$ are of class $C^\infty$ and bounded as $s\to+\infty$. Moreover, as $s\to0_+$, we obtain that
\[
\big|\mathsf P'(s,\omega)\big|\le O\(\frac1s\(\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}-a\)\)\quad\mbox{and}\quad\Big|\frac1s\,\nabla_\omega\mathsf P(s,\omega)\Big|\le O\(\frac1s\,\(\frac{\nabla_\omega\varphi(z,\omega)}{\varphi(z,\omega)}\)\)
\]
are of order at most $1/s$ uniformly in $\omega$. Similarly we obtain that
\begin{eqnarray*}
&&|\mathsf P''(s,\omega)|\le O\(\frac1{s^2}\(\frac{\varphi''(z,\omega)}{\varphi(z,\omega)}-\,p\,\frac{|\varphi'(z,\omega)|^2}{|\varphi(z,\omega)|^2}+\,\big(1-2\,a\,(1-p)\big)\,\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}+a^2\,(1-p)-a\)\)\,,\\
&&\left|\frac{\nabla_\omega\mathsf P'(s,\omega)}{s}-\frac{a(1-p)}{s^2}\,\nabla_\omega\mathsf P(s,\omega)\right|\le O\(\frac1{s^2}\(\frac{\nabla_\omega\varphi'(z,\omega)}{\varphi(z,\omega)}-\frac{p\,\varphi'(z,\omega)\,\nabla_\omega\varphi(z,\omega)}{|\varphi(z,\omega)|^2}\)\)\,,\\
&&\frac1{s^2}\,|\Delta_\omega\,\mathsf P(s,\omega)|\le O\(\frac1{s^2}\,\(\frac{\Delta_\omega\,\varphi(z,\omega)}{\varphi(z,\omega)}-\,p\,\frac{|\nabla_\omega\varphi(z,\omega)|^2}{|\varphi(z,\omega)|^2}\)\)\,,
\end{eqnarray*}
are at most of order $1/s^2$ uniformly in $\omega$. This shows that $|\mathsf b(s)|\le O(s^{n-4})$ as $s\to0_+$ and concludes the proof if $4\le d<n$. When $d=2$ or $3$ and $n \le 4$, more detailed estimates are needed. Properties (i)--(v) amount to proving that
\begin{enumerate}
\item[(i)] $\isph{\left|\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}-a\right|^2}\le O(e^{-2\,z})$,
\item[(ii)] $\isph{\left|\frac{\nabla_\omega\varphi(z,\omega)}{\varphi(z,\omega)}\right|^2}\le O(e^{-2\,z})$,
\item[(iii)] $\isph{\left|\frac{\varphi''(z,\omega)}{\varphi(z,\omega)}-\,p\,\frac{|\varphi'(z,\omega)|^2}{|\varphi(z,\omega)|^2}+\,\big(1-2\,a\,(1-p)\big)\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}+a^2\,(1-p)-a\right|^2}\le O(e^{-2\,z})$,
\item[(iv)] $\isph{\left|\frac{\nabla_\omega\varphi'(z,\omega)}{\varphi(z,\omega)}-\frac{p\,\varphi'(z,\omega)\,\nabla_\omega\varphi(z,\omega)}{|\varphi(z,\omega)|^2}\right|^2}\le O(e^{-2\,z})$,
\item[(v)] $\isph{\left|\frac{\Delta_\omega\,\varphi(z,\omega)}{\varphi(z,\omega)}-\,p\,\frac{|\nabla_\omega\varphi(z,\omega)|^2}{|\varphi(z,\omega)|^2}\right|^2}\le O(e^{-2\,z})$,
\end{enumerate}
as $z\to+\infty$.
\medskip\noindent\emph{Step 1: Proof of\/ (ii) and (iv)}. If $w$ is a positive solution of~\eqref{ELeq}, then $\varphi$ is a positive solution to~\eqref{varphieq}.
With $\ell\in\{0,1,2\}$, applying the operator $\nabla_\omega\partial_z^\ell$ to the equation~\eqref{varphieq} we obtain
\[
-\,\alpha^2\,(\nabla_\omega\partial_z^\ell\varphi)''-\,\nabla_\omega\,\Delta_\omega\,\partial_z^\ell\varphi+a^2\,\alpha^2\,\nabla_\omega\partial_z^\ell\varphi=\nabla_\omega\partial_z^\ell h(z,\omega)\quad\mbox{in}\;\mathcal C\,.
\]
Define
\[
\chi_\ell(z):=\frac12\isph{|\nabla_\omega\partial_z^\ell\varphi|^2}\,,
\]
which by~\eqref{asympt-beh-ders2} converges to $0$ as $z\to \pm\infty$. Assume first that $\chi_\ell$ is a positive function.
After multiplying the above equation by $\nabla_\omega\partial_z^\ell\varphi$, integrating over $ \mathbb{S}^{d-1}$, integrating by parts and using
\[
\chi_\ell'=\isph{\nabla_\omega\partial_z^\ell\varphi\,\nabla_\omega\partial_z^\ell\varphi'}
\]
and
\[
\chi_\ell''=\isph{\nabla_\omega\partial_z^\ell\varphi\,\nabla_\omega\partial_z^\ell\varphi''}+\isph{|\nabla_\omega\partial_z^\ell\varphi'|^2}\,,
\]
we see that $\chi_\ell$ satisfies
\[
-\,\chi_\ell''+\isph{|\nabla_\omega\partial_z^\ell\varphi'|^2} +\frac1{\alpha^2}\(\isph{|\Delta_\omega\,\partial_z^\ell\varphi|^2}-\lambda_1\isph{|\nabla_\omega\partial_z^\ell\varphi|^2}\)+ 2\,\(a^2+\frac{\lambda_1}{\alpha^2}\)\,\chi_\ell=\frac{h_\ell}{\alpha^2}\,,
\]
with $h_\ell :=\isph{\nabla_\omega\partial_z^\ell h\,\nabla_\omega\partial_z^\ell\varphi}$.
Then, using $\isph{\nabla_\omega\partial_z^\ell\varphi}=0$, by the Poincar\'e inequality we deduce
\[
\isph{|\Delta_\omega\,\partial_z^\ell\varphi|^2}\ge\lambda_1\isph{|\nabla_\omega\partial_z^\ell\varphi|^2}
\]
as \emph{e.g.}~in~\cite[Lemma~7]{MR3229793}, where $ \lambda_1:=d-1 $. A Cauchy-Schwarz inequality implies that
\[
-\,\chi_\ell''+\frac{|\chi_\ell'|^2}{2\,\chi_\ell}+\,2\,\(a^2+\frac{\lambda_1}{\alpha^2}\)\,\chi_\ell\le\frac{|h_\ell|}{\alpha^2} \, .
\]
The function $\zeta_\ell:=\sqrt{\chi_\ell}$ satisfies
\[
-\,\zeta_\ell''+\,\(a^2+\frac{\lambda_1}{\alpha^2}\)\,\zeta_\ell\le\frac{|h_\ell|}{2\,\alpha^2\,\zeta_\ell}\,.
\]
By the Cauchy-Schwarz inequality and~\eqref{asymph} we infer that $|h_\ell/\zeta_\ell| = O\big(e^{(a-2)\,z}\big)$ for $z\to+\infty$, and $|h_\ell/\zeta_\ell| = O\big(e^{(a+2/(p-1))\,z}\big)$ for $z\to-\infty$. By a simple comparison argument based on the Maximum Principle, and using the convergence of $\chi_\ell$ to $0$ at $\pm\infty$, we infer that
\[
\zeta_\ell(z)\le \frac{e^{-\nu\,z}}{2\,\nu\,\alpha^2}\int_{-\infty}^ze^{\nu\,t}\,\frac{|h_\ell(t)|}{\zeta_\ell(t)}\,d\kern-0.5pt t+\frac{e^{\nu\,z}}{2\,\nu\,\alpha^2}\int_z^\infty e^{-\nu\,t}\,\frac{|h_\ell(t)|}{\zeta_\ell(t)}\,d\kern-0.5pt t
\]
if $\nu:=\sqrt{a^2+\lambda_1/{\alpha^2}}$. This is enough to deduce that $\zeta_\ell(z)\le O\big(e^{(a-1) z}\big)$ as $z\to+\infty$ after observing that the condition
\[\label{luckycomparison}
-\nu = -\sqrt{a^2+\lambda_1/{\alpha^2}} \le a-1
\]
is equivalent to the inequality $\alpha\le\alpha_{\rm FS}$. Hence we have shown that if $\chi_\ell$ is a positive function, then for $\alpha\le\alpha_{\rm FS}$,
\be{lestimates}
\chi_\ell(z)\le O\big(e^{\kern 0.5pt 2\,(a-1)\,z}\big)\quad\mbox{as}\; z\to+\infty\,.
\end{equation}
In the case where $\chi_\ell$ is equal to $0$ at some points of ${\mathbb R}$, it is enough to do the above comparison argument on maximal positivity intervals of $\chi_\ell$ to deduce the same asymptotic estimate. Finally we observe that $\varphi(z,\omega)\sim e^{a\,z}$ as $z\to+\infty$, which ends the proof of (ii) considering the above estimate for $\chi_\ell$ when $\ell=0$. Moreover, the same estimate for $\ell=1$ together with (ii) and~\eqref{asympt-beh-ders2} proves (iv).
\medskip\noindent\emph{Step 2: Proof of\/ (v)}. By applying the operator $\Delta_\omega$ to~\eqref{varphieq}, we obtain
\[
-\,\alpha^2\,(\Delta_\omega\,\varphi)''-\,\Delta_\omega^2\,\varphi+a^2\,\alpha^2\,\Delta_\omega\,\varphi=\Delta_\omega\,h \quad\mbox{in}\;\mathcal C\,.
\]
We proceed as in Step 1. With similar notations, by defining
\[
\chi_5(z):=\frac12\isph{|\Delta_\omega\,\varphi|^2}\,,
\]
after multiplying the equation by $\Delta_\omega\,\varphi$ and using the fact that
\[
-\isph{\Delta_\omega\,\varphi\,\Delta_\omega^2\,\varphi}=\isph{|\nabla_\omega\Delta_\omega\,\varphi|^2}\ge\lambda_1\isph{|\Delta_\omega\,\varphi|^2}\,,
\]
we obtain
\[
-\,\chi_5''+\frac{|\chi_5'|^2}{2\,\chi_5}+\,2\,\(a^2+\frac{\lambda_1}{\alpha^2}\)\,\chi_5\le\frac{|h_5|}{\alpha^2}
\]
with $h_5:=\isph{\Delta_\omega\,h\,\Delta_\omega\,\varphi}$. Again using the same arguments as above, together with~\eqref{asymphother}, we deduce that
\[
\chi_5(z)\le O\big(e^{\kern 0.5pt 2\,(a-1)\,z}\big)\quad\mbox{as}\; z\to+\infty\,.
\]
This ends the proof of (v), using (ii),~\eqref{asympt-beh-ders2} and noticing again that $\varphi(z,\omega)\sim e^{a\,z}$ as $z\to+\infty$.
\medskip\noindent\emph{Step 3: Proof of (i) and (iii).}
Let us consider a positive solution $\varphi$ to~\eqref{varphieq} and define on ${\mathbb R}$ the function
\[
\varphi_0(z) := \frac{1}{\left| \mathbb{S}^{d-1} \right|} \isph{\varphi(z,\omega)}\,.
\]
By integrating~\eqref{varphieq} on $\S^{d-1}$, we know that $\varphi_0$ solves
\[
-\,\varphi_0''+a^2\,\varphi_0=\frac1{\alpha^2 \left| \mathbb{S}^{d-1} \right|}\,\isph{h(z,\omega)}=:\frac{h_0(z)}{\alpha^2 }\quad\forall\,z\in{\mathbb R}\,,
\]
with
\[
|h_0(z)| \le O \big( e^{-\frac{n+2}2\,z} \big) \quad\mbox{as}\; z\to+\infty\,,\quad |h_0(z)|\sim e^{\(-\frac{n+2}2+\frac{2\,p}{p-1}\)\,z}\quad\mbox{as}\; z\to -\infty\,.
\]
{}From the integral representation
\[
\varphi_0(z)=-\frac{e^{a\,z}}{2\,a\,\alpha^2}\int_{-\infty}^ze^{-at}\,h_0(t)\,d\kern-0.5pt t-\frac{e^{-a\,z}}{2\,a\,\alpha^2}\int_z^\infty e^{at}\,h_0(t)\,d\kern-0.5pt t\,,
\]
we deduce that as $z\to+\infty$, $\varphi_0(z) \sim e^{a\,z}$ and
\[
\frac{\varphi_0'(z)-a\,\varphi_0(z)}{\varphi(z,\omega)}\sim\,e^{-2a\,z}\int_z^{\infty}e^{at}\,h_0(t)\,d\kern-0.5pt t = O(e^{-2\,z})\,.
\]
If we define the function $\psi(z,\omega):=e^{-a\,z}\,\big(\varphi(z,\omega)-\varphi_0(z)\big)$, we may observe that it is bounded for $z$ positive and moreover
\[
\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}-a=O(e^{-2\,z})+\frac{\psi'(z,\omega)}{e^{-a\,z}\,\varphi(z,\omega)}\quad\mbox{as}\;\; z\to\,+\infty\,.
\]
We recall that $e^{-a\,z}\,\varphi(z,\omega)$ is bounded away from $0$ by a positive constant as $z\to+\infty$. Hence we know that
\be{dzpsi}
\Big|\frac{\varphi'(z,\omega)}{\varphi(z,\omega)}-a\Big|\le O\(|\psi'(z,\omega)|\)+O(e^{-2\,z})\,.
\end{equation}
By the Poincar\'e inequality and estimate~\eqref{lestimates} with $\ell=0$, we have
\[
\isph{|\psi|^2}= e^{-2az} \isph{|\varphi-\varphi_0|^2}\le\frac{e^{-2az}}{\lambda_1} \isph{|\nabla_\omega\varphi|^2}\le O(e^{-2z})\,.
\]
Moreover, by the estimate~\eqref{lestimates} with $\ell=1$, we also obtain
\[
e^{-2az} \isph{|\varphi'-\varphi'_0|^2}\le\frac{e^{-2az}}{\lambda_1} \isph{|\nabla_\omega\varphi'|^2}\le O(e^{-2z})\,.
\]
Hence, since $\psi' = -\,a\,\psi+ e^{-az}\,(\varphi'-\varphi'_0)$, the above estimates imply that
\[
\isph{|\psi|^2} \, + \, \isph{|\psi'|^2}\le O(e^{-2z})\,,
\]
which together with~\eqref{dzpsi} ends the proof of (i).
To prove (iii), we first check that
\[
\frac{\varphi''}{\varphi}-\,p\,\frac{|\varphi'|^2}{|\varphi|^2}+\,\big(1-2\,a\,(1-p)\big)\frac{\varphi'}{\varphi}+a^2\,(1-p)-a= O(|\psi'|+|\psi'|^2+|\psi''|)+O(e^{-2\,z})\,,
\]
and so it remains to prove that $\isph{|\psi''|^2}$ is of order $O(e^{-2\,z})$. Since
\[
\psi'' = a^2\,\psi -\,2\,a\,e^{-az}\,(\varphi'-\varphi'_0) + e^{-az}\,(\varphi''-\varphi''_0) \,,
\]
using the above estimates, we have only to estimate the term with the second derivatives. This can be done as above by the Poincar\'e inequality,
\[
e^{-2az} \isph{|\varphi''-\varphi''_0|^2}\le\frac{e^{-2az}}{\lambda_1} \isph{|\nabla_\omega\varphi''|^2}\le O(e^{-2z})\,,
\]
based on the estimate~\eqref{lestimates} with $\ell=2$.
This ends the proof of (iii).
\end{proof}
\medskip\noindent\emph{Proof of Proposition~\ref{Prop:b}.}
It is straightforward to verify that the boundedness of $\mathsf P''$, $\mathsf P'/s$, $\mathsf P/s^2$, $\nabla_\omega\mathsf P'/s$, $\nabla_\omega\mathsf P/s^2$, $\L\mathsf P$ as $ s \to +\infty $ and the integral estimates (i)--(v) as $ s \to 0_+ $ from Lemma~\ref{Lem:decayinRd} are enough to establish~\eqref{FisherDerBC},~\eqref{FisherBC} and~\eqref{MassBC}.
\hfill \ \qed
\bigskip\noindent{\small{\bf Acknowledgments.} This research has been partially supported by the projects \emph{STAB} (J.D.) and \emph{Kibord} (J.D.) of the French National Research Agency (ANR), and by the NSF grant DMS-1301555 (M.L.). J.D.~thanks the University of Pavia for support. M.L.~thanks the Humboldt Foundation for support. M.M.~has been partially funded by the National Research Project ``Calculus of Variations'' (PRIN 2010-11, Italy).
\par\smallskip\noindent\copyright~2016 by the authors. This paper may be reproduced, in its entirety, for non-commercial purposes.}
\newpage
\section{Introduction}
Scientific experiments are often faced with simultaneous inference problems when addressing multiple objectives, such as assessing the differences between several experimental conditions. Weighted multiple test procedures (MTPs) are commonly used to control the overall Type \rm{I} error rate by assigning weights to different hypotheses in order to reflect the relative importance of objectives in the test strategy. For example, early references on weighted min-$p$ tests include the resampling-based tests from \citet[Chapter 6]{westfall1993resampling}, and \citet{westfall1998using}. Weighted MTPs based on specific parametric models have been investigated using hierarchical tests \citep{huque2008flexible} and graphical approaches \citep{bretz2011graphical}. These procedures discuss weighted parametric MTPs using the closure principle \citep{marcus1976closed} where each intersection hypothesis is tested at exact level $\alpha$. When there are no logical restrictions among the hypotheses, such as for the step-down Dunnett procedure \citep{dunnett1991step}, a weighted parametric test has been introduced by \citet{xie2012weighted} based on adjusted $p$-values.
MTPs are usually carried out by comparing either adjusted significance levels with unadjusted $p$-values or adjusted $p$-values with the unadjusted level $\alpha$. Although various weighted parametric tests have been proposed in the literature, the link between rejection rules using either adjusted significance levels or adjusted $p$-values has not been systematically explored. In addition, the majority of the procedures in the literature focus on the case where each intersection hypothesis is tested at exact level $\alpha$. It remains unclear how to deal with the non-trivial case where the significance level is strictly less than $\alpha$ for some of the intersection hypotheses. This is a relevant question for certain study considerations. For example, in the phase III clinical trial of buparlisib in patients with advanced and metastatic breast cancer, the analysis of progression-free survival (PFS) endpoints happens much earlier in time than the analysis of the overall survival (OS) endpoints. Thus, testing of PFS hypotheses does not benefit from rejecting the OS hypotheses at a later time point \citep{goteti2014some}. Besides, in certain parallel and $k$-out-of-$n$ gatekeeping procedures, some intersection hypotheses involving primary hypotheses are tested at level smaller than $\alpha$ to allow testing secondary hypotheses if a certain number of primary hypotheses have been rejected \citep{dmitrienko2008general,xi2014general}.
We propose a unified framework for weighted parametric MTPs using the closure principle. This framework allows for general weighting strategies and includes many procedures in the literature as special cases. When some intersection hypotheses are tested at level smaller than $\alpha$, we reveal a special property of a class of parametric tests which proportionally increases the hypothesis weights to ensure exact $\alpha$-level tests. When the parametric assumptions only apply to subsets of hypotheses, we propose a new procedure which utilizes the parametric assumptions within each subset. We derive analytic expressions for the adjusted $p$-values to avoid numerical root finding under multidimensional integration.
\section{Notation}
\label{sec:notation}
Consider testing $m$ elementary null hypotheses $H_i,i\in I=\{1,\ldots,m\}$. Under the closure principle \citep{marcus1976closed}, we test each non-empty intersection hypothesis $H_J=\cap_{j\in J}H_j$, $J\subseteq I$, at level $\alpha$. We reject an elementary hypothesis $H_i$, $i\in I$, if every intersection hypothesis $H_J$ with $i\in J\subseteq I$ is rejected by its associated $\alpha$-level test. The closed procedure controls the familywise error rate (FWER) at level $\alpha$ in the strong sense \citep{hochberg1987multiple}.
Because some hypotheses among $H_1,\ldots,H_m$ may be more important than others, we assign weights for different hypotheses to reflect the relative importance. Using the notation from \citet{maurer2013memory}, let $w_{J} = (w_j(J), j \in J)$ denote a vector of weights for an index set $J \subseteq I$. A weighting scheme $W = \{w_{J}, J \subseteq I\}$ is called valid if for every $J\subseteq I$ and $j\in J$ we have $w_j(J)\geq 0$ and $0< \sum_{j \in J} w_j(J) \leq 1$. Validity is a basic but important condition and thus all weighting schemes considered in this paper are valid. In addition, $W$ is called exhaustive if for every $J\subseteq I$ we have $\sum_{j \in J} w_j(J) = 1$. Exhaustiveness is a desirable property but not required in this paper. For example, the weighting scheme of the step-down Dunnett procedure is $w_j(J)=1/\left\vert J \right\vert$ for $j\in J\subseteq I$, where $\left\vert J \right\vert$ denotes the number of indices in $J$.
Let $p_i$ denote the unadjusted $p$-value for $H_i$, $i\in I$. Consider the weighted Bonferroni test that rejects $H_J$ at level $\alpha$ if $p_j\leq w_j(J)\alpha$ for any $j\in J$. In the following, $w_j(J)$ and $w_j(J)\alpha$ are called the local weight and local significance level, respectively. An equivalent way of testing $H_J$ is to use its $p$-value $\hat{p}_J=\min[1,\min_{j\in J}\{p_j/w_j(J)\}]$. Accordingly, we can reject $H_J$ if $\hat{p}_J\leq \alpha$. Applying the closure principle, we can then reject the elementary hypothesis $H_i$ if its adjusted $p$-value $\max_{\{J:i\in J\subseteq I\}}\hat{p}_J \leq \alpha$.
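To fix ideas, the closed weighted Bonferroni procedure just described can be sketched in a few lines of code. The following illustration, written in Python, is not part of any of the cited procedures and all names in it are ours; it enumerates the intersection hypotheses for a user-supplied valid weighting scheme and returns the adjusted $p$-values together with the rejection decisions.
\begin{verbatim}
from itertools import combinations

def closed_weighted_bonferroni(p, weights, alpha=0.025):
    # p: unadjusted p-values p_1,...,p_m; weights: function mapping a tuple J of
    # 0-based indices to the local weights (w_j(J), j in J) of a valid scheme
    m = len(p)
    adj = [0.0] * m
    for k in range(1, m + 1):
        for J in combinations(range(m), k):
            w = weights(J)
            # weighted Bonferroni p-value of the intersection hypothesis H_J
            p_J = min(1.0, min(p[j] / w[i] for i, j in enumerate(J) if w[i] > 0))
            for j in J:
                adj[j] = max(adj[j], p_J)   # closure: maximise over all J containing j
    return adj, [q <= alpha for q in adj]

# example: the step-down Dunnett weighting scheme w_j(J) = 1/|J|
adj, rej = closed_weighted_bonferroni([0.001, 0.02, 0.30],
                                      weights=lambda J: [1.0 / len(J)] * len(J))
\end{verbatim}
The final call merely illustrates the interface with the weighting scheme of the step-down Dunnett procedure, $w_j(J)=1/\left\vert J\right\vert$.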
Throughout this paper, we assume that under the null hypothesis $H_i$ the unadjusted $p$-value $p_i$ is uniformly distributed over $[0,1],i=1,\ldots,m$. The test statistic associated with $H_i$ is a function of $p_i$ under the inverse of the cumulative distribution function, which could be, for example, an (asymptotically) normal or a $t$ distribution. The joint distribution of the $p_i$'s is available if the corresponding test statistics follow a multivariate probability distribution, such as an (asymptotically) multivariate normal distribution.
\section{Weighted parametric tests for intersection hypotheses}
\subsection{Joint distribution fully known}
\label{sec:parametric}
Let $P_j$ denote the random variable whose realization is the observed unadjusted $p$-value $p_j$ for $H_j,j\in J$, for some $J\subseteq I$. If the joint distribution of $P_j$, $j \in J$, is fully known, the weighted min-$p$ test rejects $H_J$ if $p_j\leq c_J w_{j}(J)\alpha$ for any $j\in J$, where $c_J$ is calculated such that
\begin{equation}
\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{P_j\leq c_J w_{j}(J)\alpha\right\}\right]=\alpha\sum_{j\in J}w_{j}(J).
\label{eq:test}
\end{equation}
Setting $c_J=1$ results in the weighted Bonferroni test with an inequality in \eqref{eq:test}. Otherwise, $c_J>1$ and the resulting weighted parametric test is more powerful than the weighted Bonferroni test. Let $q_J=\min_{j\in J}\left\{p_j/w_j(J)\right\}$ denote the smallest observed weighted $p$-value for $H_j$, $j\in J$. The $p$-value $\hat{p}_J$ for the intersection hypothesis $H_J$ subject to $\sum_{j\in J}w_{j}(J)\leq 1$ is then given by
\begin{equation}
\hat{p}_{J}=\min\left[1,\frac{1}{\sum_{j\in J}w_{j}(J)}\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{\frac{P_j}{w_j(J)}\leq q_J\right\} \right]\right].
\label{eq:adjp}
\end{equation}
Therefore, we reject $H_J$ if $p_j\leq c_J w_{j}(J)\alpha$ for any $j\in J$ with $c_J$ determined in \eqref{eq:test} or, equivalently, if $\hat{p}_J\leq \alpha$. By the closure principle, we reject an elementary hypothesis $H_i,i\in I$, if every $H_J$ with $i\in J\subseteq I$ is rejected. Equivalently, the adjusted $p$-value of $H_i$ is the maximum of $\hat{p}_J,i\in J\subseteq I$, and we reject $H_i$ if it is less than or equal to $\alpha$. Together with a general weighting scheme $W$, the proposed weighted parametric test \eqref{eq:test} and \eqref{eq:adjp} includes many procedures in the literature as special cases, such as the step-down Dunnett procedure \citep{dunnett1991step}, the parametric fallback procedure \citep{huque2008flexible}, and the graphical approaches with parametric assumptions \citep{bretz2011graphical}.
To see how $\hat{p}_{J}$ is derived in \eqref{eq:adjp}, rewrite the left hand side of \eqref{eq:test} as
$$
\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{\frac{P_j}{w_j(J)}\leq c_J \alpha\right\}\right]=\text{pr}_{H_J}\left[\min_{j\in J}\left\{\frac{P_j}{w_j(J)}\right\}\leq c_J \alpha\right]=\alpha\sum_{j\in J}w_{j}(J).
$$
Then $c_J\alpha$ is the $\left\{\alpha\sum_{j\in J}w_{j}(J)\right\}$th quantile of the distribution of the minimum weighted $p$-value $Q_{J}=\min_{j\in J}\left\{P_j/w_j(J)\right\}$. Under the null hypothesis $H_J$, the probability of observing an equally or more extreme outcome is $\text{pr}_{H_J}\left\{Q_{J} \leq q_J \right\}$. The $p$-value for $H_J$, subject to $\sum_{j\in J}w_j(J)\leq 1$, is then given by \eqref{eq:adjp} after truncation at 1. Note that it is computationally more efficient to derive rejection rules using $\hat{p}_J$ because it avoids solving numerically for $c_J$ from an equation involving multidimensional integration.
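As an illustration of this last point, \eqref{eq:adjp} can be evaluated directly whenever the test statistics are one-sided and (asymptotically) jointly normal with a known correlation matrix, since the union probability then reduces to one minus a multivariate normal distribution function. The following sketch, written in Python with the {\tt scipy} library, shows one possible implementation under these assumptions; it is only illustrative and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal

def parametric_intersection_pvalue(p, w, R):
    # p: unadjusted one-sided p-values p_j, j in J
    # w: local weights w_j(J), j in J, with 0 < sum(w) <= 1
    # R: correlation matrix of the underlying z-statistics under H_J
    p, w, R = np.asarray(p, float), np.asarray(w, float), np.asarray(R, float)
    pos = w > 0                        # zero-weight hypotheses cannot trigger rejection
    q_J = np.min(p[pos] / w[pos])      # smallest observed weighted p-value
    # {P_j/w_j <= q_J} = {Z_j >= Phi^{-1}(1 - w_j q_J)}; note that w_j q_J <= p_j <= 1
    u = np.clip(w[pos] * q_J, 0.0, 1.0 - 1e-16)   # numerical guard only
    z = norm.ppf(1.0 - u)
    union = 1.0 - multivariate_normal(mean=np.zeros(int(pos.sum())),
                                      cov=R[np.ix_(pos, pos)]).cdf(z)
    return min(1.0, union / w.sum())
\end{verbatim}
For a singleton $J$ the function returns $\min\{1,p_j/w_j(J)\}$, i.e., the weighted Bonferroni $p$-value, as it should.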
\subsection{Parametric tests that enforce exhaustiveness}
\label{sec:exhaustive}
In Section \ref{sec:parametric}, we investigated weighted parametric tests that preserve the significance level for $H_J,J\subseteq I$, at level $\alpha\sum_{j\in J}w_j(J)$. However, it may be tempting to always increase the sum of the local weights to 1. \citet{xie2012weighted} considered the case when the initial weights $w_i=w_i(I)>0$ for all $i\in I$. If the joint distribution among the $p$-values is fully known, they proposed a closed procedure using
\begin{equation}
\hat{p}_{J}=\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{\frac{P_j}{w_j}\leq q_J\right\}\right],
\label{eq:adjpProp}
\end{equation}
where $q_J=\min_{j\in J}\left\{p_j/w_j\right\}$. Here, \eqref{eq:adjpProp} is stated more generally because we do not assume the ordering in weighted $p$-values as in Section 2$\cdot$4 of \citet{xie2012weighted}. Compared to \eqref{eq:adjp}, the factor $1/\sum_{j\in J}w_j(J)$ is missing, which implies that $\sum_{j\in J}w_j(J)$ is always increased to 1.
Note that \citet{xie2012weighted} did not provide rejection rules based on adjusted significance levels. From the relationship between \eqref{eq:test} and \eqref{eq:adjp}, we can derive an equivalent rejection rule: $H_J$ is rejected if $p_j\leq c_J w_{j}(J)\alpha$ for any $j\in J$, where $c_J$ is calculated such that
\begin{equation}
\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{P_j\leq c_J w_{j}(J)\alpha\right\}\right]=\alpha=\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{P_j\leq c_J\frac{w_{j}}{\sum_{j\in J}w_j} \alpha\right\}\right].
\label{eq:testProp}
\end{equation}
If $w_j(J)=w_j/\sum_{j\in J}w_j$, the leftmost and the rightmost expressions in \eqref{eq:testProp} are the same. We then reject $H_J$ if $p_j\leq c_J \alpha w_{j}/\sum_{j\in J}w_j$
for any $j\in J$. Thus, the procedure by \citet{xie2012weighted} actually tests $H_J$ in the following two steps. First, set $w_j(J)$ to $w_j/\sum_{j\in J}w_j$, i.e., increase $w_j$ proportionally such that $\sum_{j\in J}w_j(J)=1$. Second, reject $H_J$ if $p_j\leq c_J \alpha w_{j}/\sum_{j\in J}w_j$ for any $j\in J$ as in \eqref{eq:testProp} or equivalently if $\hat{p}_J\leq \alpha$ as in \eqref{eq:adjpProp}.
The resulting weighting scheme is always exhaustive when $w_i>0$ for all $i\in I$. However, it requires that all local weights $w_j(J)$ are completely determined by the initial local weights, i.e., $w_j(J)=w_j/\sum_{j\in J}w_j,j\in J\subseteq I$. It does not apply to general weighting schemes, especially when some initial local weights are 0. Nevertheless, the idea by \citet{xie2012weighted} can be generalized to any valid weighting scheme by dropping $\sum_{j\in J}w_j(J)$ and $1/\sum_{j\in J}w_j(J)$ from the right hand side of \eqref{eq:test} and \eqref{eq:adjp}, respectively. The resulting closed procedure then always increases the local weight $w_j(J)$ proportionally to $w_j(J)/\sum_{j\in J}w_j(J)$.
It is not trivial to determine whether a weighting scheme generated from an MTP is exhaustive or not, even if the initial local weights sum to 1. In addition, it may be desirable to use a non-exhaustive weighting scheme for practical considerations. For these reasons, we recommend working on the weighting scheme separately to incorporate trial design considerations, and then using a weighted parametric test that preserves the significance level for each intersection hypothesis as in \eqref{eq:test} and \eqref{eq:adjp}. For instance, when $\sum_{i\in I}w_i<1$, the procedure by \citet{xie2012weighted} can be implemented by first proportionally increasing local weights so that the weighting scheme is $W=\left\{ w_{J}=\left(w_j/\sum_{j\in J}w_j, j\in J\right), J\subseteq I\right\}$. Then we can apply the weighted parametric test in Section \ref{sec:parametric} within the closed procedure.
\subsection{Joint distribution not fully known}
\label{sec:partial}
If the joint distribution is only known for subsets of $p$-values, we can extend the parametric test in \eqref{eq:test} and \eqref{eq:adjp} using ideas from \citet{bretz2011graphical}. Assume that $I$ can be partitioned into $\ell$ mutually exclusive subsets $I_h$ such that $I=\cup_{h=1}^{\ell}I_h$. For each subset $I_h$, $h=1,\ldots,\ell$, we assume that the joint distribution of the $p$-values $p_i$, $i\in I_h$, is fully known, but the joint distribution of $p$-values from different subsets is not necessarily known. For any $J\subseteq I$, let $J_h=J\cap I_h, h=1,\ldots,\ell$. Then we reject $H_J$ if $p_j\leq c_{J}w_{j}(J)\alpha$ for any $j\in J$, where $c_{J}$ is calculated such that
\begin{equation}
\sum_{h=1}^{\ell}\text{pr}_{H_J}\left[\bigcup_{j\in J_h}\left\{P_j\leq c_{J} w_{j}(J)\alpha\right\}\right]=\alpha\sum_{j\in J}w_{j}(J).
\label{eq:testPartialCommon}
\end{equation}
The approach from \citet{bretz2011graphical} is a special case of \eqref{eq:testPartialCommon} when $\sum_{j\in J}w_j(J)=1$.
Note that \eqref{eq:testPartialCommon} uses a common $c_J$ for all subsets $J_h, h=1,\ldots,\ell$. Hence, the test decisions in $J_h$ are affected by the distribution in other subsets although the joint distribution between subsets is not necessarily known. For example, if $J_h=\{j\}$ contains only one index, we reject $H_J$ if $p_j\leq c_{J} w_{j}(J)\alpha$, which is not the rejection rule that would result if the Bonferroni test were applied to this subset. Instead, we propose to use different $c_{J_h}$'s for different subsets $J_{h}, h=1,\ldots,\ell$, to fully utilize the parametric assumptions for $J_h$. Specifically, for any $J\subseteq I$, we reject $H_J$ if $p_j\leq c_{J_h}w_{j}(J)\alpha$ for any $j\in J$, where $c_{J_h}$ is calculated such that
\begin{equation}
\text{pr}_{H_J}\left[\bigcup_{j\in J_h}\left\{P_j\leq c_{J_h} w_{j}(J)\alpha\right\}\right]=\alpha\sum_{j\in J_h}w_{j}(J)
\label{eq:testPartial}
\end{equation}
for $h=1,\ldots,\ell$. If we take the sum of the left hand side in \eqref{eq:testPartial} over $h=1,\ldots,\ell$, we have $\alpha\sum_{j\in J}w_{j}(J)$ on the right hand side, which is the significance level for $H_J$.
Another advantage of using different $c_{J_h}$'s for $J_h$ is that we can derive the $p$-values analytically. First, the $p$-value for each subset $J_h$ is derived using \eqref{eq:adjp} and then the $p$-value for $H_J$ is the minimum over $h=1,\ldots,\ell$. Specifically, let $q_{J_h}=\min_{j\in J_h}\{p_j/w_j(J)\}$ such that the $p$-value for $H_J$ becomes $\hat{p}_{J}=\min_{h=1}^{\ell}\{\hat{p}_{J_h}\}$, where
\begin{equation}
\hat{p}_{J_h}=\min\left[1,\frac{1}{\sum_{j\in J_h}w_{j}(J)}\text{pr}_{H_J}\left[\bigcup_{j\in J_h}\left\{\frac{P_j}{w_j(J)}\leq q_{J_h}\right\}\right]\right].
\label{eq:adjpPartial}
\end{equation}
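Continuing the numerical sketch of Section~\ref{sec:parametric}, and under the same assumption of one-sided jointly normal test statistics within each subset, the $p$-value $\hat{p}_{J}=\min_{h=1}^{\ell}\{\hat{p}_{J_h}\}$ could be obtained by applying the function \verb|parametric_intersection_pvalue| defined there to each subset separately; the code is again purely illustrative.
\begin{verbatim}
def partial_parametric_pvalue(p, w, subsets, R_list):
    # subsets: index lists J_1,...,J_l partitioning J (0-based indices into p and w)
    # R_list : correlation matrix of the z-statistics within each subset
    return min(parametric_intersection_pvalue([p[j] for j in Jh],
                                               [w[j] for j in Jh], Rh)
               for Jh, Rh in zip(subsets, R_list))
\end{verbatim}
A singleton subset, passed with a $1\times 1$ correlation matrix, contributes its weighted Bonferroni $p$-value, in agreement with \eqref{eq:adjpPartial}.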
\section{Consonance}
\label{sec:consonance}
A closed procedure is called consonant \citep{gabriel1969simultaneous} if the rejection of $H_J,J\subseteq I$, further implies that at least one $H_j$, $j\in J$, is rejected. Consonance is a desirable property leading to a short-cut procedure which gives the same rejection decisions as the original closed procedure but with fewer operations, of the order of $m$ or $m^2$ \citep{grechanovsky1999closed}. \citet{hommel2007powerful} proved that the monotonicity condition $w_j(J) \leq w_j(J')$ for all $j\in J' \subseteq J \subseteq I$, guarantees consonance if weighted Bonferroni tests are applied to all intersection hypotheses.
If a weighted parametric test is applied as in \eqref{eq:test}, \citet{bretz2011graphical} showed that
\begin{equation}
c_J w_j(J) \leq c_{J'} w_j(J') \text{ for all } j\in J' \subseteq J
\label{eq:monotonicP}
\end{equation}
ensures consonance. If \eqref{eq:monotonicP} is satisfied, Algorithm 3 in \citet{bretz2011graphical} carries out the short-cut procedure. But \eqref{eq:monotonicP} is not always satisfied even if $w_j(J) \leq w_j(J')$ for all $j\in J' \subseteq J\subseteq I$. In such cases, \citet{bretz2011graphical} proposed to modify the weighting scheme such that \eqref{eq:monotonicP} is satisfied for a particular significance level $\alpha$. However, to calculate the $p$-values \eqref{eq:adjp} for $H_J$, this modification has to be satisfied for $c_J$ under all $\alpha\in [0,1]$, which is difficult to achieve.
The procedure by \citet{xie2012weighted} considers a special weighting scheme that ensures consonance. As in Section \ref{sec:exhaustive}, it assumes $w_i>0$ for all $i\in I$ and defines the weighting scheme as $w_j(J)=w_j/\sum_{j\in J}w_j$, $j\in J\subseteq I$. If the joint distribution of all test statistics is fully known, we calculate $c_{J}$ and $c_{J'}$ such that
$$
\text{pr}_{H_J}\left[\bigcup_{j\in J}\left\{p_j\leq c_J \frac{w_j}{\sum_{j\in J}w_j}\alpha\right\}\right]=\alpha=\text{pr}_{H_{J'}}\left[\bigcup_{j\in J'}\left\{p_j\leq c_{J'} \frac{w_j}{\sum_{j\in J'}w_j}\alpha\right\}\right].
$$
Because $J'\subseteq J$, the above equalities can only hold if $c_J /\sum_{j\in J}w_j\leq c_{J'}/\sum_{j\in J'} w_j$, which leads to \eqref{eq:monotonicP}. \citet{xie2012weighted} also made a similar assessment using $p$-values but did not refer to consonance explicitly. In fact, \eqref{eq:monotonicP} continues to hold even when the joint distribution of all test statistics is not fully known, as in \eqref{eq:testPartialCommon}.
\citet{xie2012weighted} provided a short-cut procedure to calculate the adjusted $p$-value for each elementary hypothesis. Here, we simplify the algorithm and do not assume the ordering in the weighted unadjusted $p$-values. For the overall intersection hypothesis $H_{J_1},J_1=I$, we calculate its $p$-value $\hat{p}_{J_1}$ according to \eqref{eq:adjpProp}. If $\hat{p}_{J_1}\leq \alpha$, reject $H_{j_1}$ with the adjusted $p$-value $\hat{p}_{j_1}=\hat{p}_{J_1}$ and proceed to the next step, where $j_{1}=\text{argmin}_{j\in J_1}\ p_j/w_j$; otherwise stop. In general, for $i=2,\ldots,m$, let $J_i=J_{i-1}\setminus \{j_{i-1}\}$ and calculate the $p$-value $\hat{p}_{J_i}$ for $H_{J_i}$. If $\hat{p}_{J_i}\leq \alpha$, reject $H_{j_i}$ with the adjusted $p$-value $\hat{p}_{j_i}=\max\{\hat{p}_{j_{i-1}},\hat{p}_{J_i}\}$, and proceed to the next step (as long as $i<m$), where $j_{i}=\text{argmin}_{j\in J_i}\{p_j/w_j\}$; otherwise stop. This short-cut procedure is performed in at most $m$ operations and can be viewed as a weighted version of the step-down Dunnett procedure.
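A compact implementation of this short-cut could read as follows, reusing the function \verb|parametric_intersection_pvalue| sketched in Section~\ref{sec:parametric} and again assuming one-sided jointly normal test statistics with a known correlation matrix $R$ and strictly positive initial weights; the code and its names are ours and only illustrative. For simplicity it computes all adjusted $p$-values instead of stopping at the first non-rejection; rejecting the hypotheses with adjusted $p$-value at most $\alpha$ gives the same decisions as the stepwise description above.
\begin{verbatim}
import numpy as np

def weighted_stepdown_adjusted_p(p, w, R):
    # adjusted p-values of the short-cut; reject H_i whenever adj[i] <= alpha
    p, w, R = np.asarray(p, float), np.asarray(w, float), np.asarray(R, float)
    J, running, adj = list(range(len(p))), 0.0, np.empty(len(p))
    while J:
        wJ = w[J] / w[J].sum()                  # local weights w_j / sum_{j in J} w_j
        running = max(running,
                      parametric_intersection_pvalue(p[J], wJ, R[np.ix_(J, J)]))
        j_min = J[int(np.argmin(p[J] / wJ))]    # smallest weighted p-value in J
        adj[j_min] = running
        J.remove(j_min)
    return adj
\end{verbatim}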
If we generalize the procedure by \citet{xie2012weighted} to any valid weighting scheme, a sufficient condition for \eqref{eq:monotonicP} is that $w_j(J)$ can be written as $c_J w_j$, where $c_J$ is a constant for all $j\in J$. As a simple example, we derive a weighted version of the single-step \citet{dunnett1955multiple} test. The weighting scheme for the closed procedure is $W=\{w_{J}=(c_I w_j,j\in J),J\subseteq I\}$ such that $H_J$ is rejected if $p_j\leq c_I w_j \alpha$ for any $j\in J\subseteq I$. Here, $c_I$ is a constant for every $j\in J\subseteq I$ such that $\text{pr}_{H_I}\left\{\cup_{j\in I}\left(P_j\leq c_I w_{j}\alpha\right)\right\}=\alpha$. Assuming $\sum_{i\in I}w_i=1$, we derive the short-cut procedure for the weighted single-step Dunnett test which rejects $H_i$ if $p_i \leq c_I w_i\alpha$ for any $i\in I$. The adjusted $p$-value for $H_i$ is $\hat{p}_{i}=\text{pr}_{H_I}\left\{\cup_{j\in I}\left(P_j/w_j \leq p_i/w_i\right) \right\}$. Then the single-step \citet{dunnett1955multiple} test is a special case when $w_i=1/m,i\in I$.
\section{Clinical trial example}
\label{sec:example}
Consider the clinical trial example from \citet{bauer2001multiple} to test for the superiority of three doses of an investigational treatment against a control regarding an efficacy and a safety endpoint. There are three efficacy hypotheses $H_1,H_2,H_3$ and three safety hypotheses $H_4,H_5,H_6$ for the comparison of the high, medium, low dose against the control, respectively. We modify the step-down procedure without order constraints between the doses from Section 3 in \citet{bauer2001multiple} as follows. Assume the initial weights $w_I=(\text{0$\cdot$4},\text{0$\cdot$4},\text{0$\cdot$2},0,0,0)$. Within each dose-control comparison, the hypothesis on the efficacy endpoint is tested first and, if rejected, the test on the safety endpoint is performed at the same local significance level. If both hypotheses can be rejected for the same dose, the associated local level is equally distributed among the other two doses. The one-sided significance level is $\alpha=\text{0$\cdot$025}$. The graphical representation of this MTP is shown in Figure \ref{fig:example} using the graphical approach \citep{bretz2009graphical,burman2009recycling}. In this framework, hypotheses are denoted by nodes associated with their local weights. A directed edge from $H_i$ to $H_j$ means that when $H_i$ is rejected, its local weight can be propagated to $H_j$. The number associated with the edge quantifies the proportion of the local weight of $H_i$ that can be propagated to $H_j$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.6]
\node at (70bp,-125bp) {Efficacy};
\node at (125bp,-80bp) {0$\cdot$4};
\node (HE1) at (125bp,-125bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_1$};
\node at (325bp,-80bp) {0$\cdot$4};
\node (HE2) at (325bp,-125bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_2$};
\node at (525bp,-80bp) {0$\cdot$2};
\node (HE3) at (525bp,-125bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_3$};
\node at (70bp,-225bp) {Safety};
\node at (125bp,-270bp) {High dose};
\node at (160bp,-230bp) {0};
\node (HS1) at (125bp,-225bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_4$};
\node at (325bp,-270bp) {Medium dose};
\node at (360bp,-230bp) {0};
\node (HS2) at (325bp,-225bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_5$};
\node at (525bp,-270bp) {Low dose};
\node at (560bp,-230bp) {0};
\node (HS3) at (525bp,-225bp)[draw,circle,inner sep=2pt,minimum size=2pt] {$H_6$};
\draw [->,line width=0.5pt] (HE1) to node[pos=0.5,left] {1} (HS1);
\draw [->,line width=0.5pt] (HE2) to node[pos=0.5,left] {1} (HS2);
\draw [->,line width=0.5pt] (HE3) to node[pos=0.5,right] {1} (HS3);
\draw [->,line width=0.5pt] (HS1) to node[pos=0.3,below] {0$\cdot$5} (HE2);
\draw [->,line width=0.5pt] (HS1.50) arc(140:117:317bp) arc(117:70:317bp) to (HE3);
\node at (225bp,-125bp) {0$\cdot$5};
\draw [->,line width=0.5pt] (HS2) to node[pos=0.3,below] {0$\cdot$5} (HE1);
\draw [->,line width=0.5pt] (HS2) to node[pos=0.3,below] {0$\cdot$5} (HE3);
\draw [->,line width=0.5pt] (HS3.130) arc(40:63:317bp) arc(63:110:317bp) to (HE1);
\node at (425bp,-125bp) {0$\cdot$5};
\draw [->,line width=0.5pt] (HS3) to node[pos=0.3,below] {0$\cdot$5} (HE2);\end{tikzpicture}
\caption{Graphical multiple test procedure for the clinical trial example.}
\label{fig:example}
\end{figure}
The weighting scheme of this MTP can be obtained using Algorithm 1 in \citet{bretz2011graphical}, which has been implemented in the {\tt gMCP} R package \citep{Rohmeyer2015}. Table 1 in the Supplementary Material provides the local weight vector $w_J$ for each intersection hypothesis $H_J, J \subseteq I$. For example, the local weights for $H_{123}=H_1\cap H_2\cap H_3$ are $w_1=\text{0$\cdot$4},w_2=\text{0$\cdot$4}$ and $w_3=\text{0$\cdot$2}$, and for $H_{234}=H_2\cap H_3\cap H_4$ they are $w_2=\text{0$\cdot$4},w_3=\text{0$\cdot$2}$ and $w_4=\text{0$\cdot$4}$. The \citet{dunnett1955multiple} test is suitable for the many-to-one comparisons as it is more powerful than the Bonferroni test. This motivates us to use a weighted parametric test for the intersection hypothesis involving any two of the three efficacy hypotheses $H_1,H_2,H_3$. For the sake of illustration, we assume that the joint distribution of test statistics between the safety hypotheses $H_4,H_5,H_6$ is unknown.
We assume that the joint distribution of the test statistics for $H_1,H_2,H_3$ is trivariate normal with a mean vector of 0's. Assuming equal group sizes, the pairwise correlation between the test statistics is 0$\cdot$5 among the efficacy hypotheses. All other correlations are assumed to be unknown. Given this joint distribution, the index set $I=\{1,\ldots,6\}$ is partitioned into four subsets: $\{1,2,3\}$, $\{4\}$, $\{5\}$, $\{6\}$. Within the subset $\{1,2,3\}$, the test statistics follow the trivariate normal distribution.
We calculate the local significance levels for all intersection hypotheses using (A) the weighted Bonferroni test, (B) the weighted parametric test \eqref{eq:testPartialCommon} and (C) the weighted parametric test \eqref{eq:testPartial}. Using (A), the local significance levels for $H_{234}$ are $(\text{0$\cdot$4},\text{0$\cdot$2},\text{0$\cdot$4})\times \text{0$\cdot$025}=(\text{0$\cdot$01},\text{0$\cdot$005},\text{0$\cdot$01})$. Using (B), we calculate $c_{234}=\text{1$\cdot$033}$ from \eqref{eq:testPartialCommon} via the {\tt mvtnorm} package in R \citep{Genz2016}. The resulting local significance levels are $(\text{0$\cdot$4},\text{0$\cdot$2},\text{0$\cdot$4})\times \text{1$\cdot$033}\times \text{0$\cdot$025}=(\text{0$\cdot$0103},\text{0$\cdot$0052},\text{0$\cdot$0103})$. Using (C), we calculate $c_{23}=\text{1$\cdot$057}$ and $c_{4}=1$ from \eqref{eq:testPartial}. The resulting local significance levels are $(\text{0$\cdot$4}\times \text{1$\cdot$057},\text{0$\cdot$2}\times \text{1$\cdot$057},\text{0$\cdot$4}\times 1) \times \text{0$\cdot$025}=(\text{0$\cdot$0106},\text{0$\cdot$0053},\text{0$\cdot$01})$.
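These constants can also be reproduced with the following sketch, which performs an analogous computation in Python with the {\tt scipy} library; it is only meant as an illustration and all names in it are ours. It assumes, as above, one-sided normal test statistics with pairwise correlation 0$\cdot$5 between the two efficacy statistics involved in $H_{234}$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025
w = {2: 0.4, 3: 0.2, 4: 0.4}                  # local weights of H_{234}
R23 = np.array([[1.0, 0.5], [0.5, 1.0]])      # correlation of the two efficacy z-statistics

def union_prob(c, idx, R):
    # probability under H_J of the union over idx of {P_j <= c * w_j * alpha}
    z = norm.ppf(1.0 - c * alpha * np.array([w[j] for j in idx]))
    return 1.0 - multivariate_normal(mean=np.zeros(len(idx)), cov=R).cdf(z)

# procedure (B): one common constant; union over {2,3} plus c*w_4*alpha equals alpha
c_234 = brentq(lambda c: union_prob(c, [2, 3], R23) + c * w[4] * alpha - alpha, 1.0, 2.0)

# procedure (C): subset-specific constants; c_4 = 1 and c_23 solves the subset equation
c_23 = brentq(lambda c: union_prob(c, [2, 3], R23) - alpha * (w[2] + w[3]), 1.0, 2.0)

print(round(c_234, 3), round(c_23, 3))        # approximately 1.033 and 1.057
\end{verbatim}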
From the above calculations we can see that both parametric tests (B) and (C) produce higher local significance levels than the Bonferroni test (A). Thus, they can reject at least as many hypotheses as the Bonferroni test. Differences between (B) and (C) arise when two efficacy hypotheses and at least one safety hypothesis with a positive weight are associated with an intersection. For example, for $H_{234}$ (C) preserves the level for the safety hypotheses at the level of the Bonferroni test (A) but produces higher level for the efficacy hypotheses than (B). On the other hand, (B) has a higher level for the safety hypotheses but a lower level for the efficacy hypotheses. These conclusions apply to all intersection hypotheses.
\section*{Supplementary material}
Supplementary material includes the weighting scheme and further numerical comparisons for the clinical trial example in Section \ref{sec:example}.
\bibliographystyle{unsrtnat}
\section{Classical notions}
\noindent
Henri Poincar\'e (1854-1912) in \cite{Poi95} formalised the notion of {\it fundamental group} of a connected topological space $X$. It had appeared earlier on, notably in the work of Bernhard Riemann (1826-1866) (\cite{Rie51}, \cite{Rie57}) in the shape of multi-valued functions.\\[.1cm]
Fixing a base point $x\in X$, then $\pi^{\rm top}_1(X,x)$ is first the set of homotopy classes of loops centered at $x$. It has a {\it group structure} by composing loops centered at $x$. It is a {\it topological invariant}, i.e. depends only on the homeomorphism type of $X$. It is {\it functorial}: if $f: Y\to X$ is a continuous map, and $y\in Y$, then $f$ induces a homomorphism $f_*: \pi_1^{\rm top}(Y,y)\to \pi_1^{\rm top}(X, f(y))$ of groups.
\\[.1cm]
If $X$ is locally contractible, for example if $X$ is a connected complex analytic manifold,
its fundamental group determines its topological coverings as follows: fixing $x$, there is a {\it universal covering $X_x$, together with a covering map $\pi: X_x\to X$, and a lift $\tilde{x}$ of $x$ on $X_x$, such that $\pi_1^{\rm top}(X,x)$ is identified with ${\rm Aut}(X_x/X)$.} More precisely, $\pi^{-1}(x)$ is a set, which is in bijection with $\pi_1^{\rm top}(X, x)$, sending the neutral element of the group to $\tilde{x}$.
\\[.1cm]
Let us assume $X$ is a smooth projective algebraic curve over ${\mathbb C}$, that is $X({\mathbb C})$ is a Riemann surface. By abuse of notations, we write $ \pi_1^{\rm top}(X,x)$ instead of $ \pi_1^{\rm top}(X({\mathbb C}),x)$.
Then
$\pi_1^{\rm top}(X,x)=0$ for $\P^1$, the Riemann sphere, that is if the genus $g$ of $X$ is $0$, it is equal to ${\mathbb Z}^2$ if $g=1$, and for $g\ge 2$ it is spanned by $2g$ generators $\alpha_i, \beta_i, i=1,\ldots, g$ with one relation $\prod_{i=1}^g [\alpha_i, \beta_i]=1$. So it is nearly a free group. In fact, for any choice of $s$ points $a_1,\ldots, a_s$ of $X({\mathbb C})$ different from $x$, $s\ge 1$, $\pi_1^{\rm top}(X\setminus \{a_1,\ldots, a_s\}, x)$ is free: it is spanned by $\alpha_i, \beta_i, \gamma_1,\ldots, \gamma_s$,
with one relation $\prod_{i=1}^g [\alpha_i, \beta_i]\prod_{j=1}^s \gamma_j=1$ (\cite{Hat02}).
For $s=1$, the map
$\pi_1^{\rm top}(X\setminus a, x) \to \pi_1^{\rm top}(X, x)$ is surjective and yields the presentation.
\\[.1cm]
More generally, for any non-trivial Zariski open subvariety $U\hookrightarrow X$ containing $x$,
the homomorphism $\pi_1^{\rm top}(U, x) \to \pi_1^{\rm top}(X, x)$ is always surjective, as we see by taking loops and moving them via homotopies inside of $U$. The kernel in general is more complicated, but is spanned by loops around the divisor at infinity. \\[.1cm]
If $X$ has dimension $\ge 2$, then $\pi_1^{\rm top}(X,x)$ is far from being free. A natural question is how to compute it. This is the content of the Lefschetz (Salomon Lefschetz (1884-1972)) theorems for the fundamental group.
\begin{thm}[Lefschetz theorems] \label{thm:top_lef}
Let $X$ be a smooth connected projective variety defined over ${\mathbb C}$. Let $Y\to X$ be a smooth hyperplane section. Let $x\in Y\subset X$. Then the homomorphism of groups $\pi_1^{\rm top}(Y,x)\to \pi_1^{\rm top}(X,x)$
is
\begin{itemize}
\item[1)] surjective if $Y$ has dimension $1$;
\item[2)] an isomorphism if $Y$ has dimension $\ge 2$.
\end{itemize}
\end{thm}
In particular
\begin{cor} \label{cor:f_presented}
$\pi_1^{\rm top}(X,x)$ is a {\it finitely presented} group.
\end{cor}
\noindent
In fact, both the theorem and its corollary remain true for $\pi_1^{\rm top}(U,x)$, where $U$ is any non-trivial Zariski open subvariety in $X$. One takes $U\hookrightarrow X$ to be a good compactification, that is $X$ is smooth projective such that $X\setminus U$ is a strict normal crossing divisor (strict meaning that all components are smooth). Then $Y$ in the theorem is replaced by the intersection $V=Y\cap U$, with the additional assumption that $Y$ is in good position with respect to $X\setminus U$.
\\[.1cm]
In his proof in \cite{Lef24}, Lefschetz introduces the notion of {\it Lefschetz pencil}: one moves $Y$ in a one parameter family $Y_t, t\in \P^1$. For a good family, all fibres but finitely many of them are smooth. His proof was not complete. In the \'etale context, it was proven only in \cite{SGA2}. Theorem~\ref{thm:top_lef} was proven in \cite{Bot59} using vector bundles:
let $s$ be a section of the bundle ${\mathcal O}_X(Y)$ with zero locus $Y$. Bott uses the hermitian metric $h$ on ${\mathcal O}_X(Y)$ to define the function $\varphi=1/(2\pi i) \bar \partial \partial {\rm log} h(s)$. Then he proves that $X({\mathbb C})$ is obtained from $Y({\mathbb C})$ by attaching finitely many cells of dimension $\ge {\rm dim} (X)$ by doing Morse theory with $\varphi$.
\section{Galois theory}
\noindent
Let $K$ be a field, $\iota: K\hookrightarrow \bar K$ be a fixed separable closure. One defines the group ${\rm Aut}(\bar K/K)$ of automorphisms of $\bar K$ over $K$, endowed with its natural profinite topology. This is `the' Galois group of $K$ associated to $\iota$. The main theorem of Galois theory says that there is an equivalence of categories $\{$closed subgroups of ${\rm Aut}(\bar K/K) \} \to \{$extensions $K\subset L \subset \bar K\}$ via $L=\bar K^H$ (\cite{Mil96}, \cite{Sza09}). \\[.2cm]
For $X$ a smooth connected variety defined over ${\mathbb C}$, Grothendieck's key idea was to reinterpret $\pi_1^{\rm top}(X,x)$ as follows. One defines the category of {\it topological covers} ${\rm TopCov}(X)$.
The objects are maps $\pi: Y\to X$ where $Y$ is a Hausdorff topological space, locally homeomorphic to $X$ via $\pi$, or equivalently, $Y$ is an analytic space, locally biholomorphic to $X$ via $\pi$ (\cite[Thm.~4.6]{For81}). The maps are over $X$.
The point $x$ yields a {\it fiber functor} $\omega_x: {\rm TopCov}(X)\to {\rm Sets}, \ (\pi: Y\to X({\mathbb C}) ) \mapsto \pi^{-1}(x)$. This means that $\omega_x$ is {\it faithful}, that is ${\rm Hom}(A,B)\to {\rm Hom}(\omega(A), \omega(B))$ is injective. \\[.1cm]
A unified presentation of Poincar\'e and Galois theories is as follows.
\begin{thm}
\begin{itemize}
\item[1)] ${\rm Aut}(\omega_x)=\pi_1^{\rm top}(X,x)$;
\item[2)] $\omega_x$ yields an equivalence of categories $${\rm TopCov}(X) \xrightarrow{\omega_x} {\rm Rep}_{\rm Sets}( \pi_1^{\rm top}(X,x)).$$
\item[3)] The universal cover $X_x$ corresponds to the representation of $\pi_1^{\rm top}(X,x)$ by translation on itself.
\item[4)] A change of $x$ yields equivalent fibre functors $\omega_x$, isomorphic $\pi_1^{\rm top}(X,x)$ and isomorphic $X_x$ over $X$. The equivalence and isomorphisms are not canonical.
\end{itemize}
\end{thm}
\noindent
In this language, 1) uses the universal cover and the identification $\pi_1^{\rm top}(X,x)={\rm Aut}(X_x/X)$. \\[.1cm]
One can interpret Galois theory in the same way. The embedding $\iota: K\to \bar K$ corresponds {\it both} to $x\to X$ and to $X_x \to X$. One defines ${\rm FinExt}(K)$ to be the category of {\it finite} separable $K$-algebra extensions $K\subset L$. Then $\iota$ defines a fibre functor $\omega_{\iota}: {\rm FinExt}(K) \to {\rm FinSets}, (K\hookrightarrow L ) \mapsto L\otimes_{\iota} \bar K$, the latter understood as a finite set indexing the split $\bar K$-algebra $L\otimes_{\iota} \bar K$, and ${\rm FinSets}$ being now the category of finite sets.
One defines $\pi_1(K, \iota)={\rm Aut}(\omega_\iota)$. It is a {\it profinite} group.
\begin{thm}[Galois theory revisited]
\begin{itemize}
\item[1)] $\omega_\iota$ yields an equivalence of categories $${\rm FinExt}(K)
\xrightarrow{\omega_\iota} {\rm Rep}_{\rm FinSets}( \pi_1(K , \iota)).$$
\item[2)] This equivalence extends to the category of Ind-extensions ${\rm Ext}(K)$ yielding the functor $\omega_{\iota}: {\rm Ext}(K)\to {\rm Sets}$. The functor
$\omega_\iota$ yields an equivalence of categories $${\rm Ext}(K) \xrightarrow{\omega_\iota} {\rm ContRep}_{\rm Sets}( \pi_1(K ,\iota)).$$
\item[3)] $\iota: K\hookrightarrow \bar K$ corresponds to the continuous representation of
$\pi_1(K,\iota)$ by translation on itself.
\item[4)] A change of separable closure $\iota$ yields equivalent fibre functors $\omega_{\iota}$, isomorphic $\pi_1(K, \iota)$, simply called {\it the Galois group } $G_K$ of $K$. The equivalence and the isomorphism are not canonical.
\end{itemize}
\end{thm}
\noindent
To understand $\pi_1(K, \iota)$ for number fields is the central topic of one branch of number theory. For example, the inverse Galois problem is the question whether or not $G_{{\mathbb Q}}$ can be as large as thinkable, that is whether or not any finite group is a quotient of $G_{{\mathbb Q}}$. In these notes, we shall take for granted the knowledge of these groups. The main focus shall be on {\it finite} fields $k$. For these, Galois theory due to Galois (!) shows that $G_{{\mathbb F}_q}=\varprojlim_n {\mathbb Z}/n=:\widehat{ {\mathbb Z}}$, where $\widehat{{\mathbb Z}}$ is topologically generated by the arithmetic Frobenius $\bar k\to \bar k, \lambda \mapsto \lambda^q$.
\section{\'Etale fundamental group \cite{SGA1}} \label{s:pi}
\noindent
This is the notion which unifies the topological fundamental group and Galois theory.
Let $X$ be a connected normal (geometrically unibranch is enough) locally noetherian scheme. In \cite{SGA3}, it is suggested that one can enlarge the category ${\rm \acute{E}t}(X)$ of pro-finite \'etale covers to discrete covers. It is important when one drops the normality assumption on $X$ and still wishes to realize $\ell$-adic sheaves as representations of a fundamental group (the right one). A general theory of pro\'etale fundamental groups has been defined by Scholze \cite{Sch13}, and Bhatt-Scholze \cite{BS15}, but we won't discuss this, as we focus on Lefschetz theorems, and for those we need the \'etale fundamental group as defined in \cite{SGA1}.
The category of finite \'etale covers ${\rm Fin\acute{E}t}(X)$ is the category of $\pi: Y\to X$ which are of finite presentation, finite flat and unramified, or equivalently of finite presentation, finite smooth and unramified. \\[.1cm]
The other basic data consist of a geometric point $x\in X$, in fact a point in a separably closed field is enough. Indeed, if $x$ is a point with separably closed residue field, then up to isomorphism there is only one geometric point $\tilde x$ above it, which is algebraic, and the fibre functors
$\omega_{\tilde x}: {\rm Fin \acute{E}t}(X) \to {\rm FinSets},( \pi: Y\to X) \mapsto \pi^{-1}(\tilde x)$
associated to those geometric points are the same (not only isomorphic). \\[.1cm]
So the construction explained now depends only on the point in a separable closure. But a geometric point enables one to take non-algebraic points, which gives more freedom as we shall see.
The functor $\omega_x: {\rm Fin \acute{E}t}(X) \to {\rm FinSets},( \pi: Y\to X) \mapsto \pi^{-1}(x)$, the latter understood as a finite set indexing the split algebra $\pi^{-1}(x)$ over $x$, is a fibre functor.
One defines the {\it \'etale fundamental group of $X$ based at $x$} as $$\pi_1(X, x)={\rm Aut}(\omega_x).$$ It is a {\it profinite} group, thus in particular has a topology. \begin{thm}[Grothendieck \cite{SGA1}]
\begin{itemize}
\item[1)] $\omega_x$ yields an equivalence of categories $${\rm Fin\acute{E}t}(X) \xrightarrow{\omega_x} {\rm Rep}_{\rm FinSets}( \pi_1(X ,x)).$$
\item[2)] This equivalence extends to the category of pro-finite \'etale covers ${\rm \acute{E}t}(X)$ yielding $\omega_{x}: {\rm \acute{E}t}(X)\to {\rm Sets}$.
$\omega_x$ yields an equivalence of categories $${\rm \acute{E}t}(X) \xrightarrow{\omega_x} {\rm ContRep}_{\rm Sets}( \pi_1(X , x)).$$
\item[3)] The continuous representation of $\pi_1(X,x)$ acting by translation on itself corresponds to $X_x\to X$, called the universal cover centered at $x$.
\item[4)] A change of $x$ yields equivalent $\omega_x$, isomorphic $\pi_1(X, x)$ and isomorphic $X_x$ over $X$. The equivalence and the isomorphisms are not canonical.
\end{itemize}
\end{thm}
\section{Comparison}
\noindent
We have now seen the formal analogy between topological fundamental groups, Galois groups and \'etale fundamental groups.
We have to see the geometric relation.
\begin{thm}[Riemann existence theorem, \cite{Rie51}, \cite{Rie57}] \label{thm:RET}
Let $X$ be a smooth variety over ${\mathbb C}$. Then a finite \'etale cover $\pi_{{\mathbb C}}: Y_{{\mathbb C}} \to X({\mathbb C})$ is the complex points $\pi({\mathbb C}): Y({\mathbb C})\to X({\mathbb C})$ of a uniquely defined finite \'etale cover $\pi: Y\to X$.
\end{thm}
\begin{cor}[Grothendieck, \cite{SGA1}] \label{cor:profinite}
The \'etale fundamental group $\pi_1(X,x)$ is the profinite completion of the topological fundamental group $\pi_1^{\rm top}(X, x)$, where $x\in X({\mathbb C})$.
\end{cor}
\noindent
In particular, using localization and the Lefschetz theorems, one concludes
\begin{cor} \label{cor:ft}
Let $X$ be a smooth variety over ${\mathbb C}$. Then $\pi_1(X,x)$ is topologically of finite type, that is there is a finite type subgroup of $\pi_1(X,x)$ which is dense for the profinite topology.
\end{cor}
\section{Homotopy sequence and base change}
\noindent
However, this comparison does not shed light on the structure of $\pi_1(X,x)$ in general.
In the sequel we shall always assume $X$ to be connected, locally of finite type over a field $k$ and to be geometrically connected over $k$. The last condition is equivalent to $k$ being equal to its algebraic closure in $\Gamma(X, {\mathcal O}_X)$.
Given the geometric point $x\in X$, defining the algebraic closure $\iota: k\to \bar k\subset k(x)$ of $k$ in the residue field $k(x)$ of $x$,
the functors $${\rm Ext}(k) \to {\rm \acute{E}t}(X), \ (k\subset \ell) \mapsto (X_\ell\to X)$$ and $${\rm \acute{E}t}(X) \to {\rm \acute{E}t}(X_{k(x)}), \ (\pi: Y\to X)\mapsto (\pi_{k(x)}: Y_{k(x)}\to X_{k(x)})$$
define the {\it homotopy sequence} of continuous homomorphisms
\ga{1}{ 1\to \pi_1(X_{k(x)}, x)\to \pi_1(X, x) \to \pi_1(k, \iota)\to 1.}
\begin{thm}[Grothendieck's homotopy exact sequence, \cite{SGA1}] \label{thm:HES}
The homotopy sequence \eqref{1} is exact.
\end{thm}
\noindent
Surjectivity on the right means precisely this:
${\rm Ext}(k) \to {\rm \acute{E}t}(X)$ is fully faithful, and any intermediate \'etale cover $X_\ell \to Y\to X$ comes from $k\subset \ell'\subset \ell$, with $(Y\to X)=(X_{\ell'} \to X)$. Injectivity on the left means that
any finite \'etale cover of $X_{k(x)}$ comes from some finite \'etale cover of $X$ (not necessarily geometrically connected) by taking a factor, and exactness in the middle means that
given $Y\to X$ finite \'etale, such that $Y_{k(x)} \to X_{k(x)}$ is completely split,
then there is an $\ell \in {\rm Ext}(k)$ such that $Y\to X$ is $X_\ell \to X$. See \cite[~49.14,~49.4.3,~49.4.5]{StacksProject}. \\[.2cm]
There is a more general homotopy sequence: one replaces $X\to {\rm Spec}( k)$ by $f: X\to S$ a {\it proper separable} morphism (\cite[Exp.~X, Defn.~1.1]{SGA1}) of locally noetherian schemes. Separable means that $f$ is flat, and all fibres $X_s$ are geometrically reduced, i.e. $X_s\otimes_{k(s)} K$ is reduced for all field extensions $k(s)\subset K$. Let $s=f(x)$. Then analogously defined functors yield the sequence
\ga{2}{ \pi_1(X_{s}, x)\to \pi_1(X, x) \to \pi_1(S,s )\to 1.}
\begin{thm}[Grothendieck's second homotopy exact sequence, \cite{SGA1}] \label{thm:HES2} If $f_*{\mathcal O}_X={\mathcal O}_S$,
the homotopy sequence \eqref{2} is exact.
\end{thm}
\noindent
Let $\iota_K: k\hookrightarrow K$ be an embedding in an algebraically closed field, defining $\iota: k\hookrightarrow \bar k \hookrightarrow K$. This defines the functor ${\rm \acute{E}t}(X_{\bar k})\to {\rm \acute{E}t}(X_{K}), \ (Y\to X_{\bar k} ) \to (Y_K\to X_K)$. Let us denote by $x_K$ a $K$-point of $X$, and by $x_{\bar k}$ the induced $\bar k$-point by the map $X_K\to X_{\bar k}$.
\begin{prop} \label{prop:basechange}
This functor induces a surjective homomorphism $\pi_1(X_K, \iota_K) \to \pi_1(X_{\bar k}, \iota)$.
If $X$ is proper, it is an isomorphism.
\end{prop}
\noindent
Surjectivity again amounts to showing that if $Y$ is a connected scheme, locally of finite type over $\bar k$, then $Y_K$ is connected as well. This is a local property, so one may assume that $Y={\rm Spec}(A)$ where $A$ is an affine $\bar k$-algebra, where $\bar k$ is algebraically closed in $A$. Then
$Y_K={\rm Spec}(K\otimes_{\bar k} A)$, and $K=K\otimes_{\bar k} \bar k$ is the algebraic closure of $K$ in $K\otimes_{\bar k} A$.
\\[.1cm]
The homomorphism
$\pi_1(X_K, x_K)\to \pi_1(X,x)$ factors through $\pi_1(X_{\bar k}, x_{\bar k})$. Thus
if $X$ is proper,
Theorem~\ref{thm:HES2} implies the second assertion of Proposition~\ref{prop:basechange}.
\\[.2cm]
Yet if $k$ has characteristic $p>0$, {\it injectivity is not true} in general. For example,
setting $X={\mathbb A}^1$ with coordinate $s$, and $ \bar k\hookrightarrow \bar k[t]\hookrightarrow K$,
the Artin-Schreier cover $x^p-x=st$ in $\mathbb{A}^2$ is not constant in $t$ (example of Lang-Serre, \cite{SGA1}).
If the homomorphism of Proposition~\ref{prop:basechange} were injective, the quotient
${\mathbb Z}/p$ of $\pi_1(X_K, \iota_K) $ defined by this example $\pi_K: Y_K\to {\mathbb A}^1_K$ would factor through $\pi_1(X_{\bar k}, \iota)$,
so there would be an Artin-Schreier cover $\pi: Y_{\bar k}\to {\mathbb A}^1_{\bar k}$ which pulls back over $K$ to $\pi_K$, so
$\pi_K$ would be constant. Compare with Proposition~\ref{prop:basechangechar0} in characteristic $0$.
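\\[.1cm]
\noindent
A minimal verification of the non-constancy, added here as a sketch (it uses only Artin--Schreier theory and the notation of the example, with $s$ the coordinate on ${\mathbb A}^1$): if $\pi_K$ descended to $\bar k$, there would exist $g\in \bar k[s]$, $c\in {\mathbb F}_p^{\times}$ and a polynomial $h=\sum_i h_i s^i\in K[s]$ with
$$ts-c\, g(s)=h^p-h.$$
Comparing the coefficients of $s^{p^k}$ gives $h_1=c\,g_1-t$ and $h_{p^k}=h_{p^{k-1}}^p+c\,g_{p^k}$ for $k\ge 1$. Since $g$ and $h$ are polynomials, $h_{p^k}=g_{p^k}=0$ for $k\gg 0$, and descending the recursion, using that $\bar k$ is perfect, forces $h_1\in \bar k$, contradicting $h_1=c\,g_1-t\notin \bar k$.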
\\[.1cm]
In fact, this phenomenon is more general.
\begin{lem}
Let $C\subset {\mathbb A}^2$ be any smooth geometrically connected curve, with $x\in C$, then the homomorphism $\pi_1(C,x)\to \pi_1({\mathbb A}^2,x)$ is {\it never surjective}. So there can't possibly be any Lefschetz type theorem for $\pi_1(X,x)$ when $X$ is non-proper.
\end{lem}
\begin{proof}
By Theorem~\ref{thm:HES} we may assume $k=\bar k$. Let $f\in k[s,t]$ be the defining equation of $C$, and let $L$ be a line cutting $C$ transversally. Thus the restriction $f|_L: L\to {\mathbb A}^1$ has degree $d \ge 1$, the degree of $f$, and is ramified in $d$ points. Thus if $T\to {\mathbb A}^1 $ is a connected Artin-Schreier cover, then $T\times_{{\mathbb A}^1} L$ is connected, thus a fortiori $T\times_{{\mathbb A}^1} {\mathbb A}^2$ is connected as well, and $T\times_{{\mathbb A}^1} {\mathbb A}^2\to {\mathbb A}^2$ induces a ${\mathbb Z}/p$-quotient of $\pi_1({\mathbb A}^2, x)$. However, $(T\times_{{\mathbb A}^1} {\mathbb A}^2) \times_{{\mathbb A}^2} C = (T\times_{{\mathbb A}^1} 0 ) \times_k C\to {\mathbb A}^2\times_{{\mathbb A}^2} C=C$ splits completely. Thus the composite homomorphism
$\pi_1(C, x)\to \pi_1({\mathbb A}^2, x) \to {\mathbb Z}/p$ is $0$.
\end{proof}
\noindent
We remark however: Theorem~\ref{thm:HES2} enables one to compare $\pi_1(X_{\bar k})$ in characteristic $0$ and $p>0$. Indeed, assume $X$ is separable proper of finite type, defined over an algebraically closed characteristic $p>0$ field $\bar k$, and there is a model $X_R/R$, i.e. flat (thus proper), over a strictly henselian ring $R$
with residue field $\bar k$ and field of fractions $K$ of characteristic $0$. Then $X_R/R$ is separable. Theorem~\ref{thm:HES2} refines to saying
\begin{prop}
$\pi_1(X_{\bar k}, x) \to \pi_1(X_R, x)$
is an isomorphism.
\end{prop}
\noindent
This is a direct consequence of the equivalence of categories between ${\rm \acute{E} t}(X_{\bar k})$ and ${\rm \acute{E} t}(X_{R_n})$
(\cite[Thm.~18.1.2]{EGA4}), and of the formal function theorem (\cite[Thm.~5.1.4]{EGA3}).
Here $R_n=R/\langle \pi^n\rangle$, where $\pi$ is a uniformizer of $R$.\\[.2cm]
\noindent
This defines the {\it specialization homomorphism} $sp: \pi_1(X_{\bar K}, x_{\bar K})\to \pi_1(X_{\bar k}, x)$, if $x_{\bar K}$ specializes to $x$.
\begin{thm}[Grothendieck's specialization theorem, \cite{SGA1}] \label{thm:sp}
If $X_R/R$ is proper separable, then sp is surjective.
\end{thm}
\noindent
We see that a surjective specialization cannot exist for non-proper varieties, e.g. for ${\mathbb A}^1_R$, where $R$ is the ring of Witt vectors of an algebraically closed characteristic $p>0$ field. But in the proper case, the \'etale fundamental group cannot be larger than the one in characteristic $0$.
\section{Remarks on conjugate varieties}
\noindent
Let $X$ be a complex variety. Then $X$ is defined over a subfield $k\subset {\mathbb C}$ of finite type over ${\mathbb Q}$. Let $k\subset K $ be any algebraically closed field, and $\bar k$ be the algebraic closure of $k$ in $K$. Then Proposition~\ref{prop:basechange} is now much better behaved:
\begin{prop} \label{prop:basechangechar0}
The homomorphism $\pi_1(X_K, \iota_K) \to \pi_1(X_{\bar k}, \iota)$ is an isomorphism.
\end{prop}
\begin{proof}
Surjectivity comes from Proposition~\ref{prop:basechange}. On the other hand, any finite \'etale cover $\pi: Y\to X_K$ is defined over an affine algebra $R=k[S]$, say $\pi_R: Z \to X_R$, so that $\pi_R\otimes _R K =\pi$. We choose a complex embedding $k\hookrightarrow {\mathbb C}$, inducing the embedding
$R\hookrightarrow {\mathbb C}[S]$. Thus $(\pi_R)\otimes_k {\mathbb C}$ is a finite \'etale cover of $X_{{\mathbb C}} \times_{{\mathbb C}} S_{{\mathbb C}}$. As the topological fundamental group verifies the K\"unneth formula, one concludes that $Z\otimes_k {{\mathbb C}}$ is isomorphic over $S_{{\mathbb C}}$ to $V\times_{{\mathbb C}} S_{{\mathbb C}}$, for some connected finite \'etale cover $V\to X_{{\mathbb C}}$. So $V$ is isomorphic over ${\mathbb C}$ to $Z_s\times_s {\mathbb C}$, and $Z\otimes_k {\mathbb C}$ is isomorphic over $S_{{\mathbb C}}$ to $(Z_s\times_k S)\otimes_k {\mathbb C}$ for any closed point $s \in S$. Since this isomorphism
is defined over an affine $k$-scheme $T$ say, by specializing at a closed point of $T$ one obtains the splitting $Z=Z_s\times_k S$ over $S$.
\end{proof}
\noindent
However, Proposition~\ref{prop:basechangechar0} does not extend to the topological fundamental group. Indeed Serre (\cite{Ser64}) constructed an example of an $X$ together with a $\sigma \in {\rm Aut}({\mathbb C}/K)$ such that the topological fundamental group $\pi_1^{\rm top}(X_{\sigma})$ of the conjugate variety $X_\sigma$ is not isomorphic to the topological fundamental group $\pi_1^{\rm top}(X)$ of $X$. (We do not write the base points as they do not play any r\^ole). \\[.1cm]
\section{Lefschetz theorems in the projective case and in the tame case}
\begin{thm}[Grothendieck's Lefschetz theorems, \cite{SGA2}] \label{thm:et_lef}
Let $X$ be a regular geometrically connected projective variety defined over a field $k$. Let $Y\to X$ be a regular hyperplane section. Let $x\in Y\subset X$ be a geometric point. Then the continuous homomorphism of groups $\pi_1(Y,x)\to \pi_1(X,x)$
is
\begin{itemize}
\item[1)] surjective if $Y$ has dimension $1$;
\item[2)] an isomorphism if $Y$ has dimension $\ge 2$.
\end{itemize}
\end{thm}
\noindent
We see immediately a wealth of corollaries of this fundamental theorem. Let us comment on a few of them.\\[.1cm]
The theorem implies in particular that $Y$ {\it is geometrically connected}. If $X$ and $Y$ are smooth over $k$, then $Y_{\bar k}$ can't have several components as each of them would be ample, thus they would meet, and $Y_{\bar k}/\bar k$ could not be smooth. But if we assume only regularity, this is already a subtle piece of information.
\begin{cor} \label{cor:ft_proj} If in addition, $X/k$ is smooth,
$\pi_1(X_{\bar k}, x)$ is topologically of finite type.
\end{cor}
\begin{proof} If $k$ has characteristic $0$, we just apply Corollary~\ref{cor:profinite} together with Proposition~\ref{prop:basechange}. If $k$ has characteristic $p>0$,
applying Theorem~\ref{thm:et_lef} we may assume that $X$ has dimension $1$. Then $X$ lifts to characteristic $0$, so we apply Theorem~\ref{thm:sp}, and then Proposition~\ref{prop:basechangechar0} to reduce the problem to $k={\mathbb C}$, then Corollary~\ref{cor:profinite} to reduce to the explicit topological computation.
\end{proof}
\noindent
One has the notion of {\it tame} fundamental group. Recall that a finite extension $R\hookrightarrow S$ of discretely valued rings is {\it tame} if the ramification index is not divisible by $p$, the residue characteristic, and if the residue field extension is separable (\cite{Ser62}). \\[.1cm]
One has two viewpoints to define tame coverings. If $X$ has a good compactification $X\hookrightarrow \bar X$ with a strict normal crossings divisor at infinity, then a finite \'etale cover $\pi: Y\to X$ is said to be tame if, for $\bar Y$ the normalization of $\bar X$ in the field of rational functions on $Y$, and all codimension $1$ points $y$ on $\bar Y$, with image $x$ in $\bar X$, a codimension $1$ point on $\bar X$, the extension ${\mathcal O}_{\bar X,x}\hookrightarrow {\mathcal O}_{\bar Y, y}$ is tame. \\[.1cm]
Another viewpoint is to say that a finite \'etale cover $\pi: Y\to X$ is tame if and only if its restriction to every smooth curve $C\to X$ is tame. \\[.1cm]
That those two definitions are equivalent is a theorem of Kerz-Schmidt \cite{KS10}. It enables one to define tame covers via the curve criterion without having a good compactification at disposal. \\[.1cm]
This defines the {\it tame fundamental group} as a continuous quotient
$\pi_1(X,x)\to \pi_1^t(X,x)$.
By definition, the tame quotient factors
$$\pi_1(X,x)\to \pi_1^t(X,x)\to\pi_1(k, \iota)$$ from \eqref{1}, which is surjective if $X$ is geometrically connected.
\begin{thm}[Tame Lefschetz theorems, \cite{EKin15}] \label{thm:EK}
Let $X\hookrightarrow \bar X$ be a good regular projective compactification of a regular quasi-projective scheme $X$ defined over a field, that is $D=\bar X\setminus X$ is a normal crossings divisor with regular components. Let $\bar Y$ be a regular hyperplane section which intersects $D$ transversally. Set $Y=\bar Y\setminus D \cap \bar Y$.
Then the continuous homomorphism of groups $\pi^t_1(Y,x)\to \pi^t_1(X,x)$
is
\begin{itemize}
\item[1)] surjective if $Y$ has dimension $1$;
\item[2)] an isomorphism if $Y$ has dimension $\ge 2$.
\end{itemize}
\end{thm}
\noindent
Again we see that this implies in particular that $Y$ and $X$ have the same field of constants.
\begin{cor} \label{cor:ft_qproj} If in addition $X/k$ is smooth, then
$\pi_1^t(X_{\bar k}, x)$ is topologically of finite type.
\end{cor}
\begin{proof}
Theorem~\ref{thm:EK} enables one to assume ${\rm dim}(X)=1$. Then one argues as for Corollary~\ref{cor:ft_proj}, applying the surjective specialization homomorphism of Mme Raynaud (see \cite[XIII, Cor.~2.12]{SGA1}) for the tame fundamental group for $X_R\subset \bar X_R$ a relative normal crossings divisor compactification of the curve $X_R$ over $R$.
\end{proof}
\section{Deligne's $\ell$-adic conjectures in Weil II}
\noindent
In Weil II, \cite[Conj.1.2.10]{Del80}, Deligne conjectured that if $X$ is a normal connected scheme of finite type over a finite field, and $V$ is an irreducible lisse
$\bar{{\mathbb Q}}_\ell$ sheaf of rank $r$ over $X$, with finite determinant, then
\begin{itemize}
\item[(i)] $V$ has weight $0$,
\item[(ii)] there is a number field $E(V)\subset \bar{ {\mathbb Q}}_\ell$
containing all the coefficients of the local characteristic polynomials $f_V(x)(t)={\rm
det}(1-tF_x|_{V_x})$, where $x$ runs through the closed points of $X$ and $F_x$ is the
geometric Frobenius at the point $x$,
\item[(iii)] $V$ admits $\ell'$-companions for all prime
numbers $\ell'\neq p$.
\end{itemize}
The last point means the following. Given a field isomorphism $\sigma: \bar {\mathbb Q}_\ell \to \bar {\mathbb Q}_{\ell'}$ for prime numbers $\ell, \ell'$ (possibly $\ell=\ell'$) different from $p$, there is a $\bar {\mathbb Q}_{\ell'}$-lisse sheaf $V_{\sigma}$ such that $$\sigma f_V=f_{V_{\sigma}}.$$
(Then automatically, $V_{\sigma}$ is irreducible and has finite determinant as well.)\\[.1cm]
As an application of his Langlands correspondence for ${\rm GL}_r$,
Lafforgue proved (i), (ii), (iii) for $X$ a smooth curve \cite{Laf02}.
In order to deduce (i) on $X$ smooth of higher dimension, one needs a Lefschetz type theorem.
\begin{thm}[Wiesend \cite{Wie06}, \cite{Wie08}, Deligne \cite{Del12}, Drinfeld \cite{Dri12}] \label{thm:curve}
Let $X$ be a smooth variety over ${\mathbb F}_q$ and $V$ be an irreducible $\bar {\mathbb Q}_\ell$-lisse (or Weil) sheaf. Then there is a smooth curve $C \to X$ such that $V|_C$ is irreducible. One can request $C$ to pass through a finite number of closed points $x\in X$ with the same residue field $k(x)$.
\end{thm}
\noindent
There is a mistake in \cite{Laf02} on this point.
The Lefschetz theorem is in fact weaker than claimed in {\it loc.cit.}: the curve depends on $V$ and is not good for all $V$ at the same time.
\begin{proof} The sheaf
$V$ corresponds to a representation $\rho: \pi_1(X,\bar x)\to GL(r, R)$ where $R\supset {\mathbb Z}_\ell$ is a finite extension. Let $\frak{m}\subset R$ be its maximal ideal. One defines $H_1$ as the kernel of
$\pi_1(X, \bar x)\to GL(r, R)\to GL(r, R/\frak{m})$. Let $X_1\to X$ be the Galois cover such that $H_1= \pi_1(X_1,\bar x)$. One defines $H_2$ to be the intersection of the kernels of all homomorphisms $ H_1\to {\mathbb Z}/\ell$. As $H_1(X_1, {\mathbb Z}/\ell)$ is finite, $H_2\hookrightarrow H_1$ is of finite index. Then $H_2$ is normal in $\pi_1(X, \bar x)$. This defines the covers $X_2\to X_1\to X$, where $X_2\to X$ is Galois and $H_2= \pi_1(X_2, \bar x)$. In addition, any continuous homomorphism $ K\to \rho(\pi_1(X, \bar x))$ from a profinite group $K$ is surjective
if and only if its quotient
$K\to \pi_1(X, \bar x)/\pi_1(X_2, \bar x)= \rho( \pi_1(X, \bar x))/\rho(\pi_1(X_2, \bar x) )$ is surjective. \\[.1cm]
Then one needs a curve $C$ passing through $x$ such that $\pi_1(C, \bar x)\to
\pi_1(X, \bar x)/\pi_1(X_2, \bar x)$ is surjective.
To this aim, one may apply Hilbert irreducibility (Drinfeld), see also \cite{EK12}, or Bertini \`a la Jouanolou (Deligne). \\[.1cm]
On the latter: one may assume $X$ affine (so $X_2$ affine as well). Then take an affine embedding $X\hookrightarrow {\mathbb A}^N$, and consider the Grassmannian of lines in ${\mathbb A}^N$. Bertini implies there is a non-empty open subset of the Grassmannian over $\bar {\mathbb F}_q$ such that the pull-back to $X_2\otimes \bar {\mathbb F}_q$ of the line corresponding to any of its closed points is connected and smooth. Making the open smaller, it is defined over ${\mathbb F}_q$. This yields the result.
In the construction, one may also fix first a finite number of closed points. The curves one obtains in this way are defined over ${\mathbb F}_{q^n}$ for a certain $n$ as our open might perhaps have no ${\mathbb F}_q$-rational point. \\[.1cm]
On the former: one may assume $X$ affine and consider a Noether normalisation $\nu : X\to {\mathbb A}^d$ which is generically \'etale. The points $x_i$ map to $y_i$ (in fact one may even assume $\nu$ to be \'etale at those points). Then take a linear projection ${\mathbb A}^d\to {\mathbb A}^1$ and consider $X_{2, k({\mathbb A}^1)}\to X_{k({\mathbb A}^1)} \to {\mathbb A}^{d-1}_{k({\mathbb A}^1)}$. Hilbert irreducibility implies there are closed $k({\mathbb A}^1)$-points of ${\mathbb A}^{d-1}_{k({\mathbb A}^1)}$, thus with value in $k(\Gamma_i)$ for finite covers $\Gamma_i \to {\mathbb A}^1$, which do not split in $
X_{2, k({\mathbb A}^1)}$, the image of which in $X_{k({\mathbb A}^1)} $ specialises to $x_i$.
\end{proof}
\noindent
Using Lafforgue's results, Deligne showed (ii)
in 2007 \cite{Del12}. He first proves it on curves. Lafforgue shows that finitely many Frobenii of closed points determine the number field containing all eigenvalues of closed points. Deligne makes the bound effective, depending on the ramification of the sheaf and the genus of the curve. Then given a closed point of high degree, he shows the existence of a curve with small bound which passes through this point. \\[.1cm]
Using (ii) and ideas of Wiesend, Drinfeld \cite{Dri12} showed
(iii) in 2011, assuming in addition $X$ to be smooth.
\noindent
To reduce the higher dimensional case to curves, starting with $V$, Drinfeld shows that a system of eigenvalues for all closed points comes from a $\sigma({\mathcal O}_{E_{\ell}})$-adic sheaf, where $E_\ell$ is the finite extension of ${\mathbb Q}_\ell$ which contains the monodromy ring of $V$, if and only if it does on curves and has tame ramification on the finite \'etale cover on which $V$ has tame ramification.\\[.1cm]
Let $E(V)$ be Deligne's number field for $V$ irreducible with finite determinant, on $X$ {\it smooth} connected over ${\mathbb F}_q$.
\begin{thm}[Lefschetz for $E(V)$, \cite{EK12}] \label{thm:E(V)} Let $X$ be smooth over ${\mathbb F}_q$.
\begin{enumerate}
\item[1)] For $\emptyset \neq U \subset X$ open, $E(V|_{U})=E(V)$.
\item[2)] Assume $X$ has a good compactification $X\hookrightarrow \bar X$. Let $C\hookrightarrow X$ be a smooth curve passing through $x$ such that
$\pi_1^t(C, \bar x)\to \pi_1^t(X,\bar x)$ is surjective
(Theorem~\ref{thm:EK}, 1)). Then for all tame $V$ irreducible with finite determinant, $E(V|_C)=E(V)$.
\end{enumerate}
\end{thm}
\begin{proof}
Ad 1): One has an obvious injection $E(V|_{U})\hookrightarrow E(V)$. To show surjectivity, let $\sigma \in {\rm Aut} (E(V)'/E(V|_U))$, where $E(V)'/E(V|_U)$ is the Galois closure of $E(V)/E(V|_U)$ in $\bar {\mathbb Q}_\ell$. Then $f(V_{\sigma}|_U)=f(V|_U)$ so by \v{C}ebotarev's density theorem, one has $V_{\sigma}|_U=V|_U$. From the surjectivity
of $\pi_1(U, \bar x)\to \pi_1(X,\bar x)$
one obtains $V_\sigma=V$.\\[.1cm]
Ad 2): Same proof as in 1): take $\sigma \in {\rm Aut}(E(V)/E(V|_C))$. Then
$f(V_{\sigma}|_C)=f(V|_C)$, thus by the Lefschetz theorem $V_{\sigma}=V$. \\[.1cm]
\end{proof}
\noindent
Let $X$ be a geometrically connected scheme of finite type over ${\mathbb F}_q$, $\alpha: X'\to X$ be a finite \'etale cover. One says that a $\bar {\mathbb Q}_\ell$-lisse sheaf $V$ has ramification bounded by $\alpha$ if $\alpha^* (V)$ is tame (\cite{Dri12}).
This means that for any smooth curve $C$ mapping to $X'$, the pullback of $V$ to $C$ is tame in the usual sense.
If $X$ is geometrically unibranch, so is $X'$, and $V$ is defined by a representation $\rho: \pi_1(X)\to GL(r, R)$ of the fundamental group
where $R\supset {\mathbb Z}_\ell$ is a finite extension of discrete valuation rings. Then $V$ has ramification bounded by any $\alpha$ such that $\pi_1(X')\subset {\rm Ker} \big(\pi_1(X)\to GL(r, R)\to GL(r, R/2\ell)\big)$.
If $X$ is smooth, so is $X'$, and $\alpha^*(V)$ being tame amounts to saying that the induced representation of $\pi_1(X')$ factors through $\pi^t_1(X')$. Given a natural number $r$ and given $\alpha$, one defines ${\mathcal S}(X, r, \alpha)$ to be the set of isomorphism classes of irreducible $\bar {\mathbb Q}_\ell$-lisse sheaves $V$ of rank $r$ on $X$, such that $\alpha^*(V)$ is tame, modulo twist by a character of the Galois group of ${\mathbb F}_q$. \\[.1cm]
\noindent
Let $X$ be a smooth scheme of finite type over ${\mathbb F}_q$, $X\hookrightarrow \bar X$ be a normal compactification, and $D$ be a Cartier divisor with support $\bar X\setminus X$. One says that a $\bar {\mathbb Q}_\ell$-lisse sheaf $V$ has ramification bounded by $D$ if for any smooth curve $C$ mapping to $X$,
with compactification $\bar C\to \bar X$, where $\bar C$ is smooth,
the pullback $V_C$ of $V$ to $C$ has Swan conductor bounded above by $\bar C\times_{\bar X} D$ (\cite{EK12}). If $V$ has ramification bounded by $\alpha$, then also by $D$
for some Cartier divisor with support $\bar X\setminus X$ (\cite{EK12}).
Given a natural number $r$ and given $D$, one defines ${\mathcal S}(X, r, D)$ to be the set of isomorphism classes of irreducible $\bar {\mathbb Q}_\ell$-sheaves of rank $r$ on $X$, of ramification bounded by $D$, modulo twist by a character of the Galois group of ${\mathbb F}_q$. \\[.1cm]
\begin{thm}[Deligne's finiteness, \cite{EK12}] \label{thm:finiteness}
On $X$ smooth of finite type over ${\mathbb F}_q$, with a fixed normal compactification $X\hookrightarrow \bar X$ and a fixed Cartier divisor $D$ with support $\bar X\setminus X$, the set ${\mathcal S}(X, r, D)$ is finite.
\end{thm}
\noindent
Deligne's proof is very complicated. It relies on the existence of companions, thus $X$ has to be smooth. However, one can prove the following variant of Deligne's finiteness theorem in a very simple way, without using the existence of the companions and the existence of the number field.
\begin{thm}[\cite{Esn16}] \label{thm:E}
On $X$ geometrically unibranch over ${\mathbb F}_q$, with a fixed finite \'etale cover $\alpha: X'\to X$, the set ${\mathcal S}(X, r, \alpha)$ is finite.
\end{thm}
\noindent
The proof relies crucially on Theorem~\ref{thm:E(V)}. Indeed, on a good alteration $Y\to X'$, there is one curve $C\to Y$ such that $\pi_1^t(C)\to \pi_1^t(Y)$ is surjective. Then, for any natural number $s$,
the set ${\mathcal S}(Y, s, {\rm id})$ injects via restriction into ${\mathcal S}(C, s, {\rm id})$, which is finite by \cite{Laf02}.
\begin{rmk}
If one were able to reprove Drinfeld's theorem in an easier way, one could in this way reprove Deligne's
theorem on the existence of the number field: indeed ${\rm Aut}(\bar {\mathbb Q}_\ell/{\mathbb Q})$ then acts on the $\ell$-adic irreducible sheaves with bounded rank and ramification, and each object $V$ has a finite orbit. So the stabilizer $G$ of $V$ is of finite index in ${\rm Aut}(\bar {\mathbb Q}_\ell/{\mathbb Q})$.
This defines $E(V)=\bar {\mathbb Q}_\ell^G$, a finite extension of ${\mathbb Q}$.
\end{rmk}
\noindent
The method used in the proof of Theorem~\ref{thm:E} enables one to enhance the Lefschetz theorem for $E(V)$.
\begin{thm}[Lefschetz for $E(V)$] Let $X$ be smooth of finite type over ${\mathbb F}_q$,
let $\alpha: X'\to X$ be a finite \'etale cover, let a natural number $r$ be given.
Then there is a smooth curve $ C \to X$, finite over its image, and a natural number $m>0$ such that $E(V|_C)$ contains $E(V\otimes {\mathbb F}_{q^m})$ for all $\bar {\mathbb Q}_\ell$-lisse sheaves $V$ with class in ${\mathcal S}(X,r,\alpha)$.
\end{thm}
\section{Deligne's crystalline conjecture in Weil II} \label{s:crys}
\noindent
In \cite[Conj.1.2.10]{Del80}, Deligne predicts crystalline companions, without giving a precise conjecture. It has been later made precise by Crew \cite{Cre86}.
In the sequel, $X$ is a smooth geometrically connected variety of finite type over ${\mathbb F}_q$. We briefly recall the definition of the category of $F$-overconvergent isocrystals. \\[.1cm]
Crystals are crystals in the crystalline site. They form a $W({\mathbb F}_q)$-linear category. Its ${\mathbb Q}$-linearization is the category of isocrystals. It is a $K$-linear category, where $K$ is the field of fractions of $W({\mathbb F}_q)$.
The Frobenius (here $x\mapsto x^q$) acts on the category. An isocrystal $M$ is said to have an $m$-th Frobenius structure, or equivalently is an $F^m$-isocrystal, for some natural number $m\ge 1$, if it is endowed with an isomorphism $F^{m*}M\cong M$ of isocrystals.
An isocrystal with a Frobenius structure is necessarily convergent. Convergent isocrystals are the isocrystals which are $F^\infty$-divisible.
Isocrystals with a Frobenius structure are not necessarily overconvergent. Overconvergence is an analytic property along the boundary of $X$, and concerns the radius of convergence at infinity of $X$ of the underlying $p$-adic differential equation. Overconvergence
is defined on isocrystals, whether or not they carry a Frobenius structure.
One defines the $\bar {\mathbb Q}_p$-linear category of $F$-{\it overconvergent isocrystals} as follows
(see \cite[Section~1.1]{AE16}). One first considers the category of overconvergent isocrystals over $K$, then $\bar {\mathbb Q}_p$-linearize it for a given algebraic closure $K\hookrightarrow \bar {\mathbb Q}_p$. In this $\bar {\mathbb Q}_p$-linear category, one defines the subcategory of
isocrystals with an $F^m$-structure in this category, for some natural number $m\ge 1$. The morphisms respect all the structures.
It is a $\bar {\mathbb Q}_p$-linear tannakian category.\\[.1cm]
\noindent
The analytic overconvergence condition is difficult to understand. However,
Kedlaya \cite{Ked07} proved that an isocrystal with a Frobenius structure is overconvergent if and only if
there is an alteration $ Y\to X$, with $Y$ smooth admitting a good compactification $Y\hookrightarrow \bar Y$, such that the isocrystal $M$, pulled back to $Y$, has nilpotent residues at infinity. \\[.2cm]
The category of $F$-overconvergent isocrystals
is believed to be the 'pendant' to the category of $\ell$-adic sheaves.
More precisely, Deligne's conjecture can be interpreted as saying: \\[.1cm]
\begin{enumerate}
\item For $V$ an irreducible $\bar {\mathbb Q}_\ell$-sheaf with torsion determinant, and an abstract isomorphism of fields $\sigma: \bar {\mathbb Q}_\ell \xrightarrow{\cong} \bar {\mathbb Q}_p$, there is an irreducible $F$-overconvergent isocrystal $M$ with torsion determinant, called $\sigma$-companion, with the property: for any closed point $x$ of $X$,
the characteristic polynomials $f_V(x)\in \bar {\mathbb Q}_\ell[t]$ of the geometric $F_x$ acting on $V_x$ and the characteristic polynomial $f_M(x) \in \bar {\mathbb Q}_p[t]$ of the absolute Frobenius (still denoted by) $F_x$ acting on $M_x$ are the same via $\sigma$: $\sigma f_V=f_M$.
\item And vice-versa: for $M$ an
irreducible $F$-overconvergent isocrystal with torsion determinant, and an abstract isomorphism of fields $\sigma: \bar {\mathbb Q}_\ell \xrightarrow{\cong} \bar {\mathbb Q}_p$, there is an irreducible $\bar {\mathbb Q}_\ell$-lisse sheaf $V$ with torsion determinant with the property $\sigma f_V=f_M$.
\end{enumerate}
\begin{thm}[Abe, crystalline companions on curves, \cite{Abe13}]
The whole strength of (1) and (2) is true on smooth curves.
\end{thm}
A first step towards a Lefschetz theorem for $F$-overconvergent isocrystals is the following weak form of \v{C}ebotarev-density theorem.
\begin{thm}[Abe, \v{C}ebotarev for $F$-overconvergent isocrystals, \cite{Abe13}]
If two $F$-overconvergent isocrystals have equal eigenvalues of the local Frobenii, then their semi-simplifications are isomorphic.
\end{thm}
\noindent
One has an analog of Theorem~\ref{thm:EK}.
\begin{thm}[Tame Lefschetz theorems for F-overconvergent isocrystals, \cite{AE16}]
Let $X\hookrightarrow \bar X$ be a good regular projective compactification of a smooth quasi-projective scheme $X$, geometrically irreducible over a finite field, such that $\bar X\setminus X$ is a normal crossings divisor with smooth components. Let $\bar C$ be a curve, smooth complete intersection of smooth ample divisors in good position with respect to
$\bar X\setminus X$, and set $C=\bar C\setminus \bar C\cap (\bar X\setminus X)$.
Then the restriction to $C$ of any irreducible $F$-overconvergent isocrystal $M$ is irreducible.
\end{thm}
\noindent
One also has the precise analog of the Lefschetz Theorem~\ref{thm:curve}.
\begin{thm}[Abe-Esnault \cite{AE16}] \label{thm:curve_crys}
Let $X$ be a smooth variety over ${\mathbb F}_q$ and $M$ be an irreducible $F$-overconvergent isocrystal. Then there is a smooth curve $C \to X$ such that $M|_C$ is irreducible. One can request $C$ to pass through a finite number of closed points $x\in X$ with the same residue field $k(x)$.
\end{thm}
\noindent
It is beyond the scope of these notes to give the essential points of the proof of these two theorems. We observe that one strongly uses the Tannakian structure of the category. Rather than proving the theorems as stated, one shows that the restriction to a good curve preserves the Tannakian group spanned by one object.
To this aim, one ingredient is a version of class field theory for rank one $F$-overconvergent isocrystals, due to Abe. This enables one to argue purely cohomologically at the level of the global sections in the Tannakian category, and their behavior after restriction to a curve. This also yields a new proof of Theorem~\ref{thm:curve}.
\\[.1cm]
Theorem~\ref{thm:curve_crys} has a number of consequences. The first one is
\begin{thm}[Abe-Esnault, \cite{AE16}] \label{thm:AEcomp}
(2) is true.
\end{thm}
\noindent
The existence of $V$ as a Weil sheaf, without the irreducibility property, has been proved independently by Kedlaya \cite{Ked16}, introducing weights, which are not discussed in these notes. \\[.2cm]
Other ones consist in transposing on the crystalline side what one knows on the $\ell$-adic side, such as Deligne's Finiteness Theorem~\ref{thm:finiteness}: in bounded rank and bounded ramification, there are, up to twist by a rank one $F$-isocrystal of ${\mathbb F}_q$, only finitely many isomorphism classes of irreducible $F$-overconvergent isocrystals. The notion of bounded ramification here is not intrinsic to the crystalline theory. One says that the $F$-overconvergent isocrystal has ramification bounded by an effective Cartier divisor $D$ supported at infinity of $X$ if a $\sigma$-companion has. This notion does not depend on the choice of the isomorphism $\sigma: \bar {\mathbb Q}_\ell \xrightarrow{\cong} \bar {\mathbb Q}_p$ chosen (\cite{AE16}).\\[.2cm]
\noindent
There is a {\it hierarchy} of $F$-overconvergent isocrystals. \\[.1cm]
Among the $F$-overconvergent isocrystals, there are those which, when restricted to any closed point of $X$, are unit-root $F$-isocrystals. This simply means that they consist of a finite dimensional vector space over the field of fractions of the Witt vectors of the residue field of the point, together with a $\sigma$-linear isomorphism with slopes equal to $0$. (We do not discuss slopes here).
This defines the category of {\it unit-root} F-isocrystals, as a subcategory of the category of $F$-overconvergent isocrystals. \\[.1cm]
Crew \cite{Cre87} proves that they admit a lattice, that is a crystal with the same isocrystal class, which is stabilized by the Frobenius action, which implies that the lattice is locally free.
Such a lattice is defined by a representation
$\pi_1(X,\bar x)\to GL(r, R),$ where $R$ is a finite extension of ${\mathbb Z}_p$. In particular, all eigenvalues of the Frobenius at closed points are $p$-adic units in $\bar {\mathbb Q}_p$. Drinfeld defines a unit-root $F$-overconvergent isocrystal to be {\it absolute unit-root} if the images of those eigenvalues under any automorphism of $\bar {\mathbb Q}_p$ are still $p$-adic units.
\begin{thm}[Koshikawa, \cite{Kos15}] \label{thm:koshikawa}
Irreducible absolute unit-root $F$-overconvergent isocrystals with finite determinant are iso-constant, that is, the restriction of the representation to the geometric fundamental group $\pi_1(X_{\bar {\mathbb F}_q}, x)$ has finite monodromy.
\end{thm}
\noindent
Isoconstancy means that after a finite \'etale cover, the isocrystal is constant, that is comes from an isocrystal on the ground field.
\noindent
\begin{rmk} One can summarize geometrically, as opposed to analytically, the following different variants of isocrystals.
\begin{enumerate}
\item
Irreducible absolute $F$-overconvergent unit-root isocrystals with finite determinant: those are the iso-constant ones;
\item Irreducible $F$-overconvergent unit-root isocrystals: those are the ones which are potentially unramified;
\item Irreducible $F$-overconvergent isocrystals: those are the $F$-convergent isocrystals which become nilpotent after some alteration.
\end{enumerate}
The point (2), not discussed here, is used in the proof of (1) (Theorem~\ref{thm:koshikawa}) and is due to Tsuzuki \cite{Tsu02}.
\end{rmk}
\noindent
{\it Acknowledgements:} It is a pleasure to thank Jakob Stix for a discussion on separable base points reflected in Section~\ref{s:pi}. We thank Tomoyuki Abe and Atsushi Shiho for discussions. We thank the public of the Santal\'o lectures at the Universidad Complutense de Madrid (October 2015) and the Rademacher lectures at the University of Pennsylvania (February 2016), where some points discussed in those notes were presented.
In particular, we thank
Ching-Li Chai for an enlightening discussion on compatible systems. We thank Moritz Kerz for discussions we had when we tried to understand Deligne's program in Weil II while writing \cite{EK12}. We thank the two referees for their friendly and thorough reports which helped us to improve the initial version of these notes.
\section{Introduction}
To characterize the level of solar activity one traditionally uses
amplitude indices, which are calculated on the base of number and
size of sunspots (the Wolf number, Group Sunspot Number, sums of
sunspot group areas etc). However, until the beginning of the epoch
of regular observations of sunspots those indices are often derived
from non-uniform data, which contain errors due to loss or incorrect
treatment of observations. For example, it is known (see, e.g.,
\cite{vit}), that the widely used Z\"urich series of
the Wolf number before the middle of the 19th century was
constructed by R. Wolf on the base of a fragmentary data.
In addition to amplitude indices there are data on spatial
distributions of sunspot groups. First of all, such data are
presented in the Greenwich catalog of sunspot groups. Recently other
catalogs with sunspot coordinates for earlier epochs became
available, e.g., the catalogs based on observations of Staudacher
\cite{arlt09} and Schwabe \cite{arlt13}.
On the one hand, both the information on the number of sunspots and that on their
latitude distribution are subject to distortions caused by loss of
observational data, but the distortion is much weaker for the latter. On the other
other hand, there are stable links between the latitude distribution
of sunspots in the 11-year solar cycle and its amplitude
\cite{imn,im11,im14}. Therefore, latitude characteristics of
sunspots can be used for control and correction of normalization of
traditional series of amplitude indices. In this paper we
demonstrate it by analyzing the extended Greenwich catalogue (GC),
which includes the original Greenwich data and their extension by
NOAA/USAF (\url{http://solarscience.msfc.nasa.gov/greenwch.shtml}), and
the Schwabe catalog (SC) \cite{arlt13}
(\url{http://www.aip.de/members/rarlt/sunspots/schwabe}).
\section{Data and method}
It is convenient to use as an amplitude characteristic of solar
activity the index G that is equal to yearly averaged daily numbers
of observed sunspot groups. This index can be readily obtained from
sunspot groups catalogs; it is tightly related to the Group Sunspot
Number index (GSN) proposed by Hoyt and Schatten \cite{hoyt} and
differs from the latter basically in its normalization (G $\approx$
GSN/12). As a measure of the sunspot latitude extension we will use
the yearly means of absolute values of sunspot group latitudes $\phi$
and their dispersions averaged over the two hemispheres
$\sigma_{\phi}^2$.
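For illustration, the sketch below computes such yearly indices from a toy table of daily sunspot-group observations; the column names and numerical values are purely illustrative and do not correspond to the actual GC or SC files, and the dispersion of $|\phi|$ is used as a shortcut for the hemisphere-averaged dispersion.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy catalog: one row per (day, group) observation with a latitude in degrees.
days = pd.date_range("1880-01-01", "1881-12-31", freq="D")
rows = []
for d in days:
    for g in range(rng.poisson(4)):          # a few groups per day (toy numbers)
        rows.append({"date": d, "group": g, "latitude": rng.normal(15.0, 6.0)})
cat = pd.DataFrame(rows)
cat["year"] = cat["date"].dt.year
cat["abslat"] = cat["latitude"].abs()

# G: yearly average of the daily number of observed groups
G = cat.groupby("date")["group"].nunique().groupby(lambda d: d.year).mean()
# phi and sigma_phi^2: yearly mean and dispersion of |latitude|
phi = cat.groupby("year")["abslat"].mean()
sigma2 = cat.groupby("year")["abslat"].var()
print(pd.DataFrame({"G": G, "phi": phi, "sigma2": sigma2}))
\end{verbatim}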
In Fig.~\ref{fig1} the latitude distribution of groups (``the
Maunder butterflies'') and indices G, $\phi$ and $\sigma_{\phi}^2$
for GC (1874--2014) and SC (1825--1867) are plotted. In SC each
drawing of the Sun is attributed by the so called ``subjective
quality flag'' $Q$, and in the following we will use, unless
otherwise stated, only data with $Q=1$ corresponding to the highest
quality. The thin lines in plots of $\phi$ and $\sigma_{\phi}^2$
correspond to years of cyclic minimums and two adjacent years, which will
not be taken into account in analysis of latitude properties, since
in these years the wings of neighboring Maunder butterflies overlap,
so the mean latitudes are ambiguous and the hemisphere dispersions
are strongly overestimated.
\begin{figure}
\begin{center}
\ff{\includegraphics[width=0.99\textwidth]{fig1s.eps}}
\caption{Top panel: the latitude distribution of sunspot groups
(``the Maunder butterflies'') for GC (1874--2014) and SC
(1825--1867). Lower panels: indices G, $\phi$ and $\sigma_{\phi}^2$
for both catalogs. See the text for details.} \label{fig1}
\end{center}
\end{figure}
The empty circles in Fig.~\ref{fig1} mark maximums of index G and
the mean latitudes $\phi_{G_{\rm max}}$ corresponding to these
moments (for the bimodal 20th cycle, the moment between the two almost
equal peaks is selected as the maximum). The values $G_{\rm max}$ and
$\phi_{{\rm G}_{\rm max}}$ are well correlated (the correlation
coefficient $r=0.93$, see Fig.~\ref{fig2}) and related by the
regression equation
\begin{equation}\label{eq1}
\phi_{{\rm G}_{\rm max}}=0.66^\circ\cdot {\rm G}_{\rm max}+8.81^\circ \,.
\end{equation}
For the Wolf numbers a similar relationship was found by Waldmeier as early
as in the 1930s \cite{wald39,wald55}. It is tightly connected to the
following two regularities:
(i) Evolution of the mean latitude of sunspots in the 11-year cycle
(``the Sp\"orer law''), as it was demonstrated by Ivanov and Miletsy
in \cite{im14}, can be described by the universal
dependence $\phi(t) = A\cdot \exp \left[-b \cdot (t - T_{\rm min})
\right]$, where $T_{\rm min}$ is the moment of the cycle minimum,
the coefficient $A$ correlates with the amplitude of the cycle and
$b \approx 0.13\,{\rm years}^{-1}$ does not depend upon this
amplitude;
(ii) According to the Waldmeier rule \cite{wald35} maximums in
higher cycles tend to take place earlier than in lower ones.
One can easily deduce from rules (i) and (ii) that the mean latitude
in the maximum must be higher for more powerful cycles, in agreement
with expression \eq{eq1}.
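Explicitly, denoting the epoch of the maximum by $T_{\rm max}$, rule (i) gives
$$\phi_{{\rm G}_{\rm max}} = A\,\exp\left[-b\,(T_{\rm max}-T_{\rm min})\right],$$
and for more powerful cycles $A$ is larger while, by rule (ii), $T_{\rm max}-T_{\rm min}$ is smaller, so both factors increase $\phi_{{\rm G}_{\rm max}}$.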
\begin{figure}
\begin{center}
\includegraphics[width=0.99\textwidth]{fig2.eps}
\caption{The relationship between amplitudes of 11-year cycles ${\rm
G}_{\rm max}$ and the mean latitudes in maximums $\phi_{{\rm G}_{\rm
max}}$ for GC (circles) and the similar relationship for ``raw'' indices
$G_{{\rm max},q}$ in SC (triangles).} \label{fig2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.99\textwidth]{fig3.eps}
\caption{The relationship
between the index G and the latitude dispersion $\sigma_\phi^2$ for
GC (filled circles), entire SC (triangles) and SC from 1847 (empty
circles).}
\label{fig3}
\end{center}
\end{figure}
Besides, in papers \cite{mi09,imn,im11} we found a relationship between the
latitude extension of the sunspot distribution and the level of solar
activity. For GC such relationship between G and the dispersion
$\sigma_\phi^2$ (see Fig.~\ref{fig3}) can be described by the regression
\begin{equation}\label{eq2}
\sigma_\phi^2=3.12 \cdot {\rm G} + 13.3\;{\rm deg}^2
\end{equation}
with correlation $r = 0.90$.
It is important to note that the both mentioned relations are not
destroyed in cases when a part of observations is lost. To show it,
we artificially sparsified GC, randomly selecting from the data
one $q$th part of the observations. Regressions \eq{eq1} and \eq{eq2} in
this case turn to
\begin{equation}\label{eq1s}
\phi_{{\rm G}_{{\rm max},q}} = a_q \cdot q \cdot {\rm G}_{{\rm max},q} + c_q
\end{equation}
and
\begin{equation}\label{eq2s}
\sigma_{\phi,q}^2 = b_q \cdot q \cdot {\rm G}_q + d_q\,,
\end{equation}
where variables with indices $q$ correspond to the sparsified GC and the
additional ``loss factor'' $q$ in ${\rm G}_q$ and ${\rm G}_{{\rm
max},q}$ compensates the loss of $(q-1)/q \cdot 100\%$ data in the
catalog. The behavior of the ratios $a_q/a_1$ and $b_q/b_1$, which
characterize how the relationships between amplitude and latitude
extension of sunspot activity change with growing $q$ as compared with
the same relationships for the full GC (i.e. for $q=1$), is presented in
Fig.~\ref{fig4}. One can see that even for $q = 100$ (i.e. when 99\%
of observations are lost), relative variations of the regression
coefficients are limited by the range 20--25\%.
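The mechanism behind this stability is simple: an unbiased random subsampling reduces the daily number of observed groups by a factor close to $1/q$, while leaving the latitude statistics essentially unchanged. The following toy sketch (with synthetic latitudes, not catalog data) illustrates this point.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# synthetic "daily observations": latitudes (deg) of sunspot groups in one year
lat = rng.normal(loc=15.0, scale=6.0, size=5000)   # toy numbers, not real data
n_days = 365

for q in (1, 2, 10, 100):
    keep = rng.random(lat.size) < 1.0 / q          # keep a 1/q fraction
    sub = lat[keep]
    G_q = sub.size / n_days                        # "raw" index of the sparsified data
    print(f"q={q:4d}  q*G_q={q*G_q:6.1f}  mean|lat|={np.abs(sub).mean():5.2f}"
          f"  var(lat)={sub.var():5.2f}")
\end{verbatim}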
\begin{figure}\begin{center}
\includegraphics[width=0.99\textwidth]{fig4.eps}
\caption{Behavior
of ratios $a_q/a_1$ and $b_q/b_1$, which characterize regressions
\eq{eq1s} and \eq{eq2s}, for the sparsified GC as a function of the loss
factor $q$. The error bars for each point are estimated
from 100 randomly sparsified series.
} \label{fig4}
\end{center}
\end{figure}
Such stability of the found relationships against loss of data allows using them
for control and/or restoration of the normalization of amplitude indices
of solar activity in catalogs of sunspots.
\section{Restoration of normalization in the Schwabe catalog}
Let us demonstrate how relation \eq{eq1} can be used for restoration
of amplitude indices by the example of the Schwabe catalog. To do it
we plot for SC the dependence ${\rm G}_{{\rm max},q} - \phi_{{\rm
G}_{\rm max}}$ (Fig.~\ref{fig2}) and build the corresponding
regression
\begin{equation}\label{eq3}
\phi_{{\rm G}_{\rm max}}
= 1.84^\circ \cdot {\rm G}_{{\rm max},q} +8.49^\circ \qquad (r=0.82)\,,
\end{equation}
where ${\rm G}_{{\rm max},q} $ are maximums of cycles for the
``raw'' indices ${\rm G}_q$ calculated from SC, and the coefficient $q
>1$ corresponds to some ({\em a priori} unknown) loss of data. Comparing \eq{eq1}, \eq{eq1s}
and \eq{eq3}, one can find $q = 1.84^\circ / 0.66^\circ \approx 2.8$
and obtain ``renormed'' indices ${\rm G} = q \cdot {\rm G}_q$ (the
gray curve in the second panel of Fig.~\ref{fig1}) with the
distortion due to the data loss corrected.
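Schematically, the renormalization compares the slope of regression \eq{eq3} with the GC slope in \eq{eq1}. The toy sketch below (synthetic cycle values, not the actual SC data) illustrates the procedure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# reference (GC) relation: phi_max = 0.66 * G_max + 8.81
a_gc, c_gc = 0.66, 8.81

# toy "observed" cycles with an unknown loss factor q_true: the raw index
# G_max_q = G_max / q_true, while the latitudes are unaffected by data loss
q_true = 2.8
G_max = rng.uniform(5.0, 20.0, size=8)              # synthetic cycle amplitudes
phi_max = a_gc * G_max + c_gc + rng.normal(0, 0.5, 8)
G_max_q = G_max / q_true

a_q, c_q = np.polyfit(G_max_q, phi_max, 1)          # regression on the raw data
q_est = a_q / a_gc                                  # compare slopes as in the text
print(f"estimated loss factor q = {q_est:.2f} (true {q_true})")
# renormalized amplitude index: G = q_est * G_max_q
\end{verbatim}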
\begin{figure}
\begin{center}
\includegraphics[width=0.99\textwidth]{fig5.eps}
\caption{The relation between indices W and G for GC (filled
circles), ``raw'' ${\rm G}_q$ (triangles) and renormed G (empty
circles) indices for SC.} \label{fig5}
\end{center}
\end{figure}
In this case we can independently control agreement of the obtained
renormalization using the Wolf numbers W that are known for this
epoch. In Fig.~\ref{fig5} the relations between indices W and G for
GC, ``raw'' (${\rm G}_q$) and renormed (G) indices for SC are shown.
One can see that the normalization, which we obtained without using
the known Wolf numbers, agrees with the latter rather well.
\section{Latitude dispersions in the Schwabe catalog}
Let us study the second relationship (2) between G and $\sigma_\phi^2$
(Fig.~\ref{fig3}), using the renormalized G found for SC. One can
see that the properties of this dependence vary and it agrees with the
relation found for GC only since 1847. If we assume that the
normalization of index G for SC is valid, we conclude that until the
middle of the 1840s the dispersions of the latitude distribution are
anomalously large. One can make the same conclusion from
Fig.~\ref{fig1}, where in the corresponding cycles an unusually
great number of sunspots is situated close to the equator.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\textwidth]{fig6.eps}
\caption{Top panel: the observed latitude dispersions
$\sigma_\phi^2$ for SC (black curves) and the corresponding values
$(\sigma_\phi^{\rm (0)})^2$ obtained by the relation (2) (gray
curves). Bottom panel: overestimatings of the dispersion in SC
$\epsilon = \left( \sigma_\phi /\sigma_\phi^{\rm (0)} \right)^2$ for
$Q=1$ (solid curves) and any Q (dotted curves). } \label{fig6}
\end{center}
\end{figure}
In the top panel of Fig.~\ref{fig6} the latitude dispersions
$\sigma_\phi^2$, which are calculated directly from the data of SC,
and the values $(\sigma_\phi^{\rm (0)})^2$ obtained from G with use
of the relation (2), are plotted. We assume that the relation (2)
remained valid for sunspots in the pre-Greenwich epoch, so the
visible difference in the dependence between activity and latitude
dispersion of GC and SC is caused by some errors in the latter.
This error can be described by the value $\epsilon = \left(
\sigma_\phi /\sigma_\phi^{\rm (0)} \right)^2$ (see the bottom panel
of Fig.~\ref{fig6}). This ratio is large for the cycles~7 and 8
and decreases almost to unity after the middle of the 1840s. Let us note
that if one uses all data of SC (the dotted curves of the bottom
panel) rather than observations with the quality flag $Q=1$ only
(the solid curves), the magnitude of $\epsilon$ becomes notably
larger. Therefore, we assume that SC contains errors in
determination of sunspot latitudes that are larger for drawings of
low quality and decrease towards the end of the period of observations.
It seems probable that the cause of these errors is wrong
determination of the position of the solar equator in drawings
of Schwabe. Apparently, errors in the inclination of
the equator line on the drawings of the Sun relative to its true
angle must lead, in the calculation of the latitude distribution, to
an overestimation of $\sigma_\phi^2$. At the same time, they will not
cause a systematic shift of the mean latitudes $\phi$ because of
equal probabilities of positive and negative errors in the
inclination. Just such a picture one can see in the data of SC.
It is interesting that a similar anomalous number of sunspots on the
equator was found in the observations of Staudacher (see
Fig.~2 in the paper \cite{arlt09}). Arlt assumes that such
phenomenon can be caused by a quadrupole magnetic field dominating
on the Sun in the third quarter of the 18th century. However, the
same picture can be a result of errors in determination of the
equator position on the drawings of the observer.
\section{Conclusions}
Therefore, relationships between characteristics of the latitude
distribution of sunspots and the level of solar activity allow one
to control normalization of activity indices and correct their
distortions caused by a loss of a part of data. In this paper we
discuss two such relationships. The first one \eq{eq1} relates the mean
latitude of sunspots in the maximum of activity $\phi_{{\rm G}_{\rm
max}} $ with the amplitude of the 11-year cycle ${\rm G}_{\rm max}$.
The second relationship (2) associates the dispersion of the mean latitude
$\sigma_\phi^2$ and the current level of activity G.
Apparently, evaluation of the mean latitudes requires less precision
in determination of sunspot coordinates than calculation of the
latitude dispersions. We saw it on the example of SC, the first
part of which, probably, contains large errors. On the other hand,
using the relationship \eq{eq1} requires that a catalog does not contain
dramatic changes in data quality and that the loss factor $q$ does not
vary strongly during an 11-year cycle. At the same time, usage of the
relationship (2) is not limited by the data quality so strongly, since
it operates with yearly indices. In cases when both relationships
can be used, they lead to consistent results, as was shown above by the example of the
second part of SC.
It is interesting that both relationships hold even in the case of loss of
99\% of data (see Fig.~\ref{fig4}), i.e. in situations when a
direct calculation of amplitude indices (like the Wolf number)
becomes very difficult. This fact makes it possible to use the described
latitude-amplitude relations for the analysis and correction of solar
activity indices obtained on the base of pre-Greenwich sunspots
catalogs.
\section{Acknowledgements}
The paper was supported by the RFBR grant No. 13-02-00277 and
programs of the Presidium of the Russian Academy of Sciences Nos.~21
and~22.
\section{Lattice geometries}
\label{app:lattices}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Lattices}
\caption{\label{fig:lattices}The different lattice geometries used for the TFI model. The red boxes indicate the lattice basis cells, the arrows mark the Bravais-vectors. The square and square-octagon lattices obey a $C_4$ rotational symmetry, the triangular, honeycomb and kagome lattices a $C_6$ rotational symmetry.}
\end{figure}
\section{Mapping the perturbed Toric Code onto the transverse field Ising model}
\label{app:tc_to_tfi}
\begin{figure}
\centerline{\includegraphics[width=.6\columnwidth]{TC_Sketch}}
\caption{\label{fig:TC_Sketch}The Toric Code on a torus. Black dots show the positions of the Toric Code variables $\sigma_i^{x,z}$, grey squares the dual lattice for the variables $\mu_p^{x,z}$. $T_{1,2}$ depict a choice of the two incontractible loops winding around the torus. See text for further details. }
\end{figure}
In this section, we demonstrate an exact mapping of the charge-free sector of the Toric Code model perturbed by a longitudinal field to a transverse field Ising model with only even states under spin-inversion. Such a mapping has already been used in previous studies of the Toric Code \cite{Trebst2007, Hamma2008, Carr2010}, here we will additionally show that the different groundstate sectors of the Toric Code result in different boundary conditions of the transverse field Ising model.
The Hamiltonian of the Toric Code in a longitudinal field is given by
\begin{align}
H = -J &\sum_s A_s - J \sum_p B_p - h \sum_i \sigma_i^x\\
A_s = &\prod_{i \in s} \sigma_i^x, \; B_p = \prod_{i \in p} \sigma_i^z
\label{eq:TC}
\end{align}
where the $\sigma_i$ describe spins on the links of a square lattice, $p$ denotes a plaquette and $s$ a star on this lattice.
All $A_s$ and $B_p$ commute with each other and thus the GS of the Hamiltonian for $h=0$ can be found by setting $A_s=1 \; \forall s$ and $B_p=1 \; \forall p$.
On a torus, however, not all of the $A_s$ and $B_p$ are independent, as $\prod_s A_s=1$ and $\prod_p B_p=1$, leading to a 4-fold degenerate groundstate manifold.
This groundstate manifold can be distinguished by the expectation values of the Wilson loop operators $t_{1,2} = \prod_i \sigma_i^x$ where the paths wind around the torus along two non-contractible loops through the centers of the edges of the lattice (e.g. parallel to $T_{1,2}$ in \figref{fig:TC_Sketch}).
To perform the mapping to a transverse field Ising model we first note, that $A_s$ and $t_{1,2}$ are still conserved for $h \neq 0$, when the longitudinal field is turned on. So, we consider the charge-free sector, $A_s=1 \; \forall s$, which describes the low-energy physics even at criticality, and define the new variables
\begin{align}
\label{eq:Bpmap}
\mu_p^z &= B_p \\
\mu_{p,\rightarrow(\uparrow)}^x &= \prod_{i \in c_{p \rightarrow(\uparrow)}} \sigma_i^x
\end{align}
on each site $p$ of the dual lattice (center of plaquette $p$) \cite{Hamma2008}. We choose two incontractible paths $T_{1,2}$ in $\hat{x}(\hat{y})$ direction along the lattice. The path $c_{p \rightarrow(\uparrow)}$ is then a straight path from $T_{2(1)}$ to the site $p$ in $\hat{x}(\hat{y})$-direction along the dual lattice (cf.~\figref{fig:TC_Sketch}).
It is straightforward to show that these variables fulfill the Pauli-Algebra $\{\mu_p^x, \mu_p^z\}=0, (\mu_p^x)^2=1$ and that
\begin{align}
\label{eq:tc_to_tfi1}
\sigma_i^x(\hat{x}) &= \mu_{p(i),\uparrow}^x \mu_{p(i)-\hat{y},\uparrow}^x \\
\label{eq:tc_to_tfi2}
\sigma_i^x(\hat{y}) &= \mu_{p(i),\rightarrow}^x \mu_{p(i)-\hat{x},\rightarrow}^x
\end{align}
where $\sigma_i^x(\hat{x}(\hat{y}))$ describes a Pauli operator on a link in $\hat{x}(\hat{y})$-direction on the lattice.
With this, the TC eventually maps onto the well-known TFI model
\begin{equation}\label{eq:tfi_from_tc}
H_{TFI} = -h \sum_{\langle p, q\rangle} \mu_p^x \mu_q^x - J \sum_p \mu_p^z + const.
\end{equation}
on the dual lattice and $A_s = 1 \; \forall s$, as it was imposed.
The resulting transverse field Ising model \eqref{eq:tfi_from_tc} is invariant under global spin-inversion $\mathcal{I} = \prod_p \mu_p^z$. From \eqref{eq:Bpmap} it immediately follows that
\begin{equation}
\mathcal{I} = \prod_p B_p = 1
\end{equation}
where the last equality is always satisfied on a torus and so the Toric Code maps to an {\em even} transverse field Ising model.
Let us finally apply the mapping on the different groundstate sectors characterized by the eigenvalues of $t_{1,2}$. Using \eqref{eq:tc_to_tfi1} and \eqref{eq:tc_to_tfi2} it follows that
\begin{equation}
t_1 = \prod_{p=0}^{L-1} \mu_{(p,j)}^x \mu_{(p+1,j)}^x = \mu_{(0,j)}^x \mu_{(L,j)}^x
\end{equation}
where the index $(p,j)$ labels the position $p \hat{x} + j \hat{y}$ on the dual lattice and $L$ is the linear extend of the torus. An equivalent relation can be computed for $t_2$. The different groundstate sectors of the Toric Code therefore map onto periodic and antiperiodic boundary conditions of the transverse field Ising model for both directions around the torus.
\section{Finite size gap estimation with QMC}
To estimate the gaps in a larger range of system sizes than reachable by ED, we use a continuous-time world-line Monte-Carlo scheme, supplemented with a cluster update \cite{Bloete2002} to overcome critical slowing down at the quantum phase transition.
For our computations, we used system linear sizes ranging up to $L=30$ (for the simplest lattices). For each system, the average energy was computed from several (from 16 to 256) independent runs of $10^4$ to $10^6$ measurements. Between two measurements, the number of cluster updates $n_c$ was chosen so that $n_c\langle s_c\rangle\gtrsim \beta L^2$, where $\langle s_c\rangle$ is the average cluster size. This leads to an autocorrelation-time of order one Monte Carlo step or less.
The world-line Monte-Carlo allows to extract the excitation spectrum of the system through the evaluation of the imaginary-time spin-spin correlation function.
Indeed, the spin-spin correlation function at momentum $\mathbf{q}$ is given by
\begin{equation}
\begin{array}{ll}
S^{zz}(\mathbf{q},\tau)&=\dfrac{1}{\beta N}\left\langle\displaystyle\sum_{i,j}{\displaystyle\int_0^{\beta}{d\tau'\,e^{-\mathrm{i}{\bf q}\cdot({\bf r}_i-{\bf r}_j)}s_i^z(\tau')s_j^z(\tau'+\tau)}}\right\rangle\\
&\underset{\beta\to\infty,\tau\to\infty}{\approx} e^{-\Delta_\mathbf{q} \tau},
\end{array}
\end{equation}
where $\Delta_\mathbf{q}$ is the excitation gap at momentum $\mathbf{q}$.
\begin{figure}[h!]
\includegraphics[width=.45\textwidth]{Gapestimation.pdf}
\caption{Extraction of the excitation gaps from QMC data (here we display the example of a triangular lattice system of size $L=20$, at the smallest non-zero momentum).
{\sl top panel} Spin-spin correlation function as a function of imaginary time compared to the fitted exponential.
{\sl bottom panel} Relative fitting error on the evaluation of the gap as a function of the minimal imaginary time used in the fit.
}
\label{GapEstimation}
\end{figure}
To optimize the estimation of the gap, we fit an exponential decay to the spin-spin correlation function for imaginary times $\tau>\tau_{\text{min}}$ (Fig.~\ref{GapEstimation}, top panel)
and find the value $\tau_{\text{min}}^*$ that minimizes the relative fitting error on the value of the gap (Fig.~\ref{GapEstimation}, bottom panel).
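A schematic version of this fitting step is sketched below, with a synthetic correlator and SciPy's \texttt{curve\_fit} standing in for the actual fitting routine.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# synthetic imaginary-time correlator with gap Delta = 0.5 plus noise
tau = np.linspace(0.0, 20.0, 200)
S = np.exp(-0.5 * tau) * (1.0 + 0.02 * rng.standard_normal(tau.size))

def decay(t, A, gap):
    return A * np.exp(-gap * t)

best = None
for tau_min in np.arange(1.0, 10.0, 0.5):           # scan the fit window
    m = tau >= tau_min
    p, cov = curve_fit(decay, tau[m], S[m], p0=(1.0, 0.3))
    rel_err = np.sqrt(cov[1, 1]) / p[1]              # relative error on the gap
    if best is None or rel_err < best[0]:
        best = (rel_err, tau_min, p[1])

print(f"tau_min* = {best[1]}, gap = {best[2]:.3f}, rel. error = {best[0]:.1%}")
\end{verbatim}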
\section{Finite-size extrapolation of the energy gaps}
To motivate the dominant $1/N$ scaling used in our extrapolations of the
finite-size energy gaps $L \; \Delta$ we have calculated the dispersion relation
$\epsilon(\mathbf{k})$ at criticality within linear spin-wave theory (LSWT). We
can use this dispersion to compute the finite-size scaling of a $\kappa\neq0$
level within this approach by considering a momentum $\mathbf{k}=\frac{1}{L} (u,
v)$ (on the square lattice) and expanding the resulting expression in powers of
$1/L$
\begin{align}
\epsilon(\mathbf{k}) &= \frac{c}{L} \left(1 - \frac{a}{L^2} +
\mathcal{O}(1/L^4)\right) \\
c &= \sqrt{(u^2+v^2)/2} \\
a &= \frac{u^4+v^4}{24(u^2+v^2)}
\end{align}
The dominant corrections to the spectral levels $L\;\Delta$ within the LSWT
approach are thus $1/L^2 = 1/N$. In \figref{fig:fsslswt} we compare the
finite-size extrapolation of a spectral level in the full LSWT approach and in
the power-series expansion up to $1/N$. The effect of higher order corrections
is already small for systems of intermediate size.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{TFI_FSS_LSWT_Square}
\caption{\label{fig:fsslswt} Finite size scaling of a $\kappa=\sqrt{2}$
spectral level at criticality within a linear spin-wave theory (LSWT) approach. The
blue curve is according to the LSWT dispersion, the yellow curve is a
power-series of this dispersion up to the second non-vanishing order.}
\end{figure}
We have also considered fitting approaches with additional $1/L$ terms for the
extrapolation of the spectral levels to the thermodynamic limit $N\rightarrow
\infty$. In \figref{fig:extrapolations} we compare different fitting approaches
of the QMC gaps $L\;\Delta$ for $\sigma_T$ levels in different $\kappa$ sectors on the
triangular lattice. While a correction in purely $1/L$ (red fits) gives poor
results, correction terms in $1/L^2$ with and without an additional $1/L$ term
fit the data very well leading to similar extrapolated gaps. The fitting
procedure with both terms is, however, much more unstable when very small
systems are included in the fit, often leading to strong dips close to $1/N=0$.
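For illustration, the sketch below compares the three fitting forms on synthetic data (the data, generated with a dominant $1/N$ correction plus noise, stands in for the QMC gaps and is not our actual data).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Synthetic finite-size gaps L*Delta(L); in practice these come from QMC.
L = np.array([8, 10, 12, 14, 16, 20, 24, 28], dtype=float)
rng = np.random.default_rng(0)
g = 2.05 + 3.1 / L**2 + rng.normal(0.0, 1e-3, L.size)

fits = {
    "1/L":       lambda L, g0, a:    g0 + a / L,
    "1/N":       lambda L, g0, b:    g0 + b / L**2,
    "1/L + 1/N": lambda L, g0, a, b: g0 + a / L + b / L**2,
}
for name, f in fits.items():
    popt, pcov = curve_fit(f, L, g)
    print(name, "extrapolated gap:", popt[0], "+/-", np.sqrt(pcov[0, 0]))
\end{verbatim}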
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{TFI_extrapolations_QMC_sigmas_Triangular}
\caption{\label{fig:extrapolations}Comparison of different fitting approaches for the extrapolation of
the finite-size $\sigma_T$ levels in different $\kappa$ sectors. The values
in parentheses give the pure fitting error for the given values.}
\end{figure}
\section{Speed of light from QMC}
For each lattice, in order to extract the speed of light $c$, we proceed as follows.
We first extract with QMC the energies $E^{\sigma_T}_L(\kappa)$ of the lowest $\mathbb{Z}_2$ odd levels at a given system size $L$.
We then extrapolate those to the thermodynamic limit $E^{\sigma_T}(\kappa)$.
We finally fit a line $E^{\sigma_T}(\kappa)=\delta E+c\cdot\kappa$ to this extrapolated data, in the interval $\left[\kappa_{\min},\kappa_{\max}\right]$~\cite{Sen2015}.
Since we expect the levels at small momenta to be affected by the periodic boundary conditions, we take $\kappa_{\min}>0$. For large momenta, the finite-size curvature effects render the
thermodynamic-limit extrapolation ill-defined, and therefore one has to introduce an ultraviolet cutoff $\kappa_{\max}$. There is ambiguity in the choice of $\kappa_{\min}$ and $\kappa_{\max}$,
but for all {\it reasonable} choices
({\sl i.e.} so that enough points lie within the linear regime in this interval),
one gets a value for $c$ with a fitting asymptotic standard error of less than 0.5\%. However, the value of $c$ thus obtained varies by about 1\% (2\%) for the square and triangular lattices
(square-octagon, honeycomb and kagome lattices) across the various choices of fitting intervals. This leads to the speed of light estimates given in Tab.~\ref{tab:speedoflight}.
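The following sketch illustrates the window dependence of the linear fit on fictitious data (the dispersion used to generate the points, including its curvature, is invented for the example and does not reproduce our QMC results).
\begin{verbatim}
import numpy as np

kappa = np.linspace(0.4, 2.4, 11)
E = 0.10 + 2.05 * kappa - 0.02 * kappa**3   # fictitious E(kappa) with curvature

estimates = []
for kmin in (0.4, 0.6, 0.8):
    for kmax in (1.6, 1.8, 2.0):
        mask = (kappa >= kmin) & (kappa <= kmax)
        if mask.sum() < 4:
            continue
        (c, dE), cov = np.polyfit(kappa[mask], E[mask], 1, cov=True)
        estimates.append(c)
print("c =", np.mean(estimates), "spread =", np.ptp(estimates))
\end{verbatim}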
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
{\bf Lattice} & $\mathbf{c/J}$ & (prev. works) \\\hline\hline
Square & 3.323$\pm$0.033 & 3.01$\pm$0.09\cite{Hamer2000}\\\hline
Square-Octagon & 5.126$\pm$0.103& \\\hline
Triangular & 2.047$\pm$0.020& \\\hline
Honeycomb & 2.923$\pm$0.058& \\\hline
Kagome & 2.013 $\pm$ 0.040&\\\hline
\end{tabular}
\caption{\label{tab:speedoflight}Speed of light for each lattice geometry, from QMC.}
\end{table}
\section{Spectrum of the Wilson-Fisher CFT on a torus: $\epsilon$-expansion}
\label{app:epsilon}
In this appendix we elaborate on the calculation of the spectrum from the $\epsilon$ expansion. A more detailed exposition which generalizes to the O(N) model and includes deviations from the critical point will be presented in a future publication. The Hamiltonian is
\begin{equation}
H = \int d^dx \left[\frac{1}{2}\Pi^2 + \frac{1}{2}( \nabla \phi)^2 + \frac{s}{2} \phi^2 + \frac{u}{4!} \phi^4 \right] \label{onhapp}
\end{equation}
where we will always tune to the critical point, $s = s_c$, $u = u^\ast$, in final expressions. The system is taken to be on $d/2$ copies of a 2-torus with modular parameter $\tau = \tau_1 + i \tau_2$ and area $\mathcal{A} = \mathrm{Im}\left( \omega_2 \omega_1^{\ast} \right) = \tau_2 L^2$. We will use complex coordinates, $x = x_1 + i x_2$, for each copy of the torus (see Fig.~1 in the main text).
As discussed in the main text, a gapless field theory on a finite volume leads to incurable infrared divergences due to the zero-momentum component of the fields. The solution to this problem is to split the fields into a zero-mode part and a finite momentum part,
\begin{eqnarray}
\phi(x) &=& \mathcal{A}^{\frac{1-d}{4}}\varphi + \psi(x) \nonumber \\
\Pi(x) &=& \mathcal{A}^{-\frac{d+1}{4}} \pi + p(x)
\end{eqnarray}
where the zero mode terms have been normalized such that they are dimensionless and satisfy the commutation relation $[\varphi,\pi] = i$. The fields $\psi(x)$ and $p(x)$ only have finite-momentum modes in their Fourier series:
\begin{eqnarray}
\psi(x) &=& \frac{1}{\mathcal{A}^{d/4}} \sum_{k \neq 0} \frac{e^{i k \cdot x}}{\sqrt{2 |k|}}\left( b(k) + b^{\dagger}(-k) \right) \nonumber \\
p(x) &=& -\frac{i}{\mathcal{A}^{d/4}} \sum_{k \neq 0} \sqrt{\frac{|k|}{2}}e^{i k \cdot x}\left( b(k) - b^{\dagger}(-k) \right)
\end{eqnarray}
The momentum sums are over $d/2$ copies of the complex dual lattice to the torus, and the dot product is given by $k \cdot x = \mathrm{Re}\left( k x^{\ast} \right)$. With this decomposition, the Hamiltonian can be split up as
\begin{equation}
H = H_0 + V
\end{equation}
with
\begin{eqnarray}
H_0 &=& \sum_{k \neq 0} |k| b^{\dagger}(k)b(k) \nonumber \\
V &=& \frac{1}{\sqrt{\mathcal{A}}}\left[ \frac{1}{2}\pi^2 + \frac{1}{2} \mathcal{A} s \varphi^2 + \frac{u \mathcal{A}^{\epsilon/2}}{4!} \varphi^4 \right] \nonumber \\
&+& \frac{u \mathcal{A}^{\epsilon/2}}{\sqrt{\mathcal{A}}} \frac{\varphi^2}{8} \sum_{\mathbf{k} \neq 0} \frac{\chi(-k)\chi(k)}{\mathcal{A}^{1/2} |k|} \nonumber \\
&+& \frac{u \mathcal{A}^{\epsilon/2}}{\sqrt{\mathcal{A}}} \frac{\varphi}{6} \sum_{k,k' \neq 0} \frac{\chi(k) \chi(k') \chi(-k - k')}{(8 \mathcal{A}^{3/2} \omega_{k}\omega_{k'}\omega_{k + k'})^{1/2}} \nonumber \\
&+& \frac{u \mathcal{A}^{\epsilon/2}}{\sqrt{\mathcal{A}}} \frac{1}{4!} \sum_{k_i \neq 0 \atop k_1 + k_2 = k_3 + k_4} \frac{\chi(k_1) \chi(k_2) \chi(k_3) \chi(k_4)}{4 (\mathcal{A}^2 |k_1||k_2||k_3||k_4|)^{1/2}} \qquad
\end{eqnarray}
where we define $\chi(k) \equiv b(k) + b^{\dagger}(-k)$. Here, $H_0$ describes a Fock spectrum of finite momentum states, and the interaction Hamiltonian $V$ contains all terms involving the zero mode and nonlinearities. In this paper, we will always set the ground state energy to zero; a future publication will discuss the universal dependence of the ground state energy on $L$ and on relevant perturbations $s - s_c$.
At zeroth order, our states are given by finite momentum Fock states multiplied by arbitrary functionals of the zero mode,
\begin{equation}
H_0 \Psi[\varphi]|k, k', \cdots \rangle = \left( |k| + |k'| + \cdots \right)\Psi[\varphi]|k, k', \cdots \rangle
\end{equation}
Since we can multiply by any normalizable functional $\Psi[\varphi]$, these states are infinitely degenerate. This degeneracy is broken in perturbation theory. We use a perturbation method due to C. Bloch which is well-suited to degenerate problems \cite{B58}. For a review of this method and its relation to other effective Hamiltonian methods, see Ref.~\cite{K74}. The main idea is to consider each degenerate subspace separately, but construct an effective Hamiltonian within each subspace whose eigenvalues give the exact energy. So if we consider a degenerate subspace of $H_0$,
\begin{equation}
H_0 |\alpha_0 \rangle = \epsilon_0 | \alpha_0 \rangle
\end{equation}
this perturbation method constructs a new operator $H_{eff}$ which acts on this subspace but gives the exact energy levels,
\begin{equation}
H_{eff}|\alpha_0 \rangle = E_{\alpha} |\alpha_0 \rangle \label{effdef}
\end{equation}
where $E_{\alpha} = \epsilon_0 + \mathcal{O}(V)$.
The expression for $H_{eff}$ can be obtained perturbatively in $V$, which was the main result of Bloch's work \cite{B58}. To leading order, the effective Hamiltonian for a given degenerate subspace is given by
\begin{equation}
H_{eff} = \epsilon_0 P_0 + P_0 V P_0 + P_0 V \frac{1 - P_0}{\epsilon_0 - H_0} V P_0 + \cdots
\label{eqn:effham}
\end{equation}
where $P_0$ is the projection operator onto the degenerate subspace of interest. At this order in perturbation theory the effective Hamiltonian is hermitian, although to higher orders one needs to make a unitary transformation to ensure hermiticity \cite{B58,K74}.
As a definite example, we give the effective Hamiltonian acting on the Fock vacuum, $\Psi[\varphi] |0 \rangle$. From Eq.~(\ref{eqn:effham}), the effective Hamiltonian takes the form
\begin{equation}
H_{eff,k=0} = h_{k=0} | 0 \rangle \langle 0 | \nonumber
\end{equation}
\begin{equation}
h_{k = 0} = \langle 0 | V | 0 \rangle - \langle 0 | V \left( \frac{1 - | 0 \rangle \langle 0 |}{H_0} \right) V | 0 \rangle + \cdots
\label{eqn:hk0}
\end{equation}
When this acts on the ground state manifold, it generates a Schr\"odinger equation for the zero-mode functional,
\begin{equation}
h_{k = 0} \Psi[\varphi] = E \Psi[\varphi]
\end{equation}
We now obtain $h_{k=0}$ from evaluating the expectation values in Eq.~(\ref{eqn:hk0}). This involves UV divergent sums which require renormalization, but the renormalization constants and RG equations will be identical to the infinite volume case as a consequence of finite-size scaling. We renormalize the theory using dimensional regularization with minimal subtraction as detailed in Ref.~\cite{ZJ02}, and then set the couplings to their fixed point values. The analytic continuation of divergent loop sums to arbitrary dimension can be found, for example, in Refs.~\cite{EBJZ85,ZJ02,WS16}.
The one-loop result for $h_{k=0}$ can be written
\begin{equation}
h(\varphi) = \frac{1}{\sqrt{\mathcal{A}}}\left(\frac{1}{2}\pi^2 + \frac{R}{2} \varphi^2 + \frac{U}{4!} \varphi^4 \right),
\end{equation}
where $R$ and $U$ are dimensionless universal quantities which form a power series in $\epsilon$. These constants will also depend on $\tau$, and our expression for them is in terms of integrals over Riemann theta functions which need to be evaluated numerically. As we will justify below, we need to calculate $R$ to order $\epsilon$ and $U$ to order $\epsilon^2$ to obtain the spectrum to one-loop. Given the commutation relation $[\varphi,\pi] = i$, the momentum acts on the zero-mode functional as
\begin{equation}
\pi^2 \Psi[\varphi] = - \frac{d^2}{d \varphi^2} \Psi[\varphi]
\end{equation}
Therefore, at leading order in the $\epsilon$-expansion, the low-energy spectrum of the Ising model on the torus maps onto the spectrum of a one-dimensional quantum anharmonic oscillator with universal coefficients.
In spite of the effective Hamiltonian being an ordinary series in $\epsilon$, the oscillator is strongly coupled for small $\epsilon$. This can be seen by performing the canonical transformation $\varphi \rightarrow U^{-1/6}\varphi$ and $\pi \rightarrow U^{1/6}\pi$, which takes
\begin{eqnarray}
&& \left( \frac{\pi^2}{2} + \frac{R}{2} \varphi^2 + \frac{U}{4!} \varphi^4 \right) \nonumber \\
&& \qquad \qquad \longrightarrow U^{1/3} \left( \frac{\pi^2}{2} + \frac{RU^{-2/3}}{2} \varphi^2 + \frac{1}{4!} \varphi^4 \right)
\end{eqnarray}
Since both $R$ and $U$ are $\mathcal{O}(\epsilon)$, this latter form implies that the energy eigenvalues are an expansion in $\epsilon^{1/3}$, and the coefficients of the expansion are given by a pure quartic oscillator perturbed by a quadratic term. The latter problem has been widely studied in the literature, and the coefficients of this expansion to high order can be found in Ref. \cite{SCZ99}.
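For given values of $R$ and $U$ the zero-mode spectrum can also be obtained directly by numerical diagonalization in a truncated harmonic-oscillator basis. The sketch below does this for placeholder couplings (the actual $\tau$-dependent values of $R$ and $U$ are not reproduced here); the returned levels are the eigenvalues of the bracketed operator, i.e., energies in units of $1/\sqrt{\mathcal{A}}$.
\begin{verbatim}
import numpy as np

def zero_mode_levels(R, U, nmax=200):
    """Eigenvalues of pi^2/2 + R*phi^2/2 + U*phi^4/4! in a truncated
    harmonic-oscillator basis, with phi = (a + a^dag)/sqrt(2)."""
    n = np.arange(nmax)
    a = np.diag(np.sqrt(n[1:]), 1)                 # annihilation operator
    phi = (a + a.T) / np.sqrt(2.0)
    pi2 = -0.5 * (a.T - a) @ (a.T - a)             # pi^2, pi = i(a^dag - a)/sqrt(2)
    H = 0.5 * pi2 + 0.5 * R * phi @ phi \
        + (U / 24.0) * np.linalg.matrix_power(phi, 4)
    return np.linalg.eigvalsh(H)

levels = zero_mode_levels(R=0.3, U=1.0)            # placeholder couplings
print(levels[:4] - levels[0])                      # low-lying zero-mode gaps
\end{verbatim}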
The above form for the effective Hamiltonian shows that the $\epsilon$ expansion on the torus results in a reordering of the perturbation expansion, since powers of $\varphi$ effectively carry a factor of $\epsilon^{-1/6}$. This reordering is what justifies our calculating $R$ to order $\epsilon$ and $U$ to order $\epsilon^2$ above. A detailed analysis shows that the one-loop expansion of the energy levels is accurate to order $\epsilon^{4/3}$, since the leading two-loop correction to the energy is of order $\epsilon^{5/3}$. This leading two-loop correction is to the coefficient $R$, and we also need to add terms of the form $p^2 \varphi^4 + \mathrm{c.c.}$ and $\varphi^6$ to the effective Hamiltonian at the same order.
For the finite momentum states, the effective Hamiltonian also takes the form of a strongly-coupled oscillator, but the coefficients will depend on the momentum. In addition, the effective Hamiltonian will couple different Fock states with the same energy and momentum whenever possible, leading to a multi-dimensional effective Hamiltonian that mixes Fock states; in these cases the effect of the off-diagonal terms was computed numerically.
Finally, we note that since the anti-periodic sectors in the Ising$^{\ast}$ transition do not have a zero mode, the calculation is straightforward. The energy levels are a normal expansion in $\epsilon$, and we simply need to compute the $\mathcal{O}(\epsilon)$ correction to the energy using first-order perturbation theory.
\clearpage
\FloatBarrier
\widetext
\section{Complete low-energy spectrum for Ising CFT with modular parameter $\tau=i$}
\begin{table*}[h!]
\centering
\begin{tabular}{|c||c|c|c|c|c|c||c|}
\hline
$\tau=i$ & $\kappa=0$ & $\kappa=1 \;(\times 4)$ & $\kappa= \sqrt{2} \;(\times 4)$ & $\kappa=2 \;(\times 4)$ & $\kappa=\sqrt{5} \;(\times 8)$ & $\kappa=2\sqrt{2} \;(\times 4)$ & Denomination\\ \hline
& 0 & & & & & & 1\\
& \cellcolor{lightgray} \textcolor{blue}{1.28} & & & & & & $\sigma_T$\\
& 4.71 & & & & & & $\varepsilon_T$\\
& & \cellcolor{lightgray} \textcolor{blue}{6.79} & & & & & $\sigma_T+\Delta \kappa$ \\
& \cellcolor{lightgray} 8.77 & & & & & & $\sigma^{\prime}_T$ \\
& & & \cellcolor{lightgray} \textcolor{blue}{9.44} & & & & \\
& & 9.52 & & & & & $\varepsilon_T+\Delta \kappa$ \\
& & & 11.6 & & & & \\
& 12.9 & & & & & & \\
& & & & \cellcolor{lightgray} \textcolor{blue}{13.15} & & & \\
& & & & 13.5 & & & \\
& & \cellcolor{lightgray} 13.6 & & & & & \\
& 14.1 & & & & & & \\
& 14.4 & & & & & & \\
$\tilde{E}/c$ & & & 14.5 & & & & \\
& & & & & \cellcolor{lightgray} \textcolor{blue}{14.67}& & \\
& & & & 14.9 & & & \\
& & & & & 15.4 & & \\
& & & \cellcolor{lightgray} 15.6& & & & \\
& & 16.0 & & & & & \\
& & 17.3 & & & & & \\
& & & & \cellcolor{lightgray} 17.6 & & & \\
& \cellcolor{lightgray} 17.7 & & & & & & \\
& 17.9 & & & & & & \\
& & & \cellcolor{lightgray} 18.4 & & & & \\
& & & & & & \cellcolor{lightgray} \textcolor{blue}{18.46} & \\
\hline
\end{tabular}
\caption{\label{tab:squareoverc}Low-energy spectrum $\tilde{E}/c=(E-E_0)\sqrt{N}/c$ for the Ising QFT with $\tau=i$ obtained from ED/QMC on the square lattice. $c$ denotes the speed of light~(see Tab.~\ref{tab:speedoflight}). Unshaded (shaded) cells are even (odd) under spin-inversion. Blue-colored values are obtained from QMC+ED, the other values from ED alone. The degeneracy of the finite-momentum levels is given in brackets; the $\kappa=0$ levels are non-degenerate, although some very close levels may actually be degenerate in the thermodynamic limit.
The given values are obtained by linear fits of the finite-size levels $\tilde{E}_N/c$ as a function of $1/N$ and should be accurate up to variations of the last given digit. Obtaining more precise values is a non-trivial task as the values from QMC are the result of a series of fits and ED data shows larger finite-size effects for higher levels in the spectrum and for larger momentum $\kappa$, where the available finite-size momenta already lie within the non-linear regime of the dispersion relation close to the Brillouin zone boundary.
The last column shows our denomination of the levels as it was used in the main text.
See Tab.~\ref{tab:epsilondataoverc} for a comparison with $\epsilon$-expansion results.
}
\end{table*}
\begin{table*}[h!]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|c|c||c|}
\hline
$\tau=i$ & $\kappa=0$ & $\kappa_2=0$ & $\kappa=1$ & $\kappa= \sqrt{2}$ & $\kappa_2= \sqrt{2}$ & $\kappa=2$ & $\kappa=\sqrt{5}$ & $\kappa=2\sqrt{2}$ & Denomination\\ \hline
& 0 & & & & & & & & 1\\
& \cellcolor{lightgray} 1.825 & & & & & & & & $\sigma_T$\\
& 5.16 & & & & & & & & $\varepsilon_T$\\
& & & \cellcolor{lightgray} 6.40 & & & & & & $\sigma_T+\Delta \kappa$ \\
& & & 8.88 & & & & & & $\varepsilon_T+\Delta \kappa$ \\
& & & & \cellcolor{lightgray} 9.01 & & & & & \\
& \cellcolor{lightgray} 9.02 & & & & & & & & $\sigma^{\prime}_T$ \\
& & & & 11.35 & & & & & \\
& & & \cellcolor{lightgray} 12.41 & & & & & & \\
& & & & & & \cellcolor{lightgray} 12.60 & & & \\
& & & & & & 12.72 & & & \\
$\tilde{E}/c$ & & 12.95 & & & 12.95 & & & & \\
& 13.39 & & & & & & & & \\
& & & & & & & \cellcolor{lightgray} 14.15 & & \\
& & & & \cellcolor{lightgray} 14.867 & & & & & \\
& & & & & & 14.873 & & & \\
& & & 15.51 & & & & 15.51 & & \\
& & & & & & \cellcolor{lightgray} 15.83 & & & \\
& & \cellcolor{lightgray} 16.00 & & & \cellcolor{lightgray} 16.00 & & & & \\
& & & & & & & 16.32 & & \\
& & & 16.50 & & & & & & \\
& & & & & & & & \cellcolor{lightgray} 17.78 & \\
\hline
\end{tabular}
\caption{\label{tab:epsilondataoverc}Low-energy spectrum for the Ising QFT with modular parameter $\tau=i$ from $\epsilon$-expansion. The notation $\kappa_2$ indicates "two-particle" states (but this distinction loses meaning for higher $\kappa$).}
\end{table*}
\FloatBarrier
\section{Complete low-energy spectrum for Ising CFT with modular parameter $\tau=\frac{1}{2} + \frac{\sqrt{3}}{2} i$}
\begin{table*}[h!]
\centering
\begin{tabular}{|c||c|c|c|c|c||c|}
\hline
$\tau=\frac{1}{2} + \frac{\sqrt{3}}{2} i$ & $\kappa=0$ & $\kappa=1 \;(\times 6)$ & $\kappa= \sqrt{3} \;(\times 6)$ & $\kappa=2 \;(\times 6)$ & $\kappa=\sqrt{7} \;(\times 12)$ & Denomination\\ \hline
& 0 & & & & & 1\\
& \cellcolor{lightgray} \textcolor{blue}{1.35} & & & & & $\sigma_T$\\
& 5.03 & & & & & $\varepsilon_T$\\
& & \cellcolor{lightgray} \textcolor{blue}{7.77} & & & & $\sigma_T+\Delta \kappa$ \\
& \cellcolor{lightgray} 9.33 & & & & & $\sigma^{\prime}_T$ \\
& & 10.53 & & & & $\varepsilon_T+\Delta \kappa$\\
& & & \cellcolor{lightgray} \textcolor{blue}{13.14} & & & \\
& 14.5 & & & & & \\
& & & 14.6 & & & \\
& & \cellcolor{lightgray} 14.8 & & & &\\
& 15.1 $(\times 2)$ & & & & & \\
& & & & \cellcolor{lightgray} \textcolor{blue}{15.10}& & \\
& & & & 15.9 & & \\
& & 16.1 & & & & \\
& & & & 16.2 & & \\
& & & 16.7 & & & \\
$\tilde{E}/c$ & 18.3 & & & & & \\
& & & \cellcolor{lightgray} 18.6 & & &\\
& & & & & \cellcolor{lightgray} \textcolor{blue}{19.82} & \\
& \cellcolor{lightgray} 19.8 $(\times 2)$ & & & & & \\
\hline
\end{tabular}
\caption{\label{tab:triangularoverc}Low-energy spectrum for the Ising QFT with $\tau=\frac{1}{2} + \frac{\sqrt{3}}{2} i$ obtained from ED/QMC on the triangular lattice. See Tab.~\ref{tab:squareoverc} for further details and Tab.~\ref{tab:epsilondataoverc_hex} for a comparison with $\epsilon$-expansion results.
}
\end{table*}
\begin{table*}[h!]
\centering
\begin{tabular}{|c||c|c|c|c|c||c|}
\hline
$\tau=\frac{1}{2} + \frac{\sqrt{3}}{2} i$ & $\kappa=0$ & $\kappa=1 \;(\times 6)$ & $\kappa= \sqrt{3} \;(\times 6)$ & $\kappa=2 \;(\times 6)$ & $\kappa=\sqrt{7} \;(\times 12)$ & Denomination\\ \hline
& 0 & & & & & 1\\
& \cellcolor{lightgray} 1.96 & & & & & $\sigma_T$\\
& 5.55 & & & & & $\varepsilon_T$\\
& & \cellcolor{lightgray} 7.38 & & & & $\sigma_T+\Delta \kappa$ \\
& \cellcolor{lightgray} 9.72 & & & & & $\sigma^{\prime}_T$ \\
& & 10.02 & & & & $\varepsilon_T+\Delta \kappa$\\
& & & \cellcolor{lightgray} 12.68 & & & \\
& & \cellcolor{lightgray} 13.83 & & & &\\
& 14.31 & & & & & \\
& 14.44 & & & & & \\
& & & & \cellcolor{lightgray} 14.54 & & \\
$\tilde{E}/c$ & & & & 14.67 & & \\
& & 14.90 & 14.90 & & & \\
& & & 15.08 & & & \\
& 15.23 & & & & & \\
& \cellcolor{lightgray} 16.66 & & & & & \\
& & & & 16.96 & & \\
& \cellcolor{lightgray} 17.59 & & & & & \\
& & & & \cellcolor{lightgray} 17.97 & & \\
& & \cellcolor{lightgray} 18.13 & \cellcolor{lightgray} 18.13 & & & \\
& & 18.24 & & & & \\
& & & \cellcolor{lightgray} 18.84 & & & \\
& & & & & \cellcolor{lightgray} 19.28 & \\
& \cellcolor{lightgray} 19.61 & & & & & \\
\hline
\end{tabular}
\caption{\label{tab:epsilondataoverc_hex}Low-energy spectrum for the Ising QFT with modular parameter $\tau=\frac{1}{2} + \frac{\sqrt{3}}{2}i$ from $\epsilon$-expansion.}
\end{table*}
\FloatBarrier
\section{Complete low-energy spectrum for Ising* CFT with $\tau=i$}
\begin{table*}[h!]
{%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c||c|}
\hline
& \mc{3}{c|}{(P,P)} & \mc{3}{c|}{(P,A)/(A,P)} & \mc{3}{c||}{(A,A)} & \\
\hline
$\tau=i$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & Denomination\\
\hline
& 0 & & & & & & & & & 1 \\
& & & & 0.45 & & & & & & $1'_T$ \\
& & & & & & & 0.62 & & & $1''_T$ \\
& 4.71 & & & & & & & & & $\varepsilon_T$ \\
& & & & & 7.3 & & & & & \\
& & & & 8.5 & & & & & & \\
$\tilde{E}/c$& & & & & & & 9.0 & & & \\
& & 9.52 & & & & & & & & \\
& & & & & & & & & 9.6 & \\
& & & & & & & & 10.3 & & \\
& & & & & 10.4 & & & & & \\
& & & & & & 11.4 & & & & \\
& & & 11.6 & & & & & & & \\
\hline
\end{tabular}
\caption{Low-energy spectrum for the Ising* CFT with $\tau=i$ obtained from ED on the square lattice. The four distinct topological sectors are indicated by the corresponding boundary conditions (P,A) etc. in the two directions around the torus, where A(P) denotes (anti-)periodic boundary conditions~(See main text for further details).
The four lowest-lying levels constitute the topological four-fold degenerate groundstate manifold in the Toric Code phase and are still remarkably low in energy at criticality. See Tab.~\ref{tab:isingstarepsilon} for a comparison with $\epsilon$-expansion results and Tab.~\ref{tab:squareoverc} for further details.}
\end{center}
}%
\end{table*}
\begin{table*}[h!]
{%
\newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}}
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c||c|}
\hline
& \mc{3}{c|}{(P,P)} & \mc{3}{c|}{(P,A)/(A,P)} & \mc{3}{c||}{(A,A)} &\\
\hline
$\tau=i$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & $\kappa=0$ & $\kappa=1$ & $\kappa=\sqrt{2}$ & Denomination \\
\hline
& 0 & & & & & & & & & 1 \\
& & & & 0.19 & & & & & & $1'_T$ \\
& & & & & & & 0.35 & & & $1''_T$ \\
& 5.16 & & & & & & & & & $\varepsilon_T$ \\
$\tilde{E}/c$ & & & & & 6.88 & & & & & \\
& & & & 7.54 & & & & & & \\
& & 8.88 & & & & & & & & \\
& & & & & & & & & 9.18 & \\
& & & & & & & 9.75 & 9.75 & & \\
& & & 11.35 & & & & & & & \\
\hline
\end{tabular}
\caption{\label{tab:isingstarepsilon}Low-energy spectrum for the Ising* CFT with $\tau=i$ obtained from $\epsilon$-expansion.}
\end{center}
}%
\end{table*}
\end{document}
\section{Introduction}
In this paper we consider the configuration model $\mathrm{CM}_n(\boldsymbol{d})$ on $n$ vertices with a prescribed degree sequence $\boldsymbol d = (d_1, d_2,...,d_n)$. We investigate the condition on $\boldsymbol d$ for $\mathrm{CM}_n(\boldsymbol{d})$ to be with high probability connected or disconnected in the limit as $n \to \infty$, and we analyse the behaviour of the model in the critical window for connectivity (i.e., when the asymptotic probability of producing a connected graph is in the interval $(0,1)$). Given a vertex $v\in[n]=\{1,2,...,n\},$ we call $d_v$ its degree. The configuration model is constructed by assigning $d_v$ half-edges to each vertex $v$, after which the half-edges are paired randomly: first we pick two half-edges at random and create an edge out of them, then we pick two half-edges at random from the set of remaining half-edges and pair them into an edge, etc. We assume the total degree $\sum_{v\in [n]} d_v$ to be even. The construction can give rise to self-loops and multiple edges between vertices, but these imperfections are relatively rare when $n$ is large; see \cite{Hofs09, Jan09, Jan14}.
We define the random variable $D_n$ as the degree of a vertex chosen uniformly at random from the vertex set $[n]$. We call $\mathscr{N}_i$ the set of all vertices of degree $i$ and $n_i$ its cardinality.
The configuration model is known to have a phase transition for the existence of a giant component with critical point at
\[
\nu_n =\frac{\mathbb{E} [D_n(D_n-1)]}{\mathbb{E}[D_n]}=1
\]
(see e.g., \cite{MolRee95} or \cite{JanLuc09}). When $\nu_n\rightarrow \nu>1$, there is a (unique) giant component $\mathscr{C}_{\rm max}$ containing a positive proportion of the vertices, while when $\nu_n\rightarrow \nu\leq 1$, the maximal connected component contains a vanishing proportion of the vertices. Assuming that the second moment of $D_n$ remains uniformly bounded, the subcritical behaviour was analysed by Janson in \cite{Jan08}.
In this paper, we focus on the {\em connectivity transition} of the configuration model. Let us first describe the history of this problem. Wormald \cite{Wor81} showed that for $k \geq 3$ a random $k$-regular graph on $n$ vertices is with high probability $k$-connected as $n \to \infty$ (see also \cite{Boll01}). Tomasz \L uczak \cite{Luc92} proved that also if the graph is not regular, but $d_v \geq k$ for every $v \in [n]$, then $\mathrm{CM}_n(\boldsymbol{d})$ is with high probability $k$-connected, and found the asymptotic probability of obtaining a connected graph when $d_v \geq 2$ and the graph is simple. Actually \L uczak's model was defined in a different way from the configuration model and does not allow for vertices of degree $1$, but the results could easily be adapted to the configuration model. We refine his results, including the case in which there are vertices of degree $1$, and we give more precise asymptotics on the size of the complement of the maximal connected component $[n] \setminus \mathscr{C}_{\rm max}$. We start by introducing some notation.
\medskip
\paragraph{\bf Notation.}
All limits in this paper are taken as $n$ tends to infinity unless stated otherwise.
A sequence of events $(\mathcal{A}_n)_{n \geq 1}$ happens \emph{with high probability (whp{})} if $\P(\mathcal{A}_n) \to 1$.
For random variables $(X_n)_{n \geq 1}, X$, we write $X_n \overset{d}{\rightarrow} X$ and $X_n \overset{\sss\prob}{\rightarrow} X$ to denote convergence in distribution and in probability, respectively.
For real-valued sequences $(a_n)_{n \geq 1}$, $(b_n)_{n \geq 1}$, we write $a_n=O(b_n)$ if the sequence $(a_n/b_n)_{n \geq 1}$ is bounded; $a_n=o(b_n)$ if $a_n/b_n \to 0$; $a_n =\Theta(b_n)$ if the sequences $(a_n/b_n)_{n \geq 1}$ and $(b_n/a_n)_{n \geq 1}$ are both bounded; and $a_n \sim b_n$ if $a_n/b_n \to 1$.
Similarly, for sequences $(X_n)_{n \geq 1}$, $(Y_n)_{n \geq 1}$ of random variables, we write $X_n=O_{\sss \prob}(Y_n)$ if the sequence $(X_n/Y_n)_{n \geq 1}$ is tight; and $X_n=o_{\sss \prob}(Y_n)$ if $X_n/ Y_n \overset{\sss\prob}{\rightarrow} 0$. Moreover, ${\sf Poi}(\lambda)$ always denotes a Poisson distributed random variable with mean $\lambda$ and ${\sf Bin} (n,p)$ denotes a random variable with binomial distribution with parameters $n$ and $p$.
\section{Main Results}
We start by defining the conditions for $\mathrm{CM}_n(\boldsymbol{d})$ to be in the connectivity critical window. We define the random variable $D_n$ as the degree of a vertex chosen uniformly at random in $[n]$. We assume these conditions to hold throughout this paper:
\begin{cond}[Critical window for connectivity]
\label{cond-crit-conn}
We define a sequence $\mathrm{CM}_n(\boldsymbol{d})$ to be in the \emph{critical window for connectivity} when the following conditions are satisfied:
\begin{enumerate}
\item There exists a limiting degree variable $D$ such that $D_n \overset{d}{\to} D$;
\item $n_0 =0$;
\item $\lim_{n \to \infty} n_1/\sqrt{n}= \rho_1 \in [0 , \infty)$;
\item $\lim_{n \to \infty} n_2/n = p_2 \in [0,1)$;
\item $\lim_{n \to \infty} \mathbb{E} [D_n]= d < \infty$.
\end{enumerate}
\end{cond}
\medskip
Under these conditions, we prove our main theorem. In its statement, we write $\mathscr{C}_{\rm max}$ for the maximal connected component in $\mathrm{CM}_n(\boldsymbol{d})$:
\begin{thm}[Connectivity threshold for the configuration model]\label{main}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the critical window for connectivity as described in Condition \ref{cond-crit-conn}. Then
\begin{equation}\label{conn}
\lim_{n \to \infty} \P (\mathrm{CM}_n(\boldsymbol{d}) \text {\emph{ is connected}}) =\left(\dfrac{d-2p_2}{d}\right)^{1/2} \exp \left\lbrace - \dfrac{\rho_1 ^2 }{2(d -2p_2)} \right\rbrace.
\end{equation}
Moreover,
\begin{equation}\label{dist}
n-|\mathscr{C}_{\rm max} | \overset{d}{\to} X,
\end{equation}
where $X = \sum_k k(C_k + L_k)$, and $\big((C_k , L_k)\big)_{k\geq 1}$ are independent random variables such that
\begin{equation*}
L_k \overset{d}{=} {\sf Poi} \left( \dfrac{\rho_1^{2} (2p_2)^{k-2}}{2d^{k-1}}\right), \quad C_k \overset{d}{=}
{\sf Poi} \left( \dfrac{(2p_2)^{k}}{2k d^{k}}\right).
\end{equation*}
Finally,
\begin{equation}
\lim_{n \to \infty} \mathbb{E} [n - |\mathscr{C}_{\rm max}|] = \dfrac{\rho_1^2(2d-p_2)}{2(d-p_2)^2}+\dfrac{p_2}{d-2p_2}.
\end{equation}
\end{thm}
\medskip
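As a sanity check of \eqref{conn}, the pairing can be simulated directly. The sketch below does this for one illustrative degree sequence in the critical window (the parameters and the number of repetitions are chosen only for illustration); for these parameters the right-hand side of \eqref{conn} equals $\sqrt{2.1/2.7}\,\mathrm{e}^{-1/4.2}\approx 0.70$, to be compared with the empirical frequency, bearing in mind that finite-size effects are still visible at this $n$.
\begin{verbatim}
import random
from collections import defaultdict

def cm_is_connected(degrees):
    """Sample one uniform pairing of the half-edges and test whether the
    resulting multigraph is connected."""
    half = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(half)                      # uniform pairing of half-edges
    adj = defaultdict(list)
    for i in range(0, len(half), 2):
        u, w = half[i], half[i + 1]
        adj[u].append(w)
        adj[w].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(degrees)

# n1 ~ rho1*sqrt(n) vertices of degree 1, p2*n of degree 2, the rest degree 3,
# so that rho1 = 1, p2 = 0.3 and d = 2.7 in the limit.
n, rho1, p2 = 4000, 1.0, 0.3
n1, n2 = int(rho1 * n**0.5), int(p2 * n)
degrees = [1] * n1 + [2] * n2 + [3] * (n - n1 - n2)
if sum(degrees) % 2 == 1:
    degrees[-1] += 1                          # keep the total degree even
runs = 200
print(sum(cm_is_connected(degrees) for _ in range(runs)) / runs)
\end{verbatim}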
The convergence in distribution of $n-|\mathscr{C}_{\rm max} |$ to a proper random variable with finite mean is a stronger result than that of \L uczak \cite{Luc92}, who instead proved that
\begin{equation}
\dfrac{|\mathscr{C}_{\rm max}|}{n}\overset{\sss\prob}{\rightarrow} 1.
\end{equation}
Our improvement is achieved by an application of the multivariate method of moments, as well as a careful estimate of the probability that there exists $v \in [n]$ that is not part of $\mathscr{C}_{\rm max}$. We next investigate the boundary cases:
\begin{remark}[Boundary cases]
\label{rem-bc}
Our proof also applies to the boundary cases where $\rho_1 = \infty, p_2=0$ or $d=\infty$. When $d<\infty$, we obtain
\begin{equation}
\label{bc-1}
\P(\mathrm{CM}_n(\boldsymbol{d}) \text {\emph{ is connected}}) \to \begin{cases}
0 & \text{ when } \rho_1 = \infty,\\
1 & \text{ when } \rho_1, p_2 = 0.
\end{cases}
\end{equation}
When $d = \infty$, instead
\begin{equation}
\label{bc-2}
\lim_{n \to \infty} \P (\mathrm{CM}_n(\boldsymbol{d}) \text{ is connected})= \lim_{n \to \infty} \exp\Big\{-\frac{n_1^2}{2 \ell_n}\Big\}.
\end{equation}
\end{remark}
\medskip
We next investigate how many connected graphs there are with prescribed degrees in the connectivity window defined in Condition \ref{cond-crit-conn}. Assuming also that $D$ has a finite second moment, the configuration model is simple with non-vanishing probability. Under this additional condition we can prove the following results:
\begin{thm}[Connectivity conditioned on simplicity and number of connected simple graphs]\label{mainsim}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the connectivity critical window defined in Condition \ref{cond-crit-conn}. If
\[
\lim_{n \to \infty}\dfrac{\mathbb{E} [D_n(D_n-1)]}{\mathbb{E}[D_n]}= \nu < \infty,
\]
then
\begin{equation}
\label{conns}\begin{split}
&\lim_{n \to \infty}\P(\mathrm{CM}_n(\boldsymbol{d}) \text {\emph{ is connected}}
\mid\mathrm{CM}_n(\boldsymbol{d}) \text {\emph{ is simple}}) \\
&\qquad\qquad=\left(\dfrac{d-2p_2}{d}\right)^{1/2}\exp{\left(-\dfrac{\rho_1^2}{2(d-2p_2)} +\dfrac{p_2^2+d p_2}{d^2} \right)}.
\end{split}\end{equation}
Let $\mathscr{N}^C_n(\boldsymbol{d})$ be the number of connected simple graphs with degree distribution $\boldsymbol{d}$. Then
\begin{equation}
\begin{split}
\mathscr{N}^C_n(\boldsymbol{d}) =& \dfrac{( \ell_n-1)!!}{\prod_{i \in [n]}d_i !}\left(\dfrac{d-2p_2}{d}\right)^{1/2} \\
&\quad \times\exp\left\lbrace -\dfrac{\nu}{2} -\dfrac{\nu^2}{4}-\dfrac{\rho_1^2}{2(d-2p_2)}
+\dfrac{p_2^2+d p_2}{d^2}\right\rbrace (1+o(1)),
\end{split}
\end{equation}
where $\ell_n = \sum_{i \in [n]} d_i$ denotes the total degree.
\end{thm}
\medskip
With these results, the connectivity critical window is fully explored. Indeed, we have determined the asymptotic probability for the configuration model to produce a connected graph for all possible choices of the limiting degree distribution under the finite-mean assumption. What remains is to find the asymptotics of the number of connected simple graphs with degree distribution $\boldsymbol d$ above the connectivity critical window (i.e., when $n_1 \gg n^{1/2}$). In this case one should analyse how fast the probability of producing a connected graph vanishes, which is a hard problem.
It is also worth noticing that the size of the largest component is very sensitive to the precise way in which $n_2/n \to 1$
(recall that we assume that $p_2<1$ in Condition \ref{cond-crit-conn}).
We define $\mathscr{C} (v)$ as the connected component of a uniformly chosen vertex. Indeed, when $n_2=n$, it is not hard to see that
\begin{equation}
\dfrac{|\mathscr{C}_{\rm max}|}{n} \overset{d}{\to} S; \qquad\quad \dfrac{|\mathscr{C}(v)|}{n} \overset{d}{\to} T,
\end{equation}
where $S,T$ are proper random variables that satisfy the relation $S \overset{d}{=} T \vee [(1-T)S]$.
Instead, $|\mathscr{C}_{\rm max}|/n\overset{\sss\prob}{\rightarrow} 0$ when $n_2 = n - n_1$, with $n_1 \to \infty$, while $|\mathscr{C}_{\rm max}|/n\overset{\sss\prob}{\rightarrow} 1$ when $n_2 = n - n_4$, with $n_4 \to \infty$. The latter two statements can be proved by relating them to the case where $n_2=n$. Indeed, for the case where $n_1>0$, we take $n_2'=n_2+n_1/2$, and produce $\mathrm{CM}_n(\boldsymbol{d})$ from the configuration model with $n_2'$ vertices of degree 2 by `splitting' $n_1/2$ of its vertices of degree 2 into two vertices of degree 1 each. For the case where $n_4>0$, we take $n_2'=n_2+2n_4$, and produce $\mathrm{CM}_n(\boldsymbol{d})$ from the configuration model with $n_2'$ vertices of degree 2 by `merging' pairs among $2n_4$ vertices of degree 2 into $n_4$ vertices of degree 4.
\subsection{Outline of the proof}
We first notice that in the connectivity critical window our configuration model is supercritical, i.e., whp{} \ it has a unique component of linear size with respect to the whole graph. In more detail, for finite $\rho_1 < \infty$ and $ p_2< 1$,
\begin{equation}
\lim_{n \to \infty} \nu_n= \lim_{n \to \infty}\dfrac{\mathbb{E} [D_n(D_n-1)]}{\mathbb{E}[D_n]}
\geq \dfrac{2p_2 + 6 (1-p_2 )}{2 p_2 +3 (1- p_2 )}
> 1.
\end{equation}
Thus the results from \cite{JanLuc09,MolRee98} imply that $|\mathscr{C}_{\rm max}|= \Theta_\P (n)$, while the second largest connected component $\mathscr{C}_{\scriptscriptstyle (2)}$ satisfies $|\mathscr{C}_{\scriptscriptstyle (2)}| = o_\P(n)$ and $|E(\mathscr{C}_{\scriptscriptstyle (2)})|=o_\P(n)$. The proof of our main theorem is now divided into two parts:
\begin{enumerate}
\item To identify the limit distribution of the number of lines and cycles that form $[n] \setminus \mathscr{C}_{\rm max}$, which we do in Section \ref{sec-Pois-conv};
\item To prove that whp{}~ all vertices $v \in [n]$ with $d_v \geq 3$ are in the giant component $\mathscr{C}_{\rm max}$, which we do in Section \ref{sec-conn-three}.
\end{enumerate}
The proofs of our main theorems are then completed in Section \ref{sec-compl-pf}.
\section{Poisson convergence of the number of lines and cycles}
\label{sec-Pois-conv}
In this section, we prove that the number of cycles (components made of $k$ vertices of degree 2) and lines (components made of 2 vertices of degree 1 and $k-2$ vertices of degree 2) jointly converge to independent Poisson random variables. In Section \ref{sec-conn-three}, we will show that $[n] \setminus \mathscr{C}_{\rm max}$ whp{}{} only contains vertices of degree 1 and 2, so that all the other components are either cycles or lines.
We define the sequences of random variables $(\mathbf{C}_n, \mathbf{L}_n)= \big((C_k(n),L_k(n))\big)_{k\geq 1}$ as
\begin{itemize}
\item[$\rhd$] $C_k(n)$= $\#$ \{cycles of length $k$ in $\mathrm{CM}_n(\boldsymbol{d})$\},
\item[$\rhd$] $L_k(n)$= $\#$ \{lines of length $k$ in $\mathrm{CM}_n(\boldsymbol{d})$\}.
\end{itemize}
\medskip
We consider a vertex of degree 2 with a self-loop as a cycle of length 1. By convention, $L_1(n)=0$ for all $n$ since a vertex of degree $1$ cannot have a self-loop.
We define $\mathscr{C}_k= \{ \{v_1,v_2,...,v_k\} \subseteq \mathscr{N}_2\}$ to be the set of all collections of $k$ vertices that could form a cycle, and denote
\begin{equation}
C_k (n) = \sum_{c \in \mathscr{C}_k} \mathbbm{1} _{\{c \text{ forms a cycle}\}}.
\end{equation}
In a similar way we define $\mathscr{L}_k=\{ \{v_1,v_2,...,v_k\}: v_1, v_k \in \mathscr{N}_1; v_2,...,v_{k-1} \in \mathscr{N}_2 \}$ to be the set of all collections of $k$ vertices that could form a line, and denote
\begin{equation}
L_k (n) = \sum_{l \in \mathscr{L}_k} \mathbbm{1} _{\{l \text{ forms a line}\}}.
\end{equation}
We will use the multivariate method of moments to show that $\big((C_k(n), L_k(n))\big)_{ k \geq 1}$ converges to a vector of independent Poisson random variables. For a random variable $X$, we define $(X)_r=X (X -1)\cdots(X -r+1)$. For the multivariate method of moments, we recall two useful lemmas, whose proofs are given in \cite[Section 2.1]{Hofs09}:
\begin{lem}[Multivariate moment method with Poisson limit]
\label{poi}
A sequence of vectors of non-negative integer-valued random variables ~$(X_1^{\scriptscriptstyle(n)}, X_2^{\scriptscriptstyle(n)},...,X_k^{\scriptscriptstyle(n)} )_{n\geq 1}$ converges in distribution to a vector of independent Poisson random variables
with parameters $(\lambda_1 , \lambda_2,...,\lambda_k)$ when, for all possible choices of $(r_1, r_2,...,r_k) \in \mathbb{N}^{k} $,
\begin{equation}
\lim_{n\to \infty} \mathbb{E} [(X_1^{\scriptscriptstyle(n)})_{r_1}(X_2^{\scriptscriptstyle(n)})_{r_2}\cdots (X_k^{\scriptscriptstyle(n)})_{r_k}]
= \lambda_1 ^{r_1} \lambda_2^{r_2} \cdots \lambda_k^{r_k}.
\end{equation}
\end{lem}
\begin{lem}[Factorial moments of sums of indicators]
\label{ind}When $X_j=\sum_{i \in \mathcal{I}_j}\mathbbm{1}_{i^{(j)}}$ for all $j =1,\ldots,k$,
\begin{equation}
\begin{split}
&\mathbb{E}[(X_1^{\scriptscriptstyle(n)})_{r_1}(X_2^{\scriptscriptstyle(n)})_{r_2}\cdots(X_k^{\scriptscriptstyle(n)})_{r_k}]\\ =&\sum_{i_1^{(1)},\ldots,i_{r_1}^{(1)}\in \mathcal{I}_1}^{\ast} \cdots \sum_{i_1^{(k)},\ldots,i_{r_k}^{(k)}\in \mathcal{I}_k}^\ast \P(\mathbbm{1}_{i_s^{(j)}}=1\ \forall j=1,\ldots,k, s=1,\ldots,r_k),
\end{split}
\end{equation}
where $\sum^\ast$ denotes a sum over distinct indices.
\end{lem}
\medskip
See also \cite[Chapter 6]{JanLucRuc00} for more general versions of the method of moments.
The main result in this section is the following theorem:
\begin{thm}[Poisson convergence of number of lines and cycles]
\label{poicl}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the critical window for connectivity defined in Condition \ref{cond-crit-conn}. Then
\begin{equation}\label{conver}
(\mathbf{C}_n, \mathbf{L}_n) \xrightarrow{d} (\mathbf{C}, \mathbf{L}),
\end{equation}
where $(\mathbf{C}, \mathbf{L})=\big((C_k,L_k)\big)_{ k \geq 1}$ is a sequence of independent random variables with
\begin{equation}
L_k \overset{d}{=} {\sf Poi} \left( \dfrac{\rho_1^{2} (2p_2)^{k-2}}{2d^{k-1}}\right), \quad C_k \overset{d}{=}
{\sf Poi} \left( \dfrac{(2p_2)^{k}}{2k d^{k}}\right),
\end{equation}
and the convergence in \eqref{conver} holds in the product topology on $\mathbb{N}^{\infty}$.
\end{thm}
\medskip
\proof We want to find the combined factorial moments of $(L_j(n), C_j(n))_{j \leq k}$ and show that
\eqan{
\label{mome}
&\mathbb{E} [(C_1(n))_{r_1}(L_2(n))_{s_2}\cdots (C_k(n))_{r_k}(L_k(n))_{s_k}]\\
&\qquad\to \prod_{j=2}^{k} \left( \dfrac{\rho_1^{2} (2p_2)^{j-2}}{2d^{j-1}}\right)^{s_j}
\prod_{j=1}^{k} \left( \dfrac{(2p_2)^{j}}{2jd^{j}}\right)^{r_j}.\nonumber
}
We argue by induction on $k$. When $k=0$, both sides in \eqref{mome} are equal to 1, which initializes the induction hypothesis.
We next argue how to advance the induction hypothesis. We define
\[
w_{k,j}(\mathbf r,\mathbf s)= \{ c_i (1),\ldots, c_i(r_i)\in \mathscr{C}_i, 1 \leq i \leq k; l_i(1),\ldots, l_{i} (s_i) \in \mathscr{L}_i, 2\leq i \leq j \}.
\]
Further, $\mathscr{E}(w_{k,j}(\mathbf r,\mathbf s))$ denotes the event that all $c_i(h) \in w_{k,j}(\mathbf r,\mathbf s)$ form a cycle and all $l_i(h) \in w_{k,j}(\mathbf r,\mathbf s)$ form a line.
By Lemma \ref{ind},
\begin{equation}
\mathbb{E} [(C_1(n))_{r_1}(L_2(n))_{s_2}\cdots (C_k(n))_{r_k}(L_k(n))_{s_k}]
=\sum_{w_{k,k}(\mathbf r,\mathbf s) }\P (\mathscr{E}(w_{k,k}(\mathbf r,\mathbf s))).
\end{equation}
We rewrite this as
\begin{equation}
\sum_{w_{k,k-1}(\mathbf r,\mathbf s) }\P (\mathscr{E}(w_{k,k-1}(\mathbf r,\mathbf s))) \sum_{l_1,\ldots,l_{s_k}\in \mathscr{L}_k}
\mathbb{E}[\mathbbm{1}_{i_1} \mathbbm{1}_{i_2} \cdots \mathbbm{1}_{i_{s_k}}\vert \mathscr{E}(w_{k,k-1}(\mathbf r,\mathbf s))],
\end{equation}
where $\mathbbm{1}_{i_{s}}$ is the indicator that the vertices in $l_{i_s}$ form a line.
We call $a_1$ and $a_2$ the number of vertices of degree 1 and 2 necessary to create the cycles and lines prescribed by $w_{k,k-1}(\mathbf r,\mathbf s)$ and $a_e = a_1 +2a_2$ the number of half-edges they have. These are completely independent of the exact choice of $w_{k,k-1}(\mathbf r,\mathbf s)$ as long as all sets are disjoint (otherwise the event $\mathscr{E}(w_{k,k-1}(\mathbf r,\mathbf s))$ is impossible). The number of possible choices of $s_k$ different disjoint $l \in \mathscr{L}_k$ without using the vertices allocated for $w_{k,k-1}(\mathbf r,\mathbf s)$ is
\begin{equation}\begin{split}
\dfrac{(n_1-a_1)!}{2^{s_k} (n_1-a_1 -2s_k)!}&\dfrac{(n_2 - a_2)!}{(k-2)!^{s_k}(n_2 - a_2-(k-2)s_k)!}(1+o(1))\\
= & \dfrac{n_1^{2s_k}}{2^{s_k}}\dfrac{n_2^{(k-2)s_k}}{(k-2)!^{s_k}}(1+o(1)).
\end{split}
\end{equation}
The probability that the first forms a line is
\eqan{
&\dfrac{2k-4}{\ell_n-a_e-1}\dfrac{2k-6}{\ell_n-a_e-3}\cdots \dfrac{2}{\ell_n-a_e-2k+5}
\dfrac{1}{\ell_n-a_e-2k+3}\nonumber\\
&\qquad=\dfrac{(2k-4)!!}{\ell_n^{k-1}}(1+o(1)).
}
For all the other lines we just have to subtract from $\ell_n-a_e$ the $2k-2$ half-edges that we have used for each of the previous ones, so that
\begin{equation}
\mathbb{E}[\mathbbm{1}_{i_1} \mathbbm{1}_{i_2} \cdots \mathbbm{1}_{i_{s_k}}\vert \mathscr{E}(w_{k,k-1}(\mathbf r,\mathbf s))]
= \dfrac{(2k-4)!!^{s_k}}{\ell_n^{s_k(k-1)}}(1+o(1)).
\end{equation}
Finally we obtain
\begin{equation}\begin{split}
\sum_{l_1,\ldots,l_{s_k}\in \mathscr{L}_k}
& \mathbb{E}[\mathbbm{1}_{i_1} \mathbbm{1}_{i_2} \cdots \mathbbm{1}_{i_{s_k}}\vert \mathscr{E}(w_{k,k-1}(\mathbf r,\mathbf s))]\\
&= \dfrac{(\rho_1^{2}n)^{s_k}}{2^{s_k}}\dfrac{(p_2 n)^{(k-2)s_k}}{(k-2)!^{s_k}}
\dfrac{(2k-4)!!^{s_k}}{\ell_n^{s_k(k-1)}}(1+o(1))
\\&=\left( \dfrac{\rho_1^{2}(2p_2)^{k-2}}{2d^{k-1}}\right)^{s_k}(1+o(1)).
\end{split}\end{equation}
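For the reader's convenience, the last equality uses $(2k-4)!!=2^{k-2}(k-2)!$ together with $\ell_n\sim dn$, so that
\begin{equation*}
\dfrac{\rho_1^{2}n}{2}\cdot\dfrac{(p_2 n)^{k-2}}{(k-2)!}\cdot\dfrac{(2k-4)!!}{(dn)^{k-1}}
=\dfrac{\rho_1^{2}(2p_2)^{k-2}}{2d^{k-1}},
\end{equation*}
which is precisely the Poisson parameter of $L_k$ in the statement of Theorem \ref{poicl}.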
We do the same for the cycles $C_k(n)$, writing
\begin{equation}
\sum_{w_{k-1,k-1}(\mathbf r,\mathbf s) }\P (\mathscr{E}(w_{k-1,k-1}(\mathbf r,\mathbf s)))
\sum_{c_1,\ldots,c_{r_k}\in \mathscr{C}_k} \mathbb{E}[\mathbbm{1}_{i_1} \cdots \mathbbm{1}_{i_{r_k}}
\vert \mathscr{E}(w_{k-1,k-1}(\mathbf r,\mathbf s))].
\end{equation}
The number of possible choices of $r_k$ different disjoint $c \in \mathscr{C}_k$ without using the vertices allocated for $w_{k-1,k-1}(\mathbf r,\mathbf s)$ is
\begin{equation}
\dfrac{(n_2 - a_2)!}{k!^{r_k}(n_2 - a_2-kr_k)!}(1+o(1))= \dfrac{(n_2)^{kr_k}}{k!^{r_k}}(1+o(1)).
\end{equation}
The probability that the first forms a cycle is
\begin{equation}
\dfrac{2k-2}{\ell_n-a_e-3}\dfrac{2k-4}{\ell_n-a_e-5}\cdots \dfrac{2}{\ell_n-a_e-2k+3} \dfrac{1}{\ell_n-a_e-2k+1}(1+o(1)).
\end{equation}
Again, for all the other cycles we just have to subtract the $2k$ half-edges that we have used for the previous ones so that
\begin{equation}
\mathbb{E}[\mathbbm{1}_{i_1} \mathbbm{1}_{i_2} \cdots \mathbbm{1}_{i_{r_k}}\vert \mathscr{E}(w_{k-1,k-1}(\mathbf r,\mathbf s))]
= \dfrac{(2k-2)!!^{r_k}}{\ell_n^{r_k k}}(1+o(1)).
\end{equation}
Thus, we obtain
\begin{equation}\begin{split}
\sum_{c_1,\ldots,c_{r_k}\in \mathscr{C}_k}
& \mathbb{E}[\mathbbm{1}_{i_1} \mathbbm{1}_{i_2} \cdots \mathbbm{1}_{i_{r_k}}\vert \mathscr{E}(w_{k-1,k-1}(\mathbf r,\mathbf s))]\\
=&\dfrac{n_2^{kr_k}}{k!^{r_k}}\dfrac{(2k-2)!!^{r_k}}{(\ell_n)^{r_kk}}(1+o(1))
=\left( \dfrac{(2p_2)^k}{2kd^k}\right)^{r_k}(1+o(1)).
\end{split}\end{equation}
This advances the induction hypothesis. We now use induction to show that \eqref{mome} holds for every $k\geq 0$, and consequently prove the claim through the method of moments
in Lemma \ref{poi}. \qed
\bigskip
We next show that, in the case of a finite second moment, i.e., under the condition
\[
\lim_{n \to \infty} \dfrac{\mathbb{E}[D_n(D_n -1)]}{\mathbb{E}[D_n]} = \nu < \infty,
\]
the asymptotic distribution of the number of self-loops and multiple edges is independent of $(C_k)_{k\geq 3}$ and $(L_k)_{k \geq 2}$.
We first notice that connectivity and simplicity are {\em not} independent, since self-loops and multiple edges among vertices of degree 2 make the graph simultaneously disconnected and not simple, so for $\mathrm{CM}_n(\boldsymbol{d})$ to be simple, we have to require $C_1 (n) = C_2(n)=0$.
We define the number of self-loops and multiple edges in $\mathrm{CM}_n(\boldsymbol{d})$ by $S(n), M(n)$. We will show the following main result:
\begin{thm}[Poisson convergence of number self-loops and multiple edges]
\label{poism}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the critical window for connectivity defined in Condition \ref{cond-crit-conn}, and let $\nu_n= \mathbb{E} [D_n(D_n-1)]/\mathbb{E}[D_n] \to \nu < \infty$. Then
\begin{equation}
((L_k(n))_{ k \geq 2},(C_k(n))_{k\geq 3},S(n),M(n)) \xrightarrow{d} ((L_k)_{ k \geq 2},(C_k)_{ k \geq 3},S,M),
\end{equation}
with $((L_k)_{ k \geq 2},(C_k)_{ k \geq 3},S,M)$ independent Poisson random variables with
\begin{equation}\begin{split}
L_k \overset{d}{=} {\sf Poi} \left(\dfrac{\rho_1^2 (2p_2)^{k-2}}{2d^{k-1}}\right),& \qquad
C_k \overset{d}{=} {\sf Poi} \left(\dfrac{(2p_2)^k}{2kd^k} \right),\\
S \overset{d}{=} {\sf Poi} \left(\nu/2\right),&\qquad
M \overset{d}{=} {\sf Poi} \left(\nu^2/4\right).\quad
\end{split}\end{equation}
\end{thm}
\proof We again use the multivariate method of moments of Lemma \ref{poi}. We aim to find the combined factorial moments of $((L_j(n))_{2\leq j \leq k}, (C_j(n))_{3\leq j \leq k},S(n),M(n))$, and show that
\begin{equation}
\label{mome2}
\begin{split}
\mathbb{E} [(L_2(n))_{s_2}(C_3(n))_{r_3}\cdots (C_k(n))_{r_k}(L_k(n))_{s_k}(S(n))_{t}(M(n))_{u}]
\\ \to \left(\dfrac{\nu}{2}\right)^{t+2u}\prod_{j=2}^{k} \left( \dfrac{\rho_1^{2} (2p_2)^{j-2}}{2d^{j-1}}\right)^{s_j}
\prod_{j=3}^{k} \left( \dfrac{(2p_2)^{j}}{2jd^{j}}\right)^{r_j}.
\end{split}
\end{equation}
We now define
\[
w'_{k,j}(\mathbf r,\mathbf s)= \{ c_i (1), ..., c_i(r_i)\in \mathscr{C}_i, 3 \leq i \leq k; l_i(1),...,l_{i}(s_i) \in \mathscr{L}_i, 2\leq i \leq j \}
\]
as the choice of subsets that can form such lines and cycles. By Lemma \ref{ind}, the left-hand side of \eqref{mome2} equals
\begin{equation}
\sum_{w'_{k,k}(\mathbf r,\mathbf s)}\P (\mathscr{E}(w'_{k,k}(\mathbf r,\mathbf s)))\mathbb{E} [(S (n))_{t}(M (n))_{u}\mid \mathscr{E}(w'_{k,k}(\mathbf r,\mathbf s))].
\end{equation}
Conditionally on $\mathscr{E}(w'_{k,k}(\mathbf r,\mathbf s))$, the random vector $(S(n), M(n))$ has the same law as the number of self-loops and multiple edges in a configuration model with degree sequence $\boldsymbol{d}'$, which is obtained from $\boldsymbol{d}$ by removing the vertices appearing in $w'_{k,k}(\mathbf r,\mathbf s)$. We notice that $\boldsymbol{d}'$ is independent from the exact choice of $w'_{k,k}(\mathbf r,\mathbf s)$. Thus, when $D'_n$ denotes the degree of a uniform random vertex selected from $\boldsymbol{d}'$ and $\nu' = \lim_{n \to \infty} \dfrac{\mathbb{E}[D'_n(D'_n -1)]}{\mathbb{E}[D'_n]}$,
\[
\mathbb{E} [(S (n))_{t} (M(n))_{u}] \to \left(\dfrac{\nu'}{2}\right)^{t+2u}
\]
(see e.g., \cite{Jan09, Jan14}). Since we are removing only a finite number of vertices from $\boldsymbol{d}$, we have that $\nu' =\nu$ and we thus obtain
\begin{equation}
\left(\nu/2\right)^{t+2u}\sum_{w'_{k,k}(\mathbf r,\mathbf s)}\P (\mathscr{E}(w'_{k,k}(\mathbf r,\mathbf s))).
\end{equation}
We finally obtain \eqref{mome2} using the same induction argument used to prove \eqref{mome}, which completes the proof.
\qed
\section{Connectivity among vertices of degree at least three}
\label{sec-conn-three}
In this section, we show that in the connectivity critical window whp{}{} all vertices $v$ with $d_v \geq 3$ are in the giant component. This result is already known when $\min_{i \in [n]} d_i \geq 2$; we show that it still holds in the presence of a sufficiently small number of vertices of degree $1$. To do so we will prove the following theorem:
\begin{thm}[Connectivity among vertices with $d_v \geq 3$]
\label{geq3}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the connectivity critical window defined in Condition \ref{cond-crit-conn}. Then
\begin{equation}
\label{bd-1}
\mathbb{E} [\#\{ v \in [n] \colon d_v \geq 3, |\mathscr{C}(v)| < n/2\} ] \to 0.
\end{equation}
Consequently,
\begin{equation}
\label{bd-2}
\mathbb{E} [\#\{ v \in [n] \setminus \mathscr{C}_{\rm max} \colon d_v \geq 3\}] \to 0.
\end{equation}
\end{thm}
We will use the usual exploration process of the configuration model, as we describe now. At each time $t$, we define the sets of half-edges $\{ \mathcal{A}_t , \mathcal{D}_t , \mathcal{N}_t \}$ (the active, dead and neutral sets), and explore them in the following way:
\begin{itemize}
\item[{\tt Initialize}] We pick a vertex $v \in [n]$ uniformly at random with $d_v \geq 3$ and we set all its half-edges as active. All other half-edges are set as neutral.
\item[{\tt Step}] At each step $t$, we pick a half-edge $e_1(t) $ in $\mathcal{A}_t$ uniformly at random, and we pair it with another half-edge $e_2(t)$ chosen uniformly at random in $\mathcal{A}_t \cup \mathcal{N}_t$. We set $e_1(t), e_2(t)$ as dead.\\
\textbf{If} $e_2(t) \in \mathcal{N}_t$, then we find the vertex $v(e_2(t))$ incident to $e_2(t)$ and activate all its other half-edges.
\end{itemize}
As usual, the above exploration forms the graph at the same time as that it explores the neighborhood of the vertex $v$. A convenient way to encode the randomness in the exploration algorithm is to first choose a permutation $\xi$ of the half-edges, chosen uniformly at random from the set of all permutations of the half-edges. Then we run the exploration choosing as $e_1(t)$ and $e_2(t)$ always the first feasible half-edges in the permutation according to the exploration rules. This means that we take the first available active half-edge as $e_1(t)$, pair it to the first available active or neutral half-edge as $e_2(t)$ to create an edge consisting of $e_1(t)$ and $e_2(t)$, and then to update the status of all the half-edges as above.
The above description, which we will rely on for the remainder of this paper, makes it possible to analyse some properties of the exploration before running it, and will be useful to prove that whp{} we do not run out of high-degree vertices too early in the exploration.
We define the process $S_t^{\scriptscriptstyle(v)} = |\mathcal{A}_t|$. The update rules of $S_t^{\scriptscriptstyle(v)}$ are
\begin{equation}
\label{St-rec}
S_0^{\scriptscriptstyle(v)}= d_v, \quad\qquad
S_{t+1}^{\scriptscriptstyle(v)}-S_t^{\scriptscriptstyle(v)}= \begin{cases}d_{v(e_2(t))}-2 & \text{ if } e_2(t) \in \mathcal{N}_t, \\
-2 & \text{ if } e_2(t) \in \mathcal{A}_t.\end{cases}
\end{equation}
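For concreteness, a minimal (and deliberately inefficient) Python sketch of this exploration, tracking only the trajectory of $S_t^{\scriptscriptstyle(v)}$, is given below; the data structures are ours and the total degree is assumed to be even.
\begin{verbatim}
import random

def exploration_trajectory(degrees, start):
    """Half-edge exploration started from vertex `start`; returns the list
    of S_t = |A_t| until the active set is exhausted."""
    owner = [v for v, d in enumerate(degrees) for _ in range(d)]
    half_of = {}
    for h, v in enumerate(owner):
        half_of.setdefault(v, []).append(h)
    status = ['N'] * len(owner)          # 'A' active, 'D' dead, 'N' neutral
    for h in half_of[start]:
        status[h] = 'A'
    S = [len(half_of[start])]
    while True:
        active = [h for h in range(len(owner)) if status[h] == 'A']
        if not active:
            break
        e1 = random.choice(active)
        status[e1] = 'D'
        e2 = random.choice([h for h in range(len(owner))
                            if status[h] in ('A', 'N')])
        was_neutral = (status[e2] == 'N')
        status[e2] = 'D'
        if was_neutral:                  # activate the siblings of e2
            for h in half_of[owner[e2]]:
                if status[h] == 'N':
                    status[h] = 'A'
        S.append(sum(s == 'A' for s in status))
    return S

print(exploration_trajectory([3, 3, 2, 2, 1, 1], start=0))
\end{verbatim}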
We define $T_0$ as the smallest $t$ such that $S_t^{\scriptscriptstyle(v)} =0$ and
\eqn{
\label{T-half-def}
T_{1/2}=\max \{t\colon |\mathcal{N}_t| > n/2\}.
}
By definition of the exploration process, if $T_0 \geq T_{1/2}$ then $|\mathscr{C} (v)| \geq n/2$ (and, in particular, $v\in \mathscr{C}_{\rm max}$), so that Theorem \ref{geq3} follows once we prove the following proposition:
\begin{prop}[No hit of zero of exploration]
\label{surv}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the critical window for connectivity defined in Condition \ref{cond-crit-conn}. Let $v$ be such that $d_v \geq 3$. Then
\begin{equation}
\P ( \exists t \leq T_{1/2} \colon S_t^{\scriptscriptstyle(v)} =0) = o(n^{-1}).
\end{equation}
\end{prop}
\medskip
Since there are $n$ vertices in the graph, Proposition \ref{surv} indeed proves \eqref{bd-1} in Theorem \ref{geq3}.
We rely on the following result:
\begin{lem}[Bound on the depletion of high-degree vertices]
\label{dom}
Consider $\mathrm{CM}_n(\boldsymbol{d})$ in the connectivity critical window defined in Condition \ref{cond-crit-conn} and perform the exploration up to time $T_{1/2}=\max \{t \colon |\mathcal{N}_t| > n/2\}$. Then there exists $\varepsilon >0$ such that
\begin{equation}
\P ( \# \{ v \in \mathcal{N}_{T_{1/2}}\colon d_v \geq 3\} < \varepsilon n) = o(n^{-1}).
\end{equation}
\end{lem}
\proof Let us consider the exploration from a permutation $\xi$ of the set of the half-edges chosen uniformly at random, as described above \eqref{St-rec}. We call $\mathcal T _{n/2} (\xi)$ the set of vertices such that all their half-edges are among the last $n/2$ of the permutation $\xi$. The previous definitions imply that $\mathcal T _{n/2} (\xi) \subseteq \mathcal{N}_{T_{1/2}}$.
We now pick a $k>2$ such that $p_k=\lim n_k/n >0$; such a $k$ exists by the definition of the connectivity critical window. We want to find a lower bound on $N_{T_{1/2}}(k)= \# \{ v \in \mathcal{N}_{T_{1/2}}\colon d_v =k\} \geq \# \{ v \in \mathcal T _{n/2} (\xi) \colon d_v =k\}$.
Before running the exploration, we sequentially locate the half-edges of the vertices of degree $k$ in $\xi$. We stop this process once we have examined $n/(4k)$ vertices, or when we run out of vertices of degree $k$. We define the $\sigma$-algebra ${\mathscr F}^k_i$ generated by the positions of the half-edges of the first $i$ vertices that we have examined. We then find that, at each step $j$, thanks to the stopping conditions, there are still at least $n/4$ available spots among the last $n/2$ half-edges in $\xi$, so that
\begin{equation}
\P (v_j \in \mathcal T _{n/2} (\xi) | {\mathscr F}^k_{j-1} ) \geq \left( \dfrac{n}{4 \ell_n}\right)^k .
\end{equation}
We know that $\lim_{n \to \infty} \left( \dfrac{n}{4 \ell_n}\right)^k = \big(1/(4d)\big)^k\equiv q_k$, so that
\begin{equation}
N_{T_{1/2}}(k) \overset{st}{\geq} {\sf Bin} \Big((p_k \wedge \dfrac{1}{4k}) n, q_k\Big).
\end{equation}
By concentration of the binomial distribution (see e.g., \cite{ArrGor89}), there exists a $c=c(a,q_k)$ such that, uniformly in $n$,
\begin{equation}\label{bin}
\P \left({\sf Bin} \left(an, q_k\right) \leq \dfrac{an}{2}q_k\right)\leq \mathrm{e}^{-cn} = o(n^{-1}).
\end{equation}
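For instance, \eqref{bin} follows from the standard multiplicative Chernoff bound for the lower tail of the binomial distribution: for $X \sim {\sf Bin}(m,q)$ and $0<\delta<1$, $\P(X \leq (1-\delta) mq) \leq \mathrm{e}^{-\delta^{2} mq/2}$, so that taking $m=an$, $q=q_k$ and $\delta = 1/2$ shows that \eqref{bin} holds with $c= a q_k/8$.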
The claim follows by picking $\varepsilon < \dfrac{1}{2} \big(p_k \wedge \dfrac{1}{4k}\big)q_k$. \qed
\bigskip
We notice that $S_{t+1}^{\scriptscriptstyle(v)} -S_t ^{\scriptscriptstyle(v)}<0$ only when one of the following events occurs:
\begin{itemize}
\item[$\rhd$] $A(t)=\{d_{v(e_2(t))}=1\}$, where $e_2(t)$ is the half-edge to which the $t$th paired half-edge is paired. In this case $S_{t+1}^{\scriptscriptstyle(v)} -S_t^{\scriptscriptstyle(v)} = -1$. Thanks to Lemma \ref{dom}, if we define ${\mathscr F}_k$ as the $\sigma$-algebra generated by the first $k$ steps of the exploration, then, uniformly for $t \leq T_{1/2}$,
\eqn{
\label{A(t)-bd}
\P (A(t)\mid {\mathscr F}_{t-1}) \leq \dfrac{2\rho_1}{ \sqrt{n}}.
}
\item[$\rhd$] $B(t)=\{e_2(t) \in \mathcal{A}_t\}$, where $e_2(t)$ is the half-edge to which the $t$th paired half-edge is paired. In this case $S_{t+1}^{\scriptscriptstyle(v)} -S_t^{\scriptscriptstyle(v)} = -2$. From the description of the exploration, we obtain that,
uniformly for $t \leq T_{1/2}$,
\eqn{
\label{B(t)-bd}
\P (B(t)| {\mathscr F}_{t-1}) \leq \dfrac{S_t^{\scriptscriptstyle(v)}-1}{\ell_n -t-1} \leq \dfrac{2 S_t^{\scriptscriptstyle(v)}}{n}.
}
\end{itemize}
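The bound \eqref{A(t)-bd} can be understood as follows: for $t\leq T_{1/2}$, more than $n/2$ vertices have not yet been explored, so (since all degrees are at least one) at least $n/2$ half-edges are still unpaired; at most $n_1$ of these are attached to vertices of degree one, whence $\P(A(t)\mid{\mathscr F}_{t-1})\leq n_1/(n/2)=2n_1/n$, which is at most $2\rho_1(1+o(1))/\sqrt{n}$ since $n_1/\sqrt{n}\to\rho_1$; the $1+o(1)$ factor is harmless in all estimates below. The bound \eqref{B(t)-bd} follows in the same way, since at most $S_t^{\scriptscriptstyle(v)}-1$ of the unpaired half-edges are active.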
Now we prove four lemmas that together will yield Proposition \ref{surv}.
The first lemma contains a lower bound on the survival time of the process. Indeed, we show that whp{} the component of $v$ has at least polynomial size in $n$:
\begin{lem}[No early hit of zero]
\label{1/8}
Let $\mathrm{CM}_n(\boldsymbol{d})$ be in the connectivity critical window defined in Condition \ref{cond-crit-conn}. Then,
\begin{equation}
\P ( \exists t \leq n^{1/8}\colon S_t ^{\scriptscriptstyle(v)}=0) = o(n^{-1}).
\end{equation}
\end{lem}
\proof For the process to die out before time $n^{1/8}$, one of the following three events needs to occur:
\[\begin{split}
F_1 =\bigcup_{s_1,s_2,s_3 \leq n^{1/8}}& A(s_1) \cap A(s_2) \cap A (s_3) \cap \{ S_{s_1}^{\scriptscriptstyle(v)},S_{s_2}^{\scriptscriptstyle(v)},S_{s_3}^{\scriptscriptstyle(v)} \leq 3 \}, \\
F_2=\bigcup_{s_1,s_2 \leq n^{1/8}}& A(s_1) \cap B (s_2) \cap \{ S_{s_1}^{\scriptscriptstyle(v)},S_{s_2}^{\scriptscriptstyle(v)} \leq 3 \}, \\
F_3=\bigcup_{s_1,s_2 \leq n^{1/8}}& B(s_1) \cap B (s_2) \cap \{ S_{s_1}^{\scriptscriptstyle(v)},S_{s_2}^{\scriptscriptstyle(v)} \leq 4 \}.
\end{split}
\]
We estimate using \eqref{A(t)-bd} and \eqref{B(t)-bd} to obtain
\begin{align}
\P (F_1) &\leq {{n^{1/8}}\choose{3}} \left(\dfrac{2\rho_1}{ \sqrt{n}}\right)^3 \leq \dfrac{4\rho_1^3}{3} \dfrac{n^{3/8}}{n^{3/2}}= o(n^{-1}), \\
\P (F_2) &\leq {{n^{1/8}}\choose{2}} \dfrac{2\rho_1}{ \sqrt{n}} \dfrac{6}{n}\leq \dfrac{3 \rho_1}{2}\dfrac{n^{1/4}}{n^{3/2}}= o(n^{-1}),\\
\P (F_3) &\leq {{n^{1/8}}\choose{2}} \left( \dfrac{8}{n} \right)^2 \leq 32\dfrac{n^{1/4}}{n^{2}}= o(n^{-1}).
\end{align}
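Here, for instance, the first estimate follows by conditioning successively on ${\mathscr F}_{s_3-1}$, ${\mathscr F}_{s_2-1}$ and ${\mathscr F}_{s_1-1}$ and applying \eqref{A(t)-bd} at each of the three times, so that every choice of $s_1<s_2<s_3\leq n^{1/8}$ contributes at most $(2\rho_1/\sqrt{n})^3$; the estimates on $\P(F_2)$ and $\P(F_3)$ are obtained analogously from \eqref{A(t)-bd} and \eqref{B(t)-bd}, using the restrictions $S_{s_i}^{\scriptscriptstyle(v)}\leq 3$ and $S_{s_i}^{\scriptscriptstyle(v)}\leq 4$ in the respective events.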
Applying the union bound proves the claim.
\qed
\medskip
The next lemma shows that when the process is sufficiently low, it is very unlikely to decrease much further, since there are then few active half-edges with which to create loops:
\begin{lem}[Unlikely to dip even lower]
\label{down6}
Let $\mathrm{CM}_n(\boldsymbol{d})$ be in the connectivity critical window defined in Condition \ref{cond-crit-conn}. Fix $v$ such that $d_v \geq 3$. Then, for every $t \leq T_{1/2}$ and $\gamma > 0$,
\begin{equation}\label{o-2}
\P \Big( \sum_{i \leq \gamma n^{1/8}} (S_{t+i}^{\scriptscriptstyle(v)}-S_{t+i+1}^{\scriptscriptstyle(v)} )\mathbbm{1}_{S_{t+i+1}^{\scriptscriptstyle(v)}<S_{t+i}^{\scriptscriptstyle(v)}<3\gamma n^{1/8}}\geq 6\Big) = o(n^{-2}).
\end{equation}
\end{lem}
\proof As in the proof of Lemma \ref{1/8}, we find some events that must occur in order that the event in the left-hand side of \eqref{o-2} occurs. We start by introducing some notation. For $1\leq i<j$ and $s_i\geq 0$, we write $A_{[i,j]}(t)=A(t+s_i) \cap \dots \cap A(t+s_j), B_{[i,j]}(t)=B(t+s_i) \cap \dots \cap B(t+s_j)$. Then, for the event in the left-hand side of \eqref{o-2} to occur, we need that one of the following events occurs:
\[
\begin{split}
G_1=\bigcup_{s_1,\ldots,s_6 \leq \gamma n^{1/8}}& A_{[1,6]}(t) \cap \bigcap_{i \leq 6 } \{ S_{t+s_i }^{\scriptscriptstyle(v)}\leq 3\gamma n^{1/8}\},\\
G_2=\bigcup_{s_1,\ldots,s_5 \leq \gamma n^{1/8}}& A_{[1,4]}(t) \cap B(t+s_5) \cap \bigcap_{i \leq 5} \{ S_{t+s_i}^{\scriptscriptstyle(v)}\leq 3\gamma n^{1/8} \},\\
G_3=\bigcup_{s_1,\dots ,s_4 \leq \gamma n^{1/8}}& A_{[1,2]}(t)\cap B_{[3,4]}(t) \cap \bigcap_{i \leq 4} \{ S_{t+s_i}^{\scriptscriptstyle(v)}\leq 3\gamma n^{1/8}\},\\
G_4=\bigcup_{s_1,\ldots, s_3 \leq \gamma n^{1/8}}& B_{[1,3]}(t)\cap \bigcap_{i\leq 3} \{ S_{t+s_i}^{\scriptscriptstyle(v)}\leq 3\gamma n^{1/8}\}.
\end{split}
\]
Again we estimate using \eqref{A(t)-bd} and \eqref{B(t)-bd} to obtain
\begin{align}
\P (G_1) \leq &{{\gamma n^{1/8}}\choose{6}}\left(\dfrac{2\rho_1}{ \sqrt{n}}\right)^6 \leq \dfrac{2^6\gamma^6 \rho_1^6}{6!} \dfrac{ n^{6/8}}{n^3}
= o(n^{-2}),\\
\P (G_2) \leq &{{\gamma n^{1/8}}\choose{5}}\left(\dfrac{2\rho_1}{ \sqrt{n}}\right)^4
\dfrac{6\gamma n^{1/8}}{n} \leq \dfrac{2^53\gamma^6 \rho_1^4}{5!} \dfrac{ n^{6/8}}{n^3}
= o(n^{-2}),\\
\P (G_3) \leq &{{\gamma n^{1/8}}\choose{4}}\left(\dfrac{2\rho_1}{ \sqrt{n}}\right)^2 \left( \dfrac{6\gamma n^{1/8}}{n}\right)^2
\leq \dfrac{2^43^2\gamma^6 \rho_1^2}{4!} \dfrac{ n^{6/8}}{n^3}
= o(n^{-2}),\\
\P (G_4) \leq &{{\gamma n^{1/8}}\choose{3}}\left( \dfrac{6\gamma n^{1/8}}{n}\right)^3 \leq \dfrac{6^3\gamma^6 }{3!} \dfrac{ n^{6/8}}{n^3}
= o(n^{-2}).
\end{align}
Applying the union bound proves the claim.
\qed
\bigskip
We now show that not only does the exploration survive up to time $n^{1/8}$, but also that it then has a rather large number of active half-edges:
\begin{lem}[Law of large numbers lower bound on exploration]
\label{many}
Fix $v$ such that $d_v \geq 3$. The exploration on $\mathrm{CM}_n(\boldsymbol{d})$ in the connectivity critical window defined in Condition \ref{cond-crit-conn} satisfies that there exists a $\gamma >0$ such that
\begin{equation}
\P (S_{n^{1/8}}^{\scriptscriptstyle(v)} < 2\gamma n^{1/8} ) =o(n^{-1}).
\end{equation}
\end{lem}
\proof We divide the proof into two cases:
\begin{enumerate}
\item There exists $t < n^{1/8}$ such that $S_t^{\scriptscriptstyle(v)} \geq 3\gamma n^{1/8}$. In this case, fix $n$ so large that $3\gamma n^{1/8} -6 \geq 2\gamma n^{1/8}$.
Then, note that, since $S_{s+1}^{\scriptscriptstyle(v)}-S_{s}^{\scriptscriptstyle(v)}\geq -2$, in order for $S_{n^{1/8}}^{\scriptscriptstyle(v)} < 2\gamma n^{1/8}$ to occur we must have that
$\sum_{i \leq \gamma n^{1/8}} (S_{t+i}^{\scriptscriptstyle(v)}-S_{t+i+1}^{\scriptscriptstyle(v)} )\mathbbm{1}_{S_{t+i+1}^{\scriptscriptstyle(v)}<S_{t+i}^{\scriptscriptstyle(v)}<3\gamma n^{1/8}}\geq 6$.
By Lemma \ref{down6} this has probability $o(n^{-2})$, so that $S_{n^{1/8}}^{\scriptscriptstyle(v)} \geq 3\gamma n^{1/8} -6 \geq 2\gamma n^{1/8}$ with probability $1-o(n^{-2})$.
\item $S_t^{\scriptscriptstyle(v)} <3\gamma n^{1/8}$ for all $t \leq n^{1/8}$. In this case, we know from Lemma \ref{down6} that, with probability $1-o(n^{-2})$, the sum of the down steps $(S_{t+i}^{\scriptscriptstyle(v)}-S_{t+i+1}^{\scriptscriptstyle(v)}) \mathbbm{1}_{S_{t+i+1}^{\scriptscriptstyle(v)}<S_{t+i}^{\scriptscriptstyle(v)}<3\gamma n^{1/8}}$ is at most $6$. Under this condition, we recall Lemma \ref{dom} and note that $d_{v_t}\geq 3$ with probability at least $\varepsilon$, for some $\varepsilon >0$, since $n^{1/8} \leq T_{1/2}$. Thus,
\begin{equation}
S_{n^{1/8}}^{\scriptscriptstyle(v)} \overset{st}{\geq}{\sf Bin} (n^{1/8}, \varepsilon)-6.
\end{equation}
By concentration of the binomial distribution (see e.g., \cite{ArrGor89})
\begin{equation}\label{bin2}
\P \Big({\sf Bin} (n^{1/8}, \varepsilon) \leq \dfrac{1}{2}\varepsilon n^{1/8}\Big)\leq \mathrm{e}^{-cn^{1/8}} = o(n^{-2})
\end{equation}
for sufficiently large $n$. The claim now follows by choosing $\gamma < \varepsilon / 4$.
\end{enumerate}
\qed
\bigskip
Now we know that at time $t=n^{1/8}$, with probability $1-o(n^{-1})$, $S_t^{\scriptscriptstyle(v)} \geq 2 \gamma n^{1/8}$. This means that from that point onwards, we need at least $\gamma n^{1/8}$ steps for the process to die. To prove Proposition \ref{surv} we use the following lemma:
\begin{lem}[Process does not go down too much]
\label{down}
Let $\mathrm{CM}_n(\boldsymbol{d})$ be in the connectivity critical window defined in Condition \ref{cond-crit-conn}. Fix $v$ such that $d_v \geq 3$.
Then, for every $\gamma > 0$,
\begin{equation}
\P (\exists t \in (n^{1/8},T_{1/2})\colon S_{t+\gamma n^{1/8}}^{\scriptscriptstyle(v)}<S_{t}^{\scriptscriptstyle(v)}< 3 \gamma n^{1/8}-6) = o(n^{-1}).
\end{equation}
\end{lem}
\proof
First fix $t\in (n^{1/8},T_{1/2})$. Again we split the proof into two parts:
\begin{enumerate}
\item There exists $i < \gamma n^{1/8}$ such that $S_{t+i}^{\scriptscriptstyle(v)} \geq 3\gamma n^{1/8}$. In this case, we again know from Lemma \ref{down6} that $S_{t+\gamma n^{1/8}}^{\scriptscriptstyle(v)} \geq 3\gamma n^{1/8} -6 \geq 2\gamma n^{1/8}$ with probability $1-o(n^{-2})$.
\item $S_{t+i}^{\scriptscriptstyle(v)} <3\gamma n^{1/8}$ for all $i \leq \gamma n^{1/8}$. In this case we know from Lemma \ref{down6} that, with probability $1-o(n^{-2})$, the sum of the down steps $(S_{t+i}^{\scriptscriptstyle(v)}-S_{t+i+1}^{\scriptscriptstyle(v)}) \mathbbm{1}_{S_{t+i+1}^{\scriptscriptstyle(v)}<S_{t+i}^{\scriptscriptstyle(v)}<3\gamma n^{1/8}}$ is at most $6$. Under this condition we can again write
\begin{equation}
S_{t+\gamma n^{1/8}}^{\scriptscriptstyle(v)}-S_t ^{\scriptscriptstyle(v)} \overset{st}{\geq} {\sf Bin} (\gamma n^{1/8}, \varepsilon)-6.
\end{equation}
The concentration bound \eqref{bin2}, with $n^{1/8}$ replaced by $\gamma n^{1/8}$, proves that the probability that $S_{t+\gamma n^{1/8}}^{\scriptscriptstyle(v)}<S_t ^{\scriptscriptstyle(v)}$ is $o(n^{-2})$.
\end{enumerate}
The union bound implies that
\begin{equation}
\P (\exists t \in (n^{1/8},T_{1/2})\colon S_{t+\gamma n^{1/8}}^{\scriptscriptstyle(v)}<S_{t}^{\scriptscriptstyle(v)}< 3 \gamma n^{1/8}-6) \leq \ell_n \ o(n^{-2})= o(n^{-1}).
\end{equation}
\qed
\bigskip
Now we are ready to complete the proof of Proposition \ref{surv}:
\begin{proof}[Proof of Proposition \ref{surv}] Lemmas \ref{1/8} and \ref{many} show that up to time $n^{1/8}$ the process is very unlikely
to die and very likely to grow to at least polynomial size:
\begin{equation}
\P (T_0 > n^{1/8}, S_{n^{1/8}}^{\scriptscriptstyle(v)}> 2\gamma n^{1/8}) = 1 - o(n^{-1}).
\end{equation}
Now we define the sequence of random variables $Q_i = S_{(1+\gamma i)n^{1/8}}^{\scriptscriptstyle(v)}$, so that $\P ( Q_0 < 2 \gamma n^{1/8}) = o(n^{-1})$.
By Lemma \ref{down},
\begin{equation}
\P\left(Q_{i+1} \geq Q_i \ \forall i\leq \dfrac{ T_{1/2}}{\gamma n^{1/8}} \right) = 1- o(n^{-1}),
\end{equation}
and consequently
\begin{equation}
\P \left(Q_i \geq 2 \gamma n^{1/8} \ \forall i \leq \dfrac{ T_{1/2}}{\gamma n^{1/8}}\right) = 1-o(n^{-1}).
\end{equation}
Since $S_{t+1}^{\scriptscriptstyle(v)} -S_t^{\scriptscriptstyle(v)} \geq -2$, we know that $S_{t+s}^{\scriptscriptstyle(v)} \geq S_t^{\scriptscriptstyle(v)} - 2s$, so we conclude that
\begin{equation}
\P (S_t^{\scriptscriptstyle(v)} >0\ \forall t \leq T_{1/2} ) = 1 - o(n^{-1}).
\end{equation}
This completes the proof of Proposition \ref{surv}.
\end{proof}
\bigskip
\noindent
We continue with the proof of Theorem \ref{geq3}:
\medskip
\noindent
{\it Proof of Theorem \ref{geq3}.} Proposition \ref{surv} proves \eqref{bd-1} in Theorem \ref{geq3}. To prove \eqref{bd-2} in Theorem \ref{geq3},
we use that if $|\mathscr{C}(v)|>n/2$, then $v\in \mathscr{C}_{\rm max}$, to bound
\eqan{
\mathbb{E} [\#\{ v \in [n] \setminus \mathscr{C}_{\rm max} \colon d_v \geq 3\}]
&\leq \mathbb{E} [\#\{ v\in[n] \colon d_v \geq 3, |\mathscr{C}(v)|< n/2\}]=o(1)
}
by Proposition \ref{surv}. \qed
\bigskip
\noindent
To show that the number of vertices outside the giant component in fact has bounded expectation, we need a slightly stronger result:
\begin{prop}[Clusters of vertices of degree at least three outside $\mathscr{C}_{\rm max}$]
\label{out}
Let $\mathrm{CM}_n(\boldsymbol{d})$ be in the connectivity critical window defined in Condition \ref{cond-crit-conn}. Then
\begin{equation}
\mathbb{E} [\# \{ v \notin \mathscr{C}_{\rm max}\colon v \leftrightarrow [n]\setminus (\mathscr{N}_1\cup\mathscr{N}_2)\}] \to 0,
\end{equation}
where, for a set of vertices $A\subseteq [n]$, $v \leftrightarrow A$ denotes that there exists $a\in A$ such that $v$ and $a$ are in the same connected component.
\end{prop}
\proof We have already proved that $\mathbb{E} [\#\{ v \in [n] \setminus \mathscr{C}_{\rm max}\colon d_v \geq 3\}] \to 0$.
We now initialize the exploration starting from a vertex $v$ with $d_v \in\{ 1,2\}$. Notice that the probability for the process to survive for $n^{1/8}$ steps without finding a vertex of degree at least $3$ is smaller than $e^{-cn^{1/8}}$ for some $c>0$, since at every step the probability to find a vertex $w$ with $ d_w\geq 3$ is bounded away from $0$.
\begin{itemize}
\item[$\rhd$] If $d_v =2$ and our exploration finds a vertex $w$ with $ d_w \geq 3$ before time $n^{1/8}$, then for the process to die out before time $n^{1/8}$, we again need one of the events $F_1, F_2, F_3$ to occur. The proof that $\mathbb{E} [\# \{ v \in \mathscr{N}_2 \setminus \mathscr{C}_{\rm max}\colon v \leftrightarrow [n]\setminus (\mathscr{N}_1\cup\mathscr{N}_2)\}]\to 0$ is then identical to the proof of Proposition \ref{surv}.
\item[$\rhd$] In the connectivity critical window, we have that $n_1 = O (\sqrt{n})$. If $d_v =1$ and our exploration at a certain point finds a vertex $w$ with $d_w \geq 3$, then for the process to die out before time $n^{1/8}$ we need one of the following events to occur:
\eqan{
F'_1 &=\bigcup_{s_1,s_2 \leq n^{1/8}}A(s_1) \cap A(s_2) \cap \{ S_{s_1}^{\scriptscriptstyle(v)},S_{s_2}^{\scriptscriptstyle(v)} \leq 2 \}, \\
F'_2&=\bigcup_{s \leq n^{1/8}} B (s) \cap \{ S_{s}^{\scriptscriptstyle(v)} =2 \}.\nonumber
}
We estimate using \eqref{A(t)-bd} and \eqref{B(t)-bd} to obtain
\begin{align}
\P (F'_1) &\leq {{n^{1/8}}\choose{2}} \left(\dfrac{2\rho_1}{ \sqrt{n}}\right)^2 \leq 2\rho_1^2 \dfrac{n^{1/4}}{n}= o(n^{-1/2}), \\
\P (F'_2) &\leq n^{1/8} \dfrac{4}{n}= o(n^{-1/2}).
\end{align}
From here the proof that $\mathbb{E} [\# \{ v \in \mathscr{N}_1 \setminus \mathscr{C}_{\rm max}: v \leftrightarrow w; d_w \geq 3 \}]\to 0$ is then identical to the proof of Proposition \ref{surv}.
\end{itemize}
Since
\eqan{
&\mathbb{E} [\# \{ v \notin \mathscr{C}_{\rm max}\colon v \leftrightarrow [n]\setminus (\mathscr{N}_1\cup\mathscr{N}_2)\}]\\
&\qquad= \mathbb{E} [\#\{ v \in [n] \setminus \mathscr{C}_{\rm max} \colon d_v \geq 3\}]\nonumber\\
&\qquad\qquad+ \mathbb{E} [\# \{ v \in (\mathscr{N}_1 \cup \mathscr{N}_2) \setminus \mathscr{C}_{\rm max}\colon v \leftrightarrow [n]\setminus (\mathscr{N}_1\cup\mathscr{N}_2)\}],\nonumber
}
we obtain the claim.
\qed
\section{Proof of the Main Theorems }
\label{sec-compl-pf}
We can now finally prove the main theorems, putting together results from the previous two sections.
\begin{proof}[Proof of Theorem \ref{main}]
We know that
\begin{equation}
\{ \mathrm{CM}_n(\boldsymbol{d}) \text{ is connected} \}= \{C_k(n)=L_k(n)=0 \ \forall k\} \cap \{[n]\setminus (\mathscr{N}_1 \cup \mathscr{N}_2)\subseteq \mathscr{C}_{\rm max}\}.
\end{equation}
We have proved in Theorem \ref{geq3} that, whp{}, $[n]\setminus \mathscr{C}_{\rm max}\subseteq \mathscr{N}_1 \cup \mathscr{N}_2$. Thus,
\eqn{
\mathbb{P}(\mathrm{CM}_n(\boldsymbol{d}) \text{ is connected})=\mathbb{P}(C_k(n)=L_k(n)=0 \ \forall k)+o(1).
}
By Theorem \ref{poicl} and the independence of $C _k, L_k$, for each $j < \infty$,
\begin{equation}
\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k \leq j) = \prod_{k=1}^{j} \P (C_k =0) \prod_{k=2}^{j} \P (L_k=0).
\end{equation}
To pass to the limit we use dominated convergence. We compute that
\begin{equation}
\mathbb{E} [L_k(n)] = n_1 \dfrac{2 n_2}{\ell_n -1}\dfrac{2 n_2-2}{\ell_n -3}\cdots \dfrac{2 n_2 - 2k +4 }{\ell_n - 2k +3}\dfrac{n_1 -1}{\ell_n -2k+1}
\leq \dfrac{n_1^2 (2n_2)^{k-2}}{(\ell_n-2k)^{k-1}}.
\end{equation}
Since $\dfrac{n_1}{\sqrt{n}}\to \rho_1$, $\dfrac{n_2}{n} \to p_2$ and $\dfrac{\ell_n}{n}\to d$, for each $\varepsilon>0$ and all $n$ large enough we have
\begin{equation}
\label{exp-Lk-bd}
\mathbb{E} [L_k(n)] \leq \dfrac{n_1^2(2 n_2)^{k-2}}{2(\ell_n-2k)^{k-1}}
\leq \dfrac{(\rho_1^2+\varepsilon)^2}{2(d-\varepsilon)} \left(\dfrac{2(p_2+\varepsilon)}{d-\varepsilon}\right)^{k-2}.
\end{equation}
For $\varepsilon>0$ small enough, $2(p_2+\varepsilon)< d-\varepsilon$, so the right-hand side of \eqref{exp-Lk-bd} decays exponentially in $k$.
Similarly for $C_k(n)$,
\begin{equation}
\mathbb{E}[C_k(n)] =\dfrac{1}{2k} n_2 \dfrac{2n_2 -2}{\ell_n-2}\cdots \dfrac{2n_2 -2k+4}{\ell_n-2k+4}\dfrac{1}{\ell_n-2k+2}\leq \dfrac{(2n_2)^k}{k(\ell_n-2k)^k}.
\end{equation}
As before, for every $\varepsilon>0$ and all $n$ large enough we have
\begin{equation}
\label{exp-Ck-bd}
\mathbb{E} [C_k(n)]\leq \dfrac{(2n_2)^k}{k(\ell_n-2k)^k}
\leq \dfrac{(2p_2+2 \varepsilon)^k}{k(d-\varepsilon)^k}.
\end{equation}
Again, for $\varepsilon>0$ small enough, $2(p_2+\varepsilon)< d-\varepsilon$, so that the right-hand side of \eqref{exp-Ck-bd} decays exponentially in $k$.
Since
\[
\{C_k(n)=L_k(n)=0 \ \forall k\} = \bigcap_j \{C_k(n)=L_k(n)=0 \ \forall k \leq j\},
\]
we obtain
\eqan{
\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k) &\leq\lim_{j \to \infty}\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k \leq j) \\
&=\prod_{k=1}^{\infty} \P (C_k =0) \prod_{k=2}^{\infty} \P (L_k=0)\nonumber\\
&=\exp{\left(-\sum_{k=1}^{\infty} \dfrac{(2p_2)^k}{2kd^k} -\sum_{k=2}^{\infty} \dfrac{\rho_1^{2} (2p_2)^{k-2}}{2d^{k-1}}\right)}\nonumber\\
&= \left(\dfrac{d-2p_2}{d}\right)^{1/2}\exp{\left(-\dfrac{\rho_1^2}{2(d-2p_2)}\right)},\nonumber
}
where we use that $-\sum_{k\geq 1} x^k/k=\log(1-x)$ for $0\leq x<1$.
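For completeness, the two series can be evaluated in closed form: with $x=2p_2/d<1$,
\begin{equation}
\sum_{k=1}^{\infty} \dfrac{(2p_2)^k}{2kd^k} = -\dfrac{1}{2}\log\Big(1-\dfrac{2p_2}{d}\Big)
\qquad\text{and}\qquad
\sum_{k=2}^{\infty} \dfrac{\rho_1^{2} (2p_2)^{k-2}}{2d^{k-1}} = \dfrac{\rho_1^2}{2d}\cdot\dfrac{1}{1-x}=\dfrac{\rho_1^2}{2(d-2p_2)},
\end{equation}
and exponentiating yields the product in the last line.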
For the lower bound, we use
\eqan{
\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k) &\geq\lim_{j \to \infty}\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k \leq j)\\
&\qquad- \limsup_{j\rightarrow \infty}\limsup_{n\rightarrow \infty} \P( \exists k > j \colon C_k(n)+L_k(n) \geq 1).\nonumber
}
We find, using the Markov inequality,
\eqan{
\label{truncation}
&\limsup_{j\rightarrow \infty}\limsup_{n\rightarrow \infty} \P( \exists k > j \colon C_k(n)+L_k(n) \geq 1)\\
&\qquad= \limsup_{j\rightarrow \infty}\limsup_{n\rightarrow \infty} \P( \sum_{k > j} ( C_k(n)+L_k(n)) \geq 1)\nonumber\\
&\qquad\leq \limsup_{j\rightarrow \infty}\limsup_{n\rightarrow \infty} \sum_{k > j} \mathbb{E} [ C_k(n)+L_k(n)] =0,\nonumber
}
by \eqref{exp-Lk-bd} and \eqref{exp-Ck-bd}.
As a result,
\begin{equation}
\label{connfin}
\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k)
= \left(\dfrac{d-2p_2}{d}\right)^{1/2}\exp{\left(-\dfrac{\rho_1^2}{2(d-2p_2)}\right)}.
\end{equation}
From \eqref{connfin}, we obtain \eqref{conn} using Theorem \ref{geq3}.
\medskip
We next investigate the boundary cases in Remark \ref{rem-bc}. The result in \eqref{bc-1} follows in an identical way as in the proof of Theorem \ref{main}.
For the result for $d=\infty$ in \eqref{bc-2}, we notice that if $\mathbb{E} [D_n] \to \infty$, then, for all $k \geq 1$,
\begin{equation}
\lim_{n\to \infty} \dfrac{2n_2}{\ell_n-2k} =0,
\end{equation}
so that $\sum_{k\geq 3} L_k(n) +\sum_{k\geq 1} C_k(n)\overset{\sss\prob}{\rightarrow} 0$ by \eqref{exp-Lk-bd} and \eqref{exp-Ck-bd} and the Markov inequality.
Moreover,
\begin{equation}
\P (L_2 (n) = 0)=\prod_{i=1}^{n_1} \dfrac{\ell_n - n_1 -i +1}{\ell_n -2i +1} = \mathrm{e}^{-\frac{n_1^2}{2 \ell_n} (1+o(1))},
\end{equation}
so that
\begin{equation}
\lim_{n \to \infty} \P (\mathrm{CM}_n(\boldsymbol{d}) \text{ is connected})= \lim_{n \to \infty} \P(L_2(n) =0) = \lim_{n \to \infty} \mathrm{e}^{-\frac{n_1^2}{2 \ell_n}}.
\end{equation}
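The asymptotics for $\P(L_2(n)=0)$ follows by taking logarithms: writing each factor as $1-\frac{n_1-i}{\ell_n-2i+1}$ and using that $n_1=o(\ell_n)$,
\begin{equation}
\log \P (L_2 (n) = 0)=\sum_{i=1}^{n_1}\log\Big(1-\frac{n_1-i}{\ell_n-2i+1}\Big)
=-(1+o(1))\frac{1}{\ell_n}\sum_{i=1}^{n_1}(n_1-i)
=-(1+o(1))\frac{n_1(n_1-1)}{2\ell_n}.
\end{equation}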
\medskip
Further we notice that
\begin{equation}
n - |\mathscr{C}_{\rm max} | = \sum_{k=1}^\infty k (C_k(n)+L_k(n)) + \# \{ v \notin \mathscr{C}_{\rm max}: v \leftrightarrow [n]\setminus (\mathscr{N}_1 \cup \mathscr{N}_2)\}.
\end{equation}
From Proposition \ref{out} we know that $\mathbb{E}[\# \{ v \notin \mathscr{C}_{\rm max}: v \leftrightarrow [n]\setminus (\mathscr{N}_1 \cup \mathscr{N}_2)\}]\to 0$, so that
\begin{equation}
\begin{split}
n - |\mathscr{C}_{\rm max} |&= \sum_{k\geq 1} k (C_k(n)+L_k(n))+o_{\sss \prob}(1).
\end{split}
\end{equation}
By \eqref{exp-Lk-bd} and \eqref{exp-Ck-bd} and dominated convergence, we obtain
\begin{equation}
n - |\mathscr{C}_{\rm max} |\overset{d}{\rightarrow} \sum_{k=1}^\infty k (C_k+L_k),
\end{equation}
which completes the proof of \eqref{dist}.
Since we have shown convergence of all moments, we also obtain
\eqan{
\lim_{n \to \infty} \mathbb{E} [n - |\mathscr{C}_{\rm max} |]
&= \lim_{n \to \infty} \sum_{k=1}^\infty k \mathbb{E}[ C_k(n)+L_k(n)]\\
&\quad=\lim_{j \to \infty} \sum_{k=1}^j k \dfrac{(2p_2)^k}{2kd^k}
+\sum_{k=2}^j k \dfrac{\rho_1^2(2p_2)^{k-2}}{2d^{k-1}}
=\dfrac{\rho_1^2(d-p_2)}{(d-2p_2)^2}+\dfrac{p_2}{d-2p_2},\nonumber
}
as required.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainsim}] If we condition on simplicity, then we already have that $C_1(n)=C_2(n)=0$. Therefore, we find using the same method
as in the previous proof that
\eqan{
&\lim_{n \to \infty} \P (C_k(n)=L_k(n)=0 \ \forall k\mid \mathrm{CM}_n(\boldsymbol{d}) \text{ is simple}) \\
&\qquad= \prod_{k=3}^{\infty} \P (C_k =0) \prod_{k=2}^{\infty} \P (L_k=0)
=\exp{\left(-\sum_{k=3}^{\infty} \dfrac{(2p_2)^k}{2kd^k}-\sum_{k=2}^{\infty} \dfrac{\rho_1^{2} (2p_2)^{k-2}}{2d^{k-1}} \right) }\nonumber\\
&\qquad=\left(\dfrac{d-2p_2}{d}\right)^{1/2}\exp{\left(- \dfrac{\rho_1^2}{2(d-2p_2)} +\dfrac{p_2^2+d p_2}{d^2} \right) },\nonumber
}
from which we obtain \eqref{conns} thanks to Theorem \ref{geq3}.
\medskip
We recall that $\mathscr{N}_n(\boldsymbol{d})$ denotes the number of simple graphs with degree distribution $\boldsymbol{d}$. We know that
\begin{equation}
\mathscr{N}_n(\boldsymbol{d})=\exp\left\lbrace -\dfrac{\nu}{2}- \dfrac{\nu^2}{4} \right\rbrace \dfrac{(\ell_n-1)!!}{\prod_{i \in [n]}d_i !}(1+o(1))
\end{equation}
Since $\mathrm{CM}_n(\boldsymbol{d})$ conditioned on being simple has the uniform distribution over all possible simple graphs with degree sequence $\boldsymbol{d}$,
\begin{equation}
\mathscr{N}^C_n(\boldsymbol{d})=\mathscr{N}_n(\boldsymbol{d}) \P(\mathrm{CM}_n(\boldsymbol{d}) \text{ is connected}
\mid \mathrm{CM}_n(\boldsymbol{d}) \text{ is simple}),
\end{equation}
which yields the claim.
\end{proof}
\section*{Acknowledgments}
The work of LF and RvdH is supported by the Netherlands Organisation for Scientific Research (NWO) through the Gravitation {\sc Networks} grant 024.002.003. The work of RvdH is also supported by the Netherlands Organisation for Scientific Research (NWO) through VICI grant 639.033.806.
\begin{small}
\bibliographystyle{abbrv}
\section{Introduction}
Optimization on manifolds, or Riemannian optimization, is a method for solving problems of the form
\begin{equation*}
\min_{x\in \mathcal{M}} f(x)
\end{equation*}
where $f\colon \mathcal{M}\to\mathbb{R}$ is a (cost) function and the search space
$\mathcal{M}$ is smooth, in the sense that it
admits the structure of a differentiable manifold. Although the
definition of differentiable manifold is technical and abstract, many
familiar sets satisfy this definition and are therefore compatible with
the methods of optimization on manifolds. Examples include the \emph{sphere}
(the set of points with unit Euclidean norm) in $\mathbb{R}^n$, the set of
\emph{positive definite matrices}, the set of \emph{orthogonal matrices}
as well as the set of $p$-dimensional subspaces of $\mathbb{R}^n$ with
$p < n$, also known as the \emph{Grassmann} manifold.
To perform optimization, the function $f$ needs to be defined for points on the
manifold $\mathcal{M}$. Elements of $\mathcal{M}$ are often represented by
elements of $\mathbb{R}^n$ or $\mathbb{R}^{m\times n}$, and $f$ is often well defined
on some or all of this \enquote{ambient} Euclidean space. If $f$ is also
differentiable, it makes sense for
an optimization algorithm to use the derivatives of $f$ and adapt them to the
manifold setting in order to iteratively refine solutions based on curvature
information. This is one of the key aspects of Manopt \citep{manopt}, which
allows the user to pass a function's gradient and Hessian to state of the art
solvers which exploit this information to optimize over the manifold
$\mathcal{M}$. However, working out and implementing gradients and higher
order derivatives is a laborious and error prone task, particularly when the
objective function acts on matrices or higher rank tensors. Manopt's state of
the art Riemannian Trust Regions solver, described in \cite{absil2007},
requires second order directional derivatives (or a
numerical approximation thereof), which are particularly challenging to work out
for the average user, and more error prone and tedious even for an experienced
mathematician.
It is these difficulties which we seek to address with this toolbox.
\textsc{Pymanopt}\xspace supports a variety of modern Python libraries for
automated differentiation of cost functions acting on vectors, matrices or
higher rank tensors. Combining optimization on manifolds and
automated differentiation enables a convenient workflow for rapid
prototyping that was previously unavailable to practitioners. All that is
required of the user is to instantiate a manifold, define a cost function, and
choose one of \textsc{Pymanopt}\xspace's solvers. This means that the Riemannian Trust
Regions solver in \textsc{Pymanopt}\xspace is just as easy to use as one of the
derivative-free or first order methods.
\section{The Potential of Optimization on Manifolds and \textsc{Pymanopt}\xspace Use Cases}
Much of the theory of how to adapt Euclidean optimization algorithms to
(matrix) manifolds can be found in \cite{smith1994, edelman1998, absil2008}.
The approach of optimization on manifolds is superior to performing free (Euclidean)
optimization and projecting the parameters back onto the
search space after each iteration (as in the projected gradient
descent method), and has been shown to outperform standard algorithms
for a number of problems.
\cite{hosseini2015} demonstrate this advantage for a well-known problem in
machine learning, namely inferring the maximum likelihood parameters of a
mixture of Gaussian (MoG) model. Their alternative to the
traditional expectation maximization (EM) algorithm uses
optimization over a product manifold of positive definite (covariance)
matrices. Rather than optimizing the likelihood function directly, they
optimize a reparameterized version which shares the same local optima. The
proposed method, which is on par with EM and shows less variability in running
times, is a striking example why we think a toolbox like \textsc{Pymanopt}\xspace, which
allows the user to readily experiment with and solve problems involving
optimization on manifolds, can
accelerate and pave the way for improved machine learning algorithms.\footnote{A quick example implementation for inferring MoG parameters
is available at
\href{https://pymanopt.github.io/MoG.html}{pymanopt.github.io/MoG.html}.}
Further successful applications of optimization on manifolds include matrix
completion tasks \citep{vandereycken2013, boumal2015},
robust PCA \citep{podosinnikova2014},
dimension reduction for
independent component analysis (ICA) \citep{theis2009}, kernel ICA
\citep{shen2007} and similarity learning \citep{shalit2012}.
Many more applications to machine learning and other fields exist. While a full
survey on the usefulness of these methods is well beyond the scope
of this manuscript, we highlight that at the time of writing, a search for the
term \enquote{manifold optimization} on the IEEE Xplore Digital Library lists
1065 results; the Manopt toolbox itself is referenced in 90 papers indexed by
Google Scholar.
\section{Implementation}
Our toolbox is written in Python and uses NumPy and SciPy for
computation and linear algebra operations. Currently \textsc{Pymanopt}\xspace is
compatible with cost functions defined using
Autograd \citep{autograd},
Theano \citep{team2016theano}
or
TensorFlow \citep{tensorflow2015-whitepaper}.
\textsc{Pymanopt}\xspace itself and all the required software is open source, with no
dependence on proprietary software.
To calculate derivatives, Theano uses symbolic differentiation, combined with
rule-based optimizations,
while both Autograd and TensorFlow use reverse-mode
automatic differentiation. For a
discussion of the distinctions between the two approaches and an overview of
automatic differentiation in the context of machine learning, we refer the
reader to \cite{baydin2015}.
Much of the structure of \textsc{Pymanopt}\xspace is based on that of the Manopt Matlab
toolbox. For this early release, we have implemented all of the solvers and a
number of the manifolds found in Manopt, and plan to implement more, based on
the needs of users. The codebase is structured in a modular way and thoroughly
commented to make extension to further solvers, manifolds, or backends for automated differentiation
as straightforward as possible. Both user and developer documentation are
available. The GitHub
repository at \href{https://github.com/pymanopt/pymanopt}{github.com/pymanopt/pymanopt}
offers a convenient way to ask for help or request features by raising an
issue, and contains guidelines for those wishing to contribute to the project.
\section{Usage: A Simple Instructive Example}
All automated differentiation in \textsc{Pymanopt}\xspace is performed behind the scenes so
that the amount of setup code required by the user is minimal. Usually only the
following steps are required:
\begin{enumerate}\itemsep0em
\item[(a)] Instantiation of a manifold $\mathcal{M}$
\item[(b)] Definition of a cost function $f\colon\mathcal{M}\to\mathbb{R}$
\item[(c)] Instantiation of a \textsc{Pymanopt}\xspace solver
\end{enumerate}
We briefly demonstrate the ease of use with a simple example.
Consider the problem of finding an $n\times n$ positive semi-definite (PSD)
matrix $S$ of rank $k<n$ that best approximates a given $n\times n$
(symmetric) matrix $A$, where closeness between $A$ and its low-rank
PSD approximation $S$ is measured by the following loss function
$$L_{\delta}(S,A) \triangleq \sum_{i=1}^n \sum_{j=1}^n H_\delta\left(s_{i,j}-a_{i,j}\right)$$
for some $\delta > 0$ and $H_\delta(x) \triangleq \sqrt{x^2+\delta^2}-\delta $ the pseudo-Huber loss function.
This loss function is robust against outliers as $H_\delta(x)$ approximates $|x|-\delta$ for large values of $x$ while being approximately quadratic for small values of $x$ \citep{huber1964robust}.
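Concretely, Taylor expanding the square root makes the two regimes explicit:
$$H_\delta(x)=\delta\Big(\sqrt{1+x^2/\delta^2}-1\Big)=\frac{x^2}{2\delta}+O\big(x^4/\delta^3\big)\ \text{ for } |x|\ll\delta, \qquad H_\delta(x)=|x|-\delta+O\big(\delta^2/|x|\big)\ \text{ for } |x|\gg\delta,$$
so $\delta$ controls the transition between the quadratic regime and the robust, absolute-value-like regime.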
This can be formulated as an optimization problem on the manifold of PSD matrices:
$$\min_{S\in\mathcal{PSD}_k^n} L_{\delta}(S,A)$$
where $\mathcal{PSD}_k^n\triangleq\{M\in\mathbb{R}^{n\times n}:M\succeq 0,
\operatorname{rank}(M)=k\}$. This task is easily solved using \textsc{Pymanopt}\xspace:
\pythonexternal{example.py}
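Since the listing above is included from an external file, we also sketch, for the reader's convenience, what such a script could look like. The snippet below is an illustrative outline rather than a verbatim copy of \texttt{example.py}; it assumes the fixed-rank PSD manifold is exposed as \texttt{PSDFixedRank} (representing a point $S=YY^T$ by its factor $Y\in\mathbb{R}^{n\times k}$), uses the Autograd backend for the cost, and picks arbitrary values for $n$, $k$ and $\delta$:
{\small
\begin{verbatim}
import autograd.numpy as np
from pymanopt import Problem
from pymanopt.manifolds import PSDFixedRank
from pymanopt.solvers import TrustRegions
n, k, delta = 100, 5, 0.1
A = np.random.randn(n, n)
A = 0.5 * (A + A.T) # symmetric matrix to be approximated
# (a) instantiate the manifold of rank-k PSD matrices S = Y Y^T
manifold = PSDFixedRank(n, k)
# (b) define the pseudo-Huber cost; Autograd provides the derivatives
def cost(Y):
S = np.dot(Y, Y.T)
return np.sum(np.sqrt((S - A) ** 2 + delta ** 2) - delta)
# (c) set up the problem and run the Riemannian Trust Regions solver
problem = Problem(manifold=manifold, cost=cost)
solver = TrustRegions()
Y_opt = solver.solve(problem)
S_opt = np.dot(Y_opt, Y_opt.T) # the low-rank PSD approximation of A
\end{verbatim}
}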
The examples folder within the \textsc{Pymanopt}\xspace toolbox holds further instructive examples, such as performing inference in mixture of Gaussian models using optimization on manifolds instead of the expectation maximization algorithm. Also see the examples section on \href{http://pymanopt.github.io}{pymanopt.github.io}.
\section{Conclusion}
\textsc{Pymanopt}\xspace enables the user to experiment with different state
of the art solvers for optimization problems on manifolds, like the Riemannian Trust Regions
solver, without any extra effort. Experimenting with different cost functions,
for example by changing the pseudo-Huber loss $L_{\delta}(S,A)$ in the code above to the Frobenius norm $||S-A||_F$, a $p$-norm $||S-A||_p$, or some more complex function, requires just a small change in the definition of the cost
function. For problems of greater complexity, \textsc{Pymanopt}\xspace offers a significant
advantage over toolboxes that require manual differentiation by enabling users
to run a series of related experiments without returning
to pen and paper each time to work out derivatives. Gradients and Hessians only
need to be derived if they are required for other analysis of a problem. We
believe that these advantages, coupled with the potential for extending
\textsc{Pymanopt}\xspace to large-scale applications using TensorFlow, could lead to
significant progress in applications of optimization on manifolds.
\section*{Acknowledgments}
We would like to thank the developers of the Manopt Matlab toolbox, in
particular Nicolas Boumal and Pierre-Antoine Absil, for developing Manopt,
and for the generous help and advice they have given. We would also like
to thank Heiko Strathmann for his thoughtful advice as well as the anonymous reviewers for their constructive feedback and idea for a more suitable application example.
\newpage
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
By submitting a manuscript to 3DV, the authors assert that it has not been
previously published in substantially similar form. Furthermore, no paper
which contains significant overlap with the contributions of this paper
either has been or will be submitted during the 3DV 2022 review period to
{\bf either a journal} or any conference or any
workshop. {\bf Papers violating this condition will be rejected}.
If there are papers that may appear to the reviewers
to violate this condition, then it is your responsibility to: (1)~cite
these papers (preserving anonymity as described in Section 1.6 below),
(2)~argue in the body of your paper why your 3DV paper is non-trivially
different from these concurrent submissions, and (3)~include anonymized
versions of those papers in the supplemental material.
\subsection{Paper length}
3DV papers should be no longer than 8 pages, excluding references.
The references section will not be included in the page count, and
there is no limit on the length of the references section. Overlength
papers will simply not be reviewed. This includes papers where the
margins and formatting are deemed to have been significantly altered
from those laid down by this style guide. Note that this \LaTeX\
guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no
provision for supervised revisions of manuscripts. The reviewing
process cannot determine the suitability of the paper for presentation
in eight pages if it is reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\threedvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $097.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith, it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors12} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2022 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors12b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the 3DV70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
FAQ: Are acknowledgements OK? No. Leave them for the final copy.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be
kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches
(22.54 cm) high.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors12}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Color is valuable, and will be visible to readers of the electronic copy.
However ensure that, when printed on a monochrome printer, no important
information is lost by the conversion to grayscale.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
\section*{Acknowledgements}
Research presented here has been supported by the UCL Centre for Doctoral Training in Foundational AI under UKRI grant number EP/S021566/1.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
Reconstructing temporally consistent shape deformations from visual inputs remains a challenging open problem in computer vision with many applications in AR/VR and content creation. While model-agnostic approaches that exploit local shape smoothness priors~\cite{Newcombe15} have been extremely successful, the remarkable recent advances in machine-learning-driven capture of domain-specific 3D parametric models, such as those for human bodies~\cite{SMPL15,Scape05,Joo15}, hands~\cite{MANO17}, faces~\cite{PaysanKARV09, FLAME17, Wang19} or animals~\cite{ZuffiKJB16}, have made them an attractive alternative. However, parametric models suffer from two important drawbacks. First, since they do not exploit local geometry priors, they often fail to capture local geometry details and have a tendency to over-regularize. Secondly, their construction typically requires manual intervention to aid alignment, or fully dense annotations across all samples to provide dense correspondences at training time.
Neural Parametric Models (NPMs)~\cite{Palafox21} were recently proposed to tackle some of the challenges faced by traditional parametric models by leveraging deep neural networks and implicit functions to learn a disentangled representation of shape and pose. NPMs are learnt from data alone without the need for any class specific knowledge and, once trained, can be fit to new observations via test-time optimization. While NPMs offer an appealing alternative to traditional 3D parametric models, since they only require the same identity to be seen in different poses (including a canonical pose) dropping the need for registration across different identities, they still require known dense correspondences during training between different poses of the same identity and their canonical pose. Moreover, NPMs do not learn any local geometry regularity priors given that flow predictions are conditioned on global shape and pose codes and local geometry priors are not exploited.
To tackle both limitations we propose Geometric Neural Parametric Models (GNPMs). We represent surfaces as point clouds and exploit the ability of dynamic graph neural network architectures~\cite{YueWang18} to explicitly take into account the local structure around deformed points to learn local features and enforce local geometric regularity. In addition, we relax the need for any correspondences at all during training by disentangling 4D dynamics into shape and pose latent spaces via a cycle consistency loss.
Geometric priors are enforced by adapting the EdgeConv~\cite{YueWang18} graph convolution operators to design our model as an auto-decoder~\cite{DeepSDF}. Our intuition is that points tend to deform coherently with their neighbours~\cite{ARAP07}. Furthermore, local features learnt via graph convolutions can also help the model to learn semantic relations between non-neighbouring points, which is potentially useful in deformations such as dancing where distant body parts move synchronously.
Inspired by~\cite{Paschalidou2021CVPR}, we show that the learned features can additionally be used to segment shapes into semantically meaningful parts which remain consistent across identities, in a completely unsupervised way (see Figure~\ref{fig:deformed_clust}).
While it is known that geometry-based methods can be inefficient~\cite{Wu19}, we provide an efficient implementation that allows our model to run at a cost comparable to that of MLP-based models.
To learn without known correspondences we exploit the observation that transformations between posed and canonical spaces should be cycle consistent. This loss allows us to infer a dense deformation field, by jointly learning the weights of two networks that perform a bi-directional mapping between posed and canonical spaces and the respective shape and pose latent spaces (see Fig.~\ref{fig:self_cycle}). To fit the model to new unseen identities and/or deformations, we use test-time optimisation to minimise the cycle consistency loss to recover shape and pose latent vectors.
In summary, our contributions are:
\begin{itemize}
\item GNPM is a geometric-aware neural parametric model that learns to disentangle pose and shape exploiting the local structure of the data via edge convolutions.
\item GNPM learns shape and pose embeddings and long-term dense correspondences without the need for ground truth annotations during training.
\item GNPMs learn rich geometric features useful for downstream tasks such as unsupervised part segmentation.
\end{itemize}
\section{Related Work}
\textbf{Parametric Models:}
Parametric 3D models have become a prevalent tool to model deformable 3D shapes.
They learn to disentangle deformations into several factors of interest and have been applied to various domains such as human bodies~\cite{Scape05, Joo15, SMPL15, GHUMcvpr20}, hands~\cite{MANO17}, faces~\cite{PaysanKARV09, FLAME17, Wang19} and animals~\cite{ZuffiKJB16}.
Despite their success, they struggle to capture fine-grained details like wrinkles or to model clothes. Also, their construction can be tedious as it often requires domain knowledge or manual tuning. Neural based methods~\cite{Palafox21} offer a compelling alternative to learn directly from data without the need for manual tuning or domain-specific knowledge.
\textbf{Supervised Neural Deformation Models:}
Building on 3D OccNet~\cite{Mescheder18}, OFlow~\cite{OccFlowNiemeyer19ICCV} learns a continuous spatio-temporal representation of 4D dynamics that assigns motion vectors to every location in space-time. However, it degrades when capturing long sequences.
Inspired by Dynamic Fusion~\cite{Newcombe15}, Bozic \emph{et al.}~\cite{bozic2021neuraldg} learn a globally consistent deformation graph while learning dense surface details via local MLPs. However, it is limited to be sequence-specific and cannot be used for shape or pose transfer.
Palafox \emph{et al.}~\cite{Palafox21} recently introduced Neural Parametric Models (NPMs) leveraging the representation power of implicit functions to disentangle latent spaces of shape and pose. However, NPMs disregard the local geometric structure of 3D shapes and require dense correspondences for training.
While our geometric model also learns to disentangle between shape and pose, unlike~\cite{Palafox21} we take into account the geometric structure and can learn long term correspondences in a self-supervised manner without the need for dense ground truth annotations.
\textbf{Self-Supervised Neural Deformation Models:}
LoopReg~\cite{Bhatnagar2020} was the first end-to-end learning framework to solve scan registration with a self-supervision loop. Backward and forward maps are learnt to predict correspondences between every input scan point and the model surface. SCANimate~\cite{Saito2021} also uses a self-supervision cycle to learn an implicit dense field of skinning weights to map surface points to a canonical pose. Although both~\cite{Bhatnagar2020, Saito2021} can model shape deformation in a semi-supervised or unsupervised way, they do not learn latent shape/deformation spaces and rely on SMPL as the underlying body model.
In contrast, our model, GNPM, learns disentangled shape and deformation latent spaces. More recently, Neuromorph~\cite{Eisenberger21} jointly solves shape interpolation and dense correspondences in an unsupervised way using edge convolutions~\cite{YueWang18} and a single forward pass. Although Neuromorph can estimate dense correspondences between shapes of different categories in different poses, the latent interpolation is modelled via time, limiting its ability to learn a parameterised representation.
\section{Method}
\begin{figure}[t]
\centering
\includegraphics[width = 1\linewidth]{figs/gnpm_self_cycle_v4.pdf}
\caption{\textbf{Self-supervision cycle.}
Our model is trained to map an input scan to its canonical t-pose and then back to the deformed shape, imposing cycle consistency. In this way, we learn dense correspondences without any ground truth annotations. In addition, we disentangle shape and pose by learning two embeddings which are jointly optimised with the weights of the graph neural networks.}
\label{fig:self_cycle}
\end{figure}
We introduce Geometric Neural Parametric Models (GNPM), a geometric-aware model that disentangles 4D dynamics into latent spaces of shape identity and pose and can be learnt without the need for correspondences across shapes.
We choose point clouds as our shape representation given their lightweight nature and that they naturally match the raw output of commodity depth cameras. Moreover, when combined with the EdgeConv architecture~\cite{YueWang18}, local geometric structure can be exploited to capture both local and global shape properties, unlike the more recently popular implicit representations~\cite{Palafox21}.
We adopt EdgeConv layers (see~\ref{back:edge_conv}) and modify the architecture to an auto-decoder~\cite{DeepSDF}.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs/gnpm_sparse_dense_multi_3.pdf}
\caption{Results of time-consistent dense correspondence estimation for 3D point-cloud inputs from the CAPE~\cite{CAPE} (top) and DFAUST~\cite{dfaust2017} (bottom) datasets. We show comparisons with the ground truth dense correspondence maps.}
\label{fig:sparse_dense}
\end{figure*}
Given a dataset of multiple shape identities in various poses, but without any registration within or across identities, we jointly learn: (\emph{i}) the weights of a network that predicts globally consistent dense point correspondences, and (\emph{ii}) shape and pose latent embeddings. Instead of requiring per-sequence dense correspondences as supervision signal~\cite{Palafox21},
we use a self-supervision cycle, in similar spirit to ~\cite{Bhatnagar2020, Saito2021}, to learn the shape/pose latent spaces without any need for registration during training.
Given an input posed shape, the forward network learns to deform it to its canonical t-pose. The backward network then learns to deform the t-posed shape back to the input posed shape. Together, forward and backward networks form an identity mapping, and the distance between predicted and input shapes is used as the supervision signal. Our only requirement on the training dataset is that each shape identity should be observed in a canonical pose (e.g., T-pose), which is satisfied by a variety of datasets~\cite{Deform4D,CAPE,MANO17}.
At test time, given new observations of an unseen identity, the weights of the network are frozen and shape and pose embedding vectors are jointly optimised by minimizing the same cycle-consistency loss.
\subsection{Shape/Pose Network and Embeddings}
To circumvent the lack of correspondences we use a self-supervision cycle that conducts a bi-directional mapping between input and canonical poses. Our model is composed of two networks, both conditioned on the shape and pose latents (see Figure~\ref{fig:self_cycle}). Given an input posed shape, the forward network predicts a dense deformation field $\delta \tilde{x}$ that maps it onto the canonical t-pose, while the backward network learns to deform the canonical shape back to the input pose.
Figure~\ref{fig:self_cycle} shows an overview of the self-supervision cycle. Forward and backward networks are implemented as auto-decoders, with parameters $\theta_a$ and $\theta_b$, and EdgeConv layers that predict dense deformation fields.
\noindent \textbf{Forward Network:}
The forward network learns a geometric-aware deformation field that deforms the input posed shape to its canonical t-pose shape. More formally, the input to the network is a set of $N$ observed points on the surface of the shape associated with the $f^{th}$ frame and $c^{th}$ identity, $X^{cf} = \{x_i^{cf} \}^{N}_{i=1} \in \mathbb{R}^3$, to which we apply positional encoding $\gamma(\cdot)$~\cite{NeRF}. The forward network forms a dynamic graph $\mathcal{G}^l$ via k-NN, conditioned on a learnable per-frame $D_p$-dimensional latent pose code $p_f$ and a per-identity $D_s$-dimensional latent shape code $s_c$ that is shared across all frames of that identity.
A series of EdgeConv layers is applied, each using the output features of the previous layer to re-estimate the graph $\mathcal{G}^l$ (see Section~\ref{back:edge_conv}). Per-point pooling is performed after the last layer and a shallow MLP is applied to predict dense point-wise deformations $\{\delta x^{cf}_i \}^{N}_{i=1} \in \mathbb{R}^3$.
\begin{align}
f^{a}_{\theta_a}: \mathbb{R}^{N \times 51} \times \nonumber \mathbb{R}^{D_s} \times \mathbb{R}^{D_p} &\rightarrow \mathbb{R}^{N \times 3} \\
f^{a}_{\theta_a}(\gamma(x_i), s_c, p_f) &= \delta \tilde{x}.
\label{eq:forward}
\end{align}
\noindent\textbf{Backward Network:}
The backward network learns the inverse dense deformation field that deforms the canonical t-pose shape to the posed shape: $\{\delta y^{cf}_i \}^{N}_{i=1} \in \mathbb{R}^3$. While the architecture is equivalent to the forward network, the parameters are not shared.
\begin{align}
f^{b}_{\theta_b}: \mathbb{R}^{N \times 51} \times \nonumber \mathbb{R}^{D_s} \times \mathbb{R}^{D_p} &\rightarrow \mathbb{R}^{N \times 3} \\
f^{b}_{\theta_b}(\gamma(\tilde{y_i}), s_c, p_f) &= \delta \tilde{y}
\label{eq:backward}
\end{align}
\noindent \textbf{Losses:} Combining (\ref{eq:forward}) and (\ref{eq:backward}) we can close the cycle and have a self-supervision loop.
We define the loss $\mathcal{L}_{loop}$ as the $l_1$ distance between the shape predicted by the cycle and the input shape.
\begin{equation} \label{eq:cycle}
\begin{split}
f^{a}_{\theta_a}(\gamma(x_i), s_c, p_f) = \delta \tilde{x_i} \\
\tilde{y_i} = x_i + \delta \tilde{x_i}\\
f^{b}_{\theta_b}(\gamma(\tilde{y_i}), s_c, p_f) = \delta \tilde{y_i} \\
\tilde{x_i} = \tilde{y_i} + \delta \tilde{y_i}\\
\end{split}
\end{equation}
\begin{equation}
\mathcal{L}_{loop}(\tilde{x}^{cf}_{i}, x^{cf}_{i}) = \parallel \tilde{x}^{cf}_{i} - x^{cf}_{i} \parallel
\end{equation}
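For illustration, a minimal PyTorch-style sketch of one pass of this cycle is given below; \texttt{forward\_net}, \texttt{backward\_net} and \texttt{gamma} are placeholders for the two auto-decoders and the positional encoding described above (the exact layer implementations and batching are omitted), so this is a sketch rather than the exact training code.
\begin{verbatim}
import torch

def cycle_loss(forward_net, backward_net, gamma, x, s_c, p_f):
    """One pass of the self-supervision cycle.

    x   : (N, 3) input posed points of identity c, frame f
    s_c : shape code, p_f : pose code (both learnable embeddings)
    """
    # forward network: posed shape -> canonical t-pose
    delta_x = forward_net(gamma(x), s_c, p_f)          # (N, 3)
    y_tilde = x + delta_x                              # predicted t-pose points
    # backward network: canonical t-pose -> posed shape
    delta_y = backward_net(gamma(y_tilde), s_c, p_f)   # (N, 3)
    x_tilde = y_tilde + delta_y                        # reconstructed posed points
    # cycle-consistency loss L_loop: l1 distance to the input points
    loss_loop = (x_tilde - x).abs().sum(dim=-1).mean()
    return loss_loop, y_tilde
\end{verbatim}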
To prevent the network from learning an identity mapping by setting $\delta \tilde{x}$ and $\delta \tilde{y}$ to zero, we use an additional symmetric ICP loss that minimizes the distance between the ground truth t-pose shape points and the t-pose predicted by the forward network
\begin{equation}
\mathcal{L}_{icp}(\tilde{y}_i^{cf}) = \parallel \tilde{y}_i^{cf} - NN_{\mathcal{R}}(\tilde{y}_i^{cf}) \parallel^2_2
\end{equation}
where $NN_{\mathcal{R}}(\cdot)$ is a function that queries the nearest neighbour of a 3D point in the set $\mathcal{R}$ of input points.
We found this loss important to prevent correspondence flipping in the presence of extreme motion (see Section~\ref{subsec:ablation}).
To aid temporal smoothness, we constrain the current and next pose latents to be close by adding an $l_1$ loss $\mathcal{L}_{lt}$ between them.
We also found it very useful to enforce temporal regularization on the outputs of both networks, which we call the spatio-temporal loss $\mathcal{L}_{st}$.
\begin{equation}
\mathcal{L}_{lt}(p_f, p_{f+1}) = \parallel p_f - p_{f+1} \parallel \\
\end{equation}
\noindent This regularization is inspired by the intuition that points in consecutive frames deform coherently~\cite{ARAP07}.
Since we do not have access to dense correspondences, for each input point we query its nearest neighbour among the input points of the next frame.
We then apply an $l_1$ loss between the current network prediction and the prediction for the nearest point in the next frame.
\begin{equation}
\begin{split}
\mathcal{L}^{a}_{st}(\tilde{x}_i^{cf}) = \parallel \delta \tilde{x}_i^{cf} - \delta \tilde{x}_i^{c(f+1)} \parallel \\
\delta \tilde{x}_i^{c(f+1)} = f^{a}_{\theta_a}(NN_{\mathcal{Q}^{a}}({x}_i^{c(f+1)})) \\
\mathcal{L}^{b}_{st}(\tilde{y}_i^{cf}) = \parallel \delta \tilde{y}_i^{cf} - \delta \tilde{y}_i^{c(f+1)} \parallel \\
\delta \tilde{y}_i^{c(f+1)} = f^{b}_{\theta_b}(NN_{\mathcal{Q}^{b}}({y}_i^{c(f+1)})) \\
\end{split}
\label{eq:temp}
\end{equation}
where $NN_{\mathcal{Q}}(\cdot)$ is a function that queries the nearest neighbour of a 3D point in the set $\mathcal{Q}$ of input points.
Our final temporal regularization loss is:
\begin{equation}
\mathcal{L}_{temp} = \mathcal{L}_{lt} + \mathcal{L}^{a}_{st} + \mathcal{L}^{b}_{st}
\end{equation}
Finally, we minimize this objective over all possible $F$ deformation fields across all shape identities $C$ with respect to the individual pose and shape codes $\{p_f\}^F_{f=1}$ and $\{s_c\}^C_{c=1}$, respectively, and the forward and backward network weights $\theta_a, \theta_b$.
To enforce a compact pose and shape manifold we also regularize both codes with $\sigma_p$ and $\sigma_s$.
\begin{multline}
\argmin_{\theta_a, \theta_b, \{s_c\}_{c=1}^{C}, \{p_f\}_{f=1}^{F}} \overset{C}{\underset{\substack{c=1}}{\sum}} \; \overset{F}{\underset{\substack{f=1}}{\sum}} \; \overset{N}{\underset{\substack{i=1}}{\sum}}\; \mathcal{L}_{loop} + \lambda_{icp} \mathcal{L}_{icp} \\
+ \lambda_{temp} \mathcal{L}_{temp} + \frac{\| p_f \|^2_2}{\sigma^2_p} + \frac{\| s_c \|^2_2}{\sigma^2_s}
\end{multline}
\subsubsection{EdgeConv Layers}
\label{back:edge_conv}
In contrast to previous neural parametric models~\cite{Palafox21} that treat each data point independently, we exploit the power of edge convolutions in DGCNNs~\cite{YueWang18} to learn local geometric structure in deformed 3D point clouds. EdgeConv~\cite{YueWang18} alleviates the lack of topology in point clouds by proposing a differentiable neural network module suitable for CNN-based high-level tasks. Local geometric structure is exploited by constructing a local neighbourhood graph and applying convolution-like operations on the edges in a dynamic setting, where the connectivity is learned and changes throughout the training.
The learned feature space not only captures semantic relations within a local neighbourhood, but also between distant points, which is particularly useful in the context of deformations where different body parts might move synchronously.
Given a set of $N$ uniformly sampled 3D surface points, denoted by $X = \{x_1, \dots, x_N\} \subseteq \mathbb{R}^{\textsc{D}}$, where $\textsc{D}$ is the point cloud dimensionality, a directed graph $G = (V, E)$ of vertices $V$ and edges $E$ is first built to represent the local point cloud structure via k-nearest neighbours (k-NN). In our setting, input points are represented by their 3D coordinates $x_i = (x_i ,y_i , z_i)$, but ${\textsc{D}}$ more generally refers to the dimensionality at each layer. Edge features are then defined as $e_{ij} = h_{\Theta}(x_i , x_j )$, where $h_{\Theta} : \mathbb{R}^{\textsc{D}} \times \mathbb{R}^{\textsc{D}} \xrightarrow[]{} \mathbb{R}^{{\textsc{D}}'}$ is a multi-layer perceptron (MLP) with learnable parameters $\Theta$. Channel-wise symmetric aggregation $\Box$ (e.g., $\sum$ or $\max$) is then applied on the edge features associated with each vertex $i$, with its output given by
\begin{equation}
x'_i = \underset{j:(i, j) \in \mathcal{E}}{\Box} h_{\Theta}(x_i, x_j).
\end{equation}
Unlike GCNNs that work on a fixed input graph, EdgeConv recomputes the graph neighbourhood structure at each layer. In practice, a pairwise distance matrix is computed in feature space, and the closest $k$ points are selected for each vertex point at each layer.
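A minimal sketch of a single EdgeConv layer with a dynamically recomputed k-NN graph is given below. For clarity the graph is built with a dense pairwise distance matrix (the symbolic KeOps version used in practice is described later), the commonly adopted asymmetric edge function $h_{\Theta}(x_i, x_j - x_i)$ is used, and the layer sizes and names are illustrative rather than our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

def knn_graph(x, k):
    """Indices of the k nearest neighbours of each point. x: (N, D)."""
    dist = torch.cdist(x, x)                               # (N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the point itself

class EdgeConv(nn.Module):
    def __init__(self, d_in, d_out, k=10):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.LeakyReLU(0.2))

    def forward(self, x):
        """x: (N, d_in) point features; the graph is rebuilt in feature space."""
        idx = knn_graph(x, self.k)                         # (N, k)
        x_j = x[idx]                                       # neighbour features (N, k, d_in)
        x_i = x.unsqueeze(1).expand_as(x_j)                # centre features    (N, k, d_in)
        e_ij = self.mlp(torch.cat([x_i, x_j - x_i], -1))   # edge features      (N, k, d_out)
        return e_ij.max(dim=1).values                      # channel-wise max aggregation
\end{verbatim}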
\begin{figure*}[t]
\centering
\includegraphics[width = 0.85 \linewidth]{figs/gnpm_dense_mesh_2.pdf}
\caption{\textbf{Mesh Visualisation.}
We sample 300K points and use Poisson surface reconstruction~\cite{kazhdan06poisson} to obtain meshes from our model.
Left: the reconstructed point clouds. Right: the corresponding meshes obtained with Poisson reconstruction.
We also include a supervised GNPM version of our model in which we assume access to dense correspondences,
and show the result of evaluating NPM~\cite{Palafox21} on depth and complete 3D scans.
Both the supervised and self-supervised versions of our method recover good quality meshes.
}
\label{fig:gnpm_poisson_mesh}
\end{figure*}
\subsection{Test Optimization}
Once the network weights and shape and pose latent embeddings have been learnt, they can be used to fit a new sequence to the model by optimizing for the per-identity shape and per-frame pose codes that best explain the observations. We use the trained forward and backward networks and minimize the cycle-consistency training objective with respect to the shape and pose latent codes $\{s_c\}^C_{c=1}, \{p_f\}^F_{f=1}$, keeping the network weights fixed:
\begin{multline}
\argmin_{\{s_c\}_{c=1}^{C}, \{p_f\}_{f=1}^{F}} \overset{C}{\underset{\substack{c=1}}{\sum}} \; \overset{F}{\underset{\substack{f=1}}{\sum}} \; \overset{N}{\underset{\substack{i=1}}{\sum}}\; \mathcal{L}_{loop}
+ \frac{\| p_f \|^2_2}{\sigma^2_p} + \frac{\| s_c \|^2_2}{\sigma^2_s}
\end{multline}
The shape and pose codes are initialized to the mean of the respective embedding. When dealing with a sequence, the pose code for each frame can be initialized with the result for the previous frame.
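A sketch of this test-time fitting is shown below, reusing the \texttt{cycle\_loss} function sketched earlier; the network weights are frozen and only the latent codes receive gradients. The optimizer choice, iteration count, and regularization weights are illustrative defaults taken from the values reported in this paper, not a definitive implementation.
\begin{verbatim}
import torch

def fit_sequence(forward_net, backward_net, gamma, frames, s_mean, p_mean,
                 iters=200, lr=1e-3, sigma_s=1e-4, sigma_p=1e-4):
    """Optimize one shape code and per-frame pose codes for an unseen sequence."""
    for p in list(forward_net.parameters()) + list(backward_net.parameters()):
        p.requires_grad_(False)                       # freeze network weights
    s_c = s_mean.clone().requires_grad_(True)         # init at embedding means
    p_fs = [p_mean.clone().requires_grad_(True) for _ in frames]
    opt = torch.optim.Adam([s_c] + p_fs, lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = 0.0
        for x, p_f in zip(frames, p_fs):
            l_loop, _ = cycle_loss(forward_net, backward_net, gamma, x, s_c, p_f)
            # cycle loss plus the code regularizers of the test-time objective
            loss = loss + l_loop \
                   + (p_f ** 2).sum() / sigma_p ** 2 \
                   + (s_c ** 2).sum() / sigma_s ** 2
        loss.backward()
        opt.step()
    return s_c, p_fs
\end{verbatim}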
\subsection{Implementation Details}
\noindent \textbf{Forward and Backward Networks:}
For both forward and backward networks, we stack three EdgeConv layers with two MLPs to compute edge features. LeakyReLU with $0.2$ negative slope was used as the activation function and $max$ as the channel-wise aggregation function for all layers. The graph connectivity is updated for each layer, based on the learned features in previous ones. Features from each EdgeConv are concatenated and passed through the MLPs for the final prediction. The dimensionality of the EdgeConv features is set to ${\textsc{D}}=128$, except for the last layer where ${\textsc{D}}=256$. Shape and pose latent dimensionalities are set to $128$.
We use the Adam optimizer~\cite{AdamKingma2015} with learning rates of $1e-3$ for the shape/pose model and latent codes. In addition, we apply a learning rate scheduler with a decay factor of 0.5 every 30 epochs.
We anneal the ICP loss weight $\lambda_{icp}$ with a cosine schedule, starting from $1e-1$ and decaying to a minimum of $1e-2$.
For the regularization losses we use a temporal loss weight of $\lambda_{temp}=5e-2$.
We use shape and pose code regularization weights $\sigma_s=\sigma_p=1e-4$.
Despite the symbolic k-NN implementation allowing more point samples, we sample 1024 points per batch from each shape as input to the model. The latent codes are initialized randomly from a normal distribution.
We use the positional encoding proposed in~\cite{NeRF} to encode query points with 8 frequency bands.
\begin{figure}[t!]
\centering
\includegraphics[width = 1 \linewidth]{figs/gnpm_mano.pdf}
\caption{Results of time-consistent dense correspondence estimation for 3D point-cloud inputs from the MANO~\cite{MANO17} dataset. We show ground truth dense correspondences for comparison.}
\label{fig:mano}
\end{figure}
\noindent \textbf{Efficient k-NN Implementation:}
We use the KeOps~\cite{KeyOps21, feydy2020fast} Python package to implement the k-NN algorithm symbolically. In the supplementary material we provide a comparison with a naive dense k-NN implementation and show a tenfold improvement in terms of training time.
\section{Experimental Evaluation}
\noindent \textbf{Datasets:}
We evaluate our model on real-world scans from the CAPE~\cite{CAPE} and D-FAUST~\cite{dfaust2017} datasets, which provide real clothed humans and their corresponding SMPL+D registration. We train on 31 different posed shapes from 35 distinct identities~\cite{CAPE} and test on a total of 4 test sequences from 4 unseen characters. We also learn a hand model from the MANO~\cite{MANO17} dataset.\\
\noindent\textbf{Evaluation metrics:}
We measure reconstruction and 4D tracking performance following the established per-frame evaluation protocol~\cite{Mescheder18, Palafox21}.
The $l_2$ Chamfer distance (C-$l_2$) offers a measure combining the accuracy and completeness of the reconstructed surface.
We also use End-Point Error (EPE), which measures keyframe-to-frame deformation $l_2$ distance between predicted and ground truth as in ~\cite{bozic2021neuraldg}. We visualise corresponding points with the same colour.
\begin{table*}[!h]
\begin{center}
\resizebox{0.7\textwidth}{!}{\begin{tabular}{lccccll}
\cline{1-5}
\textbf{Method} & Input &\textbf{EPE} $\downarrow$ & \textbf{C-$l_2$} $(\times 10^{-3})$ $\downarrow$ & \\ \cline{1-5}
NPM~\cite{Palafox21} & \emph{3D scan} &\textbf{0.231}& {0.019} & \\
GNPM (K 10) & \emph{3D scan} & {0.287}& \textbf{0.0122} & \\ \cline{1-5}
\end{tabular}}
\end{center}
\caption{Comparison with NPM~\cite{Palafox21} on real scans of CAPE~\cite{CAPE}.
Since GNPM (our approach) requires full scans as input, we also evaluated NPM~\cite{Palafox21} on complete input scans for a fair comparison. Note that, in contrast to our method, NPM~\cite{Palafox21} can take partial observations (depth maps) as input at test time. However, it requires known dense correspondences at training time, unlike our approach.}
\label{tab:gnpm}
\end{table*}
\subsection{Evaluation on CAPE Dataset}
\noindent \textbf{Reconstruction and unsupervised dense correspondences:}
We compare our model to the state of the art on the CAPE ~\cite{CAPE} dataset which provides ground truth dense correspondences.
We compare with NPM~\cite{Palafox21} (the most related approach to ours), although it requires dense correspondences at training time, unlike our approach.
Figure~\ref{fig:sparse_dense} shows how our method is capable of reconstructing the deformed shapes and estimating temporally consistent point correspondences accurately. Despite only using 1024 points at training time, we can densify the points during test time by sampling new points and combining the result to get a dense deformed version of each shape.
We use this property to further evaluate the quality of the deformation and use Poisson surface reconstruction~\cite{kazhdan06poisson} on 300K points to get surface reconstruction.
Figure~\ref{fig:gnpm_poisson_mesh} shows a visualisation of the meshes obtained after reconstructing with our method followed by Poisson reconstruction~\cite{kazhdan06poisson}. We note that Poisson surface reconstruction ~\cite{kazhdan06poisson} requires surface normals which we estimate. For fair comparison we sample 300K GT points and run them through Poisson reconstruction to obtain the GT meshes. We also visualise the results obtained by NPM~\cite{Palafox21} with two types of inputs: depth images and 3D scans.
As we can see our method can obtain reasonable meshes.
In addition, we trained a supervised version GNPM where GT dense correspondences were given at training time (similarly to NPM). For this model we only use the backward network and we learn to disentangle shape and pose latents with the same network. Figure~\ref{fig:gnpm_poisson_mesh} shows visualisations of the predicted dense point clouds and reconstructed meshes with this 'supervised' version of our model.
Table~\ref{tab:gnpm} shows a numerical evaluation against NPM~\cite{Palafox21} on the CAPE test set.
It is important to note that our approach, GNPM, can only be used with complete scans as input. For that reason, we also evaluated NPM~\cite{Palafox21} using complete input scans for fairness, even though this method can also take partial observations (depth images) as input.
Table~\ref{tab:gnpm} shows our method achieves better Chamfer distance (C-$l_2$) while having a comparable End-Point Error (EPE) to NPM~\cite{Palafox21}, which is trained with ground truth dense correspondences, while our method works without them.
GNPM can find the shape and pose latents more efficiently during test time. While NPM~\cite{Palafox21} report that optimizing over an input sequence of 100 frames takes approximately 4 hours on a GeForce RTX 3090, our approach takes 20 minutes on a GeForce RTX 3080 on a similar length sequence.
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcccll}
\cline{1-4}
\textbf{Method} & Input &\multicolumn{1}{l}{\textbf{EPE} $\downarrow$} & \multicolumn{1}{l}{\textbf{C-$l_2$} $(\times 10^{-3})$ $\downarrow$} & & \\ \cline{1-4}
ONet 4D~\cite{Mescheder18} & \emph{3D scan} & - & 0.028 & & \\
OFlow~\cite{OccFlowNiemeyer19ICCV} & \emph{3D scan} & - & 0.031 & & \\
GNPM (K 10) & \emph{3D scan} &\textbf{0.253}& \textbf{0.0094} & & \\ \cline{1-4}
\end{tabular}
\end{center}
\caption{Comparison with state-of-the-art methods on real scans of the DFAUST~\cite{dfaust2017} dataset. Since OFlow~\cite{OccFlowNiemeyer19ICCV} works only on sequences of up to 17 frames, we report the average over sub-sequences of such length.}
\label{tab:dfaus_comparison}
\end{table}
\subsection{Evaluation on DFAUST Dataset}
Table~\ref{tab:dfaus_comparison} shows a quantitative evaluation on the DFAUST dataset~\cite{dfaust2017}. All methods take 3D scans as input and our approach results in the lowest Chamfer distance error. We take the results for ONet~\cite{Mescheder18} and OFlow~\cite{OccFlowNiemeyer19ICCV_2} directly from the publications.
\subsection{Qualitative evaluation on MANO Dataset}
\textbf{Hand reconstruction.}
We demonstrate GNPM's ability to work on the MANO hand dataset~\cite{MANO17}.
Figure~\ref{fig:mano} shows our GNPM results showing its ability to model complex deformations of the hand and to establish temporally consistent and dense correspondences. Our comparison with ground truth correspondences shows that GNPM can infer high quality dense 4D temporal tracking, despite the challenging deformations shown by hands.
\subsection{Latent Applications}
\noindent\textbf{Shape and Pose Transfer:}
Disentangling shape and pose spaces allows us to transfer shape and/or pose between identities. Given an input identity in a specific pose, we can map the same pose to new identities. Alternatively, we can fix the identity and transfer pose latents. Examples of both are shown in Fig.~\ref{fig:sp_transfer}.
\subsection{Unsupervised 3D Part Segmentation}
In this section we explore the potential of the features learnt by the EdgeConv network. Figure~\ref{fig:deformed_clust}, shows the result of performing unsupervised clustering on the learnt features for each frame independently, which leads to consistent part segmentations across identities and poses.
We can observe that by grouping points according to their features we discover \textit{locally consistent} groups, to give a semantically meaningful part segmentation. We experimented with different cluster sizes as shown in Fig.~\ref{fig:deformed_clust}.
\subsection{Ablation Studies}
\label{subsec:ablation}
\noindent\textbf{Geometric Inductive Bias:}
To evaluate our architecture choices, we replace all EdgeConv layers with MLP-based forward and backward networks following the NPM~\cite{Palafox21} pose model architecture choices, and train the networks on the CAPE~\cite{CAPE} dataset.
The MLP-based self-supervised architecture infers the deformation field independently for each point without considering local neighbourhood structure.
As shown in Table~\ref{tab:edge_mlp}, we found that using EdgeConv layers is advantageous for learning deformations.
Figure~\ref{fig:edge_mlp} shows a qualitative evaluation.
The MLP-based architecture struggles with modelling arms and cannot learn a deformation field that fully reconstructs the shape.
The EdgeConv-based architecture, on the other hand, does not suffer from this problem. We conclude that this inductive bias is important for learning deformation fields without dense correspondences,
and that removing it results in degenerate performance.\\
\begin{table}[!h]
\begin{center}
\resizebox{0.52\textwidth}{!}{\begin{tabular}{lccll}
\cline{1-3}
\textbf{Method} & \multicolumn{1}{l}{\textbf{EPE} $\downarrow$} & \multicolumn{1}{l}{\textbf{C-$l_2$} $(\times 10^{-3})$ $\downarrow$} & & \\ \cline{1-3}
Self-supervision (MLP) & 0.515 & 0.0635 & & \\
Self-supervision (EdgeConv) &\textbf{0.287} & \textbf{0.0122} & & \\ \cline{1-3}
\end{tabular}}
\end{center}
\caption{Evaluation of the geometric inductive bias.
We follow the NPM~\cite{Palafox21} MLP architecture choices and replace all EdgeConv layers with MLPs.
Without considering the local neighbourhood around the points, the model fails to learn the deformation field.}
\label{tab:edge_mlp}
\end{table}
\noindent\textbf{ICP Loss:}
We found that for frames with extreme deformation, the forward network has some difficulty dealing with points from similar parts, like hands, that change sides with respect to the canonical t-pose (see the supplementary material).
Using an asymmetric ICP loss leads to flipping in the prediction of the forward network.
The backward network can handle this to some extent but as we are enforcing a geometric constraint, it fails to completely reverse points flipping resulting in a knot in the waist of the shape. Using a symmetric ICP loss where the nearest neighbours distance is computed from both source and target is important in learning extreme deformation without dense correspondences.\\
\section{Limitations}
While GNPM demonstrates potential for learning disentangled pose and shape spaces without dense correspondences, it still struggles with modelling fine details.
For instance, when modelling human deformation, the hands are often not captured in great detail.
A potential reason for this could be our uniform sampling approach: denser sampling in areas like hands and faces could lead to better estimates of detailed deformations. Finally, our method is restricted to full meshes at test time. Future work includes extending our method to deal with partial observations such as depth maps.
\section{Conclusion}
We have proposed Geometric Neural Parametric Models (GNPM), a learned parametric model that takes as input point-clouds of different identities performing complex motions and disentangles shape/identity and deformations by mapping each pose to its canonical t-pose configuration and learning dense time-consistent correspondences. In addition, our model learns disentangled shape and pose embeddings which can later be used for interpolation, editing or transfer.
In contrast to previous learnt neural parametric models such as NPMs~\cite{Palafox21}, our model does not require ground truth known correspondences at training time, which are costly to obtain. We exploit the concept of cycle-consistency to establish correspondences and show results on real-world scans from the CAPE~\cite{CAPE}, DFAUST~\cite{dfaust2017} and MANO~\cite{MANO17} datasets. Our network uses edge convolutions to extract features, which can be further exploited for downstream tasks such as unsupervised part segmentation. While our current approach cannot deal with partial scans from depth input images, we envisage a completion network as future work.
\section{Supervised GNPM}
As presented in the experiments section, we trained a GNPM with dense correspondence supervision.
We use only the backward network, similar to the NPM~\cite{Palafox21} pose network, to model a deformation field; unlike NPM~\cite{Palafox21}, however, we learn disentangled shape and pose codes with the same network.
We enforce disentanglement by constraining the shape code to be the same for each identity while having a different pose code for each frame.
\begin{align}
f^{a}_{\theta_a}: \mathbb{R}^{N \times 51} \times \nonumber \mathbb{R}^{D_s} \times \mathbb{R}^{D_p} &\rightarrow \mathbb{R}^{N \times 3} \\
f^{a}_{\theta_a}(\gamma(x_i), s_c, p_f) &= \delta \tilde{x}_i, \\
\tilde{y}_i &= x_i + \delta \tilde{x}_i.
\label{eq:sup_forward}
\end{align}
\subsection{Training}
As we have access to the dense correspondences we use $l_1$ loss between the predicted deformation and the ground truth points.
We also found it useful to add the temporal regularization loss on both the current and next pose latents and network outputs as defined in the main text.
\begin{equation}
\begin{split}
\mathcal{L}_{reco}(\tilde{y}^{cf}_{i}, y^{cf}_{i}) = \parallel \tilde{y}^{cf}_{i} - y^{cf}_{i} \parallel
\end{split}
\end{equation}
We minimize this objective over all possible $F$ deformation fields across all shape identities $C$ with respect to the individual pose and shape codes $\{p_f\}^F_{f=1}$ and $\{s_c\}^C_{c=1}$, respectively, and the network weights $\theta_a$.
To enforce a compact pose and shape manifold we also regularize both codes with $\sigma_p$ and $\sigma_s$.
\begin{multline}
\argmin_{\theta_a, \{s_c\}_{c=1}^{C}, \{p_f\}_{f=1}^{F}} \overset{C}{\underset{\substack{c=1}}{\sum}} \; \overset{F}{\underset{\substack{f=1}}{\sum}} \; \overset{N}{\underset{\substack{i=1}}{\sum}}\; \mathcal{L}_{reco}
+ \lambda_{temp} \mathcal{L}_{temp} \\
+ \frac{\| p_f \|^2_2}{\sigma^2_p} + \frac{\| s_c \|^2_2}{\sigma^2_s}
\end{multline}
\subsection{Test optimization}
Once the network weights and shape and pose latent embeddings have been learnt, they can be used to fit a new sequence to the model by optimizing for the per-identity shape and per-frame pose codes that best explain the observations.
We minimize the Chamfer distance $\mathcal{L}_{cf}$ and the symmetric ICP loss $\mathcal{L}_{icp}$ as the training objective with respect to the shape and pose latent codes $\{s_c\}^C_{c=1}, \{p_f\}^F_{f=1}$, keeping the network weights fixed:
\begin{multline}
\argmin_{\{s_c\}_{c=1}^{C}, \{p_f\}_{f=1}^{F}} \overset{C}{\underset{\substack{c=1}}{\sum}} \; \overset{F}{\underset{\substack{f=1}}{\sum}} \; \overset{N}{\underset{\substack{i=1}}{\sum}}\; \mathcal{L}_{cf} + \lambda_{icp} \; \mathcal{L}_{icp} \\
+ \frac{\| p_f \|^2_2}{\sigma^2_p} + \frac{\| s_c \|^2_2}{\sigma^2_s}
\end{multline}
The shape and pose codes are initialized to the mean of the respective embedding. When dealing with a sequence, the pose code for each frame can be initialized with the result for the previous frame.\\
\textbf{Evaluation Result on CAPE}.
Table~\ref{tab:gnpm_submat} shows a comparison between NPM~\cite{Palafox21}, our supervised and self-supervised GNPM.
Note that all methods were evaluated using complete 3D scans as inputs.
Our method achieves a better Chamfer distance (C-$l_2$) and a comparable End-Point Error (EPE) to NPM~\cite{Palafox21}, which is trained with dense correspondences, while our method works without them.
GNPM can also find the shape and pose latents more efficiently at test time.
We report the total test optimization time for both methods on a 200-frame input.\\
\noindent Figure~\ref{fig:gnpm_dense_mesh_extend} shows the meshes reconstructed from our method using Poisson reconstruction~\cite{kazhdan06poisson}, as well as from NPM~\cite{Palafox21} with depth and 3D scan inputs.
We also note that Poisson surface reconstruction~\cite{kazhdan06poisson} requires surface normals, which we estimate.
For a fair comparison we sample 300K GT points and run them through Poisson reconstruction to obtain meshes from the GT points.
Both the supervised and self-supervised versions of our method are capable of recovering reasonable meshes.
\begin{figure}[t]
\centering
\includegraphics[width = 1.0\linewidth]{figs/gnpm_symm_icp_loss.pdf}
\caption{\textbf{Evaluation of our symmetric ICP loss.}
Colours indicate the correspondences. Without the symmetric ICP loss, the network tends to flip the correspondences under extreme deformation.
}
\label{fig:icp_abl}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width = 1\linewidth]{figs/gnpm_dense_mesh_extended_2.pdf}
\caption{\textbf{Mesh Visualization.}
We sample 300K points and use Poisson surface reconstruction~\cite{kazhdan06poisson} to obtain meshes from our model.
Left: the reconstructed point clouds. Right: the corresponding meshes obtained with Poisson reconstruction.
We also include a supervised GNPM version of our model in which we assume access to dense correspondences,
and show the result of evaluating NPM~\cite{Palafox21} on depth and complete 3D scans.
Both the supervised and self-supervised versions of our method recover good quality meshes.}
\label{fig:gnpm_dense_mesh_extend}
\end{figure*}
\section{Symmetric ICP Loss}
As presented in the main text, we use an ICP-inspired loss based on nearest neighbour search to prevent the network from learning an identity mapping by setting $\delta \tilde{x}$ and $\delta \tilde{y}$ to zero for the output of the forward network.
\begin{equation*}
\mathcal{L}_{icp}(\tilde{x}_i^{cf}) = \parallel \tilde{x}_i^{cf} - NN_{\mathcal{R}}(\tilde{x}_i^{cf}) \parallel^2_2
\end{equation*}
where $NN_{\mathcal{R}}(\cdot)$ is a function that queries the nearest neighbour of a 3D point in the set $\mathcal{R}$ of input points.
\noindent We found a symmetric loss is necessary to prevent correspondence flipping in the presence of extreme deformation.
Figure ~\ref{fig:icp_abl} shows an example of extreme deformation.
We found the forward network has some difficulty dealing with points from similar parts, like hands, that change sides with respect to the canonical t-pose.
Using an asymmetric ICP loss leads to flipping in the prediction of the forward network.
The backward network can handle this to some extent, but as we are enforcing a geometric constraint, it fails to completely reverse points flipping, resulting in a knot in the waist of the shape.
We modified the distance computation in the nearest neighbour to consider not just the distance from predicted points to the set of input GT points but also the distance from input GT points to the set of predicted points.\\
\begin{equation}
\begin{split}
{x'}_i^{cf} = NN_{\mathcal{D}_{p \rightarrow \mathcal{Q}}}({x}_i^{cf}) \\
\mathcal{D}_{p \leftrightarrow q}({x}_i^{cf}) = \mathcal{D}_{p \rightarrow \mathcal{Q}}({x}_i^{cf}) + \mathcal{D}_{q \rightarrow \mathcal{P}}({x'}_i^{cf})
\end{split}
\end{equation}
where $NN_{\mathcal{D}}(\cdot)$ is a function that queries the nearest neighbour of a 3D point from the distance matrix $\mathcal{D}$, and $\mathcal{D}_{r \rightarrow \mathcal{R}}(\cdot)$ returns the distances computed for the point $r$ against the point set $\mathcal{R}$.
\noindent We conclude that a symmetric ICP loss, in which the nearest-neighbour distances are computed from both source and target, is necessary for learning extreme deformations without dense correspondences.\\
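For concreteness, the bidirectional nearest-neighbour distance above can be sketched with dense distance matrices as follows (the symbolic KeOps version used in practice is analogous); the use of unsquared distances and the mean reduction here are illustrative choices.
\begin{verbatim}
import torch

def symmetric_icp_loss(pred, target):
    """Symmetric ICP loss between predicted points P (N, 3) and GT points Q (M, 3).

    For each predicted point we add the distance to its nearest GT point and
    the distance from that GT point back to its own nearest predicted point.
    """
    d = torch.cdist(pred, target)       # (N, M) pairwise distances
    d_pq, idx_q = d.min(dim=1)          # pred -> target NN distance and index
    d_qp, _ = d.min(dim=0)              # target -> pred NN distance, per GT point
    sym = d_pq + d_qp[idx_q]            # symmetric distance per predicted point
    return sym.mean()
\end{verbatim}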
\section{Efficient k-NN Implementation}
We use the KeOps~\cite{KeyOps21, feydy2020fast} Python package to implement the k-NN algorithm symbolically. We evaluated the full training time for 1024, 2048, and 4096 points, comparing against a naive dense k-NN implementation on five identities (one sequence each).
Table~\ref{tab:naive_vs_symbolic_KNN} shows that our symbolic implementation significantly reduces the training time, particularly as the number of points increases (for 4K points the improvement is 10 fold). Both implementations were run on an NVIDIA GeForce GTX 1080.
\begin{table}[!h]
\centering
\begin{tabular}{cccll}
\cline{1-3}
\multirow{2}{*}{\textbf{Sample Size}} & \multicolumn{2}{c}{\textbf{Train Time (h)}} & & \\ \cline{2-3}
& \textbf{Dense} & \textbf{Symbolic} & & \\ \cline{1-3}
1024 & 2.32 & \textbf{1.0} & & \\
2048 & 7.94 & \textbf{2.04} & & \\
4096 & 56.49 & \textbf{4.7} & & \\ \cline{1-3}
\end{tabular}
\caption{Training time comparison between a naive dense k-NN implementation and our efficient symbolic k-NN implementation for different numbers of points. The improvement of our symbolic implementation in terms of training time increases with the sample size.}
\label{tab:naive_vs_symbolic_KNN}
\end{table}
\section{Test Time Optimization}
We use Adam optimizer~\cite{AdamKingma2015} with learning rates of $1e-3$ for both codes $\{p_f\}^F_{f=1}$ and $\{s_c\}^C_{c=1}$.
Additionally, we apply a learning rate scheduler with a decay factor of $0.5$ every $30$ epochs.
In our experiments, we optimize for a total of $I = 200$ iterations, but we found the model is able to reconstruct the sequence reasonably well after only 100 iterations.
We use a pose code regularization $\sigma_p$ of $1e-4$.
The latent codes for shape $\{s_c\}^C_{c=1}$ and pose $\{p_f\}^F_{f=1}$ are initialized with mean codes learned from training.
\noindent As a reference, our optimized implementation takes approximately 1 hour to optimize over an input sequence of 2k frames on two GeForce RTX 3080 GPUs.
\end{document}
\section{Introduction}\label{sec1}
Deep learning (or deep neural networks) has been successfully applied in a variety of real-world areas, including natural language processing \cite{graves2013speech,bahdanau2014neural, young2018recent}, computer vision \cite{krizhevsky2009learning, goodfellow2014generative,long2015fully}, and speech recognition \cite{noda2015audio, deng2014ensemble, yu2016automatic}, among others. One type of deep learning model, called deep residual networks (ResNets), has shown state-of-the-art performance in image recognition \cite{he2016deep, qiu2017learning, zhang2017residual}. By incorporating the shortcut connection, the ResNets improves learning performance with deeper and wider architectures \cite{bishop1995neural, ripley2007pattern}. It has been considered a default practice in convolutional neural network (CNN) models and a powerful tool to deal with complex image recognition problems. For example, ResNets-152 achieves 19.38\% top-1 error on the ImageNet data with 152 layers \cite{he2016deep}; and ResNets-1001 reaches 4.92\% test error on the CIFAR-10 data with 1000 layers \cite{he2016identity}.
A ResNets transforms the input and hidden layers iteratively to filter the information:
\begin{equation}
\label{restrans}
h_{t+1}=h_t + \sigma(h_t, \theta_t) \quad \text{for}\quad t = 0, 1, \dots, T-1
\end{equation}
where $h_0$ is the input data, $h_1,h_2, \dots, h_T$ are the hidden layers, $\sigma$ denotes the activation function, $\theta_t$ represents the parameters that link $h_t$ and $h_{t+1}$. These iterative updates can be interpreted as an Euler discretization of a nonlinear ordinary differential equation (ODE) \cite{haber2017stable, li2017maximum}:
\begin{equation}
\label{ode}
h'(t) =\sigma(h(t),\theta (t))
\end{equation}
Based on this interpretation, many novel deep learning models and training algorithms have been proposed \cite{chang2018reversible, haber2017stable, lu2018beyond, li2017maximum, chen2018neural}. For example, Chen et al. proposed the Neural-ODE model that replaced the multiple residual blocks with one ODE system in the model architecture \cite{chen2018neural}. To train such a Neural-ODE model, the authors developed the publicly available software, torchdiffeq, which adopted various ODE solvers at the back-propagation step. Compared with the original ResNets, many advantages of the proposed Neural-ODE were demonstrated, such as memory efficiency, adaptive computation of solving ODEs, scalable and invertible normalizing flow constructions, and building continuous time-series models.
Conceptually, the Neural-ODE model combines the strengths of both parametric and nonparametric approaches for model construction and training. Parametric dynamical models, such as ordinary differential equations (ODEs), have long been studied in, e.g., mathematics, physics, and engineering; therefore, a rich amount of theories and application experiences have been accumulated \cite{lasalle1968stability, simmons2016differential, arnold2012geometrical, yu2016revisiting,yu2017effects, yu2021assessing}. In general, the ODE models use parameterized mathematical functions to describe the changes of crucial variables given a dynamic system, such as the logistic model for population growth \cite{yu2016revisiting}, prey-predator dynamics in ecological studies \cite{tang2015holling}, and Susceptible-Infectious-Recovered (SIR) model in infectious disease transmission \cite{yu2017effects}. Then the parameters of the ODE model are estimated based on observed data so that the proposed ODE can describe the dynamic behavior of a system. However, this parametric approach is limited by its assumption of the underlying mechanisms, which may partially capture the real dynamics. On the other hand, the nonparametric models, such as recurrent neural networks (RNNs) \cite{sherstinsky2020fundamentals}, multilayer perceptron (MLP), and deep residual networks (ResNets), have been successfully applied in a variety of areas due to their universal approximation property. By connecting the deep residual networks and ordinary differential equations, the Neural-ODE model serves as a promising model to better capture the real-world dynamics.
Previously, we have proposed a generalized ODE (GODE) to model the dynamics of discrete data \cite{miao2014generalized}. Specifically, the GODE model can be applied to a wide range of data types, including those that follow the exponential family distribution, and the associated link function is capable of accommodating latent time-varying variables governed by ODEs. Note that the latent time-varying variables in GODE can be considered as a hidden layer in neural networks. In particular, they are equivalent to the hidden layer of the ODE block in the Neural-ODE \cite{chen2018neural}. In practice, ordinary differential equations (ODEs) with time-varying coefficients \cite{chen2008efficient, xue2010sieve} can be used to describe the time-varying changes of model parameters, which results in more flexible dynamic systems. This type of dynamic model has been widely used in many biomedical applications \cite{chen2008efficient, xue2010sieve, liang2010estimation}. In this study, we couple the time-varying ODE and GODE ideas to generalize the standard Neural-ODE model proposed by Chen et al. \cite{chen2018neural} and propose a neural generalized ODE (Neural-GODE) model. The performance of the proposed Neural-GODE is evaluated and compared with existing deep residual networks (ResNet) \cite{he2016deep} and the standard Neural-ODEs model \cite{chen2018neural} using benchmark datasets, MNIST \cite{lecun1998gradient} and CIFAR-10 \cite{krizhevsky2009learning}.
\section{Method}\label{sec2}
Given a supervised learning problem with input $ \mathcal{X}$ and output label $ \mathcal{Y}$, the essential task is to find a mapping
\begin{equation}
\label{supmach}
F: \mathcal{X} \to \mathcal{Y}
\end{equation}
where $\mathcal{X} \subset \mathbb{R}^p$, $\mathcal{Y} \subset \mathbb{R}$, such that $F(\mathbf{X}_i)$ can accurately predict $y_i$, and $(\mathbf{X}_i,y_i)$ is the $i$-th sample, $i=1, 2, \dots, n$. Usually, $F$ is approximated by penalized regression models, learning algorithms, or neural networks, such as the Lasso, support vector machines (SVM), and multilayer perceptrons (MLP). In particular, owing to their universal approximation property, multilayer feedforward networks, such as residual neural networks (ResNets), multilayer perceptrons (MLP), and convolutional neural networks (CNN), are widely applied in complex prediction tasks \cite{hornik1989multilayer}. Multilayer feedforward networks use the forward propagation technique, which processes the inputs in a nonlinear, feedforward manner to filter the information.
For example, in a general residual neural networks (ResNets), the forward propagation of input $Z_0 \in R^{n\times p}$, with $T$ layers is given by
\begin{equation}
\label{restrans1}
Z_{t+1}=Z_t+h\sigma (Z_t K_t+b_t ) \quad \text{for} \quad t=0,1, \dots,T-1,
\end{equation}
where $Z_0=X$, $Z_1,Z_2,\dots,Z_T$ are the hidden layers, $t$ is the layer index, $\sigma$ is the activation function, $h$ is the scaling factor, and $K_t$ and $b_t$ are the constant weights of the $t$-th hidden layer. The iterative updates of the hidden layers $Z_t$ in equation (\ref{restrans1}) can be interpreted as a discretized nonlinear ordinary differential equation (ODE):
\begin{equation}
\label{ode2}
\dot{Z}(t)=\sigma(Z(t)\beta(t)+b(t))
\end{equation}
where $Z(t)$ is a time-varying variable with initial $Z(0)=X$; $\beta(t)$ and $b(t)$ are time-varying parameters.
Assume the outcome $Y$ follows an exponential family distribution with mean of $E(Y)= \mu $ and link function involving a hidden variable $Z_T$:
\begin{equation}
\label{link}
\eta = g(\mu)=g^{*} (Z_T,\theta),
\end{equation}
where $Z_T \doteq Z(T)$ is the solution to the ODE in equation (\ref{ode2}). The optimization problem with respect to $\beta(t), b(t), W, b$ can be written as
\begin{align}
&\min_{\beta(t), b(t), W, b} \ \frac{1}{n}L(\beta(t), b(t), W, b\mid X, Y) +\lambda P(\beta(t), b(t), W, b) \nonumber \\
& \text{s.t.} \quad \dot{Z}(t)=\sigma(Z(t)\beta(t)+b(t)), \quad Z(0)= X, \nonumber \\
& \eta = g(\mu)=g^{*} (Z_T,\theta). \label{opt}
\end{align}
Here, $L(\beta(t),b(t),W,b\mid X,Y)$ is the loss function (e.g., the negative log-likelihood), $P(\beta(t), b(t), W, b)$ is the penalty function, and $\lambda$ is the penalty parameter. Adopting the idea of the time-varying ODE model in Xue et al. \cite{xue2010sieve}, we can approximate the time-varying parameters using B-splines, i.e.,
\begin{align}
\beta(t) &= B_1 (t)\xi, \nonumber \\
b(t)&= B_2 (t)\zeta, \label{timevary}
\end{align}
where $B_1 (t)$ and $B_2 (t)$ are B-spline basis functions and $(\xi, \zeta)$ are constant parameters. A brief review of the B-spline will be shown in the next subsection.
\subsection{B-spline function}\label{subsec21}
The spline functions have been widely used in nonparametric regression and varying-coefficient models in statistical research \cite{perperoglou2019review}. In particular, splines are generally used for modeling smooth functions of the variables of interest, such as nonlinear effects of covariates, time-dependent effects in regression models, and time-series data.
B-splines are a more general type of curve than B\'ezier curves \cite{bartels1995introduction}. A B-spline with degree $k$ and $n$ control points can be written as
\begin{equation}
\label{bspline}
\beta(t)= \sum_{i=0}^{n-1}B_{i,k} (t)\xi_i,
\end{equation}
where $B_{i,k} (t)$ is the degree $k$ basis function, and $\xi_i$ is the coefficient of $i$-th control point. The knot vector is
$\{t_0,t_1,t_2,\dots, t_{k+n} \}$. $B_{i,k} (t)$ is calculated recursively by the following formula:
\begin{align}
\label{basis}
B_{i, 0}(t) & = \left \{
\begin{matrix}
1, & t_i \leq t < t_{i+1}, \\
0, & \text{otherwise},
\end{matrix}
\right.\nonumber \\
B_{i,k}(t) &= \frac{t-t_{i}}{t_{i+k}-t_i} B_{i, k-1}(t) + \frac{t_{i+k+1}-t}{t_{i+k+1}-t_{i+1}}B_{i+1,k-1}(t).
\end{align}
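A direct implementation of this Cox--de Boor recursion in equation (\ref{basis}) is sketched below (plain Python, for illustration only); terms with a zero denominator, which arise from repeated knots, are skipped by convention.
\begin{verbatim}
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the degree-k basis function B_{i,k}(t)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:                # skip terms with zero denominator
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def beta(t, coefs, k, knots):
    """beta(t) = sum_i B_{i,k}(t) * xi_i, with n = len(coefs) control points."""
    return sum(bspline_basis(i, k, t, knots) * xi for i, xi in enumerate(coefs))
\end{verbatim}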
In this study, we apply the B-spline to parameterize the time-varying parameters in the GODE. The degree of the basis function, $k$, controls the complexity of the time-varying effect; the number of the control points, $n$, controls the number of parameters in the GODE. If $k=0,n=1$, equation (\ref{bspline}) can be rewritten as
\begin{equation}
\label{basis1}
\beta(t)=B_{0,0}(t)\xi_0,
\end{equation}
where
\begin{equation}
\label{basis2}
B_{0,0}(t) = \left \{
\begin{matrix}
1, & t_0 \leq t < t_{1}, \\
0, & \text{otherwise}.
\end{matrix}\right.
\end{equation}
In this case, the knot vector is $\{t_0,t_1 \}$, i.e., the first and last points of the integration time interval, and equations (\ref{basis1}) and (\ref{basis2}) are equivalent to $\beta(t) = \xi_0$ and $B_{0,0} (t)=1$. Therefore, the Neural-GODE model reduces to the standard Neural-ODE model with constant parameters when $k=0, n=1$. On the other hand, increasing the number of control points (knots), $n$, increases the number of parameters over the integration interval. If an ODE is solved by the simple Euler method with a small step size, the increased number of knots makes each Euler step have different parameters, i.e., an increased number of different $\theta_t$ in equation (\ref{restrans}). Since the ResNets can be interpreted as a nonlinear ODE, as in equation (\ref{restrans}), the Neural-GODE with time-varying parameters can represent the ResNets in this case.
\subsection{Model architectures}\label{subsec22}
We experiment with the residual networks (ResNets) architecture that has been used for comparison in Chen et al. 2018 \cite{chen2018neural}, see Fig. 1. The ResNets first down-samples the input three times with convolution layers, then applies multiple standard residual blocks \cite{he2016deep}, see Fig. 1 (left panel). Each residual block consists of two convolution layers with a kernel size of 3. The standard neural-ODE model and neural-GODE model replace the residual blocks with an ODE block, see Fig. 1 (right panel). However, the ODE block of the Neural-GODE is a system with time-varying parameters, which makes the model more
flexible for training.
\begin{figure}[h]%
\centering
\includegraphics[width=0.5\textwidth]{model_structure.pdf}
\caption{Model architectures.}\label{fig1}
\end{figure}
\subsection{Implementation}\label{subsec23}
The implementation of the ResNets, Neural-ODE, and Neural-GODE models follows the MNIST training in \cite{chen2018neural}. The training images are transformed by random cropping with a padding of 4 on each border. We apply group normalization \cite{wu2018group} right after each convolution layer and before the activation. We initialize the weights of the B-spline parameterized filter in the customized convolution layer (the module “convt” in Fig. 1). We use SGD with a mini-batch size of 128 in training. The learning rate starts from 0.1 and is divided by 10 at epochs 60, 100, and 140. A total of 160 epochs are trained for each model. We use a momentum of 0.9. Dropout is not applied, following the practice in \cite{ioffe2015batch}. We apply the Euler method with a step size of 0.05 for solving the ODE systems in both the Neural-ODE and the Neural-GODE, using the torchdiffeq package implemented in PyTorch \cite{paszke2017automatic}.
In testing, random cropping is not applied, and a mini-batch size of 1000 is used. The model performance is reported based on the testing dataset using accuracy as the metrics. All the experiments are implemented on GPU (Tesla V100 with 16GB G-Memory), programming code can be found at \url{https://github.com/Duo-Yu/Neural-GODE}.
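To make the construction concrete, a simplified sketch of the customized “convt” module and the GODE block is given below; the variable names and layer sizes are illustrative rather than the released code at the repository above, group normalization and the down-sampling layers of Fig. 1 are omitted, and the degree-1 B-spline is written out directly as a linear interpolation between neighbouring control-point kernels.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint

class ConvT(nn.Module):
    """3x3 convolution whose kernel is a degree-1 B-spline function of time t."""
    def __init__(self, channels, n_ctrl=4):
        super().__init__()
        # learnable control-point kernels xi_0, ..., xi_{n-1}
        self.ctrl = nn.Parameter(0.01 * torch.randn(n_ctrl, channels, channels, 3, 3))
        self.n_ctrl = n_ctrl

    def kernel(self, t):
        # degree-1 B-spline with control points uniformly placed on [0, 1]:
        # the kernel at time t linearly interpolates the two neighbouring controls
        s = min(max(float(t), 0.0), 1.0) * (self.n_ctrl - 1)
        i = min(int(s), self.n_ctrl - 2)
        w = s - i
        return (1.0 - w) * self.ctrl[i] + w * self.ctrl[i + 1]

    def forward(self, t, z):
        return F.conv2d(z, self.kernel(t), padding=1)

class GODEFunc(nn.Module):
    """Right-hand side of the GODE block: two time-varying convolutions."""
    def __init__(self, channels, n_ctrl=4):
        super().__init__()
        self.conv1 = ConvT(channels, n_ctrl)
        self.conv2 = ConvT(channels, n_ctrl)

    def forward(self, t, z):
        return self.conv2(t, torch.relu(self.conv1(t, z)))

# z0: (batch, channels, 8, 8) feature maps after the down-sampling layers
z0 = torch.randn(4, 64, 8, 8)
t = torch.tensor([0.0, 1.0])
zT = odeint(GODEFunc(64), z0, t, method='euler', options={'step_size': 0.05})[-1]
\end{verbatim}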
\section{Experimental Results}\label{sec3}
We evaluate the Neural-GODE on two benchmark datasets, i.e., MNIST and CIFAR-10.
\subsection{Model performances}\label{subsec31}
The MNIST and CIFAR-10 are standard benchmark datasets for computer vision and deep learning. In MNIST, images are black-and-white handwritten digits with a size of $28\times28$ pixels. The MNIST classification task aims to predict the ten handwritten digits. It has 70,000 images, with 60,000 samples in the training dataset and 10,000 in the testing dataset. The CIFAR-10 dataset consists of 60,000 colored images with 10 classes. Each image has $32\times32$ pixels. Generally, the training set of CIFAR-10 consists of 50,000 samples. The ResNets is implemented with 6 and 20 residual blocks on MNIST and CIFAR-10, respectively; with these numbers of residual blocks, it reaches more than 99\% training accuracy on both datasets. The ODE systems are solved from 0 to 1 in the standard Neural-ODE and the proposed Neural-GODE. The number of control points of the B-spline in the Neural-GODE, $n$, is 4 and 8 for MNIST and CIFAR-10 training, respectively. The degree of the B-spline is 1. The effect of these hyperparameters, including the B-spline order, degree, and the integration interval of the ODE systems, is shown in the following subsection, Table II.
In terms of test error, the Neural-GODE has the best performance, see Table I. The ResNets has a slightly lower accuracy than that of the Neural-GODE. The standard Neural-ODE has the worst performance compared to the other two models, especially in the classification task based on CIFAR-10, which is more complex than MNIST: there, the standard Neural-ODE model has about 2\% lower accuracy than the Neural-GODE (Table I). In terms of training efficiency, although the standard Neural-ODE and Neural-GODE are slower than the ResNets, they show higher memory efficiency. Both the Neural-ODE and Neural-GODE consist of a smaller number of training parameters than the ResNets, see Table I. In summary, the Neural-GODE model has an advantage in both predictive accuracy and memory efficiency compared to the other two models.
\begin{table*}[h]
\begin{center}
\begin{minipage}{\textwidth}
\caption{Model performance comparison}\label{tab1}%
\begin{tabular}{@{}lllll@{}}
\toprule
\multicolumn{2}{c}{Model} & Neural-GODE & ResNets & Neural-ODE\\
\midrule
\multirow{3}{*}{MNIST}
& Test error (\%) & 0.31& 0.33 & 0.40\\
& \# params (M) & 0.43 & 0.57 & 0.21 \\
& Time/iteration (s) &0.035 & 0.012 & 0.038\\
\midrule
\multirow{3}{*}{CIFAR-10}
& Test error (\%) & 13.49 & 13.47& 15.32\\
& \# params (M) & 0.72 & 1.6 & 0.21 \\
& Time/iteration (s) &0.038 & 0.026& 0.041\\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table*}
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Effect of time-varying parameters}\label{tab2}%
\begin{tabular}{@{}llllll@{}}
\toprule
n & k & T & \#params & Time(s) & Test error (\%)\\
\midrule
2& 1& 1 & 281,738 & 0.039 &14.68\\
4& 1& 1& 429,194 & 0.038& 14.19\\
6& 1& 1& 576,650& 0.037& 13.97\\
8& 1& 1& 724,106 & 0.038& 13.49\\
10& 1& 1& 871,562 & 0.037& 13.65\\
12& 1& 1& 1,019,018& 0.038 & 13.64\\
\midrule
8 &2 &1 &724,106 &0.038 &14.16\\
8& 3 &1 &724,106& 0.036& 14.10\\
8& 4 &1 &724,106& 0.036 &13.80\\
8& 5 &1 &724,106& 0.037 &14.22\\
\midrule
8& 1& 2& 724,106 &0.064 &13.79\\
8& 1& 3& 724,106& 0.093& 13.84\\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
The main difference between Neural-GODE and Neural-ODE is that the ‘convt’ layer weights are parameterized with B-spline functions of the integration time ($t$) in the Neural-GODE rather than being constant as in the Neural-ODE. As an illustration, we plot the weights of the first convolution layer of the residual block in the ResNets, and of the first ‘convt’ layer of the ODE blocks in the Neural-GODE and Neural-ODE, in Fig. 2. We observe that the kernel weights from the Neural-ODE are constant across the integration time (red lines in Fig. 2), as expected. The weights in the Neural-GODE (blue lines in Fig. 2) and ResNets (green lines in Fig. 2) vary across layers or integration time, and the weights of the Neural-GODE are smoother than those of the ResNets, which might be the reason why the proposed Neural-GODE model could outperform the ResNets and Neural-ODE models in terms of computing efficiency and prediction accuracy.
\begin{figure*}[h]%
\centering
\includegraphics[width=0.9\textwidth]{c1_13_12.pdf}
\caption{The patterns of selected estimated weights of a 3×3 kernel from three models from the CIFAR-10 data example. The x-axis represents the index of layers in ResNets, and the integration time in Neural-GODE and Neural-ODE. The red line denotes the weight of the first ‘convt’ layer in the ODE block of the Neural-ODE; the blue line is the weight of the first ‘convt’ layer in the ODE block of the Neural-GODE; and the green line is the weight of the first convolution layer in the residual blocks of the ResNets.}\label{fig2}
\end{figure*}
\begin{table*}[h]
\begin{center}
\begin{minipage}{\textwidth}
\caption{ODE solver comparison}\label{tab3}%
\begin{tabular}{@{}llllll@{}}
\toprule
\multicolumn{2}{c}{Model} & \multicolumn{2}{c}{Neural-GODE} & \multicolumn{2}{c}{Neural-ODE} \\
\midrule
& & Dopri5& Euler & Dopri5 & Euler \\
\multirow{2}{*}{MNIST}
& Test error (\%) & 0.37 & 0.31 & 0.37 & 0.40\\
& Time (s) &0.153& 0.035 & 0.063 & 0.038\\
\midrule
\multirow{2}{*}{CIFAR-10}
& Test error (\%) & 13.99& 13.49& 15.40 & 15.32\\
& Time (s) &0.224& 0.038 &0.072& 0.041 \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table*}
\subsection{Layer-varying parameters }\label{subsec32}
To evaluate the effect of the hyperparameters of the Neural-GODE on prediction results, such as the number of control points ($n$) and degree ($k$) of the B-spline as well as the integration interval endpoint ($T$) of the ODE, we use different settings to train the Neural-GODE on the CIFAR-10 data. We find that there exists an optimal number of control points for the B-spline, which directly affects the Neural-GODE model size (the number of parameters). Based on CIFAR-10, the optimal number of control points is 8 when the ODE is solved by the Euler method with a step size of 0.05, see Table II. If we increase the number of control points from 2 to 8, the test error is reduced by 1\%. However, if the number of control points increases from 8 to 12, the test error increases slightly. According to the experiment, the linear B-spline fits the data well, i.e., the prediction accuracy is the highest when the degree of the B-spline ($k$) is 1. If we increase the endpoint ($T$), the training speed decreases significantly. For example, given the linear B-spline with 8 control points, if the end integration time increases from 1 to 3, the training time per iteration increases from 0.038s to 0.093s, which is about a 2.4-fold increase, see Table II. At the same time, we do not find much benefit in terms of prediction accuracy when we increase the endpoint of the ODE.
\subsection{ODE solvers}\label{subsec33}
In the experiment, we mainly focus on the Euler method to solve the ODEs because of its first-order equivalence to the general residual network (see the Method section). However, we could use alternative ODE solvers in the neural ODE models. For example, instead of using a fixed-step method such as Euler, we also implement an adaptive-step method, i.e., the Runge-Kutta method of order 5 of Dormand-Prince-Shampine (dopri5 in the package \textit{torchdiffeq}). We observe that the Runge-Kutta and Euler methods perform similarly in predictive accuracy (see Table III). For the MNIST dataset, both the Neural-ODE and Neural-GODE produced a test error of 0.37\% when the Runge-Kutta method was applied. Using the Euler method with a step size of 0.05, the Neural-GODE has a slightly lower test error than the standard Neural-ODE. However, the Neural-GODE model shows lower test errors than the standard Neural-ODE using either the Runge-Kutta or the Euler method. In terms of training efficiency, the Runge-Kutta method is significantly slower than the Euler method, which is the main reason that we use the Euler method as the ODE solver in our experiments, see Table III. Similar comparison results are observed from training experiments based on CIFAR-10.
\subsection{ODE function forms}\label{subsec34}
In this section, we investigate the effect of the number of customized CNN layers inside the ODE block of the Neural-GODE. The standard Neural-ODE directly replaces the residual blocks with one ODE block consisting of two convolution layers. The ``time'' $t$ is considered as a separate channel and concatenated with the other image channels. The concatenated channels are the input of the ODE block, in which the convolution functions are standard. Instead of combining the ``time'' $t$ with the convolution layer input, we parameterize $t$ inside the convolution function. Specifically, time-varying kernels are used rather than kernels with constant weights. Besides the number of control points of the B-spline, the number of customized CNN layers can also affect the total number of parameters and the complexity of the ODE block. We compare the predictive performance for different numbers of control points and customized CNN layers based on CIFAR-10. The fixed parameters include the degree of the B-spline ($k = 1$), the integration interval ($[0, 1]$), and the ODE solver (Euler method with a step size of 0.05). We observe a trade-off between the number of control points and the number of customized CNN layers of the ODE block. When the number of control points of the B-spline is small, a larger number of CNN layers may increase the predictive performance. For example, if the number of control points is 2 or 4, better predictive accuracy can be reached when the number of CNN layers is 3 (Table IV). However, if we further increase the number of control points, two layers of CNN have better performance. Since the ODE block with more CNN layers takes a longer time to solve, we apply two layers of CNN with eight control points of the B-spline for CIFAR-10 training.
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Effect of the number of the customized CNN layers.}\label{tab4}%
\begin{tabular}{@{}lllll@{}}
\toprule
n & \# layers & \#params & Time (s) & Test error(\%)\\
\midrule
\multirow{4}{*}{2}
&1& 207,754 &0.024& 15.83\\
&2& 281,738 &0.039& 14.68\\
&3& 355,594 &0.047& 14.31\\
&4& 429,450 &0.061 &14.37\\
\midrule
\multirow{4}{*}{4}
&1 &281,482& 0.024& 16.26\\
&2& 429,194 &0.038& 14.19\\
&3& 576,778 &0.047& 13.89\\
&4& 724,362 &0.062& 14.40\\
\midrule
\multirow{4}{*}{6}
&1& 355,210 &0.025& 15.67\\
&2& 576,650 &0.037& 13.97\\
&3& 797,962 &0.048& 14.07\\
&4& 1,019,274& 0.062 &14.59\\
\midrule
\multirow{4}{*}{8}
&1&428,938& 0.025 &15.41\\
&2& 724,106 &0.038& 13.49\\
&3& 1,019,146& 0.047& 13.95\\
&4& 1,314,186& 0.063& 14.02\\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\section{Conclusion}\label{sec4}
The idea of bridging deep residual networks (ResNets) with discretized ordinary differential equations (ODEs) has recently attracted much interest in the deep learning research field \cite{haber2017stable, chang2018reversible, lu2018beyond, li2017maximum, li2019deep,chen2018neural}. Building on well-established ODE properties, theories, and numerical solution methods, novel deep learning architectures and training algorithms have been proposed based on this connection. In this study, we further explore the performance of the Neural-ODE on two benchmark classification tasks, i.e., the classification problems of MNIST and CIFAR-10. We confirm that the Neural-ODE model has an advantage over the deep ResNets in terms of training memory, as stated in Chen et al. \cite{chen2018neural}.
On the other hand, its predictive accuracy is not as good as that of the ResNets. For the CIFAR-10 dataset, the test error of the standard Neural-ODE is 15.32\% with 0.21 million parameters, while the test error of the ResNets is 13.73\% with 1.6 million parameters, see Table I.
To overcome the predictive accuracy disadvantage of the standard Neural-ODE, we propose a time-varying Neural-GODE, which increases model flexibility and thereby improves predictive performance. Instead of using constant weights in the CNN layers as in the standard Neural-ODE, we parameterize the kernel weights with time-varying parameters. Specifically, the weights are B-spline functions of the ``time'' $t$. As a result, the proposed Neural-GODE model reaches similar or slightly better prediction accuracy than the ResNets.
The ResNets model was originally developed for image recognition [1]. For the CIFAR-10 data analysis, He et al. designed a sophisticated deep ResNets model that reaches a test error of 6.43\% with 110 layers; the first layer is a 3×3 convolution, followed by a stack of 6n layers of 3×3 convolutions on feature maps of sizes {32, 16, 8}, with 2n layers for each feature map size, where n = 18. In our Neural-GODE model, the input image is directly down-sampled by three layers of 3×3 convolutions and then processed by one GODE block. The feature map size in the ODE block is 8. To reach a lower test error with the Neural-GODE model, we will explore multiple ODE blocks with different feature map sizes in future research.
With its benefits in training efficiency and accuracy, the proposed Neural-GODE model can be applied to various tasks. It is straightforward to apply the Neural-GODE to any image recognition problem, such as radiology images in electronic health records (EHRs). In addition, the Neural-GODE can be applied to time-series analysis for one-dimensional data inputs \cite{bonnaffe2021neural}, which will also be part of our future applied research.
In the statistical literature, neural networks (deep learning models) are considered as non-parametric regression models. As early as the year in which a neural network with one hidden layer was proved to have the universal approximation property \cite{hornik1989multilayer}, White evaluated the asymptotic behavior of back-propagation learning in single hidden layer feedforward network models \cite{white1989some}. Also, Shen et al. studied asymptotic properties, including the consistency, convergence rate, and asymptotic normality, of neural network sieve estimators with one hidden layer \cite{shen2019asymptotic}. Recently, theoretical properties of deep learning models have been explored from a statistical perspective [48-50]. However, such theoretical properties have not been established for more complicated deep learning models, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs) (including ResNets), and Neural-ODEs. We have established asymptotic results for the sieve estimators of ODEs with both constant and time-varying parameters \cite{xue2010sieve}. The large-sample properties of the GODE model have also been investigated in our previous work \cite{miao2014generalized}. These theoretical results lay a solid foundation for establishing the asymptotic properties of the proposed Neural-GODE model, and future research along these lines is warranted.
\section*{Acknowledgements}
This work was supported in part by NIH grant R01 AI087135 (HW), a grant from the Cancer Prevention and Research Institute of Texas (PR170668) (HW), grant NSF/ECCS 2133106 (HM), and grant NSF/DMS 1620957 (HM).
\section{Introduction}
Let $p$ be a prime number, let $k$ be a finite field of characteristic $p$, and let $X$ be a smooth projective curve defined over $k$.
Associated to $X$ is its algebraic (or \'{e}tale) fundamental group
$
\pi_1(X).
$
This profinite group fits into the short exact sequence
\begin{equation} \label{FES}
1 \to \pi_1(X_{\bar k}) \to \pi_1(X) \to \mathrm{Gal}(\bar k/k) \to 1
\end{equation}
where $\bar k$ is an algebraic closure of $k$, and $X_{\bar k}$ is the base change of $X$ to $\bar k$.
Grothendieck has shown in \cite[exp. XIII]{SGA1} that $\pi_1(X_{\bar k})$, and thus also $\pi_1(X)$, is topologically finitely generated.
Moreover, he shows that for every prime $\ell \neq p$, the maximal pro-$\ell$ quotient of $\pi_1(X_{\bar k})$ admits the pro-$\ell$ presentation
\begin{equation}
\langle x_1, \dots, x_{2g} \mid [x_1, x_2] \cdots [x_{2g-1}, x_{2g}] = 1 \rangle
\end{equation}
where $g$ is the genus of $X$, and $[a,b] = aba^{-1}b^{-1}.$
This result is complemented by the work \cite{Sha47} of Shafarevich, showing that the maximal pro-$p$ quotient of $\pi_1(X_{\bar k})$ is free pro-$p$.
The structure of $\pi_1(X_{\bar k})$, and therefore also of $\pi_1(X)$, is well-understood once $g \leq 1$.
On the other hand, in case $g \geq 2$, the group $\pi_1(X)$ encompasses the structure of $X$ in a non-trivial way.
One piece of evidence for this is the work \cite{Tam04} of Tamagawa, which shows that there are only finitely many smooth projective curves $Y$ over $\bar k$ with $\pi_1(Y) \cong \pi_1(X_{\bar k})$. Another result in this vein, obtained by Mochizuki in \cite{Moch07}, says (roughly speaking) that $X$ can be (functorially) reconstructed from $\pi_1(X)$.
In this work, we shed further light on the structure of $\pi_1(X)$.
\begin{thm} \label{Res}
The \'{e}tale fundamental group of a smooth projective curve over a finite field is topologically finitely presented.
Moreover, the number of relations is at most the number of generators (in some presentation).
\end{thm}
Recall that if $K$ is the function field of $X$, then $\pi_1(X)$ can be identified with $\mathrm{Gal}(K^{\mathrm{ur}}/K)$, where $K^{\mathrm{ur}}$ is the maximal unramified extension of $K$ (inside a fixed separable closure). As a result, the group $\pi_1(X)$ is also studied as a function field analog of the generalized class group $\mathrm{Gal}(L^{\mathrm{ur}}/L)$ of a number field $L$. Examples include the works \cite{BW17, W17} by Boston and Wood.
These works suggest, vaguely speaking, that for a randomly chosen $X$, the group $\pi_1(X)$ is given by a random balanced presentation (that is, a presentation where the number of generators is at least the number of relations). Motivated by this, Liu and Wood in \cite{LW17} study random balanced presentations and their variants.
\thmref{Res} may thus be viewed as a deterministic counterpart of the works by Boston, Liu, and Wood.
Arithmetic topology, as studied for instance in \cite{Mor11}, postulates that the group $\mathrm{Gal}(L^{\mathrm{ur}}/L)$ should exhibit properties similar to those of the fundamental group of a $3$-manifold. Indeed, balanced presentations are attributes of $3$-manifold groups (see \cite{AFW15}), and Pardon asks in \cite{Par11} whether an analog of this can be established for $\mathrm{Gal}(L^{\mathrm{ur}}/L)$. This serves as an additional motivation for our theorem.
Our proof of \thmref{Res} starts from a formula proved by Lubotzky in \cite{Lub01}. The latter, once combined with Grothendieck's finite generation result mentioned above, reduces our task to an estimation of the dimensions of several cohomology groups. We perform these estimations using the Lyndon-Hochschild-Serre spectral sequence. An important input is duality in cohomology, deduced from the aforementioned results on maximal pro-$\ell$ and pro-$p$ quotients by Grothendieck and Shafarevich.
Consequently, we determine the deficiency of (any presentation of) $\pi_1(X)$, obtaining a result that mirrors Epstein's work \cite{Eps61} on the deficiency of $3$-manifold groups.
Furthermore, our arguments apply not only to $\pi_1(X)$ itself, but to any closed topologically finitely generated subgroup of it.
\begin{cor} \label{cor}
The group $\pi_1(X)$ is topologically coherent.
\end{cor}
Coherence means that all the closed topologically finitely generated subgroups of $\pi_1(X)$ are topologically finitely presented.
Once again, we have an analogy to a result on $3$-manifolds, namely the coherence of their fundamental groups, established by Scott in \cite{Sco73a, Sco73b}.
\section{Proof of \thmref{Res}}
We set $G \defeq \pi_1(X)$, and $N \defeq \pi_1(X_{\bar k})$, so equation \eqref{FES} reads
\begin{equation} \label{RFES}
1 \to N \to G \to \widehat{ \mathbb{Z}} \to 1.
\end{equation}
Following \cite{Lub01}, denote by $d(G)$ the least cardinality of a generating set of $G$,
and by $r(G)$ the least number of relations needed to present $G$ (with any number of generators).
Furthermore, define the deficiency of $G$ to be
\begin{equation}
\mathrm{def}(G) \defeq d(G) - r(G).
\end{equation}
If $g = 0$, we get from \cite[Theorem 10.1.2 (i) a]{NSW13} that $N = 1$, so $G \cong \widehat{\mathbb{Z}}$ by \eqref{RFES}.
Hence $\mathrm{def}(G) = d(G) - r(G) = 1 - 0 = 1$.
We assume henceforth that $g \geq 1$, and the core of the proof is devoted to showing that $\mathrm{def}(G) = 0$, or equivalently that $r(G) = d(G)$.
By \cite[Theorem 0.2]{Lub01}, we know that $r(G)$ equals
\begin{equation} \label{LF}
\sup_{\lambda} \ \sup_M \Bigg\lceil \frac{\dim_{\mathbb{F}_\lambda} H^2(G,M) - \dim_{\mathbb{F}_\lambda} H^1(G,M)}{\dim_{\mathbb{F}_\lambda} M} \Bigg\rceil + d(G) - \textbf{1}_{M \ncong \mathbb{F}_{\lambda}}
\end{equation}
where $\lambda$ ranges over the prime numbers, $\mathbb{F}_\lambda$ is the finite field with $\lambda$ elements (and a trivial $G$-action),
and $M$ ranges over all simple $\mathbb{F}_\lambda \llbracket G \rrbracket$-modules.
Let us first see that $r(G) \geq d(G)$. For that, take any $\lambda \neq p$ and let $M$ be the (trivial) $G$-module $\mathbb{F}_\lambda$.
Grothendieck's aforementioned result on the maximal pro-$\lambda$ quotient of (open subgroups of) $N$
implies that $N$ is a Poincar\'{e} group of dimension $2$ at $\lambda$
(for an alternative proof, see \cite[Theorem 10.1.2 (i) b]{NSW13}). As $\widehat{ \mathbb{Z}}$ is a Poincar\'{e} group of dimension $1$ at $\lambda$, it follows from \cite[Theorem 3.7.4]{NSW13} applied to \eqref{RFES} that $G$ is a Poincar\'{e} group of dimension $3$ at $\lambda$ (see also \cite[Corollary 10.1.3 (ii)]{NSW13}).
The latter implies that
\begin{equation}
\dim_{\mathbb{F}_\lambda} H^2 \big(G,M\big) =
\dim_{\mathbb{F}_\lambda} H^{1}\big(G, \mathrm{Hom}(M, \mathbb{F}_\lambda)\big) =
\dim_{\mathbb{F}_\lambda} H^1\big(G,M\big)
\end{equation}
as $M = \mathbb{F}_\lambda$. It follows from \eqref{LF} that indeed $r(G) \geq d(G)$.
We shall now start proving that $r(G) \leq d(G)$. For that, we fix a prime $\lambda$ and a simple (and thus finite) $\mathbb{F}_\lambda \llbracket G \rrbracket$-module $M$.
By \eqref{LF}, it suffices to show that
\begin{equation} \label{Goal}
\dim_{\mathbb{F}_\lambda} H^2(G,M) - \dim_{\mathbb{F}_\lambda} H^1(G,M) \leq \textbf{1}_{M \ncong \mathbb{F}_{\lambda}} \cdot \dim_{\mathbb{F}_\lambda} M.
\end{equation}
The Lyndon-Hochschild-Serre spectral sequence, associated to \eqref{RFES} and to $M$, gives rise (see \cite{KHW12}) to the seven-term exact sequence
\begin{equation*}
\begin{split}
0 &\to H^1\big(\widehat{\mathbb{Z}}, M^N\big) \to H^1\big(G,M\big) \to H^1\big(N,M\big)^{\widehat{\mathbb{Z}}} \to
H^2\big(\widehat{\mathbb{Z}}, M^N\big) \\
&\to \mathrm{Ker}\big(H^2(G,M) \rightarrow H^2(N,M) \big) \to H^1\big(\widehat{\mathbb{Z}},H^1(N,M)\big) \to H^3\big(\widehat{\mathbb{Z}},M^N\big).
\end{split}
\end{equation*}
Since the cohomological dimension of $\widehat{\mathbb{Z}}$ is $1$,
the rightmost term in each row of the exact sequence above vanishes, so we obtain the exact sequence
\begin{equation} \label{H1}
0 \to H^1\big(\widehat{\mathbb{Z}}, M^N\big) \to H^1\big(G,M\big) \to H^1\big(N,M\big)^{\widehat{\mathbb{Z}}} \to 0
\end{equation}
and the exact sequence
\begin{equation} \label{H2}
0 \to H^1\big(\widehat{\mathbb{Z}},H^1(N,M)\big) \to H^2\big(G, M\big) \to H^2\big(N, M\big).
\end{equation}
It follows from \eqref{H1} that
\begin{equation} \label{h1}
\dim_{\mathbb{F}_\lambda} H^1\big(G,M\big) \geq \dim_{\mathbb{F}_\lambda} H^1\big(N,M\big)^{\widehat{\mathbb{Z}}}
\end{equation}
and from \eqref{H2} (combined with \cite[Proposition 1.7.7 (i)]{NSW13}) that
\begin{equation} \label{h2}
\begin{split}
\dim_{\mathbb{F}_\lambda} H^2\big(G,M\big) &\leq
\dim_{\mathbb{F}_\lambda} H^1\big(\widehat{\mathbb{Z}},H^1(N,M)\big) + \dim_{\mathbb{F}_\lambda} H^2\big(N, M\big) \\
&\leq \dim_{\mathbb{F}_\lambda} H^1\big(N,M\big)_{\widehat{\mathbb{Z}}} + \dim_{\mathbb{F}_\lambda} H^2\big(N, M\big).
\end{split}
\end{equation}
In order to establish \eqref{Goal}, we want to compare \eqref{h1} with \eqref{h2}, so we now show that
\begin{equation} \label{TP}
\dim_{\mathbb{F}_\lambda} H^1\big(N,M\big)^{\widehat{\mathbb{Z}}} = \dim_{\mathbb{F}_\lambda} H^1\big(N,M\big)_{\widehat{\mathbb{Z}}}.
\end{equation}
From Grothendieck's result, we know that $N$ is finitely generated, so $V \defeq H^1(N,M)$ is a finite dimensional vector space over $\mathbb{F}_\lambda$. Therefore, taking a generator $\sigma$ of $\widehat{\mathbb{Z}}$, and considering the linear map it induces on $V$, we find that
\begin{equation} \label{LA}
\begin{split}
\dim_{\mathbb{F}_\lambda} V_{\widehat{\mathbb{Z}}} &= \dim_{\mathbb{F}_\lambda} V/\mathrm{Im}(\sigma - 1)
= \dim_{\mathbb{F}_\lambda} V - \dim_{\mathbb{F}_\lambda} \mathrm{Im}(\sigma - 1) \\
&= \dim_{\mathbb{F}_\lambda} \mathrm{Ker}(\sigma - 1) = \dim_{\mathbb{F}_\lambda} V^{\widehat{\mathbb{Z}}}
\end{split}
\end{equation}
so we have arrived at \eqref{TP}.
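For a concrete toy illustration of \eqref{LA} (not needed for the argument): take $\lambda = 2$, let $V = \mathbb{F}_2^2$, and let $\sigma$ act by swapping the two coordinates. Then
\begin{equation*}
\sigma - 1 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
\end{equation*}
has rank $1$, so
\begin{equation*}
\dim_{\mathbb{F}_2} V_{\widehat{\mathbb{Z}}} = 2 - 1 = 1 = \dim_{\mathbb{F}_2} V^{\widehat{\mathbb{Z}}},
\end{equation*}
with both the invariants and the coinvariants represented by the line spanned by $(1,1)$.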
Subtracting \eqref{h1} from \eqref{h2}, we can therefore use \eqref{TP} to get that
\begin{equation} \label{BD}
\dim_{\mathbb{F}_\lambda} H^2\big(G,M\big) - \dim_{\mathbb{F}_\lambda} H^1\big(G,M\big) \leq
\dim_{\mathbb{F}_\lambda} H^2\big(N,M\big).
\end{equation}
We are now ready to verify \eqref{Goal}, distinguishing between several cases.
Suppose that $\lambda \neq p$. We have already treated the case $M = \mathbb{F}_\lambda$ while showing that $r(G) \geq d(G)$, so let us assume that $\textbf{1}_{M \ncong \mathbb{F}_\lambda} = 1$. As already mentioned before, $N$ is a Poincar\'{e} group of dimension $2$ at $\lambda$, so we have
\begin{equation} \label{CD}
\begin{split}
\dim_{\mathbb{F}_\lambda} H^2\big(N,M\big) &= \dim_{\mathbb{F}_\lambda} H^0\big(N, \mathrm{Hom}(M, \mathbb{F}_\lambda) \big) \\
&\leq \dim_{\mathbb{F}_\lambda} \mathrm{Hom}\big(M, \mathbb{F}_\lambda\big) = \dim_{\mathbb{F}_\lambda} M.
\end{split}
\end{equation}
Hence, in this case, \eqref{Goal} is a consequence of \eqref{BD} and \eqref{CD}.
Suppose now that $\lambda = p$.
The freeness of the maximal pro-$\lambda$ quotient of (open subgroups of) $N$ proved by Shafarevich,
implies that the cohomological $\lambda$-dimension of $N$ is $1$ (for an alternative proof, see \cite[Theorem 10.1.12 (ii)-(iii)]{NSW13}).
This implies that
\begin{equation} \label{DD}
\dim_{\mathbb{F}_\lambda} H^2\big(N,M\big) = 0.
\end{equation}
Therefore, in this case, \eqref{Goal} follows from \eqref{BD} and \eqref{DD}.
We have thus finished showing that $r(G) \leq d(G)$.
By \cite[Corollary 2.5]{Lub01}, there exists a presentation of $G$ with $d(G)$ generators and $r(G)$ relations.
We have shown that $r(G) = d(G)$, so this is a balanced presentation.
We have seen that $N$ is not procyclic, therefore
\begin{equation}
r(G) = d(G) > 1
\end{equation}
so in the language of \cite[Theorem 0.2]{Lub01}, $G$ is not $d(G)$-abelian-indexed.
Hence, it follows from \cite[Theorem 0.1]{Lub01} that the deficiency of every presentation of $G$ equals $\mathrm{def}(G)$.
That is, for every presentation
\begin{equation}
1 \to K \to F \to G \to 1
\end{equation}
where $F$ is a free profinite group, we have
\begin{equation}
d(F) - d_F(K) = \mathrm{def}(G) = 0.
\end{equation}
Here, $d_F(K)$ is the number of relations in the presentation, namely the least cardinality of a generating set for $K$ as a normal subgroup of $F$.
\section{Deducing \corref{cor}}
An extension of the above proof that $r(G) \leq d(G)$ shows that any finitely generated $G_0 \leq_c G$ is finitely presented.
Indeed, setting $N_0 \defeq G_0 \cap N$ we get the exact sequence
\begin{equation}
1 \to N_0 \to G_0 \to C \to 1
\end{equation}
where $C$ is a procyclic group.
Fixing a prime number $\lambda$, and a simple $\mathbb{F}_\lambda \llbracket G_0 \rrbracket$-module $M$,
we arrive (arguing as in \eqref{h1} and \eqref{h2}) at the bound
\begin{equation*}
\dim H^2\big(G_0,M\big) - \dim H^1\big(G_0,M\big) \leq
\dim H^2\big(N_0,M\big) + \dim V_{C} - \dim V^{C}
\end{equation*}
where (as before) $V \defeq H^1(N_0, M)$, and dimensions are taken over $\mathbb{F}_\lambda$.
Let $W$ be a finite dimensional quotient of $V$ on which $C$ acts trivially.
As $V$ is a direct limit of finite $C$-modules, there exists a finite dimensional $C$-submodule $U$ of $V$ that surjects onto $W$.
Arguing as in \eqref{LA}, we get
\begin{equation}
\dim W \leq \dim U_C = \dim U^C \leq \dim V^C
\end{equation}
so it follows that
\begin{equation} \label{WE}
\dim V_C \leq \dim V^C.
\end{equation}
Since $G_0$ is finitely generated, it follows from (the analog of) \eqref{h1} that $\dim V^C$ is finite,
so combining the first inequality of this section with \eqref{WE}, we obtain
\begin{equation} \label{BDD}
\dim H^2\big(G_0,M\big) - \dim H^1\big(G_0,M\big) \leq \dim H^2\big(N_0,M\big)
\end{equation}
which is the analog of \eqref{BD}.
In \eqref{CD} and \eqref{DD}, it is shown that
\begin{equation} \label{H2BB}
\dim H^2(L, M) \leq \dim M
\end{equation}
for every open subgroup $L$ of $N$ (to which the action of $N_0$ on $M$ extends).
Writing $N_0$ as the inverse limit of a family of open subgroups of $N$,
we conclude from \eqref{H2BB} (using \cite[Proposition 1.5.1]{NSW13}) that
\begin{equation} \label{H2BBB}
\dim H^2(N_0, M) \leq \dim M.
\end{equation}
Combining \eqref{H2BBB} with \eqref{BDD}, we conclude from \eqref{LF} (or from \cite[Theorem 0.3]{Lub01}) that $G_0$ is finitely presented.
\section*{Acknowledgments}
Mark Shusterman is grateful to Melanie Matchett Wood for all the discussions and encouragement in the course of the work on this paper.
\section{Introduction}
Analysis of modal content of the response of dynamic systems is of interest in many application areas, ranging from electric power networks to vibration of structures. Many approaches to modal analysis occur in the literature. For linear time invariant systems, modal content consists of the eigenmodes, and can be studied analytically. For nonlinear systems, the possibility of global oscillations gives rise to global oscillatory modes that might not be connected to the eigenmodes of the system's linearization at an equilibrium. In this paper, we focus on local modal analysis of nonlinear autonomous systems near an equilibrium, paying particular attention to what can be viewed as eigenmodes in a neighborhood of the equilibrium point of interest. The aim of the paper is to explore the possibility of extending to the nonlinear setting the modal participation analysis pursued by Hashlamoun, Hassouneh and Abed \cite{abed} for linear systems. This analysis attempts to systematically quantify the relative contributions of system modes to system states, and of system states to system modes. Here, system states refers to the scalar elements of the system state vector.
In the early 1980s, Verghese, Perez-Arriaga and Schweppe \cite{Perez-Arriaga1,Perez-Arriaga2} introduced quantities they referred to as modal participation factors. These quantities have been used widely, especially in the electric power systems field. In 2009, the authors of \cite{abed} presented a new approach to the fundamental definition of modal participation factors. The idea of modal participation factors, which will be reviewed further in the next section, is to give measures of the relative contribution of system modes in system states, and of system states in system modes. In \cite{abed}, such measures are developed by taking an average of relative contribution measures over an uncertain set of system initial conditions. The idea is that fixing the system initial condition affects the modal participations, and that initial conditions are in reality uncertain, indeed possibly random due to inherent noise. Indeed, if one takes a view that the initial time also isn't fixed, noise can be viewed as having the effect of allowing the initial condition to be re-set over time, effectively allowing the initial condition to explore a neighborhood of an equilibrium point over a short time interval. By taking an averaging approach, the authors of \cite{abed} find that a dichotomy arises in this new view of modal participation factors. In this dichotomy, participation factors measuring {\it mode-in-state participation} need to be viewed as distinct from participation factors measuring {\it state-in-mode participation}. This dichotomy was not recognized prior to \cite{abed}, and a single formula was previously used to quantify both types of modal participation.
In \cite{abed}, it was found that analytical formulas for mode-in-state participation factors fell out of the analysis very nicely, under basic symmetry assumptions on the distribution of the initial state. {The same symmetry assumptions did not allow for a similarly simple derivation of state-in-mode participation factors, and when a formula was obtained in a particular scenario on the initial state, that formula was more complicated than for the mode-in-state case and did not share the desirable property of being invariant under rescaling of the system state variables (i.e., the formula was not invariant under changes of state variable units). This issue is now better understood by the authors, and will be reported on elsewhere.}
Here, we explore extension of the work in \cite{abed}, {especially for the analysis of mode-in-state participation}, to the nonlinear setting, for local system behavior near an equilibrium point. We are able to give an analysis and derivation of formulas for mode-in-state participation factors (under basic symmetry assumptions as in the linear case).
This work follows a different approach to defining participation factors for nonlinear models than that pursued in \cite{vittal}, where modal participation was studied from a fixed initial state using Taylor series methods.
{Before proceeding to the development of the paper, it is perhaps useful to provide a brief discussion of studies on modal participation, addressing motivation of researchers on this topic, the various approaches taken in different disciplines, and applications that have been pursued.}
{The present work is motivated by the original work of Verghese, Perez-Arriaga and Schweppe \cite{Perez-Arriaga1,Perez-Arriaga2} that was mentioned above. The authors of \cite{Perez-Arriaga1,Perez-Arriaga2} introduced their notion of modal participation factors as a tool to aid in modal analysis of large power grids, with benefits anticipated in tasks such as model order reduction and control design. Oscillatory modes are common in power systems, and it is important to have systematic tools for their analysis. Since power grids consist of interconnected areas and can cover large expanses of territory (indeed entire continents), engineers are naturally interested in obtaining reduced models that capture modes of special interest. The modal participation factors of \cite{Perez-Arriaga1,Perez-Arriaga2} were employed for this purpose, in an overarching framework that the authors referred to as Selective Modal Analysis (SMA) (a recent review of SMA is \cite{verghese_sma2013}). In addition, modal analysis in a power grid should provide tools for determining the best sites for insertion of actuators to control modes that may be troubling or dangerous, or for determining the best locations for placing measurement devices that allow system operators to monitor such modes in real time. An example of a recent application of the concepts in \cite{Perez-Arriaga1,Perez-Arriaga2} to power systems is \cite{setiada2018}, which focuses on power grids with significant levels of renewable generation. Early examples of work on actuator placement in power networks using the original modal participation concept include \cite{kundur1992,yakout1994}. Recent examples of modal participation studies in power systems motivated by the more recent approach of Hashlamoun, Hassouneh and Abed \cite{abed} include \cite{ramos,moraco,hill,netto}. The approach has also been applied in power electronics \cite{powerelectronics} and electromagnetic devices \cite{cenedese2016}.}
{The term ``modal participation factors" is also commonly used in the field of structural analysis, with applications in mechanical, aerospace, and civil engineering. The concept of modal participation factors introduced in electric power engineering in \cite{Perez-Arriaga1,Perez-Arriaga2} was developed independently of the notion used in structural analysis. Modal participation factors as studied in structural analysis have been used, for example, to study vibrations of tall buildings \cite{park} and rotorcraft dynamics \cite{prasad}. The concepts of modal participation factors in electric power engineering and in structural analysis are distinct. In the structural analysis framework, the focus has been on the impact of forcing functions on modal response. In contrast, in the electric power engineering concept, a large ostensibly autonomous dynamic system is considered (motivated by the driving power grid application). Bridging between these frameworks could be a fruitful area for future investigation. The two types of modal participation factors (in electric power engineering and related control theory literature, and in structural analysis) are not absolute by any means. These concepts are definitions deemed suitable for various purposes by their authors and employed over many years by practitioners in the respective fields. Later researchers have at times proposed modifications to address a perceived need for improvements. For example, in structural analysis, Chopra \cite{chopra96} introduced a new notion of modal participation factor aiming to make major improvements to the standard definition used in that field, including providing a more clear measure of modal contribution of an external forcing function and removing unit dependence from the standard notion. Similarly, the original concept of \cite{Perez-Arriaga1,Perez-Arriaga2} in electric power engineering has been revisited in \cite{abed}, as noted above. In the remainder of the paper, we focus on the modal participation factors concepts that have been used in electric power engineering, beginning with the work of \cite{Perez-Arriaga1,Perez-Arriaga2} and continuing with the work of \cite{abed}. The concepts from structural analysis were mentioned above to provide context for this work in the larger literature, but will not be addressed in the technical work in this paper.}
The remainder of the paper is organized as follows. In Section 2, needed background material is recalled. In Section 3, mode-in-state participation factors are defined for nonlinear systems in the vicinity of an equilibrium point, under a symmetry assumption on the uncertainty in the system initial condition.
Conclusions and issues for further research are discussed in Section 4.
A preliminary version of this paper appeared in \cite{bh_abed_cdc}.
\section{Background}
{In this section, we give background material that is relevant to our investigation. In particular, we recall modal participation factors (the original definition as well as the more recent work as described above). We also sample some of the applications of modal participation factors, including very recent references to the literature. Finally, we recall two fundamental theorems on local representations of nonlinear autonomous systems. As noted above, the remainder of the paper focuses on modal participation analysis as pursued in the electric power engineering literature and related work in control theory, and on extending the concepts given in \cite{abed} to a nonlinear setting.}
\subsection{Modal Participation Factors for Linear Systems: Original Definition \cite{Perez-Arriaga1, Perez-Arriaga2}}
Let $\Sigma_L$ denote the linear time-invariant system
\[\label{sigma_l}\Sigma_L: \dot{x}=Ax\]
where $x \in \mathbb{R}^n$ and the state dynamics matrix $A \in \mathbb{R}^{n \times n}$ has $n$ distinct eigenvalues $\lambda_i$, $i=1,\ldots,n$.
The system state $x(t)$ of course consists of a linear combination of exponential functions \[x^i(t) = e^{\lambda_i t} c^i,\] where the vectors $c^i$ are determined by the system's initial condition $x(0)$. These functions are the \emph{system modes} and are useful in \emph{modal analysis of linear systems}.
Let $r^i$ be the right eigenvector of the matrix $A$ associated with eigenvalue $\lambda_i$, $i=1,\cdots,n$, and let $\ell^i$ be the left (row) eigenvector of $A$ associated with the eigenvalue $\lambda_i$, $i=1,\cdots,n$. The right and left eigenvectors are taken to satisfy the normalization
\[\ell^i r^j=\delta_{ij}, \]
where $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ if $i \ne j$.
Given a linear system $\dot{x}=Ax$ with initial condition $x(0)=x^0$, its solution can be written as
\[x(t)=e^{At}x^0=\sum_{i=1}^n (\ell^i x^0)e^{\lambda_i t} r^i.\]
The $k-$th state variable evolves according to
\[x_k(t)=(e^{At}x^0)_k=\sum_{i=1}^n (\ell^i x^0)e^{\lambda_i t} r^i_k\]
Using these facts and taking {two scenarios with rather special initial conditions}, Verghese, Perez-Arriaga and Schweppe \cite{Perez-Arriaga1,Perez-Arriaga2} motivated the following definition of quantities $p_{ki}$ which they named modal participation factors:
\begin{eqnarray}
\label{eq:pki}
p_{ki}:= \ell_k^i r_k^i
\end{eqnarray}
Choosing the initial condition to be $x^0=e_k$, the unit vector {along} the $k$-th coordinate axis, the authors of \cite{Perez-Arriaga1,Perez-Arriaga2} gave reasoning for considering the quantities $p_{ki}$ as mode-in-state participation factors. The scalars $p_{ki}$ are dimensionless. Next, employing a coordinate transformation to focus on the system modes and considering instead an initial condition $x^0 = r^i$, the right eigenvector corresponding to $\lambda_i$, the quantities $p_{ki}$ were also given an interpretation as state-in-mode participation factors. Thus, it has been very common in papers and books using modal participation factor analysis to interchangeably refer to participation of modes in states and participation of states in modes, always using the same formula
(\ref{eq:pki}) for both types of participation measure.
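To make the normalization and the formula concrete, the following short numerical sketch (our own illustration, not code from \cite{Perez-Arriaga1,Perez-Arriaga2}) computes the matrix of participation factors $p_{ki}=\ell^i_k r^i_k$ for a matrix with distinct real eigenvalues.
\begin{verbatim}
import numpy as np

def participation_factors(A):
    """p[k, i] = l^i_k * r^i_k, with left/right eigenvectors
    normalized so that l^i r^j = delta_ij (distinct eigenvalues)."""
    eigvals, R = np.linalg.eig(A)  # columns of R: right eigenvectors r^i
    L = np.linalg.inv(R)           # rows of L: left eigenvectors l^i,
                                   # so that L @ R = I automatically
    return L.T * R, eigvals        # elementwise: p[k,i] = L[i,k]*R[k,i]

A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])
P, lam = participation_factors(A)
print(lam)   # eigenvalues
print(P)     # rows: states x_k, columns: modes i
\end{verbatim}
Under the normalization $\ell^i r^j = \delta_{ij}$ one has $\sum_i p_{ki} = 1$ and $\sum_k p_{ki} = 1$, so each row and each column of the computed matrix sums to one, which serves as a quick consistency check.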
\subsection{Modal Participation Factors for Linear Systems: Recent Approach (\cite{abed})}
{In \cite{abed}, it was argued that a deeper analysis of modal participation would not necessarily lead to identical measures for mode in state and state in mode participation factors. This issue continues to deserve the attention of the control, dynamics, and power systems research communities, largely because of the importance of modal analysis in many complex systems, in power engineering and in other application areas.}
{Simple examples were used in \cite{abed} to motivate the need for a new approach to defining modal participation factors. In fact, the examples indicated that it would be desirable to achieve definitions that gave different measures} for mode-in-state participation factors and state-in-mode participation factors. Indeed, the examples showed that, especially when quantifying the contribution of system states in system modes, the formula (\ref{eq:pki}) could well fall short of giving an intuitively acceptable result. Thus, new fundamental definitions were given based on averaging over the system initial condition, taken to be uncertain.
The linear system
\[\dot{x}=A x\]
usually represents the small perturbation dynamics near an equilibrium. The initial condition for such a perturbation is usually viewed as being an uncertain vector of small norm. In \cite{abed}, new definitions of mode-in-state and state-in-mode participation factors were given using deterministic {(i.e., set-theoretic)} and probabilistic uncertainty models for the initial condition.
\begin{definition}
\label{def-set}
In the set-theoretic formulation, the participation
factor measuring relative influence of the mode associated with
$\lambda_i$ on state component $x_k$ is
\begin{eqnarray}
p_{ki} &:=& \avg \frac{(\ell^i x^0) r^i_k}{x^0_k}
\label{eq:defmode-in-state}
\end{eqnarray}
whenever this quantity exists. Here, $x_k^0 = \sum_{i=1}^n (\ell^i x^0)
r^i_k$ is the value of $x_k(t)$ at $t = 0$, and ``${\rm avg}_{x^0
\in {\cal S}}$'' is an operator that computes the average of a {scalar}
function over a set ${\cal S}\subset \mathbb{R}^n$ (representing the set of
possible values of the initial condition $x^0$).
\end{definition}
With a probabilistic description of the uncertainty in the initial
condition $x^0$, the average in~(\ref{eq:defmode-in-state}) is
replaced {in \cite{abed}} by a mathematical expectation:
\begin{definition}
\label{def-prob}
The general formula for the
participation factor $p_{ki}$ measuring participation of mode $i$ in
state $x_k$ becomes
\begin{eqnarray}
p_{ki} &:=& E ~ \left\{ \frac{(\ell^i x^0) r^i_k}{x^0_k}\right\}
\label{eq:def-prob}
\end{eqnarray}
where the expectation is evaluated using some assumed joint
probability density function $f(x^0)$ for the initial condition
uncertainty. (Of course, this definition applies only when the
expectation exists.)
\end{definition}
In \cite{abed}, it was found that both Definition~ \ref{def-set} (Eq. (\ref{eq:defmode-in-state})) and Definition~\ref{def-prob} (Eq. (\ref{eq:def-prob})) lead to a simple result that agrees with Eq.~(\ref{eq:pki}) under a symmetry assumption on the uncertainty in the initial condition. In the set-theoretic definition, the symmetry assumption is that the
initial condition uncertainty set ${\cal S}$ is symmetric with
respect to each of the hyperplanes $\{x_k=0\}$, $k=1,\dots,n$. In the probabilistic setting of Definition~\ref{def-prob}, the assumption is that the initial condition
components $x^0_1,x^0_2,\dots,x^0_n$ are independent random variables with marginal density functions which are symmetric with respect to $x_k^0=0$, $k=1,2,\cdots,n$, or are jointly uniformly distributed over a sphere centered at the origin. Under either the set-theoretic or probabilistic symmetry assumption, it was found in \cite{abed} that
the same expression originally introduced by Perez-Arriaga, Verghese
and Schweppe~\cite{Perez-Arriaga1,Perez-Arriaga2} results as a measure of mode-in-state participation factors:
\begin{eqnarray}
p_{ki} &=& \ell^i_k r^i_k. \label{eq:pf-classical}
\end{eqnarray}
\subsection{State-in-Mode Participation Factors}
Hashlamoun, Hassouneh and Abed \cite{abed} also gave similar set-theoretic and probabilistic definitions for {state-in-mode} participation factors for linear systems. The calculations were found to be less straightforward than for the mode-in-state participation factors setting, even under {the same} symmetry assumption {on the initial condition as used in the mode-in-state participation factor calculation}. We will not recall the details of the development of state-in-mode participation factors for linear systems from \cite{abed}. We will simply recall from \cite{abed} the general definition and an associated result for the case of distinct real eigenvalues to have an idea of the nature of the results.
\begin{definition}
\label{def-modeinstate}
The participation factor of state $x_k$ in mode $i$ is
\[\pi_{ki} := \mbox{E}\bigg\{\frac{\ell_k^i x_k^0}{\sum_{j=1}^n (\ell_j^i x_j^0)}\bigg\}=\mbox{E}\bigg\{\frac{\ell_k^i x_k^0}{z_i^0}\bigg\},\]
whenever this expectation exists, where $z_i^0=z_i(0)=\ell^i x^0$, and where $z_i(t)$ is the $i^{th}$ system mode
\[z_i(t)=e^{\lambda_i t}\ell^i x^0=e^{\lambda_i t} \sum_{j=1}^n (\ell_j^i x_j^0). \]
\end{definition}
It was shown in \cite{abed} that
\begin{eqnarray*}
\pi_{ki}&=& \mbox{E}\bigg\{\frac{\ell_k^i x_k^0}{\sum_{j=1}^n (\ell_j^i x_j^0)}\bigg\} \\
&=& \ell_k^i r_k^i + \sum_{j=1, j \ne i}^n \ell_k^i r_k^j E\bigg\{ \frac{z_j^0}{z_i^0}\bigg\}
\end{eqnarray*}
Note that the first term in the expression for $\pi_{ki}$ coincides with $p_{ki}$, the original participation factors formula. However, the second term does not vanish in general. This is true even when the components $x_1^0$, $x_2^0$,
$\cdots$, $x_n^0$ representing the initial conditions of the state are assumed to be independent.
Assuming that the units of the state variables have been scaled to ensure that the probability
density function $f(x^0)$ is such that the components
$x^0_1,x^0_2,\dots,x^0_n$ are jointly uniformly distributed over the
unit sphere in $R^n$ centered at the origin, modal participation factors were evaluated in \cite{abed} using Definition \ref{def-modeinstate}, yielding the following explicit formula that is applicable under the foregoing uncertainty model for the system initial state.
\begin{proposition} (\cite{abed})
Under the assumption that the initial condition has a uniform probability density on a sphere centered at the origin, the participation factor of state $x_k$ in mode $i$ is
\begin{eqnarray}
\pi_{ki} &=& \ell^i_k r^i_k ~+~ \sum_{j=1,~j\neq i}^{n} \ell^i_k r^j_k
\frac{\ell^j(\ell^i)^T}{\ell^i(\ell^i)^T}. \label{eq:st-in-mode-res}
\end{eqnarray}
\end{proposition}
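Continuing the numerical sketch given after Eq.~(\ref{eq:pki}), formula (\ref{eq:st-in-mode-res}) can be evaluated directly; this is again our own illustration (not code from \cite{abed}) and assumes real, distinct eigenvalues.
\begin{verbatim}
import numpy as np

def state_in_mode_pf(A):
    """pi[k, i] as in the proposition (real, distinct eigenvalues)."""
    eigvals, R = np.linalg.eig(A)
    L = np.linalg.inv(R)     # rows l^i, normalized so l^i r^j = delta_ij
    G = L @ L.T              # G[j, i] = l^j (l^i)^T
    n = A.shape[0]
    pi = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            # the j = i term reproduces l^i_k r^i_k (G[i,i]/G[i,i] = 1)
            pi[k, i] = sum(L[i, k] * R[k, j] * G[j, i] / G[i, i]
                           for j in range(n))
    return pi, eigvals
\end{verbatim}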
\subsection{Poincar\'e Linearization}
Poincar\'e linearization is a well known technique for transforming an autonomous nonlinear system into a locally equivalent linear system via diffeomorphism. The technique is useful in this paper for extending the definitions of mode-in-state participation factors proposed in \cite{abed} to the nonlinear setting. In the following, we review the technique.
Consider a nonlinear {system of ordinary differential equations} \[\label{nonlinear_ode} \dot{x}=f(x),\]
where $x \in \mathbb{R}^n$ and $f$ is an analytic vector field on $\mathbb{R}^n$. Let $A=\frac{\partial f}{\partial x}|_{x=0}$ be the Jacobian of $f$ at the origin.
\begin{definition}(\cite{arnold})
Given a matrix $A \in \mathbb{R}^{n \times n}$ with eigenvalues $\lambda_i$, $i=1,\cdots,n$, we say that the $n$-tuple $\lambda=(\lambda_1,\cdots,\lambda_n)$ is resonant if among the eigenvalues there exists a relation of the form
\[(m,\lambda)=\sum_{k=1}^n m_k \lambda_k=\lambda_s, \]
where $m=(m_1,\cdots,m_n)$, $m_k \ge 0$, $\sum_k m_k \ge 2$. Such a relation is called a \emph{resonance}.
The number $|m|=\sum_{k=1}^n m_k$ is called the \emph{order} of the resonance.
\end{definition}
\begin{example} (\cite{arnold})
The relation $\lambda_1=2 \lambda_2$ is a resonance of order $2$; the relation $2 \lambda_1= 3 \lambda_2$ is not a resonance; the relation $\lambda_1+\lambda_2=0$, or equivalently $\lambda_1=2 \lambda_1+\lambda_2$, is a resonance of order $3$.
\end{example}
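As a small computational aside (our own sketch, not part of the cited theory), low-order resonances among a given tuple of eigenvalues can be detected by brute-force enumeration of the multi-indices $m$:
\begin{verbatim}
import itertools

def resonances(lams, max_order=4, tol=1e-9):
    """Relations lambda_s = sum_k m_k*lambda_k with 2 <= |m| <= max_order."""
    n, found = len(lams), []
    for order in range(2, max_order + 1):
        for m in itertools.product(range(order + 1), repeat=n):
            if sum(m) != order:
                continue
            val = sum(mk * lk for mk, lk in zip(m, lams))
            for s, ls in enumerate(lams):
                if abs(val - ls) < tol:
                    found.append((s, m, order))
    return found

print(resonances([1.0, 2.0]))    # lambda_2 = 2*lambda_1: order-2 resonance
print(resonances([2.0, 3.0]))    # empty: no low-order resonance
print(resonances([1.0, -1.0]))   # lambda_1 + lambda_2 = 0: order-3 resonances
\end{verbatim}
The three calls reproduce, respectively, a resonance of order two, the absence of low-order resonances, and the order-three resonances arising from $\lambda_1+\lambda_2=0$.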
\smallskip\noindent
\begin{theorem}[Poincar\'e's Theorem \cite{arnold}] If the eigenvalues of the matrix $A$ are nonresonant, then the nonlinear ODE
\[\dot{x}=Ax+O(|| x||^2 )\]
can be reduced to the linear ODE
\[\label{linear_ode}\dot{y}=Ay \]
by a formal change of variable $x=y+\cdots$ (the dots denote series starting with terms of degree two or higher).
\end{theorem}
If the $n$-tuple $\lambda=(\lambda_1,\cdots,\lambda_n)$ is resonant, we will say that
$$x^m:=x_1^{m_1}\cdots x_n^{m_n} e_s$$
is resonant if $\lambda_s=(m,\lambda)$, $|m| \ge 2$ with $e_i$ a vector in the eigenbasis of $A$ and $x_i$ are the coordinates with respect to the basis $e_i$. For example, for the resonance $\lambda_1=2 \lambda_2$, the unique resonant monomial is $x_2^2 e_1$. For the resonance $\lambda_1+\lambda_2=0$, all monomials $(x_1 x_2)^k x_s e_s$ are resonant \cite{arnold}.
\begin{theorem}[Poincar\'e-Dulac Theorem \cite{arnold}] If the eigenvalues of the matrix $A$ are resonant, then the nonlinear ODE
\[\dot{x}=Ax+\cdots \]
can be reduced to the ODE
\[\dot{y}=Ay+w(y) \]
by a formal change of variable $x=y+\cdots$ (the dots denote series starting with terms of degree two or higher), where all monomials in the series $w$ are resonant.
\end{theorem}
There are also several convergence results associated with Poincar\'e linearization, of which the following is the most well known.
\smallskip\noindent
\begin{theorem}[Poincar\'e-Siegel]
Suppose the eigenvalues $\{\lambda_i\}$, $i=1,\cdots,n$, of the linear part of an analytic vector field at an equilibrium point are nonresonant and either $\mbox{Re}(\lambda_i) >0$, $i=1,\cdots,n$, or $\mbox{Re}(\lambda_i) < 0$, $i=1,\cdots,n$, or the $(\lambda_i)$ satisfy the Siegel condition, i.e. are such that there exists $C>0$ and $\nu$ such that for all $i=1,\cdots,n$
\[|\lambda_i-(m,\lambda)| \ge \frac{C}{|m|^{\nu}} \]
for all $m=(m_1,\cdots,m_n)$, where $(m_i)$ are nonnegative integers with $|m|=\sum_{i=1}^n m_i \ge 2$. Then the power series in Poincar\'e's Theorem converges in some neighbourhood of the equilibrium point.
\end{theorem}
\begin{remark} There are also some convergence results in the case of resonant eigenvalues; the reader is encouraged to consult \cite{arnold} for further details on Poincar\'e linearization.\end{remark}
\subsection{Hartman-Grobman Theorem}
Another very important result in the
local qualitative theory of nonlinear ordinary differential equations is the Hartman-Grobman Theorem, which
says that near a hyperbolic equilibrium point $x^e$, the nonlinear system (\ref{nonlinear_ode})
has the same qualitative structure as the linear system (\ref{linear_ode}).
\begin{theorem}\cite{perko}
Let $E$ be an open subset
of $\mathbb{R}^n$ containing the origin, let $f \in {C}^1(E)$, and let $\phi_t$ be the flow of the
nonlinear system (\ref{nonlinear_ode}). Suppose that $f(0) = 0$ and that the matrix $A = Df(0)$
has no eigenvalue with zero real part. Then there exists a homeomorphism
$\varphi$ of an open set $U$ containing the origin onto an open set $V$ containing the origin such that for each $x^0 \in U$, there is an open interval $I_0 \subset \mathbb{R}$
containing zero such that for all $x^0 \in U$ and $t \in I_0$
\[\varphi \circ \phi_t(x^0) = e^{At}\varphi(x^0),\]
i.e., $\varphi$ maps trajectories of (\ref{nonlinear_ode}) near the origin onto trajectories of (\ref{linear_ode}) near
the origin.
\end{theorem}
\section{Mode-in-State Participation Factors for Nonlinear Systems}
Consider a nonlinear ODE
\[\label{nlsys}\dot{x}=f(x)\]
with $f \in {\cal C}(\mathbb{R}^n;\mathbb{R}^n)$, $f(0)=0$, and consider the Taylor expansion of $f$ around the origin
\[\label{taylor_exp}\dot{x}=Ax+\tilde{f}^{[2]}(x)+O(||x||^3)\]
where $A=\frac{\partial f}{\partial x}|_{x=0}$ and $\tilde{f}^{[2]}$ represents terms of order 2. We have the following result.
\begin{theorem}If the eigenvalues of $A$ are nonresonant (resp. satisfy one of the conditions of the Poincar\'e-Siegel Theorem) then there exists a diffeomorphism that formally (resp. analytically) transforms the nonlinear ODE (\ref{nlsys}) into a linear ODE. In this case, the mode-in-state participation factors of (\ref{nlsys}) are the same as those of the linearized system $\dot{x}=Ax$.
\end{theorem}
\begin{proof}
First, we normalize $A$ using the change of coordinates
\[\label{chancor1} z=V^{-1}x,\]
where $V=[r^1\, r^2 \, \cdots r^n]$ represents the matrix of right eigenvectors of $A$. Under the change of coordinates (\ref{chancor1}) the ODE (\ref{taylor_exp}) becomes
\[\dot{z}=\Lambda z + V^{-1} \tilde{f}^{[2]}(Vz)+O(||z||^3) :=\Lambda z + f^{[2]}(z)+O(||z||^3)\]
Next, we normalize the higher order terms through the change of coordinates
\[ \label{chancor2} \tilde{z}=\phi(z)=z+\phi^{[2]}(z)+O(||z||^3)=z+z^T\left[\begin{array}{c} P_1\\ \vdots \\ P_n\end{array} \right] z+O(||z||^3)\]
where $\phi \in {\cal C}(\mathbb{R}^n;\mathbb{R}^n)$. Using Poincar\'e linearization, we know that if the eigenvalues of $A$ are nonresonant, then there is a formal change of coordinates $\phi$ such that the trajectories of (\ref{nlsys}) are locally diffeomorphic to the trajectories of
\[\label{linsys} \dot{\tilde{z}}=\Lambda \tilde{z}\]
If $\Lambda=\mbox{diag}(\lambda_i)|_{i=1}^n$ , then
\[\tilde{z}(t)=e^{\Lambda t} \tilde{z}(0), \]
whose $i-$th component is
\[\tilde{z}_i(t)=e^{\lambda_i t}\tilde{z}_i(0).\]
Using (\ref{chancor2}), we get $z(t)=\phi^{-1}(e^{\Lambda t}\phi(z^0))$, which can be rewritten as
\[z(t)=e^{\Lambda t}\phi(z^0)-\phi(z^0)^Te^{\Lambda^T t} \left[\begin{array}{c} P_1\\ \vdots \\ P_n\end{array} \right] e^{\Lambda t}\phi(z^0) +O(||z||^3),\]
and
\[z_i(t)=e^{\lambda_i t}\phi_i(z^0)-\phi^T(z^0)e^{\Lambda^T t}P_ie^{\Lambda t}\phi(z^0)+\cdots \]
Using (\ref{chancor1}), we get
\[x_k(t)=\left[ \begin{array}{ccc} r^1 \cdots r^n\end{array} \right]_{\scriptsize \mbox{k-th row} }\left[\begin{array}{c}z_1\\ \vdots\\ z_n \end{array} \right]=\sum_i r_k^i z_i(t) \]
\[=\sum_i r_k^i (e^{\lambda_i t}\phi_i(z^0)-\phi^T(z^0)P_i\phi(z^0))+\cdots\]
It is instructive to consider the linear case first. We set $P_i=0$ and the higher order terms are also set to zero in (\ref{chancor2}). This gives
\[x_k(t)=\sum_{i=1}^nr_k^i e^{\lambda_i t}\phi_i(z^0)=\sum_{i=1}^nr_k^ie^{\lambda_i t}\ell^i x^0 \]
Then the participation of the $e^{\lambda_i t}$ mode in the state $x_k(t)$ is \[p_{ki}:=\mbox{avg}\frac{e^{\lambda_i t}r_k^i\ell^i x^0}{x_k(t)}|_{t=0}=\ell_k^i r_k^i\] (agreeing, of course, with the previous calculation of \cite{abed} in the linear case).
Next, we consider the nonlinear setting, where we assume that $P_i \ne 0$. The participation of $e^{\lambda_i t}$ in $x_k(t)$ is obtained using the set-theoretic definition as follows (quantities are evaluated at time $t=0$):
$$\mbox{avg}\frac{e^{\lambda_i t}r_k^i\phi_i(z^0)}{x_k(t)}|_{t=0}=\mbox{avg}\frac{e^{\lambda_i t}r_k^i\phi_i(z^0)}{\sum_{i=1}^nr_k^i(e^{\lambda_it}\phi_i(z^0)-\sum_{j,m}\theta_{j,m}e^{(\lambda_j+\lambda_m)t})}|_{t=0}$$
Since $\phi_i(z^0)=\ell^ix^0+\cdots$, we have
\[\sum_{j,m}\theta_{j,m}=\sum_{j,m}\phi_j(z^0)\phi_m(z^0)p_{j,m}= \sum_{j,m} (\ell^jx^0)(\ell^mx^0)p_{j,m}\]
Hence, the participation of the mode $e^{\lambda_i t}$ in $x_k(t)$ is
\begin{eqnarray*}p_{ki}&:=&\mbox{avg}\frac{e^{\lambda_i t}r_k^i\phi_i(z^0)}{x_k(t)}|_{t=0}\\ &=& \mbox{avg}\frac{e^{\lambda_i t}r_k^i\phi_i(z^0)}{\sum_{i=1}^nr_k^i(e^{\lambda_it}\phi_i(z^0)-\sum_{j,m}\theta_{j,m}e^{(\lambda_j+\lambda_m)t})}|_{t=0}=r_k^i \ell_k^i. \end{eqnarray*}
\end{proof}
Perhaps somewhat surprisingly, under the assumptions made, the mode-in-state participation factors are seen to agree with those of the linearized system.
\paragraph{Example}
Consider a nonlinear system whose linear part is from an example in \cite{abed}:
\[ \label{nl1}\dot{x}=\underbrace{\left[ \begin{array}{cc} a & b \\ 0 & d \end{array}\right]}_{A_1}x+\Psi(x),
\]
with $\Psi$ a polynomial of order $N \ge 2$. If $a \ne m \cdot d$ for any $m \in \mathbb{N}$, then the eigenvalues of the matrix $A_1$ are nonresonant and, therefore, by Poincar\'e's Theorem there exists a formal transformation that transforms (\ref{nl1}) to
\[\label{poincare_lin1}\dot{z}=Az. \]
Furthermore, if $\lambda_1=a$ and $\lambda_2=d$
satisfy one of the conditions of the Poincar\'e-Siegel Theorem, then the transformation is analytic. In both cases, the mode-in-state participation factors of (\ref{nl1}) are locally equal to the mode-in-state participation factors of the linear system (\ref{poincare_lin1}).
A similar result holds for the following nonlinear system, whose linear part is from another example of \cite{abed}:
\[ \label{nl2}\dot{x}=\underbrace{\left[ \begin{array}{cc} 1 & 1 \\ -d & -d \end{array}\right]}_{A_1}x+\Psi(x),
\]
with $d \ne 1$ (nonresonance condition) and $\Psi$ is a polynomial of order $N \ge 2$.
If the eigenvalues are resonant and the origin is hyperbolic, we can still say something about the mode-in-state participation factors.
\begin{theorem} \cite{perko} If the origin is a hyperbolic equilibrium point, then there exists a homeomorphism that transforms the nonlinear ODE (\ref{nlsys}) into the linear ODE (\ref{linsys}). In this case, the mode-in-state participation factors of (\ref{nlsys}) are the same as those of the linearized system $\dot{x}=Ax$.
\end{theorem}
\begin{proof}
First, we normalize $A$ using the change of coordinates (\ref{chancor1}) where $V=[r^1\, r^2 \, \cdots r^n]$ represents the matrix of right eigenvectors of $A$. Under the homeomorphism in the Hartman-Grobman Theorem, the ODE (\ref{nlsys}) becomes
\[\dot{z}=\Lambda z \]
The proof regarding mode-in-state participation factors comes directly from applying the result for the linear case in Section 3.
\end{proof}
\paragraph{Example}\cite{perko} Consider the system \begin{eqnarray}\label{eqn1} \dot{y}&=&-y \\ \label{eqn1a} \dot{z}&=&z+y^2 \end{eqnarray}
It can be shown \cite{perko} that with the homeomorphism
\[\phi(y,z)=\left[ \begin{array}{c}y \\ z+\frac{y^2}{3} \end{array}\right] \]
the solution of (\ref{eqn1})-(\ref{eqn1a}) is homeomorphic to the solution of
\begin{eqnarray}\label{eqn2} \dot{y}&=&-y \\ \dot{z}&=&z \end{eqnarray}
and, therefore, the mode-in-state participation factors of the nonlinear system are the same as those of the linearized system.
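A quick numerical check of this example (our own sketch, using SciPy) confirms that pushing the numerically integrated nonlinear trajectory through $\phi$ reproduces the flow of the linearized system:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def nonlinear(t, s):
    y, z = s
    return [-y, z + y**2]

phi = lambda y, z: (y, z + y**2 / 3.0)   # homeomorphism from the example

y0, z0 = 1.5, -0.2
ts = np.linspace(0.0, 2.0, 50)
sol = solve_ivp(nonlinear, (0.0, 2.0), [y0, z0], t_eval=ts,
                rtol=1e-10, atol=1e-12)

u0, v0 = phi(y0, z0)
u_lin, v_lin = u0 * np.exp(-ts), v0 * np.exp(ts)  # linearized flow
u_num, v_num = phi(sol.y[0], sol.y[1])            # nonlinear flow through phi

print(np.max(np.abs(u_num - u_lin)),
      np.max(np.abs(v_num - v_lin)))   # both close to solver tolerance
\end{verbatim}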
\section{Conclusion}
There is a dichotomy in modal participation for linear systems. Hence we expect a similar dichotomy for nonlinear systems. Participation of modes in states is relatively easy to evaluate using averaging over an uncertain set of initial conditions assuming symmetric uncertainty. Somewhat surprisingly, the mode-in-state participation formulas under these circumstances were found to be the same for a nonlinear system as for its linearization, assuming the nonresonance condition. Participation of states in modes for nonlinear systems is an open question, and its distinction from mode-in-state participation factors is part of the dichotomy in modal participation seen in the linear case. Besides calculation of state-in-mode participation factors, some other issues that could be considered in future work are: computing modal participation factors for nonlinear systems from data; using the Frobenius-Perron operator to compute these measures. {Another possible extension is to use the recently introduced ``nonlinear eigenvalues'' and ``nonlinear eigenvectors'' for nonlinear systems \cite{kawano,padoan} to introduce possibly ``more nonlinear'' notions of modal participation factors for nonlinear systems.}
{Moreover, as mentioned in Section 1, bridging between modal participation concepts that have been proposed and used in different engineering fields would be worthwhile.}
\section{ACKNOWLEDGMENTS}
BH thanks the European Commission and the
Scientific and Technological Research Council of Turkey (Tubitak)
for financial support received through Marie Curie Fellowships. EHA thanks the US Air Force Office of Scientific Research for partial support under grant \#FA9550-09-1-0538.
\section{Introduction}
Probably the most studied acquisition system for X-ray phase-contrast imaging
is the Talbot-Lau grating interferometer.
This system allows the measurement of an X-ray absorption image and two additional
images, namely the differential phase image and the dark-field image.
The X-ray dark-field measures ultra-small angle scattering, which is caused
by inhomogeneities in materials at micrometer scale~\cite{Pfeiffer08:HXD,Wen09:FXS, Revol14:LFS}.
Recently, X-ray dark-field imaging has received much attention for its potential applications in medical imaging and non-destructive material testing.
The investigated applications in medical imaging span a wide range. Examples are
the identification of different lung diseases~\cite{weber2012investigation,yaroshenko2016visualization,hellbach2015vivo,hellbach2018depiction}, lung cancer~\cite{scherer2017x}, the identification of micro-calcifications~\cite{michel2013dark}, or the differentiation of kidney stones~\cite{scherer2015non}.
Other examples are the detection of bone structures~\cite{Wen09:FXS} and fractures~\cite{hauke2018hairline} as well as brain connectivity~\cite{wieczorek2018brain}.
There is also a wide range of applications of the dark-field signal in material testing~\cite{Revol14:LFS,reza2014investigation,ludwig2018non,yang2014dark,prade2017nondestructive}.
The origin of the observed dark-field can have various reasons, such as small-angle x-ray scattering,
an intra-pixel differential phase contrast that cannot be resolved, or even beam hardening \cite{koenig2016origin}.
While the effects are not clearly separable, we will focus on the dark-field created through small-angle scattering.
Two properties of the dark-field signal are particularly interesting. First,
ultra-small angle scattering is caused by structural variations at the scale of
a few micrometers, which is significantly below the resolution of conventional
X-ray imaging systems~\cite{Yashiro10:OOV,Revol11:SPP}. Second, a grating-based system
allows to measure the 3-D orientation of elongated micrometer-sized structures
such as fibers~\cite{Bayer13:PAD}. Traditional absorption X-ray
systems have to be able to fully resolve a fiber in order to measure its
orientation. In contrast to that, X-ray dark-field imaging makes it possible to deduce
the fiber orientation of considerably smaller structures.
Jensen~\textit{et~al.}~\cite{Jensen10:DXD} and Revol~\textit{et~al.}~\cite{Revol12:OSX} explored the
fundamentals of the dark-field orientation-dependency.
In a tomographic setup, either the object or the imaging system rotates during the
acquisition.
During the rotation, the relative orientation between object and system
changes, which leads to a variation in the signal. This signal variation
allows to reconstruct the orientation of the structure.
There have been several reconstruction methods proposed in previous
works~\cite{Bayer14:RSV,Hu15:3DT,Malecki14:XTT,Vogel15:CXT,Wieczorek16:xrt,Schaff17:NID}.
However, all of them are based on 2-D projection models of the 3-D structure.
This means that the models rely on the reconstruction of several 2-D slices and
are not compatible with true 3-D trajectories.
In this work, we aim to overcome this limitation by proposing a dark-field projection model over the 3-D space.
This allows us to directly estimate the 3-D structure and to use sophisticated 3-D trajectories such as a helix.
\subsection{Talbot-Lau Interferometer}
\label{sec:talbot_lau}
\begin{figure}[tb]
\centering
\def\svgwidth{0.8\textwidth}
\input{TLI_setup.pdf_tex}
\caption{Sketch of an X-ray Talbot-Lau interferometer. The setup consists
of a source~$S$, a detector~$D$, and three
gratings~$G_0$,~$G_1$, and~$G_2$ in between. The
global coordinate system is denoted as $\vec{x}$, $\vec{y}$, $\vec{z}$, an
example fiber in the beam path is denoted as $\vec{f}$, and the sensitivity
direction of the setup is denoted as $\vec{s}$.}
\label{fig:TLI-sketch}
\end{figure}
The Talbot-Lau interferometer is a grating-based phase-contrast setup.
A sketch of the system is shown in Fig.~\ref{fig:TLI-sketch}.
The system is an extension of conventional X-ray imaging setups, where
three gratings $G_0$, $G_1$, and $G_2$ are placed between
the source and detector.
X-rays are generated by a conventional X-ray tube $S$. This X-ray tube
can be operated in an X-ray regime that is compatible with medical applications,
such that a medical X-ray detector $D$~\cite{Pfeiffer06:PRD,Weitkamp06:TGI} can be used.
Grating $G_0$ effectively separates X-rays from the large source into
narrow slit sources that are individually coherent, but mutually incoherent.
$G_1$ imprints a periodic phase modulation onto the wave front to create an interference pattern at the detector.
Both gratings $G_0$ and $G_1$ have periods that are in the range of few micrometers.
For operation with the much lower resolution of clinical X-ray detectors,
the interference pattern is sampled with the $G_2$ grating in
front of the detector, which also has a period in the range of micrometers.
The sampling at the detector can be either
performed by slightly detuning the grating $G_2$, which leads to the Moir{\'e} effect~\cite{Takeda82:FTM,Bennett10:GBS,Bevins12:MCX}, or
by performing phase stepping~\cite{Pfeiffer06:PRD,Weitkamp06:TGI}.
Both approaches
sample points on the interference curve, which can then be fitted by a sine.
In practice, two scans are performed, namely a reference scan without object
in the beam path, and an object scan with the object. By comparing the
sinusoidal curve of both scans, it is possible to calculate the three quantities
absorption, differential phase, and dark-field.
As in standard X-ray imaging, absorption is defined as the change in the
average intensity. The differential phase is the angular shift of the sine.
The dark-field signal is given by the ratio of the amplitude of the sine over
the average intensity.
For this work, it is important to note that all three signals are
created by sampling the sinusoidal function in one direction. We call this
direction the \textit{sensitivity direction} $\vec{s}$.
The sensitivity direction is perpendicular to the grating bars.
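
As an illustration, the extraction of the three signals from phase-stepping data can be sketched in a few lines of Python/NumPy. The simple FFT-based sine fit and the variable names below are our own simplification, not the processing chain of an actual setup:
\begin{verbatim}
import numpy as np

def fit_stepping_curve(counts):
    """Mean, amplitude, and phase of a sine sampled over one grating period."""
    n = len(counts)
    c = np.fft.rfft(counts)
    mean = c[0].real / n                # average intensity
    amplitude = 2.0 * np.abs(c[1]) / n  # modulation amplitude
    phase = np.angle(c[1])
    return mean, amplitude, phase

def three_signals(ref_counts, obj_counts):
    """Absorption, differential phase, and dark-field of one detector pixel."""
    m_r, a_r, p_r = fit_stepping_curve(ref_counts)
    m_o, a_o, p_o = fit_stepping_curve(obj_counts)
    absorption = -np.log(m_o / m_r)         # change of the average intensity
    diff_phase = p_o - p_r                  # angular shift of the sine
    dark_field = (a_o / m_o) / (a_r / m_r)  # visibility ratio V / V_ref
    return absorption, diff_phase, dark_field

# toy example: 8 phase steps, the object damps the modulation
steps = np.linspace(0, 2 * np.pi, 8, endpoint=False)
reference = 100 + 20 * np.sin(steps)
with_object = 80 + 10 * np.sin(steps + 0.3)
print(three_signals(reference, with_object))
\end{verbatim}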
\subsection{Related Work}
\label{sec:related_work}
X-ray Tomography is performed by rotating either the X-ray setup or the object
during the acquisition. This rotation changes the orientation of the object
relative to the sensitivity direction. A key difference between traditional
X-ray absorption and dark-field is the impact of this relative orientation:
X-ray absorption is independent of the relative orientation, while X-ray
dark-field depends on it.
This makes a major difference for the choice of reconstruction algorithm. The
popular filtered backprojection (FBP) algorithm implicitly assumes that the
signal strength is independent of the viewing direction --- which in general
does not hold for X-ray dark-field imaging.
The tomographic reconstruction, in general, requires the inversion of a projection model.
For the angle-dependent dark-field signal, several 2-D projection models were proposed,
which are discussed briefly in the following.
Jensen~\textit{et~al.}~\cite{Jensen10:DXD} first showed the angle dependency of
dark-field projections.
They rotated the object around the optical axis of the system, and found that
the variations in visibility can be described by the first two orders of the
Fourier expansion. Shortly afterwards, Revol~\textit{et~al.}~\cite{Revol12:OSX} modeled
the dark-field scatter by a 2-D Gaussian function and showed that the logarithm
of the dark-field signal can be formulated as
\begin{equation}
\tilde{V}(\omega) = A + B \cdot \sin^2\left(\omega - \theta \right) \enspace,
\end{equation}
where $\omega$ is the rotation angle of the fiber around the optical axis,
$\theta$ is the starting angle of the fiber in the $\vec{x}\vec{y}$-plane (see Fig.~\ref{fig:revol})
and $A$, $B$ are an isotropic and anisotropic contribution of the scatter, respectively.
The projection models~\cite{Jensen10:DXD,Revol12:OSX} assume that the object is
rotated around the optical axis, which limits these models to thin sample
layers. Malecki~\textit{et~al.}~\cite{malecki2013coherent} investigated the signal
formation for the superposition of layers with different fiber orientations.
They conclude that the dark-field signal can be represented as the line
integral along the beam direction over the anisotropic scattering components.
In order to describe the dark-field for thicker objects,
Bayer~\textit{et~al.}~\cite{Bayer13:PAD} proposed another projection model. They showed
that the projection of a fibrous structure also depends on the azimuthal
angle $\phi$. This corresponds to the angle of the fiber projection in the $\vec{x}\vec{z}$
plane in Fig.~\ref{fig:bayer}. They derive the dark-field signal as
\begin{equation}
\tilde{V}(\phi) = A + B \cdot \sin^2\left(\phi - \omega \right) \enspace .
\end{equation}
The third projection model was proposed by Schaff~\textit{et~al.}~\cite{Schaff17:NID}
and is shown in Fig.~\ref{fig:schaff}. Here, the
grating bars are aligned along the 2-D trajectory, and the dark-field signal is
measured along the rotation axis. Schaff~\emph{et al.} approximate this
signal as constant with respect to the tomographic rotation, such that the
scattering strength only depends on the angle between the fiber and the
rotation axis.
\begin{figure}
\centering
\subfloat[Revol~\cite{Revol12:OSX} projection model \label{fig:revol}]{
\def\svgwidth{0.25\linewidth}
\input{revol_projection_model2.pdf_tex}
}
\qquad
\subfloat[Bayer~\cite{Bayer13:PAD} projection model \label{fig:bayer}]{
\def\svgwidth{0.25\linewidth}
\input{bayer_projection_model2.pdf_tex}
}
\qquad
\subfloat[Schaff~\cite{Schaff17:NID} projection model \label{fig:schaff}]{
\def\svgwidth{0.25\linewidth}
\input{schaff_projection_model2.pdf_tex}
}
\caption{Sketch of three different 2-D projection models from previous works.
The rotation angle is given as $\omega$. The fiber vector is denoted as $\vec{f}$,
and $\theta$ and $\phi$ are the elevation angle and azimuthal angle, respectively.
$\vec{s}$ is the sensitivity direction.}
\label{fig:projection_models}
\end{figure}
This approximation simplifies the reconstruction, since a normal FBP algorithm can be used.
However, for the two other projection models, the resulting signal per voxel
varies along the trajectory.
2-D object orientations are in this case reconstructed via iterative
reconstruction~\cite{Bayer14:RSV,Malecki14:XTT,Hu15:3DT,Vogel15:CXT,Wieczorek16:xrt}.
Among these works, Bayer~\textit{et~al.}~\cite{Bayer14:RSV} proposed a method to reconstruct 2-D in-plane orientations of fibers.
Hu~\textit{et~al.}~\cite{Hu15:3DT} proposed to reconstruct the 3-D
orientation by combining two 2-D in-plane scans with different
trajectories. X-ray tensor tomography has been proposed by Malecki~\textit{et~al.}~\cite{Malecki14:XTT},
Vogel~\textit{et~al.}~\cite{Vogel15:CXT}, and Wieczorek~\textit{et~al.}~\cite{Wieczorek16:xrt} by combining multiple 2-D planes.
Since all projection models describe the dark-field only as a function
of one angle, it is only possible to reconstruct a 2-D slice. The
reconstruction of the full 3-D distribution of oriented materials requires the
combination of scans from several trajectories, which overall
leads to quite complex acquisition protocols.
Malecki~\textit{et~al.}~\cite{Malecki14:XTT} reconstructed a scattering tensor by using
the model from Revol~\textit{et~al.}~\cite{Revol12:OSX} and rotated the sample into a
finite number of scattering directions. Hu~\textit{et~al.}~\cite{Hu15:3DT} used the
model by Bayer~\textit{et~al.}~\cite{Bayer13:PAD,Bayer14:RSV} and used two 2-D
reconstructions to compute the 3-D fiber direction, while Schaff~\textit{et~al.}~\cite{Schaff17:NID} fit
a 3-D ellipse to individually reconstructed 2-D slices.
Previous works take different approaches to describe the 3-D nature of
X-ray dark-field, ranging from Gaussian distributions~\cite{Jensen10:DXD} over
a Cartesian basis~\cite{Vogel15:CXT} to a spherical harmonics basis~\cite{Wieczorek16:xrt}.
However, to our knowledge, there exists to date no direct 3-D reconstruction
algorithm. One of the reasons for this may be the fact that
a reconstruction method requires the inversion of a projection model, which
to our knowledge has not been defined yet in 3-D.
The definition of a 3-D model makes it possible to use 3-D dark-field trajectories.
For example, the helix is a popular 3-D trajectory with favorable properties in
traditional absorption tomography. In this case, Tuy's condition for absorption
imaging can be applied, and the completeness of such a trajectory can be
shown~\cite{maier2015discrete}. In principle, a similar approach can be pursued
for dark-field tomography if a well-described 3-D trajectory is available.
As long as only 2-D trajectories can be used, the best known acquisition
schemes that fully measure the scattering orientations are still quite
complex~\cite{Sharma2017DesignOA}.
\subsection{Contributions and Organization of this Work}
In this work, we propose a fully three-dimensional X-ray dark-field projection
model. Previous works are limited to descriptions of 2-D projections of the
dark-field signal, which limits the reconstruction to 2-D scatter projections,
and constrains the trajectories to 2-D. In contrast, the proposed model enables
the use of an arbitrary scanning geometry, and overcomes the need for
combining several 2-D trajectories. The proposed model allows the use of established 3-D scanning trajectories
to acquire the 3-D scatter distribution, such as a helical geometry.
Furthermore, it enables the development of novel 3-D geometries that aim at
optimizing the recovery of directional information for specific clinical
examinations or visual inspection tasks.
Additionally, the proposed model is very general. It allows the ray direction
and the sensitivity direction to be chosen freely. That way, it overcomes the
restriction of earlier works to parallel-beam geometries. Instead, it allows
modeling a cone beam, which is of major importance for many popular hardware
designs, such as a line scanner.
We only use the assumption that the scatter distribution of the dark-field
signal is a 3-D Gaussian, and we derive the general projection model from that.
Furthermore, we discuss the impact of additional constraints if they are
available, and demonstrate the consistency with existing 2-D models.
In the experiments, we show that the proposed model accurately fits predicted
dark-field values from a wave simulation as well as from real experiments.
The paper is organized as follows.
Section~\ref{sec:projection_model} provides a mathematical derivation of the proposed model, which
describes the dark-field signal formation in a very general way.
Afterwards, in Sec.~\ref{sec:additional_constraints}, we discuss the impact of
additional constraints on the model and show that our model is consistent with
the 2-D projection models discussed in Sec.~\ref{sec:related_work}.
Experiments that link the predicted signal to simulations and actual measurements are presented in Sec.~\ref{sec:experiments}.
We conclude the paper in Sec.~\ref{sec:conclusions}.
\section{Proposed X-ray Dark-field Projection Model}
\label{sec:projection_model}
The X-ray dark-field signal measures the X-ray small angle scattering of
microstructures in a sample. X-ray dark-field scattering has the special
property that its observed magnitude depends on the relative orientation of
the sample in the setup.
To characterize the signal, we use the notion of \emph{isotropic} and
\emph{anisotropic} scattering components. This notion was originally introduced
for 2-D projection models. A schematic sketch of this model is shown in
Fig.~\ref{fig:models2d}. Here,
the isotropic component scatters
in all directions equally strongly, independent of the sample or setup
orientation. Conversely, observations of scatter of the anisotropic component
vary with the sample and setup orientation.
\begin{figure}
\centering
\subfloat[2-D Dark-field Signal\label{fig:2Ddfmodel}] {
\def\svgwidth{0.25\textwidth}
\input{dark-field-signal2D.pdf_tex}
}
\qquad
\subfloat[2-D Scattering Model\label{fig:2Dmodel}] {
\def\svgwidth{0.25\textwidth}
\input{2Dmodel.pdf_tex}
}
\caption{Isotropic ($d_\text{iso}$) and Anisotropic ($d_\text{aniso}$) parts for the 2-D dark-field signal and scattering model.}
\label{fig:models2d}
\end{figure}
Thus, if a sample scatters purely isotropically, its signal is independent of
the orientation. Such a signal can be
reconstructed in a similar manner as X-ray absorption. However, if a sample
exhibits partially anisotropic scatter, the signal formation depends on the
orientation and thus becomes considerably more difficult to reconstruct. In
particular, any algorithm for 2-D or 3-D reconstruction has to explicitly take
the direction-dependent signal variation into account.
In order to model the signal formation, we introduce the notion of a
\emph{fiber}. A fiber is a microstructure that exhibits a mixture of isotropic
and anisotropic scattering. The derivation of the model is organized as follows.
First, we expose the relationship between a fiber and its associated scatter
distribution in Sec.~\ref{sec:relationship}.
In Sec.~\ref{sec:fiber_projection}, we show how the fiber is projected by the
X-ray onto the sensitivity direction.
In Sec.~\ref{sec:scattering}, we show how the projected image of the fiber is
converted to a scatter distribution.
In Sec.~\ref{sec:complete_model}, we state the complete model, which is the
actually observed dark-field signature for a sample point. Afterwards,
Sec.~\ref{sec:line_integrals} shows how the measured signal is expressed as
line integrals.
The dark-field signal formation depends on three quantities, namely the
directions of the X-ray, the dark-field sensitivity direction, and the orientation of
the fiber. We describe a very general model that considers all three
quantities as arbitrary vectors in 3-D. This generality has several
advantages. It allows us to model not only a system with parallel beam and a
perpendicular sensitivity direction, but instead arbitrary acquisition
geometries. Examples for such more general system designs are the use of a
cone-beam scanning geometry, which influences the ray direction, or the use of
a curved X-ray detector, which results in different sensitivity directions. It
also allows to model a 3-D helical scanning trajectory, which requires
flexibility in all these quantities.
\subsection{Relationship between Fiber and Scatter Distribution}
\label{sec:relationship}
We make the simplifying assumption that a fiber has the shape of a cylinder.
More specifically, the fiber cross section is assumed to be a circle, and the
height of the cylinder is assumed to be at least as long as the radius
of that circle. The isotropic scattering component is mainly determined by the
radius of the circle. The anisotropic scattering component is connected to the
size of the cylinder, and will be more rigorously defined in
Sec.~\ref{sec:scattering}.
\begin{figure}[tb]
\centering
\subfloat[Fiber and its associated Gaussian scatter distribution.\label{fig:scatter}] {
\def\svgwidth{0.3\textwidth}
\input{fiber_and_scatter.pdf_tex}
}
\qquad
\subfloat[Basis vectors of the 3-D Gaussian scatter spheroid, \newline where $\sigma_1 = \sigma_2 > \sigma_3$.
Adapted from~\cite{Spheroid}.\label{fig:spheroid}] {
\def\svgwidth{0.4\textwidth}
\input{Spheroids_basis_vectors.pdf_tex}
}
\caption{Illustrations of the 3-D Gaussian scatter distribution.}
\label{fig:scatter3D}
\end{figure}
Mathematically, we represent a fiber as a 3-D vector $\vec{f}$ in
$\mathbb{R}^3$, where the vector is parallel to the cylinder axis.
The observed fiber creates dark-field scatter.
Scatter is not deterministic, and therefore commonly described as a
distribution.
For the following discussion, we are only interested in the relative
orientations of the fiber and its associated scatter. Thus, without loss of
generality, we assume that a fiber and its scatter distribution are rooted in
the origin of the coordinate system. We assume that the shape of a fiber
scatter distribution is a 3-D Gaussian $g(\vec{x})$, which is in line with
earlier models on 2-D scatter distributions~\cite{Jensen10:DXD}. Then,
\begin{equation}
g( \vec{x})= \frac{1}{\sqrt{(2\pi)^3 \left| \Sigma \right|}} \cdot
\exp \left(-\frac{1}{2} \vec{x}^\top \Sigma^{-1} \vec{x}\right) \enspace ,
\end{equation}
where $\Sigma$ denotes the $3\times 3$ covariance matrix. The shape of
$g(\vec{x})$ is completely described by $\Sigma$.
We make the mild assumption that this covariance matrix $\Sigma$ can be
diagonalized (which is satisfied for any non-trivial 3-D Gaussian scatter
observation).
Then, the eigenvalues of $\Sigma$ describe the scatter strength with respect to
its eigenbasis spanned by the eigenvectors $\vec{b}_1$, $\vec{b}_2$,
$\vec{b}_3$. The eigenvalues correspond to the variances, i.e., the squared
standard deviations along each principal axis of the distribution:
\begin{equation}
\Sigma =
\begin{pmatrix}
\sigma_1^2 & 0& 0\\
0 & \sigma_2^2 &0 \\
0 & 0& \sigma_3^2\\
\end{pmatrix} \enspace .
\end{equation}
These variances have a special distribution, which comes from the particular
case of a scattering fiber: the main scattering direction of the fiber is the
2-D subspace that is perpendicular to $\vec{f}$. This is illustrated in
Fig.~\ref{fig:scatter}. All scatter directions within this 2-D subspace are indistinguishable.
As a consequence, the two largest eigenvalues $\sigma_1^2$ and $\sigma_2^2$ are
identical, i.e., $\sigma_1^2 = \sigma_2^2$. The weakest scattering is observed in
the direction of $\vec{f}$, which is quantified by the smallest eigenvalue
$\sigma_3^2 \le \sigma_1^2$. This is illustrated in Fig.~\ref{fig:scatter}.
The eigenvector $\vec{b}_3$ is associated with the smallest eigenvalue
$\sigma_3^2$ and parallel to $\vec{f}$. More specifically, both vectors are
identical with the exception that their sign might be flipped, i.e., $\vec{b}_3 = \pm \vec{f}$.
These restrictions on the eigenvalues imply that the scattering function has the shape of an oblate spheroid.
A 3-D sketch of the eigenvalues is shown in Fig.~\ref{fig:spheroid}.
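
To make the correspondence between a fiber vector and its scatter covariance concrete, the following NumPy sketch constructs $\Sigma$ from $\vec{f}$ and the two variances. The helper function and the example values are our own illustration:
\begin{verbatim}
import numpy as np

def scatter_covariance(f, sigma1_sq, sigma3_sq):
    """Covariance of the oblate scatter spheroid associated with fiber f.

    b3 (smallest variance sigma3_sq) is parallel to f; the two remaining
    eigenvectors span the plane perpendicular to f and share sigma1_sq.
    """
    b3 = f / np.linalg.norm(f)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, b3)) > 0.9:      # avoid a helper parallel to b3
        helper = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(b3, helper)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(b3, b1)
    B = np.stack([b1, b2, b3], axis=1)     # eigenvectors as columns
    D = np.diag([sigma1_sq, sigma1_sq, sigma3_sq])
    return B @ D @ B.T

Sigma = scatter_covariance(np.array([1.0, 1.0, 1.0]), 1.0, 0.25)
print(np.linalg.eigvalsh(Sigma))           # -> [0.25, 1.0, 1.0]
\end{verbatim}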
\subsection{3-D Fiber Projection Model}
\label{sec:fiber_projection}
The dark-field signal formation depends on three geometric vectors, namely the
direction of the X-ray, the dark-field sensitivity direction, and the orientation of the fiber.
Ultimately, we seek the projection of the fiber along the ray direction onto the sensitivity direction.
This is a mapping from the 3-D fiber vector onto a (1-D) scalar value.
A non-parallel X-ray projection, e.g. from a cone beam, is modelled by a rotation of the fiber.
The sensitivity direction can
have an arbitrary orientation in space. To relate the fiber direction and the sensitivity direction,
we introduce a virtual plane that is perpendicular to the X-ray. Both the fiber and the sensitivity
direction are projected onto that plane. Then, the 2-D projection of the fiber onto the sensitivity
direction in the plane is performed. The resulting equations show that the plane cancels, and that
the projection of the fiber onto the sensitivity direction can be written as a
scalar product. The mathematical details are presented below.
\begin{figure}
\centering
\subfloat[Sketch of the projection of vector $\vec{f}$ and $\vec{s}$ on the plane $\vec{E}$.
Note that this sketch does not exactly match the mathematical description,
since the plane $\vec{E}$ is not drawn through the origin. \label{fig:plane_sideways}] {
\def\svgwidth{0.6\textwidth}
\input{projection_on_plane.pdf_tex}
}
\quad
\subfloat[Planar view on the plane $\vec{E}$ with the projected vectors $\vec{f}'$ and $\vec{s}'$.
\label{fig:plane_planar_view}] {
\def\svgwidth{0.3\textwidth}
\input{projection_in_plane.pdf_tex}
}
\caption{Projection of the fiber direction and the sensitivity direction onto the plane.}
\label{fig:plane}
\end{figure}
Let us consider a single fiber vector $\vec{f}$. Without loss of generality, this
fiber is located in the origin of our world coordinate system.
The X-ray dark-field projection ray $\vec{r}$ passes through that fiber, and
thereby also the origin of the coordinate system.
In imaging systems, all X-rays that form one projection are typically modelled
as either parallel or diverging from a central ray $\vec{c}$. This changes the
relative orientation between $\vec{r}$ and the fiber vector $\vec{f}$. To
correct for the diverging ray, we denote the angle of divergence as $\alpha$,
and rotate the fiber in the plane spanned by $\vec{c}$ and $\vec{r}$ in the
opposite direction. The corresponding rotation matrix is denoted as
$\vec{R}_{\alpha}$. In the case of parallel projection, $\vec{R}_{\alpha}$ is
the $3\times 3$ identity matrix.
We project the fiber $\vec{f}$ onto a plane~$\vec{E}$ that is perpendicular to the X-ray
direction $\vec{r}$.
For this projection, we use orthogonal projections instead of perspective
projections of the scatter pattern. This is possible, because the projection
of a fiber signature onto the detector is in the range of micrometers, but a
single detector pixel is typically two orders of magnitude larger.
An orthogonal projection of a 3-D vector onto a plane can be performed with an
inner product between the vector and a transformation matrix consisting of the
3-D coordinates of the 2-D basis.
We define the 2-D projection plane as a plane where $\vec{r}$ is the normal
vector. Since $\vec{r}$ passes through the origin, we find it convenient to
choose the plane to also pass through the origin, i.e.,
\begin{equation}
\vec{E} = \left( \vec{r}^{\text{ortho}}_1, \vec{r}^{\text{ortho}}_2 \right)\enspace,
\end{equation}
with $\vec{E} \in \mathbb{R}^{3\times 2}$ where
$\vec{r}^{\text{ortho}}_1$ is a vector perpendicular to $\vec{r}$, i.e., $\vec{r}^\top \vec{r}^{\text{ortho}}_1 = 0$, and
$\vec{r}^{\text{ortho}}_2 = \vec{r} \times \vec{r}^{\text{ortho}}_1$ is the
second vector spanning the plane, also perpendicular to $\vec{r}$.
This projection is visualized in Fig.~\ref{fig:plane_sideways}.
The projection of the fiber along the ray and onto the 2-D plane $\vec{E}$ is
then given as product of the rotated fiber $\vec{f}$ with $\vec{E}$, i.e.,
\begin{equation}
\vec{f}' = (\vec{R}_{\alpha}\vec{f})^\top \vec{E}\enspace,
\end{equation}
where $\vec{f}' \in \mathbb{R}^2$ is now a two-dimensional vector in the plane
$\vec{E}$.
The sensitivity direction $\vec{s}$ denotes the direction along which the X-ray
dark-field signal can be measured. It is a 3-D vector with an arbitrary
orientation. To relate the fiber with the sensitivity direction, we also
project $\vec{s}$ onto plane $\vec{E}$. Analogously to the fiber-plane
projection, we also use here an orthogonal projection. The 2-D projection of $\vec{s}$ on $\vec{E}$ is
\begin{equation}
\vec{s}' = \vec{s}^\top \vec{E}\enspace.
\end{equation}
The projection of both vectors $\vec{f}'$ and $\vec{s}'$ on $\vec{E}$ are shown
in Fig.~\ref{fig:plane_planar_view}.
To determine the alignment of the fiber $\vec{f}$ with sensitivity direction
$\vec{s}$, the inner product is computed, i.e.
\begin{align}
\vec{f}'' & = \vec{f}'^\top \cdot \vec{s}' \\
& = \left((\vec{R}_{\alpha}\vec{f})^\top \vec{E}\right)^\top \cdot \left( \vec{s}^\top \vec{E} \right) = \left(\vec{E}^\top \left(\vec{R}_{\alpha}\vec{f}\right) \right) \cdot \left( \vec{s}^\top \vec{E} \right) \enspace.
\label{eqn:proj_fiber_to_sensitiviy}
\end{align}
Equation~\ref{eqn:proj_fiber_to_sensitiviy} can be simplified by noting that
the inner product commutes, which leads to
\begin{align}
\vec{f}'' & = \left( \vec{s}^\top \vec{E} \right) \cdot \left(\vec{E}^\top \left(\vec{R}_{\alpha}\vec{f} \right) \right) \\
& = \vec{s}^\top \cdot \left( \vec{R}_{\alpha}\vec{f}\right) \enspace,
\label{eqn:fiber_to_sensitivity}
\end{align}
since $\vec{E} \vec{E}^\top = \vec{I} - \vec{r}\vec{r}^\top$ and the sensitivity direction is, by construction of the setup, perpendicular to the ray, i.e., $\vec{s}^\top \vec{r} = 0$. Equation~\ref{eqn:fiber_to_sensitivity}
shows that the projection of the fiber through the system onto the sensitivity
direction reduces to directly computing the inner product between the fiber and
the sensitivity direction.
Note that in the case of a cone beam, the rotation of the fiber by
$\vec{R}_\alpha$ can also be replaced by a rotation of the sensitivity
direction $\vec{s}$ in the opposite direction. While we believe that the
rotation of the fiber $\vec{f}$ is more intuitive, it may be preferable for
an actual implementation of a reconstruction algorithm to rotate the
sensitivity direction $\vec{s}$, since $\vec{s}$ is a given quantity from the
setup geometry, and $\vec{f}$ is the unknown variable.
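
The equivalence between the explicit two-step projection (onto the plane $\vec{E}$ and then onto $\vec{s}'$) and the direct inner product of Eq.~\ref{eqn:fiber_to_sensitivity} can be checked numerically. The sketch below assumes a parallel beam and a sensitivity direction perpendicular to the ray; the numerical values are arbitrary examples:
\begin{verbatim}
import numpy as np

r = np.array([0.0, 0.0, 1.0])   # ray direction
s = np.array([1.0, 0.0, 0.0])   # sensitivity direction, perpendicular to r
f = np.array([0.5, 0.3, 0.8])   # fiber direction
R_alpha = np.eye(3)             # parallel beam: no divergence correction

# orthonormal basis of the plane E perpendicular to r
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.cross(r, e1)
E = np.stack([e1, e2], axis=1)  # 3 x 2 matrix

f_plane = (R_alpha @ f) @ E     # 2-D projection of the fiber onto E
s_plane = s @ E                 # 2-D projection of the sensitivity direction
two_step = f_plane @ s_plane    # explicit two-step projection
direct = s @ (R_alpha @ f)      # direct inner product
print(two_step, direct)         # both evaluate to 0.5
\end{verbatim}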
\subsection{3-D Projection Model for Scattering}
\label{sec:scattering}
The projection of the fiber onto the sensitivity direction can be translated into the
projection of the scatter. The scatter is the actually observed quantity in the
imaging system. The inverse of this conversion links the observations to the
unknown fiber direction.
As discussed in Sec.~\ref{sec:relationship}
the scatter distribution for a given fiber $\vec{f}$ is given as
\begin{equation}
\vec{f} \mapsto \left(\begin{array}{ccc}\sigma_1^2 & & \\ & \sigma_2^2 & \\ & & \sigma_3^2 \end{array}\right) (\vec{b}_1 \quad \vec{b}_2 \quad \vec{b}_3)\enspace,
\end{equation}
where $\sigma_1^2 = \sigma_2^2$, and $\vec{b}_1$, $\vec{b}_2$, $\vec{b}_3$
are an orthogonal basis.
In Sec.~\ref{sec:fiber_projection} we considered the transformation from the 3-D fiber to a 1-D signal.
We now want to describe this transformation for the scattering distribution.
Since the distribution is described by an orthogonal basis,
we can transform the basis vectors $\vec{b}_i$ individually to get the transformation.
Since we defined our projection for an arbitrary fiber $\vec{f}$,
we can use the same mapping for each basis vector $\vec{b}_i$.
The measured projection of the scattering component $i$ is then given as
\begin{align}
\sigma_i^{2\,\prime\prime} &= \vec{s}^\top \cdot \left( \vec{R}_{\alpha} \left(\sigma_i^2 \vec{b}_i \right) \right)\\ &=\sigma_i^2 \cdot \left( \vec{s}^\top \cdot \left( \vec{R}_{\alpha} \vec{b}_i \right) \right) \enspace.
\end{align}
Since the scattering distribution is symmetric, the projected
variance must not depend on the sign of the basis vectors $\vec{b}_i$, and its
oscillation has to have a period of $\pi$. In analogy to previous 2-D
models~\cite{Revol12:OSX,Bayer14:RSV}, both requirements are addressed by
squaring the inner product. The projected variance is thus
\begin{equation}
{\hat\sigma}_i^{2\, \prime\prime} = \sigma_i^2 \cdot \left( \vec{s}^\top \cdot \left(\vec{R}_{\alpha} \vec{b}_i \right) \right)^2 \enspace.
\label{eq:projected_variance}
\end{equation}
\subsection{Complete 3-D Dark-Field Projection Model}
\label{sec:complete_model}
With the individual projections of the fiber and the scattering distribution at
hand, we combine both in this section to directly describe the scatter
distribution for a given fiber. To this end, we use the introduced notions of
\textit{isotropic} and \textit{anisotropic} scattering.
The isotropic part results in an equal amount of scatter in all directions,
while the anisotropic part depends on the relative orientation of the fiber,
ray direction, and sensitivity direction. The goal is to describe the 1-D
dark-field scattering signal in dependency of the fiber, since the fiber is the
quantity that shall eventually be reconstructed.
The observed dark-field signal is modeled as
\begin{equation}
d = d_\text{iso} + d_\text{aniso} \left(\vec{s}^\top (\vec{R}_{\alpha}\vec{f})\right)^2 \enspace .
\label{eqn:3Ddfmodel}
\end{equation}
Here, we again square the scaling factor of the anisotropic part to resemble
the fact that the signal has a period of $\pi$ instead of $2\pi$.
Analogously, the amount of isotropic and anisotropic scattering is also defined
over the variances of the 3-D scattering function. Thus, the isotropic component
is given as
\begin{equation}
d_\text{iso} = \sigma_{1}^2 = \sigma_{2}^2 \enspace ,
\end{equation}
while the anisotropic component is
\begin{equation}
d_\text{aniso} = - \left( \sigma_1^2 - \sigma_3^2 \right) \enspace .
\end{equation}
To define the anisotropic component as the subtraction from the isotropic
scattering may appear counter-intuitive at first glance. However, it allows to
directly represent the fiber $\vec{f}$ in the model. We believe that it is
useful for building a reconstruction algorithm on top of the model to have the
fiber direction directly accessible, since it is the primary quantity of interest.
The justification for using the fiber vector in Eq.~\ref{eqn:3Ddfmodel}
comes from the projected variance in Eq.~\ref{eq:projected_variance}.
If we consider the smallest scattering component, we observe
\begin{equation}
{\hat\sigma}_3^{2\, \prime\prime} = \sigma_3^2 \cdot \left( \vec{s}^\top \cdot \left(\vec{R}_{\alpha} \vec{b}_3 \right) \right)^2 \enspace.
\label{eq:sigma_3}
\end{equation}
Since the eigenvector $\vec{b}_3$ and the fiber vector $\vec{f}$ are collinear,
we can substitute $\vec{b}_3$ in Eq.~\ref{eq:sigma_3} by $\vec{f}$. This leads to
\begin{equation}
{\hat\sigma}_3^{2\, \prime\prime} = \sigma_3^2 \cdot \left( \vec{s}^\top \cdot \left(\vec{R}_{\alpha} \vec{f} \right) \right)^2 \enspace,
\end{equation}
which is used to get the dark-field model in Eq.~\ref{eqn:3Ddfmodel}.
In 2-D models, the isotropic component is defined as the amount that scatters in all directions
equally, while the anisotropic component is defined as an additional component in the direction
perpendicular to the fiber direction.
In 3-D, a direct adaptation of this approach is somewhat more complicated,
since the additional scatter of the fiber perpendicular to its main axis forms a 2-D subspace.
We argue that the concept of isotropic and anisotropic components is not really transferable to the 3-D case.
In 3-D, one can interpret Eq.~\ref{eqn:3Ddfmodel} as the reduction of
observed scatter in the direction of the main axis of the fiber, which is
mathematically correct, yet somewhat counter-intuitive.
The projection of the dark-field does not only depend on the scattering strength described in Eq.~\ref{eqn:3Ddfmodel},
but also on the length of the projection rays through the fiber.
Since the fiber is assumed to be smaller than one pixel, the dark-field per voxel $\vec{x}$ can be expressed as:
\begin{equation}
d(\vec{x}) = C(\vec{f},\, \alpha,\, \sigma_1,\, \sigma_3) \cdot d \enspace ,
\label{eq:fiber_per_voxel}
\end{equation}
where $C(\vec{f},\, \alpha,\, \sigma_1,\, \sigma_3)$ is a function describing the average path length of the ray through the fiber cylinder,
which depends on the fiber direction and the ray direction and is linked to the isotropic and anisotropic values.
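
A minimal numerical sketch of Eq.~\ref{eqn:3Ddfmodel} for a single fiber is given below. The variances, vectors, and the constant path-length factor $C$ are illustrative placeholders:
\begin{verbatim}
import numpy as np

def dark_field(f, s, R_alpha, sigma1_sq, sigma3_sq, C=1.0):
    """Dark-field of one fiber, scaled by the path-length factor C."""
    d_iso = sigma1_sq                   # = sigma2^2
    d_aniso = -(sigma1_sq - sigma3_sq)  # negative: scatter is reduced along f
    f = f / np.linalg.norm(f)
    return C * (d_iso + d_aniso * (s @ (R_alpha @ f)) ** 2)

s = np.array([1.0, 0.0, 0.0])
R = np.eye(3)
# fiber parallel to s: weakest signal; fiber perpendicular to s: strongest
print(dark_field(np.array([1.0, 0.0, 0.0]), s, R, 1.0, 0.25))  # -> 0.25
print(dark_field(np.array([0.0, 1.0, 0.0]), s, R, 1.0, 0.25))  # -> 1.0
\end{verbatim}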
\subsection{Dark-Field Line Integrals}
\label{sec:line_integrals}
In standard X-ray projection imaging, the measured signal intensity is the line integral along the X-ray beam line $L$.
Malecki~\textit{et~al.}~\cite{malecki2013coherent} showed that the superposition of dark-field signals results in a line integral along the beam direction.
The dark-field signal is similar to the absorption signal given by
\begin{equation}
D_L = \exp\left[-\int_L d(\vec{x}) \, \mathrm{d}L \right] \enspace .
\label{eq:3D}
\end{equation}
Here, the line integral is only influenced by the object geometry.
To make the dark-field more interpretable and to apply the non-linear intensity transformation that is oftentimes used in X-ray absorption processing,
several authors use $-\log(D_L)$~\cite{bech2010quantitative}.
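
For a discretized ray, the line integral of Eq.~\ref{eq:3D} reduces to a weighted sum over the voxels traversed by the ray. A short sketch with a hypothetical list of per-voxel dark-field values and a fixed step length:
\begin{verbatim}
import numpy as np

def dark_field_projection(d_values, step_length):
    """Discrete dark-field line integral: D_L = exp(-sum_i d(x_i) * dl)."""
    return np.exp(-np.sum(d_values) * step_length)

d_along_ray = np.array([0.0, 0.2, 0.5, 0.5, 0.1])   # per-voxel coefficients
D_L = dark_field_projection(d_along_ray, step_length=0.1)
print(D_L, -np.log(D_L))    # measured ratio and its log-transformed value
\end{verbatim}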
\section{Impact of Additional Constraints on the Model}
\label{sec:additional_constraints}
The proposed projection model is very general. In this section we will show how
specific assumptions allow for simplifications. In particular, we show that the
model is consistent with the more constrained 2-D projection models by
Revol~\cite{Revol12:OSX}, Bayer~\cite{Bayer13:PAD}, and
Schaff~\cite{Schaff17:NID}. We will now show that we are consistent with these
if we constrain our model to parallel beams and a circular 2-D trajectory. As
sketched in Fig.~\ref{fig:projection_models}, we define for all three 2-D
models the sensitivity direction as $\vec{s} = (1,\, 0,\, 0)^\top$.
In a parallel beam geometry, the rotation matrix $\vec{R}_\alpha$ simplifies to the identity,
i.e., $\vec{R}_\alpha = \mathds{1}$.
Consequently, the anisotropic component only depends on the relative orientation of the fiber and sensitivity direction, thus
\begin{equation}
d = d_\text{iso} + d_\text{aniso} \left(\vec{s}^\top \vec{f}\right)^2 \enspace .
\end{equation}
Revol~\textit{et~al.}~rotate the fiber around the ray direction.
Thus, the fiber orientation of $\vec{f}$ in the $\vec{xy}$ plane depends on the starting angle $\theta$ and the rotation angle $\omega$.
We will denote this dependency as $\vec{f}(\omega)$. The fiber is then given as
\begin{equation}
\vec{f}(\omega) =
\begin{pmatrix}
\cos (\omega - \theta) &-\sin (\omega - \theta) &0\\
\sin (\omega - \theta) &\cos (\omega - \theta) &0\\
0&0&1
\end{pmatrix}
\begin{pmatrix}
f_x\\f_y\\f_z
\end{pmatrix} \enspace ,
\end{equation}
with $\vec{f} = (f_x,\, f_y,\, f_z)^\top$.
The dark-field model thus becomes $d = d_\text{iso} + d_\text{aniso} \left((1,\, 0,\, 0) \vec{f}(\omega) \right)^2$,
which can be transformed into the original formulation $A + B \cdot \sin^2\left(\omega - \theta \right)$.
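
This reduction can also be verified numerically: for a fiber lying in the $\vec{x}\vec{y}$-plane and $\vec{s} = (1,\, 0,\, 0)^\top$, the proposed 3-D model reproduces $A + B \cdot \sin^2(\omega - \theta)$ for every rotation angle. The fiber vector and the scattering values in the following sketch are example choices of our own:
\begin{verbatim}
import numpy as np

def rot_z(angle):
    c, s_ = np.cos(angle), np.sin(angle)
    return np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])

s = np.array([1.0, 0.0, 0.0])    # sensitivity direction of the 2-D models
f0 = np.array([0.0, 1.0, 0.0])   # fiber lying in the x-y plane
d_iso, d_aniso = 1.0, -0.75      # example values (sigma1^2 = 1, sigma3^2 = 0.25)
theta = np.deg2rad(20.0)         # starting angle of the fiber

for omega in np.deg2rad([0.0, 30.0, 60.0, 90.0]):
    f = rot_z(omega - theta) @ f0
    d_3d = d_iso + d_aniso * (s @ f) ** 2                    # proposed model
    d_revol = d_iso + d_aniso * np.sin(omega - theta) ** 2   # A + B sin^2(.)
    print(np.isclose(d_3d, d_revol))                         # True
\end{verbatim}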
The mapping to the model by Bayer~\textit{et~al.} can be performed in a similar manner.
Here, the fiber is rotated around the $\vec{y}$-axis. Then,
\begin{equation}
\vec{f}(\omega) =
\begin{pmatrix}
\cos (\phi -\omega) & 0 & \sin (\phi - \omega) \\
0 & 1 & 0 \\
-\sin (\phi - \omega) & 0 & \cos (\phi - \omega)
\end{pmatrix}
\begin{pmatrix}
f_x\\f_y\\f_z
\end{pmatrix}
\enspace ,
\label{eq:bayer}
\end{equation}
which results in the original formulation $A + B \cdot \sin^2\left(\phi \right)$.
The model by Schaff~\textit{et~al.} constrains the sensitivity direction parallel to the rotation axis.
Then, the projection of the fiber vector (as stated in Eq.~\ref{eqn:fiber_to_sensitivity}) is given as
\begin{equation}
\vec{f}(\omega)'' = (1,\, 0,\, 0)
\left(\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos (\omega - \Theta) & -\sin (\omega - \Theta) \\
0 & \sin (\omega - \Theta) & \cos (\omega - \Theta)
\end{pmatrix}
\begin{pmatrix}
f_x\\f_y\\f_z
\end{pmatrix}
\right) = f_x \enspace ,
\end{equation}
which is constant.
It is interesting to note, however, that the model does not consider variations
in intersection lengths through the fiber. In practice, the signal is only
approximately constant if the fiber exhibits a small elevation angle. In
this case, the intersection length is nearly identical for different rotation
angles
$\omega$.
In summary, the proposed model can be transformed into each of the three
existing 2-D models with the addition of suitable constraints.
At the same time, however, the proposed model is general enough to also
represent a full 3-D space with an arbitrarily oriented X-ray, fiber, and
sensitivity.
\section{Experiments and Results}
\label{sec:experiments}
In this section we experimentally evaluate the proposed 3-D dark-field projection model.
We sequentially evaluate different aspects of the model, to mitigate the
combinatorial complexity of evaluating the full parameter space.
The evaluated aspects of the model are
\begin{enumerate}
\item Dark-field projection model (Equation~\ref{eqn:3Ddfmodel})
\item Dark-field signal of a single fiber (Equation~\ref{eq:fiber_per_voxel})
\item Dark-field measurements (Equation~\ref{eq:3D}).
\end{enumerate}
The corresponding experiments are described and discussed in the sections~\ref{sec:res}, \ref{sec:cxi}, and \ref{sec:real}, respectively.
To evaluate the proposed projection model, we compare the results to simulated
and real dark-field signals in Sec~\ref{sec:cxi} and \ref{sec:real}.
\subsection{Dark-field Projection Model}
\label{sec:res}
The formulation of the dark-field in Eq.~\ref{eqn:3Ddfmodel} is sufficiently
flexible to describe different trajectories and sensitivity directions. In
this experiment, we show the dependency of the dark-field on the X-ray
direction and sensitivity direction.
To this end, we simulate three different trajectories as shown in Fig.~\ref{fig:bvm-traj}.
We evaluate the dark-field for two different fiber vectors,
both with larger scattering coefficient $\sigma_1 = \sigma_2 = 1$ and smaller scattering coefficient $\sigma_3 = 0.5$.
The fiber directions are $\vec{f} = \{1, 0, 0\}$ and $\vec{f} = \{1, 1, 1\}$,
respectively.
All three trajectories have a source-isocenter distance of \SI{600}{\milli\meter} and source-detector distance of \SI{1200}{\milli\meter}.
We simulate two circular 2-D trajectories over $360^\circ$ with an angular
increment of $1.5^\circ$. Both trajectories have different sensitivity
directions. The sensitivity direction $\vec{s}$ for trajectory (a) in Fig.~\ref{fig:bvm-traj}
can be represented by the vector $\{1,\, 0\}$ in the detector plane and lies therefore in the rotation plane.
Trajectory (b) in Fig.~\ref{fig:bvm-traj} has the sensitivity direction along the rotation axis,
i.e. the vector $\{0,\, 1\}$ in the 2-D detector plane.
The third trajectory (c) is a helical 3-D trajectory, also with an angular increment of $1.5^\circ$ and a pitch $h = 0.5$.
The sensitivity direction is aligned with the helical trajectory.
The sensitivity direction is in all cases chosen perpendicular to the
projection ray in order not to introduce an additional scaling factor from the inner product in Eq.~\ref{eqn:3Ddfmodel}.
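
For reference, such a helical acquisition can be generated with a few lines of code. The following sketch is a simplified illustration, not the code used for the experiments in this section (central ray only, no divergence correction), with example scatter values:
\begin{verbatim}
import numpy as np

def helix_views(n_views, pitch, radius=600.0, step_deg=1.5):
    """Ray and sensitivity directions of a simplified helical trajectory."""
    views = []
    for w in np.deg2rad(np.arange(n_views) * step_deg):
        src = np.array([radius * np.cos(w), radius * np.sin(w),
                        pitch * w / (2 * np.pi)])
        ray = -src / np.linalg.norm(src)            # central ray to isocenter
        tangent = np.array([-np.sin(w), np.cos(w),
                            pitch / (2 * np.pi * radius)])
        sens = tangent - (tangent @ ray) * ray      # align s with trajectory,
        sens /= np.linalg.norm(sens)                # perpendicular to the ray
        views.append((ray, sens))
    return views

f = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)        # elevated fiber
d_iso, d_aniso = 1.0, -0.75
signal = [d_iso + d_aniso * (sens @ f) ** 2
          for ray, sens in helix_views(240, pitch=0.5)]
print(min(signal), max(signal))
\end{verbatim}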
\begin{figure}[tb]
\centering
\def\svgwidth{0.25\textwidth}
\subfloat[\label{Eins}]{ \def\svgwidth{0.25\textwidth} \input{exp1.pdf_tex} }
\quad
\subfloat[\label{Zwei}]{ \def\svgwidth{0.25\textwidth} \input{exp2.pdf_tex} }
\quad
\subfloat[\label{Drei}]{ \def\svgwidth{0.22\textwidth} \input{exp3.pdf_tex} }
\caption{Three different trajectories to evaluate the dark-field projection model (Sec.~\ref{sec:res}).
(a, b) Circle trajectory. (c) Helical trajectory. The sensitivity direction $s$ is shown for each scanning mode.}
\label{fig:bvm-traj}
\end{figure}
The resulting dark-field signals are shown in Fig.~\ref{fig:bvm-result} over the rotation angle $\omega$.
In magenta, the dark-field signals for both fibers on the circular trajectory (a)
are shown. They oscillate in a regular sinusoidal manner. The fiber that lies within
the rotation plane (magenta, dotted line) reaches the minimum and maximum
theoretically possible dark-field values. The elevated fiber (magenta, solid line)
creates an overall stronger signal that, due to the elevation, never reaches
the minimum. This is also illustrated in the example scattering spheroid in
Fig.~\ref{fig:iso_aniso}. Here, the magenta circumference indicates the measured
scatter intersection for trajectory (a) for the elevated fiber vector $\vec{f}
= \{1, 1, 1\}$.
For the second 2-D trajectory (b), the sensitivity direction is aligned with
the rotation axis. As shown in the light blue lines in Fig.~\ref{fig:bvm-result},
this results in a constant signal for both fiber directions. In this case, the
elevated fiber $\vec{f} = \{1, 1, 1\}$ (solid, light blue) creates a weaker signal.
The scattering strength of the elevated fiber is shown as a light blue dot on
the spheroid in Fig.~\ref{fig:iso_aniso}.
The most complex trajectory is the helical 3-D trajectory (trajectory (c) in Fig.~\ref{fig:bvm-traj}).
The dark-field signals for both fibers are shown in black in Fig.~\ref{fig:bvm-result}.
Due to the constant change in angle between the fiber and the sensitivity direction, the signal change is not symmetric over the $360^\circ$.
While this observation holds for both fibers, it is more pronounced for fiber vector $\vec{f} = \{1, 1, 1\}$ (solid black line).
\begin{figure}[tb]
\centering
\subfloat[Line plot of dark-field for different trajectories and
fiber vectors. The corresponding grating orientations are shown in Fig.~\ref{fig:bvm-traj} and $\sigma_1 = 1$, $\sigma_3 = 0.5$. \label{fig:bvm-result}] {
\def\svgwidth{0.15\textwidth}
\input{line_plot_trajectories.tikz}
}
\qquad
\subfloat[Projected vectors shown on the spheroid for the fiber vector $\vec{f} = \{1, 1, 1\}$.
Magenta shows the signal for trajectory (a). Light blue shows the signal for trajectory (b). \label{fig:iso_aniso}] {
\includegraphics[height=0.25\textwidth]{isotrop_and_anisotrop_new.pdf}
}
\caption{Projections of the dark-field projection model from Eq.~\ref{eqn:3Ddfmodel} according to the trajectories in Fig.~\ref{fig:bvm-traj}.}
\label{fig:bvm-projections}
\end{figure}
The experiments demonstrate how the dark-field signal depends on the X-ray
direction and the sensitivity direction. Furthermore, the experiment also
shows that the dark-field signal behaves differently for 2-D and 3-D
trajectories. These differences in the predicted signals demonstrate the need
of a 3-D projection model for performing a true 3-D reconstruction.
\subsection{Dark-field Signal of a Single Fiber}
\label{sec:cxi}
\begin{figure}[tb]
\centering
\def\svgwidth{0.25\textwidth}
\subfloat[Simulations with the proposed model.\label{fig:sim-planes}]{
\input{planes_cxi_1.0_2.0_1.5_0.3.tikz} }
\quad
\subfloat[CXI simulations.\label{fig:sim-cxi}]{
\input{cxi_simulations.tikz}}
\caption{Dark-field signals (Eq.~\ref{eq:fiber_per_voxel}) for the rotation of a fiber in the three planes spanned by the axes of the coordinate system.
(a) generated signal with the proposed model. (b) wavefront simulations.}
\label{fig:simulation_plane}
\end{figure}
To verify the proposed dark-field signal for a complete fiber (Eq.~\ref{eq:fiber_per_voxel}), we compare it to numerical simulations.
We simulate the dark-field with a simulation framework for coherent X-ray imaging (CXI) from Ritter~\textit{et~al.}~\cite{ritter2014simulation}.
The setup parameters for the simulation are chosen as follows:
The $G_1$ is placed at \SI{0.01}{\meter}, with a period of \SI{4.37e-6}{\meter}, a height of \SI{5e-6}{\meter} and a duty-cycle of $0.5$.
The $G_2$ is placed at \SI{0.17}{\meter}, with a period of \SI{2.4e-6}{\meter}, a height of \SI{300e-6}{\meter} and a duty-cycle of $0.5$.
Both gratings are simulated as gold. The detector is positioned immediately behind $G_2$.
The size of the focal spot is set to \SI{10}{\micro\meter}, and the pixel width is set to \SI{50}{\micro\meter}.
The design energy of the system is \SI{25}{\kilo\electronvolt}.
The simulated object is a teflon fiber (PTFE) with a radius of \SI{1.79}{\micro\meter} and a length of \SI{15}{\micro\meter}.
We set the fiber parameters in the model to $\sigma_1 = 1.5$ and $\sigma_3 = 0.3$, which correspond to the Teflon fiber parameters.
The negative logarithm of the simulated signal is shown in Fig.~\ref{fig:simulation_plane}. Figure~\ref{fig:sim-planes} shows the result with our model,
while Fig.~\ref{fig:sim-cxi} shows the result from the CXI simulations.
The dark-field signals are shown in the three planes spanned by the coordinate system, namely $\vec{xy}$-, $\vec{xz}$-, and $\vec{yz}$-plane.
Overall, the proposed model and the wavefront simulations agree very well.
The difference in magnitude between the two simulated signals is due to the chosen parameter ranges of the experiments.
The signal in the $\vec{xy}$-plane (blue, solid line) changes only slightly in both simulations.
In this plane, the change between the larger and the smaller scatter eigenvalues is observed.
While our model leads to a distinct sinusoidal change (Fig.~\ref{fig:sim-planes}),
it is more noisy for the CXI simulation (Fig.~\ref{fig:sim-cxi}) due to numerical instabilities.
The dark-field signal in the $\vec{xz}$- and $\vec{yz}$-plane (red and green
lines, respectively) increases with the rotation angle $\omega$, i.e., with
increasing inclination of the fiber into the beam direction.
Equation~\ref{eqn:3Ddfmodel} predicts for such an inclination no increase, but
instead a constant signal. However, the reason for the increasing signal lies
in the increased intersection length of the ray through the fiber, as denoted in Eq.~\ref{eq:fiber_per_voxel}.
It is also interesting to note that this increase is even stronger than the
difference of the scatter eigenvalues.
The green signal that shows the $\vec{yz}$-plane is affected by both effects,
the scattering eigenvalues and intersection length through the fiber.
Overall, the wavefront simulations and the predicted values of the proposed
model agree very well.
\subsection{Dark-field Measurements}
\label{sec:real}
We show that the proposed complete projection model (Eq.~\ref{eq:3D}) also
agrees with experimentally obtained dark-field measurements.
The real data consists of a carbon fiber reinforced polymer rod with a diameter of \SI{4}{\milli\meter}.
The fiber was measured at five different tilting (elevation) angles, namely
$10,~20,~30,~40,$ and $50^\circ$. A projection image at each angle is shown in
Fig.~\ref{fig:cfk-rods}. For each tilting angle, $100$ projection images are
taken over a rotation of $180^\circ$ at \SI{40}{kVp} and \SI{40}{\milli\ampere}.
The measurements were performed with a Siemens MEGALIX CAT Plus 125/40/90-125GW
medical X-ray tube using a tungsten anode. The used X-ray flat panel detector was a
PerkinElmer Dexela 1512 with \SI{74.8}{\micro\meter} pixel pitch,
running in $2 \times 2$ binning mode for a faster read-out, resulting in a \SI{150}{\micro\meter} pixel pitch.
For each projection 30 phase-steps with \SI{0.1}{\second} acquisition time were used. The gratings have a
period of \SI{24.39}{\micro\meter} for $\text{G}_0$,
\SI{2.18}{\micro\meter} for $\text{G}_1$, and \SI{2.4}{\micro\meter} for
$\text{G}_2$. The setup is \SI{1.854}{\meter} long, with a $\text{G}_0 -
\text{G}_1$ distance of \SI{1473}{\milli\meter}, a $\text{G}_1 - \text{G}_2$
distance of \SI{142}{\milli\meter}, and a $\text{G}_0 - \text{object}$ distance
of \SI{1118}{\milli\meter}. The dark-field is given as $-\log(V/V_{\text{ref}})$.
For the extracted dark-field signal we used the central pixels of the rod along the yellow, dashed line in Fig.~\ref{fig:cfk-rods}.
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[$10^\circ$ elevation \label{10}]{
\begin{picture}(75,75)%
\put(0,0){\includegraphics[width=0.15\textwidth]{DCI-3_windowing.jpg}
\multiput(0,22)(2,0){38}{\color[rgb]{1,1,0}\line(1,0){0.7}}
\end{picture}
}
\quad
\subfloat[$20^\circ$ elevation \label{20}]{
\begin{picture}(75,75)%
\put(0,0){\includegraphics[width=0.15\textwidth]{DCI-7_windowing.jpg}
\multiput(0,22)(2,0){38}{\color[rgb]{1,1,0}\line(1,0){0.7}}
\end{picture}
}
\quad
\subfloat[$30^\circ$ elevation \label{30}]{
\begin{picture}(75,75)%
\put(0,0){\includegraphics[width=0.15\textwidth]{DCI-9_windowing.jpg}
\multiput(0,22)(2,0){38}{\color[rgb]{1,1,0}\line(1,0){0.7}}
\end{picture}
}
\quad
\subfloat[$40^\circ$ elevation \label{40}]{
\begin{picture}(75,75)%
\put(0,0){\includegraphics[width=0.15\textwidth]{DCI-11_windowing.jpg}
\multiput(0,22)(2,0){38}{\color[rgb]{1,1,0}\line(1,0){0.7}}
\end{picture}
}
\quad
\subfloat[$50^\circ$ elevation \label{50}]{
\begin{picture}(75,75)%
\put(0,0){\includegraphics[width=0.15\textwidth]{DCI-12_windowing.jpg}
\multiput(0,22)(2,0){38}{\color[rgb]{1,1,0}\line(1,0){0.7}}
\end{picture}
}
\caption{Carbon-rods at different elevation angles. First projection image with a windowing between $-0.31$ and $+1.0$.
For the measured dark-field signal we used the central pixels of the rod along the yellow, dashed line. }
\label{fig:cfk-rods}
\end{figure}
To simulate the dark-field of the carbon rod, we have to estimate the corresponding parameters.
We simulated the dark-field signal of the rod with a diameter of \SI{40}{pixel} and used
scatter parameters $\sigma_1 = 1.4$ and $\sigma_3 = 1$.
However, since the real measurements bring along a lot of other unknown parameters,
such as the number of fibers within the rod, the dark-field values are not quantitatively comparable.
Figure~\ref{fig:real} shows the dark-field signal of the fiber over
\SI{180}{\degree} around the $y$-axis for different elevation angles, where an
elevation angle of $0^\circ$ represents a fiber that is aligned with the
rotation axis. The sensitivity direction is aligned with the rotation plane.
Figure~\ref{fig:sim-xy} shows the dark-field simulations, while Fig.~\ref{fig:sim-real} shows the real dark-field measurements.
Note that the rod was not aligned perfectly parallel to the detector plane at the beginning, which leads to a shift in the rotation angle.
The simulations are in very good agreement with the real dataset.
In both cases, the signal fluctuation increases with increasing tilting angle of the fiber.
The influence of the path length of the projection ray through the object can be seen in the non-regular fluctuation along the rotation angles.
As a sidenote, the noise in the plot showing the real data corresponds to the
overall noise level of the setup. This can be observed in
Fig.~\ref{fig:cfk-rods} in the image background.
\begin{figure}[tb]
\centering
\def\svgwidth{0.25\textwidth}
\input{legend_rotation.tikz}
\subfloat[Simulations with the proposed model. The trajectory consists of 200 projections over \SI{180}{\degree}. \label{fig:sim-xy}]{
\input{xz_0.7_20.0_1.4_1.0_s2_zoom.tikz}}
\quad
\subfloat[Real measurements of carbon-rods. 100 projections over $180^\circ$. \label{fig:sim-real}]{
\input{xz_copy.tikz}}
\caption{Dark-field signal of carbon-rods with a diameter of \SI{4}{\milli\meter}.}
\label{fig:real}
\end{figure}
\section{Conclusions and Outlook}\label{sec:conclusions}
In this paper, we propose an X-ray dark-field imaging projection model.
It explicitly calculates structural quantities in 3-D using the direction of
the fiber, the ray direction and the sensitivity direction. To our knowledge,
this is the first true 3-D dark-field model.
We believe that this model is a powerful tool for further development of X-ray
dark-field imaging. In contrast to existing (2-D) projection models, where the
imaging trajectory is pre-defined, our model allows imaging along
arbitrary 3-D trajectories, such as a helix. The
proposed model is very general, but by addition of suitable constraints it can
be curbed to any of the existing 2-D models.
We evaluated the consistency of the model with itself, with a wave-front
simulation, and with experimental dark-field measurements.
In future work, we will investigate an algorithm for X-ray dark-field
reconstruction that can make full use of 3-D trajectories.
\section*{Acknowledgements}
The authors acknowledge funding from the German Research Foundation (DFG). \\ Project DFG GZ: AN 257/21-1~$\mid$~MA 4898/6-1.\\
L.F. and V.L. are supported by the International Max Planck Research School for Optics and Imaging.
\section*{Author Contributions Statement}
S.H. and A.M. conceived the model. S.H. developed the initial theoretical
framework. L.F. and C.R. expanded the theoretical framework. L.F. performed the
experiments and analyzed the data. J.B. carried out the wavefront simulations. V.L.
and G.A. performed the real data measurements. L.F., S.H., and C.R. wrote the
paper with input from all authors.
\section*{Additional Information}
\textbf{Competing interests.} The authors declare no competing interests.
\subsection{Spatial-Temporal Attention (STA) Framework}
{\bf Backbone Network.} Various network architectures, like VGG~\cite{simonyan2014very}, ResNet~\cite{he2016deep}, and Google Inception~\cite{szegedy2016rethinking}, can be employed as the backbone network to extract feature maps for each frame. We choose ResNet50~\cite{he2016deep} as the backbone network, which was adopted by most previous works. In particular, ResNet50 has one convolutional block named $conv1$, followed by four residual blocks named $conv2$, $conv3$, $conv4$, and $conv5$, respectively. We further make two modifications to the original ResNet50: 1) the stride of the first residual block in $conv5$ is set to $1$; 2) the average pooling layer and the fully connected layer are removed. The input video is first reduced to $N$ frames by random sampling, and each selected frame is fed into the backbone network. As a result, each video $V=\{I_1,...,I_n,...,I_N\}$ is represented by a set of $16 \times 8$ feature maps $\{f_{n}\}_{\{n=1:N\}}$, and each feature map has $D=2048$ channels.
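
A sketch of these two backbone modifications in PyTorch is given below. Only the backbone surgery is shown; the random initialization (instead of pretrained weights) and the input size of $256 \times 128$ are assumptions for illustration:
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

# in practice, ImageNet-pretrained weights would be loaded here
backbone = torchvision.models.resnet50()

# modification 1: set the stride of the first residual block of conv5 to 1,
# so that a 256 x 128 input yields 16 x 8 feature maps
backbone.layer4[0].conv2.stride = (1, 1)
backbone.layer4[0].downsample[0].stride = (1, 1)

# modification 2: remove the average pooling and fully connected layers
feature_extractor = nn.Sequential(*list(backbone.children())[:-2])

frames = torch.randn(4, 3, 256, 128)        # N = 4 sampled frames of one video
feature_maps = feature_extractor(frames)    # shape: (4, 2048, 16, 8)
print(feature_maps.shape)
\end{verbatim}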
{\bf Spatial-Temporal Attention Model}.
We propose the spatial-temporal attention model to automatically learn from each image frame the discriminative regions that are useful for re-identification. Previous video-based person re-identification methods~\cite{liu2017quality,zhou2017see} consider each frame as a whole image and assign one weight per frame. However, different regions of a person's body should have different influences on the re-identification task. Thus, our approach aims to discover the discriminative representation of these regions for each frame. Li~\emph{et~al.}~\cite{li2018diversity} also employ a spatial-temporal attention model, in which they use different convolutional layers to extract salient regions of the person's body and adopt a traditional temporal attention model for frame selection. There are three major drawbacks of this method. First, it involves more computation because of the additional convolutional layers, and its input sequence length has to be fixed due to the temporal attention model. Second, the multiple spatial attention models used in their approach are independent from each other, without utilizing the spatial relationships that exist between human body parts. As a result, the extracted spatial attentions can be scattered and do not reflect the complete human body in the foreground. Third, the spatial attention information and the temporal attention information are obtained by two different models, which causes error accumulation. Different from the existing methods, our spatial-temporal attention model assigns attention weights, which contain both spatial and temporal attention information, to each spatial region of different frames automatically, without any additional parameters. Experiments in Table~\ref{exp:mars} demonstrate the advantages of our method compared to \cite{li2018diversity}. To the best of our knowledge, our model is the first video-based person Re-ID model that can discover the discriminative parts while preserving the spatial relationship, and achieve frame selection at the same time.
The illustration of the spatial-temporal attention model is shown in Fig.~\ref{fig:sta}. Given the feature maps of an input video $\{f_{n}\}_{\{n=1:N\}}$, we first generate the corresponding attention map $g_{n}$ by applying the $\ell_2$ norm to the sum of squares along the depth channels and normalizing over all spatial locations. Specifically,
\begin{equation}\label{eq:attentionmap}
g_{n}(h, w) = \frac{||\sum_{d=1}^{d=D}f_{n}(h, w, d)^2||_2}{\sum_{h,w}^{H, W}||\sum_{d=1}^{d=D}f_{n}(h, w, d)^2||_2}
\end{equation}
where $H,W$ are the height and width of feature maps. Thus, each frame has one corresponding attention map. Both the feature maps and the attention maps of the $N$ frames are then divided into $K$ blocks horizontally:
\begin{equation}
\left\{
\begin{aligned}
g_{n}=& [g_{n,1},..., g_{n,k},..., g_{n,K}] \\
f_{n}=& [f_{n,1},..., f_{n,k},..., f_{n,K}]
\end{aligned}
\right.
\end{equation}
Here, $g_{n, k}$ represents the spatial attention map of the $k$th region of the $n$th frame. After that, we sum the $\ell_1$ norms of all values in each block to obtain one spatial attention score for that region.
\begin{equation}
s_{n,k}=\sum_{i,j}||g_{n,k}(i,j)||_{1}
\end{equation}
Since the feature maps are taken after the ReLU activation, all values are greater than or equal to zero, so a higher response in an attention map indicates a better representation of the person for the re-identification task. The same procedure is applied to all selected frames of the input video to obtain the $N\times K$ matrix $S$ of spatial attention scores.
Instead of using multiple convolutional layers to formulate the temporal attention model as in \cite{li2018diversity,zhou2017see}, we directly compare the attention scores that are from different frames but on the same spatial region, and compute each attention over the $\ell_1$ normalization among them to obtain the normalized spatial-temporal attention scores. Specifically,
\begin{equation}
S(n,k)=\frac{s_{n,k}}{\sum_{n}||s_{n,k}||_1}
\end{equation}
As a result, each spatial region from different frames is assigned with a specific attention score based on the spatial-temporal attention information.
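To make the procedure concrete, the following sketch (a minimal \texttt{numpy} illustration with array names of our own choosing, not the training code) computes the attention maps of Eq.~(\ref{eq:attentionmap}), splits them into $K$ horizontal blocks, and produces the normalized $N \times K$ score matrix $S$.
\begin{lstlisting}[language=python]
import numpy as np

def attention_scores(F, K):
    """F: (N, H, W, D) stack of post-ReLU feature maps of one tracklet.
    Returns the attention maps G (N, H, W) and the N x K score matrix S."""
    N, H, W, D = F.shape
    sq = np.sum(F ** 2, axis=3)                      # squared sum over depth
    # Eq. (1): the l2 norm of a non-negative scalar is the scalar itself.
    G = sq / np.sum(sq, axis=(1, 2), keepdims=True)
    Hs = H // K
    s = np.zeros((N, K))
    for n in range(N):
        for k in range(K):
            # l1 score of the k-th horizontal block of frame n.
            s[n, k] = np.sum(np.abs(G[n, k * Hs:(k + 1) * Hs, :]))
    S = s / np.sum(s, axis=0, keepdims=True)         # l1-normalise over frames
    return G, S
\end{lstlisting}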
\begin{figure}[t]
\centering
\footnotesize
\includegraphics[width=0.5\textwidth]{STA.pdf}
\caption{ {\bf Details of the spatial-temporal attention model with inter-frame regularization.} Given a set of feature maps from the input video, we generate a corresponding attention map for each frame. The inter-frame regularization is used to restrict the difference among frames in the same video tracklet. Then, the attention maps are horizontally split into four equal spatial regions, and the blocks at the same spatial location but from different frames are used to calculate the 2-D attention score matrix.}
\vspace{-5mm}
\label{fig:sta}
\end{figure}
\subsection{Inter-Frame Regularization}
For the video-based person Re-ID, the images from the same video tracklet of a person should represent the appearance of the same person. Such information is further exploited by our approach as the inter-frame regularization to restrict the difference of the learned attention maps among frames. This inter-frame regularization helps to avoid the cases in which the learned attention scores of each spatial region focus on one specific frame and largely ignore the other frames.
Specifically, each frame has a corresponding feature map $f_{n}$, which is used to classify the person identity during training. One possible way is to add a classification loss to all frames to make sure they share the same identity. However, there may be noisy samples which are hard to classify and hence make the training process unstable. An alternative solution is to use the Kullback-Leibler (KL) divergence to evaluate the similarity between frames, but attention maps contain many close-to-zero elements. These elements drop dramatically when the $\log$ operation in the KL divergence is applied, which also makes the training process unstable~\cite{lin2017structured}. Thus, in order to encourage the spatial-temporal attention model to preserve the similarity among frames while avoiding a focus on any single frame, we design the inter-frame regularization, which measures the difference among input image frames. For convenience, we define $G$ as the collection of attention maps generated from the input image frames,
\begin{equation}
G = [g_1, ..., g_N]
\end{equation}
Assume $g_i,g_j$ are attention maps calculated by Eqn. (\ref{eq:attentionmap}) for two frames $i$ and $j$. We employ the Frobenius norm~\cite{meyer2000matrix} of the difference between $g_i$ and $g_j$. Specifically,
\begin{equation}
\begin{aligned}
Reg & = ||g_i - g_j||_F \\
& = \sqrt{\sum_{h=1}^{H}\sum_{w=1}^{W}|g_i(h, w)-g_j(h, w)|^2}
\end{aligned}
\end{equation}
Note that we randomly choose two frames $i$ and $j$ from the $N$ frames of each video for this regularization term. In order to restrict the difference between two frames, we minimize this regularization term $Reg$ by adding it to the original objective function $L_{total}$ defined in Eqn. (\ref{eq:loss}) after multiplying it by a coefficient $\lambda$.
\begin{equation}
\min(L_{total} + \lambda Reg)
\end{equation}
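A minimal sketch of this regularization, assuming the attention maps of one tracklet are stacked in an $(N, H, W)$ array \texttt{G} as in the previous sketch, is given below; how it enters the objective is indicated only schematically.
\begin{lstlisting}[language=python]
import numpy as np

def inter_frame_reg(G):
    """Frobenius norm of the difference between two randomly chosen
    attention maps of the same tracklet; G has shape (N, H, W)."""
    i, j = np.random.choice(G.shape[0], size=2, replace=False)
    return np.sqrt(np.sum((G[i] - G[j]) ** 2))

# Schematic combination with the overall objective:
# loss = L_total + lam * inter_frame_reg(G)
\end{lstlisting}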
\subsection{Feature Fusion Strategy}
After applying the STA model with inter-frame regularization, we obtain an $N \times K$ matrix $S$ that assigns an attention score $s_{n,k}$ to the feature map $f_{n,k}$ of each spatial region in each frame. Inspired by~\cite{fu2018horizontal}, we propose a strategy for feature fusion by combining the global and discriminative information of each tracklet, as described in Alg.~\ref{algo:a1}.
Given the attention score matrix and a set of feature maps, we first divide the feature maps into spatial regions, just as we did for the attention maps, and for each spatial region we pick the block from the frame with the highest corresponding score. We repeat this operation for every spatial region and concatenate those blocks together to obtain a feature map that contains the most discriminative regions of the input frames. Next, we use every attention score as a weight and apply element-wise multiplication to every split feature map to generate another feature map carrying the global information of the input frames. Finally, we concatenate these two feature maps together and apply global average pooling followed by a fully connected layer to generate the representation vector $X$ for the Re-ID task.
\begin{equation}
X=[x_{1}, x_{2}, \ldots, x_{n}]
\end{equation}
\begin{algorithm}
\footnotesize
\DontPrintSemicolon
\SetKwInOut{Input}{Input {} {} {} {} {}}
\SetKwInOut{Initialization}{Initialization {} {}}
\Input{A set of feature maps $\{f_n\}_{n=1:N}$;\\ An attention score matrix $S$;\\ Size of Feature map, $H, W, D$; \\ Length of Sequence $N$; \\Number of Spatial blocks $K$;}
\KwOut{Feature map after fusion $F_{fuse}$;}
\Initialization{$F_1 = Zeros(H, W, D)$, $F_2 = Zeros(H, W, D)$;\\ $H_{s} = \lfloor H/K \rfloor$;}
\For{n = 1 : N}{
\For{k = 1 : K}{
$f_{n,k} \gets f_{n}(H_{s}*(k-1): H_{s}*k, :, :)$;
}
}
\For{k = 1 : K}{
$m \gets$ the index of the maximum value in $S(:,k)$; \\
$F_1(H_{s}*(k-1): H_{s}*k, :, :) \gets f_{m,k}$; \\
\For{n = 1:N}{
$F_2(H_{s}*(k-1): H_{s}*k, :, :) \mathrel{+}= f_{nk} * s_{nk}$;
}
}
$F_{fuse} \gets [F_1, F_2]$; \\
\Return{$F_{fuse}$};
\caption{Algorithm of Feature Fusion Strategy}
\label{algo:a1}
\end{algorithm}
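For clarity, a \texttt{numpy} sketch of Alg.~\ref{algo:a1} is given below. It assumes feature maps of shape $(N, H, W, D)$ and concatenates $F_1$ and $F_2$ along the depth axis, which is one possible reading of the final fusion step; the variable names are ours.
\begin{lstlisting}[language=python]
import numpy as np

def feature_fusion(F, S):
    """F: (N, H, W, D) feature maps, S: (N, K) attention scores.
    Returns the fused feature map of shape (H, W, 2D)."""
    N, H, W, D = F.shape
    K = S.shape[1]
    Hs = H // K
    F1 = np.zeros((H, W, D))   # most discriminative block per region
    F2 = np.zeros((H, W, D))   # attention-weighted global information
    for k in range(K):
        rows = slice(k * Hs, (k + 1) * Hs)
        m = np.argmax(S[:, k])                 # frame with the highest score
        F1[rows] = F[m, rows]
        for n in range(N):
            F2[rows] += F[n, rows] * S[n, k]   # weighted sum over frames
    return np.concatenate([F1, F2], axis=2)
\end{lstlisting}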
\subsection{Loss Function}
In this paper, we utilize the batch-hard triplet loss proposed in \cite{hermans2017defense} and the softmax loss jointly to train the STA model, combining metric learning and discriminative learning.
The triplet loss with hard mining is first proposed in~\cite{hermans2017defense} as an improved version of the original semi-hard triplet loss~\cite{schroff2015facenet}. We randomly sample $P$ identities and $K$ tracklets for each mini-batch to meet the requirement of the batch-hard triplet loss. Typically, the loss function is formulated as follows:
\begin{equation}
\begin{aligned}
L_{triplet} = \sum_{i=1}^P\sum_{a=1}^K[\alpha + &
\overbrace{\max_{p=1...K}||x_{a}^{(i)}-x_{p}^{(i)}||_2}^{hardest \ positive} \\
&- \underbrace{\min_{\substack{n=1...K\\
{j=1...P}\\
j \neq i}} ||x_{a}^{(i)}-x_{n}^{(j)}||_{2}}_{hardest \ negative}]_+
\end{aligned}
\end{equation}
where $x_{a}^{(i)}, x_{p}^{(i)}, x_{n}^{(j)}$ are features extracted from the anchor, positive and negative samples respectively, and $\alpha$ is the margin hyperparameter that controls the gap between intra-class and inter-class distances. Here, positive and negative samples refer to persons with the same or a different identity from the anchor.
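A small \texttt{numpy} sketch of the batch-hard mining is shown below for illustration; it is not the training code, and it assumes that the mini-batch contains at least two identities.
\begin{lstlisting}[language=python]
import numpy as np

def batch_hard_triplet(X, labels, margin=0.3):
    """X: (B, d) feature matrix, labels: (B,) identity labels."""
    X, labels = np.asarray(X), np.asarray(labels)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # (B, B)
    loss = 0.0
    for a in range(len(labels)):
        pos = labels == labels[a]              # same identity (incl. anchor)
        hardest_pos = dist[a, pos].max()       # furthest positive
        hardest_neg = dist[a, ~pos].min()      # closest negative
        loss += max(margin + hardest_pos - hardest_neg, 0.0)
    return loss
\end{lstlisting}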
Besides batch-hard triplet loss, we employ softmax cross entropy loss for discriminative learning as well. The original softmax cross entropy loss can be formulated as follows:
\begin{equation}
L_{softmax} = -\sum^P_{i=1}\sum^{K}_{a=1} \log \frac{e^{W^T_{y_{a, i}}x_{a,i}}}{\sum_{k=1}^{C}e^{W^T_{k}x_{a,i}}}
\end{equation}
where $y_{a,i}$ is the ground truth identity of the sample $\{a,i\}$, and $C$ is the number of identities (classes). Our loss function for optimization is the combination of the softmax loss and the batch-hard triplet loss:
\begin{equation}\label{eq:loss}
L_{total} = L_{softmax} + L_{triplet}
\end{equation}
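A compact sketch of the cross-entropy term, and of its combination with the triplet term from the previous sketch, is given below; the logits are assumed to come from the classification layer, and the names are ours.
\begin{lstlisting}[language=python]
import numpy as np

def softmax_ce(logits, targets):
    """logits: (B, C) classification scores, targets: (B,) integer labels."""
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].sum()

# Schematic total objective:
# L_total = softmax_ce(logits, ids) + batch_hard_triplet(X, ids)
\end{lstlisting}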
\begin{table*}[t]\setlength{\tabcolsep}{8pt}
\centering
\footnotesize
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Model} & \multicolumn{4}{c}{MARS} & \multicolumn{4}{|c}{DukeMTMC-VideoReID} \\
\cline{2-9}
& R1 & R5 & R10 & mAP & R1 & R5 & R10 & mAP \\ \hline
Baseline & 74.5 & 88.8 & 91.8 & 64.0 & 79.1 & 93.9 & 96.0 & 76.8 \\
Baseline + TL & 80.8 & 92.1 & 94.3 & 74.0 & 90.6 & 95.8 & 96.7 & 89.7 \\
Baseline + TL + Avg & 82.5 & 92.9 & 94.9 & 75.0 & 91.8 & 97.4 & 98.0 & 91.0 \\
Baseline + TL + STA & 84.8 & 94.6 & 96.2 & 78.0 & 93.3 & 98.1 & 98.6 & 92.7 \\
Baseline + TL + STA + Fusion & 85.3 & 95.1 & 96.4 & 79.1 & 95.3 & 98.1 & 99.1 & 93.9 \\
Baseline + TL + STA + Fusion + Reg & 86.3 & 95.7 & 97.1 & 80.8 & 96.2 & 99.3 & 99.6 & 94.9 \\ \hline
\end{tabular}
\caption{Comparison of different proposed components, where TL, Avg, STA, Fusion, and Reg represent the triplet loss, average pooling, spatial-temporal attention module, feature fusion strategy, and inter-frame regularization respectively. R-1, R-5, R-10 accuracies (\%) and mAP (\%) are reported. Baseline model corresponds to ResNet50 trained with softmax loss on video datasets MARS or DukeMTMC-VideoReID respectively.}
\label{exp:sta}
\vspace{-2mm}
\end{table*}
\begin{table*}[t]\setlength{\tabcolsep}{12pt}
\centering
\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Sequence Length} & \multicolumn{4}{c}{MARS} & \multicolumn{4}{|c}{DukeMTMC-VideoReID} \\
\cline{2-9}
& R1 & R5 & R10 & mAP & R1 & R5 & R10 & mAP \\ \hline
N=2 & 81.7 & 93.8 & 95.7 & 75.7 & 90.3 & 97.6 & 98.6 & 89.0 \\
N=4 & 86.3 & 95.7 & 97.1 & 80.8 & 96.2 & 99.3 & 99.6 & 94.9 \\
N=6 & 86.2 & 95.7 & 96.9 & 81.0 & 96.0 & 99.4 & 99.7 & 95.0 \\
N=8 & 86.2 & 95.7 & 97.1 & 81.2 & 96.0 & 99.3 & 99.6 & 95.0 \\ \hline
\end{tabular}
\caption{Performance comparison of the STA model with different sequence lengths during testing on the MARS and DukeMTMC-VideoReID datasets. Here, we use the model trained with a sequence length of 4 and 4 spatial regions.}
\label{exp:seq}
\vspace{-2mm}
\end{table*}
\subsection{Datasets and Evaluation Protocol}
{\bf Mars dataset}~\cite{zheng2016mars} is one of the largest video-based person re-identification dataset. It contains 17,503 tracklets from 1,261 identities, and additional 3,248 tracklets serving as distractors. These video tracklets are captured by six cameras in a university campus. The total 1,261 identities are split into 625 identities for training and 636 identities for testing. Every identity in the training set has 13 video tracklets on average, and each tracklet has 59 frames on average. The ground truth labels are detected and tracked using the Deformable Part Model (DPM)~\cite{felzenszwalb2008discriminatively} and GMCP tracker~\cite{zamir2012gmcp}.
{\bf DukeMTMC-VideoReID dataset}~\cite{wu2018cvpr_oneshot} is another large-scale benchmark dataset for video-based person Re-ID. It is derived from the DukeMTMC dataset~\cite{ristani2016MTMC}. The DukeMTMC-VideoReID dataset contains 4,832 tracklets from 1,812 identities, and it is split into 702, 702 and 408 identities for training, testing and distraction respectively. In total, it has 369,656 frames of 2,196 tracklets for training, and 445,764 frames of 2,636 tracklets for testing and distraction. Each tracklet has 168 frames on average. The bounding boxes are annotated manually.
{\bf Evaluation Protocol.}
In our experiments, we use the Cumulative Matching Characteristic (CMC) curve and the mean average precision (mAP) to evaluate the performance of the STA model. For each query, CMC represents the accuracy of the person retrieval. We report the Rank-1, Rank-5, and Rank-10 (or Rank-20) scores to represent the CMC curve. The CMC metric is effective when each query corresponds to only one ground truth clip in the gallery. However, when multiple ground truth clips exist in the gallery and the objective is to return to the user as many correct matches as possible, CMC does not effectively measure the performance of models on this objective. Comparatively, mAP is a comprehensive metric that is well-suited for both single-match and multiple-match objectives.
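As an illustration of these metrics, the following simplified sketch computes the rank-$k$ match indicator and the average precision for a single query against a ranked gallery; it ignores the dataset-specific filtering rules used in the official evaluation code.
\begin{lstlisting}[language=python]
import numpy as np

def rank_k_and_ap(scores, gallery_labels, query_label, k=1):
    """scores: similarities of one query to all gallery tracklets."""
    order = np.argsort(-scores)                      # best match first
    matches = gallery_labels[order] == query_label
    rank_k = matches[:k].any()
    hits = np.cumsum(matches)
    precision = hits / (np.arange(len(matches)) + 1)
    ap = (precision * matches).sum() / max(matches.sum(), 1)
    return rank_k, ap
\end{lstlisting}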
\subsection{Implementation Details}\label{sec:implement_details}
\begin{table*}[t]\setlength{\tabcolsep}{12pt}
\centering
\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Number of Spatial Regions} & \multicolumn{4}{c}{MARS} & \multicolumn{4}{|c}{DukeMTMC-VideoReID} \\
\cline{2-9}
& R1 & R5 & R10 & mAP & R1 & R5 & R10 & mAP \\ \hline
K=2 & 85.3 & 95.1 & 96.6 & 80.3 & 94.7 & 99.0 & 99.6 & 93.8\\
K=4 & 86.3 & 95.7 & 97.1 & 80.8 & 96.2 & 99.3 & 99.6 & 94.9 \\
K=8 & 85.5 & 95.3 & 96.9 & 80.4 & 95.2 & 99.1 & 99.4 & 93.8 \\ \hline
\end{tabular}
\caption{Performance comparison of the STA model trained with different numbers of spatial regions on the MARS and DukeMTMC-VideoReID datasets. Here, we keep the sequence length constant at 4.}
\label{exp:spatial}
\end{table*}
\begin{table*}[t]\setlength{\tabcolsep}{12pt}
\centering
\footnotesize
\begin{tabular} {l|c|c|c|c}
\hline
Model & R1 & R5 & R20 & mAP\\ \hline
CNN+Kiss.+MQ~\cite{zheng2016mars} & 68.3 & 82.6 & 89.4 & 49.3 \\
SeeForest~\cite{zhou2017see} & 70.6 & 90.0 & 97.6 & 50.7 \\
Latent Parts~\cite{li2017learning} & 71.8 & 86.6 & 93.0 & 56.1 \\
QAN~\cite{liu2017quality} & 73.7 & 84.9 & 91.6 & 51.7 \\
K-reciprocal~\cite{zhong2017re} & 73.9 & -- & -- & 68.5 \\
TriNet~\cite{hermans2017defense} & 79.8 & 91.4 & -- & 67.7 \\
RQEN~\cite{song2017region} & 77.8 & 88.8 & 94.3 & 71.1 \\
CSACSE~\cite{chen2018video} & 81.2 & 92.1 & -- & 69.4 \\
STAN~\cite{li2018diversity} & 82.3 & -- & -- & 65.8 \\
CSACSE~\cite{chen2018video} + Optical Flow & 86.3 & 94.7 & {\bf 98.2} & 76.1 \\ \hline
STA & {\bf 86.3} & {\bf 95.7} & 98.1 & {\bf 80.8} \\
STA + ReRank & {\bf 87.2} & {\bf 96.2} & {\bf 98.6 } & {\bf 87.7} \\ \hline
\end{tabular}
\caption{Comparison of the STA model with the state-of-the-arts on MARS dataset. Here, we show the results tested with sequence length of 4 and spatial region number of 4 as well.}
\label{exp:mars}
\end{table*}
\begin{table}[t]\setlength{\tabcolsep}{4pt}
\centering
\footnotesize
\begin{tabular} {l|l|l|l|l}
\hline
Model & R1 & R5 & R20 & mAP\\ \hline
ETAP-Net(supervised)~\cite{wu2018cvpr_oneshot} & 83.6 & 94.6 & 97.6 & 78.3 \\ \hline
STA & {\bf 96.2} & {\bf 99.3} & {\bf 99.6} & {\bf 94.9} \\ \hline
\end{tabular}
\caption{Comparison of the STA model with the state-of-the-art on DukeMTMC-VideoReID dataset. Here, we show the results tested with sequence length of 4 and spatial region number of 4 as well.}
\label{exp:duke}
\end{table}
As discussed in ``Proposed Method'', we first randomly select $N=4$ frames from the input tracklet, and use the modified ResNet50 initialized on the ImageNet~\cite{deng2009imagenet} dataset as the backbone network. The number of spatial regions is set to $K=4$. Each frame is augmented by random horizontal flipping and normalization. Each mini-batch is sampled with $P$ randomly selected identities and $K$ randomly sampled images for each identity from the training set. In our experiments, we set $P=16$ and $K=4$ so that the mini-batch size is $64$, and we recommend setting the margin parameter of the triplet loss to $0.3$. During training, we use the Adam optimizer~\cite{kingma2014adam} with weight decay $0.0005$ to optimize the parameters. The overall learning rate is initialized to $0.0003$ and decays to $3 \times 10^{-5}$ and $3 \times 10^{-6}$ after training for $200$ and $400$ epochs respectively. The total training process lasts for 800 epochs. For evaluation, we extract the feature vector after the first fully connected layer as the representation of the query tracklet.
Our model is implemented on Pytorch platform and trained with two NVIDIA TITAN X GPUs. All our experiments on different datasets follow the same settings as above.
\subsection{Ablation Study}
To verify the effectiveness of each component in STA model, we conduct several analytic experiments including w/ or w/o triplet loss, w/ or w/o spatial-temporal attention model, w/ or w/o feature fusion, and w/ or w/o inter-frame regularization. In addition, we carry out experiments to investigate the effect of varying the sequence length $N$ and the number $K$ of spatial regions. Note that all the remaining settings are the same as those discussed in ``Implementation Details''.
{\bf Effectiveness of Components.}
In Table~\ref{exp:sta}, we list the results of each component in our STA framework. {\bf Baseline} represents the ResNet50 model trained with the softmax loss on the MARS/DukeMTMC-VideoReID dataset. {\bf TL} corresponds to the hard-batch triplet loss, and ``+ TL'' means the hard-batch triplet loss is combined with the softmax loss of the Baseline model. Note that {\bf Baseline} and {\bf Baseline + TL} both treat each tracklet frame by frame, i.e., they are image-based models. {\bf STA} is our proposed spatial-temporal attention model, in which the number of spatial regions is set to $K=4$ and the input sequence length is set to $N=4$ as well. It generates a $4 \times 4$ attention score matrix and uses the scores of the same spatial region across different frames to compute a weighted sum of the feature maps of each region.
Compared to {\bf Baseline + TL}, {\bf STA} improves Rank-1 and mAP accuracy by $4.0\%$ and $4.0\%$ on MARS, as well as $2.7\%$ and $2.4\%$ on DukeMTMC-VideoReID respectively. These results show that the spatial-temporal attention model is very effective at discovering discriminative image regions which are useful for boosting re-identification performance. {\bf Fusion} means aggregating feature representation by the proposed fusion strategy described in Alg.~\ref{algo:a1}. It is obvious that the proposed fusion strategy can further improve the performance by combining the most discriminative information and global information together. {\bf Reg} refers to the proposed inter-frame regularization term. From the comparison of w/ and w/o {\bf Reg}, we can find the Rank-1 accuracy and mAP improve by $1.0\%$ and $1.7\%$ on Mars, as well as $0.9\%$ and $1.0\%$ on DukeMTMC-VideoReID respectively. This improvement shows the proposed inter-frame regularization term can balance the frame diversity and thus further improve the performance.
{\bf Consistency over Sequence Length.} In Table~\ref{exp:seq}, we show the robustness of the trained model to different sequence lengths of the input tracklet. For a fair comparison, we use the model trained with a sequence length of 4 and 4 spatial regions, and evaluate the performance with different sequence lengths: 2, 4, 6, and 8. As we can see, the performances are very consistent when the input sequence length is 4, 6 or 8 (\emph{e.g.}, on MARS all Rank-1 accuracies are above $85\%$ and all mAP values are above $80\%$).
This is because the proposed STA model does not involve more parameters, so there is no restriction on the sequence length. In addition, the performances with $N=4$ surpass most of the state-of-the-art methods which usually need more frames for good representation of the input video.
{\bf Influence of Spatial Region Number.} To investigate how the number of spatial regions influences the final performance, we conduct experiments with three different numbers of spatial regions, 2, 4, and 8, with the same sequence length of 4. The results are listed in Table~\ref{exp:spatial}. In these experiments, the STA framework always achieves the best result with 4 spatial regions. Since the size of the feature maps is $16 \times 8$, a partition into 2 regions can be too coarse to isolate the discriminative parts, while a partition into 8 regions can make each block too small to contain enough information for the re-identification task.
\subsection{Comparison with the State-of-the-arts}
Table~\ref{exp:mars} and Table~\ref{exp:duke} report the comparison of our proposed STA model with the state-of-the-art techniques. On each dataset, our approach achieves the best performance, especially on mAP. We attain R1/mAP: $86.3/80.8 (87.2/87.7)$ on MARS before and after re-ranking. In addition, we achieve R1/mAP: $96.2/94.9$ on DukeMTMC-VideoReID.
{\bf Results on MARS.} Comparisons between our approach and the state-of-the-art approaches on MARS are shown in Table~\ref{exp:mars}.
The results show that our approach achieves {\bf 80.8\%} in mAP, which surpasses all existing work by more than {\bf 4.0\%}. Even for Rank-1 and Rank-5, our approach achieves competitive results compared to the most recent work listed in Table~\ref{exp:mars}. It is worth noting that the CSACSE + Optical Flow method~\cite{chen2018video} incorporates optical flow as extra information, which not only brings in more computation, but also means that the whole network cannot be trained end-to-end. Compared to other related work that does not use optical flow, our approach improves the Rank-1 accuracy and mAP by {\bf 4.0\%} and {\bf 15.0\%} respectively. After applying re-ranking, the Rank-1 accuracy and mAP improve to {\bf 87.2\%} and {\bf 87.7\%}, which outperforms the CSACSE + Optical Flow method~\cite{chen2018video} by {\bf 11.6\%} on mAP.
{\bf Results on DukeMTMC-VideoReID.} Table~\ref{exp:duke} shows the video-based Re-ID performance on the DukeMTMC-VideoReID dataset. This dataset is new to the field, and there is only one published baseline~\cite{wu2018cvpr_oneshot} on it. Compared to this baseline, our approach improves Rank-1 accuracy and mAP by more than {\bf 10\%}. Although few results have been reported on this dataset, we have good reason to believe that our approach works well, as it achieves {\bf 96.2\%} Rank-1 accuracy and {\bf 94.9\%} mAP.
\section{Introduction}
Probabilistic models are used in a broad swathe of disciplines, ranging from the social and behavioural sciences to biology and the physical and computational sciences, to name but a few.
At their very core, probabilistic models are defined in terms of random variables, which range over a set of outcomes that are subject to chance.
For example, a measurement on a quantum system is a random variable.
By performing the measurement, we record the outcome as the value of the random variable.
Repeated measurements on the same preparation allow determining the probability of each outcome.
Probabilistic programming offers a convenient way to express probabilistic models by unifying techniques from conventional programming such as modularity, imperative or functional specification, as well as the representation and use of uncertain knowledge.
A variety of probabilistic programming languages (PPLs) have been proposed (see \cite{gordon:henzinger:2014} for references), which have attracted interest from artificial intelligence, programming languages, cognitive science, and the natural language processing communities \cite{goodman:stuhlmuller:2014}.
However, as far as the authors can tell, there has been little interest in PPLs from the physics research community.
The aim of this article is to raise awareness of PPLs in this community by showing how quantum correlations can be simulated by probabilistic programming.
The core purpose of a PPL is to specify a model in terms of random variables and probability distributions \cite{gordon:henzinger:2014}.
As a consequence, a PPL is restricted to computing statistical correlations between variables which are a mathematical consequence of the underlying event space.
Quantum theory, on the other hand, has a different underlying event space.
This in turn allows correlations between variables to emerge that go beyond those governed by standard probability theory.
In particular, local hidden variables are straightforward to represent in a PPL, since they correspond to what classical probabilities can express.
Nonlocal correlations, however, cannot be described by a local hidden variable model~\cite{bell1964epr}.
The question arises as to how to simulate such correlations using probabilistic programming.
This article addresses this question by using a hypergraph formalism that has recently emerged in quantum information~\cite{acin:2015}.
The advantage of the hypergraph formalism is that it provides a flexible, abstract representation for rendering into the syntax of a PPL.
In addition, constraints inherent to the experimental design being simulated can be structurally expressed within the hypergraphs.
We will show that by embedding this hypergraph formalism in a PPL, an EPR\footnote{The acronym EPR derives from Einstein, Podolsky and Rosen's famous paper, which subsequently led to experimental protocols being developed to investigate quantum entanglement~\cite{einstein1935can}.} experiment can be simulated in which quantum correlations are produced.
In addition, we provide qualitative and quantitative comparisons between several implementations in contemporary PPLs under an open source license~\footnote{The code is available at \url{https://github.com/askoj/bell-ppls}}.
This opens the door to the possibility of reliably and meaningfully simulating experiments in quantum contextuality by means of probabilistic programs.
\section{Probabilistic Programming and the EPR experiment}
The basis of the EPR experiment is two systems $A$ and $B$ which are represented as bivalent variables ranging over $\{0,1\}$.
Variables $A$ and $B$ are respectively conditioned by bivalent variables $X$ and $Y$, both also ranging over $\{0,1\}$.
Four experiments are performed by means of joint measurements on $A$ and $B$ depending on the value of the respective conditioning variables.
As a consequence, the experiments produce four pairwise distributions over the four possible outcomes from the joint measurements:
\begin{align*}
p(A,B|X=0,Y=0) \\
p(A,B|X=0,Y=1) \\
p(A,B|X=1,Y=0) \\
p(A,B|X=1,Y=1)
\end{align*}
In order to simplify the notation,
variable $A_i$ is distributed as $p(A|X=i), i \in \{0,1\}$.
In a similar way, variables $B_0$ and $B_1$ are introduced.
Therefore, the preceding four pairwise distributions can be represented as the grid of sixteen probabilities depicted in Fig.~\ref{fig:p16}.
\begin{figure}[h]
\centering
\includegraphics[width=5.25cm]{diagram_probabilities_1}
\centering
\captionsetup{justification=centering,margin=2cm}
\caption {Four pairwise distributions in an EPR experiment}
\label{fig:p16}
\end{figure}
The EPR experiment is subject to a constraint known as the ``no-signalling'' condition.
No-signalling entails that the marginal probabilities observed in relation to one variable do not vary according to how the other variable is conditioned:
\begin{align}
p_1 + p_2 = p_5 + p_6 \\
p_9 + p_{10} = p_{13} + p_{14} \\
p_1 + p_3 = p_9 + p_{11} \\
p_5 + p_7 = p_{13} + p_{15}
\end{align}
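These conditions are easy to verify mechanically. The helper below is our own illustration; it takes the sixteen probabilities of Fig.~\ref{fig:p16} as a list \verb|p| with \verb|p[0]| corresponding to $p_1$.
\begin{lstlisting}[language=python]
def no_signalling(p, tol=1e-9):
    """Return True if the four no-signalling conditions hold."""
    checks = [
        abs((p[0] + p[1]) - (p[4] + p[5])),
        abs((p[8] + p[9]) - (p[12] + p[13])),
        abs((p[0] + p[2]) - (p[8] + p[10])),
        abs((p[4] + p[6]) - (p[12] + p[14])),
    ]
    return all(c < tol for c in checks)
\end{lstlisting}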
The goal of an EPR experiment is to empirically determine whether quantum particles are entangled.
We will not go into the details of what entanglement is, but rather focus on showing how statistical correlations between variables determine the presence of entanglement.
Entanglement is determined if any of the following inequalities is violated.
\begin{eqnarray}
|\Xv{A_{0}B_{0}} + \Xv{A_{0}B_{1}} + \Xv{A_{1}B_{0}} - \Xv{A_{1}B_{1}} | &\leq 2 \label{eqn:chsh1}\\
|\Xv{A_{0}B_{0}} + \Xv{A_{0}B_{1}} - \Xv{A_{1}B_{0}} + \Xv{A_{1}B_{1}} | &\leq 2 \label{eqn:chsh2} \\
|\Xv{A_{0}B_{0}} - \Xv{A_{0}B_{1}} + \Xv{A_{1}B_{0}} + \Xv{A_{1}B_{1}} | &\leq 2 \label{eqn:chsh3} \\
|-\Xv{A_{0}B_{0}} + \Xv{A_{0}B_{1}} + \Xv{A_{1}B_{0}} + \Xv{A_{1}B_{1}} | &\leq 2 \label{eqn:chsh4}
\end{eqnarray}
where the correlations are defined as follows:
\begin{align}
\Xv{A_{0}B_{0}} &= (p_1+p_4) - (p_2 + p_3) \\
\Xv{A_{0}B_{1}} &= (p_5+p_8) - (p_6 + p_7) \\
\Xv{A_{1}B_{0}} &= (p_9+p_{12}) - (p_{10} + p_{11}) \\
\Xv{A_{1}B_{1}} &= (p_{13}+p_{16}) - (p_{14} + p_{15})
\end{align}
For historical reasons, this set of four inequalities has become known as the Clauser-Horne-Shimony-Holt (CHSH) inequalities \cite{Shimony:Bell}.
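The correlators and the four CHSH combinations can likewise be evaluated directly from the sixteen probabilities; the helper below is an illustration of the definitions above and is independent of any particular PPL.
\begin{lstlisting}[language=python]
def chsh_values(p):
    """Four CHSH expressions from the sixteen probabilities of Fig. 1
    (p[0] corresponds to p_1)."""
    E = lambda q: (q[0] + q[3]) - (q[1] + q[2])
    E00, E01, E10, E11 = E(p[0:4]), E(p[4:8]), E(p[8:12]), E(p[12:16])
    return [abs( E00 + E01 + E10 - E11),
            abs( E00 + E01 - E10 + E11),
            abs( E00 - E01 + E10 + E11),
            abs(-E00 + E01 + E10 + E11)]

# Entanglement is indicated when any returned value exceeds 2.
\end{lstlisting}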
The data is collected from the four experiments defined above by subjecting a large number of pairs $(A,B)$ of quantum particles to joint measurements. More specifically, each such pair is measured in one of the four measurement conditions represented by the grid of probabilities depicted in Fig.~\ref{fig:p16}.
The maximum possible violation of the CHSH inequalities is 4, i.e., three pairs of variables are maximally correlated (=1) and the fourth is maximally anti-correlated (=-1).
However, if the experiment is modelled by a joint probability distribution across the four variables $A_0,A_1,B_0,B_1$, the maximum value that can be computed by any of the inequalities happens to be 2.
This is why the boundary of violation in the inequalities is 2 as it demarcates the boundary which standard statistical correlations cannot transcend.
This fact presents a challenge for a PPL, which is based on standard probability theory.
How can a PPL be developed to simulate non-classical quantum correlations?
\section{Design of an EPR Simulation Experiment using PPLs}
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=12cm]{experimental_design.png}
\caption{Framework for the EPR experiment}\label{fig:PPL-framework}
\end{figure}
Fig.~\ref{fig:PPL-framework} depicts the framework for how
a PPL can be used to simulate EPR experiments.
A phenomenon $P$, e.g., entangled quantum particles, is to be studied.
An experimental design is devised in which $P$ is examined in the four experimental conditions called ``measurement contexts''.
A measurement context $M_i, 1 \leq i \leq 4$ is designed to study $P$ from a particular experimental perspective.
For example, one measurement context corresponds to $X=0$ and $Y=1$ which yields probabilities over the four possible outcomes of joint measurements of $A$ and $B$.
We will denote these outcomes as
$\{00|01,01|01,10|01,11|01\}$.
For example, $00|01$ denotes the outcome $A=0,B=0$ in the measurement context $M_2=\{X=0,Y=1\}$.
Measurement contexts are formally defined as hyperedges in a hypergraph called a ``contextuality scenario''.
Contextuality scenarios $\mathcal{X}_i, 1\leq i \leq 2$ are composed into a composite contextuality scenario $\mathcal{X}$, which is a hypergraph describing the phenomenon $P$.
Composition offers the distinct advantage of allowing experimental designs to be theoretically underpinned by hypergraphs in a modular way \cite{bruza:2018}.
More formally, a \emph{contextuality scenario} is a hypergraph $X=(V,E)$ such that:
\vspace*{-0.2cm}
\begin{itemize}
\item $v \in V$ denotes an outcome which can occur in a measurement context
\vspace*{-0.2cm}
\item $e \in E$ is the set of all possible outcomes given a particular measurement context
\end{itemize}
See Definition 2.2.1 in Ref.~\cite{acin:2015}.
It is important to note that the PPL functions both as a means to simulate an EPR experiment and as a means to determine whether quantum correlations are present.
As we will see below, each hyperedge of $\mathcal{X}$ is a probability distribution over outcomes in a given measurement context.
In EPR experiments these
distributions are computed by a sampling process which ensures that the no-signaling constraint is adhered to.
In order to achieve this, the hypergraphs $\mathcal{X}_i$ are composed using the Foulis--Randall (FR) product \cite{acin:2015} (see the next section).
As a consequence, the PPL must implement this product for a valid simulation of an EPR experiment.
Much of the technical detail to follow describes how this can be achieved.
To our knowledge the FR product has never been implemented before in a PPL.
Several such implementations will be specified below in various PPLs and then compared.
At the conclusion of the simulation, the CHSH inequalities can be applied to correlations computed from relevant hyperedges in the composite contextuality scenario $\mathcal{X}$ to determine whether quantum correlations are present.
If so, the PPL has successfully simulated phenomenon $P$ as exhibiting quantum, rather than, classical statistics.
\subsection{Foulis--Randall product}
The FR product is used to compose contextuality scenarios as its product ensures no signalling between systems represented by the variables $A$ and $B$ \cite{acin:2015}.
The \emph{Foulis--Randall product} is the scenario $ H_{A} \otimes H_{B} $ with
\begin{align*}
V\left ( H_{A} \otimes H_{B} \right ) = V(H_{A}) \times V(H_{B}), \; \; \;
E(H_{A} \otimes H_{B}) = E_{A \rightarrow B} \cup E_{B \rightarrow A}
\end{align*}
where
\begin{align*}
E_{A\rightarrow B}:= \left \{ \bigcup_{a\in e_{A}} \left \{ a \right \} \times f\left ( a \right ) : e_{A} \in E_{A} , f : e_{A} \rightarrow E_{B} \right \} \\
\linebreak
E_{B\rightarrow A}:= \left \{ \bigcup_{b\in e_{B}} \left \{ b \right \} \times f\left ( b \right ) : e_{B} \in E_{B} , f : e_{B} \rightarrow E_{A} \right \}
\end{align*}
The preceding definition formalizes the simultaneous measurements of the two systems $A$ and $B$ such that no-signalling occurs between these systems \cite{acin:2015}.
The no-signalling constraint is imposed by means of a set of specific hyperedges which are a consequence of the FR product.
We now turn to the issue of modularity which was mentioned previously.
There are two systems $A$ and $B$.
System $A$ has two measurement contexts: 1) $A|X=0$ and 2) $A|X=1$, where both measurements yield an outcome $A=0$ or $A=1$.
In the hypergraph formalism, a measurement context is formalized by a hyperedge.
The hypergraph $H_A$ therefore has two hyperedges, one for each measurement context.
These two hyperedges are visually represented on the LHS of Fig.~\ref{fig:h_a_h_b}.
Similarly, hypergraph $H_B$ comprises two edges.
$H_A$ and $H_B$ can be viewed as modules which can be composed in various ways to suit the requirements of a particular experimental design.
In the EPR experiment, four measurement contexts are required in which $A$ and $B$ are jointly measured subject to the no-signalling condition.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=8cm]{h_a_h_b.png}
\caption{Hypergraph Representation Of EPR Systems A \& B}\label{fig:h_a_h_b}
\end{figure}
In order to achieve this, the hypergraphs $H_A$ and $H_B$ are composed using the FR product to produce a composite hypergraph.
The corresponding hypergraph contains 12 edges. Four of these edges correspond to the four pairwise distributions depicted in Fig.~\ref{fig:p16} and 8 additional edges which ensure that no-signalling can occur.
The FR product produces the hypergraph depicted in Fig.~\ref{fig:twelve_constraints}.
This hypergraph corresponds to composite contextuality scenario $\mathcal{X}$ depicted in Fig.~\ref{fig:PPL-framework}.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=9cm]{twelve_constraints.png}
\caption{Hyperedges Of Foulis--Randall Product}\label{fig:twelve_constraints}
\end{figure}
To assist with understanding this formalism, the calculation of a single hyperedge is considered.
Let $e_{A}$ be the edge $\left \{ 0|0 , 1|0 \right \}$ of hypergraph $H_{A}$.
The relevant calculation associated with this instance is then one of two combinations:
$f\left ( 0|0 \right ) \cup f\left ( 1|0 \right )$ or
$f\left ( 1|0 \right ) \cup f\left ( 0|0 \right )$.
Selecting the first of the two combinations and expanding it yields the following expression:
$\left \{ 00|00 , 01|00 \right \} \cup \left \{ 10|01 , 11|01 \right \}$.
The hyperedge is isolated in Fig.~\ref{fig:foulis_randall_one_hyperedge}.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=9cm]{foulis_randall_one_hyperedge.png}
\caption{Single Hyperedge Of Foulis--Randall Product}\label{fig:foulis_randall_one_hyperedge}
\end{figure}
In what follows, we implement this hypergraph formalism in several probabilistic programming languages and evaluate the advantages of each.
\section{Implementations}
In this section, four commonly available PPLs are used to illustrate a simulation of the same EPR experiment.
The goal of this comparison is to judge their relative effectiveness for this purpose.
\subsection{Scope Of Investigation}
Four PPLs were chosen for both qualitative and quantitative comparison and are listed in Table~\ref{tab:characteristics}.
While other PPLs such as Stan~\cite{carpenter2017stan}, Church~\cite{goodman2008church}, or WebPPL~\cite{goodman:stuhlmuller:2014} were considered for the investigation, we decided to exclude such domain-specific languages on the basis of limited applications in quantum physics.
Probabilistic programming frameworks that only focus on directed graphs, such as Edward~\cite{tran2016edward}, were also excluded, since this feature is not relevant to the EPR experiment in the hypergraph formalism.
\begin{table}
\begin{tabular}{llll}
\textbf{Name} & \textbf{Programming language} & \textbf{License} & \textbf{Supported OS}\\
\hline
PyMC3~\cite{salvatier2016probabilistic} & Python & Apache-2.0 & Windows, Mac, Linux \\
Figaro~\cite{figaro} & Scala & Custom & Windows, Mac, Linux \\
Turing.jl~\cite{Turing2016} & Julia & MIT & Windows, Mac, Linux \\
Pyro~\cite{bingham2018pyro} & Python & Custom & Windows, Mac, Linux \\
\hline
\end{tabular}
\caption{Basic characteristics of probabilistic programming languages}
\label{tab:characteristics}
\end{table}
\subsection{Qualitative Comparison Of PPLs}
The qualitative comparison highlights important pragmatic aspects of probabilistic programs, and is defined by the following criteria.
\subsubsection{Criteria Of Comparison}
\begin{itemize}
\item \textbf{Extensibility: }The PPL accommodates for simulation of complex experimental settings. This may be inherent in the PPL's means of extension i.e., is open-source, or whether its syntactic constructs provide flexibility in specifying data structures and the flow of control.
\item \textbf{Accessibility: }The PPL is intuitive and coherent. Possibly by means of expressive constructs, or comprehensive supporting documentation, accessibility may also be demonstrated by the PPL's community base, or degree of application.
\item \textbf{Acceleration: }The PPL implements methods of optimization for its execution, reflected in the speed and resource-utilization of its compilation. Acceleration may also be demonstrated in the PPL's scalability.
\end{itemize}
These criteria are derived from criteria commonly used to judge programming languages.
\subsubsection{Extensibility Of PPLs}
Regarding extensibility, all PPLs are supplemented with repositories containing source code that can be (with respect to licenses) modified and re-compiled. While all PPLs offer containment systems for the configuration of probabilistic models, only PyMC3 and Figaro provide tools for the diagnosis and validation of models. Considering Turing.jl's dependence on the Distributions.jl~\cite{distributions_package} package, it can be said that all PPLs provide a number of distribution configurations. In contrast, not all PPLs offer flexibility in the step methods used for sampling. This can be overlooked considering that, beyond the common sampling methods, additional configurations are typically specialised. Figaro and Pyro are the only PPLs to offer control-flow independence in probabilistic inference; both Figaro's inference algorithms and Pyro's strong integration with Python allow for atomic inference processing. Both are also the only PPLs to offer comprehensive open universe simulation~\cite{milch2010extending}. All PPLs except Turing.jl provide constructs for the manipulation of the underlying inference algorithms. Table~\ref{tab:extensibility} compares the features discussed.
\begin{table}
\begin{tabular}{lllll}
\textbf{Criteria} & \textbf{PyMC3} & \textbf{Figaro} & \textbf{Turing.jl} & \textbf{Pyro}\\
\hline
Control-Flow Independence & \ding{55} & \ding{51} & \ding{55} & \ding{51} \\
Open Universe Simulation & \ding{55} & \ding{51} & \ding{55} & \ding{51} \\
Distribution Configurations & \textbf{$\sim$60} & \textbf{$\sim$36} & \textbf{?} & \textbf{$\sim$39} \\
Step Methods & \textbf{$\sim$8} & \textbf{$\sim$41} & \textbf{$\sim$13} & \textbf{$\sim$4} \\
Algorithm Manipulation & \ding{51} & \ding{51} & \ding{55} & \ding{51} \\
Online Repository & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\
Model Configuration & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\
Model Validation & \ding{51} & \ding{51} & \ding{55} & \ding{55} \\
\hline
\end{tabular}
\caption{Comparison Of Extensibility Of PPLs}
\label{tab:extensibility}
\end{table}
\subsubsection{Accessibility Of PPLs}
While PyMC3 and Turing.jl have seen a wealth of research projects conducted since their conception, the later debut of Figaro and Pyro has resulted in fewer examples of application. In light of this, both PPLs provide tutorial literature, and have more comprehensive API reference documentation than the former two. In contrast, Turing.jl has limited tutorial content to support its usage, and does not provide a complete API reference. For ease of use, Pyro advertises its design for agile development; however, its syntactic conventions do not differ significantly from those of PyMC3. Nevertheless, both are more easily applied than Figaro or Turing.jl.
\subsubsection{Acceleration Of PPLs}
Concerning acceleration, PyMC3 bases its optimization on Theano's architecture, an open-source project originally produced at the Université de Montréal \cite{alrfou2016theano}. Correctly applying the Theano architecture with respect to the GPU on which the PPL is running is a multi-staged process. As Theano depends on the NVIDIA CUDA Developer Toolkit, the GPU's compatibility with the toolkit's drivers must be verified before installation can occur. Thereafter, the software `self-validates', and PyMC3 configuration settings must be altered to recognize the GPU support. Only if the toolkit is correctly installed can PyMC3 take full advantage of its GPU acceleration capabilities. By contrast, none of the other PPLs evaluated require manual extension of acceleration. For current experimentation, Theano may be suitable; however, its discontinuation as of 2017 \cite{peng_2017} poses a threat to using it as a stable basis for future development. For comparison, Figaro was designed specifically for usage within demanding experimental designs. The development team has stressed the library's suitability for such settings through its various capabilities, e.g., open universe models, spatio-temporal models, recursive models, and infinite models \cite{milch2010extending}.
Similarly, Uber AI Labs stresses that Pyro can be easily scaled to projects of demanding size \cite{pyro_ui_nov2017}; it should be noted that Pyro is based on the PyTorch framework, and as a result takes advantage of PyTorch's strong GPU-accelerated machine learning tools~\cite{paszke2017pytorch}.
\subsection{Illustration of the EPR Experiment in the four PPLs}
Specifying the EPR experiment in different PPLs allows their varying syntactic constructs to be highlighted and contrasted, as well as their differing approaches to the simulation.
\subsubsection{PyMC3}
A PyMC3 model defines a context management system used to isolate operations undertaken on stochastic random variables, and thus Python's \verb|with| keyword is applied to automate the release of resources after execution. Inside the model, the \verb|Bernoulli| method specifies that a distribution of Bernoulli values will be simulated for a given random variable. A probability is also given to direct the sampler towards a bias when generating the distribution. To assist with randomizing results, a \verb|Uniform| distribution is also declared. Then the \verb|sample| method invokes a number of iterations over the specified model. PyMC3 allows the tuning of results prior to sampling, as well as the indication of a sampling method, for which a number of algorithms are offered. In the example, \verb|Metropolis| implies the Metropolis-Hastings algorithm will be used to obtain random results. Upon execution, the model generates a trace containing distributions reflecting the earlier declared random variables.
As this is the first example of code for the experimentation, annotations expressing the meaning of the code are included throughout the implementation.
\begin{lstlisting}[language=python]
from numpy import zeros, array, fliplr, sum
from itertools import product
import pymc3 as pm
\end{lstlisting}
The first block of the implementation declares helper methods used for value conversions. The first, the \verb|get_vertex| method, linearises the binary variables $X$, $Y$, $A$, and $B$ used to express probabilities of the global distribution into an index of an array.
\begin{align*}
get\_vertex\left ( a ,\ b ,\ x ,\ y \right ) := ((x\times 8)+(y\times 4))+(b+(a\times 2))
\end{align*}
\begin{lstlisting}[language=python]
def get_vertex(a, b, x, y):
return ((x*8)+(y*4))+(b+(a*2))
\end{lstlisting}
The second method, \verb|get_hyperedges|, uses enumeration to retrieve the indices of the hyperedges that contain a given vertex.
\begin{lstlisting}[language=python]
def get_hyperedges(H, n):
l = []
for idx, e in enumerate(H):
if n in e:
l.append(idx)
return l
\end{lstlisting}
The \verb|foulis_randall_product| method generates the binary coordinates for all hyperedges in the FR product.
\begin{lstlisting}[language=python]
def foulis_randall_product():
fr_edges = []
\end{lstlisting}
The first step involves declaring the hypergraphs for both EPR systems.
\begin{align*}
(((( 0 ,\ 0 ) ,\ ( 1 ,\ 0 ) ) ,\ (( 0 ,\ 1 ) ,\ ( 1 ,\ 1 ))) ,\ ((( 0 ,\ 0 ) ,\ ( 1 ,\ 0 )) ,\ (( 0 ,\ 1 ) ,\ ( 1 ,\ 1 )) ))
\end{align*}
\begin{lstlisting}[language=python]
H = [ [[[0, 0], [1, 0]], [[0, 1], [1, 1]]],
[[[0, 0], [1, 0]], [[0, 1], [1, 1]]]]
\end{lstlisting}
The next step involves producing four hyperedges to represent the four explicit joint measurement contexts on both systems. Two variables are given to assist with computing this result.
\begin{align*}
g \in E_{A} ,\ h \in E_{B}
\end{align*}
\begin{lstlisting}[language=python]
for edge_a in H[0]:
for edge_b in H[1]:
\end{lstlisting}
Thereafter, each hyperedge is defined as the combined sets produced by the following expression.
\begin{align*}
( \forall i \in g ,\ \forall j \in h : \forall w \in i ,\ \forall x \in j : ( w_{1} ,\ x_{1} ,\ w_{2} ,\ x_{2} ) )
\end{align*}
\begin{lstlisting}[language=python]
fr_edge = []
for vertex_a in edge_a:
for vertex_b in edge_b:
fr_edge.append([
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]])
fr_edges.append(fr_edge)
\end{lstlisting}
The last step involves calculating the hyperedges of both systems as dependent on the other. To achieve this programmatically, three variables are declared. The first ($m$) are all members of set $M$, where $M$ are the possible measurement choices for the scenario (in this case two), the second ($n$) being the other possible measurement choice, and the last variable $o$ being all edges from the hypergraph associated with the measurement choice.
\begin{align*}
\forall m \in M : n = m' ,\ \forall o \in E(H_{m})
\end{align*}
\begin{lstlisting}[language=python]
for mc in range(0,2):
mc_i = abs(1-mc)
for edge in H[mc]:
\end{lstlisting}
For each $o$, in some selected hypergraph, a second variable $j$ is declared as two possible values. For each possibility, a hyperedge is then defined as the variable $k$, being the combination of all vertices in $o$ given to a function that produces the hyperedge.
\begin{align*}
\forall j \in ( 1 ,\ 2 ) : k = ( \forall l \in E(H_{n}) : f \left ( l ,\ j ,\ m ,\ n ,\ o \right ) )
\end{align*}
\begin{lstlisting}[language=python]
for j in range(0,2):
fr_edge = []
for i in range(0, len(edge)):
\end{lstlisting}
The mentioned function computes the hyperedge by declaring single-use variables $s$, $q$, $r$, $u$, $t$, and $v$.
\begin{align*}
E(H_{n})_s = l' ,\ q = o_{|s-j|+1} ,\ r = l_{1} ,\ u = l_{2} ,\ t = \left ( q_{1} ,\ r_{1} ,\ q_{2} ,\ r_{2} \right ) ,\ v = \left ( q_{1} ,\ u_{1} ,\ q_{2} ,\ u_{2} \right )
\end{align*}
\begin{lstlisting}[language=python]
edge_b = H[mc_i][i]
vertex_a = edge[abs(i-j)]
vertex_b = edge_b[0]
vertex_c = edge_b[1]
vertices_a = [
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]]
vertices_b = [
vertex_a[0], vertex_c[0],
vertex_a[1], vertex_c[1]]
\end{lstlisting}
Thereafter, a set is constructed by use of its variables, and portions of the desired hyperedges are iteratively returned.
\begin{align*}
\left ( \left ( t_{m} ,\ t_{n} ,\ t_{m+2} ,\ t_{n+2} \right ) ,\ \left ( v_{m} ,\ v_{n} ,\ v_{m+2} ,\ v_{n+2} \right ) \right )
\end{align*}
Upon calculation of the last step of the process, the hyperedges corresponding to the measurement choices of both EPR systems as dependent on the other are produced, totaling the necessary constraints described in binary format.
\begin{lstlisting}[language=python]
fr_edge.append([
vertices_a[mc], vertices_a[mc_i],
vertices_a[mc+2], vertices_a[mc_i+2]]
)
fr_edge.append([
vertices_b[mc], vertices_b[mc_i],
vertices_b[mc+2], vertices_b[mc_i+2]])
fr_edges.append(fr_edge)
return fr_edges
\end{lstlisting}
To compute the four pairwise distributions at the basis of the EPR experiment (see Fig.~\ref{fig:p16}), an iterative sampling process is undertaken for the four variables $a$, $b$, $x$, and $y$ that were previously mentioned as specifying one of the 16 possible vertices.
These values are restricted to binary outcomes by means of specifying Bernoulli distributions for which the sampler runs the process.
The experiment is fixed such that each vertex has an equal possibility of being sampled as any other vertex, and results may only be discounted if they do not comply with specified input correlations.
Upon selecting a vertex at a step in the iterative process, the array index associated with its binary representation is incremented by one, via the use of the vertex mapping function. Simultaneously, another array representing the hyperedges of the FR product is also incremented by one at all indexes associated with the hyperedges containing the said vertex. The iterative process only exits when the sum of the global distribution is equal to the desired number of iterations. Thereafter, each vertex is normalised by the sum of the values of the hyperedges in which it is contained, and is multiplied by 3 (to reflect the number of associated hyperedges). A visualisation of the hyperedges associated with the vertex at index $00|00$ of the global distribution can be seen in Fig.~\ref{fig:foulis_randall_normalised}.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=9cm]{foulis_randall_normalised.png}
\caption{Visualisation Of Hyperedges Associated With Vertex 00$|$00}\label{fig:foulis_randall_normalised}
\end{figure}
If the said vertex sustains a weight of 10, and the combined weight of its associated hyperedges is 40, the normalised weight of the vertex will equate to 0.75.
\begin{lstlisting}[language=python]
def generate_global_distribution(constraints,N):
hyperedges = foulis_randall_product()
hyperedges_tallies = zeros(12)
global_distribution = zeros(16)
while sum(global_distribution) < N:
with pm.Model():
pm.Uniform('C',0.0,1.0)
pm.Bernoulli('A',0.5)
pm.Bernoulli('B',0.5)
pm.Bernoulli('X',0.5)
pm.Bernoulli('Y',0.5)
S = pm.sample(N,tune=0, step=pm.Metropolis())
c = S.get_values('C')
a = S.get_values('A')
b = S.get_values('B')
x = S.get_values('X')
y = S.get_values('Y')
for i in range(0, N):
if (c[i] < constraints[x[i]][y[i]][a[i],b[i]]):
for edge in get_hyperedges(hyperedges,
[a[i], b[i], x[i], y[i]]):
hyperedges_tallies[edge] += 1
global_distribution[
get_vertex(a[i], b[i], x[i], y[i])] += 1
z = [0,1]
for a, b, x, y in product(z,z,z,z):
summed_tally = (sum(hyperedges_tallies[e]
for e in get_hyperedges(hyperedges, [a, b, x, y])))
global_distribution[get_vertex(a, b, x, y)] /= summed_tally
global_distribution *= 3
return global_distribution
\end{lstlisting}
Given below in Listing~\ref{fig:pymc3_implementation} is the complete undivided implementation of the EPR experimentation in PyMC3.
\begin{lstlisting}[language=python,caption={PyMC3 Implementation Of EPR Simulation},captionpos=b,label={fig:pymc3_implementation}]
from numpy import zeros, array, fliplr, sum
from itertools import product
import pymc3 as pm
def get_vertex(a, b, x, y):
return ((x*8)+(y*4))+(b+(a*2))
def get_hyperedges(H, n):
l = []
for idx, e in enumerate(H):
if n in e:
l.append(idx)
return l
def foulis_randall_product():
fr_edges = []
H = [ [[[0, 0], [1, 0]], [[0, 1], [1, 1]]],
[[[0, 0], [1, 0]], [[0, 1], [1, 1]]]]
for edge_a in H[0]:
for edge_b in H[1]:
fr_edge = []
for vertex_a in edge_a:
for vertex_b in edge_b:
fr_edge.append([
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]])
fr_edges.append(fr_edge)
for mc in range(0,2):
mc_i = abs(1-mc)
for edge in H[mc]:
for j in range(0,2):
fr_edge = []
for i in range(0, len(edge)):
edge_b = H[mc_i][i]
vertex_a = edge[abs(i-j)]
vertex_b = edge_b[0]
vertex_c = edge_b[1]
vertices_a = [
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]]
vertices_b = [
vertex_a[0], vertex_c[0],
vertex_a[1], vertex_c[1]]
fr_edge.append([
vertices_a[mc], vertices_a[mc_i],
vertices_a[mc+2], vertices_a[mc_i+2]]
)
fr_edge.append([
vertices_b[mc], vertices_b[mc_i],
vertices_b[mc+2], vertices_b[mc_i+2]])
fr_edges.append(fr_edge)
return fr_edges
def generate_global_distribution(constraints,N):
hyperedges = foulis_randall_product()
hyperedges_tallies = zeros(12)
global_distribution = zeros(16)
while sum(global_distribution) < N:
with pm.Model():
pm.Uniform('C',0.0,1.0)
pm.Bernoulli('A',0.5)
pm.Bernoulli('B',0.5)
pm.Bernoulli('X',0.5)
pm.Bernoulli('Y',0.5)
S = pm.sample(N,tune=0, step=pm.Metropolis())
c = S.get_values('C')
a = S.get_values('A')
b = S.get_values('B')
x = S.get_values('X')
y = S.get_values('Y')
for i in range(0, N):
if (c[i] < constraints[x[i]][y[i]][a[i],b[i]]):
for edge in get_hyperedges(hyperedges,
[a[i], b[i], x[i], y[i]]):
hyperedges_tallies[edge] += 1
global_distribution[
get_vertex(a[i], b[i], x[i], y[i])] += 1
z = [0,1]
for a, b, x, y in product(z,z,z,z):
summed_tally = (sum(hyperedges_tallies[e]
for e in get_hyperedges(hyperedges, [a, b, x, y])))
global_distribution[get_vertex(a, b, x, y)] /= summed_tally
global_distribution *= 3
return global_distribution
\end{lstlisting}
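To indicate how the listing can be driven, the sketch below constructs a hypothetical \verb|constraints| structure from the standard quantum probabilities $p(a,b|x,y) = \tfrac{1}{4}\big(1 + (-1)^{a \oplus b}\,E_{xy}\big)$, where $E_{xy} = \tfrac{1}{\sqrt{2}}$ except $E_{11} = -\tfrac{1}{\sqrt{2}}$, and passes it to \verb|generate_global_distribution|; the function name \verb|quantum_constraints| and the number of iterations are our own choices.
\begin{lstlisting}[language=python]
import numpy as np

def quantum_constraints():
    # Acceptance thresholds reproducing the Tsirelson-bound statistics.
    E = [[1 / np.sqrt(2), 1 / np.sqrt(2)],
         [1 / np.sqrt(2), -1 / np.sqrt(2)]]
    return [[np.array([[(1 + (1 if a == b else -1) * E[x][y]) / 4
                        for b in (0, 1)]
                       for a in (0, 1)])
             for y in (0, 1)]
            for x in (0, 1)]

global_distribution = generate_global_distribution(quantum_constraints(), 1000)
# Substituting the sixteen entries into the CHSH expressions should give a
# value above 2 (up to sampling noise), signalling non-classical correlations.
\end{lstlisting}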
\subsubsection{Turing.jl}
Like PyMC3, Turing.jl isolates operations on random variables to a single model with the use of the \verb|@model| macro. To obtain randomly sampled non-negative values for a Bernoulli distribution, the model requires the declaration of a uniform Beta prior, invoked with the \verb|Beta| method. Then Bernoulli distributions are declared with a \verb|Bernoulli| method, once again accompanied by probabilities describing sampling biases for later generated distributions, as well as a \verb|Uniform| distribution.
In the \verb|generate_global_distribution| method of Listing~\ref{fig:turing_implementation}, the \verb|sample| function invokes the model, a step-method, as well as the number of desired iterations. In this case, the Sequential Monte Carlo sampling (abbreviated to \verb|SMC|) has been applied.
To obtain the trace of a distribution in the model, the output must be indexed with the variable name preceded by a colon. In the example, the results are retrieved, tallied, and normalised by means of the \verb|foulis_randall_product| method, before returning the result.
\begin{lstlisting}[language=java,caption={Turing.jl Implementation Of EPR Simulation},captionpos=b,label={fig:turing_implementation}]
using Turing
using Distributions

function get_vertex(a, b, x, y)
    # Map the binary tuple (a, b, x, y) to an index of the global distribution.
    return Int(((x * 8) + (y * 4)) + (b + (a * 2))) + 1
end

function get_hyperedges(H, n)
    # Indices of all hyperedges that contain the vertex n.
    l = Int[]
    for (i, e) in enumerate(H)
        if n in e
            append!(l, i)
        end
    end
    return l
end

function foulis_randall_product()
    fr_edges = []
    H = [[[[0.0, 0.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 1.0]]],
         [[[0.0, 0.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 1.0]]]]
    # The four explicit joint measurement contexts.
    for i in 1:2, j in 1:2
        fr_edge = Array{Array{Float64}}(undef, 0)
        for k in 1:2, l in 1:2
            append!(fr_edge, [[H[1][i][k][1], H[2][j][l][1],
                               H[1][i][k][2], H[2][j][l][2]]])
        end
        append!(fr_edges, [fr_edge])
    end
    # The eight hyperedges in which one measurement depends on the other.
    for mc in 1:2
        mc_i = 3 - mc
        for edge in H[mc], j in 0:1
            fr_edge = Array{Array{Float64}}(undef, 0)
            for i in 1:2
                edge_b = H[mc_i][i]
                vertex_a = edge[abs(i - 1 - j) + 1]
                vertex_b = edge_b[1]
                vertex_c = edge_b[2]
                vertices_a = [vertex_a[1], vertex_b[1], vertex_a[2], vertex_b[2]]
                vertices_b = [vertex_a[1], vertex_c[1], vertex_a[2], vertex_c[2]]
                append!(fr_edge, [[vertices_a[mc], vertices_a[mc_i],
                                   vertices_a[mc+2], vertices_a[mc_i+2]]])
                append!(fr_edge, [[vertices_b[mc], vertices_b[mc_i],
                                   vertices_b[mc+2], vertices_b[mc_i+2]]])
            end
            append!(fr_edges, [fr_edge])
        end
    end
    return fr_edges
end

@model function epr()
    z ~ Beta(1, 1)
    a ~ Bernoulli(z)
    b ~ Bernoulli(z)
    x ~ Bernoulli(z)
    y ~ Bernoulli(z)
    c ~ Uniform(0.0, 1.0)
end

function generate_global_distribution(constraints, N)
    hyperedges = foulis_randall_product()
    hyperedges_tallies = zeros(12)
    global_distribution = zeros(16)
    while sum(global_distribution) < N
        r = sample(epr(), SMC(), N)
        a = vec(r[:a]); b = vec(r[:b])
        x = vec(r[:x]); y = vec(r[:y])
        c = vec(r[:c])
        for i in 1:N
            if c[i] < constraints[Int(x[i])+1][Int(y[i])+1][Int(a[i])+1, Int(b[i])+1]
                for edge in get_hyperedges(hyperedges, [a[i], b[i], x[i], y[i]])
                    hyperedges_tallies[edge] += 1
                end
                global_distribution[get_vertex(a[i], b[i], x[i], y[i])] += 1
            end
        end
    end
    for a in 0:1, b in 0:1, x in 0:1, y in 0:1
        summed_amount = sum(hyperedges_tallies[e]
            for e in get_hyperedges(hyperedges, [a, b, x, y]))
        global_distribution[get_vertex(a, b, x, y)] /= summed_amount
    end
    return global_distribution * 3
end
\end{lstlisting}
\subsubsection{Figaro}
To achieve a joint probability distribution on a measurement context of random variables, Figaro's syntactic elements reveal fundamental differences in its approach. A class is the advised construct for declaring a model. For each random variable in the model, a probability bias is applied to Figaro's \verb|Flip| method, generating a Bernoulli distribution whose results the \verb|If| method can then map to desired values or to further elements.
In Listing~\ref{fig:figaro_implementation}, states are bound to integers. Thereafter, possible joint outcomes of random variables are permuted through the articulation of expressions concerning the previously mentioned states. In the \verb|GenerateGlobalDistribution| method, it can be seen that after initialising the FR product (via the \verb|FoulisRandallProduct| method), the sampling process is called by means of \verb|start|, \verb|stop|, and \verb|kill| chains applied on the \verb|algorithm| object. On the preceding line, the \verb|MetropolisHastings| method implies that the Metropolis-Hastings step-method will be used for the sampling process, and that the \verb|outcomes| of the \verb|model| class will be considered. For more complex experiments, the \verb|ProposalScheme| may be modified, although not in this case. The \verb|sampleFromPosterior| sub-method chained to calls on each variable compiles the required distributions on execution. The \verb|take| sub-method chained to the sampling methods declares the number of outcomes retrieved from the sampler. This aspect is a consequence of the sampler delivering results via \verb|Stream| primitives, a resource-efficient design ensuring that only required data is evaluated. Thereafter, the remaining code tallies the indexes of the \verb|globalDistribution| array and normalises the results.
\begin{lstlisting}[caption={Figaro Implementation Of EPR Simulation},captionpos=b,label={fig:figaro_implementation}]
import com.cra.figaro.language._
import com.cra.figaro.algorithm.sampling._
import com.cra.figaro.library.atomic.continuous.Uniform
import com.cra.figaro.library.compound.If

object EPR {

  // Measurement outcomes and choices bound to integer states.
  class Model {
    val a = If(Flip(0.5), 1, 0)
    val b = If(Flip(0.5), 1, 0)
    val x = If(Flip(0.5), 1, 0)
    val y = If(Flip(0.5), 1, 0)
    val c = Uniform(0.0, 1.0)
  }

  def GetVertex(a: Int, b: Int, x: Int, y: Int): Int =
    ((x * 8) + (y * 4)) + (b + (a * 2))

  def GetHyperedges(H: Array[Array[Array[Double]]],
                    n: Array[Double]): Array[Int] = {
    var l = Array[Int]()
    for (i <- H.indices) {
      if (H(i).exists(_.sameElements(n))) l :+= i
    }
    l
  }

  def FoulisRandallProduct(): Array[Array[Array[Double]]] = {
    var foulisRandallEdges = Array[Array[Array[Double]]]()
    val H = Array(
      Array(Array(Array(0.0, 0.0), Array(1.0, 0.0)),
            Array(Array(0.0, 1.0), Array(1.0, 1.0))),
      Array(Array(Array(0.0, 0.0), Array(1.0, 0.0)),
            Array(Array(0.0, 1.0), Array(1.0, 1.0))))
    for (edgeA <- H(0); edgeB <- H(1)) {
      var foulisRandallEdge = Array[Array[Double]]()
      for (vertexA <- edgeA; vertexB <- edgeB) {
        foulisRandallEdge ++= Array(Array(
          vertexA(0), vertexB(0), vertexA(1), vertexB(1)))
      }
      foulisRandallEdges ++= Array(foulisRandallEdge)
    }
    for (mc <- 0 to 1) {
      val mcI = 1 - mc
      for (edge <- H(mc); j <- 0 to 1) {
        var foulisRandallEdge = Array[Array[Double]]()
        for (i <- edge.indices) {
          val edgeB = H(mcI)(i)
          val vertexA = edge(Math.abs(i - j))
          val vertexB = edgeB(0)
          val vertexC = edgeB(1)
          val verticesA = Array(vertexA(0), vertexB(0), vertexA(1), vertexB(1))
          val verticesB = Array(vertexA(0), vertexC(0), vertexA(1), vertexC(1))
          foulisRandallEdge ++= Array(Array(verticesA(mc), verticesA(mcI),
            verticesA(mc + 2), verticesA(mcI + 2)))
          foulisRandallEdge ++= Array(Array(verticesB(mc), verticesB(mcI),
            verticesB(mc + 2), verticesB(mcI + 2)))
        }
        foulisRandallEdges ++= Array(foulisRandallEdge)
      }
    }
    foulisRandallEdges
  }

  def GenerateGlobalDistribution(
      constraints: Array[Array[Array[Array[Double]]]], n: Int): Array[Double] = {
    val hyperedges = FoulisRandallProduct()
    val hyperedgesTallies = Array.fill(12)(0.0)
    val globalDistribution = Array.fill(16)(0.0)
    val model = new Model
    val algorithm = MetropolisHastings(
      ProposalScheme.default, model.a, model.b, model.x, model.y, model.c)
    algorithm.start()
    val a = algorithm.sampleFromPosterior(model.a).take(n).toArray
    val b = algorithm.sampleFromPosterior(model.b).take(n).toArray
    val x = algorithm.sampleFromPosterior(model.x).take(n).toArray
    val y = algorithm.sampleFromPosterior(model.y).take(n).toArray
    val c = algorithm.sampleFromPosterior(model.c).take(n).toArray
    algorithm.stop()
    algorithm.kill()
    for (i <- 0 until n) {
      val x_x = x(i).toInt
      val y_y = y(i).toInt
      val a_a = a(i).toInt
      val b_b = b(i).toInt
      if (c(i) < constraints(x_x)(y_y)(a_a)(b_b)) {
        for (edge <- GetHyperedges(hyperedges,
            Array(a_a, b_b, x_x, y_y).map(_.toDouble))) {
          hyperedgesTallies(edge) += 1
        }
        globalDistribution(GetVertex(a_a, b_b, x_x, y_y)) += 1
      }
    }
    for (a <- 0 to 1; b <- 0 to 1; x <- 0 to 1; y <- 0 to 1) {
      var summedAmount = 0.0
      for (edgeIndex <- GetHyperedges(hyperedges,
          Array(a.toDouble, b.toDouble, x.toDouble, y.toDouble))) {
        summedAmount += hyperedgesTallies(edgeIndex)
      }
      globalDistribution(GetVertex(a, b, x, y)) =
        globalDistribution(GetVertex(a, b, x, y)) / summedAmount
    }
    globalDistribution.map(_ * 3)
  }
}
\end{lstlisting}
\subsubsection{Pyro}
Pyro's context management is integrated into Python's \verb|def| containers, or can be left implicit, encouraging the use of stochastic functions to specify probabilistic models. Inside a container, the probabilities of the random variables are specified first. As Pyro is built upon PyTorch, explicit values take PyTorch types, in this case \verb|Tensor| values.
As can be seen in the \verb|generate_global_distribution| method of Listing~\ref{fig:pyro_implementation}, Pyro's atomic sampling capabilities mean that fewer syntactic constructs are needed to express the sampling process. Each \verb|sample| call accepts a distribution, in this case either \verb|Bernoulli| or \verb|Uniform|. Upon sampling, the outcomes are tallied and normalised before the result is presented.
\begin{lstlisting}[language=python,caption={Pyro Implementation Of EPR Simulation},captionpos=b,label={fig:pyro_implementation}]
from pyro import sample
from torch import Tensor
from torch.autograd import Variable
from numpy import zeros, array, fliplr, sum
from itertools import product
from pyro.distributions import Bernoulli, Uniform
def get_vertex(a, b, x, y):
return ((x*8)+(y*4))+(b+(a*2))
def get_hyperedges(H, n):
l = []
for idx, e in enumerate(H):
if n in e:
l.append(idx)
return l
def foulis_randall_product():
fr_edges = []
H = [ [[[0, 0], [1, 0]], [[0, 1], [1, 1]]],
[[[0, 0], [1, 0]], [[0, 1], [1, 1]]]]
for edge_a in H[0]:
for edge_b in H[1]:
fr_edge = []
for vertex_a in edge_a:
for vertex_b in edge_b:
fr_edge.append([
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]])
fr_edges.append(fr_edge)
for mc in range(0,2):
mc_i = abs(1-mc)
for edge in H[mc]:
for j in range(0,2):
fr_edge = []
for i in range(0, len(edge)):
edge_b = H[mc_i][i]
vertex_a = edge[abs(i-j)]
vertex_b = edge_b[0]
vertex_c = edge_b[1]
vertices_a = [
vertex_a[0], vertex_b[0],
vertex_a[1], vertex_b[1]]
vertices_b = [
vertex_a[0], vertex_c[0],
vertex_a[1], vertex_c[1]]
fr_edge.append([
vertices_a[mc], vertices_a[mc_i],
vertices_a[mc+2], vertices_a[mc_i+2]]
)
fr_edge.append([
vertices_b[mc], vertices_b[mc_i],
vertices_b[mc+2], vertices_b[mc_i+2]])
fr_edges.append(fr_edge)
return fr_edges
def generate_global_distribution(constraints,N):
hyperedges = foulis_randall_product()
hyperedges_tallies = zeros(12)
global_distribution = zeros(16)
while sum(global_distribution) < N:
a = int(sample('A', Bernoulli(Variable(Tensor([0.5])))))
b = int(sample('B', Bernoulli(Variable(Tensor([0.5])))))
x = int(sample('X', Bernoulli(Variable(Tensor([0.5])))))
y = int(sample('Y', Bernoulli(Variable(Tensor([0.5])))))
value = float(sample('C', Uniform(Variable(Tensor([0.0])),
Variable(Tensor([1.0])))))
if (value < constraints[x][y][a,b]):
for edge in get_hyperedges(hyperedges, [a, b, x, y]):
hyperedges_tallies[edge] += 1
global_distribution[get_vertex(a, b, x, y)] += 1
z = [0,1]
for a, b, x, y in product(z,z,z,z):
summed_tally = (sum(hyperedges_tallies[e]
for e in get_hyperedges(hyperedges, [a, b, x, y])))
global_distribution[get_vertex(a, b, x, y)] /= summed_tally
global_distribution *= 3
return global_distribution
\end{lstlisting}
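As a quick check of the construction (not part of the original listing, but runnable against its definitions), the FR product of the two binary measurement scenarios should contain twelve hyperedges of four vertices each, and \verb|get_vertex| should enumerate the sixteen joint outcomes:
\begin{lstlisting}[language=python]
# Sanity check: 12 hyperedges of 4 vertices each, and get_vertex maps the
# 16 joint outcomes bijectively onto the indices 0..15.
edges = foulis_randall_product()
assert len(edges) == 12
assert all(len(edge) == 4 for edge in edges)
assert sorted(get_vertex(a, b, x, y)
              for a in (0, 1) for b in (0, 1)
              for x in (0, 1) for y in (0, 1)) == list(range(16))
\end{lstlisting}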
\subsection{Input correlations for sampling}
In order to provide flexibility in investigating simulations of quantum and super-quantum correlations,
correlations between $A$ and $B$ in the four measurement contexts are specified.
For example, the code fragments in Listings~\ref{fig:pyro_implementation_constraints}, \ref{fig:turing_implementation_constraints} and \ref{fig:figaro_implementation_constraints} specify that super-quantum correlations will be simulated by specifying $A$ and $B$ to be maximally correlated in three measurement contexts and maximally anti-correlated in the fourth.
With these input correlations, maximum violation of the CHSH inequalities would be expected, essentially simulating a PR box~\cite{popescu1994quantum}.
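For reference, the maximal violation alluded to here corresponds to the standard CHSH bounds; writing $E_{xy}$ for the correlator of $A$ and $B$ in the context $(x,y)$ (a notation introduced only for this remark), one has
\begin{equation*}
S = E_{00} + E_{01} + E_{10} - E_{11}, \qquad |S| \le 2 \;\;\text{(classical)}, \qquad |S| \le 2\sqrt{2} \;\;\text{(quantum)}, \qquad |S| = 4 \;\;\text{(PR box)},
\end{equation*}
and the input correlations above give $E_{00}=E_{01}=E_{10}=+1$ and $E_{11}=-1$, hence $S=4$.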
\begin{lstlisting}[language=python,caption={Implementation of input correlations in PYMC3 And Pyro},captionpos=b,label={fig:pyro_implementation_constraints}]
constraints = [[ array([[0.5, 0], [0., 0.5]]),
array([[0.5, 0], [0., 0.5]]) ],
[ array([[0.5, 0], [0., 0.5]]),
array([[0, 0.5], [0.5, 0.]]) ]]
p = generate_global_distribution(constraints, 5000)
\end{lstlisting}
\begin{lstlisting}[language=java,caption={Implementation of input correlations in Turing.jl},captionpos=b,label={fig:turing_implementation_constraints}]
constraints = [[ [[0.5, 0.0], [0.0, 0.5]],
                 [[0.5, 0.0], [0.0, 0.5]] ],
               [ [[0.5, 0.0], [0.0, 0.5]],
                 [[0.0, 0.5], [0.5, 0.0]] ]]
p = generate_global_distribution(constraints, 5000)
\end{lstlisting}
\begin{lstlisting}[language=java,caption={Implementation of input correlations in Figaro},captionpos=b,label={fig:figaro_implementation_constraints}]
val constraints = Array(
  Array( Array(Array(0.5, 0.0), Array(0.0, 0.5)),
         Array(Array(0.5, 0.0), Array(0.0, 0.5)) ),
  Array( Array(Array(0.5, 0.0), Array(0.0, 0.5)),
         Array(Array(0.0, 0.5), Array(0.5, 0.0)) ))
val p = GenerateGlobalDistribution(constraints, 5000)
\end{lstlisting}
\subsection{Specifying the CHSH inequalities}
For each of the four PPLs, code implementing the system of four CHSH inequalities is given in Listings~\ref{fig:pymc_pyro_chsh}, \ref{fig:turing_chsh} and \ref{fig:figaro_chsh}.
The outcome is a vector of four Boolean values, each expressing whether the respective inequality is satisfied by the sampled global distribution.
\subsubsection{PYMC3 And Pyro}
\begin{lstlisting}[language=python,caption={Specification of the CHSH inequalities in PYMC3 And Pyro},captionpos=b,label={fig:pymc_pyro_chsh}]
def equality(v1,v2,v3,v4):
def f1(v1,v2):
return abs((2 * (p[v1] + p[v2])) - 1)
def f2(v1,v2,v3,v4):
return (p[v1] + p[v2]) - (p[v3] + p[v4])
delta = 0.5 * (
(f1(0,1) - f1(4,5)) + (f1(8,9) - f1(12,13)) +
(f1(0,2) - f1(4,6)) + (f1(8,10) - f1(12,14)))
return 2 * (1 + delta) >= abs(
(v1*f2(0,3,1,2)) + (v2*f2(4,7,5,6)) +
(v3*f2(8,11,9,10)) + (v4*f2(12,15,13,14)))
tests = [
equality(1,1,1,-1),
equality(1,1,-1,1),
equality(1,-1,1,1),
equality(-1,1,1,1)]
\end{lstlisting}
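Assuming the sampled global distribution closely reproduces the PR-box correlators specified above, only the first of the four combinations should exceed its bound; a minimal check of the expected output is sketched below (each Boolean is \verb|True| when the corresponding inequality holds).
\begin{lstlisting}[language=python]
# Expected behaviour for the PR-box input above: p encodes
# E00 = E01 = E10 = +1 and E11 = -1, so only the first combination
# exceeds its bound (|E00 + E01 + E10 - E11| = 4 > 2).
print(tests)  # approximately [False, True, True, True]
\end{lstlisting}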
\subsubsection{Turing.jl}
\begin{lstlisting}[language=java,caption={Specification of the CHSH inequalities in Turing.jl},captionpos=b,label={fig:turing_chsh}]
# Indices are shifted by one relative to the Python listing
# because Julia arrays are 1-based.
function equality(v1, v2, v3, v4)
    f1(v1, v2) = abs((2 * (p[v1] + p[v2])) - 1)
    f2(v1, v2, v3, v4) = (p[v1] + p[v2]) - (p[v3] + p[v4])
    delta = 0.5 * (
        (f1(1,2) - f1(5,6)) + (f1(9,10) - f1(13,14)) +
        (f1(1,3) - f1(5,7)) + (f1(9,11) - f1(13,15)))
    return 2 * (1 + delta) >= abs(
        (v1*f2(1,4,2,3)) + (v2*f2(5,8,6,7)) +
        (v3*f2(9,12,10,11)) + (v4*f2(13,16,14,15)))
end
tests = [
    equality(1,1,1,-1),
    equality(1,1,-1,1),
    equality(1,-1,1,1),
    equality(-1,1,1,1)]
\end{lstlisting}
\subsubsection{Figaro}
\begin{lstlisting}[language=java,caption={Specification of the CHSH inequalities in Figaro},captionpos=b,label={fig:figaro_chsh}]
def Equality(v1: Int, v2: Int, v3: Int, v4: Int): Boolean = {
  def f1(v1: Int, v2: Int): Double =
    Math.abs((2 * (p(v1) + p(v2))) - 1)
  def f2(v1: Int, v2: Int, v3: Int, v4: Int): Double =
    (p(v1) + p(v2)) - (p(v3) + p(v4))
  val delta = 0.5 * (
    (f1(0,1) - f1(4,5)) + (f1(8,9) - f1(12,13)) +
    (f1(0,2) - f1(4,6)) + (f1(8,10) - f1(12,14)))
  2 * (1 + delta) >= Math.abs(
    (v1*f2(0,3,1,2)) + (v2*f2(4,7,5,6)) +
    (v3*f2(8,11,9,10)) + (v4*f2(12,15,13,14)))
}
val tests = Array(
  Equality(1,1,1,-1),
  Equality(1,1,-1,1),
  Equality(1,-1,1,1),
  Equality(-1,1,1,1)
)
\end{lstlisting}
Upon conducting simulations using the input correlations given above, the predicted maximum violation of the CHSH inequalities was observed for all four PPLs.
\section{Numerical Evaluation}
We conducted several experiments to compare the numerical accuracy and execution time of the different implementations.
\subsection{Accuracy Of Tests}
For all PPLs, the statistical outputs confirm the success of the FR product (and consequently the no-signalling condition) within an acceptable margin of error. While more than one factor contributes to this, the largest contributor to accuracy is perceived to be the computation of random values in each PPL. What can be observed is that, with larger sample sizes (and hence more nearly uniform random draws), the margin of error decreases, as can be seen below. This is expected, as the experiment design's normalisation process depends on an even spread of tallies across the global distribution and, more specifically, on the degree to which the sampler is random. It should also be considered that the various PPLs use data types that round values, with a corresponding loss of statistical precision. For example, while a single value may lose only a minute portion beyond the decimal point (due to automatic rounding), when a handful of such values are calculated per iteration over some few thousand iterations, the difference becomes observable.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=13cm]{Accuracy.png}
\caption{Accuracy Of EPR Experimentation}\label{fig:tests_accuracy}
\end{figure}
From the results in Fig.~\ref{fig:tests_accuracy}, it can be seen immediately that within the first 20,000 iterations all PPLs exhibit substantial noise, so this window of results should be discounted. In Monte Carlo inference, noise of this kind is common, and discarding `burn-in' iterations at the beginning of the sampling process can reduce the unpredictability of the results. It is also seen that Figaro and Turing.jl have consistently greater margins of error than PyMC3 and Pyro; overcoming this error would require improved floating-point precision where applicable in either PPL's host language. It cannot be said which of PyMC3 or Pyro displays the more accurate results. Of interest, where other PPLs may require multiple instances of the entire sampling process to tally the global distribution, Pyro can accumulate the global distribution atomically, which improves its accuracy.
\subsection{Elapsed Time Of Execution}
Another statistic that has been observed is the execution time of each PPL, which typically increases with the number of iterations. To put the results in context, it should be noted that all non-accelerated tests were executed within a bash terminal on a Macintosh operating system with a 45nm ``Penryn" 2.4 GHz Intel ``Core 2 Duo" processor and 4 GB of SDRAM, whereas all accelerated tests were executed within a bash terminal of Amazon Web Services Linux (2nd distribution). The `Elastic Compute Cloud' instance on which the Linux distribution executed was a 2018 ``p2.xlarge" with a 2.7 GHz Intel Broadwell processor and 61 GB of SDRAM; the instance also provides an NVIDIA GK210 GPU (with 4 vCPUs) and 12 GB of GPU RAM.
\begin{figure}[h]
\captionsetup{justification=centering}
\centering
\includegraphics[width=13cm]{Time.png}
\caption{Execution time of EPR Experimentation}\label{fig:tests_time}
\end{figure}
In Fig.~\ref{fig:tests_time}, it can be seen that among the accelerated results the fastest PPL is Figaro, by almost an order of magnitude. Pyro and Turing.jl are tied in second position, although Pyro's timings are the more stable. Despite the Theano architecture's use of the supplied GPU, PyMC3 is the most affected by the size of the experimental setup. When the Theano architecture is not accelerated, PyMC3's performance decreases drastically. For comparison, Pyro was also tested on a non-accelerated architecture, where the difference in performance is considerably smaller than for PyMC3. This suggests that PyMC3 should not be applied in non-accelerated environments. In all cases, the PPLs exhibit a linear order of growth in the given scenario.
\section{Discussion}
Recall that the challenge posed was how to develop a probabilistic program which can simulate quantum correlations in an EPR experiment.
The solution adopted was to program a hypergraph formalism to underpin the simulations.
This formalism is modular where the FR product of the modules is used to impose the no-signalling constraint.
In execution, all four PPLs successfully simulated an EPR experiment producing quantum correlations.
Therefore, we conclude that the hypergraph formalism has been shown to be a promising basis for such simulations.
In addition, the hypergraph formalism is also rendered into program syntax in a fairly straightforward way.
However, the formalism does pose a challenge with respect to the accessibility criterion of the PPLs.
The challenge is due to an inherent ambiguity present in the composite hypergraph produced by the FR product, which has 12 edges in the EPR experiment. Four of those edges represent ``actual" measurement contexts (depicted in Figure \ref{fig:p16}), whilst the remaining eight edges impose the no-signalling condition. The hypergraph formalism is agnostic to this distinction, which is important to distinguish when designing and programming simulations.
On the other hand, the strength of the hypergraph formalism is its flexibility and modularity. In particular, modularity offers the potential to cover a wide variety of experimental designs whilst at the same time offering a conceptually simple route to program specification. We have shown that the EPR experiment is based on two modules which represent measurements on the individual systems $A$ and $B$.
More generally, joint measurements on multiple systems, and the constraints they must satisfy can then be expressed in terms of a composition operator that combines the modules into a suitable global data structure in the program, which underpins both the sampling and simulation.
With regard to sampling, an immediate suggestion is stronger control-flow integration in the sampling process. Rather than repeatedly generating distributions that are indexed for random results, all PPLs should offer atomic sampling similar to the likes of Pyro or Figaro, where single values could be observed, or returned from a \verb|Stream| primitive in a single procedure. As a sampling process contains variables that are akin to those of iterative loops, it may also serve PPLs to re-imagine the sampling process as a paradigm of the contained programming language, rather than as a single procedure that operates in isolation of the entire program. Providing control over each iteration of the distribution could also improve the legibility of the program, while minimising convolution in the procedures that typically come afterwards.
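As a minimal sketch of the distinction drawn here (written with Pyro only because its primitives already appear above; the variable names are purely illustrative), compare indexing a pre-generated batch with drawing one value per control-flow step:
\begin{lstlisting}[language=python]
import torch
from pyro import sample
from pyro.distributions import Bernoulli

# Trace-style access: draw a whole batch up front, then index into it.
trace = Bernoulli(torch.tensor(0.5)).sample((5000,))
tally_batch = int(trace.sum())

# Atomic access: each iteration draws only the value it needs, so ordinary
# Python control flow wraps every individual draw.
tally_atomic = 0
for i in range(5000):
    tally_atomic += int(sample("A_{}".format(i), Bernoulli(torch.tensor(0.5))))
\end{lstlisting}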
In terms of the qualitative comparison between the four PPLs, Figaro demonstrated the most benefits for general usage. This could be seen in its capacity to deliver specialised features where other PPLs could not. Where the alternatives do provide comparable features, they can match Figaro appropriately: PyMC3, for instance, offers more configurations for the distributions described in probabilistic models, and Pyro achieves control-flow independence with fewer syntactic constructs. Such arguments are outweighed, however, by the efforts that the main developer, Charles River Analytics, has made to ensure that its PPL is competitively implemented in wider applications. In terms of accessibility, Figaro's origins in Scala do not present the same benefits as Pyro for agile development; however, the simulation of quantum correlations does not demand such a workflow. Furthermore, the effect of acceleration was observed to be minimal (PyMC3 excepted) for this experimentation, so it is not perceived to be a determining factor.
In experimentation, we found that Pyro provided the syntactic constructs needed to describe its processes neatly, in fewer procedures than the others. While PyMC3's origins in Python also make it an expressive alternative, the excessive nomenclature surrounding the declarations of methods and data types in both Turing.jl and Figaro convoluted their descriptions. While Figaro's accuracy is inferior in comparison to the other PPLs, it could be argued that the sampled iterations describe an experimental setting in which no attempt was made to improve floating-point precision. Coupled with the trend of Figaro's accuracy improving with the number of iterations, the accuracy of the PPLs may well converge. The same cannot be said for the time complexity of the EPR experimentation, where PyMC3's execution time was observed to grow substantially with the number of sample iterations. Still, in instances where accuracy is a key factor and limitations are perceived in Pyro's functionality, PyMC3 would be the suitable alternative.
\section{Conclusion}
Probabilistic programming offers new possibilities for quantum physicists to specify and simulate experiments, such as the EPR experiment illustrated in this article.
This is particularly relevant for experiments requiring advanced statistical inference on unknown parameters, especially in the case of techniques that involve large amounts of data and computational time.
Furthermore, probabilistic machine learning models that are conveniently expressed in probabilistic programming languages can advance our understanding of the underlying physics of the experiments.
It is important to note that the benefits of probabilistic programming are not restricted to experiments involving the analysis of quantum correlations.
Since any probabilistic programming language is based on random variables, we can ask what exactly a random variable is in quantum physics.
Focusing on a single measurement context, due to the normalization constraint, we can think of the measurement context as a (conditional) probability distribution over random variables, which describes the measurement outcomes.
The probability distribution is a normalized measure over the sigma algebra defined by the outcomes.
This measure is defined via Born's rule, that is, the quantum state is embedded in the measure.
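Explicitly, for a context whose outcomes are described by measurement operators $\{E_o\}$ acting on the prepared state $\rho$ (notation introduced only for this remark), Born's rule reads
\begin{equation*}
p(o) = \operatorname{Tr}\left(\rho\, E_o\right), \qquad \sum_o E_o = \mathbb{1},
\end{equation*}
so the normalisation of the measure is inherited from the completeness of the measurement operators.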
An EPR experiment is essentially a state preparation protocol where deterministic operations are embedded in the quantum state (the unitary operations leading to its preparation), followed by the measurement, which results in stochastic outcomes.
More generally, we can think of a larger system where we only measure a subsystem.
This leads to quantum channels, which are described by completely positive trace preserving maps.
A quantum channel, however, must be deterministic in the sequence of events, and, for instance, a measurement choice at a later time step cannot depend on the outcome of a previous measurement.
We must factor in such classical and quantum memory effects, as well as the potentially indeterminate causal order of events.
The quantum comb~\cite{chiribella2008quantum,chiribella2009theoretical} or process matrix~\cite{oreshkov2012quantum,pollock2018complete,pollock2018operational} formalism addresses these more generic requirements.
Either formalism introduces a generalized Born's rule, where deterministic and stochastic parts of the system clearly separate, and thus give a clear way of defining random variables.
Probabilistic programming offers potential in expressing models designed in these frameworks.
\section{Acknowledgements}
This research was supported by the Asian Office of Aerospace Research and Development (AOARD) grant: FA2386-17-1-4016. This research was supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.
\section{INTRODUCTION}
In our current understanding of pulsar magnetospheres and radiation mechanisms, strongly magnetized rotating neutron stars play a central role. The underlying plasma processes like particle acceleration, pair creation and pulsed emission profiles throughout the whole electromagnetic spectrum strongly depend on the peculiar magnetic field geometry and strength adopted or extracted from numerical simulations of the magnetosphere. For instance, radio emission is believed to emanate from the polar caps, therefore from regions of strong gravity where curvature and frame-dragging effects are considerable due to the high compactness of neutron stars, $\Xi = R_{\rm s}/R \approx 0.5$ for typical models, where $M$ is the mass, $R$ the radius and $R_{\rm s}=2\,G\,M/c^2$ the Schwarzschild radius, $G$ being the gravitational constant and $c$ the speed of light. Detailed quantitative analyses of radio pulse polarization and pair cascade dynamics could greatly benefit from a better quantitative description of the electromagnetic field around the polar caps. Although there exists an extensive literature on flat space-time electrodynamics, only little work has been done to include general-relativistic effects.
The first general solution for an oblique rotator in flat vacuum space-time was found by~\cite{1955AnAp...18....1D} with closed analytical formulas. This solution is often quoted to explain the magnetic dipole radiation losses. To be truly exact, we emphasize that the Poynting flux~$L_{\rm sd}$ derived from his solution does not strictly coincide with the point dipole losses~$L_{\rm dipole}$ but depends on the ratio~$R/r_{\rm L}$, where $r_{\rm L}=c/\Omega$ is the light cylinder radius and $\Omega$ the rotation rate of the neutron star. It is only equal to the textbook equation for dipole losses in the limit of vanishing radius $\lim\limits_{R\to0} L_{\rm sd} = L_{\rm dipole}$. The distinction is meaningful at least for checking results emanating from numerical computations. Indeed, because of limited computer resources, we are often forced to take ratios~$R/r_{\rm L}\lesssim1$ not completely negligible compared to unity. Therefore the computed spin-down luminosity can significantly deviate from the point dipole losses. Moreover, \cite{1974AnPhy..87..244C} showed in the case of an aligned rotator that the electric field induced by frame-dragging effects could be as high as the one induced by the stellar rotation itself. These results were extended to an oblique rotator a few years later by~\cite{1980Ap&SS..70..295C} thanks to a formalism developed earlier by~\cite{1974PhLA...47..261C, 1974PhRvD..10.1070C, 1975PhLA...54....5C}. It is therefore crucial to treat Maxwell equations in the general-relativistic framework in order to analyse quantitatively acceleration and radiation in the vicinity of the neutron star. This led~\cite{1976GReGr...7..459P} to seek for an approximate solution of Maxwell equations in a curved space-time either described by the Schwarzschild metric or by the Kerr metric, using a linearised approach employing the Newman-Penrose formalism. He computed the structure of the electromagnetic waves propagating in vacuum and launched by a rotating dipole. He also gave an expression for the Poynting flux~$\dot{E}$ depending on the ratio~$R/r_{\rm L}$. The exact analytical solution for the static magnetic dipole in Schwarzschild space-time was given by \cite{1964ZETF...47..1030G, 1974PhRvD..10.3166P} and extended to multipoles by \cite{1970Ap&SS...9..146A}.
\cite{1992MNRAS.255...61M} also studied the influence of space-time curvature and frame dragging effects on the electric field around the polar caps of a pulsar and confirmed the earlier claims of an increase in its strength. \cite{1995ApJ...449..224S} computed the electric field for an aligned rotator in vacuum in the Schwarzschild metric. The aligned rotator has also been investigated by \cite{2000PThPh.104.1117K} with special emphasis on particle acceleration in vacuum. \cite{1997ApJ...485..735M} and \cite{2003ApJ...584..427S} took a similar approach to study the acceleration of particles around polar caps. \cite{2001MNRAS.322..723R, 2002MNRAS.331..376Z, 2004MNRAS.352.1161R} computed the electromagnetic field in the exterior of a slowly rotating neutron star in the slow rotation metric as well as inside the star and investigated the impact of oscillations. They gave approximate analytical expressions for the external electromagnetic field close to the neutron star. \cite{2004MNRAS.348.1388K} extended the previous work by solving numerically the equations for the oblique rotator in vacuum in general relativity. They retrieve the results of \cite{2001MNRAS.322..723R} close to the surface and the Deutsch solution for distances larger than the light cylinder~$r\gg r_{\rm L}$.
It is the purpose of this paper to elucidate quantitatively and accurately some aspects of general-relativistic effects on the electrodynamics close to the neutron star. Our goal is to derive a general formalism to compute the solution of Maxwell equations in curved space-time for any multipole component of the magnetic field. Consequently, we use a 3+1 formalism of electrodynamics in curved space-time as presented in~\S\ref{sec:Modele}. Next we show how to solve for the electromagnetic field for an aligned rotator in~\S\ref{sec:Aligne}. This method is easily extended to a perpendicular rotator as explained in~\S\ref{sec:Orthogonal}. Because Maxwell equations in vacuum are linear, the most general solution for an oblique rotator will be a linear superposition of the weighted aligned and perpendicular rotator. Conclusions and future possible work are drawn in~\S\ref{sec:Conclusion}.
\section{The 3+1 formalism}
\label{sec:Modele}
The covariant formulation of the gravitational and electromagnetic fields in general relativity is the natural way to write them down in a frame-independent manner. Nevertheless, it is more intuitive to split space-time into an absolute space and a universal time, similar to our everyday three-dimensional space, rather than to use the full four-dimensional formalism. Another important advantage of a 3+1 split is a straightforward transcription of flat-space techniques for scalar, vector and tensor fields to curved spaces. We start with a description of the special foliation used for the metric. Next we derive Maxwell equations in this foliation and conclude with some words about force-free electrodynamics, which will be treated in another work; for completeness we already give the useful expressions in this paper.
\subsection{The split of the space-time metric}
We therefore split the four dimensional space-time into a 3+1~foliation such that the metric~$g_{ik}$ can be expressed as
\begin{equation}
\label{eq:metrique}
ds^2 = g_{ik} \, dx^i \, dx^k = \alpha^2 \, c^2 \, dt^2 - \gamma_{ab} \, ( dx^a + \beta^a \, c\,dt ) \, (dx^b + \beta^b \, c\,dt )
\end{equation}
where $x^i = (c\,t,x^a)$, $t$ is the time coordinate or universal time and $x^a$ some associated space coordinates. We use the Landau-Lifschitz convention for the metric signature given by $(+,-,-,-)$ \citep{LandauLifchitzTome2}. $\alpha$ is the lapse function, $\beta^a$ the shift vector and $\gamma_{ab}$ the spatial metric of absolute space. By convention, latin letters from $a$ to $h$ are used for the components of vectors in absolute space (in the range~$\{1,2,3\}$) whereas latin letters starting from $i$ are used for four dimensional vectors and tensors (in the range~$\{0,1,2,3\}$). Our derivation of the 3+1 equations follow the method outlined by \cite{2011MNRAS.418L..94K}. A fiducial observer (FIDO) is defined by its 4-velocity~$n^i$ such that
\begin{subequations}
\begin{align}
n^i & = \frac{dx^i}{d\tau} = \frac{c}{\alpha} \, ( 1, - \mathbf \beta) \\
n_i & = (\alpha \, c, \mathbf 0)
\end{align}
\end{subequations}
This vector is orthogonal to the hyper-surface of constant time coordinate~$\varSigma_t$. Its proper time~$\tau$ is measured according to
\begin{equation}
d\tau = \alpha\,dt
\end{equation}
The relation between the determinants of the space-time metric~$g$ and the pure spatial metric~$\gamma$ is given by
\begin{equation}
\sqrt{-g} = \alpha \, \sqrt{\gamma}
\end{equation}
For a slowly rotating neutron star, the lapse function is
\begin{equation}
\label{eq:Lapse}
\alpha = \sqrt{ 1 - \frac{R_s}{r} }
\end{equation}
and the shift vector
\begin{subequations}
\begin{align}
\label{eq:Shift}
c \, \mathbf \beta = & - \omega \, r \, \sin\vartheta \, \mathbf{e}_\varphi \\
\omega = & \frac{R_s\,a\,c}{r^3}
\end{align}
\end{subequations}
We use spherical coordinates~$(r,\vartheta,\varphi)$ and an orthonormal spatial basis~$(\mathbf{e}_{\rm r},\mathbf{e}_\vartheta,\mathbf{e}_\varphi)$. The spin~$a$ is related to the angular momentum~$J$ by $J=M\,a\,c$. It follows that $a$ has units of a length and should satisfy $a \leq R_s/2$. Introducing the moment of inertia~$I$, we also have $J=I\,\Omega$. For the remainder of the paper, it is also convenient to introduce the relative rotation of the neutron star according to
\begin{equation}
\tilde{\omega} = \Omega - \omega
\end{equation}
In the special case of a homogeneous and uniform neutron star interior with spherical symmetry, the moment of inertia is
\begin{equation}
\label{eq:Inertie}
I = \frac{2}{5} \, M \, R^2
\end{equation}
Thus the spin parameter can be expressed as
\begin{equation}
\label{eq:spin}
\frac{a}{R_s} = \frac{2}{5} \, \frac{R}{R_s} \, \frac{R}{r_{\rm L}}
\end{equation}
We adopt this simplification for the neutron star interior in order to compute the spin parameter~$a$.
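To give an order of magnitude (the numbers below assume a canonical star with $M=1.4\,M_\odot$, $R=12$~km and a period of $10$~ms, and are meant only as a rough guide), eq.~(\ref{eq:spin}) gives
\begin{equation*}
R_s \approx 4.1~\mathrm{km}, \qquad r_{\rm L} = \frac{c\,P}{2\,\pi} \approx 4.8\times10^{2}~\mathrm{km}, \qquad \frac{a}{R_s} = \frac{2}{5}\,\frac{R}{R_s}\,\frac{R}{r_{\rm L}} \approx 3\times10^{-2},
\end{equation*}
that is $a \approx 0.12$~km, comfortably below the bound $a \le R_s/2$ quoted above.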
\subsection{Maxwell equations}
Let $F^{ik}$ and ${^*F}^{ik}$ be the electromagnetic tensor and its dual respectively, see appendix~\ref{app:metric}. It is useful to introduce the following spatial vectors $(\mathbf B, \mathbf E, \mathbf D, \mathbf H)$ such that
\begin{subequations}
\label{eq:BDEH}
\begin{align}
B^a & = \alpha \, {^*F}^{a0} \\
E_a & = \frac{\alpha}{2} \, e_{abc} \, c \, {^*F}^{bc} \\
D^a & = \varepsilon_0 \, c \, \alpha \, F^{a0} \\
H_a & = - \frac{\alpha}{2\,\mu_0} \, e_{abc} \, F^{bc}
\end{align}
\end{subequations}
$\varepsilon_0$ is the vacuum permittivity and $\mu_0$ the vacuum permeability. $e_{abc} = \sqrt{\gamma} \, \varepsilon_{abc}$ is the fully antisymmetric spatial tensor and $\varepsilon_{abc}$ the three dimensional Levi-Civita symbol. The contravariant analog is $e^{abc} = \varepsilon^{abc}/\sqrt{\gamma}$. Relations eq.~(\ref{eq:BDEH}) can be inverted such that
\begin{subequations}
\begin{align}
{^*F}^{a0} & = \frac{B^a}{\alpha} \\
{^*F}^{ab} & = \frac{1}{c\,\alpha} \, e^{abc} \, E_c = \frac{1}{c\,\sqrt{-g}} \, \varepsilon^{abc} \, E_c \\
F^{a0} & = \frac{D^a}{\varepsilon_0 \, c \, \alpha} \\
F^{ab} & = - \frac{\mu_0}{\alpha} \, e^{abc} \, H_c = - \frac{\mu_0}{\sqrt{-g}} \, \varepsilon^{abc} \, H_c
\end{align}
\end{subequations}
These three dimensional vectors can be recast into
\begin{subequations}
\begin{align}
E_a & = c \, F_{0a} \\
H_a & = \frac{{^*F}_{0a}}{\mu_0} \\
B^a & = - \frac{1}{2} \, e^{abc} \, F_{bc} \\
D^a & = \frac{\varepsilon_0 \, c}{2} \, e^{abc} \, {^*F}_{bc}
\end{align}
\end{subequations}
These expressions are also easily inverted such that
\begin{subequations}
\begin{align}
F_{0a} & = \frac{E_a}{c} \\
{^*F}_{0a} & = \mu_0 \, H_a \\
F_{ab} & = - e_{abc} \, B^c = - \sqrt{\gamma} \, \varepsilon_{abc} \, B^c \\
{^*F}_{ab} & = \frac{e_{abc}}{\varepsilon_0\,c} \, D^c = \frac{\sqrt{\gamma}}{\varepsilon_0\,c} \, \varepsilon_{abc} \, D^c
\end{align}
\end{subequations}
All these antisymmetric tensors are summarized in appendix~\ref{app:metric}. With these definitions of the spatial vectors, Maxwell equations take a more traditional form in the curved three dimensional space. The system reads
\begin{subequations}
\begin{align}
\label{eq:Maxwell1}
\mathbf{\nabla}\cdot \mathbf B & = 0 \\
\label{eq:Maxwell2}
\mathbf{\nabla} \times \mathbf E & = - \frac{1}{\sqrt{\gamma}} \, \partial_t (\sqrt{\gamma} \, \mathbf B) \\
\label{eq:Maxwell3}
\mathbf{\nabla}\cdot \mathbf D & = \rho \\
\label{eq:Maxwell4}
\mathbf{\nabla} \times \mathbf H & = \mathbf J + \frac{1}{\sqrt{\gamma}} \, \partial_t (\sqrt{\gamma} \, \mathbf D)
\end{align}
\end{subequations}
The source terms $(\rho, \mathbf J)$ are given by
\begin{subequations}
\begin{align}
\rho \, c & \equiv \alpha \, I^0 \\
J^a & \equiv \alpha \, I^a
\end{align}
\end{subequations}
$I^k$ being the 4-current density. The above differential operators should be understood as defined in a three dimensional curved space, the absolute space with associated spatial metric~$\gamma_{ab}$, such that
\begin{subequations}
\begin{align}
\mathbf{\nabla}\cdot \mathbf B & \equiv \frac{1}{\sqrt{\gamma}} \, \partial_a (\sqrt{\gamma} \, B^a) \\
\mathbf{\nabla} \times \mathbf E & \equiv e^{abc} \, \partial_b \, E_c \\
\mathbf E \times \mathbf B & \equiv e^{abc} \, E_b \, B_c
\end{align}
\end{subequations}
The special case of a diagonal spatial metric is given in appendix~\ref{app:operateur}. The three dimensional vector fields are not independent, they are related by two important constitutive relations, namely
\begin{subequations}
\label{eq:Constitutive}
\begin{align}
\label{eq:ConstitutiveE}
\varepsilon_0 \, \mathbf E & = \alpha \, \mathbf D + \varepsilon_0\,c\,\mathbf\beta \times \mathbf B \\
\label{eq:ConstitutiveH}
\mu_0 \, \mathbf H & = \alpha \, \mathbf B - \frac{\mathbf\beta \times \mathbf D}{\varepsilon_0\,c}
\end{align}
\end{subequations}
The curvature of absolute space is taken into account by the lapse function factor~$\alpha$ in the first term on the right-hand side and the frame dragging effect is included in the second term, the cross-product between the shift vector~$\mathbf\beta$ and the fields. We see that $(\mathbf D, \mathbf B)$ are the fundamental fields, actually those measured by a FIDO, see below.
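As a quick consistency check, in the flat space-time limit $\alpha = 1$ and $\mathbf\beta = \mathbf 0$ the constitutive relations eq.~(\ref{eq:Constitutive}) reduce to
\begin{equation*}
\varepsilon_0 \, \mathbf E = \mathbf D , \qquad \mu_0 \, \mathbf H = \mathbf B ,
\end{equation*}
i.e. the usual vacuum relations, so all general-relativistic effects are contained in the lapse~$\alpha$ and the shift~$\mathbf\beta$.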
\subsection{Force-free conditions}
The source terms have not yet been specified. Having in mind to apply the above equations to the pulsar magnetosphere, we give the expressions for the current in the limit of a force-free plasma, neglecting inertia and pressure. The force-free condition in covariant form reads
\begin{equation}
F_{ik} \, I^k = 0
\end{equation}
and in the 3+1~formalism it becomes
\begin{subequations}
\begin{align}
\mathbf J \cdot \mathbf E & = 0 \\
\rho \, \mathbf E + \mathbf J \times \mathbf B & = \mathbf 0
\end{align}
\end{subequations}
which implies $\mathbf E \cdot \mathbf B = 0 $ and therefore also $\mathbf D \cdot \mathbf B = 0 $. As in the special relativistic case, the current density is found to be, see the derivation for instance in \cite{2011MNRAS.418L..94K}
\begin{equation}
\mathbf J = \rho \, \frac{\mathbf E \times \mathbf B}{B^2} + \frac{\mathbf B \cdot \mathbf{\nabla} \times \mathbf H - \mathbf D \cdot \mathbf{\nabla} \times \mathbf E}{B^2} \, \mathbf B
\end{equation}
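In the same flat space-time limit ($\alpha=1$, $\mathbf\beta=\mathbf 0$, hence $\mu_0\,\mathbf H = \mathbf B$ and $\mathbf D = \varepsilon_0\,\mathbf E$), this expression reduces to the familiar force-free current density
\begin{equation*}
\mathbf J = \rho \, \frac{\mathbf E \times \mathbf B}{B^2} + \frac{\mathbf B \cdot \mathbf{\nabla} \times \mathbf B / \mu_0 - \varepsilon_0 \, \mathbf E \cdot \mathbf{\nabla} \times \mathbf E}{B^2} \, \mathbf B .
\end{equation*}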
Because $c\,B^a={^*F}^{ak}\,n_k$ and $D^a/\varepsilon_0=F^{ak}\,n_k$, $\mathbf B$ and $\mathbf D/\varepsilon_0$ can be interpreted as the magnetic and electric field respectively as measured by the FIDO. Moreover
\begin{equation}
\label{eq:DensiteFIDO}
I^k \, n_k = \rho \, c^2
\end{equation}
thus $\rho$ is the electric charge density as measured by this same observer. Using the projection tensor defined by
\begin{equation}
\label{eq:Projection}
p_i^k = \delta_i^k - \frac{n_i\,n^k}{c^2}
\end{equation}
its electric current density~$\mathbf j$ is given by
\begin{equation}
\label{eq:CourantFIDO}
\alpha \, \mathbf j = \mathbf J + \rho \, c \, \mathbf \beta
\end{equation}
Maxwell equations~(\ref{eq:Maxwell1})-(\ref{eq:Maxwell4}), the constitutive relations (\ref{eq:ConstitutiveE}),(\ref{eq:ConstitutiveH}) and the prescription for the source terms set the background system to be solved for any prescribed metric. In the next section, we show how to solve this system in a simple way by introducing a vector spherical harmonic basis in curved space as summarized in appendix~\ref{app:HSV}.
For the remainder of this paper, we will focus only on the vacuum field solutions, leaving the force-free case for future work. Note that we choose to keep all physical constants in the formulas because this makes it easier to check the dimensional consistency of the equations.
\section{ELECTROMAGNETIC FIELD OF AN ALIGNED DIPOLE}
\label{sec:Aligne}
The system to be solved being linear, we treat separately the aligned and the perpendicular case, the general oblique configuration being a weighted linear superposition of both solutions. We first address the simple static and rotating aligned dipole magnetic field before investigating the interesting perpendicular rotator as a special case of an oblique rotator.
\subsection{Static dipole}
We start with a non rotating neutron star, setting the spin to zero, $a=0$, therefore $\mathbf \beta = \mathbf 0$, followed by a simplification of the constitutive relations. The electric field vanishes, thus $\mathbf E = \mathbf D = 0$ whereas $\mu_0 \, \mathbf H = \alpha \, \mathbf B$. As a consequence, the magnetic field satisfies the static ($\partial_t=0$) Maxwell equations given by
\begin{subequations}
\begin{align}
\label{eq:Static1}
\mathbf{\nabla}\cdot \mathbf B & = 0 \\
\label{eq:Static2}
\mathbf{\nabla} \times (\alpha \, \mathbf B ) & = 0
\end{align}
\end{subequations}
Far from the neutron star, we expect to retrieve the flat space-time expression for the dipole magnetic field with magnetic moment $\mathbf \mu$ or, written explicitly,
\begin{equation}
\label{eq:BdipolePlat}
\mathbf B = \frac{\mu_0}{4\,\pi\,r^3} \, \left[ \frac{3\,(\mathbf \mu \cdot \mathbf r) \, \mathbf r}{r^2} - \mathbf \mu \right] = - \frac{\mu_0\,\mu}{4\,\pi} \, \sqrt{\frac{8\,\pi}{3}} \, \mathrm{Re} \left[ \mathbf{\nabla} \times \frac{\mathbf \Phi_{1,0}}{r^2}\right]
\end{equation}
In curved space-time, the meaning of a dipole field needs to be explicitly defined. We take as a definition for the dipolar magnetic field the one which is expressed only with the first vector spherical harmonic $\mathbf \Phi_{1,0}$ corresponding to the mode $(l,m)=(1,0)$ according to its flat space-time expression Eq.~(\ref{eq:BdipolePlat}). This is valid for a symmetry around the $z$-axis because $m=0$. The perpendicular case or more generally the oblique rotator would include the mode $(l,m)=(1,1)$ for the dipolar field. This will be done in section~\ref{sec:Orthogonal}.
Thus we expand the magnetic field so that it is divergence-free by construction and look for a separable solution of the form
\begin{equation}
\label{eq:Bstatic1}
\mathbf B = \mathrm{Re} \left[ \mathbf{\nabla} \times ( f_{1,0}^B(r) \, \mathbf \Phi_{1,0} ) \right]
\end{equation}
with the boundary condition
\begin{equation}
\label{eq:Bstatic2}
\lim\limits_{r\to+\infty} f_{1,0}^B(r) = - \frac{\mu_0\,\mu}{4\,\pi\,r^2} \, \sqrt{\frac{8\,\pi}{3}}
\end{equation}
$\mathbf \Phi_{1,0}$ is a vector spherical harmonic, see for instance \cite{2012MNRAS.424..605P}. The vector spherical harmonics, being eigenfunctions of the curl operator, ensure that such separable solutions do indeed exist. These linear-algebra properties are absolutely fundamental and make vector spherical harmonics extremely useful for solving linear partial differential equations involving vector fields. Note that $f_{1,0}^B(r)$ is the unique unknown in this simple problem and depends only on the radial coordinate~$r$. Eq.~(\ref{eq:Static1}) is automatically satisfied by construction, whereas inserting the expansion eq.~(\ref{eq:Bstatic1}) into eq.~(\ref{eq:Static2}) and using the property eq.~(\ref{eq:RotRotPhi}) of appendix~\ref{app:HSV} (for $l=1$) leads to a second order linear ordinary differential equation for the scalar function~$f_{1,0}^B$, namely
\begin{equation}
\label{eq:Laplacef10}
\partial_r(\alpha^2\,\partial_r(r\,f_{1,0}^B)) - \frac{2}{r} \, f_{1,0}^B = 0
\end{equation}
The exact solution to this boundary problem which asymptotes to the flat dipole at large distances as prescribed by eq.~(\ref{eq:Bstatic2}) is given by
\begin{equation}
\label{eq:DipoleSchwarzf10}
f_{1,0}^{B({\rm dip})} = \frac{\mu_0\,\mu}{4\,\pi} \, \sqrt{\frac{8\,\pi}{3}} \, \frac{3\,r}{R_s^3} \, \left[ {\rm ln} \left( 1 - \frac{R_s}{r} \right) + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2}\right]
\end{equation}
which corresponds to the solution shown in~\cite{1964ZETF...47..1030G}. The non vanishing magnetic field components are
\begin{subequations}
\label{eq:MagneticStatic}
\begin{align}
\label{eq:MagneticStaticR}
B^{\hat r} & = - 6 \, \frac{\mu_0}{4\,\pi} \, \left[ {\rm ln} \left( 1 - \frac{R_s}{r} \right) + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2} \right] \, \frac{\mu\,\cos\vartheta}{R_s^3} \\
\label{eq:MagneticStaticT}
B^{\hat \vartheta} & = 3 \, \frac{\mu_0}{4\,\pi} \, \left[ 2 \, \sqrt{ 1 - \frac{R_s}{r}} \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + \frac{R_s}{r} \, \frac{2\,r-R_s}{\sqrt{r\,(r-R_s)}} \right] \, \frac{\mu\,\sin\vartheta}{R_s^3}
\end{align}
\end{subequations}
Corrections to first order compared to flat space-time are
\begin{subequations}
\begin{align}
\label{eq:Correction}
B^{\hat r} & = \frac{\mu_0}{4\,\pi} \, \frac{2\,\mu\,\cos\vartheta}{r^3} \, \left[ 1 + \frac{3}{4} \, \frac{R_s}{r} + o\left( \frac{R_s}{r} \right) \right] \\
B^{\hat \vartheta} & = \frac{\mu_0}{4\,\pi} \, \frac{\mu\,\sin\vartheta}{r^3} \, \left[ 1 + \frac{R_s}{r} + o\left( \frac{R_s}{r} \right) \right]
\end{align}
\end{subequations}
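The key step behind these expansions is the elementary series
\begin{equation*}
{\rm ln} \left( 1 - \frac{R_s}{r} \right) + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2} = - \frac{R_s^3}{3\,r^3} - \frac{R_s^4}{4\,r^4} + o\!\left(\frac{R_s^4}{r^4}\right),
\end{equation*}
which, inserted into eq.~(\ref{eq:MagneticStaticR}), gives the radial component above and confirms that the solution eq.~(\ref{eq:DipoleSchwarzf10}) satisfies the asymptotic condition eq.~(\ref{eq:Bstatic2}).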
This first example shows how easy it is to compute the solution once the expansion onto vector spherical harmonics has been performed, provided their properties and their action on linear differential operators are known.
\subsection{Rotating dipole}
Next we consider the more useful case of a rotating magnetic dipole with magnetic moment aligned to the rotation axis. Now the situation becomes much more involved. First, rotation induces an electric field and secondly frame dragging effects mix electric and magnetic fields through the constitutive relations eq.~(\ref{eq:Constitutive}). To demonstrate how our formalism works, we decided to split the task in two steps. First we neglect frame dragging effects and look solely for the induced electric field. In a second stage, we add frame dragging.
\subsubsection{A pedestrian way}
Frame dragging effects could become important and should be included. Nevertheless, before dealing with the most general expression including frame dragging, we think it is educational to introduce the reasoning by hand and work out a low order expansion explicitly without any frame dragging effect. This would be acceptable for sufficiently low rotation and we can in the first stage neglect the shift vector setting $\mathbf\beta=0$ as in the previous paragraph. Maxwell equations then become
\begin{subequations}
\begin{align}
\label{eq:Tournant1}
\mathbf{\nabla}\cdot \mathbf D & = 0 \\
\label{eq:Tournant2}
\mathbf{\nabla} \times (\alpha \, \mathbf D ) & = 0 \\
\mathbf{\nabla}\cdot \mathbf B & = 0 \\
\mathbf{\nabla} \times (\alpha \, \mathbf B ) & = 0
\end{align}
\end{subequations}
These equations are particularly straightforward to solve because they represent a decoupled system of two unknown vector fields, one for $\mathbf D$ and one for $\mathbf B$. From the flat space-time solution, we know that the electric field will be quadrupolar, which means only one mode is present, namely $(l,m)=(2,0)$ for the axisymmetric case. Thus we expand both fields according to
\begin{subequations}
\begin{align}
\label{eq:TournantD2}
\mathbf D & = \mathrm{Re} \left[ \mathbf{\nabla} \times ( f_{2,0}^D \, \mathbf \Phi_{2,0} ) \right] \\
\label{eq:TournantB1}
\mathbf B & = \mathrm{Re} \left[ \mathbf{\nabla} \times ( f_{1,0}^B \, \mathbf \Phi_{1,0} ) \right]
\end{align}
\end{subequations}
This expansion automatically and analytically ensures the divergence-free nature of both $\mathbf D$ and $\mathbf B$. Moreover, these expressions lead, as in the previous static regime, to a separable solution for both the electric and magnetic field. Straightforward calculations show that $f_{1,0}^B$ again satisfies eq.~(\ref{eq:Laplacef10}), whereas $f_{2,0}^D$ has to be a solution of another second order linear differential equation given by
\begin{equation}
\label{eq:Laplacef20}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,0}^D)) - \frac{6}{r} \, f_{2,0}^D = 0
\end{equation}
It is obtained by inserting the expansion eq.~(\ref{eq:TournantD2}) into eq.~(\ref{eq:Tournant2}) following the property eq.~(\ref{eq:RotRotPhi}) of appendix~\ref{app:HSV} but now for $l=2$. The exact solution of this homogeneous linear differential equation and vanishing at infinity reads
\begin{equation}
\label{eq:DipoleSchwarzf20}
f_{2,0}^D = \frac{K}{R_s^2\,r} \, \left[ 6 \, \frac{r^2}{R_s^2} \, \left( 3 - 4 \, \frac{r}{R_s} \right) \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + 1 + 6 \, \frac{r}{R_s} \, \left( 1 - 4 \, \frac{r}{R_s} \right) \right]
\end{equation}
where $K$ is a constant to be determined from the boundary conditions at the surface of the neutron star. We now discuss this inner boundary condition in more detail. Inside a perfectly conducting star, the rotation of the plasma induces an electric field $\mathbf E$ which satisfies
\begin{equation}
\label{eq:CLE}
\mathbf E + r\,\Omega\,\sin\vartheta \,\mathbf{e}_\varphi \times \mathbf B = 0
\end{equation}
This implies an electric field as measured by a FIDO given by
\begin{equation}
\label{eq:CLD}
\mathbf D = - \varepsilon_0 \, \frac{\tilde{\omega}}{\alpha}\,r\,\sin\vartheta \,\mathbf{e}_\varphi \times \mathbf B = \varepsilon_0 \, c \, \frac{\tilde{\omega}}{\alpha} \, \frac{\mathbf \beta}{\omega} \times \mathbf B
\end{equation}
For this FIDO, the electromagnetic field symbolized by $(\mathbf D, \mathbf B)$ has to verify the jump conditions across an interface as in flat space-time. In other words, the magnetic field component normal to the surface and the electric field components lying in the plane of the interface are continuous functions. More explicitly, the normal component~$B^{\hat r}$ and the tangential components~$(D^{\hat \vartheta}, D^{\hat \varphi})$ have to be continuous across the stellar surface. By construction, it can be verified by projection of eq.~(\ref{eq:TournantD2}) onto $\mathbf{e}_\varphi$ that the component $D^{\hat \varphi}$ remains zero in the exterior vacuum space, as it is inside the star. Note that this remark is consistent with the projection of eq.~(\ref{eq:CLD}) onto $\mathbf{e}_\varphi$. For the other tangential component, by projection of eq.~(\ref{eq:CLD}) onto $\mathbf{e}_\vartheta$ we have to enforce the condition
\begin{equation}
\label{eq:LimiteFD20}
D^{\hat \vartheta} = - \varepsilon_0 \, \frac{\tilde{\omega}}{\alpha}\,r\,\sin\vartheta \, B^{\hat r}
\end{equation}
This has to be compared with the projection of eq.~(\ref{eq:TournantD2}) onto $\mathbf{e}_\vartheta$ and given by
\begin{equation}
\label{eq:LimiteD20}
D^{\hat \vartheta} = \frac{3}{2} \, \sqrt{\frac{5}{6\,\pi}} \, \frac{\alpha}{r} \, \partial_r(r\,f^D_{2,0}) \, \sin\vartheta \, \cos \vartheta
\end{equation}
In order to deduce the constant of integration~$K$ in eq.~(\ref{eq:DipoleSchwarzf20}), eq.~(\ref{eq:LimiteFD20}) and~(\ref{eq:LimiteD20}) should be compared at the stellar surface setting $r=R$. $B^{\hat r}$ is known from the static dipole solution and given by eq.~(\ref{eq:MagneticStaticR}). By direct calculation from eq.~(\ref{eq:DipoleSchwarzf20}) we arrive at
\begin{equation}
\left.\partial_r(r\,f^D_{2,0})\right|_{r=R} = 36 \, \frac{K\,R}{R_{\rm s}^4} \, \left[ \left( 1 - 2\,\frac{R}{R_s} \right) \, \ln\alpha_R^2 - 2 - \frac{R_s^2}{6\,R^2\,\alpha_R^2} \right]
\end{equation}
The constant $K$ then follows immediately from the above condition. We get
\begin{equation}
\label{eq:K}
K = \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{1}{9} \, \sqrt{\frac{6\,\pi}{5}} \, R_s \, R \,
\frac{\tilde{\omega}_R}{\alpha_R^2} \, C_1 \, C_2
\end{equation}
where
\begin{subequations}
\begin{align}
\label{eq:C1}
\alpha_R & = \sqrt{ 1 - \frac{R_s}{R} }\\
\omega_R & = \frac{a\,R_s\,c}{R^3} \\
\tilde{\omega}_R & = \Omega - \omega_R \\
C_1 & = \ln\alpha_R^2 + \frac{R_s}{R} + \frac{R_s^2}{2\,R^2} \\
C_2 & = \left[ \left( 1 - 2\,\frac{R}{R_s} \right) \, \ln\alpha_R^2 - 2 -
\frac{R_s^2}{6\,R^2\,\alpha_R^2} \right]^{-1}
\end{align}
\end{subequations}
The magnetic field remains the same as for the static dipole and the electric field yields
\begin{subequations}
\begin{align}
\label{eq:Drot1}
D^{\hat r} & = - \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{R}{R_s^3} \,
\frac{\tilde{\omega}_R}{\alpha_R^2} \, C_1 \, C_2 \, \left[ \left( 3
- 4\,\frac{r}{R_s} \right) \, \ln\alpha^2 + \frac{R_s^2}{6\,r^2}
+ \frac{R_s}{r} - 4 \right] \, ( 3\,\cos^2\vartheta - 1 ) \\
D^{\hat \vartheta} & = 6 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{R}{R_s^3} \,
\frac{\tilde{\omega}_R}{\alpha_R^2} \, \alpha \, C_1 \, C_2 \, \left[
\left( 1 - 2\,\frac{r}{R_s} \right) \, \ln\alpha^2 - 2 -
\frac{R_s^2}{6\,r^2\,\alpha^2} \right] \, \cos\vartheta \, \sin\vartheta \\
D^{\hat \varphi} & = 0
\end{align}
\end{subequations}
So far, we did not include any frame dragging effect, symbolized by the cross product in the constitutive relations eq.~(\ref{eq:ConstitutiveE}) and~(\ref{eq:ConstitutiveH}). We now include frame dragging in order to look for more accurate solutions taking the rotation of the neutron star explicitly into account. Because these constitutive relations and the vacuum Maxwell equations are linear, we use a power series expansion of the unknown vector fields with respect to a small dimensionless parameter~$\varepsilon$ which is related to the neutron star spin such that $\varepsilon=O(\Omega)$. Any vector field $\mathbf{V}$ is expanded into
\begin{equation}
\label{eq:Seri}
\mathbf{V} = \sum_{k\ge0} \varepsilon^k \, \mathbf{V}_k = \mathbf{V}_0 + \sum_{k\ge1} \varepsilon^k \, \mathbf{V}_k
\end{equation}
$\mathbf{V}_0$ is the static field for the non rotating star. Thus, to zero-th order, the electric field vanishes, $\mathbf{D}_0 = \mathbf{E}_0 = 0$. They are at least first order in $\Omega$. The shift vector is a quantity of first order so we write it as $\mathbf{\beta} = \varepsilon \, \mathbf{\beta}_1$. From the constitutive relations, we get the $k$-th order of the auxiliary electric field for $k\ge1$ as
\begin{equation}
\label{eq:Ek1}
\varepsilon_0 \, \mathbf{E}_k = \alpha \, \mathbf{D}_k + \varepsilon_0 \, c \, \mathbf{\beta}_1 \times \mathbf{B}_{k-1}
\end{equation}
and for the $k$-th order of the auxiliary magnetic field
\begin{equation}
\label{eq:Ek2}
\mu_0 \, \mathbf{H}_k = \alpha \, \mathbf{B}_k - \frac{\mathbf{\beta}_1 \times \mathbf{D}_{k-1}}{\varepsilon_0 \, c}
\end{equation}
Moreover, for any order, we have the constraints
\begin{subequations}
\begin{align}
\label{eq:Contraintek}
\mathbf{\nabla}\cdot \mathbf{B}_k & = \mathbf{\nabla}\cdot \mathbf{D}_k = 0 \\
\mathbf{\nabla} \times \mathbf{H}_k & = \mathbf{\nabla} \times \mathbf{E}_k = 0
\end{align}
\end{subequations}
As a consequence, we obtain a hierarchical set of partial differential equations for the fields~$\{\mathbf{B}_k, \mathbf{D}_k\}$ such that
\begin{subequations}
\begin{align}
\label{eq:hierarchy1}
\mathbf{\nabla} \times (\alpha \, \mathbf{D}_k) & = - \varepsilon_0 \, c \, \mathbf{\nabla} \times (\mathbf{\beta}_1 \times \mathbf{B}_{k-1}) \\
\label{eq:hierarchy2}
\mathbf{\nabla} \times (\alpha \, \mathbf{B}_k) & = \frac{1}{\varepsilon_0 \, c} \, \mathbf{\nabla} \times (\mathbf{\beta}_1 \times \mathbf{D}_{k-1})
\end{align}
\end{subequations}
for $k\ge1$. The initialisation for $k=0$ corresponds to the static dipole with $\mathbf B_0$ given by eq.~(\ref{eq:DipoleSchwarzf10}), therefore $\mathbf{D}_0 = \mathbf{E}_0 = 0$ as expected. We immediately conclude that the first perturbation in the magnetic field corresponds to a second order term symbolized by~$\mathbf{B}_2$ ($\mathbf{B}_1=0$). We look for the first order perturbation in the electric field corresponding to an electric quadrupole with $(l,m)=(2,0)$ such that
\begin{equation}
\label{eq:Tournant}
\mathbf{D}_1 = \mathbf{\nabla} \times ( f_{2,0}^D \, \mathbf \Phi_{2,0} )
\end{equation}
From now on, we suppress the real part symbol, it should be understood that the physical fields correspond to the real parts of the expressions derived below. Inserting this expansion into eq.~(\ref{eq:hierarchy1}) with $k=1$, the function~$f_{2,0}^D$ is solution of the following second order inhomogeneous linear ordinary differential equation
\begin{equation}
\label{eq:LaplaceSourcef20}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,0}^D)) - \frac{6}{r} \,
f_{2,0}^D = 12 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{6\,\pi}{5}} \, \frac{a\,c}{R_s^2\,r^2} \, \left[ \ln\alpha^2 + \frac{R_s}{r} +
\frac{R_s^2}{2\,r^2} \right]
\end{equation}
The right-hand side is obtained from the property eq.~(\ref{eq:RotBETARotVHS}). To solve this equation, we use standard techniques. First we look for the general solution of the homogeneous equation, which is nothing else than eq.~(\ref{eq:Laplacef20}) with the solution eq.~(\ref{eq:DipoleSchwarzf20}), which we write here as $f_{2,0}^{D(h)}$. Next, a particular solution of the inhomogeneous eq.~(\ref{eq:LaplaceSourcef20}) vanishing at infinity is given by
\begin{equation}
\label{eq:SolPart20}
f_{2,0}^{D(p)} = - 2 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{6\,\pi}{5}} \, \frac{a\,c}{R_s^2\,r} \, \left[ \ln\alpha^2 + \frac{R_s}{r} \right]
\end{equation}
In order to satisfy the boundary condition on the star, we also need the following expression
\begin{equation}
\partial_r (r\,f_{2,0}^{D(p)}) = - 2 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{6\,\pi}{5}} \, \frac{a\,c}{R^3 \, \alpha_R^2}
\end{equation}
in order to compute $\partial_r (r\,f_{2,0}^{D})$ in eq.~(\ref{eq:LimiteD20}) from $f_{2,0}^{D} = f_{2,0}^{D(h)} + f_{2,0}^{D(p)}$. The constant~$K$ is again determined from the inner boundary condition; now taking the frame dragging effect into account through the presence of the particular solution~$f_{2,0}^{D(p)}$, it has to be set to
\begin{equation}
\label{eq:Kb1}
K = \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{C_2}{9\,\alpha_R^2} \, \sqrt{\frac{6\,\pi}{5}} \, \left[ R_s \, R \, \tilde{\omega}_R \, C_1 + \frac{1}{2}\, \, \frac{\omega_R\,R_s^3}{R} \right]
\end{equation}
Putting all pieces together, the full solution $f_{2,0}^{D} = f_{2,0}^{D(h)} + f_{2,0}^{D(p)}$ reads
\begin{multline}
f_{2,0}^D = \frac{K}{R_s^2\,r} \, \left[ 6 \, \frac{r^2}{R_s^2} \, \left( 3 - 4 \, \frac{r}{R_s} \right) \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + 1 + 6 \, \frac{r}{R_s} \, \left( 1 - 4 \, \frac{r}{R_s} \right) \right] \\
- 2 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{6\,\pi}{5}} \, \frac{a\,c}{R_s^2\,r} \, \left[ \ln\alpha^2 + \frac{R_s}{r} \right]
\end{multline}
Taking the value of the constant~$K$ into account, we finally get
\begin{multline}
f_{2,0}^{D({\rm quad})} = \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi\,r} \, \sqrt{\frac{6\,\pi}{5}} \left\{ \frac{C_2}{18\,\alpha_R^2} \, \left( \frac{\omega_R\,R_s}{R} + 2 \, C_1 \, \frac{\tilde{\omega_R}\,R}{R_s} \right) \right. \times \\
\times \left. \left[ 6 \, \frac{r^2}{R_s^2} \, \left( 3 - 4 \, \frac{r}{R_s} \right) \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + 1 + 6 \, \frac{r}{R_s} \, \left( 1 - 4 \, \frac{r}{R_s} \right) \right] - 2 \, \frac{\omega\,r^3}{R_s^3} \, \left( \ln\alpha^2 + \frac{R_s}{r} \right) \right\}
\end{multline}
If we separate the frame dragging effect~$\omega$ from the pure rotation~$\Omega$, we get
\begin{multline}
f_{2,0}^{D({\rm quad})} = \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{6\,\pi}{5}} \left\{ \frac{2}{3} \, \frac{C_1\,C_2\,\Omega\,R\,r}{\alpha_R^2\,R_s^3} \, \left[ \left( 3
- 4\,\frac{r}{R_s} \right) \, \ln\alpha^2 + \frac{R_s^2}{6\,r^2} + \frac{R_s}{r} - 4 \right] - \right. \\
\left. 2 \, \frac{\omega\,r^4}{R_s^5} \, \left( \frac{R_s^2}{r^2} \, \left( \ln\alpha^2 + \frac{R_s}{r} \right) + \frac{C_2\,R_s^2}{3\,\alpha_R^2\,R^2} \, \left( \ln\alpha_R^2 + \frac{R_s}{R} \right) \, \times \right. \right. \\
\left.\left.\left[ \left( 3 - 4\,\frac{r}{R_s} \right) \, \ln\alpha^2 + \frac{R_s^2}{6\,r^2} + \frac{R_s}{r} - 4 \right] \right) \right\}
\end{multline}
The components of the electric field are then
\begin{subequations}
\begin{align}
\label{eq:Drot2}
D^{\hat r} & = - \sqrt{\frac{5}{4\,\pi}} \, \frac{\sqrt{6}}{r} \, f_{2,0}^{D({\rm quad})} \, P_2(\cos\vartheta) \\
D^{\hat \vartheta} & = \frac{3}{2} \, \sqrt{\frac{5}{6\,\pi}} \, \frac{\alpha}{r} \, \partial_r(r\, f_{2,0}^{D({\rm quad})}) \, \cos\vartheta \, \sin\vartheta \\
D^{\hat \varphi} & = 0
\end{align}
with the radial derivative given explicitly by
\begin{align}
\partial_r(r\, f_{2,0}^{D({\rm quad})}) & = \frac{2}{3} \, \sqrt{\frac{6\,\pi}{5}} \, \frac{\varepsilon_0 \, \mu_0 \, \mu \, r}{4\,\pi} \, \left\{ \frac{6\,C_1\,C_2\,\Omega\,R}{\alpha_R^2\,R_s^3} \, \left[ \left( 1 - 2 \, \frac{r}{R_s} \right) \, {\rm ln} \alpha^2 - 2 - \frac{R_s^2}{6\,\alpha^2\,r^2} \right] \right. \\
& \left. - \frac{\omega\,r^3}{R_s^5} \left( \frac{6\,C_2\,R_s^2}{\alpha_R^2\,R^2} \, \left( \ln\alpha_R^2 + \frac{R_s}{R} \right) \, \left[ \left( 1 - 2 \, \frac{r}{R_s} \right) \, {\rm ln} \alpha^2 - 2 - \frac{R_s^2}{6\,\alpha^2\,r^2} \right] + 3 \, \frac{R_s^4}{\alpha^2\,r^4} \right) \right\} \nonumber
\end{align}
\end{subequations}
These expressions are the same as equations (124)-(125)-(126) in \cite{2001MNRAS.322..723R} specialized to the aligned rotator. In the Newtonian limit we find
\begin{subequations}
\begin{align}
\label{eq:DrotNewt}
D^{\hat r} & = - \frac{\Omega \, B \, R^5}{r^4} \, ( 3 \, \cos^2 \vartheta - 1 ) \\
D^{\hat \vartheta} & = - \frac{\Omega \, B \, R^5}{r^4} \, 2 \, \cos \vartheta \, \sin \vartheta \\
D^{\hat \varphi} & = 0
\end{align}
\end{subequations}
as it should be.
\subsubsection{General formalism}
The properties of the vector spherical harmonics in curved space allow us to derive in a systematic way the relations between the expansion coefficients of~$\{\mathbf{B}_k, \mathbf{D}_k\}$. Because of the axisymmetry of the problem, there are no toroidal components of either the magnetic or the electric part, and all coefficients with $m>0$ vanish. Thus we expand both fields according to
\begin{subequations}
\begin{align}
\mathbf{D}_k & = \sum_{l\geq1} \mathbf{\nabla} \times ( f_{l,0}^{D(k)} \, \mathbf \Phi_{l,0} ) \\
\mathbf{B}_k & = \sum_{l\geq1} \mathbf{\nabla} \times ( f_{l,0}^{B(k)} \, \mathbf \Phi_{l,0} )
\end{align}
\end{subequations}
where the superscript~$(k)$ denotes the order of the expansion in the spin parameter, related to the frame dragging effect. Injecting these expressions into eqs.~(\ref{eq:hierarchy1}) and (\ref{eq:hierarchy2}), we get for $(l,k)\geq1$, according to eq.~(\ref{eq:RotBETARotVHS}) in appendix~\ref{app:HSV},
\begin{subequations}
\begin{multline}
\label{eq:fl0Dk}
\partial_r(\alpha^2\,\partial_r(r\,f_{l,0}^{D(k)})) - \frac{l(l+1)}{r} \, f_{l,0}^{D(k)} = \\
3 \, \varepsilon_0 \, \omega \, \left[ l \sqrt{\frac{(l-1)(l+1)}{(2l-1)(2l+1)}} \, f_{l-1,0}^{B(k-1)} - (l+1) \sqrt{\frac{l(l+2)}{(2l+3)(2l+1)}} \, f_{l+1,0}^{B(k-1)} \right]
\end{multline}
\begin{multline}
\label{eq:fl0Bk}
\partial_r(\alpha^2\,\partial_r(r\,f_{l,0}^{B(k)})) - \frac{l(l+1)}{r} \, f_{l,0}^{B(k)} = \\
- 3 \, \mu_0 \, \omega \, \left[ l \sqrt{\frac{(l-1)(l+1)}{(2l-1)(2l+1)}} \, f_{l-1,0}^{D(k-1)}- (l+1) \sqrt{\frac{l(l+2)}{(2l+3)(2l+1)}} \, f_{l+1,0}^{D(k-1)} \right]
\end{multline}
\end{subequations}
It is understood that $f_{0,0}^{D(k)} = f_{0,0}^{B(k)} = 0$. The very important fact about this hierarchical system of second order linear partial differential equations relating the $f_{l,0}^{D(k)}$ to the $f_{l,0}^{B(k)}$ is its \textit{uncoupled} nature. Indeed, the coefficients $f_{l,0}^{D(k)}$ and $f_{l,0}^{B(k)}$ are related only to the immediately lower order expansion coefficients $f_{l,0}^{B(k-1)}$ and $f_{l,0}^{D(k-1)}$, respectively. Consequently, we can find the solution to any order by simply computing more coefficients. The recurrence starts with the static aligned magnetic dipole, which is the zeroth order approximation of the solution, labelled by the superscript~$(0)$. As already noted in the previous paragraph, the electric field is at least a first order effect.
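The bookkeeping implied by this hierarchy can be made explicit with a few lines of code. The following Python sketch, given purely as an illustration (it is not part of our actual implementation), propagates the coupling rules of eqs.~(\ref{eq:fl0Dk})-(\ref{eq:fl0Bk}), namely that a coefficient of order~$k$ and degree~$l$ is sourced by the coefficients of the other field at order~$k-1$ and degrees~$l\pm1$, starting from the static dipole $f_{1,0}^{B(0)}$.
\begin{verbatim}
# Sketch: which multipoles (field, l) are excited at each order k in the
# spin parameter, following the couplings of the hierarchical system.
def excited_multipoles(k_max):
    orders = {0: {("B", 1)}}           # order 0: static magnetic dipole only
    for k in range(1, k_max + 1):
        new = set()
        for field, l in orders[k - 1]:
            target = "D" if field == "B" else "B"
            for lp in (l - 1, l + 1):  # couplings l -> l -/+ 1
                if lp >= 1:
                    new.add((target, lp))
        orders[k] = new
    return orders

for k, s in excited_multipoles(3).items():
    print(k, sorted(s))
# expected output: k=1 -> D, l=2 ; k=2 -> B, l=1,3 ; k=3 -> D, l=2,4
\end{verbatim}
The output reproduces the sequence described in the following paragraphs: the electric quadrupole appears at first order, the magnetic dipole and octupole corrections at second order, and so on.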
For concreteness, let us work out the approximate solution to third order, i.e. including two vector spherical harmonic functions in the expansion of both the electric and the magnetic field. At first glance, this seems a rather high degree of accuracy for such a solution, in contrast to the first order expansion of the background metric. Nevertheless, we have in mind to use such results as a benchmark to test forthcoming general relativistic electromagnetic solvers in free space and in the force-free approximation in order to extend the code presented in \cite{2012MNRAS.424..605P}. This justifies our wish to reach a high degree of accuracy for the numerical solutions even if the metric is only first order accurate in the spin parameter~$a$.
The electric field is a consequence of the rotation of the star, thus to zero-th order, there is only a magnetic field, i.e. $f_{l,0}^{D(0)} = 0 $ and $f_{1,0}^{B(0)} = f_{1,0}^{B({\rm dip})}$, the dipole in Schwarzschild space-time given by eq.~(\ref{eq:DipoleSchwarzf10}), all other $f_{l,0}^{B(0)}$ with $l\geq2$ being equal to zero. The initialisation with $f_{l,0}^{D(0)} = 0 $ implies that there are no first order corrections to the magnetic field because eq.~(\ref{eq:fl0Bk}) has a vanishing right hand side. It is a linear homogeneous second order partial differential equation with zero boundary conditions on the star and at infinity. Therefore the solution vanishes identically leading to $f_{l,0}^{B(1)}=0$. The first correction comes from the coefficients $f_{l,0}^{D(1)}$ which have to satisfy eq.~(\ref{eq:fl0Dk}). There is only one inhomogeneous equation corresponding to $l=2$ with the right hand side including $f_{1,0}^{B(0)}$. If written explicitly, we retrieve eq.~(\ref{eq:LaplaceSourcef20}) namely
\begin{equation}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,0}^{D(1)})) - \frac{6}{r} \,
f_{2,0}^{D(1)} = \frac{6}{\sqrt{5}} \, \varepsilon_0 \, \omega \, f_{1,0}^{B(0)}
\end{equation}
with the solution derived previously. The next order includes a perturbation in the magnetic field. Indeed, the coefficients~$f_{l,0}^{B(2)}$ have to satisfy eq.~(\ref{eq:fl0Bk}) with source terms emanating only from $f_{2,0}^{D(1)}$, supplemented with vanishing boundary conditions. Two equations, those for $l=1$ and $l=3$, are inhomogeneous, namely
\begin{subequations}
\begin{align}
\label{eq:f10B2}
\partial_r(\alpha^2\,\partial_r(r\,f_{1,0}^{B(2)})) - \frac{2}{r} \, f_{1,0}^{B(2)} & = \frac{6}{\sqrt{5}} \, \frac{\omega}{\varepsilon_0 \, c^2} \, f_{2,0}^{D(1)} \\
\label{eq:f30B2}
\partial_r(\alpha^2\,\partial_r(r\,f_{3,0}^{B(2)})) - \frac{12}{r} \, f_{3,0}^{B(2)} & = - 18 \, \sqrt{\frac{2}{35}} \, \frac{\omega}{\varepsilon_0 \, c^2} \, f_{2,0}^{D(1)}
\end{align}
\end{subequations}
Finally, this perturbed magnetic field will feed back to the electric field to third order with the non-vanishing coefficients satisfying
\begin{subequations}
\begin{align}
\label{eq:f20D3}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,0}^{D(3)})) - \frac{6}{r} \, f_{2,0}^{D(3)} & = \frac{6}{\sqrt{5}} \, \varepsilon_0 \, \omega \, \left[ f_{1,0}^{B(2)} - 3 \sqrt{\frac{2}{7}} \, f_{3,0}^{B(2)} \right] \\
\label{eq:f40D3}
\partial_r(\alpha^2\,\partial_r(r\,f_{4,0}^{D(3)})) - \frac{20}{r} \, f_{4,0}^{D(3)} & = 4 \, \sqrt{\frac{15}{7}} \, \varepsilon_0 \, \omega \, f_{3,0}^{B(2)}
\end{align}
\end{subequations}
but with boundary conditions at the stellar surface according to eq.~(\ref{eq:CLD}). Equations (\ref{eq:f10B2})-(\ref{eq:f20D3}) show the hierarchical set we are led to in order to improve the solution step by step by including an increasing number of multipoles of order~$l$, in accordance with the degree of approximation desired in the spin parameter. Some of these equations can be solved analytically with source terms, but we were unable to write down simple expressions for the solution with appropriate boundary conditions except for the first few coefficients.
Finding closed expressions is a cumbersome task. Eventually, we decided to solve the above set of equations numerically by spectral methods, expanding the solutions into rational Chebyshev functions as defined in \cite{Boyd2001}; see below for the details. Our starting point is to use a finite number of multipolar coefficients in the expansion of both fields, $N_D$ terms for $\mathbf{D}$ and $N_B$ terms for $\mathbf{B}$, writing
\begin{subequations}
\label{eq:DBDvlptAlign}
\begin{align}
\mathbf{D} & = \sum_{l=1}^{N_D} \mathbf{\nabla} \times ( f_{l,0}^{D} \, \mathbf \Phi_{l,0} ) \\
\mathbf{B} & = \sum_{l=1}^{N_B} \mathbf{\nabla} \times ( f_{l,0}^{B} \, \mathbf \Phi_{l,0} )
\end{align}
\end{subequations}
The order in the spin parameter, previously labelled by~$(k)$, has disappeared in the numerical solution: we no longer perform a perturbative expansion in~$a$. Each of the coefficients $f_{l,0}^{D}$ and $f_{l,0}^{B}$ has to satisfy a differential equation, which is given for the magnetic field by
\begin{multline}
\label{eq:fl0B}
\partial_r(\alpha^2\,\partial_r(r\,f_{l,0}^{B})) - \frac{l(l+1)}{r} \, f_{l,0}^{B} = \\
- 3 \, \frac{\omega}{\varepsilon_0 \, c^2} \, \left[ l \sqrt{\frac{(l-1)(l+1)}{(2l-1)(2l+1)}} \, f_{l-1,0}^{D}- (l+1) \sqrt{\frac{l(l+2)}{(2l+3)(2l+1)}} \, f_{l+1,0}^{D} \right]
\end{multline}
and for the electric field by
\begin{multline}
\label{eq:fl0D}
\partial_r(\alpha^2\,\partial_r(r\,f_{l,0}^{D})) - \frac{l(l+1)}{r} \, f_{l,0}^{D} = \\
3 \, \varepsilon_0 \, \omega \, \left[ l \sqrt{\frac{(l-1)(l+1)}{(2l-1)(2l+1)}} \, f_{l-1,0}^{B} - (l+1) \sqrt{\frac{l(l+2)}{(2l+3)(2l+1)}} \, f_{l+1,0}^{B} \right]
\end{multline}
The boundary conditions at infinity enforce vanishing coefficients, whereas on the neutron star surface we have to impose continuity of the tangential components of $\mathbf D$ and of the normal component of $\mathbf B$. Introducing the expansions eq.~(\ref{eq:DBDvlptAlign}) into eq.~(\ref{eq:CLD}), then projecting along~$\mathbf{e}_\vartheta$ and using the useful identities for frame-dragging presented in appendix~\ref{sec:framedragging}, we get the relation between the coefficients of $\mathbf D$ and $\mathbf B$ as
\begin{multline}
\alpha^2 \, \left[ \sqrt{\frac{l+2}{l+1}} \,J_{l+1,0} \, \partial_r(r\,f_{l+1,0}^{D}) - \sqrt{\frac{l-1}{l}} \,J_{l,0} \, \partial_r(r\,f_{l-1,0}^{D}) \right] = \\
\varepsilon_0 \, r \, \tilde{\omega} \, \left[ \sqrt{l\,(l+1)} \, ( 1 - J_{l,0}^2 - J_{l+1,0}^2 ) \, f_{l,0}^{B} - \right. \\
\left. \sqrt{(l-2)\,(l-1)} \, J_{l,0} \, J_{l-1,0} \, f_{l-2,0}^{B} - \sqrt{(l+2)\,(l+3)} \, J_{l+1,0} \, J_{l+2,0} \, f_{l+2,0}^{B}\right]
\end{multline}
where quantities have to be evaluated on the neutron star surface, at $r=R$. Some recurrence formulas are very useful to deal with spherical harmonics. The three recurrences used to impose the boundary conditions are
\begin{subequations}
\begin{align}
\sin\vartheta \, \partial_\vartheta Y_{l,m} & = l \, J_{l+1,m} \, Y_{l+1,m} - (l+1) \, J_{l,m} \, Y_{l-1,m} \\
\cos\vartheta \, Y_{l,m} & = J_{l+1,m} \, Y_{l+1,m} + J_{l,m} \, Y_{l-1,m} \\
\cos^2\vartheta \, Y_{l,m} & = J_{l+1,m} \, J_{l+2,m} \, Y_{l+2,m} + ( J_{l+1,m}^2 + J_{l,m}^2 ) \, Y_{l,m} + J_{l,m} \, J_{l-1,m} \, Y_{l-2,m}
\end{align}
with the constants given by
\begin{align}
J_{l,m} & = \sqrt{\frac{l^2-m^2}{4\,l^2-1}}
\end{align}
\end{subequations}
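The constants $J_{l,m}$ are straightforward to tabulate, and the above recurrences can be checked numerically. The short Python snippet below, shown only as an illustration, verifies the relation for $\cos\vartheta \, Y_{l,m}$ at an arbitrary point using the spherical harmonics provided by \texttt{scipy}.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm   # scipy convention: sph_harm(m, l, azimuth, polar)

def J(l, m):
    return np.sqrt((l**2 - m**2) / (4.0 * l**2 - 1.0))

l, m = 3, 1
theta_pol, phi_az = 0.7, 1.3         # arbitrary polar and azimuthal test angles
Y = lambda ll: sph_harm(m, ll, phi_az, theta_pol)
lhs = np.cos(theta_pol) * Y(l)
rhs = J(l + 1, m) * Y(l + 1) + J(l, m) * Y(l - 1)
print(abs(lhs - rhs))                # of the order of 1e-16: recurrence satisfied
\end{verbatim}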
Let us write down explicitly the equations for the three first coefficients in $\mathbf{B}$ and $\mathbf{D}$. The system of partial differential equations reads
\begin{subequations}
\begin{align}
\partial_r(\alpha^2\,\partial_r(r\,f_{1,0}^{B})) - \frac{2}{r} \, f_{1,0}^{B} & = \frac{6}{\sqrt{5}} \, \frac{\omega}{\varepsilon_0 \, c^2} \, f_{2,0}^{D} \\
\label{eq:f30B}
\partial_r(\alpha^2\,\partial_r(r\,f_{3,0}^{B})) - \frac{12}{r} \, f_{3,0}^{B} & = \frac{1}{\sqrt{7}} \, \frac{\omega}{\varepsilon_0 \, c^2} \, \left[ - 18 \, \sqrt{\frac{2}{5}} \, f_{2,0}^{D} + 4 \, \sqrt{15} \, f_{4,0}^{D} \right] \\
\partial_r(\alpha^2\,\partial_r(r\,f_{5,0}^{B})) - \frac{30}{r} \, f_{5,0}^{B} & = \frac{2}{\sqrt{11}} \, \frac{\omega}{\varepsilon_0 \, c^2} \, \left[ - 5 \, \sqrt{6} \, f_{4,0}^{D} + 9 \, \sqrt{\frac{35}{13}} \, f_{6,0}^{D} \right] \\
\label{eq:f20D}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,0}^{D})) - \frac{6}{r} \, f_{2,0}^{D} & = \frac{6}{\sqrt{5}} \, \varepsilon_0 \, \omega \, \left[ f_{1,0}^{B} - 3 \sqrt{\frac{2}{7}} \, f_{3,0}^{B} \right] \\
\label{eq:f40D}
\partial_r(\alpha^2\,\partial_r(r\,f_{4,0}^{D})) - \frac{20}{r} \, f_{4,0}^{D} & = 2 \, \sqrt{3} \, \varepsilon_0 \, \omega \, \left[ 2 \, \sqrt{\frac{5}{7}} \, f_{3,0}^{B} - 5 \, \sqrt{\frac{2}{11}} \, f_{5,0}^{B} \right] \\
\partial_r(\alpha^2\,\partial_r(r\,f_{6,0}^{D})) - \frac{42}{r} \, f_{6,0}^{D} & = 18 \, \sqrt{\frac{35}{143}} \, \varepsilon_0 \, \omega \, f_{5,0}^{B}
\end{align}
\end{subequations}
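The numerical prefactors appearing in this truncated system follow directly from the generic coupling terms of eqs.~(\ref{eq:fl0B})-(\ref{eq:fl0D}). As a simple sanity check, and only as an illustration, the following Python lines print the factors $3\,l\sqrt{(l-1)(l+1)/((2l-1)(2l+1))}$ and $3\,(l+1)\sqrt{l(l+2)/((2l+3)(2l+1))}$ for the first few values of~$l$; for $l=1$ one recovers $6/\sqrt{5}$, and for $l=3$ the factors $18\sqrt{2/35}$ and $4\sqrt{15/7}$ appearing in eq.~(\ref{eq:f30B}).
\begin{verbatim}
import numpy as np

def c_minus(l):   # coupling to f_{l-1,0} of the other field
    return l * np.sqrt((l - 1) * (l + 1) / ((2*l - 1) * (2*l + 1)))

def c_plus(l):    # coupling to f_{l+1,0} of the other field
    return (l + 1) * np.sqrt(l * (l + 2) / ((2*l + 3) * (2*l + 1)))

for l in range(1, 7):
    print(l, 3 * c_minus(l), 3 * c_plus(l))
# l=1: 0.000, 2.683 (= 6/sqrt(5)) ; l=3: 4.302 (= 18 sqrt(2/35)), 5.855 (= 4 sqrt(15/7))
\end{verbatim}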
The associated boundary conditions are
\begin{subequations}
\label{eq:CLfD}
\begin{align}
\sqrt{\frac{2}{5}} \, \alpha^2 \, \partial_r(r\,f_{2,0}^{D}) & =
\varepsilon_0 \, r \, \tilde{\omega} \, \left[ \frac{2\,\sqrt{2}}{5} \, f_{1,0}^{B} - \frac{12}{5 \, \sqrt{7}} \, f_{3,0}^{B} \right] \\
\alpha^2 \, \left[ \frac{2}{3}\,\sqrt{\frac{5}{7}} \, \partial_r(r\,f_{4,0}^{D}) - \sqrt{\frac{6}{35}} \, \partial_r(r\,f_{2,0}^{D}) \right] & =
\varepsilon_0 \, r \, \tilde{\omega} \, \left[ \frac{44}{15\,\sqrt{3}} \, f_{3,0}^{B} - \frac{2}{5} \, \sqrt{\frac{6}{7}} \, f_{1,0}^{B} - \frac{20}{3}\,\sqrt{\frac{10}{231}} \, f_{5,0}^{B} \right] \\
\alpha^2 \, \left[ \sqrt{\frac{42}{143}} \, \partial_r(r\,f_{6,0}^{D}) - \frac{2}{3}\,\sqrt{\frac{5}{11}} \, \partial_r(r\,f_{4,0}^{D}) \right] & =
\varepsilon_0 \, r \, \tilde{\omega} \, \left[ \frac{58}{39} \, \sqrt{\frac{10}{3}} \, f_{5,0}^{B} - \frac{40}{3\,\sqrt{231}} \, f_{3,0}^{B} \right]
\end{align}
\end{subequations}
where again all quantities have to be evaluated on the neutron star surface, at $r=R$.
We emphasize that the magnetic field at the neutron star surface is exactly matched to the expression for the general-relativistic static dipole, eq.~(\ref{eq:DipoleSchwarzf10}). All other multipole fields $f_{l,0}^{B}$ with $l\neq1$ vanish at $r=R$ by our definition.
\subsubsection{Numerical integration}
The computation of the electromagnetic field in vacuum has been reduced to a system of linear ordinary differential equations of second order. Moreover, it is a boundary value problem to be solved in a semi-infinite interval, from $r=R$ to $r=+\infty$. Several different techniques exist to treat such a system. We choose to employ spectral methods, expanding the unknown functions onto special basis functions. According to \cite{Boyd2001}, dealing with rational Chebyshev functions~$TL_k(y)$ is a judicious choice for the interval~$[R,+\infty[$. These functions are defined by
\begin{subequations}
\begin{align}
TL_k(y) & = T_k (x) \\
x & = \frac{y-L}{y+L} \\
y & = r - R
\end{align}
\end{subequations}
where $T_k$ are the Chebyshev polynomials of order~$k$ and $y\in[0,+\infty[$. $L$ is a scaling parameter which should reproduce the characteristic length of the problem. We choose $L=R$. Any radial function~$f(r)$ is therefore expanded into a finite number of $N_r$~terms such that
\begin{equation}
f(r) = \sum_{k=0}^{N_r-1} f_k \, TL_k(y(r))
\end{equation}
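For illustration only (our actual implementation relies on Mathematica, as explained below), these basis functions are easily evaluated numerically; the mapping $y \mapsto x$ sends the semi-infinite interval onto $[-1,1)$, on which ordinary Chebyshev polynomials are used.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C

def TL(k, r, R, L=None):
    """Rational Chebyshev function TL_k at radii r >= R, with map scale L."""
    L = R if L is None else L
    y = r - R                        # y in [0, +infinity)
    x = (y - L) / (y + L)            # x in [-1, 1)
    coeff = np.zeros(k + 1); coeff[k] = 1.0
    return C.chebval(x, coeff)       # T_k(x)

r = np.geomspace(1.0, 1.0e3, 5)      # sample radii in units of R = 1
print(TL(3, r, R=1.0))               # TL_3 sampled on a logarithmic radial grid
\end{verbatim}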
The outer and inner boundary conditions for the magnetic field coefficients~$f^B_{l,m}$ are expressed respectively as
\begin{subequations}
\begin{align}
\label{eq:CLf1}
\sum_{k=0}^{N_r-1} f_k & = 0 \\
\sum_{k=0}^{N_r-1} (-1)^k \, f_k & = f(R)
\end{align}
\end{subequations}
Actually, all the $f(R)$ vanish except for the dipole $f^B_{1,0}(R)$; recall that we strictly impose a dipolar magnetic field on the neutron star surface. The outer boundary conditions for the electric field coefficients~$f^D_{l,m}$ are the same as eq.~(\ref{eq:CLf1}). The inner boundary conditions are different because we enforce conditions on the derivative, see eq.~(\ref{eq:CLfD}), but they remain relations that are linear in the expansion coefficients. We use what \cite{Boyd2001} calls the boundary-bordering method, leading to a linear algebra system to be solved. For more details on spectral and pseudo-spectral methods, see for instance also \cite{2006spme.book.....C}. Technically, we use Mathematica~9 to compute the matrix coefficients and invert the system to find the expansion coefficients.
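To make the boundary-bordering procedure concrete, we give below a short, self-contained Python sketch; it is only an illustration (the actual computations in this paper were performed with Mathematica~9) and solves the flat space-time limit $\alpha=1$ of the static dipole equation, $\partial_r(\partial_r(r\,f)) - 2f/r = 0$ with $f(R)=1$ and $f(+\infty)=0$, whose exact solution $f=(R/r)^2$ allows an immediate check. The interior collocation rows carry the differential operator, while the first and last rows are replaced by the outer and inner boundary conditions.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C

R, L, Nr = 1.0, 1.0, 16                      # stellar radius, map scale, modes
x = np.cos(np.pi * np.arange(1, Nr - 1) / (Nr - 1))   # interior collocation points
r = 2.0 * R / (1.0 - x)                      # inverse of the map x = (r - 2R)/r for L = R
dxdr, d2xdr2 = 2.0 * R / r**2, -4.0 * R / r**3

A, rhs = np.zeros((Nr, Nr)), np.zeros(Nr)
for k in range(Nr):
    e = np.zeros(k + 1); e[k] = 1.0          # Chebyshev coefficients of T_k
    f   = C.chebval(x, e)                    # TL_k(r)
    fp  = C.chebval(x, C.chebder(e)) * dxdr  # d TL_k / dr
    fpp = C.chebval(x, C.chebder(e, 2)) * dxdr**2 + C.chebval(x, C.chebder(e)) * d2xdr2
    # flat space-time operator: d_r(d_r(r f)) - 2 f / r = 2 f' + r f'' - 2 f / r
    A[1:Nr-1, k] = 2.0 * fp + r * fpp - 2.0 * f / r
    # boundary bordering: two rows are replaced by the boundary conditions
    A[0, k]    = 1.0                         # r -> infinity (x = +1): sum_k f_k = 0
    A[Nr-1, k] = (-1.0) ** k                 # r = R (x = -1): sum_k (-1)^k f_k = f(R)
rhs[Nr-1] = 1.0                              # normalisation f(R) = 1

fk = np.linalg.solve(A, rhs)
err = np.max(np.abs(C.chebvander(x, Nr - 1) @ fk - (R / r)**2))
print("max error against the exact solution f = (R/r)^2:", err)
\end{verbatim}
In the general-relativistic case, the operator rows contain $\alpha^2$ and the source terms of the coupled system, and the inner rows for the electric coefficients involve the derivatives $\partial_r(r\,f^D_{l,0})$ according to eq.~(\ref{eq:CLfD}), but the structure of the linear system is the same.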
For the subsequent numerical applications, we normalize the magnetic moment of the neutron star to unity, $\mu=1$. In order to demonstrate the accuracy of our spectral algorithm to solve the system of linear ordinary differential equations, we begin with the static aligned dipole. The neutron star radius is set to $R=r_{\rm L}/10$, although it is irrelevant for the static dipole case because there is no rotation and no scaling with~$r_{\rm L}$. The ratio of the stellar radius to the Schwarzschild radius is increased, taking $R=\{2,20,200,2000\} \, R_{\rm s}$. The absolute values of the expansion coefficients of the function~$f_{1,0}^{B}$ are shown in fig.~\ref{fig:fb10_align} for a number of collocation points $N_{\rm r}=51$. For any value of the compactness parameter~$\Xi$, we get the prescribed 15~digits of accuracy, although for compactness close to unity more coefficients are needed. Indeed, for very low compactness, $\Xi=1/2000$, red curve with full circles, fewer than ten coefficients are required to get full accuracy. The same remark holds for any $\Xi\ll1$. For the typical compactness of a neutron star, $\Xi=0.5$, magenta curve with full triangles, we need almost 25~coefficients to achieve the required accuracy. Nevertheless, the spectral convergence of our computation is clearly identified by the exponential decrease of the magnitude of the highest-order coefficients within an accuracy of 15~digits. The other question to address is the ability of our spectral algorithm to reproduce the exact solution depending on the strength of the deviation from the flat space-time metric. In order to check the correctness of these coefficients, we have to compare them with the expansion coefficients of the analytical solution given by eq.~(\ref{eq:DipoleSchwarzf10}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fB10_stat_NR51.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field functions $f_{1,0}^{B}$ for the static aligned dipole. $k$ corresponds to the order of the k-th rational Chebyshev function and the numbers in the legend depict the ratio~$R/R_{\rm s}$.}
\label{fig:fb10_align}
\end{figure}
To do this, we define the absolute error between the analytical~$f_{1,0}^{B({\rm dip})}$ and the numerical~$f_{1,0}^{B({\rm num})}$ solution by
\begin{equation}
\label{eq:ErreurAbs}
\textrm{error} (f_{1,0}^{B}) = \left| \frac{f_{1,0}^{B({\rm dip})} - f_{1,0}^{B({\rm num})}}{{\rm max}(f_{1,0}^{B({\rm dip})})} \right|
\end{equation}
This error is plotted in fig.~\ref{fig:fb10_align_erreur} and shows a perfect match between both solutions, within the numerical accuracy. We reach 15~digits of significance for the relevant coefficients, i.e. those which are not numerically zero. This explains the decreasing number of significant digits when the coefficients are close to zero: such coefficients are meaningless.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{erreur_abs_fB10_stat_NR51.eps}
\caption{Absolute error of the numerical solution compared to the analytical expression for the magnetic field~$f_{1,0}^{B}$ for the static aligned dipole. $k$ corresponds to the order of the k-th rational Chebyshev function and the numbers in the legend depict the ratio~$R/R_{\rm s}$.}
\label{fig:fb10_align_erreur}
\end{figure}
This first example demonstrates the very high accuracy obtainable by our spectral method. Next, we proceed with the rotating aligned dipole. Rotation combined with frame dragging effects will produce higher order multipole coefficients which, to first order, depend linearly on the spin parameter~$a$. We already gave an approximate analytical solution to the lowest order, i.e. the induced electric field without taking into account the perturbation in the magnetic field. Nevertheless, with our numerical integration procedure, we are able to give solutions to any order in the multipole moments~$l$. We therefore proceed in increasing order of complexity. Starting with only the two functions $f_{1,0}^{B}$ and $f_{2,0}^{D}$ to the lowest approximation, corresponding to the magnetic dipole and to the electric quadrupole, we then successively add the couple $(f_{3,0}^{B},f_{4,0}^{D})$ and conclude with two more functions $(f_{5,0}^{B},f_{6,0}^{D})$. Consequently, we can quantitatively estimate the contribution to the electromagnetic field from multipoles higher than the dipole and quadrupole.
We performed different sets of calculations by combining slow and fast rotation, $r_{\rm L}=\{10,1000\}\,R$, with low and high compactness, $R=\{2,2000\}\,R_{\rm s}$, with normalized magnetic moment $\mu=1$. We start with a very slowly rotating dipole for which $r_{\rm L}=1000\,R$ and a low compactness $R=2000\,R_{\rm s}$ in order to look for small perturbations of the electric field induced by frame dragging effects. We can therefore compare the approximate analytical expressions with the more accurate numerical ones. The absolute values of the rational Chebyshev coefficients of the lowest order approximation are shown in fig.~\ref{fig:f_align_rot_1_r1000_rs2000} for $f_{1,0}^{B}$ and $f_{2,0}^{D}$. Spectral convergence is achieved as expected. The discrepancy between the analytical solution and the numerical computation is small, less than $10^{-3}$; the absolute error between both sets of coefficients is close to zero, as can be seen in fig.~\ref{fig:erreur_align_rot_1_r1000_rs2000}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_1_NR31_r1000_rs2000.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic and electric field functions, $f_{1,0}^{B}$ in red circles and $f_{2,0}^{D}$ in blue squares, for the aligned rotating dipole. The parameters are $R=2000\,R_{\rm s}$ and $r_{\rm L}=1000\,R$.}
\label{fig:f_align_rot_1_r1000_rs2000}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{erreur_abs_f_aligne_rot_1_NR31_r1000_rs2000.eps}
\caption{Difference of the numerical solution compared to the first order analytical approximate expression for the magnetic field~$f_{1,0}^{B({\rm dip})}$ in red circles and for the electric field~$f_{2,0}^{D({\rm quad})}$ in blue squares. The parameters are $R=2000\,R_{\rm s}$ and $r_{\rm L}=1000\,R$.}
\label{fig:erreur_align_rot_1_r1000_rs2000}
\end{figure}
In a second set of calculations, we increased the frame-dragging effects by taking $r_{\rm L}=10\,R$ and $R=2000\,R_{\rm s}$. The absolute values of the rational Chebyshev coefficients of the lowest order approximation are shown in fig.~\ref{fig:f_align_rot_1_r10_rs2000} for $f_{1,0}^{B}$ and $f_{2,0}^{D}$. Spectral convergence is achieved as expected. Here also the absolute discrepancy between both sets of coefficients is close to zero, as can be seen in fig.~\ref{fig:erreur_align_rot_1_r10_rs2000}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_1_NR31_r10_rs2000.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions, $f_{1,0}^{B}$ in red circles and $f_{2,0}^{D}$ in blue squares, for the aligned rotating dipole. The parameters are $R=2000\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:f_align_rot_1_r10_rs2000}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{erreur_abs_f_aligne_rot_1_NR31_r10_rs2000.eps}
\caption{Difference of the numerical solution compared to the first order analytical approximate expression for the magnetic field~$f_{1,0}^{B({\rm dip})}$ in red circles and for the electric field~$f_{2,0}^{D({\rm quad})}$ in blue squares. The parameters are $R=2000\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:erreur_align_rot_1_r10_rs2000}
\end{figure}
In a third set of calculations, we increased the compactness by taking $r_{\rm L}=1000\,R$ and $R=2\,R_{\rm s}$. These values are typical for radio pulsars. The absolute values of the rational Chebyshev coefficients of the lowest order approximation are shown in fig.~\ref{fig:f_align_rot_1_r1000_rs2} for $f_{1,0}^{B}$ and $f_{2,0}^{D}$. Spectral convergence is achieved as expected. Here also the absolute discrepancy between both sets of coefficients is close to zero, as can be seen in fig.~\ref{fig:erreur_align_rot_1_r1000_rs2}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_1_NR31_r1000_rs2.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions, $f_{1,0}^{B}$ in red circles and $f_{2,0}^{D}$ in blue squares, for the aligned rotating dipole. The parameters are $R=2\,R_{\rm s}$ and $r_{\rm L}=1000\,R$.}
\label{fig:f_align_rot_1_r1000_rs2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{erreur_abs_f_aligne_rot_1_NR31_r1000_rs2.eps}
\caption{Difference of the numerical solution compared to the first order analytical approximate expression for the magnetic field~$f_{1,0}^{B({\rm dip})}$ in red circles and for the electric field~$f_{2,0}^{D({\rm quad})}$ in blue squares. The parameters are $R=2\,R_{\rm s}$ and $r_{\rm L}=1000\,R$.}
\label{fig:erreur_align_rot_1_r1000_rs2}
\end{figure}
In a last set of calculations, we increased the rotation frequency by taking $r_{\rm L}=10\,R$ and $R=2\,R_{\rm s}$. These values are typical for millisecond pulsars. The absolute values of the rational Chebyshev coefficients of the lowest order approximation are shown in fig.~\ref{fig:f_align_rot_1_r10_rs2} for $f_{1,0}^{B}$ and $f_{2,0}^{D}$. Spectral convergence is achieved as expected. The absolute discrepancy is shown in fig.~\ref{fig:erreur_align_rot_1_r10_rs2}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_1_NR31_r10_rs2.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions, $f_{1,0}^{B}$ in red circles and $f_{2,0}^{D}$ in blue squares, for the aligned rotating dipole. The parameters are $R=2\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:f_align_rot_1_r10_rs2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{erreur_abs_f_aligne_rot_1_NR31_r10_rs2.eps}
\caption{Difference of the numerical solution compared to the first order analytical approximate expression for the magnetic field~$f_{1,0}^{B({\rm dip})}$ in red circles and for the electric field~$f_{2,0}^{D({\rm quad})}$ in blue squares. The parameters are $R=2\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:erreur_align_rot_1_r10_rs2}
\end{figure}
Next, we continue with the aligned rotator by computing higher order multipoles $l=\{3,4\}$ to demonstrate that they are several orders of magnitude smaller than the magnetic dipolar and electric quadrupolar moments. Results are shown in fig.~\ref{fig:f_align_rot_2_r1000_rs2000} for two more multipoles with a slowly rotating non-compact star. The same for a rapidly rotating neutron star is shown in fig.~\ref{fig:f_align_rot_2_r10_rs2}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_2_NR31_r1000_rs2000.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions $f_{1,0}^{B}, f_{3,0}^{B}$ and $f_{2,0}^{D}, f_{4,0}^{D}$ for the aligned rotating dipole. The parameters are $R=2000\,R_{\rm s}$ and $r_{\rm L}=1000\,R$.}
\label{fig:f_align_rot_2_r1000_rs2000}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_2_NR31_r10_rs2.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions $f_{1,0}^{B}, f_{3,0}^{B}$ and $f_{2,0}^{D}, f_{4,0}^{D}$ for the aligned rotating dipole. The parameters are $R=2\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:f_align_rot_2_r10_rs2}
\end{figure}
We conclude this section by computing even higher order multipoles $l=\{5,6\}$ to demonstrate that they are also several orders of magnitude smaller than the lower multipolar moments. For a total of 6 multipoles, we get the coefficients represented in fig.~\ref{fig:f_align_rot_3_r1000_rs2000} for the slowly rotating non-compact star and in fig.~\ref{fig:f_align_rot_3_r10_rs2} for a rapidly rotating neutron star.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_3_NR31_r1000_rs2000.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions $f_{1,0}^{B}, f_{3,0}^{B}, f_{5,0}^{B}$ and $f_{2,0}^{D}, f_{4,0}^{D}, f_{6,0}^{D}$ for the aligned rotating dipole.}
\label{fig:f_align_rot_3_r1000_rs2000}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f_aligne_rot_3_NR31_r10_rs2.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions $f_{1,0}^{B}, f_{3,0}^{B}, f_{5,0}^{B}$ and $f_{2,0}^{D}, f_{4,0}^{D}, f_{6,0}^{D}$ for the aligned rotating dipole.}
\label{fig:f_align_rot_3_r10_rs2}
\end{figure}
Neither the dipolar magnetic field nor the electric quadrupolar field is significantly affected by the higher multipolar fields. Indeed, we show the discrepancy in the expansion coefficients in fig.~\ref{fig:erreur_multipole_r10_rs2}. We first compare the expansion with a magnetic dipole and an electric quadrupole against an expansion with a dipole plus octupole~($l=3$) magnetic field and a quadrupole plus $l=4$ electric field, the differences being denoted by $f_{1,0}^{B(2-1)}$ and $f_{2,0}^{D(2-1)}$. The same can be performed with a threefold expansion for both fields, denoted by $f_{1,0}^{B(3-1)}$ and $f_{2,0}^{D(3-1)}$. Higher order multipoles can also be compared by inspection of~$f_{3,0}^{B(3-2)}$ and $f_{4,0}^{D(3-2)}$. Comparison with the lowest order expansion is not possible in that case because this approximate solution contains neither $f_{3,0}^{B}$ nor $f_{4,0}^{D}$. We conclude from the plots in fig.~\ref{fig:erreur_multipole_r10_rs2} that the discrepancy in the expansion coefficients is negligible. In other words, adding higher multipoles does not significantly perturb the lower expansion coefficients. For an almost non-rotating and non-compact star, the discrepancies are shown in fig.~\ref{fig:erreur_multipole_r1000_rs2000}. They are weaker than in the previous case.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fB_erreur_NR31_r10_rs2.eps} &
\includegraphics[width=0.5\textwidth]{fD_erreur_NR31_r10_rs2.eps}
\end{tabular}
\caption{Discrepancy between the coefficients of the rational Chebyshev expansion of the magnetic and electric field functions $\{f_{1,0}^{B}, f_{3,0}^{B}\}$ and $\{f_{2,0}^{D}, f_{4,0}^{D}\}$ for the aligned rotating dipole depending on the number of multipoles used in the expansion. $(2-1)$ means comparison between an expansion with four multipoles, two for $B$ and two for $D$, and two multipoles, magnetic dipole and electric quadrupole. $(3-1)$ means comparison between an expansion with six multipoles, three for $B$ and three for $D$, and two multipoles, magnetic dipole and electric quadrupole. $(3-2)$ means comparison between an expansion with six multipoles, three for $B$ and three for $D$, and four multipoles. The case with only a magnetic dipole and an electric quadrupole is excluded because it contains only $f_{1,0}^{B}$ and $f_{2,0}^{D}$.}
\label{fig:erreur_multipole_r10_rs2}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fB_erreur_NR31_r1000_rs2000.eps} &
\includegraphics[width=0.5\textwidth]{fD_erreur_NR31_r1000_rs2000.eps}
\end{tabular}
\caption{Same as fig.~\ref{fig:erreur_multipole_r10_rs2} but for a weakly rotating and non compact star.}
\label{fig:erreur_multipole_r1000_rs2000}
\end{figure}
\section{ELECTROMAGNETIC FIELD OF AN ORTHOGONAL DIPOLE}
\label{sec:Orthogonal}
In this last section, we investigate the orthogonal rotator in vacuum as a generalization of the Deutsch solution. We first check that we retrieve the static perpendicular dipole magnetic field in a Schwarzschild space-time. We then conclude the section with the perpendicular rotating dipole, computed to modest numerical accuracy.
\subsection{Static dipole}
The orthogonal static dipole follows the same lines as those for the aligned static dipole. Far from the neutron star, we expect to retrieve the flat space-time expression so we develop the magnetic field according to
\begin{equation}
\label{eq:BstaticOrtho1}
\mathbf B = \mathbf{\nabla} \times ( f_{1,1}^B \, \mathbf \Phi_{1,1} )
\end{equation}
with the boundary condition at infinity such that the magnetic field becomes
\begin{equation}
\label{eq:BstaticOrthoPlat}
\mathbf B = Re \left[ \sqrt{\frac{16\,\pi}{3}} \, \frac{\mu_0\,\mu}{4\,\pi} \mathbf{\nabla} \times ( \frac{\mathbf \Phi_{1,1}}{r^2} ) \right]
\end{equation}
This corresponds to the boundary conditions at infinity
\begin{equation}
\label{eq:BstaticOrtho2}
\lim\limits_{r\to+\infty} f_{1,1}^B = \sqrt{\frac{16\,\pi}{3}} \, \frac{\mu_0\,\mu}{4\,\pi\,r^2}
\end{equation}
We take as a definition for an orthogonal dipole the presence of only one spherical harmonic, namely $(l,m)=(1,1)$. The procedure then follows exactly the same lines as for the static aligned dipole. We refer to this case for more details about the calculations.
It is then straightforward to show that the scalar function~$f_{1,1}^B$ must satisfy the same equation as $f_{1,0}^B$, namely
\begin{equation}
\label{eq:Laplacef11}
\partial_r(\alpha^2\,\partial_r(r\,f_{1,1}^B)) - \frac{2}{r} \, f_{1,1}^B = 0
\end{equation}
The exact solution for the dipole magnetic field which asymptotes to the flat dipole is given by
\begin{equation}
\label{eq:DipoleSchwarzf11}
f_{1,1}^{B({\rm dip})} = - \sqrt{\frac{16\,\pi}{3}} \, \frac{3\,\mu_0\,\mu\,r}{4\,\pi\,R_s^3} \, \left[ {\rm ln}\,\alpha^2 + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2}\right]
\end{equation}
which is related to the aligned solution by
\begin{equation}
f_{1,1}^{B({\rm dip})} = - \sqrt{2} \, f_{1,0}^{B({\rm dip})}
\end{equation}
The magnetic field components are
\begin{subequations}
\begin{align}
\label{eq:MagneticStaticOrtho}
B^{\hat r} = & - 6 \, \left[ {\rm ln} \, \alpha^2 + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2} \right] \, \frac{\mu_0\,\mu\,\sin\vartheta\,\cos\varphi}{4\,\pi\,R_s^3} \\
B^{\hat \vartheta} = & - 3 \, \left[ 2 \, \alpha \, {\rm ln} \,
\alpha^2 + \frac{R_s}{r} \, \frac{2\,r-R_s}{\sqrt{r\,(r-R_s)}}
\right] \, \frac{\mu_0\,\mu\,\cos\vartheta\,\cos\varphi}{4\,\pi\,R_s^3} \\
B^{\hat \varphi} = & + 3 \, \left[ 2 \, \alpha \, {\rm ln} \,
\alpha^2 + \frac{R_s}{r} \, \frac{2\,r-R_s}{\sqrt{r\,(r-R_s)}}
\right] \, \frac{\mu_0\,\mu\,\sin\varphi}{4\,\pi\,R_s^3}
\end{align}
\end{subequations}
Corrections to first order compared to flat space-time are
\begin{subequations}
\begin{align}
\label{eq:CorrectionOrdre1}
B^{\hat r} = & \frac{2\,\mu_0\,\mu\,\sin\vartheta\,\cos\varphi}{4\,\pi\,r^3} \, \left[ 1 + \frac{3}{4} \, \frac{R_s}{r} + o\left( \frac{R_s}{r} \right) \right] \\
B^{\hat \vartheta} = & - \frac{\mu_0\,\mu\,\cos\vartheta\,\cos\varphi}{4\,\pi\,r^3} \, \left[ 1 + \frac{R_s}{r} + o\left( \frac{R_s}{r} \right) \right] \\
B^{\hat \varphi} = & \frac{\mu_0\,\mu\,\sin\varphi}{4\,\pi\,r^3} \, \left[ 1 + \frac{R_s}{r} + o\left( \frac{R_s}{r} \right) \right]
\end{align}
\end{subequations}
We now switch to the most interesting case, the general relativistic orthogonal rotating dipole in vacuum.
\subsection{Stationary rotator}
\subsubsection{General formalism to any order}
We next look for the stationary solution to the Maxwell equations in curved vacuum space. In this vacuum, the fields $\mathbf{D}$ and $\mathbf{B}$ are divergenceless. We therefore expand them according to the most general prescription
\begin{subequations}
\begin{align}
\label{eq:Decomposition_HSV_div_0_D}
\mathbf{D}(r,\vartheta,\varphi,t) = & \sum_{l=1}^\infty\sum_{m=-l}^l \left( \mathbf{\nabla} \times [f^D_{l,m}(r,t) \, \mathbf{\Phi}_{l,m}] + g^D_{l,m}(r,t) \, \mathbf{\Phi}_{l,m} \right) \\
\label{eq:Decomposition_HSV_div_0_B}
\mathbf{B}(r,\vartheta,\varphi,t) = & \sum_{l=1}^\infty\sum_{m=-l}^l \left( \mathbf{\nabla} \times [f^B_{l,m}(r,t) \, \mathbf{\Phi}_{l,m}] + g^B_{l,m}(r,t) \, \mathbf{\Phi}_{l,m} \right)
\end{align}
\end{subequations}
We extend the method outlined by \cite{2012MNRAS.424..605P} and employ complex quantities. Therefore, the time-dependent part is proportional to $e^{-i\,m\,\Omega\,t}$. Remember that the divergence-free property of the electromagnetic field is ensured by construction; it is a consequence of the above expansion, eqs.~(\ref{eq:Decomposition_HSV_div_0_D})-(\ref{eq:Decomposition_HSV_div_0_B}). The remaining Maxwell equations involving the curl are satisfied if and only if the $f^{D}_{l,m}$ are solutions to the second order linear partial differential equation
\begin{subequations}
\begin{multline}
\alpha \, \mathcal{R}_l[f^{D}_{l,m}] = - i \, \varepsilon_0 \, m \, ( \Omega - \omega ) \, g^{B}_{l,m} + \\
3 \, \varepsilon_0 \, \alpha \, \frac{\omega}{r} \left[ f^{B}_{l-1,m} \, \sqrt{(l-1)(l+1)} \, J_{l,m} - f^{B}_{l+1,m} \, \sqrt{l\,(l+2)} \, J_{l+1,m} \right]
\end{multline}
and similarly for the coefficients~$f^{B}_{l,m}$
\begin{multline}
\alpha \, \mathcal{R}_l[f^{B}_{l,m}] = i \, \mu_0 \, m \, ( \Omega - \omega ) \, g^{D}_{l,m} -\\
3 \, \mu_0 \, \alpha \, \frac{\omega}{r} \left[ f^{D}_{l-1,m} \, \sqrt{(l-1)(l+1)} \, J_{l,m} - f^{D}_{l+1,m} \, \sqrt{l\,(l+2)} \, J_{l+1,m} \right]
\end{multline}
To derive these expressions, we put the expansions eqs.~(\ref{eq:Decomposition_HSV_div_0_D})-(\ref{eq:Decomposition_HSV_div_0_B}) into eqs.~(\ref{eq:Maxwell2})-(\ref{eq:Maxwell4}), then project onto the $\mathbf{\Phi}_{l,m}$ and use identities from appendix~\ref{sec:framedragging}. Moreover, there exists a simple algebraic relation between $g^{D}_{l,m}$ and $f^{B}_{l,m}$ on one side, and between $g^{B}_{l,m}$ and $f^{D}_{l,m}$ on the other side. We find
\begin{align}
\label{eq:gDvsfB}
\alpha \, g^{D}_{l,m} = & + i \, \varepsilon_0 \, m \, \tilde{\omega} \, f^{B}_{l,m} \\
\alpha \, g^{B}_{l,m} = & - i \, \mu_0 \, m \, \tilde{\omega} \, f^{D}_{l,m}
\end{align}
\end{subequations}
obtained by projection of the same equations but now onto $\mathbf{e}_{\rm r}$.
All these relations can be summarized in two inhomogeneous Helmholtz equations for the electric field~$f^{D}_{l,m}$
\begin{subequations}
\label{eq:Helmholtz}
\begin{multline}
\label{eq:HelmholtzD}
\alpha^2 \, \mathcal{R}_l[f^{D}_{l,m}] + m^2 \, \frac{\tilde\omega^2}{c^2} \, f^{D}_{l,m} = \\
3 \, \varepsilon_0 \, \alpha^2 \, \frac{\omega}{r} \, \left[ f^{B}_{l-1,m} \, \sqrt{(l-1)(l+1)} \, J_{l,m} - f^{B}_{l+1,m} \, \sqrt{l\,(l+2)} \, J_{l+1,m} \right]
\end{multline}
and similarly for the magnetic field~$f^{B}_{l,m}$
\begin{multline}
\label{eq:HelmholtzB}
\alpha^2 \, \mathcal{R}_l[f^{B}_{l,m}] + m^2 \, \frac{\tilde\omega^2}{c^2} \, f^{B}_{l,m} = \\
- 3 \, \mu_0 \, \alpha^2 \, \frac{\omega}{r} \, \left[ f^{D}_{l-1,m} \, \sqrt{(l-1)(l+1)} \, J_{l,m} - f^{D}_{l+1,m} \, \sqrt{l\,(l+2)} \, J_{l+1,m} \right]
\end{multline}
\end{subequations}
The boundary conditions on the neutron star surface are imposed in the following way. Introducing the expansions eq.~(\ref{eq:Decomposition_HSV_div_0_D}) and eq.~(\ref{eq:Decomposition_HSV_div_0_B}) into eq.~(\ref{eq:CLD}), then projecting along~$\mathbf{e}_\vartheta$ and~$\mathbf{e}_\varphi$ using the formula eq.~(\ref{eq:BETAVHS}) in appendix~\ref{app:HSV} we get the relation between the coefficients of $\mathbf D$ and $\mathbf B$ as
\begin{subequations}
\begin{align}
\sum_{l,m} i \, \frac{g^D_{l,m}}{\sqrt{l\,(l+1)}} \, \sin\vartheta \, \partial_\vartheta Y_{l,m} & = - \frac{m \, \alpha}{r\, \sqrt{l\,(l+1)}} \, \partial_r ( r \, f^D_{l,m}) \, Y_{l,m} \\
\sum_{l,m} - \frac{\alpha}{r} \, \partial_r ( r \, f^D_{l,m}) \, \frac{\sin\vartheta}{\sqrt{l\,(l+1)}} \, \partial_\vartheta Y_{l,m} - i \, \frac{m}{\sqrt{l\,(l+1)}} \, g^D_{l,m} \, Y_{l,m} & = \varepsilon_0 \, \frac{\Omega-\omega}{\alpha} \, \sin^2\vartheta \, \sqrt{l\,(l+1)} \, f^B_{l,m} \, Y_{l,m}
\end{align}
\end{subequations}
This can be rearranged by indexation with the same $Y_{l,m}$ such that
\begin{subequations}
\begin{align}
\label{eq:BC1}
\sqrt{\frac{l-1}{l}} \, J_{l,m} \, g^D_{l-1,m} - \sqrt{\frac{l+2}{l+1}} \, J_{l+1,m} \, g^D_{l+1,m} = \frac{i\,m\,\alpha}{r\,\sqrt{l\,(l+1)}} \, \partial_r ( r \, f^D_{l,m}) & \\
\label{eq:BC2}
\alpha^2 \, \sqrt{\frac{l+2}{l+1}} \, J_{l+1,m} \, \partial_r ( r \, f^D_{l+1,m}) - \alpha^2 \, \sqrt{\frac{l-1}{l}} \, J_{l,m} \, \partial_r ( r \, f^D_{l-1,m}) - i \, \frac{m \, \alpha \, r}{\sqrt{l\,(l+1)}} \, g^D_{l,m} & = \\
\varepsilon_0 \, r \, \tilde{\omega} \, \left[ \sqrt{l\,(l+1)} \, ( 1 - J_{l,m}^2 - J_{l+1,m}^2 ) \, f_{l,m}^{B} - \right. & \nonumber \\
\left. \sqrt{(l-2)\,(l-1)} \, J_{l,m} \, J_{l-1,m} \, f_{l-2,m}^{B} - \sqrt{(l+2)\,(l+3)} \, J_{l+1,m} \, J_{l+2,m} \, f_{l+2,m}^{B}\right] & \nonumber
\end{align}
\end{subequations}
So it seems that we have two different boundary constraints for the~$f^D_{l,m}$. Actually this is not the case; there is no inconsistency. Indeed, eq.~(\ref{eq:BC1}) can be rearranged into
\begin{equation}
\label{eq:BC3}
\alpha^2 \, \partial_r ( r \, f^D_{l,m}) = \varepsilon_0 \, r \, \tilde{\omega} \, \left[ \sqrt{(l+1)\,(l-1)} \, J_{l,m} \, f^B_{l-1,m} - \sqrt{l\,(l+2)} \, J_{l+1,m} \, f^B_{l+1,m} \right]
\end{equation}
Inserting this expression into the left hand side of eq.~(\ref{eq:BC2}), we get its right hand side. Therefore, eq.~(\ref{eq:BC2}) is redundant with eq.~(\ref{eq:BC1}): it follows from it. Consequently, the correct boundary condition to impose on the $f^D_{l,m}$ is eq.~(\ref{eq:BC3}), and only eq.~(\ref{eq:BC3}). Moreover, because the dipole corresponds to an $m=1$ mode and the problem is linear, we only expect $m=1$ azimuthal modes in the sought solutions.
\subsubsection{Near zone or quasi-static solution}
Before solving numerically the full set of ordinary differential equations, we investigate the near zone solution for $r\ll r_{\rm L}$. This is also called the quasi-static regime because it does not contain the electric displacement current. This approximation implies that we can neglect the terms involving $m \, \tilde \omega /c$ in eqs.~(\ref{eq:Helmholtz}). To the lowest order, we find that the magnetic field is given by its static approximation~$f_{1,1}^{B({\rm dip})}$. We therefore look for the first order perturbation in the electric field~$f_{2,1}^D$, solution of
\begin{equation}
\label{eq:StaticfD21a}
\mathcal{R}_2[f^{D}_{2,1}] = 3 \, \sqrt{\frac{3}{5}} \, \varepsilon_0 \, \frac{\omega}{r} \, f^{B({\rm dip})}_{1,1}
\end{equation}
Written explicitly, we get
\begin{equation}
\label{eq:StaticfD21b}
\partial_r(\alpha^2\,\partial_r(r\,f_{2,1}^D)) - \frac{6}{r} \, f_{2,1}^D = -36 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{\pi}{5}} \, \frac{a\,c}{R_s^2\,r^2} \, \left[ \ln\alpha^2 + \frac{R_s}{r} + \frac{R_s^2}{2\,r^2} \right]
\end{equation}
which is exactly the same partial differential equation as eq.~(\ref{eq:LaplaceSourcef20}) apart from a constant factor in the inhomogeneous term, in front of $f^{B({\rm dip})}_{1,1}$. Consequently, a particular solution of eq.~(\ref{eq:StaticfD21b}) vanishing at infinity is given by
\begin{equation}
\label{eq:SolPart21}
f_{2,1}^{D(p)} = 6\, \sqrt{\frac{\pi}{5}} \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{a\,c}{R_s^2\,r} \, \left[ \ln\alpha^2 + \frac{R_s}{r} \right]
\end{equation}
The homogeneous solution is again given by eq.~(\ref{eq:DipoleSchwarzf20}). In order to satisfy the boundary condition on the star which is from eq.~(\ref{eq:BC3})
\begin{equation}
\label{eq:BC}
\sqrt{5} \, \alpha^2 \, \partial_r(r\,f_{2,1}^{D}) = \sqrt{3} \, \varepsilon_0 \, r \, \tilde{\omega} \, f_{1,1}^{B({\rm dip})}
\end{equation}
we must set the constant to
\begin{equation}
\label{eq:Kb2}
K = - \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \frac{C_2}{3\,\alpha_R^2} \, \sqrt{\frac{\pi}{5}} \, \left[ R_s \, R \, \tilde{\omega}_R \, C_1 + \frac{1}{2}\, \, \frac{\omega_R\,R_s^3}{R} \right]
\end{equation}
The full solution reads
\begin{multline}
f_{2,1}^D = \frac{K}{R_s^2\,r} \, \left[ 6 \, \frac{r^2}{R_s^2} \, \left( 3 - 4 \, \frac{r}{R_s} \right) \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + 1 + 6 \, \frac{r}{R_s} \, \left( 1 - 4 \, \frac{r}{R_s} \right) \right] \\
+ 6 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi} \, \sqrt{\frac{\pi}{5}} \, \frac{a\,c}{R_s^2\,r} \, \left[ \ln\alpha^2 + \frac{R_s}{r} \right]
\end{multline}
Taking the value of the constant~$K$ into account, we get
\begin{multline}
f_{2,1}^D = -3 \, \frac{\varepsilon_0 \, \mu_0 \, \mu}{4\,\pi\,r} \, \sqrt{\frac{\pi}{5}} \left\{ \frac{C_2}{18\,\alpha_R^2} \, \left( \frac{\omega_R\,R_s}{R} + 2 \, C_1 \, \frac{\tilde{\omega}_R\,R}{R_s} \right) \right. \times \\
\times \left. \left[ 6 \, \frac{r^2}{R_s^2} \, \left( 3 - 4 \, \frac{r}{R_s} \right) \, {\rm ln} \left( 1 - \frac{R_s}{r} \right) + 1 + 6 \, \frac{r}{R_s} \, \left( 1 - 4 \, \frac{r}{R_s} \right) \right] - 2 \, \frac{\omega\,r^3}{R_s^3} \, \left( \ln\alpha^2 + \frac{R_s}{r} \right) \right\}
\end{multline}
This is exactly the same expression as for the aligned rotator, except for a constant factor. Indeed, we have
\begin{equation}
f_{2,1}^{D({\rm quad})} = - \frac{3}{\sqrt{6}} \, f_{2,0}^{D({\rm quad})}
\end{equation}
The components of the electric field then follow immediately from this remark. In the general case of an oblique rotator with inclination angle~$\chi$, the near zone quasi-static electric field is given by
\begin{equation}
\mathbf{D}_1 = \mathbf{\nabla} \times ( \cos \chi \, f_{2,0}^{D({\rm quad})} \, \mathbf \Phi_{2,0} + \sin \chi \, f_{2,1}^{D({\rm quad})} \, \mathbf \Phi_{2,1} )
+ \sin \chi \, g_{1,1}^{D({\rm dip})} \, \mathbf \Phi_{1,1}
\end{equation}
Note that we have to add the component related to $g_{1,1}^{D({\rm dip})}$ because it is connected to $f_{1,1}^{B({\rm dip})}$ via eq.~(\ref{eq:gDvsfB}). The components are explicitly
\begin{subequations}
\begin{align}
\label{eq:Drot3}
D^{\hat r} & = - \sqrt{\frac{30}{\pi}} \, \frac{f_{2,0}^{D({\rm quad})}}{4\,r} \, ( \cos \chi \, ( 3 \, \cos^2 \vartheta - 1 ) + 3 \, \sin \chi \, \cos\vartheta \, \sin \vartheta \, e^{i\,\varphi} ) \\
D^{\hat \vartheta} & = \frac{3}{4} \, \sqrt{\frac{5}{6\,\pi}} \, \frac{\alpha}{r} \, \partial_r(r\,f_{2,0}^{D({\rm quad})} ) \, ( 2 \, \cos \chi \, \cos \vartheta \, \sin \vartheta + \sin \chi \, ( \sin^2 \vartheta - \cos^2 \vartheta ) e^{i\,\varphi} ) \\
& + \frac{1}{2} \, \sqrt{\frac{3}{2\,\pi}} \, \varepsilon_0 \, \frac{\tilde{\omega}}{\alpha} \, f_{1,0}^{B({\rm dip})} \, \sin\chi \, e^{i\,\varphi} \nonumber \\
D^{\hat \varphi} & = \frac{1}{2} \, \sqrt{\frac{3}{2\,\pi}} \, \left[ - \frac{\sqrt{5}}{2} \, \frac{\alpha}{r} \, \partial_r(r\,f_{2,0}^{D({\rm quad})} ) + \varepsilon_0 \, \frac{\tilde{\omega}}{\alpha} \, f_{1,0}^{B({\rm dip})}\right] \, \sin \chi \, \cos \vartheta \, i \, e^{i\,\varphi}
\end{align}
\end{subequations}
It is understood that the physical quantities are only the real parts of the above expressions.
These equations are exactly the same as equations (124)-(125)-(126) in \cite{2001MNRAS.322..723R} for the general oblique case, except for a typo in their $E^{\hat \phi}$ component: there should be a minus sign immediately after the first bracket, otherwise $E^{\hat \phi}$ would not vanish on the neutron star surface. It is understood that their $E^{\hat \phi}$ corresponds to our definition of $D^{\hat \varphi}$. In the Newtonian limit we find as expected the flat space-time quadrupolar expressions
\begin{subequations}
\begin{align}
\label{eq:DperpNewt}
D^{\hat r} & = - \frac{\Omega \, B \, R^5}{r^4} \, ( \cos \chi \, ( 3 \, \cos^2 \vartheta - 1 ) + 3 \, \sin \chi \, \cos\vartheta \, \sin \vartheta \, e^{i\,\varphi} ) \\
D^{\hat \vartheta} & = - \frac{\Omega \, B \, R^5}{r^4} \, \left[ 2 \, \cos \chi \, \cos \vartheta \, \sin \vartheta + \sin \chi \, \left( \sin^2 \vartheta - \cos^2 \vartheta + \frac{r^2}{R^2}\right) \, e^{i\,\varphi} \right] \\
D^{\hat \varphi} & = \frac{\Omega \, B \, R^5}{r^4} \, \left( 1 - \frac{r^2}{R^2}\right) \, \sin \chi \, \cos \vartheta \, i \, e^{i\,\varphi}
\end{align}
\end{subequations}
Next we want to look for the solution in the wave zone which leads to a net Poynting flux. We do it by numerical integration of the above mentioned system of partial differential equations, the Helmholtz system with appropriate boundary conditions.
\subsubsection{Numerical solution in whole vacuum}
For the scalar Helmholtz equation, we run into problems when applying our expansion into rational Chebyshev functions straightforwardly, because the solution oscillates asymptotically. This behaviour cannot be reproduced by the $TL_k$ functions. We therefore supplement these basis functions with an extra function mimicking the correct asymptotic behaviour of the solution. We know from the flat space-time expression that it should tend to the spherical Hankel function $h_l^{(1)}(r/r_{\rm L})$. Moreover, we want to impose an asymptotic expansion that tends to only this function. We achieve this with the following expansion of the unknown coefficients~$f_{l,m}^{B/D}$:
\begin{equation}
r \, f(r) = \sum_{k=0}^{N_r-2} f_k \, TL_k(y(r)) + f_{N_r-1} \, r \, h_l^{(1)}(r/r_{\rm L})
\end{equation}
and we impose
\begin{equation}
\lim\limits_{r\to+\infty} \sum_{k=0}^{N_r-2} f_k \, TL_k(y(r)) = 0
\end{equation}
which is simply expressed as
\begin{equation}
\sum_{k=0}^{N_r-2} f_k = 0
\end{equation}
In this way we get the correct asymptotic behaviour of each coefficient as
\begin{equation}
\lim\limits_{r\to+\infty} f(r) = f_{N_r-1} \, h_l^{(1)}(r/r_{\rm L})
\end{equation}
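As an illustration of this augmented basis (not our actual implementation), the outgoing-wave term can be evaluated with the spherical Bessel functions of \texttt{scipy}, using $h_l^{(1)}(x) = j_l(x) + i\,y_l(x)$; the coefficients used below are hypothetical and only serve to show the evaluation.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.special import spherical_jn, spherical_yn

def h1(l, x):                        # spherical Hankel function of the first kind
    return spherical_jn(l, x) + 1j * spherical_yn(l, x)

def f_augmented(fk, l, r, R, rL):
    """Evaluate f(r) from r f(r) = sum_k fk[k] TL_k(y) + fk[-1] r h_l^(1)(r/rL)."""
    y = r - R
    x = (y - R) / (y + R)            # map scale L = R
    smooth = C.chebval(x, fk[:-1])   # the N_r - 1 rational Chebyshev terms
    return (smooth + fk[-1] * r * h1(l, r / rL)) / r

# hypothetical coefficients; note that the smooth part sums to zero at x = +1
fk = np.array([0.2, -0.15, -0.05, 0.3 + 0.1j])
print(f_augmented(fk, 2, np.array([2.0, 50.0, 500.0]), R=1.0, rL=10.0))
\end{verbatim}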
We solve numerically the minimal truncated system involving $f_{1,1}^B$ and $f_{2,1}^D$ because of computational resource limitations. From the above discussion, the electromagnetic field is expanded into
\begin{subequations}
\begin{align}
\mathbf B & = \mathbf{\nabla} \times ( f_{1,1}^B \, \mathbf \Phi_{1,1} ) - i \, \mu_0 \, \frac{\tilde{\omega}}{\alpha} \, f_{2,1}^D \, \mathbf \Phi_{2,1} \\
\mathbf D & = \mathbf{\nabla} \times ( f_{2,1}^D \, \mathbf \Phi_{2,1} ) + i \, \varepsilon_0 \, \frac{\tilde{\omega}}{\alpha} \, f_{1,1}^B \, \mathbf \Phi_{1,1}
\end{align}
\end{subequations}
The elliptic problems to be solved are
\begin{subequations}
\label{eq:Perp}
\begin{align}
\frac{\alpha^2}{r} \, \frac{\partial}{\partial r} \left( \alpha^2\,\frac{\partial}{\partial r}(r\,f^{B}_{1,1}) \right) - \alpha^2 \, \frac{2}{r^2} \, f^{B}_{1,1} + \frac{(\Omega-\omega)^2}{c^2} \, f^{B}_{1,1} & = 3 \, \sqrt{\frac{3}{5}} \, \mu_0 \, \alpha^2 \, \frac{\omega}{r} \, f^{D}_{2,1} \\
\frac{\alpha^2}{r} \, \frac{\partial}{\partial r} \left( \alpha^2\,\frac{\partial}{\partial r}(r\,f^{D}_{2,1}) \right) - \alpha^2 \, \frac{6}{r^2} \, f^{D}_{2,1} + \frac{(\Omega-\omega)^2}{c^2} \, f^{D}_{2,1} & = 3 \, \sqrt{\frac{3}{5}} \, \varepsilon_0 \, \alpha^2 \, \frac{\omega}{r} \, f^{B}_{1,1}
\end{align}
\end{subequations}
and the boundary condition is the same as in the quasi-static regime, eq.~(\ref{eq:BC}). In the asymptotic limit of very large distances, we know that the solution relaxes to the Deutsch field, therefore
\begin{subequations}
\begin{align}
\lim\limits_{r\to+\infty} f_{1,1}^B & = f_{1,1}^{B(\infty)} \, h_1^{(1)} \left(\frac{r}{r_{\rm L}}\right) \\
\lim\limits_{r\to+\infty} f_{2,1}^D & = f_{2,1}^{D(\infty)} \, h_2^{(1)} \left(\frac{r}{r_{\rm L}}\right)
\end{align}
\end{subequations}
where $f_{1,1}^{B(\infty)}$ and $f_{2,1}^{D(\infty)}$ are two constants derived from the numerical solution of eqs.(\ref{eq:Perp}), actually corresponding to the last term $f_{N_r-1}$ in the expansion.
Two examples of the coefficients obtained by this procedure for $f_{1,1}^{B}$ and $f_{2,1}^{D}$ are shown in the non-relativistic limit with $R=2000\,R_{\rm s}$ and $r_{\rm L}=1000\,R$, fig.~\ref{fig:u_perp_rot_1_r1000_rs2000}, and in the extreme relativistic limit with $R=2\,R_{\rm s}$ and $r_{\rm L}=10\,R$, fig.~\ref{fig:u_perp_rot_1_r10_rs2}. The convergence of the first few coefficients is fast, but after number ten or so the decrease in the magnitude of the coefficients becomes rather weak. This is probably due to the asymptotic expression we chose, namely the spherical Hankel functions appropriate in flat space-time. Switching to a more accurate asymptotic behaviour in a Schwarzschild background metric would certainly help to improve the convergence, but such functions do not (yet) exist in the literature. The presence of the lapse function~$\alpha$ makes the convergence to spherical Hankel functions only first order.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{u_perp_rot_1_NR101_r1000_rs2000.eps}
\caption{Absolute value of the coefficients of the rational Chebyshev expansion of the magnetic field and electric field functions $f_{1,1}^{B}$ and $f_{2,1}^{D}$ for the perpendicular rotating dipole for $R=2000\,R_{\rm s}$ and $r_{\rm L}=1000\,R$. The large value of the last coefficient in the expansion corresponds to the asymptotic behavior related to the spherical Hankel functions.}
\label{fig:u_perp_rot_1_r1000_rs2000}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{u_perp_rot_1_NR101_r10_rs2.eps}
\caption{Same as fig.~\ref{fig:u_perp_rot_1_r1000_rs2000} but for $R=2\,R_{\rm s}$ and $r_{\rm L}=10\,R$.}
\label{fig:u_perp_rot_1_r10_rs2}
\end{figure}
To conclude, we compute the Poynting flux at infinity and compare it with the flat space-time value obtained from the point magnetic dipole losses. The Poynting vector is given by $\mathbf{S} = \mathbf{E} \wedge \mathbf{H}$ but asymptotically the electric field $\varepsilon_0 \, \mathbf E$ tends towards $\mathbf D$ and the magnetic field $\mu_0 \, \mathbf H$ tends towards $\mathbf B$. In order to get the spin-down of the neutron star, we only need the radial component of the Poynting vector such that $S_{\rm r} = c^2 \, ( D_\vartheta \, B_\varphi - D_\varphi \, B_\vartheta)$. We already know that the Poynting flux for the perpendicular rotator in flat space-time is given by
\begin{equation}
L_{\rm sd}^{\rm flat} = \frac{\mu_0 \, c}{6\,\pi} \, \frac{\mu^2}{r_{\rm L}^4} = \frac{8\,\pi}{3\,\mu_0\,c^3} \, \Omega^4\,B^2\,R^6
\end{equation}
For the Deutsch field, this flux is
\begin{equation}
L_{\rm sd}^{\rm Deutsch} = \frac{4}{5} \, \frac{45 - 3\,x^4 + 2\,x^6}{(1 + x^2)\,(36 - 3\,x^4 + x^6)} \, L_{\rm sd}^{\rm flat} \approx ( 1 - x^2 ) \, L_{\rm sd}^{\rm flat}
\end{equation}
where $x=R/r_{\rm L}$ and the approximation is valid for $x\ll1$. In the general-relativistic case we have
\begin{equation}
L_{\rm sd}^{\rm gr} = \int S_{\rm r} \, r^2 \, d\Omega = \frac{1}{2} \, (|f_{1,1}^{B(\infty)}|^2 + |f_{2,1}^{D(\infty)}|^2 )
\end{equation}
For comparison between general-relativistic situation and flat space-time we compute the normalized flux as
\begin{equation}
\label{eq:Poynting}
\frac{L_{\rm sd}^{\rm gr}}{L_{\rm sd}^{\rm flat}} = 3 \, \pi \, (|f_{1,1}^{B(\infty)}|^2 + |f_{2,1}^{D(\infty)}|^2 )
\end{equation}
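As a simple numerical illustration of these formulae (the amplitudes below are hypothetical and merely show how the ratio is evaluated), one may compare the Deutsch correction factor with its small-$x$ approximation and compute the normalized general-relativistic flux from given asymptotic amplitudes.
\begin{verbatim}
import numpy as np

def deutsch_factor(x):
    """L_sd^Deutsch / L_sd^flat as a function of x = R / r_L."""
    return 0.8 * (45 - 3*x**4 + 2*x**6) / ((1 + x**2) * (36 - 3*x**4 + x**6))

def normalized_gr_flux(fB_inf, fD_inf):
    """L_sd^gr / L_sd^flat = 3 pi (|f_B^inf|^2 + |f_D^inf|^2)."""
    return 3.0 * np.pi * (abs(fB_inf)**2 + abs(fD_inf)**2)

print(deutsch_factor(0.1), 1 - 0.1**2)          # 0.9901 versus the estimate 0.99
print(normalized_gr_flux(0.25 + 0.10j, 0.15j))  # hypothetical asymptotic amplitudes
\end{verbatim}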
The expression in eq.~(\ref{eq:Poynting}) takes into account neither the magnetic field amplification as measured at the surface of the neutron star nor the gravitational redshift of the rotation frequency. These can be deduced analytically from the lapse function~$\alpha$ and from the expression of the magnetic field in curved space-time, see \cite{2004MNRAS.352.1161R}. The ratio in eq.~(\ref{eq:Poynting}) is shown in table~\ref{tab:FluxPoynting}. With our choice of normalization according to the point dipole formula, we do not notice any significant change in the Poynting flux, apart from the effects of gravitational redshift and field amplification. The difference is at most 15\%, which is much less than the previously mentioned effects. Therefore, the order of magnitude given by \cite{2004MNRAS.352.1161R} is actually a good estimate of the magnetic dipole losses of an orthogonal rotator in general relativity.
\begin{table}
\begin{center}
\begin{tabular}{rrrrr}
\hline
$R/R_{\rm s}$ & $r_{\rm L}/R$ & Point & Deutsch & General \\
& & dipole & field & relativity \\
\hline
\hline
2000 & 1000 & 1 & 0.9999 & 1.0088 \\
2000 & 10 & 1 & 0.9901 & 0.9645 \\
2 & 1000 & 1 & 0.9999 & 1.0226 \\
2 & 10 & 1 & 0.9901 & 1.1570 \\
\hline
\end{tabular}
\end{center}
\caption{Normalized Poynting flux for the general-relativistic perpendicular rotator compared to the expectation from the point dipole losses. Neither gravitational redshift nor magnetic field amplification are taken into account here. This should emphasize the effect of frame-dragging only.}
\label{tab:FluxPoynting}
\end{table}
\section{CONCLUSION}
\label{sec:Conclusion}
In this paper, we showed how to look for a systematic solution to the stationary Maxwell equations in the background space-time of a slowly rotating neutron star, following the 3+1 foliation and an expansion of the unknown electromagnetic field onto vector spherical harmonics. We obtained numerical solutions of high accuracy for the aligned rotator, and of lower accuracy for the orthogonal rotator. We hope that these results will serve as a benchmark for general-relativistic codes solving the electromagnetic field in a static background metric, for instance for pulsar and black hole magnetospheres. The orthogonal rotator could still benefit from some improvements by replacing the asymptotic spherical Hankel functions by more precise functions which take into account, at least to first order, the perturbation in the metric induced by the presence of the mass~$M$. This would lead to more rapid convergence of the solution, but such analytical functions do not yet exist.
A next step would be to solve the time-dependent Maxwell equations instead of looking for solutions to the boundary value problem, which is especially difficult to handle with high accuracy for an orthogonal rotator. This could improve the estimate of the magnetic dipole losses in a curved space-time.
A further step to this work will be to include a force-free plasma surrounding the neutron star in order to compute the pulsar force-free magnetosphere in the general-relativistic case. The same technique could be useful for the black hole magnetosphere. However, because of the non-linearity implied by the force-free current, it is impossible to solve the system semi-analytically as we did here. Computing the force-free magnetosphere requires numerical simulations. This is the subject of a forthcoming paper in which we will describe a time-dependent pseudo-spectral code using the vector spherical harmonics expansion to solve the Maxwell equations in a curved vacuum space-time.
\section*{Acknowledgments}
I would like to thank \'Eric Gourgoulhon and Serguei Komissarov for helpful discussions.
\section{Introduction}
There are several theoretical reasons to establish a new physical and
fundamental scale of nature that would enable us to explore deep into
the transition threshold between general relativity and quantum
mechanics \cite{Smolin:2006pa}, \cite{PhysRevLett.88.190403}.
One of those reasons is to come closer to a
theory of quantum gravity $QG$ \cite{rovelli}. Traditionally, the
Planck's length $L_{p}=\sqrt{\hbar G/c^3}$ has been postulated as a
fundamental scale. However, there are other ways to explore this
second invariant scale. These are called deformed special relativity
(DSR) models and can considered a theoretical limit of some larger
theories of Quantum Gravity (QG)
\cite{Smolin:2010xa}. DSR models are being outlined as a
phenomenological description of presently unknown quantum gravity
effects \cite{Calmet:2010tx}. Those models are initially thought as a
way of modifying the dispersion relations of relativistic particles
without introducing a preferred frame. These modifications have been
used to look into CPT/Lorentz symmetry violations \cite{Dolgov:2009yk} and
provide possible explanations of resulting baryon asymmetry in
cosmology without using Sakharov's conditions \cite{DiGrezia:2005yx}.
In recent years many aspects of DSR models have been studied
and some of them are still the subject of
research \cite{Smolin:2010xa}; however, there is no consensus about
the position space associated with these theories.
In this work we present a method to construct some DSR models, applying deformed
scale transformations to the Lorentz generators. In doing this,
a non-linear representation of the Lorentz group arises which, in
turn, leaves the Heisenberg algebra unchanged. The scale
transformations are deformed through a momentum-dependent scale
parameter. These transformations are carried out on the quantum
operators instead of on their eigenvalues, as is usually done in the
literature \cite{Heuson:2003zt}. We show the equivalence of these two
approaches.
This work is organized as follows: in Sec. II some
generalities about the Lorentz group are presented. In section III the
deformed scale transformations of the momentum and position operators
are calculated, and in section IV the non-linear Lorentz transformation
is applied to both the momentum and position operators. The new dispersion
relations are found and the two scales are shown explicitly for these
theories. Finally, in section V the equivalence between the
transformation on the momentum operators and on the momentum of the
particles is proven and some particular cases are analyzed.
\section{General Considerations}
It is well known that the angular momentum operators can be written as a function of
the momentum and position operators as follows
\be\label{M} M^{\alpha\beta}=i (p^\alpha x^\beta -p^\beta
x^\alpha). \ee
They are the boosts and rotations generators and
satisfy the Lorentz algebra
\begin{eqnarray}\label{MM}
&\left[ M_{\alpha\beta}, M_{\mu\nu}\right] =\nonumber \\
& -i\left(g_{\alpha\nu} M_{\beta\mu}+g_{\beta\mu}
M_{\alpha\nu}-g_{\alpha\mu} M_{\beta\nu}- g_{\beta\nu}
M_{\alpha\mu} \right)\, ,
\end{eqnarray}
this algebra is a consequence of the more fundamental Heisenberg algebra
\be\label{px}[x^\alpha,x^\beta]=0,\,\,\, [p^\alpha,p^\beta]=0,\,\,\,
[p^\alpha,x^\beta]= ig^{\alpha\beta}.\ee \noindent
The operators $M^{\alpha\beta}$ in (\ref{M}) generate a unitary
representation of the Lorentz group,
\begin{equation}
\Lambda= \exp (i \omega_{\alpha\beta} M^{\alpha\beta}/2).
\end{equation}
We can find other
unitary representations of the Lorentz group by applying a unitary
transformation to the group elements
\begin{equation}
\Lambda \rightarrow \widetilde\Lambda= U \Lambda U^{\dagger},
\end{equation}
which implies the transformation on the generators
\begin{eqnarray}\label{UMU}
M \rightarrow {\widetilde M}^{\alpha \beta}=U \,M^{\alpha\beta}U^{\dagger}\,.
\end{eqnarray}
These new generators satisfy the same Lorentz algebra (\ref{MM}).
It is clear that if $U$ and the $M$'s commute, these representations
are actually the same. Thus, in order
to find other non-trivial equivalent representations, $U$ must not be a Lorentz scalar.
That transformation on the generators is carried out by the transformation on the basic operators
\begin{eqnarray}\label{Up}
p^\alpha \rightarrow \widetilde p^\alpha = U p^\alpha U^{\dagger},\\[2mm]\label{Ux}
x^\alpha \rightarrow \widetilde x^\alpha = U x^\alpha U^{\dagger},
\end{eqnarray}
which, as stated, preserve the Heisenberg algebra.
We can build non-standard representations of the Lorentz group using a suitable operator $U$.
In this paper we will construct such a representation, starting with
the ordinary scale transformation on the basic operators but using
a parameter that depends on some components of the momentum operator
in order to prevent $U$ from being a Lorentz scalar.
This is not a new idea; in fact the Magueijo-Smolin model \cite{Smolin:2006pa}
is generated in this way, but constructed in the momentum representation.
We will extend that procedure for the quantum field operators.
\section{Deformed Scale Transformations}
Let us now build the ordinary scale transformations in order to
gain the necessary insight and learn how they can be deformed.
The finite scale transformation, given in equation (\ref{Up}) with
$U=\exp(\epsilon D)$, is
\be\label{UpU} \widetilde p^\alpha=
e^{\epsilon D} p^\alpha e^{-\epsilon D}, \ee \noindent
where $\epsilon$ is a parameter and $D$ is the dilatation
operator given by \be\label{Dilaton}D= i p_\alpha x^\alpha + c,\ee
with $c$ some constant. The Heisenberg algebra (\ref{px}) implies \be
[D,p^\alpha]=p^\alpha,\,\,\, [D,x^\alpha]=-x^\alpha.\ee
The ordinary scale
transformations mean that $\epsilon$ is a constant and the transformation
(\ref{UpU}) can be reduced to $ \widetilde
p^\alpha=e^\epsilon p^\alpha;$ in this case and since
$M^{\alpha\beta}$ commutes with $D,$ we have $\widetilde \Lambda= \Lambda$.
In order to avoid the usual scale transformation one can propose that
$\epsilon$ does not commute with the $M^{\alpha\beta}$, although it should indeed
commute with $p^\alpha$. This can be done by choosing $\epsilon$
as a function of $p$ and not Lorentz invariant. In this
case we obtain one class of deformed scale transformations.
We will work with the scale parameter $\epsilon$ as a homogeneous
function of degree $s$ in $p$, that is
\be\label{eap} \epsilon (ap)=a^s \epsilon ( p),\ee
where $a$ is a constant. Now $ \epsilon
$ has dimension $s$ and we have \be\label{De} [D,\epsilon]=
s\epsilon,\ee \noindent where the operator $D$ is given in (\ref{Dilaton}) with
${\rm Re} \, c= 2+s/2$, in order to make $\epsilon D$ anti-Hermitian or,
equivalently, $U$ unitary.
\subsection{Deformed Scale Transformation of $p$ and $x$}
The expression (\ref{UpU}) can be written, using a Hausdorff expansion, as
\be\label{pe}
\widetilde p^\alpha
=\sum_{n=0}^\infty \frac{[\![(\epsilon D)^{(n)}, p^\alpha]\!]}{n!} ,
\ee \noindent where $ [\![\dots ]\!]$ is the multiple commutator
defined by the recurrence relation
\begin{equation}\label{conmutator} [\![A^{(n+1)}, B]\!]\equiv [ A,
[\![A^{(n)}, B]\!]],\end{equation}
\noindent with the initial condition $[\![A^{(0)}, B]\!]=B$.
Proposing $[\![(\epsilon D)^{(n)}, p^\alpha]\!]= \theta_n \epsilon^n p^\alpha$ and using
(\ref{conmutator}) we find $\theta _{n+1}=(ns+1)\theta_n$,
with the initial condition $\theta_0=1$, this gives
\be\label{edp} \theta_n= (-1)^n s^n
\frac{(-1/s)!}{(-1/s-n)!}\, .\ee \noindent
Introducing (\ref{edp}) in
(\ref{pe}) and adding all the terms, we have
\begin{equation}\label{tildep} \widetilde
p^\alpha=(1-s\epsilon)^{-1/s} p^\alpha . \end{equation}
\noindent In the simplest case, $s=0$, we obtain $\widetilde p^\alpha=
e^{\epsilon} p^\alpha$, as expected when $\epsilon $ is a
constant; nevertheless, in (\ref{tildep}) with $s=0$, $\epsilon$ can still be a function of $p$.
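
As a cross-check of this resummation, the coefficients generated by the recurrence $\theta_{n+1}=(ns+1)\theta_n$ can be summed order by order and compared with the Taylor expansion of $(1-s\epsilon)^{-1/s}$. A minimal \texttt{sympy} sketch doing so is shown below; the numerical values of $s$ are arbitrary test choices.

\begin{verbatim}
import sympy as sp

eps = sp.symbols('epsilon')
N = 8   # truncation order of the Hausdorff expansion

for s in (sp.Rational(1, 2), sp.Integer(1), sp.Integer(2), sp.Integer(3)):
    # theta_n from the recurrence theta_{n+1} = (n s + 1) theta_n, theta_0 = 1
    theta = [sp.Integer(1)]
    for n in range(N):
        theta.append((n*s + 1)*theta[n])
    partial_sum = sum(theta[n]*eps**n/sp.factorial(n) for n in range(N + 1))
    closed_form = sp.series((1 - s*eps)**(-1/s), eps, 0, N + 1).removeO()
    assert sp.expand(closed_form - partial_sum) == 0
print("(1 - s*eps)**(-1/s) reproduces the theta_n series up to order", N)
\end{verbatim}
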
Similarly, the transformation of $x$ is performed via a
Hausdorff expansion for (\ref{Ux}) which now reads
\be\label{xe}
\widetilde x^\alpha
=\sum_n \frac{[\![(\epsilon D)^{(n)}, x^\alpha]\!]}{n!}.
\ee
\noindent
In general the $n$-th
commutator can be parametrized as \be\label{edx} [ \![(\epsilon
D)^{(n)}, x^\alpha ] \! ]= \alpha_n \epsilon^n x^\alpha+
i\beta_n\epsilon^{n-1}\epsilon ^\alpha D,\ee
\noindent
where
\be\label{ea}
\epsilon^\alpha=i[x^\alpha,\epsilon]=\frac{\partial
\epsilon(p)}{\partial p_\alpha}.\ee \noindent
From
(\ref{conmutator}) in (\ref{edx}) we obtain the recurrence relation
\[ \alpha_{n+1}=(ns-1)\alpha_n,\]
\[ \beta_{n+1}=[(n-1)s-1]\beta_n+\alpha_n,\] to finally find
\be\label{a}\alpha_n= s^n\frac{(1/s)!}{(1/s-n)!}(-1)^n ,\ee \noindent
\be\label{b} \beta_n=n\alpha_{n-1},\ee \noindent where the initial conditions
$\alpha_0=1$ and $\beta_0=0$ were considered.
Adding all the terms
in (\ref{xe}) and considering (\ref{a}) and (\ref{b}) we obtain
\begin{equation}\label{tildex} \widetilde
x^\alpha=(1-s\epsilon)^{1/s}[x^\alpha+i\epsilon^\alpha
D].\end{equation}
\noindent
One can check from equations (\ref{tildep}) and (\ref{tildex}) that the
canonical relations
$$[\widetilde p^\alpha,\widetilde
x^\beta]=ig^{\alpha\beta}$$ still hold.
\subsection{Non-Linear Lorentz Transformations.}
Using the expressions for the scaled momenta (\ref{tildep}) and
coordinates (\ref{tildex}), one can construct explicitly the new
Lorentz generators from (\ref{UMU}),
\begin{eqnarray}
\widetilde M^{\alpha\beta}= M^{\alpha\beta} - i(p^\alpha\epsilon^\beta-
p^\beta\epsilon^\alpha)D.
\end{eqnarray}
The non-linear Lorentz transformations over the momentum operators are
therefore
\be p^\alpha\rightarrow \widehat p^\alpha= \widetilde\Lambda^{\dagger}
p^\alpha\widetilde\Lambda.\ee
Associativity of this transformation can be implemented in steps,
$\widehat p^\alpha= U(\Lambda^{\dagger}(U^{\dagger} p^\alpha
U)\Lambda) U^{\dagger},$
to give
\be\label{hatp} \widehat
p^\alpha =[1+s(\epsilon'-\epsilon)]^{-1/s} p^{\alpha'},\ee \noindent
where $\epsilon'$ is the $\epsilon$ function applied over
$p^{\alpha'}={\Lambda^{\alpha'}}_\beta p^\beta$. From (\ref{hatp})
it can be seen that if $\epsilon$ is a Lorentz scalar, then
$\epsilon=\epsilon'$ and $\widehat p^\alpha=p^{\alpha'}$.
In the same fashion, as in the case of $p$, under non-linear Lorentz
transformation, $x$ transforms as \be\label{hatx} \widehat
x^\alpha=[1+s(\epsilon'-\epsilon)]^{1/s}
[x^{\alpha'}-i({\epsilon'}^{\alpha'}-\epsilon^{\alpha'})D
],\ee \noindent where
$\epsilon^{\beta'}={\Lambda^{\beta'}}_\alpha \partial\epsilon/\partial
p_\alpha$ and $\epsilon'^{\beta'}=\partial\epsilon'/\partial
p_{\beta'}$.
Once again, from (\ref{hatp}) and (\ref{hatx}), it follows that
the new operators satisfy the canonical commutation relations
\[ [\widehat p^\alpha,\widehat x^\alpha]=ig^{\alpha\beta}.\]
\section{General Properties of the Non-Linear Lorentz Transformation.}
A simultaneous unitary transformation over the momentum operators $p$
and coordinates $x$ can be called a canonical transformation,
just like in classical mechanics. This is
because it preserves the canonical commutation relations (\ref{px}). This
type of transformation can be seen as a passive canonical transformation since
the coordinate operators in the phase space are transformed
and the coordinates themselves
are not. The corresponding active transformations
occur when only the states are
directly transformed.
Let the state $|p_e\rangle$ be an eigenstate of $p^\alpha$ with
eigenvalue $p_e^\alpha$,
\be p^\alpha |p_e\rangle =p^\alpha_e
|p_e\rangle.\ee
Under non-linear Lorentz transformations, it
changes as
\be |p_e\rangle \rightarrow \widetilde
\Lambda|p_e\rangle. \ee
\noindent To find out how a new state appears we apply $p^\alpha$ as in
\be p^\alpha \widetilde
\Lambda|p_e\rangle= \widetilde \Lambda( \widetilde \Lambda ^\dagger
p^\alpha \widetilde \Lambda)|p_e\rangle= \frac{p_e^{\alpha'}}{[1+s
\epsilon(p_e')-s\epsilon(p_e)]^{1/s}}\widetilde
\Lambda|p_e\rangle,\ee
\noindent which means that
$\widetilde\Lambda|p_e\rangle$ is an eigenstate of $p^\alpha$ with
eigenvalue $\widehat p_e^\alpha =p_e^\alpha /[1+s
\epsilon(p_e')-s\epsilon(p_e)]^{1/s}$. Thus one can write
\be\widetilde\Lambda|p_e\rangle=|\widehat p_e\rangle.\ee
\noindent The
equivalence of active and passive transformations is realized as
the invariance of the mean value of $p$ when the transformation is carried out,
\be\langle\psi|p\,|\psi\rangle\rightarrow \langle\psi|\widehat
p\,|\psi\rangle= \langle\widehat \psi|p\,|\widehat
\psi\rangle.\ee
According to (\ref{hatx}) we can see that it is not possible to
proceed similarly for the coordinates because the operator
transforms mixing the coordinate and momentum operators.
An apparent paradox is found here since coordinate transformations can be
written in such a way that the particle spatial coordinates are now a
function of its energy. This can be explained by considering that,
from the operator point of view, the new spacial coordinates depend on
the old momenta, so the new and old coordinates do not actually
commute, although the new momenta do commute with their untransformed
partners.
A point in space-time is the eigenvalue of an $x$ eigenstate, which
describes a particle with a well-defined position. This state, as
seen by any other observer, is a superposition of the $|\widehat x \rangle$
eigenstates, so it does not have a well-defined position in the new
system.
\subsection{Velocity Scale.}
From now on we will work with the eigenvalues of the momentum
operators instead of the operators themselves and we will omit the subindex $e$.
From (\ref{hatp}) we conclude that the momentum eigenvalues satisfy
\be\label{p/p}\frac{\widehat{\bf
p}}{\widehat p^0}=\frac{{\bf p}'}{ p^{0'}}.
\ee
\noindent Calling ${\bf p}/p^0$ the Lorentz velocity $\bf v$, we
have $ \widehat{\bf v}={\bf v}'$; using the linear Lorentz
transformation, (\ref{p/p}) can be written
in terms of the velocities as \be\label{vpvp} \widehat v_{_\parallel}
=\frac{ \beta -v_{_\parallel} }{ 1 - \beta v_{_\parallel}}, \,\,\,
\widehat v_{_\perp} =\frac{ v_{_\perp} }{ \Gamma ( 1 - \beta
v_{_\parallel})}, \ee \noindent where $\perp $ and $\parallel $
stand for the parallel and perpendicular components of the Lorentz velocities
with respect to the relative velocity $\beta$, and
\be \Gamma=\frac{1}{\sqrt{1-\beta^2}}.\ee
\noindent
This transformation satisfies the addition rule as in conventional
relativity; thus the speed of light $v=1$ is still a natural scale of
the theory. To see that more clearly one can write (\ref{vpvp}) as
\be\label{vv} 1-\widehat v^2= \frac{1}{ \Gamma^2(1-\beta v_{\parallel
})^2} (1-{v}^2).\ee
\noindent From (\ref{vv}) we see that if
$\beta<1$ and $v\leq 1$ then $\widehat v\leq1$. But the Lorentz velocity is not the
real particle velocity, it is only the velocity of the particle in the
limit $\epsilon \rightarrow 0$. The connection between Lorentz velocity and particle velocity will be
developed in the next subsection.
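
A quick numerical check of (\ref{vv}) with randomly sampled Lorentz velocities and boosts (the ranges below are arbitrary) confirms that sub-luminal velocities are mapped into sub-luminal velocities:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    beta = rng.uniform(0.0, 0.999)                       # boost velocity
    v_par, v_perp = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 1.0)
    if v_par**2 + v_perp**2 >= 1.0:                      # keep |v| < 1 only
        continue
    Gamma2 = 1.0/(1.0 - beta**2)
    vhat_par = (beta - v_par)/(1.0 - beta*v_par)
    vhat_perp = v_perp/np.sqrt(Gamma2)/(1.0 - beta*v_par)
    lhs = 1.0 - (vhat_par**2 + vhat_perp**2)
    rhs = (1.0 - (v_par**2 + v_perp**2))/(Gamma2*(1.0 - beta*v_par)**2)
    assert abs(lhs - rhs) < 1e-9 and lhs > 0.0
print("1 - vhat^2 = (1 - v^2)/[Gamma^2 (1 - beta v_par)^2]; vhat stays below 1")
\end{verbatim}
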
\subsection{Dispersion Relations.}
The function $\epsilon(\widehat p)$ is given by
\be\label{hate}
\widehat\epsilon\equiv\epsilon(\widehat
p)=\frac{\epsilon'}{1+s(\epsilon'-\epsilon)},\ee
\noindent where the
homogeneity of $\epsilon$ (\ref{eap}) was used. From this expression and
(\ref{hatp}) one can obtain
\be\label{p/e} \frac{\widehat p^\alpha
}{{\widehat\epsilon}^{\,1/s}}=\frac{
p^{\alpha'}}{{\epsilon'}^{1/s}}.\ee
\noindent
Moreover, from (\ref{hate}) we get
\be
\frac {\widehat \epsilon}{\epsilon'} =\frac {1-s\widehat
\epsilon}{1-s\epsilon},\ee
\noindent therefore (\ref{p/e}) is written as
\be\label{p/1-e} \frac{\widehat p^\alpha}{(1-s\widehat
\epsilon\,)^{1/s}} = \frac{ p^{\alpha'}}{(1-s \epsilon)^{1/s}}.
\ee
\noindent
Squaring both sides of (\ref{p/1-e}) we find an invariant quantity,
which can be identified as the invariant mass of the particle
\be\label{m} \frac{ p^2}{(1-s \epsilon)^{2/s}}=m^2. \ee
\noindent This
equation also gives us the new dispersion relations; that is, a new relation between momentum and energy.
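
For instance, taking $s=1$ and $\epsilon=\ell\,p^0$ with $\ell$ a constant with dimensions of length (an illustrative choice, corresponding to the Magueijo--Smolin model mentioned in the introduction), one can verify numerically that the combination in (\ref{m}) is indeed left unchanged by the non-linear transformation (\ref{hatp}); the sampled momenta and boosts below are arbitrary.

\begin{verbatim}
import numpy as np

ell = 0.1                          # deformation length scale (illustrative)
rng = np.random.default_rng(2)

def boost(p, beta):
    """Ordinary Lorentz boost of a 1+1 dimensional momentum (p0, p1)."""
    g = 1.0/np.sqrt(1.0 - beta**2)
    return np.array([g*(p[0] - beta*p[1]), g*(p[1] - beta*p[0])])

def invariant(p):
    # p^2 / (1 - s*eps)^(2/s) for s = 1 and eps = ell * p0
    return (p[0]**2 - p[1]**2)/(1.0 - ell*p[0])**2

for _ in range(10000):
    p = np.array([rng.uniform(1.0, 5.0), rng.uniform(-1.0, 1.0)])
    pp = boost(p, rng.uniform(-0.9, 0.9))
    phat = pp/(1.0 + ell*(pp[0] - p[0]))    # non-linear transformation, s = 1
    assert abs(invariant(phat) - invariant(p)) < 1e-9
print("p^2/(1 - ell*p0)^2 is unchanged by the deformed transformation")
\end{verbatim}
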
Despite some discussions in the literature on a proper definition of particle speed in the DSR theories,
\cite{Kosinski:2002gu}, \cite{Ghosh:2007rw}, in this paper we
take the particle velocity as the group velocity $ {\bf u}={\partial p_0}/{\partial{\bf p }}$
which is different from the Lorentz velocity ${\bf v}$.
Starting from (\ref{m}) and taking the $ {\bf p }$ derivative we find
\be \label{vp}
{\bf u}={\bf v} \left(\frac{1-s\epsilon + p^i\epsilon _i /(v\gamma)^2 }{1- s\epsilon -p^0 \epsilon _0/\gamma^2 }\right),
\ee
where $i=1,2,3$ and $\epsilon _\mu= \partial \epsilon/\partial p^\mu$.
One can see that if $|\bf v|\rightarrow 1$ then
$1/\gamma^2 \rightarrow0$ and $|{\bf u}|\rightarrow 1$.
This means that the particle velocity has the same limit as the Lorentz velocity.
At this point we can calculate all the dynamics of the free particle
starting from the Hamiltonian, which is given by $H= \int {\bf u} \cdot d{\bf p} =p^0$;
nevertheless, in what follows we will use
the Lagrangian formalism in covariant form, which in principle is equivalent.
\subsection{Energy Scale.}
We will now show that this type of models has both a momentum and an energy scale.
Let us first analyze massless particles,
that is $|{\bf v}|=1$ or $|{\bf p}|=p_0$. With this in mind, and inverting (\ref{p/1-e}), we can compute
\be\label{1/p} \frac{1}{(\widehat p^0 )^s}- \frac{s\widehat \epsilon}{(\widehat
p^0 )^s} = \frac{1}{\Gamma^s(1-\beta v_{_\parallel })^s}\left( \frac{1}{
(p^0) ^s}- \frac{s \epsilon}{ (p^0) ^s}\right)
\ee
\noindent
and because $\epsilon'= \epsilon(\Lambda p)$ and, for a massless particle with $p^0=| {\bf p}|$, a Lorentz transformation rescales $p^0$ and $|{\bf p}|$ by $\Gamma (1-\beta v_{_\parallel })$, we have
\[ \epsilon'= \epsilon\Big(\Gamma (1-\beta v_{_\parallel })\, p\Big)= \Gamma^s (1-\beta v_{_\parallel })^s\epsilon,\]
we obtain
\be \label{e/p}
\frac{{\epsilon'}^{1/s}}{p^{0'}}=\frac{\epsilon^{1/s}}{p^0}.\ee
\noindent
According to (\ref{p/e}) and (\ref{e/p}) one can see that
$\frac{\epsilon^{1/s}}{p^0}$ is an invariant for a massless particle;
this quantity has length units and therefore
can be equated to some length $l_{p}$ that could be of the same order as the Planck
length
\be\label{lp}l_{p} =\frac{(s\epsilon)^{1/s}}{ p^0 },\ee
thus (\ref{1/p}) is written as
\be \frac{1}{(\widehat
p^0 )^s}-l_p ^s = \frac{1}{\Gamma^s(1-\beta v_{_\parallel }
)^s}\left( \frac{1}{ (p^0) ^s}- l_p ^s \right). \ee
\noindent
If the particle energy in some system satisfies $p_0 < 1/l_p$ then,
in any other system, the energy will satisfy $\widehat p^0 < 1/l_p$,
because $\Gamma(1-\beta v_{_\parallel }) >0$.
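
For the simplest case $s=1$, where $s\epsilon=l_p\,p^0$, this bound is easy to exhibit numerically for massless momenta; the sketch below samples arbitrary boosts and energies below $1/l_p$ (units with $l_p=1$ are an illustrative choice).

\begin{verbatim}
import numpy as np

lp = 1.0                                     # sets the scale 1/l_p = 1
rng = np.random.default_rng(1)

def boost(p, beta):
    """Ordinary Lorentz boost of a 1+1 dimensional momentum (p0, p1)."""
    g = 1.0/np.sqrt(1.0 - beta**2)
    return np.array([g*(p[0] - beta*p[1]), g*(p[1] - beta*p[0])])

for _ in range(10000):
    p0 = rng.uniform(0.0, 1.0/lp)            # massless momentum with p0 < 1/l_p
    p = np.array([p0, rng.choice([-1.0, 1.0])*p0])
    pp = boost(p, rng.uniform(-0.99, 0.99))
    phat = pp/(1.0 + lp*(pp[0] - p[0]))      # transformation for s = 1, eps = l_p p^0
    assert phat[0] < 1.0/lp
print("the transformed energy stays below 1/l_p for all sampled momenta")
\end{verbatim}
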
For massive particles the energy scale is the same.
It suffices to note that when
the particle energy is increasing, the Lorentz velocity limit is 1.
In this limit the massive particle behaves like a massless particle.
Then the analysis for the massless particle applies in this limit as well.
From (\ref{lp}) we can see that for high energy massive
particles the quantity $s\epsilon$ behaves like $(l_p p_0)^s$. Hence it is natural to think that,
for low energy, $s\epsilon $ has the form $(l_p p_0)^sf(v) $
where $f(v)\rightarrow 1$ when $v\rightarrow 1$.
\section{Covariant Lagrangian Formulations.}
In this section we will find the Lagrangian for free particles in some of these models.
From the Lagrangian it could be possible to make an educated guess about
the way in which the coordinates transform
in those models. Actually, the dispersion relations allow us to find the particle
energy as a function of the particle momentum; the only obstacle is that
we must deal with an algebraic equation that is not easily solvable.
Nevertheless, when $s$ is small it becomes easier.
In particular, we will concentrate on the low values $s = 1$ and $s=2$.
For each value of $s$ we still can choose the function of the Lorentz
speed magnitude $v$.
For simplicity we only consider functions of the type $v^r$ where $0\le r\le s$;
then for fixed values of $s$ and $r$ we have
\be s\epsilon = l_p^s p_0^{s-r} |{\bf p}|^r,\ee
and the dispersion relation is
\be\label{pp2} p^2=m^2(1-l_p^s p_0^{s-r} |{\bf p}|^r)^{2/s}.\ee
Therefore for $s=1, 2$ we have five different models:
(1,0), (1,1), (2,0), (2,1), (2,2) according to $(s,r)$ as in Table \ref{lagrangian}.
Here we can identify the model (1,0) with the Magueijo-Smolin model \cite{Smolin:2006pa}.
For each of these models we need to find the conjugate
momentum coordinates and the Lagrangian based on the dispersion relations associated
with each model, following the procedure given in \cite{Ghosh:2007rw}.
The idea is to construct the Lagrangian linearly in the velocities,
imposing the dispersion relation as a constraint through a Lagrange multiplier in the following way
\be L= \dot x\cdot p -\frac{e}{2}[p^2-m^2(1-s\epsilon)^{2/s}]\ee
then, applying the Euler-Lagrange equations for $p$ and using the
constraint we finally find the Lagrangian as a function of the velocities.
The results of this procedure for some models are shown in the Table \ref{lagrangian}.
\begin{table}[th]
\caption{Lagrangian of the different models.}
\label{lagrangian}
\begin{center}
\begin{tabular}{c|l}
\hline\hline
$(s,r)$& Lagrangian\\
\hline
(1,0)& $ \displaystyle {L=\frac{m}{1-m^2 l_p^2}\left( \sqrt{ \dot x_0^2 -(1-m^2l_p^2) \dot{\bf x} ^2} - m l_p \dot x_0\right) }$\\[4mm]
\hline
(1,1)& $ \displaystyle L=\frac{m}{1+m^2 l_p^2}\left( \sqrt{ (1+m^2l_p^2)\dot x_0^2 - \dot{\bf x} ^2} - m l_p |\dot {\bf x}|\right) $\\[4mm]
\hline
(2,0)& $ \displaystyle L=\frac{m}{\sqrt{1+m^2 l_p^2}} \sqrt{ \dot x_0^2 - (1+m^2l_p^2)\dot{\bf x} ^2} $\\[4mm]
\hline
(2,1)& $ \displaystyle L=\frac{m}{{1+ m^4 l_p^4/4}} \sqrt{ \dot x_0^2 -m^2 l_p^2 |\dot{\bf x}| \dot x_0 - \dot{\bf x} ^2} $\\[4mm]
\hline
(2,2)& $ \displaystyle L=\frac{m}{\sqrt{1+m^2 l_p^2}} \sqrt{ (1+m^2l_p^2)\dot x_0^2 - \dot{\bf x} ^2} $ \\[4mm]
\hline
\end{tabular}
\end{center}
\end{table}
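
As a consistency check of one entry of Table~\ref{lagrangian}, one can verify symbolically that the canonical momenta derived from the $(2,0)$ Lagrangian satisfy the corresponding dispersion relation (\ref{pp2}); a minimal \texttt{sympy} sketch, written in one spatial dimension, is shown below.

\begin{verbatim}
import sympy as sp

m, lp, xdot0, xdot1 = sp.symbols('m l_p xdot0 xdot1', positive=True)

# (s,r) = (2,0) Lagrangian from the table above, in one spatial dimension
A = 1 + m**2*lp**2
L = m/sp.sqrt(A)*sp.sqrt(xdot0**2 - A*xdot1**2)

# canonical momenta; squaring removes the sign of the spatial component
p0 = sp.diff(L, xdot0)
p1 = sp.diff(L, xdot1)

# (2,0) dispersion relation: p0^2 - |p|^2 = m^2 (1 - l_p^2 p0^2)
residual = p0**2 - p1**2 - m**2*(1 - lp**2*p0**2)
print(sp.simplify(residual))     # expected output: 0
\end{verbatim}
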
\subsection{Effective Metric.}
Except in the case (1,1) these results suggest that the Lagrangians of
those models can be written as a function of the effective metric $\tilde g_{\mu\nu}$, to give
\begin{equation} L=m'\sqrt{\dot x^\mu \widetilde g_{\mu\nu}\dot x^\nu}\end{equation}
where $m'$ is chosen in such a way that $\widetilde g_{ij}=-\delta_{ij}$.
In general $\tilde g_{\mu\nu}$ will depend on the mass of the particle and could depend
on the direction of the vector over which it acts.
We can propose a Lorentz transformation that leaves invariant this metric,
$\widetilde\Lambda^T\widetilde g\widetilde\Lambda= \widetilde g$,
writing $\widetilde g=\Gamma^T g \Gamma$ we find that $\widetilde \Lambda =\Gamma^{-1}\Lambda \Gamma$.
For example, in the models (1,0), (2,0) and (2,2)
the metric and the $\Gamma$ matrix can be written as
\begin{equation} \widetilde g=\left(\begin{array}{cc}1/ \alpha^2&0\\0&-1 \end{array}\right) ,\,\, \Gamma=\left(\begin{array}{cc}1/ \alpha&0\\0&-1 \end{array}\right) \end{equation}
where $m'$ and $\alpha $ are given in the Table \ref{constants}.
\begin{table}[th]
\caption{Constants of the different models.}
\label{constants}
\begin{center}
\begin{tabular}{c|c|c}
\hline\hline
$(s,r)$&$m'$& $\alpha$ \\
\hline
(1,0)& ${m}{(1-m^2l_p^2)}^{1/2}$ & $ (1-m^2l_p^2)^{1/2}= (1+m'^2l_p^2)^{-1/2} $\\[4mm]
\hline
(2,0)& $ m$ & $(1+m^2l_p^2)^{1/2} =(1+m'^2l_p^2)^{1/2}$ \\[4mm]
\hline
(2,2)& $ {m}{(1+m^2l_p^2)}^{-1/2}$ & $ (1+m^2l_p^2)^{-1/2} =(1-m'^2l_p^2)^{1/2}$ \\[4mm]
\hline
\end{tabular}
\end{center}
\end{table}
The representation in the coordinate space of a Lorentz transformation is
\begin{equation}\widetilde \Lambda= \left(\begin{array}{cc} \gamma& -\gamma
\alpha{\bf \beta}\cdot\\-\gamma {\bf \beta}/\alpha&\gamma {\rm P_\parallel }+{\rm P_\perp }
\end{array}\right) ,\end{equation}
where ${\rm P_\parallel }=\beta\beta^T$ is the projection operator parallel to the velocity
and ${\rm P_\perp }=1-{\rm P_\parallel }$ is the corresponding perpendicular
projection operator.
This leads to the following velocity addition formula (in two dimensions for simplicity)
\be \label{uu}
u'= \frac {u+ \beta / \alpha}{1+u \beta\alpha},
\ee
which implies that $u'$ depends on the mass of the particle.
Note that $u=1$ for a photon in any system; this is because the photon is massless and $\alpha=1$.
On the other hand, (\ref{uu}) implies that ``together'' is a relative concept, because the coordinates of particles of different mass transform differently.
Finally, the particle momentum $p_\mu=\partial L/\partial {\dot x}^\mu$ satisfies the constraint
\be\label{pgp} p_\mu \widetilde{g}^{\mu\nu} p_\nu= \alpha^2 p_0^2 -{\bf p}^2= m'^2.\ee
It is easy to prove that this relation is equivalent to the dispersion relations given in
(\ref{pp2}) for the three models, with the respective constants shown in Table \ref{constants}.
\subsection{Quantum Field Theory.}
Something interesting in this kind of theory is the
fact that there exists a natural Lorentz-invariant cut-off in the loop integrals
which appear at higher orders in the quantum corrections.
Let us first consider a real scalar field associated with a free particle. In order to quantize the theory, the replacements
$p_0 \rightarrow i\partial /\partial t$ and ${\bf p}\rightarrow-i\nabla$ (and $m'\rightarrow m$) are made in (\ref{pgp}), and
the modified Klein-Gordon equation reads
\begin{equation} \left( {\alpha^2}{\partial _0^2}- \nabla ^2+ m^2 \right)\phi =0.\end{equation}
The free-particle Lagrangian density associated with this equation is
\begin{equation} L= \frac{1}{2} \left( \alpha^2 \dot\phi^2 - (\nabla\phi)^2 -m^2\phi^2 \right), \end{equation}
so the momentum conjugate to the field is $ \pi(x) = \alpha^2 \dot\phi(x)$, which satisfies the canonical equal-time
commutation relations $[\phi (x) , \pi(y)]_{x_0=y_0}={i} \delta^3({\bf x-y})$.
The Fourier expansion of the Klein-Gordon field is therefore
\begin{equation} \phi(x)=\int \frac{d^3 {\bf k}}{(2\pi)^3 2\alpha \omega_{\bf k}} \left( a({\bf k}) e^{-i\check{k}\cdot x } +a^\dagger ({\bf k}) e^{i\check{k}\cdot x } \right) , \end{equation}
with $\check k= (\omega_{\bf k}/\alpha, {\bf k})$ and $\omega_{\bf k} = \sqrt{{\bf k}^2+ m^2 }$,
where the $a$ operator algebra is given by
\begin{equation} [ a({\bf k}),a^\dagger({\bf k'})] = (2\pi)^3\, 2\alpha \omega _ { \bf k } \delta^3({\bf k-k'}).\end{equation}
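
As a simple check, the mode functions entering the expansion above, with $\check k^0=\omega_{\bf k}/\alpha$, indeed solve the modified Klein--Gordon equation; a \texttt{sympy} sketch in one spatial dimension:

\begin{verbatim}
import sympy as sp

t, x, k, m, alpha = sp.symbols('t x k m alpha', positive=True)
omega = sp.sqrt(k**2 + m**2)

# positive-frequency mode with kcheck = (omega/alpha, k)
phi = sp.exp(-sp.I*(omega/alpha*t - k*x))

# modified Klein-Gordon operator (alpha^2 d_t^2 - d_x^2 + m^2) acting on phi
residual = alpha**2*sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2*phi
print(sp.simplify(residual))     # expected output: 0
\end{verbatim}
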
Finally, the propagator in momentum space has the form
\begin{equation} \Delta (k) =\frac{i}{\alpha^2 k_0^2- {\bf k}^2- m^2 },\end{equation}
where Im $ m^2\rightarrow 0^-$.
When the interaction $\lambda \phi^4$ is introduced, the
one-loop correction to the scalar propagator is
\begin{equation} -i\Sigma(k)= \frac{\lambda}{ 8\pi^3} \int_0^{E_p} {\bf k}^2d|{\bf k}|\int_{-E_p}^{E_p} dk_0 \frac{1}{\alpha^2 k_0^2- {\bf k}^2- m^2 }\end{equation}
When this integral is carried out,
the leading contributions are proportional to $E_p^2$ and to $\ln (E_p/m)$.
Here $E_p$ acts as a cutoff. This is somewhat natural because the theory is
invariant under these deformed Lorentz transformations. Nevertheless, we would
need a finite renormalization term in order to remove the quadratic and logarithmic terms in $E_p$.
At this point
we cannot proceed any further with the radiative corrections. This is because it is not known how
to add two momenta in order to compute the vertex correction in this new relativity.
This kind of problem is highly non-trivial and is characteristic of all of these theories with two scales.
\section{Conclusions.}
An alternative approach to the non-linear representations of Lorentz transformations has been introduced; in this approach one changes the generators through a scale-deformed transformation as in equation (\ref{UMU}). The deformation of the transformation is obtained using momentum-operator-dependent parameters. When the transformation of the operators is calculated in terms of momentum eigenstates, it becomes clear that this type of representation of the Lorentz group corresponds to models with two invariant scales.
We have found that the Magueijo-Smolin model is one of a family of such models described here. Moreover, this type of transformation can be applied to the position operator, and it was found that it transforms by mixing both momentum and position operators, very much like a typical canonical transformation in classical mechanics. From this one can conclude that it is not possible to find a coordinate transformation for a particle in this model which does not contain the momentum operator. The interpretation is that having a precise position in space depends on the frame of the observer. If a particle appears at a definite position for one observer (occupying an eigenstate of the position operator in that frame), it appears for another observer, in relative motion, as having a position which is a combination of different eigenstates of the position operator.
A covariant Lagrangian formulation of these relativistic models has been presented as well. This allows us to study these models by introducing an effective metric which depends on the mass of the particle. Advancing in this program, we present a second quantization of a spinless particle under these conditions, invariant under the transformations found above. This theory, in principle, should be finite, since the integrals over the momenta can be properly renormalized. This is possible because the second scale of the theory implies a cut-off for the momenta of the particles in the quantized version of the model. Although some quadratic terms in the energy scale appear, they could be removed as well using a finite renormalization. Vertex corrections are a problem, since the addition rules for the momenta in these models are not well understood, and therefore they are yet to be calculated.
Doubly scaled Lorentz transformations are an interesting approach to understanding a fundamental length scale in nature. This work advances that understanding and proposes a whole family which systematically introduces those models, although several problems persist. More research would be needed to find, for example, non-linear Lorentz transformations extended to include more parameters in the scale transformations that can depend on the coordinates. With this one would obtain a length, instead of an energy, cut-off.
\section*{Acknowledgments.}
This paper was supported by the \textit{Comit\'e para el Desarrollo de la Investigaci\'on} CODI, Universidad de Antioquia.
\section{Introduction}
\label{intro}
Leptogenesis is one of the most attractive mechanisms to explain the
origin of the baryon asymmetry of the Universe (BAU)~\cite{fukugita86}. This is so because it arises naturally in simple extensions of the standard model (SM) which
can also explain why the neutrino masses are so tiny. In this mechanism a lepton asymmetry is produced in the out of equilibrium decay of heavy Majorana neutrinos, which is then partially converted into a baryon asymmetry by non-perturbative sphaleron processes (see~\cite{davidson08} for a complete review).
In the most economical model (type I seesaw) the heavy neutrinos $N_i$ are SM singlets with Majorana masses $M_i$, which only interact with the lepton doublets $\ell_\alpha\; (\alpha=e, \mu, \tau)$ and Higgs field $h$ via Yukawa interactions, $\mathcal{L}_Y = - \lambda_{\alpha i}\,{\widetilde h}^\dag\, \overline{P_R N_i} \ell_\alpha + h.c.\,$. The number of new parameters associated to the type I seesaw with three singlets, one per each SM family, is 18. However the baryon asymmetry $Y_B \equiv n_B/s$ produced via $N_1$-leptogenesis depends on a few combinations of them (here $n_B$ and $s$ are the baryon and entropy densities). When flavour effects~\cite{barbieri99,endoh03,abada06,nardi06,abada06II,blanchet06} are not relevant the main parameters are $M_1$, which determines the epoch of leptogenesis, $\epsilon_1$, that gives a measure of the amount of CP violation per $N_1$-decay (see below for a precise definition), and the effective mass, $\tilde m_1 \equiv (\lambda^\dag \lambda)_{11} v^2/M_1$ (with $v$ the vev of the Higgs field), which is an appropriate measure of the intensity of the Yukawa interactions of $N_1$. If the CP asymmetry $\epsilon_1$ is constant during leptogenesis (which is usually a good approximation), the final baryon asymmetry, $Y_B^f$, is simply proportional to $\epsilon_1$. In this case it can be expressed as $Y_B^f = k \epsilon_1 \eta$, with $k \simeq 1/724$ a numerical factor and $\eta$ is the so called efficiency, which carries the dynamical information and it is mainly a function of $\tilde m_1$. By definition $\abs{\eta} \le 1$ and the maximum efficiency is obtained when $\tilde m_1 \sim 10^{-3}$~eV. This value is determined by the condition that the decay rate of $N_1$ equals the Hubble expansion rate at a temperature $T=M_1$, so that the Yukawa interactions of $N_1$ are barely out of equilibrium at the time it becomes non-relativistic. This result is amazing given that the contribution of $N_1$ to the masses of the light neutrinos ($m_i, i=1,2,3$) in the type I seesaw is expected to be of the same order as $\tilde m_1$ (barring cancellations due to phases). In other words, an efficient leptogenesis mechanism suggests a scale for the light neutrino masses which is roughly of the correct order of magnitude. Moreover, the most simple models for leptogenesis require an interesting upper bound for the masses of the light neutrinos, $m_i \lesssim 0.15$~eV in the one flavour approximation~\cite{buchmuller03} and $m_i \lesssim$~few eV when flavour effects are taken into account~\cite{abada06,desimone07II}.
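
The value $\tilde m_1 \sim 10^{-3}$~eV quoted above can be recovered from the condition that $\Gamma_{N_1}$ equals the Hubble rate at $T=M_1$. The short numerical sketch below uses the standard expressions $\Gamma_{N_1}=\tilde m_1 M_1^2/(8\pi v^2)$ and $H=\sqrt{8\pi^3 g_*/90}\;T^2/M_{\rm Pl}$, together with the reference values $v=174$~GeV, $g_*=106.75$ and $M_{\rm Pl}=1.22\times 10^{19}$~GeV; none of these inputs is fixed in the text, so they should be read as illustrative assumptions.

\begin{verbatim}
import math

v = 174.0            # Higgs vev in GeV (illustrative input)
g_star = 106.75      # SM relativistic degrees of freedom
M_planck = 1.22e19   # Planck mass in GeV

# Gamma_N1 = m_tilde M1^2/(8 pi v^2) equals H = sqrt(8 pi^3 g*/90) T^2/M_Pl
# at T = M1 when m_tilde takes the value below; note that M1 drops out.
m_star = 8.0*math.pi*v**2/M_planck*math.sqrt(8.0*math.pi**3*g_star/90.0)
print(m_star*1e9, "eV")   # ~1e-3 eV: the m_tilde_1 of maximal efficiency
\end{verbatim}
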
The conditions described above provide a highly non-trivial connection between baryogenesis via leptogenesis and the low energy parameters $m_i$. But this is not enough at all to probe leptogenesis. Unfortunately in the most simple models no more generic relations can be established between leptogenesis and low energy parameters. In fact, for a very hierarchical spectrum of heavy singlet neutrinos $M_1 \ll M_2 \ll M_3$,
the $L$-violating CP asymmetry generated in the decay of the lightest singlet
has an upper bound proportional to $M_1$, the so-called
Davidson-Ibarra (DI) bound \cite{davidson02}.
This implies a lower bound $\sim 10^9$~GeV for the mass of the sterile neutrinos
in order for $N_1$-dominated leptogenesis to be successful.
Careful numerical studies show that the DI bound can be evaded for moderate
hierarchies, e.g. the lower bound on $M_1$ is relaxed by more than one order
of magnitude with respect to its hierarchical-limit value for $M_3/M_2 \sim
M_2/M_1 \sim 10$~\cite{hambye03}.
However to reach these low values of $M_1$ some unlikely
cancellations are needed, which are not motivated by
any underlying symmetry.
Flavour effects do not substantially change
this result. Therefore leptogenesis occurs at very high energies in these scenarios. In addition no generic relation can be made between low and high energy phases, i.e. leptogenesis can work for any value of the observable PMNS phases~\cite{davidson07}.
In conclusion, leptogenesis in the context of the type I seesaw with hierarchical heavy neutrinos provides a simple and natural explanation to the BAU, but it will not be possible to test this scenario in foreseeable experiments. This has motivated research in different directions. For example in~\cite{frere08} some ways to falsify (rather than probe) leptogenesis at the LHC were investigated. Also, one can avoid the DI bound resorting to resonant leptogenesis, i.e., a resonant
enhancement of the CP asymmetry which occurs when there are at least two strongly
degenerated heavy neutrinos, such that $M_2 - M_1 \sim \Gamma_N$, being
$\Gamma_N$ their decay width~\cite{covi96II,Anisimov05}. In this scenario,
leptogenesis is feasible at much lower temperatures, $T \sim {\cal O}(1 \;\rm{TeV})$
\cite{pilaftsis04,piu,pilaftsis08,deppisch10}. However it is not enough to have leptogenesis at the TeV scale in order to probe it. This is so because the most crucial parameters for observing effects from the heavy neutrinos are the active-sterile neutrino mixings, which in the type I seesaw are roughly given by the ratio $m_D/M_1 \sim \sqrt{m_i/M_1}$ (with $m_D \sim \lambda_{\alpha i} v$), and hence are too small.
Therefore it is very interesting that there are well motivated seesaw models which not only yield a heavy neutrino quasi-degenerate spectrum but can also provide a large active-sterile neutrino mixing, namely those
that have an approximately conserved $B-L$~\cite{mohapatra86} (with $L$ being conserved at the perturbative level).
In these models
the tiny neutrino masses are proportional to small
lepton number-breaking parameters, which are technically natural
since a larger symmetry is realized when they vanish. This implies that the heavy neutrinos can be much lighter than in the generic seesaw, within the energy
reach of LHC.
Also, lepton flavour violating rare decays as well as non-unitarity of the leptonic mixing matrix
are present even in the limit of conserved $B-L$, and therefore they are unsuppressed
by the light neutrino masses \cite{Bernabeu87,GonzalezGarcia91,Hernandez09}.
As a consequence,
much attention has been devoted recently to this class of low scale seesaw models,
since they have a rich phenomenology both at LHC \cite{Han06,delaguila07,Kersten07}
and at low energy charged lepton rare decay experiments, such as
$\mu \rightarrow e \gamma$, and also lead to successful resonant leptogenesis
\cite{asaka08}.
It has also been noticed that
even if the heavy neutrinos that generate the BAU are not quasi Dirac,
or the mass splitting is outside the resonant regime, in seesaw models with almost conserved
$B-L$ the scale of leptogenesis can be lower than in the standard seesaw
\cite{antusch09,racker12}, provided flavour effects are at work.
This is so because there is a $L$-conserving part in the flavoured CP-asymmetries
which escapes the DI bound. In these notes we review and summarize the results of~\cite{racker12} on the possibility of having successful leptogenesis driven by the
purely flavoured $L$-conserving contribution to the CP asymmetries, in the context
of seesaw models with small violation of $B-L$.
\section{Leptogenesis in models with an almost conserved $B-L$}
\label{sec:dos}
In general there are at least two species of neutrinos involved in leptogenesis, one, called here $N_1$, which is mainly responsible for the generation of the lepton asymmetry during its production and decay, and another one, $N_2$, that makes the most important virtual contribution to the CP asymmetry in $N_1$ decays. If $B-L$ is only slightly violated, then each $N_i$ must satisfy one of the two following conditions:
\begin{itemize}
\item[(i)] $N_i$ is a Majorana neutrino with two degrees of freedom, whose Yukawa interactions violate lepton number and therefore the couplings $\lambda_{\alpha i }$ must be small.
\item[(ii)] The $N_i$ is a Dirac or quasi-Dirac neutrino with four degrees of freedom;
this means that
there are two Majorana neutrinos $N_{\ih}$ and $N_{\il}$ with masses $M_i + \mu_i$ and $M_i - \mu_i$ respectively. The parameter $\mu_i \ll M_i$ measures the amount of $B-L$ violation,
so that if $B-L$ is conserved, $\mu_i =0$ and $N_i= (N_{\ih} + i N_{\il})/\sqrt{2}$
is a Dirac fermion.
The Yukawa interactions can be expressed as
\begin{equation}
\mathcal{L}_{Y_{Ni}} = - \lambda_{\alpha i}\,{\widetilde h}^\dag\,
\overline{P_R \frac{N_{\ih} + i N_{\il}}{\sqrt{2}}} \ell_\alpha -
\lambda^\prime_{\alpha i}\,{\widetilde h}^\dag\,
\overline{P_R \frac{N_{\ih} - i N_{\il}}{\sqrt{2}}}
\ell_\alpha + h.c. \, ,
\end{equation}
where
$\lambda^\prime_{\alpha i} \ll 1$.
The terms proportional to $\lambda^\prime_{\alpha i}$ induce lepton number violation even when $\mu_i \to 0$ and hence they are similar in nature to the ones described in (i). Instead the $\lambda_{\alpha i}$ can be large, because they do not vanish in the $B-L$
conserved limit: in the absence of $\mu_i$ and
$\lambda^\prime_{\alpha i}$, a perturbatively conserved lepton number can be defined,
by assigning $L_N =1$ to $N_i$, and $L_{\ell_\alpha} =1$ to the SM leptons.
There are two cases that are relatively easy to analyze,
\begin{itemize}
\item (iia) $\mu_i \ll \Gamma_{N_{\ih}}, \Gamma_{N_{\il}} \; \,$ (Dirac limit), and
\item (iib) $\Gamma_{N_{\ih}}, \Gamma_{N_{\il}} \; \ll \mu_i \ll M_i \; \,$ (Majorana limit).
\end{itemize}
Here $\Gamma_{N_{\ih}}$ and $\Gamma_{N_{\il}}$ are the decay widths of $N_{\ih}$ and $N_{\il}$ respectively.
\end{itemize}
A comprehensive study of leptogenesis in models with small violation of $B-L$ can be carried out by considering the different possibilities (i) or (ii) for both $N_1$ and $N_2$. Since we have not considered the widely-studied case of a resonant contribution of $N_2$ to the CP asymmetry in $N_1$ decays, the optimum situation for generating a lepton asymmetry is when $N_2$ satisfies (ii). In this way the CP asymmetries $\epsilon_{\alpha 1}$ in the decays of $N_1$ into leptons of flavour $\alpha$, $\epsilon_{\alpha 1} \equiv \tfrac{\Gamma(\proname{N_1}{\ell_\alpha h}) - \Gamma(\proname{N_1}{\bar \ell_\alpha \bar h})}{\sum_\alpha \Gamma(\proname{N_1}{\ell_\alpha h}) + \Gamma(\proname{N_1}{\bar \ell_\alpha \bar h})}$, being proportional to the Yukawa couplings of $N_2$, can be enhanced. In turn, for $N_1$ the simplest possibility is (i). It can also satisfy (iib), in which case $N_{\unol}$ and $N_{\unoh}$ behave as two independent Majorana neutrinos regarding the generation of the BAU, which would roughly double with respect to case (i). However if $N_1$ satisfies (iia), then it is (or effectively behaves as) a Dirac neutrino, i.e. lepton number is conserved in its decays, and therefore the only possibility to end up with a non-zero BAU is to have important washouts from the two Majorana components of
$N_2$ (if $\mu_2 \gg \Gamma_{N_{\dosl}, N_{\dosh}}$) or let the sphalerons freeze out during leptogenesis~\cite{gonzalezgarcia09}.
Motivated by the previous discussion we have considered a scenario for leptogenesis involving three fermion singlets $N_1, N_{\dosl}, N_{\dosh}$ (each of them having two degrees of freedom), with respective masses $M_1, M_2 - \mu_2, M_2 + \mu_2$ and Yukawa couplings given by the Lagrangian
\begin{equation}
\label{eq:lagy12}
\mathcal{L}_{Y} = - \lambda_{\alpha 1}\,{\widetilde h}^\dag\, \overline{P_R N_{1}} \ell_\alpha - \lambda_{\alpha 2}\,{\widetilde h}^\dag\,
\overline{P_R \frac{N_{\dosh} + i N_{\dosl}}{\sqrt{2}}} \ell_\alpha + h.c. \; .
\end{equation}
The parameters $\lambda_{\alpha 1}$ violate lepton number and hence $\lambda_{\alpha 1} \ll \lambda_{\alpha 2}$.
As shown in~\cite{racker12} it is convenient to take $M_1 < M_2$ in order to obtain the lowest energy scale for leptogenesis within this framework, which corresponds to the so called $N_1$-leptogenesis. Then $Y_B^f$ is proportional to the CP asymmetries $\epsilon_{\alpha 1}$, which have a $L$-violating part suppressed by the small $L$-violating parameter $\mu_2$ and an unsuppressed $L$-conserving piece, $\epsilon_{\alpha 1}^L$, whose contribution to the total CP asymmetry is null, i.e. $\sum_\alpha \epsilon_{\alpha 1}^L = 0$. In order to have a large $Y_B^f$ (not suppressed by $\mu_2$), it is mandatory to have flavour effects so that there is a contribution to $Y_B^f$ coming from $\epsilon_{\alpha 1}^L$~\cite{nardi06}. In turn, to have the appropriate flavour effects, it is crucial to demand that the couplings of $N_1$ and $N_2$ be small enough, such that the Yukawa interactions of the $\tau$ are the strongest ones (see~\cite{racker12} for a detailed explanation). When this happens the density matrix of leptons is diagonal in the orthogonal basis $(\ell_\tau, \ell_{\tau \perp},{\ell'}_{\! \tau \perp})$, with $\ell_{\tau \perp}$ and ${\ell'}_{\! \tau \perp}$ being determined by the fastest interaction acting in the plane perpendicular to $\ell_\tau$. Something similar occurs in the antilepton sector. Then, as a first approximation, the lepton asymmetries in the flavours $\ell_\tau, \ell_{\tau \perp}$, and ${\ell'}_{\! \tau \perp}$ evolve independently. In this case, although $\sum_\alpha \epsilon_{\alpha 1}^L = 0$, $Y_B^f$ can get contributions from the individual $\epsilon_{\alpha 1}^L$. This is so because the final amount of lepton asymmetry in a given flavour also depends on how much of the produced asymmetry was erased, and this can be different for each flavour.
Actually, the evolutions of the different lepton flavour asymmetries are not completely independent. On one hand, spectator processes~\cite{buchmuller01,nardi05} effectively couple the flavour asymmetries; nevertheless, we have checked that their effect on $Y_B^f$ is at most a few tens of percent. On the other hand, there are $L$-conserving but
$L_\alpha$-violating scatterings $\proname{\ell_\beta h}{\ell_\alpha h}$,
$\proname{\ell_\beta \bar{h}}{\ell_\alpha \bar{h}}$, and
$\proname{h\bar{h}}{\ell_\alpha\bar{\ell_\beta}}$, hereafter called generically
flavour changing interactions (FCI), which are inherent to models with an approximately conserved $B-L$.
The FCI play a crucial role because they tend to equilibrate the different flavour asymmetries~\cite{aristizabal09,fong10}, effectively diminishing flavour effects and consequently $Y_B^f$. The cross sections of the FCI have been calculated in~\cite{racker12}, finding important differences with previous literature.
Summarizing, in order to determine the BAU generated in models for leptogenesis with small violation of $B-L$, it is very important to consider the FCI and the adequate conditions for having flavour effects. The set of Boltzmann equations (BE) taking into account these elements can be read in~\cite{racker12}. For the case $\mu_2 \gg \Gamma_{N_{\dosl, \dosh}}$ the BE are like the ones typically found in the literature. Instead, if $\mu_2 \ll \Gamma_{N_{\dosl, \dosh}}$ then $N_{\dosl}$ and $N_{\dosh}$ combine to form a Dirac neutrino $N_2 \equiv (N_{\dosh} + i N_{\dosl})/\sqrt{2}$, and therefore there is an asymmetry generated among the degrees of freedom of $N_2$ which has to be taken into account~\cite{gonzalezgarcia09}.
\section{Results}
\label{sec:res}
The relevant parameters for leptogenesis are $M_1$, $M_2/M_1$, $(\lambda^\dag \lambda)_{11}$, $(\lambda^\dag \lambda)_{22}$, the projectors $K_{\alpha i} \equiv \lambda_{\alpha i} \lambda_{\alpha i}^* /(\lambda^{\dagger} \lambda)_{ii}$, and $\mu_2$.
We have determined the minimum value of $M_1$ compatible with successful leptogenesis as a function of $M_2/M_1$, maximizing $Y_B^f$ over the remaining parameters. To obtain the baryon asymmetry we have solved numerically the appropriate set of BE~\footnote{For simplicity we have neglected spectator processes during leptogenesis and the asymmetry developed among the degrees of freedom of the Higgs~\cite{buchmuller01,nardi05}, as well as $\Delta L=1$ scatterings~\cite{nardi07II,fong10II}. However we have checked that their inclusion modifies the results by at most a few tens of percent.}, and to get successful leptogenesis we have required
$Y_B^f = 8.75 \times 10^{-11}$ \cite{komatsu10}.
The result is represented with the thick continuous curves in Fig.~\ref{fig:1}, the red line corresponding to the case $\mu_2 \gg \Gamma_{N_{\dosl, \dosh}}$ and the green one to $\mu_2 \ll \Gamma_{N_{\dosl, \dosh}}$.
\begin{figure}
\includegraphics[width=0.35\textheight,angle=270]{M1minVsM2dM1_maj_diracII.eps}
\caption{Lowest value of $M_1$ yielding successful leptogenesis as a function of $M_2/M_1$. The red curves are for the case $\mu_2 \gg \Gamma_{N_{\dosl, \dosh}}$ and the green ones for $\mu_2 \ll \Gamma_{N_{\dosl, \dosh}}$. The thick continuous curves give the physically correct bound, while the thin dashed ones show the result that would be obtained if the Yukawa couplings of $N_2$ were allowed to take values as large as 1 for all values of $M_2/M_1$.}
\label{fig:1}
\end{figure}
As can be seen it is possible to have neutrino masses as low as $M_1 \sim 10^{6}$~GeV, i.e. around three orders of magnitude below the lower bound for the standard case of type I seesaw with hierarchical heavy neutrinos. Such lower bound on $M_1$ in turn yields a lower bound for the reheating temperature, $T_{RH}$, of the same order, since to thermally produce the neutrinos
$M_1 \lesssim 5 \, T_{RH}$~\cite{giudice04,buchmuller04,racker08}. An interesting consequence is that the bound $T_{RH} \gtrsim 10^6$~GeV can be compatible with the upper bound on $T_{RH}$ required to avoid the gravitino problem in SUGRA models~\cite{Khlopov84,Ellis84,Kawasaki08}. Moreover, $M_1$ values around $10^{6}$~GeV can be achieved for a wide range of $N_2$ masses and also for different values of the Yukawa couplings (see~\cite{racker12} for details on this point as well as on the relation between the parameters defined above and the light neutrino masses).
An important issue for obtaining the bound on $M_1$ has been to determine how large the Yukawa couplings of $N_2$ can be without violating the condition that the rates of processes involving $N_2$ be slower than the rates of the $\tau$-Yukawa interactions.
For comparison we have also plotted in Fig.~\ref{fig:1} the {\it wrong} bound that would be obtained if $(\lambda^\dag \lambda)_{22}$ were allowed to be as large as 1. It is clear that as $M_2$ approaches $M_1$ the requirement of an upper bound for $(\lambda^\dag \lambda)_{22}$ becomes very relevant.
Finally let us comment that for simplicity the results depicted in Fig.~\ref{fig:1} were obtained assuming that $\ell_e$ is perpendicular to the decay eigenstates of $N_1$ and $N_2$, so that only two flavour asymmetries are generated. We have checked that in the more general three flavour case it is possible to lower the bound on $M_1$ by a factor up to almost 4 with respect to the two flavour case~\cite{racker12}.
\section{Conclusions}
Seesaw models with an almost conserved $B-L$ are an interesting alternative to explain the smallness of neutrino masses because they can lead, in principle, to large active-sterile neutrino mixings. We have found another merit of these models, namely that leptogenesis is possible for
$M_1 \gtrsim 3-10 \times 10^5 \,$~GeV, i.e. around three orders of magnitude below the standard type I seesaw case, without resorting to the resonant enhancement of the CP asymmetry for strongly degenerate heavy neutrinos.
However, it is also clear that such an energy scale is too large to have both successful non-resonant leptogenesis and active-sterile neutrino mixings large enough to produce observable effects.
\subsection*{Acknowledgments}
The author wishes to thank his collaborators on this subject, Concha Gonz\'alez-Garc\'\i a, Manuel Pe\~na and Nuria Rius. \\
This work has been supported by the Spanish MINECO Subprogramme Juan de la Cierva and it has also been partially supported by the Spanish MINECO grants FPA-2007-60323, FPA2011-29678-C02-01, and
Consolider-Ingenio CUP (CSD2008-00037). In addition we acknowledge partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions, PITN- GA-2011- 289442).
\section{Introduction}
Nonequilibrium phenomena in lattices are among the oldest and most
fundamental problems in solid-state physics. In conventional solids,
the acceleration due to an external field is relatively small compared to
the electronic energy, and various scattering mechanisms make the transport
diffusive enough that the small-field approximation has often been
applicable. The quantum Boltzmann method has been applied
effectively~\cite{kadanoff,mahan} and the linear-response limit has been
widely used in the solid-state literature. However, recent progress in nano-devices
and optical lattice systems has made a rigorous high-field formalism necessary to
understand non-perturbative effects such as the Bloch oscillation.
In this regime, understanding the interplay of non-perturbative
field effects and many-body physics has emerged as one of the most
pressing problems in nano-science.
Combining nonequilibrium and quantum many-body effects is an
extremely challenging task. Much effort has been exerted towards
understanding strong correlation physics in quantum dots,
especially the prototypical nonequilibrium Kondo problem. Analytical theories
\cite{rosch,schoeller,mehta} and many numerical methods have been
proposed, both as time-dependent~\cite{werner,feiguin,schiro} and as
steady-state simulations~\cite{prl07,anders}. In such systems, with a
localized interacting region, the important question of energy
dissipation could be side-stepped, and the existence of a
steady state has not been a major issue.
In the past few years, the non-perturbative inclusion of electric-field and
many-body effects in lattice systems has been one of the central issues
in the field. Theories for lattice nonequilibrium have been
formulated~\cite{turkowski,freericks}, mostly based on the dynamical
mean-field theory (DMFT) for an $s$-orbital tight-binding (TB) lattice
with on-site interaction~\cite{eckstein,aoki,amaricci,aron}. Various
attempts have been made to include a dissipation mechanism in the driven
lattice via a fermion bath~\cite{aoki} and via bosonic baths~\cite{vidmar}.
This work corresponds to the analytic solution of the non-interacting
limit of the models considered in Refs.~\onlinecite{aoki,amaricci}.
Although a long-held belief in solid-state transport has been that,
under a finite electric field, the Fermi sea is perturbatively shifted
by the drift velocity, many calculations performed within the DMFT framework
have suggested that the system approaches a steady state with an infinitely
hot electron gas even for small fields. With the inclusion of a proper
dissipation mechanism, one expects the Boltzmann picture of a displaced
Fermi surface at small fields and a recovery of the Bloch oscillation in
the high-field limit.
However, it has been unclear so far which approximations, such as the
single-band approximation without Landau-Zener tunneling or the nature
of the on-site interaction, are responsible for the rather peculiar
long-time states obtained from numerical theories. One of the goals of
this paper is to provide exact solutions to one of the simplest
dissipation models with on-site fermion thermostats, to give an analytic
understanding of the problem, and to guide possible future modeling.
Due to the nature of the one-body reservoirs, the problem can be solved
exactly (see Fig.~\ref{fig1}). With identical reservoirs on each site,
the Hamiltonian can be block-diagonalized according to the wave-vector
of electrons in the transport direction. The block-diagonal Hamiltonian
can then be exactly solved by a time-dependent perturbation
theory~\cite{jauho,blandin} using the nonequilibrium Green function
theory. The calculation of the wave-vector dependent occupation number
supports the semi-classical Boltzmann transport theory despite the
lack of momentum scattering.
The DC electric current of this model is shown analytically
to recover the familiar semi-classical Boltzmann equation
result~\cite{lebwohl}. Based on these findings, we conclude that the
fermion thermostat model, despite its crude modeling of realistic
dissipation mechanisms, can serve as a minimal setup for studies of
strong correlation effects in driven lattice models. Although the model
considered here is one-dimensional, the result can be readily extended
to any spatial dimension since the model is one-body and conserves
momentum.
\section{Model}
We study a quadratic model of a one-dimensional $s$-orbital
tight-binding chain connected to fermionic reservoirs (see
Fig.~\ref{fig1}) under a uniform electric field $E$. The effect of the electric
field is absorbed, in the temporal gauge, as the Peierls phase
$\varphi(t)=(eEa)t$ attached to the
hopping integral~\cite{turkowski} $\gamma$.
The time-dependent Hamiltonian then reads
\begin{eqnarray}
\hat{H}(t)&=&-\gamma\sum_i (e^{i\varphi(t)}
d^\dagger_{i+1}d_i+H.c.)+\sum_{i\alpha}\epsilon_\alpha
c^\dagger_{i\alpha}c_{i\alpha}\nonumber \\
& &-g\sum_{i\alpha}(c^\dagger_{i\alpha}d_i+H.c.),
\end{eqnarray}
where $d^\dagger_i$ creates a (spinless) electron on site $i$ of the
tight-binding chain and $c^\dagger_{i\alpha}$ creates a fermion in the
reservoir attached to site $i$, with the continuum index
$\alpha$ labeling the states along each reservoir chain. We do not specify the explicit connectivity of the reservoir
chains, but each chain is assumed to have an identical dispersion
relation $\epsilon_\alpha$. Notice that the electric field is applied
only to the tight-binding chain $\{d^\dagger_i\}$. The coupling between
each TB site and its reservoir is given by the identical tunneling
parameter $g$. The Peierls phase $\varphi(t)$ is
given as
\begin{equation}
\varphi(t) = \left\{\begin{array}{cc}
0, & \mbox{for }t<0 \\
\Omega t, & \mbox{for }t>0
\end{array}\right.
\end{equation}
where $\Omega=eEa$ is the
Bloch-oscillation frequency due to the electric field.
\begin{figure}
\includegraphics[width=\linewidth]{fig1}
\caption{(Color online) One-dimensional tight-binding lattice of orbital $d_i$ under an
electric field $E$. Each lattice site is connected to an identical
fermionic bath of $\{c_{i\alpha}\}$ with the continuum index $\alpha$
along the reservoir chain direction.
}
\label{fig1}
\end{figure}
We note that the whole system has discrete translational symmetry in the
transport direction and
the Hamiltonian is readily block-diagonalized with respect to the
wave-vector $k$ as $d^\dagger_k=\sqrt{N_d^{-1}}\sum_j
e^{ikR_j}d^\dagger_j$ and $c^\dagger_{k\alpha}=\sqrt{N_d^{-1}}\sum_j
e^{ikR_j}c^\dagger_{j\alpha}$ (with lattice sites $R_j=aj$ and the number of
sites along the TB chain $N_d$),
\begin{eqnarray}
\hat{H}(t)&=&\sum_k\left[
-2\gamma\cos(k+\varphi(t))d^\dagger_kd_k+\sum_{\alpha}\epsilon_\alpha
c^\dagger_{k\alpha}c_{k\alpha}\right.\nonumber \\
&& \left. -g\sum_{\alpha}(c^\dagger_{k\alpha}d_k+H.c.)\right].
\end{eqnarray}
Here $\epsilon_{d}(k)=-2\gamma\cos(k)$ is the tight-binding dispersion
at zero $E$-field.
Each $k$-sector can then be treated and solved separately. From now
on we suppress the $k$-subscript where possible, and work with the Hamiltonian
\begin{eqnarray}
\hat{H}_k(t)&=&
-2\gamma\cos(k+\varphi(t))d^\dagger d+\sum_{\alpha}\epsilon_\alpha
c^\dagger_{\alpha}c_{\alpha}\nonumber \\
&& -g\sum_{\alpha}(c^\dagger_{\alpha}d+H.c.).
\label{eq:hk}
\end{eqnarray}
It is important to note that the $k$-dependence enters the problem as
$k+\varphi(t)$ for $t>0$. This problem is simply a resonant level
model~\cite{jauho}
where the level is modulated sinusoidally for $t>0$.
\section{Solution for occupation number and current}
The time-dependent Hamiltonian~(\ref{eq:hk}) can be exactly solved by
the nonequilibrium Keldysh Green function method~\cite{blandin}. We
write the Hamiltonian as $\hat{H}_k(t)=\hat{H}_0+\hat{V}(t)$ with the
time-independent unperturbed part $\hat{H}_0=\hat{H}_k(0)$ and the
time-dependent perturbation as $\hat{V}(t)=\hat{H}_k(t)-\hat{H}_k(0)$,
\begin{equation}
\hat{V}(t) = -2\gamma\left[\cos(k+\varphi(t))-\cos(k)\right]d^\dagger d
\equiv v(t)d^\dagger d.
\end{equation}
When the perturbation is a one-body operator on discrete states, the lesser and
greater parts of the self-energy vanish, and the lesser $d$-Green
function $G^<$ is expressed only in terms of the transient term,
symbolically written in matrix form as~\cite{blandin}
\begin{equation}
{\bf G}^<=[I+{\bf G}^r{\bf V}]{\bf G}_0^<[I+{\bf V}{\bf G}^a]\label{eq:glss}
\end{equation}
and the retarded Green function ${\bf G}^r$ is given by the usual Dyson's
equation
\begin{equation}
{\bf G}^r={\bf G}_0^r+{\bf G}_0^r{\bf V}{\bf G}^r,\label{eq:gr}
\end{equation}
where the matrix product denotes convolution-integrals in time.
Starting with the retarded functions, the unperturbed problem has
time-translational symmetry and
\begin{eqnarray}
G_0^r(t-t')&=&-i\theta(t-t')\int_{-\infty}^\infty d\epsilon
\frac{\Gamma/\pi}{\epsilon^2+\Gamma^2}e^{-i\epsilon(t-t')}\nonumber \\
&=&-i\theta(t-t')e^{-i\epsilon_d(k)(t-t')-\Gamma|t-t'|},
\end{eqnarray}
where we use a flat-band DOS for the
reservoir in the infinite-band limit, with the hybridization broadening
$\Gamma=\pi g^2 N(0)$ and the density of states of the fermion bath
$N(0)=\sum_\alpha\delta(\epsilon_\alpha)$.
Writing $G^r(t,t')=G_0^r(t-t')g^r(t,t')$,
Eq.~(\ref{eq:gr}) becomes
\begin{equation}
g^r(t,t')=1-i\int_{t'}^t ds\,v(s)g^r(s,t'),
\end{equation}
which can be solved as
\begin{equation}
g^r(t,t')=\exp\left[-i\int_{t'}^t v(s)ds\right],
\end{equation}
and finally we have for the full retarded Green function
\begin{equation}
G^r(t,t')=-i\theta(t-t')e^{-i\epsilon_d(k)(t-t')-\Gamma|t-t'|}
\exp\left[-i\int_{t'}^t v(s)ds\right].
\end{equation}
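The closed form above is straightforward to evaluate numerically. The following sketch (added here purely as an illustration; the parameter values are arbitrary and SciPy is used for the phase integral) implements $G^r(t,t')$ directly:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (same order of magnitude as in Fig. 2)
gamma, Omega, Gamma = 1.0, 0.5, 0.1
k = np.pi / 2 + 0.1
eps_d = -2.0 * gamma * np.cos(k)        # zero-field dispersion at this k

def v(s):
    # Perturbation v(s) = -2*gamma*[cos(k + Omega*s) - cos(k)] for s > 0
    return -2.0 * gamma * (np.cos(k + Omega * s) - np.cos(k)) if s > 0 else 0.0

def G_retarded(t, tp):
    # Closed-form G^r(t,t') = -i theta(t-t')
    #   * exp[-i eps_d (t-t') - Gamma (t-t') - i \int_{t'}^{t} v(s) ds]
    if t <= tp:
        return 0.0
    phase = quad(v, tp, t)[0]
    return -1j * np.exp((-1j * eps_d - Gamma) * (t - tp) - 1j * phase)
\end{verbatim}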
\begin{figure}
\includegraphics[width=\linewidth]{fig2}
\caption{(Color online) Expectation value of occupation number $n_k(t)$ of wave-vector
$k$ and the current $J_k(t)$ for $\gamma=1$, $\Omega=0.5$, $\Gamma=0.1$
and at $k=\pi/2+0.1$. After an initial time of order $\Gamma^{-1}$, the transient
behavior dies out and the expectation values reach a steady oscillation.
}
\label{fig:nk}
\end{figure}
For equal time arguments
of $G^<$ we have the Dyson equation
\begin{eqnarray}
&&G^<(t,t) = G_0^<(t,t) \nonumber\\
\quad && +\int_0^t \left[G^r(t,s)v(s)G_0^<(s,t)+
G_0^<(t,s)v(s)G^a(s,t)\right]ds\nonumber\\
\quad && +\int_0^t \int_0^t
G^r(t,s)v(s)G_0^<(s,s')v(s')G^a(s',t)dsds'.
\end{eqnarray}
We set the initial lesser Green function with the half-filled reservoir
at zero temperature as
\begin{equation}
G_0^<(t,t')=i\int_{-\infty}^0 d\omega
\frac{\Gamma/\pi}{(\omega-\epsilon_d(k))^2+\Gamma^2}e^{-i\omega(t-t')}.
\end{equation}
After some straightforward steps, the occupation number for the
wave-vector $k$, $n_k(t)=-iG^<(t,t)$, becomes
\begin{eqnarray}
n_k(t) & = & \int_{-\infty}^0 d\omega
\frac{\Gamma/\pi}{(\omega-\epsilon_d(k))^2+\Gamma^2}\times\\
& &\left|1-i\int_0^t ds\,v(s)e^{i(\omega-\epsilon_d(k)+i\Gamma)(t-s)
-i\int_{s}^t v(s')ds'}
\right|^2.\nonumber
\end{eqnarray}
Fig.~\ref{fig:nk} shows the above $n_k(t)$ evaluated numerically for
$\gamma=1$, $\Omega=0.5$, $\Gamma=0.1$ and at $k=\pi/2+0.1$.
Due to the exponential factor $e^{-\Gamma(t-s)}$, the integral converges
to a steady oscillation after a time $t\approx \Gamma^{-1}$ and the
transient behavior dies out.
Therefore, for the long-time behavior, the time-integration range $[0,t]$ can be extended
to $[-\infty,t]$ for easier analytic treatment. After an
integration by parts and some straightforward steps, we have
\begin{eqnarray}
n_k(t) & &= \frac{\Gamma}{\pi}\int_{-\infty}^0 d\omega
\times
\label{eq:nkt}\\
& &\left|\int_{-\infty}^0 ds\,e^{-i(\omega+i\Gamma)s
-i(2\gamma/\Omega)\sin(k+\Omega (t+s))}
\right|^2.\nonumber
\end{eqnarray}
An identity for Bessel functions $J_n(x)$
\begin{equation}
e^{ix\cos\theta}=\sum_{n=-\infty}^\infty i^nJ_n(x)e^{in\theta}
\end{equation}
can be used to perform the integrals as
\begin{eqnarray}
n_k(t) & = &
\frac{\Gamma}{\pi}\sum_{nm}\frac{J_n(\frac{2\gamma}{\Omega})J_m(\frac{2\gamma}{\Omega})e^{i(m-n)(k+\Omega
t)}}{-(m-n)\Omega+2i\Gamma}\times\nonumber\\
&&\left[\frac12\log\frac{m^2\Omega^2+\Gamma^2}{n^2\Omega^2+\Gamma^2}
+i\chi_{mn}\right]
\label{eq:nk}
\end{eqnarray}
with
\begin{equation}
\chi_{mn}=
\pi+\tan^{-1}\frac{m\Omega}{\Gamma}+
\tan^{-1}\frac{n\Omega}{\Gamma}.
\end{equation}
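The steady-state distribution is obtained by truncating the double sum. The sketch below (an illustration added here; the truncation order and the parameter values are our own choices) evaluates Eq.~(\ref{eq:nk}) as a function of the gauge-invariant wave-vector $k_m=k+\Omega t$ used in Fig.~\ref{fig:nkt}:
\begin{verbatim}
import numpy as np
from scipy.special import jv

def n_k_steady(km, gamma=1.0, Omega=0.5, Gamma=0.1, mmax=40):
    # Bessel-series form of the occupation number, Eq. (eq:nk), with
    # k + Omega*t replaced by the mechanical wave-vector km
    x = 2.0 * gamma / Omega
    orders = np.arange(-mmax, mmax + 1)
    J = jv(orders, x)
    total = 0.0 + 0.0j
    for i, m in enumerate(orders):
        for j, n in enumerate(orders):
            chi = np.pi + np.arctan(m * Omega / Gamma) \
                        + np.arctan(n * Omega / Gamma)
            bracket = 0.5 * np.log((m**2 * Omega**2 + Gamma**2) /
                                   (n**2 * Omega**2 + Gamma**2)) + 1j * chi
            total += (J[i] * J[j] * np.exp(1j * (m - n) * km)
                      / (-(m - n) * Omega + 2j * Gamma)) * bracket
    # the double sum is real; the imaginary part is numerical round-off
    return (Gamma / np.pi) * total.real
\end{verbatim}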
\begin{figure}
\includegraphics[width=0.85\linewidth]{fig3}
\caption{(color online) Occupation number $n(k_m)$ with respect to the gauge-invariant
mechanical wave-vector $k_m=k+\Omega t$ from Eq.~(\ref{eq:nk}) at
$\Gamma=0.1$. At zero
field ($\Omega=0$, dashed line), $n(k_m)$ is given by the Fermi-Dirac
distribution, with the step smoothed by the damping $\Gamma$. As the
field increases, the distribution shifts to higher wave-vectors, as predicted by the Boltzmann
theory. At higher fields ($\Omega>\Gamma$), the distribution
develops strong nonlinear effects with an increasing effective temperature.
}
\label{fig:nkt}
\end{figure}
To interpret the $k$-occupation number, we should study quantities
with respect to the physically meaningful gauge-invariant (mechanical) wave-vector
$k_m=k+\Omega t$. The occupation number is easily evaluated by replacing $k+\Omega t$
by $k_m$ in Eq.~(\ref{eq:nk}), as shown in Fig.~\ref{fig:nkt} for the
damping $\Gamma=0.1$. As the field $\Omega$ increases, the
$k$-occupation number approaches a Fermi-Dirac distribution shifted towards
the field direction. Despite the lack of momentum scattering in the
system, the picture of a displaced Fermi sea remains valid for small
fields. The fermion thermostats, acting as particle reservoirs,
dephase the Peierls factor whenever an electron is absorbed into a
reservoir, hence leading to an effect similar to momentum
scattering.
In Appendix~\ref{sec:nkt}, it is shown analytically that the shift of the
wave-vector at small fields is
\begin{equation}
\delta k = \frac{\Omega}{\Gamma}\propto E\tau,
\end{equation}
as expected in the Boltzmann transport picture. In the low-field limit, the
momentum shift $\delta k$ corresponds to the drift velocity,
which is proportional to the electric field $E$ and to the lifetime
$\tau(\sim\Gamma^{-1})$
of the transport electron set by the reservoir. As the field increases,
the shift of the Fermi surface deviates from this linear relation. When the
field is increased further ($\Omega\gg\Gamma$), the distribution
deviates significantly from the sharp low-temperature distribution and all $k_m$
gradually become equally occupied.
Other gauge-invariant quantities are the local variables. For instance,
by taking the $k$-summation of Eq.~(\ref{eq:nk}), one obtains the local
electron density. Since $k$ and $t$ enter only through the combination
$k+\Omega t$, the average over $k\in[-\pi/a,\pi/a]$ is equivalent to the
time-average over $t\in[0,2\pi/\Omega]$, i.e., the local density becomes
time-independent in the long-time limit. Specifically, the $k$-summation
requires $m=n$ and we have
\begin{equation}
\bar{n}_{\rm local}(t)=
\frac12\sum_{m=-\infty}^{\infty}\left[
J_m\left(\frac{2\gamma}{\Omega}\right)
\right]^2=\frac12.\nonumber
\end{equation}
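The last equality uses the Bessel sum rule $\sum_{m}J_{m}^{2}(x)=1$; a one-line numerical check (with illustrative values of $\gamma$ and $\Omega$) is:
\begin{verbatim}
import numpy as np
from scipy.special import jv

x = 2.0 * 1.0 / 0.5        # 2*gamma/Omega for gamma = 1, Omega = 0.5
print(np.sum(jv(np.arange(-200, 201), x) ** 2))   # -> 1.0, so n_local = 1/2
\end{verbatim}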
Now we turn to the calculation of electric current,
\begin{equation}
J_k(t)=\frac{\partial\epsilon_d(k+\Omega t)}{\partial k}n_k(t)
=2\gamma\sin(k+\Omega t)n_k(t).
\end{equation}
Due to the sine-function, the DC current has contributions only from $m-n=\pm
1$ in Eq.~(\ref{eq:nk}). After some manipulations, we have
\begin{eqnarray}
\bar{J}_k & = &
\frac{2\gamma\Gamma}{\pi(\Omega^2+4\Gamma^2)}\sum_{m}J_m\left(\frac{2\gamma}{\Omega}\right)
J_{m-1}\left(\frac{2\gamma}{\Omega}\right)\times\nonumber\\
&&\left[\Gamma\log\frac{m^2\Omega^2+\Gamma^2}{(m-1)^2\Omega^2+\Gamma^2}
+\Omega\chi_{m,m-1}\right].
\label{eq:jk}
\end{eqnarray}
As in the case of $n_k(t)$, the DC limit of $J_k(t)$ is
independent of $k$. The total current is shown in Figs.~\ref{fig:omega} and
\ref{fig4}. A similar plot has been obtained for the interacting model
from a numerical calculation of the Hubbard model connected to fermion
baths~\cite{amaricci}.
\begin{figure}
\includegraphics[width=\linewidth]{fig4}
\caption{(color online) DC current as a function of the damping $\Gamma$ and electric
field $\Omega=eEa$. For small field, the current has a linear dependence on
the $E$-field showing an Ohm's law-like behavior. As $E$ increases, the
Bloch oscillation behavior takes over and the DC current decreases. The
dashed lines are the simplified expression, Eq.~(\ref{eq:drude}).
}
\label{fig:omega}
\end{figure}
It is instructive to simplify the above expression in the limit of
$\Omega,\Gamma\ll\gamma$ where
the DC current is reduced to the expression
\begin{equation}
\bar{J}\approx\frac{4\gamma\Gamma\Omega}{\pi(\Omega^2+4\Gamma^2)}.
\label{eq:drude}
\end{equation}
A detailed derivation is provided in the Appendix. This approximate expression is shown
as dashed lines in Fig.~\ref{fig:omega}. Although the formula was
derived for $\Omega,\Gamma\ll\gamma$, it reproduces the DC
current with remarkable accuracy over a wide range of $\Gamma$ and $E$.
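The comparison is easy to reproduce. The sketch below (added for illustration; the series truncation and the parameter values are arbitrary) evaluates both the full result, Eq.~(\ref{eq:jk}), and the simplified expression, Eq.~(\ref{eq:drude}):
\begin{verbatim}
import numpy as np
from scipy.special import jv

def J_dc(gamma=1.0, Omega=0.5, Gamma=0.1, mmax=60):
    # DC current from the Bessel series, Eq. (eq:jk)
    x = 2.0 * gamma / Omega
    m = np.arange(-mmax, mmax + 1)
    chi = np.pi + np.arctan(m * Omega / Gamma) \
                + np.arctan((m - 1) * Omega / Gamma)
    bracket = Gamma * np.log((m**2 * Omega**2 + Gamma**2) /
                             ((m - 1)**2 * Omega**2 + Gamma**2)) + Omega * chi
    series = np.sum(jv(m, x) * jv(m - 1, x) * bracket)
    return 2.0 * gamma * Gamma / (np.pi * (Omega**2 + 4.0 * Gamma**2)) * series

def J_drude(gamma=1.0, Omega=0.5, Gamma=0.1):
    # Simplified expression, Eq. (eq:drude), derived for Omega, Gamma << gamma
    return 4.0 * gamma * Gamma * Omega / (np.pi * (Omega**2 + 4.0 * Gamma**2))

print(J_dc(), J_drude())   # compare the full and simplified results
\end{verbatim}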
It is also interesting to note that a similar formula has been
derived for a super-lattice system with Ohmic scattering within the
semi-classical Boltzmann transport equation~\cite{lebwohl}. Although the
current has the same dependence on the damping and the electric field,
it should be emphasized that the two models have quite different scattering
mechanisms: in the Boltzmann approach~\cite{lebwohl} the momentum relaxation is
explicitly built in, whereas in our case the lattice wave-vector is not
scattered and a very different Fermi surface structure results.
In the low-field limit, the current~(\ref{eq:drude}) recovers the form of
the Drude conductivity per electron,
\begin{equation}
\bar{J}\approx\frac{\gamma\Omega}{\pi\Gamma}\propto \frac{E\tau}{m^*},
\end{equation}
with $\gamma\sim 1/m^*$ and $\Gamma\sim 1/\tau$.
\begin{figure}
\includegraphics[width=\linewidth]{fig5}
\caption{(Color online) Contour plot of DC current as a function of damping and
electric field.
}
\label{fig4}
\end{figure}
\section{Conclusions}
Our calculations of electron transport with fermion thermostats confirm
salient features of previous numerical results and map the model onto the
Boltzmann transport picture, with a Fermi surface shifted in the
Brillouin zone by the drift velocity and an Ohmic-like limit of the
electric current. In particular, the electric current,
Eq.~(\ref{eq:drude}), recovers the semi-classical transport result even
without any momentum scattering. The explicit and exact calculation clarifies
the steady-state nature of the model, whose scattering processes may differ
from those of realistic solid-state transport systems.
Nevertheless, its phenomenological similarity to the conventional
semi-classical picture has been established. These findings lead us to
conclude that the fermion bath model, despite its drastic
simplification of the one-particle coupling and the lack of momentum
scattering, can be considered a rudimentary and minimal dissipation
mechanism which will be invaluable in further modeling of strong correlation
physics through dynamical mean-field theory.
\section{Acknowledgements}
The author is grateful for helpful discussions with Kwon Park and Woo-Ram
Lee. This work has been supported by the National Science
Foundation under Grant No.~DMR-0907150. The author also thanks the
Asia-Pacific Center for Theoretical Physics at Pohang, Korea, where
part of the work was completed.
\section{Introduction}
Private quantum channels are a basic tool in quantum cryptography \cite{AMTW}. Conditional expectations and trace vectors are notions that have played a role in the theory of operator algebras for more than half a century \cite{Ume,MvN}. In this paper we show that there is an intimate relationship between the two subjects. Specifically we give a new geometric characterization of single qubit private quantum channels via the Bloch sphere representation for qubit states that relies on trace vectors. We further show that trace vectors completely describe the private states for quantum channels that are themselves conditional expectations.
In the next two sections we introduce private quantum channels, conditional expectations, and trace vectors. We discuss basic properties and include simple examples of each. We then consider the single qubit case in detail, giving a trace vector characterization of private states for unital quantum channels. We finish with a complete characterization of the private states for channels that are themselves conditional expectations in terms of trace vectors.
\section{Private Quantum Channels}\label{secQCandCE}
We will use $\mathcal H$ or $\mathcal K$ to denote Hilbert spaces (which are assumed to be finite dimensional in this paper) and denote a $d$-dimensional Hilbert space by $\mathcal H_d$. We denote the set of linear operators on $\mathcal H$ by $\mathcal L(\mathcal H)$, and we use $\mathbb M_d$ to denote the algebra of $d \times d$ complex matrices, which when convenient will be regarded as the matrix representations of operators in $\mathcal L(\mathcal H_d)$ with respect to a given orthonormal basis for $\mathcal H_d$. The identity element of an operator space $X$ will be denoted by $\1_X$, or simply by $\1$ if the space is implied by the context, and we will write $\1_d$ for the identity of $\mathbb M_d$.
We will use Dirac notation for vectors $\ket{\phi}$ and vector duals $\bra{\phi}$. Thus pure states are represented as $\ket{\phi}\bra{\phi}$. General quantum states are represented by density operators (nonnegative operators with trace equal to 1), and we will use notation such as $\rho$, $\sigma$ in that case. The adjoint of an operator $A$ will be written $A^\dagger$, and we will reserve the asterisk notation when discussing abstract C$^*$-algebras.
A quantum channel is a linear, completely positive, trace preserving map $\mathcal E:\mathcal L(\mathcal H)\rightarrow \mathcal L(\mathcal K)$. (Channels are generally defined on the set of trace class operators, with their dual maps defined on the set of bounded operators, but the sets coincide in the finite dimensional case.) Every channel can be written as
\begin{equation}\label{eq:Kraus}
\mathcal E(\rho)=\sum_{i=1}^nK_i \rho K_i^\dagger,
\end{equation}
for some operators $K_i : \mathcal H \rightarrow \mathcal K$ with $\sum_{i=1}^nK_i^\dagger K_i=\1$, for any density operator $\rho$. We call a representation of $\mathcal E$ as in equation (\ref{eq:Kraus}) a \emph{Kraus decomposition} of $\mathcal E$. A channel $\mathcal E$ is called a \emph{random unitary channel} if it admits a decomposition
\begin{equation}\label{eq:RUdecomp}
\mathcal E(\rho)=\sum_ip_iU_i\rho U_i^\dagger \quad\quad\forall \rho,
\end{equation}
where $\{p_i\}$ form a probability distribution and $U_i$ are unitary operators.
In quantum cryptography, a private quantum channel (PQC) is the quantum analogue to the classical one-time pad. The following definition gives the mathematical framework for the notion in quantum information.
\begin{definition}
Let $\mathcal S\subseteq \mathcal H$ be a set of pure states and let $\mathcal E:\mathcal L(\mathcal H)\rightarrow \mathcal L(\mathcal H)$ be a channel. Let $\rho_0$ be a density operator acting on $\mathcal H$. Then $[\mathcal S, \mathcal E, \rho_0]$ is a \emph{private quantum channel} (PQC) if for any state $\ket{\phi}\in\mathcal S$, we have
\[
\mathcal E(\ket{\phi}\bra{\phi})=\rho_0.
\]
\end{definition}
PQCs were first considered in \cite{AMTW}, where the authors consider a particular class of random unitary channels. The most basic example is the following.
\begin{definition}
A map $\mathcal E:\mathcal L(\mathcal H_{2^n})\rightarrow \mathcal L(\mathcal H_{2^n})$ is called a \emph{depolarizing channel} if, for any density matrix $\rho\in \mathcal L(\mathcal H_{2^n})$, we have
\[
\mathcal E(\rho)=\frac{p}{2^n}\1+(1-p)\rho,
\]
where $0 < p \leq 1$ is a probability. When $p=1$ the \emph{completely depolarizing channel} is obtained, which gives the simplest example of a PQC, where every pure state is a private state. We denote the completely depolarizing channel by $\mathcal E_{\C}$.
\end{definition}
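As a concrete illustration (a numerical sketch added here; it simply applies the defining formula to a density matrix), the channel acts as follows, and for $p=1$ every pure state is mapped to the maximally mixed state, so every pure state is private:
\begin{verbatim}
import numpy as np

def depolarizing(rho, p):
    # rho -> p * (I/d) + (1 - p) * rho on a d-dimensional system
    d = rho.shape[0]
    return p * np.eye(d) / d + (1 - p) * rho

psi = np.array([1.0, 0.0])                      # an arbitrary qubit pure state
print(depolarizing(np.outer(psi, psi), 1.0))    # -> I/2, independent of psi
\end{verbatim}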
\section{Conditional Expectations and Trace Vectors}\label{secCEandTV}
We recall a basic definition from operator algebras. Suppose there exists an orthogonal direct sum decomposition of a Hilbert space as $\mathcal H=\bigoplus_i(M_i\otimes N_i)\oplus K$. Let $\mathcal A$ be an algebra of operators in $\mathcal L(\mathcal H)$ consisting of all operators that belong to the set $\mathcal A=\bigoplus_i(\1_{M_i}\otimes\mathcal L(N_i))\oplus 0_K$, where $0_K$ is the zero operator on $K$. We call $\mathcal A$ a \emph{(concrete finite dimensional) C$^*$-algebra}. $\mathcal A$ is unital if $\1_{\mathcal H}\in \mathcal A$; i.e., $\mathcal K$ is the zero subspace. A $*$-subalgebra $\mathcal B$ of $\mathcal A$ is a subset that is also a C$^*$-algebra. See \cite{Dav} for basic C$^*$-algebra theory.
\begin{definition} Let $\mathcal A$ be a C$^*$-algebra and let $\mathcal B\subseteq \mathcal A$ be a unital $^*$-subalgebra. We call a linear map $\mathcal E_{\mathcal B}:\mathcal A\rightarrow \mathcal B$ a
\emph{conditional expectation} of $\mathcal A$ onto $\mathcal B$ if
\begin{enumerate}
\item[(i)] $\mathcal E_{\mathcal B}(b)=b$ for all $b\in \mathcal B$;
\item[(ii)]$\mathcal E_{\mathcal B}(b_1ab_2)=b_1\mathcal E_{\mathcal B}(a)b_2$ for all $b_1, b_2\in \mathcal B$ and for all $a\in \mathcal A$;
\item[(iii)] $a \in \mathcal A, \, a\ge 0$ implies $\mathcal E_{\mathcal B}(a)\ge 0$.
\end{enumerate}
\end{definition}
Conditional expectations were first considered in \cite{Ume}. We are interested in conditional expectations from $\mathbb{M}_n$ onto a subalgebra that are also quantum channels and hence trace preserving. We will therefore restrict ourselves to trace preserving conditional expectations.
We will call such maps \emph{conditional expectation channels}.
More examples of conditional expectation channels will be discussed below, but for the reader familiar with quantum information, we note here that the $n$-qubit completely depolarizing channel $\mathcal E_{\C}$ is the conditional expectation onto the trivial scalar algebra ${\mathbb C} \cdot\1_{2^n}$. One way to see how conditional expectations inevitably arise in the theory is through trace inner products.
\begin{definition} A linear functional $\tau:\mathcal{A} \to \mathbb{C}$ is a \emph{faithful trace} if
\begin{enumerate}
\item[(i)] $\tau(a_1 a_2)=\tau(a_2 a_1)$
\item[(ii)] $\tau(a^\dagger a)>0$ for all $a\in \mathcal{A}$ with $a\neq 0$.
\end{enumerate}
\end{definition}
Given a faithful trace $\tau$ on $\mathcal{A}$ we can define an inner product $\langle a_1, a_2 \rangle=\tau(a_1^\dagger a_2)$.
We note that if $\mathcal{A}$ has a faithful trace $\tau$, the orthogonal projection onto $\mathcal{B}$ with respect to this inner product is the unique $\tau$-preserving conditional expectation from $\mathcal{A}$ to $\mathcal{B}$. The essential structure of this argument can be found in \cite{Ume}. The most well-known example is the so-called Hilbert-Schmidt inner product $\langle A, B \rangle = \operatorname{Tr} (A^\dagger B) $ on $\mathbb{M}_n$.
\subsection{Trace Vectors}
We now consider trace vectors, a notion that initially arose in work of Murray and von Neumann \cite{MvN}, and has more recently been studied in the field of matrix theory.
\begin{definition}
Let $\mathcal A$ be a $*$-subalgebra of $\mathcal L(\mathcal H_n)$. A vector $\ket{v}$ is a
\emph{trace vector} of $\mathcal A$ if
\[
\bra{v}a\ket{v}=\frac1n\operatorname{Tr} a \quad \forall a\in \mathcal A.
\]
More generally, given a density operator $\rho_0$, we say $\ket{v}$ is a \emph{trace vector with respect to $\rho_0$} of $\mathcal A$ if
\begin{equation}\label{eq:gentv}
\bra{v}a\ket{v}=\operatorname{Tr} (\rho_0 a) \quad \forall a\in \mathcal A.
\end{equation}
Thus by ``trace vector'', we really mean ``trace vector with respect to $\frac1n\1_n$''.
\end{definition}
By letting $a=\1$ in the definition of a trace vector, we find $\langle{v}|v\rangle=1$; a trace vector has unit length. It is easy to build a trace vector from other trace vectors in order to create a more general class of examples. Indeed, if $\ket{v_i}$ is a trace vector of the algebra $\mathcal A_i=(\1_{M_i}\otimes\mathcal L(N_i))\oplus 0_K$ for $i\in \{1,\dots, q\}$, then $\ket{v}=\bigoplus_{i=1}^q \ket{v_i}$ is a trace vector of the algebra $\mathcal A=\bigoplus_{i=1}^q \mathcal A_i$. In this way, trace vectors behave predictably. This also allows us to consider each summand separately, as we will do later.
\begin{example}\label{maxentangle}
As a fundamental example for quantum information, consider a maximally entangled state $\ket{\varphi_{e}}\in \mathcal H_m\otimes \mathcal H_n$. That is, a state
$\ket{\varphi_{e}}=\frac1{\sqrt{d}}\sum_{i=1}^d \ket{e_i}\otimes \ket{f_i}$, where $\{\ket{e_i}\}$ and $\{\ket{f_i}\}$ form orthonormal sets in $\mathcal H_m$ and $\mathcal H_n$ respectively, and $d=\min\{m,n\}$. If $m \geq n$, then one can check via direct calculation that $\ket{\varphi_{e}}$ is a trace vector for the algebra $\1_m \otimes \mathcal L(\mathcal H_n)$. And if $m = n$ an analogous calculation works for $\mathcal L(\mathcal H_m) \otimes \1_n$.
\end{example}
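This can be confirmed with a short computation (a numerical sketch we add here; the dimension and the random test operator are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = n = 3
phi = np.zeros(m * n, dtype=complex)
for i in range(n):
    phi[i * n + i] = 1.0 / np.sqrt(n)            # maximally entangled state

a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = np.kron(np.eye(m), a)                        # element of 1_m (x) L(H_n)
print(np.allclose(np.vdot(phi, A @ phi), np.trace(A) / (m * n)))   # -> True
\end{verbatim}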
The general case is clarified by the following theorem of the third author. We recall that a vector $\ket{v}$ is a
\emph{separating vector} of an algebra $\mathcal A$ if $a\ket{v}=0$ for some $a\in \mathcal A$ implies $a=0$.
\begin{theorem}\label{rthm1}
If $\mathcal A$ is a unital $*$-subalgebra of $\mathbb{M}_n$, then the following conditions are equivalent:
\begin{enumerate}
\item $\mathcal A$ is unitarily equivalent to
$
\oplus_{i=1}^q \left(\1_{m_i}\otimes \mathbb{M}_{n_i}\right),
$
where $m_i\ge n_i$ for all $i$ and $\sum_{i=1}^q m_in_i=n$.
\item $\mathcal A$ has a separating vector.
\item $\mathcal A$ has a trace vector.
\item There exists a set of trace vectors of $\mathcal A$ that form an orthonormal basis of $\C^n$.
\end{enumerate}
\end{theorem}
This result is proved in \cite{Per03}. A related infinite dimensional open problem goes all the way back to von Neumann \cite{Ge}. Consider two simple cases. It is clear that $\mathbb{M}_n$ has no trace vectors -- from both the theorem and the definition of trace vectors. On the other hand, let $\Delta_2$ be the algebra of $2 \times 2$ diagonal matrices with respect to a basis $\{ \ket{0}, \ket{1} \}$. One can readily check that the trace vectors for $\Delta_2$ are (up to complex phase) all vectors of the form $\ket{\psi} = \frac{1}{\sqrt{2}} ( \ket{0} + e^{i\theta} \ket{1} )$, for $0 \leq \theta < 2 \pi$; in other words, the set of all states that lie on the equator in the Bloch sphere representation for qubits (this point is further elucidated in the next section).
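The last claim is also easy to confirm numerically; the following sketch (an illustration added here, with an arbitrarily chosen phase) checks the trace vector condition for $\Delta_2$ on an equator state:
\begin{verbatim}
import numpy as np

def is_trace_vector(v, spanning_set, dim, tol=1e-12):
    # <v| a |v> == Tr(a)/dim for every a in a spanning set of the algebra
    return all(abs(np.vdot(v, a @ v) - np.trace(a) / dim) < tol
               for a in spanning_set)

delta2 = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]    # spans Delta_2
theta = 0.73                                           # arbitrary phase
v = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)   # equator state
print(is_trace_vector(v, delta2, dim=2))               # -> True
\end{verbatim}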
\section{Private Quantum Channels on the Bloch Sphere}
In this section we give a geometric characterization of single qubit unital PQCs in terms of the Bloch sphere representation \cite{NC} for single qubit states. We also show how the private states for such PQCs are determined by trace vectors. An alternative description was discussed in \cite{BZ}, where the entropy of sets of such private states was considered.
Every unital quantum channel is a random unitary channel in the single qubit case \cite{LS}. Thus, our private quantum channel $[\mathcal S, \mathcal E, \rho_0]$ in this case is given by a random unitary channel $\mathcal E: \mathbb{M}_2\rightarrow \mathbb{M}_2$, a set of pure states $\mathcal S$, and an output density matrix $\rho_0$. More precisely, we have $\mathcal E(\ket{v}\bra{v})=\rho_0$ for all $\ket{v}\in \mathcal S$. We would like to allow for the possibility of orthonormal vectors in $\mathcal S$. As the channel is unital this can only occur if $\rho_0=\frac1n\1$, and hence we shall focus on this case here.
Using the Bloch sphere representation, we can associate
to any density matrix $\rho\in \mathbb{M}_2$ a Bloch vector $\vec{r}\in\mathbb{R}^3$ satisfying $\|\vec{r}\|\leq{1}$, where
\begin{equation}\label{eq:paulidecompofrho}
\rho=\frac{\1+\vec{r}\cdot \vec{\sigma} }2.
\end{equation}
We use $\vec{\sigma}$ to denote the \emph{Pauli vector}, that is, $\vec{\sigma}=(\sigma_x,\sigma_y, \sigma_z)^T$.
Note that the set $\{\1, \sigma_x, \sigma_y, \sigma_z\}$ forms a basis for the \emph{real} vector space of Hermitian matrices in $\mathbb{M}_2$. We recall that a state is pure if and only if $\|\vec{r}\|=1$ and that the maximally mixed state $\frac12\1$ has Bloch vector $\vec{r}=\vec{0}$.
As discussed in \cite{KR01}, every linear map $\Phi:\mathbb{M}_2\rightarrow\mathbb{M}_2$ can be represented in the basis $\{\1, \sigma_x, \sigma_y, \sigma_z\}$ by a $4\times4$ matrix $\mathbb{T}$, and $\Phi$ preserves the trace if and only if the first row of the matrix $\mathbb{T}$ satisfies $t_{1k}=\delta_{1k}$; i.e.,
\begin{equation}
\mathbb{T}=\left( \begin{array}{cc} 1 & \mathbf{0} \\
\vec{t} & T \end{array} \right)
\end{equation}
where $T$ is a $3\times3$ matrix, $\mathbf{0}$ is a row vector, and $\vec{t}$ is a column vector.
The transformation $\Phi$ maps the subspace of Hermitian matrices into itself \emph{iff} $\mathbb{T}$ is real; finally, the map $\Phi$ is unital \emph{iff} $\vec{t}=\vec{0}$.
Thus, every unital qubit channel $\mathcal E$ can be represented as
\begin{equation}\label{eq:channeldecomp}
\mathcal E\left(\frac1{2}\left[\1+\vec{r}\cdot\vec{\sigma}\right]\right)=\frac1{2}\left[\1+(T\vec{r})\cdot\vec{\sigma}\right],
\end{equation}
where $T$ is real and $\vec{t}=\vec{0}$, and we recall that any density matrix can be written as in equation (\ref{eq:paulidecompofrho}). Here, the submatrix $T$ represents a deformation of the Bloch sphere, while in the general (non-unital) case the vector $\vec{t}$ represents a translation. This affine mapping of the Bloch sphere into itself is also discussed in Section 8.3.2 of \cite{NC}.
We are of course interested in cases where $\mathcal S$ is nonempty. This is easily seen to occur precisely when $T$ in equation (\ref{eq:channeldecomp}) has non-trivial nullspace.
Thus we consider the cases in which $T$ has non-trivial nullspace; that is, the subspace of vectors $\vec{r}$ such that $T \vec{r} = 0$ is one, two, or three-dimensional.
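In practice the candidate private states can be read off from the nullspace of $T$. A minimal numerical sketch (our own addition; it assumes $T$ is supplied as a real $3\times 3$ array) is:
\begin{verbatim}
import numpy as np

def private_bloch_directions(T, tol=1e-10):
    # Unit Bloch vectors r with T r = 0; the corresponding pure states are
    # mapped to the maximally mixed state by the unital channel represented by T
    _, s, Vt = np.linalg.svd(T)
    return Vt[s < tol]          # rows spanning the nullspace of T
\end{verbatim}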
Finally, we note that in the single qubit case the unital subalgebras of the algebra $\mathcal A=\mathbb{M}_2$ can be easily classified. They are
$\mathbb{M}_2$, $\C\cdot\1_2$ (the two trivial cases), and, up to unitary conjugation, $\Delta_2$, the subalgebra of all diagonal matrices in $\mathbb{M}_2$. To be precise, this third case refers to the subalgebras $\mathcal B$ of the form $U^\dagger\Delta_2 U$, where $U\in \mathcal A$ is unitary.
\begin{theorem}\label{thm:N(T)}
Let $\mathcal E:\mathbb{M}_2\rightarrow\mathbb{M}_2$ be a unital qubit channel, with $T$ the mapping induced by $\mathcal E$ as in equation (\ref{eq:channeldecomp}). Then there are three possibilities for a private quantum channel $[\mathcal S, \mathcal E, \frac12\1]$ with $\mathcal S$ nonempty:
\begin{enumerate}
\item If the nullspace of $T$ is 1-dimensional, then $\mathcal S$ consists of a pair of orthonormal states.
\item If the nullspace of $T$ is 2-dimensional, then the set $\mathcal S$ is the set of all trace vectors of the subalgebra $U^\dagger\Delta_2 U$ of $2\times 2$ diagonal matrices up to a unitary equivalence.
\item If the nullspace of $T$ is 3-dimensional, then $\mathcal E$ is the completely depolarizing channel and $\mathcal S$ is the set of all unit vectors. In other words, $\mathcal S$ is the set of all trace vectors of $\C\cdot \1_2$.
\end{enumerate}
\end{theorem}
\begin{figure}[h!]
\centering
\includegraphics[width=4in]{BlochCase1.jpg}
\vspace{-20pt}
\caption{Case (1)}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=4in]{BlochCase2.jpg}
\vspace{-20pt}
\caption{Case (2)}
\end{figure}
\proof
We shall write $\vec{r}_\phi$ for the Bloch sphere vector representation of a single qubit state $\ket{\phi}$. It is clear from equation~(\ref{eq:channeldecomp}) that $\mathcal E(\ket{\phi}\bra{\phi}) = \frac{1}{2} \1$ if and only if $T \vec{r}_\phi = 0$. Hence the relevant set that yields private states here is the intersection of the nullspace of $T$ and the surface of the Bloch sphere.
Case (1): The nullspace of $T$ is 1-dimensional. In this case, the nullspace is a single line through the origin of the Bloch sphere and the range of $T$ is a plane through the origin. Obviously this line meets the surface of the Bloch sphere in two antipodal points. These two antipodal points correspond to a pair of orthonormal single qubit states. Figure~1 gives an example.
Case (2): The nullspace of $T$ is 2-dimensional. In this case, the nullspace is a plane through the origin of the Bloch sphere. This plane meets the surface of the sphere in a great circle. The pure states corresponding to the points on this circle are precisely the private states for the channel. See an illustration in Figure~2.
To see how these private states arise from the trace vector perspective, let us consider the action of the channel more directly. As the nullspace of $T$ is 2-dimensional, its range is a line through the origin. For simplicity we shall assume this line is the $z$-axis; other cases are unitarily equivalent to this case. Thus, the range of $T$ intersects the sphere in the north and south poles, corresponding to the pure states $\ket{0}\bra{0}$ and $\ket{1}\bra{1}$ respectively. The action of $T$ here will be a possible rotation of the Bloch sphere followed by a projection of the sphere onto the $z$-axis, followed by a possible contraction. By unitary equivalence, we only need consider the case where there is no initial rotation of the Bloch sphere.
In terms of the Pauli matrices $\sigma_x,\sigma_y,\sigma_z$, this means the action of the channel is given by $\mathcal E(\sigma_x)=0$, $\mathcal E(\sigma_y)=0$ and $\mathcal E(\sigma_z) = p \sigma_z$ for some $0 < p \leq 1$.
Now $\Delta_2$ is the algebra of all diagonal matrices with respect to the ordered basis $\{ \ket{0}, \ket{1} \}$; explicitly, $\Delta_2$ is the set of all operators of the form $a \ket{0}\bra{0} + b \ket{1}\bra{1}$ for arbitrary scalars $a,b$. Then the projection onto the $z$-axis is a conditional expectation onto the subalgebra $\Delta_2$; call it $\mathcal E_{\Delta}$. Explicitly,
\[
\mathcal E_{\Delta}\left(
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}\right)= \begin{bmatrix}
a & 0\\
0 & d
\end{bmatrix},
\text{ for any matrix } \begin{bmatrix}
a & b\\
c & d
\end{bmatrix}.
\]
One can check directly that $\mathcal E = p \mathcal E_{\Delta} + (1-p)\mathcal E_{\C}$, where $\mathcal E_{\C}$ is the completely depolarizing channel.
As $\mathcal E_{\C}$ adds no restrictions to the private states for $\mathcal E$, it suffices to show that the trace vectors for $\Delta_2$ are precisely the pure states that lie on the equator of the Bloch sphere. But the equator states are precisely the states $\ket{\phi}$ that satisfy $|\langle \phi | 0 \rangle| = \frac{1}{\sqrt{2}} = |\langle \phi | 1 \rangle|$, and it is easy to see that these are exactly the states which satisfy the trace vector condition for the algebra $\Delta_2$.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{BlochMM.jpg}
\vspace{-10pt}
\caption{Case (3)}
\end{figure}
Case (3): The nullspace of $T$ is 3-dimensional, in other words $T$ is the zero operator. In this case $T$ maps the entire Bloch sphere to its origin, which corresponds to the maximally mixed state $\frac{1}{2} \1$, as shown in Figure~3. It is clear in this case that $\mathcal E$ is the completely depolarizing channel $\mathcal E_{\C}$. Moreover, the set $\mathcal S$ has no restrictions; that is, $\mathcal S$ is the set of all unit vectors. In other words, $\mathcal S$ is the set of all trace vectors of $\C\cdot \1_2$.
\qed
\section{Private States for Conditional Expectation Channels}
The following result clarifies the general connection between conditional expectation channels, trace vectors and private states.
\begin{theorem}\label{thm:PQCiffTVs}
Let $\mathcal E :\mathbb{M}_n \rightarrow\mathcal A$ be a conditional expectation channel. Then $[\mathcal S, \mathcal E, \rho_0]$ is a private quantum channel if and only if $\mathcal S$ is a set of trace vectors of $\mathcal A$ with respect to $\rho_0 \in \mathcal A$.
\end{theorem}
\proof
Let us first assume that $[\mathcal S, \mathcal E, \rho_0]$ is a PQC. Then $\mathcal E(\ket{v}\bra{v}) = \rho_0$ for all $\ket{v} \in \mathcal S$, and in particular note that $\rho_0$ belongs to $\mathcal A$. Thus for all $\ket{v}\in \mathcal S$ and for all $a \in \mathcal A$, we have
\begin{eqnarray*}
\bra{v} a \ket{v} &=& \operatorname{Tr} (\ket{v}\bra{v} a) \\
&=& \operatorname{Tr} (\mathcal E(\ket{v}\bra{v} a)) \\
&=& \operatorname{Tr} (\mathcal E(\ket{v}\bra{v}) a)
= \operatorname{Tr} (\rho_0 a),
\end{eqnarray*}
where the second and third identities follow from the trace preservation and conditional expectation properties of $\mathcal E$ respectively. It follows that the states of $\mathcal S$ are trace vectors of $\mathcal A$ with respect to $\rho_0$.
For the converse, observe that when the vector states of $\mathcal S$ are trace vectors of $\mathcal A$ with respect to $\rho_0$, a similar calculation shows for all $\ket{v}\in\mathcal S$ and for all $a\in \mathcal A$ that
\begin{eqnarray*}
\operatorname{Tr} (\rho_0 a) &=& \bra{v} a \ket{v} \\
&=& \operatorname{Tr} (\ket{v}\bra{v} a) \\
&=& \operatorname{Tr} (\mathcal E(\ket{v}\bra{v} a))
= \operatorname{Tr} (\mathcal E(\ket{v}\bra{v}) a).
\end{eqnarray*}
As $\rho_0$ belongs to $\mathcal A$, it follows that $[\mathcal S, \mathcal E, \rho_0]$ forms a private quantum channel.
\qed
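As a numerical illustration of the theorem (our own sketch: the map below is the trace-preserving conditional expectation of $\mathbb{M}_{mn}$ onto $\1_m\otimes\mathbb{M}_n$, realized through a partial trace, and it is applied to the maximally entangled trace vector of Example~\ref{maxentangle}):
\begin{verbatim}
import numpy as np

def cond_exp(X, m, n):
    # Trace-preserving conditional expectation of M_{mn} onto 1_m (x) M_n:
    # X -> 1_m (x) (Tr_1 X) / m
    X4 = X.reshape(m, n, m, n)
    ptr = np.trace(X4, axis1=0, axis2=2)   # partial trace over the first factor
    return np.kron(np.eye(m), ptr / m)

m = n = 2
phi = np.zeros(m * n, dtype=complex)
for i in range(n):
    phi[i * n + i] = 1.0 / np.sqrt(n)      # maximally entangled trace vector

rho_out = cond_exp(np.outer(phi, phi.conj()), m, n)
print(np.allclose(rho_out, np.eye(m * n) / (m * n)))   # -> True: rho_0 = I/(mn)
\end{verbatim}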
\begin{example}
Of course the three cases of Theorem~\ref{thm:N(T)} when applied to a unital single qubit conditional expectation channel $\mathcal E_{\mathcal A}:\mathbb{M}_2\rightarrow \mathcal A$ are covered by this theorem. Indeed, applying Theorem \ref{thm:PQCiffTVs} to $\mathcal E_{\mathcal A}$ and letting $\rho_0=\frac12\1$ yields $[\mathcal S, \mathcal E_{\mathcal A}, \frac12\1]$ is a PQC if and only if $\mathcal A=U^\dagger\Delta_2U$ or $\mathcal A=\C\cdot\1_2$ and $\mathcal S$ is a set of trace vectors of $\mathcal A$. Case 1 of Theorem \ref{thm:N(T)} is an example of when $\mathcal S$ is a proper subset of the set of all trace vectors of $U^\dagger\Delta_2U$, whereas Case 2 occurs when $\mathcal S$ is the entire set. Case 3 occurs when $\mathcal S$ is the set of all trace vectors of $\C\cdot\1_2$.
\end{example}
\begin{example}
Conditional expectations arise as the most basic non-trivial examples of private quantum communication using a private shared Cartesian frame \cite{BHS}. Let $\mathcal H=(\C^2)^{\otimes N}$, and for simplicity suppose $N$ is even. Decompose the space as
\[
(\C^2)^{\otimes N}=\bigoplus_{j=0}^{N/2}\mathbb{H}_j\otimes \mathbb{K}_j,
\]
where the special unitary group $\operatorname{SU(2)}$ acts irreducibly on $\mathbb{H}_j$ and trivially on $\mathbb{K}_j$. As formulated in \cite{BHS}, if Alice and Bob share a reference frame to which Eve does not have access, and Alice prepares $N$ qubits in a state $\rho$ and sends them to Bob, Eve will see the resulting state simply as a mixture of all rotations $\Omega\in \operatorname{SU(2)}$. This situation can be summed up with the channel $\mathcal E$, defined by
\[
\mathcal E(\rho)=\sum_{j=0}^{N/2}(\mathcal E_{\C j}\otimes id_{\mathbb{K}_j})(\Pi_j\rho\Pi_j),
\]
where $\mathcal E_{\C j}$ is the completely depolarizing channel on $\mathbb{H}_j$ and $\Pi_j$ is the projection onto $\mathbb{H}_j$. One can see immediately that $\mathcal E$ is in fact a conditional expectation channel that maps onto the algebra $\oplus_j (\1_{\mathbb{H}_j} \otimes \mathcal L(\mathbb{K}_j))$. Thus, as noted in Theorem~\ref{thm:PQCiffTVs}, private states for $\mathcal E$ can be found using trace vectors, which in this case can be constructed on the summands of the direct sum in a manner analogous to Example~\ref{maxentangle}.
\end{example}
\section{Outlook}
We see two main potential outcomes of the present work. Firstly, it is clear even just from the examples we have discussed here that there are numerous conditional expectation channels of relevance in quantum information, though they have not been viewed from this perspective before. It should be possible to use the conditional expectation and trace vector machinery to construct other new and useful examples of private quantum channels. Secondly, this work raises the intriguing possibility that a much more extensive theory of private quantum channels and private states could be developed. With few exceptions, the work on private channels appearing in the literature has focused primarily on specific instances and channels, rather than an overarching theory. We intend to continue these investigations elsewhere.
\vspace{0.1in}
{\noindent}{\it Acknowledgements.} We are grateful to Aron Pasieka for assistance with the Bloch sphere figures. D.W.K. was supported by Ontario Early Researcher Award 048142, NSERC Discovery Grant 400160 and NSERC Discovery Accelerator Supplement 400233. R.P. was supported by NSERC Discovery Grant 400096. S.P. was supported by an NSERC doctoral scholarship.
\bibliographystyle{amsalpha}
\section{Introduction.}
Schr\"{o}dinger operators with non-mixed interface conditions have been
recently considered in \cite{FMN2}, by introducing the modified 1D Laplacian
$\Delta_{\theta}$
\begin{equation}
\left\{
\begin{array}
[c]{l}
\medskip D(\Delta_{\theta})=\left\{ u\in H^{2}(\mathbb{R}\backslash\left\{
a,b\right\} ):\left[
\begin{array}
[c]{l}
\smallskip e^{-\frac{\theta}{2}}u(b^{+})=u(b^{-});\ e^{-\frac{3}{2}\theta
}u^{\prime}(b^{+})=u^{\prime}(b^{-})\\
e^{-\frac{\theta}{2}}u(a^{-})=u(a^{+});\ e^{-\frac{3}{2}\theta}u^{\prime
}(a^{-})=u^{\prime}(a^{+})
\end{array}
\right. \right\} \medskip\,,\\
\Delta_{\theta}u(x)=u^{\prime\prime}(x)\quad\text{for }x\in\mathbb{R}
\backslash\left\{ a,b\right\} \,.
\end{array}
\right. \label{Laplacian_mod}
\end{equation}
where $u(x^{\pm})$ respectively denote the right and left limit of the
function $u$ in $x$. For all $\theta\in\mathbb{C}\backslash\left\{ 0\right\}
$, the operator $\Delta_{\theta}$ describes a singularly perturbed Laplacian,
with non-selfadjoint point interactions acting in the boundary points
$\left\{ a,b\right\} $. It is worthwhile to notice that the boundary
conditions in (\ref{Laplacian_mod}) do not model an usual non-selfadjoint
point interaction (that is $\delta$ or $\delta^{\prime}$ type).
The interest in quantum models arising from $\Delta_{\theta}$ stands upon the
fact that a sharp exterior complex dilation, depending on $\theta=i\tau$ with
$\tau>0$, maps $-i\Delta_{\theta}$ into the accretive operator: $\left.
-ie^{-2\theta\,1_{\mathbb{R}\backslash\left( a,b\right) }(x)}\Delta
_{2\theta}\right. $, where $1_{D}$ denotes the characteristic function of the
domain $D$ (e.g. in Lemma 3.1 in \cite{FMN2}). For a short-range potential
$\mathcal{V}$ (i.e.: $\mathcal{V}\in L^{1}$) compactly supported in $\left[
a,b\right] $, the corresponding complex deformed Schr\"{o}dinger operator
\begin{equation}
\mathcal{H}_{\theta}\left( \mathcal{V},\theta\right) =-e^{-2\theta
\,1_{\mathbb{R}\backslash\left( a,b\right) }(x)}\Delta_{2\theta
}+\mathcal{V\,}\,,\qquad\text{supp}\mathcal{V}=\left[ a,b\right] \,,
\label{H_mod_def}
\end{equation}
is the generator of a contraction semigroup and, in the case of time dependent
potentials, uniform-in-time estimates hold for the dynamical system. According
to the complex dilation technique (see \cite{AgCo}, \cite{BaCo}) the quantum
resonances of the undeformed operator $\mathcal{H}_{\theta}\left(
\mathcal{V}\right) $
\begin{equation}
\mathcal{H}_{\theta}\left( \mathcal{V}\right) =-\Delta_{\theta}
+\mathcal{V}\,, \label{H_mod}
\end{equation}
are detected by exterior complex dilations and identify with the spectral
points of $\mathcal{H}_{\theta}\left( \mathcal{V},\theta\right) $ in a
suitable sector of the second Riemann sheet. Then, the adiabatic evolution
problem for the resonant states of $\mathcal{H}_{\theta}\left( \mathcal{V
\right) $ rephrases, through an exterior complex dilation, as the adiabatic
evolution problem for the corresponding eigenstates of $\mathcal{H}_{\theta
}\left( \mathcal{V},\theta\right) $. In this framework, accounting the
contractivity property of the semigroup $e^{-it\mathcal{H}_{\theta}\left(
\mathcal{V},\theta\right) }$ a 'standard' adiabatic theory can be developed
(e.g. in \cite{Nenciu}). This approach has been introduced in \cite{FMN2}
where an adiabatic theorem is obtained for shape resonances in the regime of
quantum wells in a semiclassical island. The purpose of this work is to
justify the use of Hamiltonians of the type $\mathcal{H}_{\theta}\left(
\mathcal{V}\right) $ in the modelling of quantum systems.
The relevance of the artificial interface conditions (\ref{Laplacian_mod}),
stands upon the fact that they are expected to introduce small errors, w.r.t.
the selfadjoint case, controlled by $\left\vert \theta\right\vert $. The
quantum dynamics generated by $\Delta_{\theta}$ has been considered in
\cite{FMN2}. The explicit character of the model allows one to obtain the
asymptotic expansion
\begin{equation}
e^{-it\Delta_{\theta}}=e^{-it\Delta}+\mathcal{R}\left( t,\theta\right) \,,
\label{Exp_FMN1}
\end{equation}
holding in a suitable neighbourhood: $\left\vert \theta\right\vert <\delta$
(see Proposition 2.2 in \cite{FMN2}). Here, the remainder $\mathcal{R}\left(
t,\theta\right) $ is strongly continuous w.r.t. $t$ and $\theta$, exhibits
the group property w.r.t. the time variable and is such that
\begin{equation}
\sup_{t\in\mathbb{R\,}}\left\Vert \mathcal{R}\left( t,\theta\right)
\right\Vert _{\mathcal{L}\left( L^{2}(\mathbb{R})\right) }=\mathcal{O}
\left( \left\vert \theta\right\vert \right) \,. \label{Exp_FMN1_rem}
\end{equation}
Thus, for $\theta$ small enough, $\Delta_{\theta}$ generates a group, strongly
continuous both w.r.t. $t$ and $\theta$, and allowing uniform-in-time
estimates. In the perspective of modelling realistic physical situations
through the modified Schr\"{o}dinger operators $\mathcal{H}_{\theta}\left(
\mathcal{V}\right) $, an important step would consist in extending to this
class of operators the expansion obtained in (\ref{Exp_FMN1}) for
$\mathcal{H}_{\theta}\left( 0\right) =-\Delta_{\theta}$. A possible approach
considers $\mathcal{H}_{\theta}\left( \mathcal{V}\right) $ as a
(selfadjoint) perturbation of the modified Laplacian: $\mathcal{H}_{\theta
}\left( 0\right) $; it is worthwhile to notice that this would give a weaker
result. For instance, implementing a Picard iteration on the Duhamel formula
\begin{equation}
u_{t}(\theta)=e^{-it\Delta_{\theta}}u_{0}+i\int_{0}^{t}e^{-i\left(
t-s\right) \Delta_{\theta}}\mathcal{V}u_{s}(\theta)\,ds\,, \label{Duhamel}
\end{equation}
and making use of the expansion (\ref{Exp_FMN1}), yields, in the case of a
bounded potential $\mathcal{V}\in L^{\infty}$, the time-dependent estimates
\begin{align}
\left\Vert u_{t}(\theta)\right\Vert _{L^{2}(\mathbb{R})} & \leq
C_{1}\left\Vert u_{0}\right\Vert _{L^{2}(\mathbb{R})}\,e^{C_{2}\left\Vert
\mathcal{V}\right\Vert _{L^{\infty}(\mathbb{R})}t}\label{est5}\\
& \nonumber\\
\left\Vert u_{t}(\theta)-u_{t}(0)\right\Vert _{L^{2}(\mathbb{R})} & \leq
C_{3}\left\vert \theta\right\vert \,\left\Vert u_{0}\right\Vert _{L^{2}
(\mathbb{R})}\,\,te^{C_{4}\left\Vert \mathcal{V}\right\Vert _{L^{\infty
}(\mathbb{R})}t} \label{est5_1}
\end{align}
where $C_{i}$, $i=1,\dots,4$, are suitable positive constants. It follows that,
for an initial state $u_{0}$, the corresponding mild solution to the quantum
evolution problem, $u_{t}(\theta)$, is Lipschitz continuous w.r.t. $\theta$,
with a Lipschitz constant bounded by an exponentially increasing function of
time. As an aside, we notice that the estimate (\ref{est5}) may also be
obtained as a consequence of the Hille-Yoshida-Phillips Theorem, by using the
second resolvent formula for $\left( \mathcal{H}_{\theta}\left(
\mathcal{V}\right) -z\right) ^{-1}$ and resolvent estimates for $\left(
\mathcal{H}_{\theta}\left( 0\right) -z\right) ^{-1}$ arising from
(\ref{Exp_FMN1}).
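For the reader's convenience, we sketch the elementary step behind (\ref{est5}) (a standard Gronwall argument, written here under the stated assumptions): since (\ref{Exp_FMN1}) provides a uniform bound $\sup_{t}\left\Vert e^{-it\Delta_{\theta}}\right\Vert _{\mathcal{L}\left( L^{2}(\mathbb{R})\right) }\leq C$ for $\left\vert \theta\right\vert <\delta$, the Duhamel formula (\ref{Duhamel}) yields
\[
\left\Vert u_{t}(\theta)\right\Vert _{L^{2}(\mathbb{R})}\leq C\left\Vert
u_{0}\right\Vert _{L^{2}(\mathbb{R})}+C\left\Vert \mathcal{V}\right\Vert
_{L^{\infty}(\mathbb{R})}\int_{0}^{t}\left\Vert u_{s}(\theta)\right\Vert
_{L^{2}(\mathbb{R})}\,ds\,,
\]
and Gronwall's lemma then gives the exponential bound in (\ref{est5}).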
The relation (\ref{est5_1}) yields a finite-time control, depending on
$\left\Vert \mathcal{V}\right\Vert _{L^{\infty}(\mathbb{R})}$, of the error
introduced on the quantum evolution by the interface conditions. However, when
$\mathcal{V}$ describes the (possibly non-linear) interactions involving
charge carriers in resonant heterostructures, its norm $\left\Vert
\mathcal{V}\right\Vert _{L^{\infty}(\mathbb{R})}$ is expected to be small
compared to the energy of the particles, while the quantum evolution of
relevant observables is characterized by a long time scale, corresponding to
the inverse of the imaginary part of the shape resonances (examples of this
mechanism are exhibited in \cite{PrSj} and \cite{FMN3}). In this framework,
the use of modified Hamiltonians of the type $\mathcal{H}_{\theta}\left(
\mathcal{V}\right) $ would be justified by a stronger uniform-in-time
estimate for the error $\left\Vert u_{t}(\theta)-u_{t}(0)\right\Vert
_{L^{2}(\mathbb{R})}$ as $\theta\rightarrow0$.
Adopting a different approach, in what follows the operator $\mathcal{H}
_{\theta}\left( \mathcal{V}\right) $ is considered as a non-selfadjoint
perturbation of the selfadjoint Hamiltonian $\mathcal{H}_{0}\left(
\mathcal{V}\right) $. Non-selfadjoint perturbations of the type
$T(x)=T+xAB^{\ast}$ have been studied in \cite{Kato} where, under smoothness
assumptions on $A$ and $B$, the 'stationary' wave operators for the couple
$\left\{ T(x),T\right\} $ are given and the corresponding similarity between
$T$ and $T(x)$ is exploited to define the dynamics generated by $-iT(x)$. This
strategy is adapted here to the case where $T=\mathcal{H}_{0}\left(
\mathcal{V}\right) $, while the perturbation is determined by generic,
non-mixed, interface conditions occurring at the boundaries of the potential's
support. This is a larger class of operators, parametrized by a couple of
complex parameters, which includes both the cases of $\mathcal{H}_{\theta}\left(
\mathcal{V}\right) $ and $\left( \mathcal{H}_{\theta}\left( \mathcal{V}
\right) \right) ^{\ast}$. From an accurate resolvent analysis and explicit
generalized eigenfunction formulas, we deduce, in this extended framework, a
small-$\theta$ expansion of the 'stationary wave operators'. Then the quantum
evolution group generated by $-i\mathcal{H}_{\theta}\left( \mathcal{V}
\right) $ is determined by conjugation from $e^{-it\mathcal{H}_{0}\left(
\mathcal{V}\right) }$ and a uniform-in-time estimate for the 'distance'
between the two dynamics is obtained (see Theorem \ref{Theorem1} below).
Similarity transformations, from non-selfadjoint to similar selfadjoint
operators, have been recently studied in \cite{KreSieZel}, where the authors
focus on the particular case of 1D Schr\"{o}dinger operators defined with
non-selfadjoint Robin-type conditions occurring at the boundary of an
interval. In the case of parity and time-reversal symmetry ($\mathcal{PT}
$-symmetry), the similarity of this model with a selfadjoint Hamiltonian is
derived. It is worth noticing that, when $\theta\in i\mathbb{R}$, the modified
Laplacian $\Delta_{\theta}$ actually exhibits the $\mathcal{PT}$-symmetry
(once the parity is defined with respect to the point $\left( a+b\right)
/2$). However, the models introduced in the next sections are generically not
$\mathcal{PT}$-symmetric (see the definition (\ref{Q_teta}) below).
\subsection{Schr\"{o}dinger operators with non-mixed interface conditions.}
We consider the family of modified Schr\"{o}dinger operators $Q_{\theta
_{1},\theta_{2}}(\mathcal{V})$, depending on a couple of complex parameters,
$\left( \theta_{1},\theta_{2}\right) \in\mathbb{C}^{2}$, and on a
selfadjoint short-range potential, compactly supported over the interval
$\left[ a,b\right] \subset\mathbb{R}$
\begin{equation}
\mathcal{V}\in L^{1}(\mathbb{R},\mathbb{R})\,,\qquad\text{supp}
\mathcal{V}=\left[ a,b\right] \,. \label{V}
\end{equation}
The parameters $\theta_{1}$ and $\theta_{2}$ fix the interface conditions
\begin{equation}
\left\{
\begin{array}
[c]{ccc}
e^{-\frac{\theta_{1}}{2}}u(b^{+})=u(b^{-})\,, & & e^{-\frac{\theta_{2}}{2}
}u^{\prime}(b^{+})=u^{\prime}(b^{-})\,,\\
& & \\
e^{-\frac{\theta_{1}}{2}}u(a^{-})=u(a^{+})\,, & & e^{-\frac{\theta_{2}}{2}
}u^{\prime}(a^{-})=u^{\prime}(a^{+})\,,
\end{array}
\right. \label{B_C_1}
\end{equation}
occurring at the boundary of the potential's support and $Q_{\theta_{1}
,\theta_{2}}(\mathcal{V})$ is defined as follows
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V}):\left\{
\begin{array}
[c]{l}
D\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) =\left\{ u\in
H^{2}\left( \mathbb{R}\backslash\left\{ a,b\right\} \right) \,\left\vert
\ \text{(\emph{\ref{B_C_1}}) holds}\right. \right\} \,,\\
\\
\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\,u\right) (x)=-u^{\prime
\prime}(x)+\mathcal{V}(x)\,u(x)\,,\qquad x\in\mathbb{R}\backslash\left\{
a,b\right\} \,.
\end{array}
\right. \label{Q_teta}
\end{equation}
The set $\left\{ Q_{\theta_{1},\theta_{2}}(\mathcal{V})\,,\ \left(
\theta_{1},\theta_{2}\right) \in\mathbb{C}^{2}\right\} $ is closed w.r.t.
the adjoint operation: a direct computation shows that
\begin{equation}
\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) ^{\ast}=Q_{-\theta
_{2}^{\ast},-\theta_{1}^{\ast}}(\mathcal{V})\,. \label{Q_teta_adj}
\end{equation}
The subset of selfadjoint operators in this class is identified by the
conditions: for $\theta_{j}=r_{j}e^{i\varphi_{j}}$, $j=1,2$
\begin{equation}
\left\{
\begin{array}
[c]{l}
\varphi_{1}+\varphi_{2}=\pi+2\pi k\,,\quad k\in\mathbb{Z}\,,\\
r_{1}=r_{2}\,.
\end{array}
\right. \label{Selafadj_cond}
\end{equation}
When (\ref{Selafadj_cond}) are not satisfied, the corresponding operator
$Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ is neither selfadjoint nor symmetric,
since in this case: $Q_{\theta_{1},\theta_{2}}(\mathcal{V})\not \subset
\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) ^{\ast}$.
For each couple $\left\{ \theta_{1},\theta_{2}\right\} $, $Q_{\theta
_{1},\theta_{2}}\left( \mathcal{V}\right) $ identifies with a, possibly
non-selfadjoint, extension of the Hermitian operator $Q^{0}(\mathcal{V})$
\begin{equation}
D\left( Q^{0}(\mathcal{V})\right) =\left\{ u\in H^{2}\left( \mathbb{R}
\right) \,\left\vert \ u(\alpha)=u^{\prime}(\alpha)=0\,,\ \alpha=a,b\right.
\right\} \,,
\end{equation}
and defines an explicitly solvable model w.r.t. the selfadjoint Hamiltonian
$Q_{0,0}\left( \mathcal{V}\right) $. Non-selfadjoint models arising from
proper extensions of Hermitian operators with gaps have been already
considered in literature, for instance in \cite{DerMa}, \cite{Ryz1} (see also
\cite{Mamo1}-\cite{MaMo3} and \cite{BrMaNaWo} for the general case of adjoint
pairs of operators). In these works, the formalism of \emph{boundary triples
}(e.g. in \cite{LyaSto} for adjoint pairs) is adopted; this leads to
Krein-like formulas expressing the resolvent of an extended operator in terms
of the resolvent of a 'reference' extension plus a finite rank part depending
on the Weyl function of the triple. In Section \ref{Section_Resolvent_1} we
give Krein-like formulas for the difference $\left. \left( Q_{\theta
_{1},\theta_{2}}\left( \mathcal{V}\right) -z\right) ^{-1}-\left(
Q_{0,0}\left( \mathcal{V}\right) -z\right) ^{-1}\right. $. Exploiting this
framework, we show that $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ is an
analytic family in the sense of Kato both w.r.t. the variables $\theta_{1}$
and $\theta_{2}$ and study its spectral profile depending on $\mathcal{V}$.
The result is exposed in the Proposition \ref{Proposition_spectrum}; in
particular, for a defined positive $\mathcal{V}$, we obtain: $\sigma\left(
Q_{\theta_{1},\theta_{2}}\left( \mathcal{V}\right) \right) =\sigma
_{ac}\left( Q_{0,0}\left( \mathcal{V}\right) \right) =\mathbb{R}_{+}$,
provided that $\theta_{1}$ and $\theta_{2}$ are small enough.
Under the same assumptions, in Section \ref{Section_Similarity}, a family of
intertwining operators $\mathcal{W}_{\theta_{1},\theta_{2}}$\ for the
couple\newline$\left\{ Q_{\theta_{1},\theta_{2}}(\mathcal{V}),Q_{0,0}
(\mathcal{V})\right\} $ is introduced as the analogue of the usual
stationary wave operators in selfadjoint frameworks. Using the eigenfunctions
expansion obtained in Subsection \ref{Section_Resolvent_2}, we get a
small-$\theta_{i}$ expansion of $\mathcal{W}_{\theta_{1},\theta_{2}}$ allowing
to define the quantum evolution group $e^{-iQ_{\theta_{1},\theta_{2}}\left(
\mathcal{V}\right) }$ from $e^{-iQ_{0,0}\left( \mathcal{V}\right) }$ by
conjugation. Then, we develop a quantitative comparison showing that $\left.
e^{-iQ_{\theta_{1},\theta_{2}}\left( \mathcal{V}\right) }-e^{-iQ_{0,0
\left( \mathcal{V}\right) }\right. $ is controlled by $\left\vert
\theta_{i}\right\vert $, $i=1,2$, uniformly in time, in the $L^{2}$-operator
norm. The result is presented in the following theorem and the proof is given
in Subsection \ref{Section_Theorem1}. It can be adapted to the particular case
of $\mathcal{H}_{\theta}\left( \mathcal{V}\right) $, by noticing that:
$\mathcal{H}_{\theta}\left( \mathcal{V}\right) =Q_{\theta,3\theta
}(\mathcal{V})$.
\begin{theorem}
\label{Theorem1}Let $\mathcal{V}$ fulfill the conditions (\ref{V}) with
\begin{equation}
\left\langle u,\mathcal{V\,}u\right\rangle _{L^{2}(\left( a,b\right)
)}>0\qquad\forall\,u\in L^{2}(\left( a,b\right) )\,,
\end{equation}
and assume $\left\vert \theta_{j}\right\vert <\delta$, $j=1,2$, with
$\delta>0$ small enough. Then $-iQ_{\theta_{1},\theta_{2}}(\mathcal{V})$
generates a strongly continuous group of bounded operators on $L^{2}\left(
\mathbb{R}\right) $, $\left\{ e^{-itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}\right\} _{t\in\mathbb{R}}$. For a fixed $t$, $e^{-itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}$ defines an analytic family of bounded operators
w.r.t. $\left( \theta_{1},\theta_{2}\right) $ and the expansion
\begin{equation}
e^{-itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}=e^{-itQ_{0,0}(\mathcal{V})}+\mathcal{R}\left( t,\theta_{1},\theta_{2}\right) \,,
\end{equation}
holds with a remainder, uniformly bounded in time, such that
\begin{equation}
\sup_{t\in\mathbb{R}}\left\Vert \mathcal{R}\left( t,\theta_{1},\theta
_{2}\right) \right\Vert _{\mathcal{L}\left( L^{2}(\mathbb{R})\right)
}=\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left( \theta_{2}\right)
\,.
\end{equation}
\end{theorem}
In Subsection \ref{Section_WO} the pair $\left\{ Q_{\theta_{1},\theta_{2}}\left( \mathcal{V}\right) ,Q_{0,0}\left( \mathcal{V}\right) \right\}
$ is considered as a scattering system and we investigate the existence of
non-stationary wave operators. Under the assumptions of Theorem
\ref{Theorem1}, it is shown that $\mathcal{W}_{\theta_{1},\theta_{2}}$
coincides with a 'physical' wave operator, according to the time-dependent
definition (Lemma \ref{Lemma_WO}). Although exploited in our analysis, the
small-perturbation condition does not seem to be necessary for
this result: a possible strategy for extending it to the case where $\left\{
\theta_{1},\theta_{2}\right\} $ are not small is finally mentioned.
Further perspectives of this work, concerning the regime of quantum wells in a
semiclassical island, are discussed in Section \ref{Sec_small_h}.
\subsection{\label{Sec_Notation}Notation}
In what follows, we make use of a generalization of the Landau notation,
$\mathcal{O}\left( \cdot\right) $, defined according to:
\begin{definition}
\label{Landau_Notation}Let $X$ be a metric space and $f,g:X\rightarrow
\mathbb{C}$. Then $f=\mathcal{O}\left( g\right) $ $\overset{def}{\Longleftrightarrow}$ $f(x)=p(x)g(x)$ for all $x\in X$, with $p$ a bounded
map $X\rightarrow\mathbb{C}$.
\end{definition}
The following notation is also adopted.\smallskip\newline$1_{\Omega}(\cdot)$ is the
characteristic function of a domain $\Omega$.\smallskip\newline$\mathcal{B}_{\delta}(p)$ is the open ball of radius $\delta$ centered at a point
$p\in\mathbb{C}$.\smallskip\newline$\mathcal{C}_{x}^{k}(U)$ is the set of
$\mathcal{C}^{k}$-continuous functions w.r.t. $x\in U\subseteq\mathbb{R}$.\smallskip\newline$\mathcal{H}_{z}(D)$ is the set of holomorphic functions
w.r.t. $z\in D\subseteq\mathbb{C}$.\smallskip\newline$\partial_{j}f\left(
x_{1},...,x_{n}\right) $, $j\in\left\{ 1,...,n\right\} $, denotes the
derivative of $f$ w.r.t. the variable $x_{j}$.\smallskip\newline$S_{\eta}$
denotes the complex half-plane $S_{\eta}=\left\{ z\in\mathbb{C\,}\left\vert
\ \operatorname{Im}z>-\eta\right. \right\} $. In particular, $\mathbb{C}^{+}$ coincides with $S_{0}$.\smallskip\newline The notation '$\lesssim$',
appearing in some of the proofs, denotes the inequality '$\leq C$', with $C$
a suitable positive constant.
\section{\label{Section_Resolvent_1}Boundary triples and Krein-like resolvent
formulas.}
Point perturbation models, such as $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$, can be
described as restrictions of a \emph{larger} operator through linear relations
on a Hilbert space. Let us introduce $Q(\mathcal{V})$
\begin{equation}
\left\{
\begin{array}
[c]{l}
D(Q(\mathcal{V}))=H^{2}\left( \mathbb{R}\backslash\left\{ a,b\right\}
\right) \,,\\
\\
\left( Q(\mathcal{V})\,u\right) (x)=-u^{\prime\prime}(x)+\mathcal{V}(x)\,u(x)\qquad\text{for }x\in\mathbb{R}\backslash\left\{ a,b\right\} \,,
\end{array}
\right. \label{Q}
\end{equation}
with $\mathcal{V}$ defined according to (\ref{V}), and let $Q^{0
(\mathcal{V})$ be such that: $\left( Q^{0}(\mathcal{V})\right) ^{\ast
}=Q(\mathcal{V})$. Explicitly, $Q^{0}(\mathcal{V})$ identifies with the
symmetric restriction of $Q(\mathcal{V})$ to the domain
\begin{equation}
D\left( Q^{0}(\mathcal{V})\right) =\left\{ u\in D(Q(\mathcal{V
))\,\left\vert \ u(\alpha)=u^{\prime}(\alpha)=0\ \forall\,\alpha\in\left\{
a,b\right\} \right. \right\} \,.
\end{equation}
The related defect spaces, $\mathcal{N}_{z}=\ker(Q(\mathcal{V})-z)$, are
$4$-dimensional subspaces of $D(Q(\mathcal{V}))$ generated, for $z\in
\mathbb{C}\backslash\mathbb{R}$, by the independent solutions to the problem
\begin{equation}
\left\{
\begin{array}
[c]{l
(-\partial_{x}^{2}+\mathcal{V}-z)u(x)=0\,,\quad x\in\mathbb{R}\backslash
\left\{ a,b\right\} \,,\\
\\
u\in D(Q(\mathcal{V}))\,.
\end{array}
\right. \label{defect_fun
\end{equation}
A \emph{boundary triple} $\left\{ \mathbb{C}^{4},\Gamma_{0},\Gamma
_{1}\right\} $ for $Q(\mathcal{V})$ is defined with two linear boundary maps
$\Gamma_{i=0,1}:D(Q(\mathcal{V}))\rightarrow\mathbb{C}^{4}$ fulfilling, for
any $\psi,\varphi\in D(Q(\mathcal{V}))$, the equation
\begin{equation}
\left\langle \psi,Q(\mathcal{V})\varphi\right\rangle _{L^{2}(\mathbb{R})}-\left\langle Q(\mathcal{V})\psi,\varphi\right\rangle _{L^{2}(\mathbb{R})}=\left\langle \Gamma_{0}\psi,\Gamma_{1}\varphi\right\rangle _{\mathbb{C}^{4}}-\left\langle \Gamma_{1}\psi,\Gamma_{0}\varphi\right\rangle
_{\mathbb{C}^{4}}\,,\label{BVT_1}
\end{equation}
and such that the transformation $\left( \Gamma_{0},\Gamma_{1}\right)
:D(Q(\mathcal{V}))\rightarrow\mathbb{C}^{4}\times\mathbb{C}^{4}$ is
surjective. A proper\emph{ }extension $Q_{ext}$ of $Q^{0}(\mathcal{V})$ is
called \emph{almost solvable }if there exists a boundary triple $\left\{
\mathbb{C}^{4},\Gamma_{0},\Gamma_{1}\right\} $ and a matrix $M\in
\mathbb{C}^{4,4}$ such that it coincides with the restriction of
$Q(\mathcal{V})$ to the domain: $\left\{ u\in D(Q(\mathcal{V}))\,\left\vert
\ M\Gamma_{0}u=\Gamma_{1}u\right. \right\} $. Using the notation
$Q_{ext}=Q_{M}(\mathcal{V})$, the characterization
\begin{equation}
Q^{0}(\mathcal{V})\subset Q_{M}(\mathcal{V})\subset Q(\mathcal{V})\,,\quad\left( Q_{M}(\mathcal{V})\right) ^{\ast}=Q_{M^{\ast}}(\mathcal{V})\,.\label{extensions}
\end{equation}
holds (e.g. \cite{Ryz1}, Theorem 1.1). In what follows, $\tilde{Q}(\mathcal{V})$ denotes the particular restriction of $Q(\mathcal{V})$
associated with the conditions $\Gamma_{0}u=0$, i.e.
\begin{equation}
D\left( \tilde{Q}(\mathcal{V})\right) =\left\{ u\in D\left( Q(\mathcal{V
)\right) \,,\ \Gamma_{0}u=0\right\} \,.
\end{equation}
According to the relation (\ref{BVT_1}), $\tilde{Q}(\mathcal{V})$ is
selfadjoint and $\mathbb{C}\backslash\mathbb{R}\subset\mathcal{\rho}\left(
\tilde{Q}(\mathcal{V})\right) $. Let $\gamma(z,\mathcal{V})$ and
$q(z,\mathcal{V})$ be the linear maps defined by
\begin{equation}
\gamma(z,\mathcal{V})=\left( \left. \Gamma_{0}\right\vert _{\mathcal{N}_{z}}\right) ^{-1}\,,\qquad q(z,\mathcal{V})=\Gamma_{1}\circ\gamma(z,\mathcal{V})\,,\qquad z\in\mathcal{\rho}\left( \tilde{Q}(\mathcal{V})\right)
\,,\label{gamma_q_def}
\end{equation}
where $\left. \Gamma_{0}\right\vert _{\mathcal{N}_{z}}$ is the restriction of
$\Gamma_{0}$ to $\mathcal{N}_{z}$. These define holomorphic families of
bounded operators in $\mathcal{L}\left( \mathbb{C}^{4},L^{2}\left(
\mathbb{R}\right) \right) $ and $\mathcal{L}\left( \mathbb{C
^{4},\mathbb{C}^{4}\right) $ (e.g. in \cite{DerMa} and \cite{BruGeyPan}). The
maps $\gamma(\cdot,z,\mathcal{V})$ and $q(z,\mathcal{V})$ are respectively
referred to as the \emph{Gamma field }and the \emph{Weyl function} associated
with the triple $\left\{ \mathbb{C}^{4},\Gamma_{0},\Gamma_{1}\right\} $.
With this formalism, a resolvent formula expresses the difference $\left.
\left( Q_{M}(\mathcal{V})-z\right) ^{-1}-\left( \tilde{Q}(\mathcal{V})-z\right) ^{-1}\right. $ in terms of a finite rank operator with range
$\mathcal{N}_{z}$:
\begin{equation}
\left( Q_{M}(\mathcal{V})-z\right) ^{-1}-\left( \tilde{Q}(\mathcal{V})-z\right) ^{-1}=\gamma(z,\mathcal{V})\left( M-q(z,\mathcal{V})\right)
^{-1}\gamma^{\ast}(\bar{z},\mathcal{V})\,,\quad z\in\rho\left( Q_{M}(\mathcal{V})\right) \cap\mathcal{\rho}\left( \tilde{Q}(\mathcal{V})\right)
\label{krein}
\end{equation}
(e.g. in \cite{Ryz1}, Theorem 1.2). In many situations, the interface
conditions occurring at the points $\left\{ a,b\right\} $ can also be
represented in the form: $A\Gamma_{0}u=B\Gamma_{1}u$, where $A,B\in
\mathbb{C}^{4,4}$. We denote by $Q_{A,B}(\mathcal{V})$ the corresponding
restriction
\begin{equation}
\left\{
\begin{array}
[c]{l}
D\left( Q_{A,B}(\mathcal{V})\right) =\left\{ u\in D(Q(\mathcal{V}))\,\left\vert \ A\Gamma_{0}u=B\Gamma_{1}u\right. \right\} \,,\\
\\
Q_{A,B}(\mathcal{V})\,u=Q(\mathcal{V})\,u\,.
\end{array}
\right. \label{restriction_def}
\end{equation}
With this parametrization, we have: $\tilde{Q}(\mathcal{V})=Q_{1,0}(\mathcal{V})$, while the resolvent formula rephrases as
\begin{equation}
\left( Q_{M}(\mathcal{V})-z\right) ^{-1}-\left( \tilde{Q}(\mathcal{V})-z\right) ^{-1}=-\gamma(z,\mathcal{V})\left[ \left( Bq(z,\mathcal{V})-A\right) ^{-1}B\right] \gamma^{\ast}(\bar{z},\mathcal{V})\,,\qquad
z\in\rho\left( Q_{M}(\mathcal{V})\right) \,.\label{krein_AB}
\end{equation}
In view of a comparison between the quantum models arising from
$Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ and $Q_{0,0}(\mathcal{V})$, a natural
choice is
\begin{equation}
\begin{array}
[c]{ccc}
\Gamma_{0}u=
\begin{pmatrix}
u^{\prime}(b^{-})-u^{\prime}(b^{+})\smallskip\medskip\\
u(b^{+})-u(b^{-})\smallskip\medskip\\
u^{\prime}(a^{-})-u^{\prime}(a^{+})\smallskip\medskip\\
u(a^{+})-u(a^{-})
\end{pmatrix}
\,, & & \Gamma_{1}u=\frac{1}{2}
\begin{pmatrix}
u(b^{+})+u(b^{-})\smallskip\medskip\\
u^{\prime}(b^{+})+u^{\prime}(b^{-})\smallskip\medskip\\
u(a^{+})+u(a^{-})\smallskip\medskip\\
u^{\prime}(a^{+})+u^{\prime}(a^{-})
\end{pmatrix}
\,,
\end{array}
\, \label{BVT}
\end{equation}
which leads to: $\tilde{Q}(\mathcal{V})=Q_{0,0}(\mathcal{V})$. According to
the definitions (\ref{Q_teta}) and (\ref{Q}), the operator $Q_{\theta
_{1},\theta_{2}}(\mathcal{V})$ identifies with the restriction of
$Q(\mathcal{V})$ parametrized by the $\mathbb{C}^{4,4}$-block-diagonal
matrices
\begin{equation}
\begin{array}
[c]{ccc}
A_{\theta_{1},\theta_{2}}=
\begin{pmatrix}
a(\theta_{1},\theta_{2}) & & \\
& & \\
& & a(-\theta_{1},-\theta_{2})
\end{pmatrix}
\,, & & B_{\theta_{1},\theta_{2}}=
\begin{pmatrix}
b(\theta_{1},\theta_{2}) & & \\
& & \\
& & b(-\theta_{1},-\theta_{2})
\end{pmatrix}
\,,
\end{array}
\label{AB_teta1,2}
\end{equation}
defined with
\begin{equation}
\begin{array}
[c]{ccc}
a(\theta_{1},\theta_{2})=
\begin{pmatrix}
1+e^{\frac{\theta_{2}}{2}} & 0\\
0 & 1+e^{\frac{\theta_{1}}{2}
\end{pmatrix}
\,, & & b(\theta_{1},\theta_{2})=
\begin{pmatrix}
0 & 1-e^{\frac{\theta_{2}}{2}}\\
e^{\frac{\theta_{1}}{2}}-1 & 0
\end{pmatrix}
\,.
\end{array}
\label{ab_teta1,2}
\end{equation}
Using (\ref{BVT}) and (\ref{AB_teta1,2})-(\ref{ab_teta1,2}), the linear
relations (\ref{B_C_1}) rephrase as
\begin{equation}
A_{\theta_{1},\theta_{2}}\Gamma_{0}u=B_{\theta_{1},\theta_{2}}\Gamma_{1}u\,,
\label{bc_teta1_teta_2
\end{equation}
which leads to the equivalent definition
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V}):\left\{
\begin{array}
[c]{l
\mathcal{D}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) =\left\{
u\in D(Q(\mathcal{V}))\,\left\vert \ A_{\theta_{1},\theta_{2}}\Gamma
_{0}u=B_{\theta_{1},\theta_{2}}\Gamma_{1}u\right. \right\} \,,\\
\\
Q_{\theta_{1},\theta_{2}}(\mathcal{V})\,u=Q(\mathcal{V})\,u\,.
\end{array}
\right. \label{Q_teta1_teta2
\end{equation}
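As a quick consistency check (added remark, not needed in the sequel), setting $\theta_{1}=\theta_{2}=0$ in (\ref{ab_teta1,2}) gives
\[
a(0,0)=2\cdot1_{\mathbb{C}^{2}}\,,\qquad b(0,0)=0\,,
\]
so that the condition $A_{0,0}\Gamma_{0}u=B_{0,0}\Gamma_{1}u$ reduces to $\Gamma_{0}u=0$, i.e. to the continuity of $u$ and $u^{\prime}$ at the points $a$ and $b$; this confirms that the selfadjoint model $Q_{0,0}(\mathcal{V})$ coincides with the reference extension $\tilde{Q}(\mathcal{V})$ singled out by the boundary maps (\ref{BVT}).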
In this framework, the relation (\ref{krein_AB}) explicitly writes as
\begin{equation}
\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})-z\right) ^{-1}=\left(
Q_{0,0}(\mathcal{V})-z\right) ^{-1}-\sum_{i,j=1}^{4}\left[ \left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
^{-1}B_{\theta_{1},\theta_{2}}\right] _{ij}\left\langle \gamma(e_{j},\bar
{z},\mathcal{V}),\cdot\right\rangle _{L^{2}(\mathbb{R})}\gamma(e_{i},z,\mathcal{V})\,, \label{krein_1}
\end{equation}
where $\left\{ e_{i}\right\} _{i=1}^{4}$ is the standard basis in
$\mathbb{C}^{4}$, while $\gamma(v,z,\mathcal{V})$ denotes the action of
$\gamma(z,\mathcal{V})$ on the vector $v$. The corresponding integral kernel,
$\mathcal{G}_{\theta_{1},\theta_{2}}^{z}(x,y)$, is
\begin{equation}
\mathcal{G}_{\theta_{1},\theta_{2}}^{z}(x,y)=\mathcal{G}_{0,0}^{z}(x,y)-\sum_{i,j=1}^{4}\left[ \left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) ^{-1}B_{\theta_{1},\theta_{2}}\right] _{ij}\gamma(e_{j},y,z,\mathcal{V})\,\gamma(e_{i},x,z,\mathcal{V})\,. \label{G_z_teta}
\end{equation}
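Note, as a simple sanity check (our observation, not used below), that at $\theta_{1}=\theta_{2}=0$ one has $B_{0,0}=0$ and $A_{0,0}=2\cdot1_{\mathbb{C}^{4}}$, so that
\[
\left( B_{0,0}\,q(z,\mathcal{V})-A_{0,0}\right) ^{-1}B_{0,0}=\left( -2\cdot1_{\mathbb{C}^{4}}\right) ^{-1}\cdot0=0\,,
\]
and the finite rank corrections in (\ref{krein_1}) and (\ref{G_z_teta}) disappear: the formulas consistently return the resolvent and the Green's function of the reference operator $Q_{0,0}(\mathcal{V})$.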
\subsection{The Jost's solutions.\label{Section_Jost}}
In order to obtain explicit representations of the operators $\gamma
(\cdot,z,\mathcal{V})$ and $q(z,\mathcal{V})$ appearing at the r.h.s. of
(\ref{krein_1}), it is necessary to define a particular basis of the defect
spaces $\mathcal{N}_{z}$. A possible choice is given in terms of the Green's
function of the operator $\left( Q_{0,0}(\mathcal{V})-z\right) $ and of
its derivatives. This motivates the forthcoming analysis, where the
properties of the functions in $\mathcal{N}_{z}$ are investigated by using the
Jost's solutions associated with $Q_{0,0}(\mathcal{V})$. Our aim is to provide
explicit low- and high-energy asymptotics in the case of compactly
supported and positive definite potentials. We follow a standard approach,
adapting arguments from one-dimensional scattering to this particular case.
operators with generic short range potentials, are presented in \cite{Yafa}.
Consider the problem
\begin{equation}
\left( -\partial_{x}^{2}+\mathcal{V}\right) u=\zeta^{2}u\,,\qquad\text{for
}x\in\mathbb{R}\ \text{and }\zeta\in\mathbb{C}^{+}\,. \label{Jost_eq
\end{equation}
The Jost solutions to (\ref{Jost_eq}), $\chi_{\pm}$, are respectively defined
by the exterior conditions
\begin{equation}
\left. \chi_{+}\right\vert _{x>b}=e^{i\zeta x}\,,\qquad\left. \chi
_{-}\right\vert _{x<a}=e^{-i\zeta x}\,. \label{Jost_sol
\end{equation}
The next proposition summarizes some properties of the functions $\chi_{\pm}$ in
the case of compactly supported potentials.
\begin{proposition}
\label{Proposition_Jost}Let $\mathcal{V}$ be defined according to (\ref{V}).
The solutions $\chi_{\pm}$ to the problem (\ref{Jost_eq})-(\ref{Jost_sol})
belong to $\mathcal{C}_{x}^{1}\left( \mathbb{R},\,\mathcal{H}_{\zeta}\left(
\mathbb{C}^{+}\right) \right) $ and have continuous extensions to the real
axis. For $\zeta\in\overline{\mathbb{C}^{+}}$, the relations
\begin{equation}
\chi_{\pm}\left( x,\zeta\right) =e^{\pm i\zeta x}\mathcal{O}\left(
1\right) \,,\qquad\partial_{x}\chi_{\pm}\left( x,\zeta\right) =e^{\pm
i\zeta x}\mathcal{O}\left( 1+\left\vert \zeta\right\vert \right) \,,
\label{Jost_sol_bound
\end{equation}
hold with $\mathcal{O}(\cdot)$ referred to the metric space $\mathbb{R}\times\overline{\mathbb{C}^{+}}$.
\end{proposition}
The proof follows, with slight modifications, the one given in \cite{Yafa}
in the case of 1D short-range potentials: an integral setting for
(\ref{Jost_eq}) and explicit estimates for the corresponding integral kernel
are used to discuss the convergence of the solution developed as a Picard
series. To this aim, we need the following simple Lemma.
\begin{lemma}
\label{Lemma_Jost_est}Let $\mathcal{V}$ be defined according to (\ref{V}) and
$F(x)=\left\vert \int_{x}^{x_{0}}\left\vert \,\mathcal{V}(t)\right\vert
\,dt\right\vert $, with $x_{0}\neq x$. If $f$ is continuous and such that:
$\left\vert f(x)\right\vert \leq\,\frac{F^{n}(x)}{n!}$ for $n\in\mathbb{N}$,
then it results
\begin{equation}
\left\vert \int_{x}^{x_{0}}f(t)\mathcal{V}(t)\,dt\right\vert \leq\frac
{F^{n+1}(x)}{(n+1)!}\,. \label{G_ineq
\end{equation}
\end{lemma}
\begin{proof}
For $x_{0}>x$ we have $F(x)=\int_{x}^{x_{0}}\left\vert \,\mathcal{V}(t)\right\vert \,dt$, so that $\partial_{x}F(x)=-\left\vert \mathcal{V}(x)\right\vert $. Then
\[
\left\vert \int_{x}^{x_{0}}f(t)\mathcal{V}(t)\,dt\right\vert \leq-\int
_{x}^{x_{0}}\left\vert f(t)\right\vert \,\partial_{t}F(t)\,dt\leq-\int
_{x}^{x_{0}}\frac{F^{n}(t)}{n!}\partial_{t}F(t)\,dt=-\int_{x}^{x_{0}}
\frac{\partial_{t}F^{n+1}(t)}{\left( n+1\right) !}\,dt\,.
\]
For $x_{0}<x$, we have $F(x)=\int_{x_{0}}^{x}\left\vert \,\mathcal{V}(t)\right\vert \,dt$ and $\partial_{x}F(x)=\left\vert \mathcal{V}(x)\right\vert $. Then
\[
\left\vert \int_{x_{0}}^{x}f(t)\mathcal{V}(t)\,dt\right\vert \leq\int_{x_{0}}^{x}\left\vert f(t)\right\vert \,\partial_{t}F(t)\,dt\leq\int_{x_{0}}^{x}\frac{F^{n}(t)}{n!}\partial_{t}F(t)\,dt=\int_{x_{0}}^{x}\frac{\partial
_{t}F^{n+1}(t)}{\left( n+1\right) !}\,dt\,.
\]
Both the above relations imply (\ref{G_ineq}) since, by definition,
$F(x_{0})=0$.
\end{proof}
\begin{proof}
[Proof of the Proposition \ref{Proposition_Jost}]Here we focus on the case of
$\chi_{+}$, while the problem for $\chi_{-}$ can be analyzed similarly. Using
the integral kernel: $\left. -\left( \zeta\right) ^{-1}\sin\zeta\left(
t-x\right) \right. $, the equation (\ref{Jost_eq}) rephrases into the
equivalent integral form
\begin{equation}
u(x,\zeta)=u_{0}(x,\zeta)-
{\displaystyle\int\limits_{x_{0}}^{x}}
\frac{\sin\zeta\left( t-x\right) }{\zeta}\mathcal{V}(t)u(t,\zeta)\,dt\,.
\label{Int_Jost_eq
\end{equation}
In order to account for the conditions (\ref{Jost_sol}), we replace in
(\ref{Int_Jost_eq}): $x_{0}=b$, $x<b$ and $u_{0}=e^{i\zeta x}$. Then,
introducing the rescaled functions: $b_{+}=e^{-i\zeta x}\chi_{+}$ an
\begin{equation}
\mathcal{K}_{+}\left( t,x,\zeta\right) =-e^{i\zeta(t-x)}\frac{\sin
\zeta\left( t-x\right) }{\zeta}\,, \label{Jost_eq_kernel
\end{equation}
we get the equatio
\begin{equation}
b_{+}(x,\zeta)=1
{\displaystyle\int\limits_{x}^{b}}
\mathcal{K}_{+}\left( t,x,\zeta\right) \mathcal{V}(t)b_{+}(t,\zeta
)\,dt\,,\quad\text{for }x<b\,, \label{Int_Jost_eq_rescaled
\end{equation}
while, for $x>b$ one has: $b_{+}=1$. The corresponding solution formally
writes as a Picard series: $\left. b_{+}=\sum_{n=0}^{+\infty}b_{+,n}\right.
$ whose terms are defined according to
\begin{equation}
b_{+,0}=1\,,\qquad b_{+,n}(x,\zeta)=
{\displaystyle\int\limits_{x}^{b}}
\mathcal{K}_{+}\left( t,x,\zeta\right) \mathcal{V}(t)b_{+,n-1
(t,\zeta)\,dt\,,\quad n\in\mathbb{N}^{\ast}\,,\quad x<b\,. \label{Jost_Picard
\end{equation}
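To fix ideas (a small worked instance of (\ref{Jost_Picard}), added for the reader's convenience), the first two terms of the series read
\[
b_{+,1}(x,\zeta)=
{\displaystyle\int\limits_{x}^{b}}
\mathcal{K}_{+}\left( t,x,\zeta\right) \mathcal{V}(t)\,dt\,,\qquad b_{+,2}(x,\zeta)=
{\displaystyle\int\limits_{x}^{b}}
\mathcal{K}_{+}\left( t,x,\zeta\right) \mathcal{V}(t)\,b_{+,1}(t,\zeta)\,dt\,,
\]
and, once a uniform bound for the kernel is available, Lemma \ref{Lemma_Jost_est} applied iteratively produces the factorial decay of $\left\vert b_{+,n}\right\vert $ exploited in (\ref{Jost_est_0}) below.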
Let $\eta>0$ and introduce the auxiliary domain $S_{\eta}$ (see the definition
given in the subsection \ref{Sec_Notation}). The rescaled kernel is a smooth
map of $t$ and $x$ with values in $\mathcal{H}_{\zeta}\left( S_{\eta}\right)
$; then, using (\ref{Jost_Picard}), an induction over $n$ leads to: $\left.
b_{+,n}\in\mathcal{C}_{x}^{1}\left( (-\infty,b),\,\mathcal{H}_{\zeta}\left(
S_{\eta}\right) \right) \right. $. Next, the convergence of the sum
$\sum_{n=0}^{+\infty}b_{+,n}$ is considered, at first, in the case of a
bounded interval $x\in(c,b)$, then in the whole interval $\left(
-\infty,b\right) $. Finally, the low and high-energy behaviour of $\chi_{+}$
are investigated to obtain the relations in (\ref{Jost_sol_bound}). In what
follows, we assume: $\eta>0$, $c<a$ and use the notation $U_{\eta
,c}=(c,b)\times S_{\eta}$; a direct computation yields
\begin{equation}
1_{\left( x,b\right) }(t)1_{U_{\eta,c}}(x,\zeta)\mathcal{K}_{+}\left(
t,x,\zeta\right) =\mathcal{O}\left( \frac{1}{1+\left\vert \zeta\right\vert
}\right) \,;\quad1_{\left( x,b\right) }(t)1_{U_{\eta,c}}(x,\zeta
)\partial_{x}\mathcal{K}_{+}(t,x,\zeta)=\mathcal{O}\left( 1\right) \,,
\label{Jost_kernel_asympt1
\end{equation}
where the symbols $\mathcal{O}(\cdot)$, introduced in the Definition
\ref{Landau_Notation}, here refer to the metric space: $(c,b)^{2}\times
S_{\eta}$. According to (\ref{Jost_kernel_asympt1}), a positive constant
$C_{a,b,c,\eta}$, possibly depending on the data, exists such that
\begin{equation}
\sup_{\substack{\left\{ x,\zeta\right\} \in U_{c,\eta}\\t\in(x,b)
}\left\vert \mathcal{K}_{+}\left( t,x,\zeta\right) \,\right\vert
<C_{a,b,c,\eta}\,,
\end{equation}
Let us introduce the rescaled potential: $\mathcal{\tilde{V}}\left( x\right)
=C_{a,b,c,\eta}\mathcal{V}\left( x\right) $ and the function $F(x)=\int
_{x}^{b}\left\vert \,\mathcal{\tilde{V}}(t)\right\vert \,dt$. As a consequence
of Lemma \ref{Lemma_Jost_est}, we have:\newline$\left. \left\vert
1_{U_{\eta,c}}\,b_{+,n+1}\right\vert \leq1_{U_{\eta,c}}\frac{F^{n+1}}{(n+1)!}\right. $. Since $\left\Vert F\right\Vert _{L^{\infty}(c,b)}=C_{a,b,c,\eta}\left\Vert \mathcal{V}\right\Vert _{L^{1}(a,b)}$, this yields
the estimate
\begin{equation}
\sup_{\left\{ x,\zeta\right\} \in U_{c}^{+}}\left\vert 1_{U_{\eta,c
}\,b_{+,n+1}\right\vert \leq\frac{C_{a,b,c,\eta}^{n+1}\left\Vert
\mathcal{V}\right\Vert _{L^{1}(a,b)}^{n+1}}{(n+1)!}\,, \label{Jost_est_0
\end{equation}
and the Picard series uniformly converges to $b_{+}\in\mathcal{C}_{x}^{0}\left( (c,b),\,\mathcal{H}_{\zeta}\left( S_{\eta}\right) \right) $. In
particular, (\ref{Jost_est_0}) implies
\begin{equation}
\sup_{\left\{ x,\zeta\right\} \in U_{\eta,c}}\left\vert b_{+}\right\vert
\leq e^{C_{a,b,c,\eta}\left\Vert \mathcal{V}\right\Vert _{L^{1}(a,b)
}\,\Rightarrow1_{U_{\eta,c}}b_{+}=\mathcal{O}\left( 1\right) \,,
\label{Jost_est_1
\end{equation}
and, taking into account the definition: $\chi_{+}=e^{i\zeta x}b_{+}$, it
follows
\begin{equation}
1_{U_{\eta,c}}\left( x,\zeta\right) \chi_{+}\left( x,\zeta\right)
=e^{i\zeta x}\mathcal{O}\left( 1\right) \,. \label{Jost_est_1_1
\end{equation}
Next, consider $\partial_{x}b_{+}=b_{+}^{\prime}$. For $x\in(c,b)$, it
fulfills the equation
\begin{equation}
b_{+}^{\prime}(x,\zeta)=
{\displaystyle\int\limits_{x}^{b}}
\partial_{x}\mathcal{K}_{+}\left( t,x,\zeta\right) \mathcal{V
(t)b_{+}(t,\zeta)\,dt\,,\quad x\in(c,b)\,. \label{Int_Jost_eq_rescaled1
\end{equation}
The regularity of the r.h.s. of (\ref{Int_Jost_eq_rescaled1}) is a consequence
of the properties of $b_{+}$ and of the kernel $\partial_{x}\mathcal{K}_{+}$.
In particular, making use of the above characterization of $b_{+}$, we get:
$b_{+}^{\prime}\in\mathcal{C}_{x}^{0}\left( (c,b),\,\mathcal{H}_{\zeta
}\left( S_{\eta}\right) \right) $. Moreover, being $b_{+}$ and
$\partial_{x}\mathcal{K}_{+}$ uniformly bounded for $\left\{ x,\zeta\right\}
\in U_{\eta,c}$ and $t\in(x,b)$, the r.h.s. of (\ref{Int_Jost_eq_rescaled1})
is $\mathcal{O}\left( 1\right) $. Then, a direct computation shows that
\begin{equation}
1_{U_{\eta,c}}\left( x,\zeta\right) \partial_{x}\chi_{+}\left(
x,\zeta\right) =e^{i\zeta x}\mathcal{O}\left( 1+\left\vert \zeta\right\vert
\right) \,. \label{Jost_est_2
\end{equation}
To discuss the case $x<c$, we notice that, when $x\in(-\infty,a)$, the
solution $b_{+}$ explicitly writes in the form
\begin{equation}
b_{+}(x,\zeta)=B_{+}(\zeta)+B_{-}(\zeta)e^{-2i\zeta x}\,. \label{Jost_ext
\end{equation}
The condition $b_{+}\in\mathcal{C}_{x}^{1}\left( (c,b),\,\mathcal{H}_{\zeta}\left( S_{\eta}\right) \right) $ compels the coefficients $B_{\pm}$ to be
holomorphic in $S_{\eta}$ with the only possible exception of the point
$\zeta=0$; this leads to: $b_{+}\in\mathcal{C}_{x}^{1}\left( (-\infty
,a),\,\mathcal{H}_{\zeta}\left( S_{\eta}\backslash\left\{ 0\right\}
\right) \right) $. At $\zeta=0$, the maps $\zeta\rightarrow B_{\pm}$ may
diverge, but, in such a case, a compensation between the different
contributions at the r.h.s. of (\ref{Jost_ext}) takes place to ensure the
regularity of $\zeta\rightarrow1_{U_{\eta,c}}b_{+}$ at the origin. Therefore,
$B_{\pm}$ may have, at most, a simple pole at $\zeta=0$ and the conditions
\begin{equation}
\lim_{\zeta\rightarrow0}\left( B_{+}(\zeta)+B_{-}(\zeta)e^{-2i\zeta
x}\right) =c_{0}\,,\qquad\lim_{\zeta\rightarrow0}-2i\zeta B_{-
(\zeta)e^{-2i\zeta x}=c_{1}\,, \label{Jost_lowenergy
\end{equation}
hold for any $x\in\left( c,a\right) $. Since these are independent of $x$,
the function $b_{+}$ can be extended to: $b_{+}\in\mathcal{C}_{x}^{1}\left(
(-\infty,a),\,\mathcal{H}_{\zeta}\left( S_{\eta}\right) \right) $.
To conclude the proof, we need to extend the relations (\ref{Jost_est_1_1}),
(\ref{Jost_est_2}) to the case of $x\in\left( -\infty,a\right) $ and
$\zeta\in\overline{\mathbb{C}^{+}}$. According to (\ref{Jost_ext}) and
(\ref{Jost_lowenergy}), for any fixed $\zeta\in\overline{\mathbb{C}^{+}}$, the
functions $b_{+}$ and $b_{+}^{\prime}$ are uniformly bounded w.r.t.
$x\in\left( -\infty,a\right) $. In particular, in a neighbourhood
$\mathcal{B}_{1}\left( 0\right) \cap\overline{\mathbb{C}^{+}}$ of $\zeta=0$
we have
\begin{equation}
1_{\left( -\infty,a\right) }(x)1_{\mathcal{B}_{1}\left( 0\right)
\cap\overline{\mathbb{C}^{+}}}(\zeta)\partial_{x}^{i}b_{+}\left(
x,\zeta\right) =\mathcal{O}\left( 1\right) \,,\qquad i=0,1\,.
\label{Jost_in_bound
\end{equation}
To obtain estimates as $\left\vert \zeta\right\vert \rightarrow\infty$, the
high-energy asymptotics of the coefficients $B_{\pm}$ are needed. The
$x$-derivative of (\ref{Jost_ext}) is
\begin{equation}
b_{+}^{\prime}(x,\zeta)=-2i\zeta B_{-}(\zeta)e^{-2i\zeta x}\,.
\label{Jost_ext_1
\end{equation}
As it has been previously shown, for $\left\{ x,\zeta\right\} \in U_{\eta
,c}$ it results $b_{+}^{\prime}(x,\zeta)=\mathcal{O}\left( 1\right) $.
Taking $x\in\left( c,a\right) $, and using (\ref{Jost_ext_1}), this implies:
$\zeta B_{-}(\zeta)=\mathcal{O}\left( 1\right) $ in the sense of the metric
space $S_{\eta}$; it follows
\begin{equation}
\zeta B_{-}(\zeta)=\mathcal{O}\left( 1\right) \qquad\text{in }\zeta
\in\overline{\mathbb{C}^{+}}\backslash\mathcal{B}_{1}\left( 0\right) \,.
\label{Jost_coeff
\end{equation}
Similarly, since $b_{+}(x,\zeta)=\mathcal{O}\left( 1\right) $ for $\left\{
x,\zeta\right\} \in U_{\eta,c}$, taking $x\in\left( c,a\right) $ and using
the relations (\ref{Jost_ext}) and (\ref{Jost_coeff}), we get
\begin{equation}
B_{+}(\zeta)=\mathcal{O}\left( 1\right) \,\qquad\text{in }\zeta\in
\overline{\mathbb{C}^{+}}\backslash\mathcal{B}_{1}\left( 0\right) \,.
\label{Jost_coeff1
\end{equation}
From these relations and the representations (\ref{Jost_ext}) and
(\ref{Jost_ext_1}), we obtain
\begin{equation}
1_{\left( -\infty,a\right) }(x)1_{\overline{\mathbb{C}^{+}}\backslash
\mathcal{B}_{1}\left( 0\right) }(\zeta)\partial_{x}^{i}b_{+}\left(
x,\zeta\right) =\mathcal{O}\left( 1\right) \,,\qquad i=0,1\,.
\label{Jost_ext_bound
\end{equation}
Then, taking into account the definition $\chi_{+}=e^{i\zeta x}b_{+}$, the
relations (\ref{Jost_sol_bound}) follow from (\ref{Jost_est_1_1}),
(\ref{Jost_est_2}), (\ref{Jost_in_bound}) and (\ref{Jost_ext_bound}).
\end{proof}
The Jost function, denoted in what follows by $w(\zeta)$, is defined as the
Wronskian associated with the couple $\left\{ \chi_{+}(\cdot,\zeta),\chi
_{-}(\cdot,\zeta)\right\} $. Setting
\begin{equation}
w(f,g)=fg^{\prime}-f^{\prime}g\,, \label{Wronskian
\end{equation}
we have
\begin{equation}
w(\zeta)=\chi_{+}(\cdot,\zeta)\partial_{1}\chi_{-}(\cdot,\zeta)-\partial
_{1}\chi_{+}(\cdot,\zeta)\chi_{-}(\cdot,\zeta)\,. \label{Jost_fun}
\end{equation}
According to the definition of $\chi_{\pm}$, this function is independent of
the space variable, while due to the result of Proposition
\ref{Proposition_Jost}, $w(\zeta)$ is holomorphic w.r.t. $\zeta$ in an open
half complex plane including $\overline{\mathbb{C}^{+}}$. The point spectrum
of $Q_{0,0}(\mathcal{V})$ is defined by the solutions $z=\zeta^{2}$ to the
problem: $w(\zeta)=0\,$, $\zeta\in\mathbb{C}^{+}$ (e.g. in \cite{Yafa},
Chp.5). Since $Q_{0,0}(\mathcal{V})$ is a selfadjoint Schr\"{o}dinger operator
with a short range potential, the point spectrum is non-degenerate and located
on the negative real axis, while: $\sigma_{ac}\left( Q_{0,0}(\mathcal{V
)\right) =\left[ 0,+\infty\right) $. Then, $w\left( \zeta\right) $ does
not vanish in the closed upper complex half-plane, with the
only possible exception of a discrete subset of the positive imaginary axis.
Next, consider $\zeta=k\in\mathbb{R}$ and let $w_{0}(k)$ be the Wronskian
associated with $\left\{ \chi_{+}(\cdot,-k),\chi_{-}(\cdot,k)\right\} $;
the behavior of $w(k)$ on the real axis follows by using the relations
\begin{align}
\chi_{+}(\cdot,k) & =\frac{1}{2ik}\left( w_{0}^{\ast}(k)\chi_{-
(\cdot,k)-w(k)\chi_{-}(\cdot,-k)\right) \,,\label{Jost_sol_identity_1}\\
& \nonumber\\
\chi_{-}(\cdot,k) & =\frac{1}{2ik}\left( w_{0}(k)\chi_{+}(\cdot
,k)-w(k)\chi_{+}(\cdot,-k)\right) \,, \label{Jost_sol_identity_2
\end{align}
expressing the Jost's solutions $\chi_{\pm}(\cdot,k)$ in terms of the linearly
independent couples $\chi_{-}(\cdot,\pm k)$ and $\chi_{+}(\cdot,\pm k)$
respectively (e.g. in \cite{Yafa}, Chp. 5). Plugging
(\ref{Jost_sol_identity_2}) into (\ref{Jost_sol_identity_1}), it
follows that $\left\vert w(k)\right\vert ^{2}=4k^{2}\,+\left\vert w_{0}\left(
k\right) \right\vert ^{2}$, which entails
\begin{equation}
\left\vert w(k)\right\vert ^{2}\geq4k^{2}\,. \label{w_ineq
\end{equation}
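As an elementary illustration (added here for orientation only, since it does not fulfill (\ref{V_pos})), in the free case $\mathcal{V}\equiv0$ the Jost solutions are $\chi_{\pm}(x,\zeta)=e^{\pm i\zeta x}$ on the whole real line, so that (\ref{Jost_fun}) gives
\[
w(\zeta)=e^{i\zeta x}\left( -i\zeta\right) e^{-i\zeta x}-\left( i\zeta\right) e^{i\zeta x}e^{-i\zeta x}=-2i\zeta\,,\qquad w_{0}(k)=0\,,
\]
and $\left\vert w(k)\right\vert ^{2}=4k^{2}$: the inequality (\ref{w_ineq}) is saturated in the absence of a potential.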
Let us introduce the functions $\mathcal{G}^{z}(x,y)$ and $\mathcal{H}^{z}(x,y)$:
\begin{equation}
\mathcal{G}^{z}(x,y)=\frac{1}{w(\zeta)}\mathcal{\,}\left\{
\begin{array}
[c]{c
\chi_{+}(x,\zeta)\chi_{-}(y,\zeta)\,,\qquad x\geq y\,,\\
\\
\chi_{-}(x,\zeta)\chi_{+}(y,\zeta)\,,\qquad x<y\,,
\end{array}
\right. \qquad z=\zeta^{2}\,, \label{G_z}
\end{equation}
\begin{equation}
\mathcal{H}^{z}(x,y)=-\frac{1}{w(\zeta)}\mathcal{\,}\left\{
\begin{array}
[c]{c
\chi_{+}(x,\zeta)\partial_{1}\chi_{-}(y,\zeta)\,,\qquad x\geq y\,,\\
\\
\chi_{-}(x,\zeta)\partial_{1}\chi_{+}(y,\zeta)\,,\qquad x<y\,,
\end{array}
\right. \,,\qquad z=\zeta^{2}\,, \label{H_z
\end{equation}
Assume $\zeta\in\mathbb{C}^{+}$ to be such that $w(\zeta)\neq0$ and
$y\in\mathbb{R}$; from the equation (\ref{Jost_eq}) and the relations
(\ref{Jost_sol_bound}), it follows that the maps $x\rightarrow\mathcal{G
^{z}(\cdot,y)$ and $x\rightarrow\mathcal{H}^{z}(\cdot,y)$ are exponentially
decreasing as $\left\vert x-y\right\vert \rightarrow\infty$ (with a decay
rate depending on $\operatorname{Im}\zeta$) and fulfill the boundary value
problems
\begin{equation}
\left\{
\begin{array}
[c]{lll
\left( -\partial_{x}^{2}+\mathcal{V-}\zeta^{2}\right) \mathcal{G}^{z
(\cdot,y)=0 & & \text{in }\mathbb{R}/\left\{ y\right\} \\
& & \\
\mathcal{G}^{z}(y^{+},y)=\mathcal{G}^{z}(y^{-},y)\,, & & \partial
_{1}\mathcal{G}^{z}(y^{+},y)-\partial_{1}\mathcal{G}^{z}(y^{-},y)=-1\,,
\end{array}
\right. \label{Green_eq
\end{equation}
and
\begin{equation}
\left\{
\begin{array}
[c]{lll
\left( -\partial_{x}^{2}+\mathcal{V-}\zeta^{2}\right) \mathcal{H}^{z
(\cdot,y)=0 & & \text{in }\mathbb{R}/\left\{ y\right\} \\
& & \\
\mathcal{H}^{z}(y^{+},y)-\mathcal{H}^{z}(y^{-},y)=1\,, & & \partial
_{1}\mathcal{H}^{z}(y^{+},y)=\partial_{1}\mathcal{H}^{z}(y^{-},y)\,,
\end{array}
\right. \label{D_Green_eq
\end{equation}
For $z=\zeta^{2}$ s.t. $w(\zeta)\neq0$ and $y\in\left\{ a,b\right\} $, the
functions $\mathcal{G}^{z}(\cdot,y)$, $\mathcal{H}^{z}(\cdot,y)$ form a basis
of the defect space $\mathcal{N}_{z}$, which writes as
\begin{equation}
\mathcal{N}_{z}=l.c.\left\{ \mathcal{G}^{z}(x,b)\,,\ \mathcal{H
^{z}(x,b)\,,\ \mathcal{G}^{z}(x,a)\,,\ \mathcal{H}^{z}(x,a)\right\} \,.
\label{defect1
\end{equation}
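For completeness, we recall why this family spans the whole defect space (a standard count, added here as a remark): an element of $\mathcal{N}_{z}$ solves the equation separately on $\left( -\infty,a\right) $, $\left( a,b\right) $ and $\left( b,+\infty\right) $, and for $z\in\mathbb{C}\backslash\mathbb{R}$ the $L^{2}$ requirement leaves a one-parameter family on each unbounded component and a two-parameter family on $\left( a,b\right) $, so that
\[
\dim\mathcal{N}_{z}=1+2+1=4\,;
\]
the four functions in (\ref{defect1}) are linearly independent, since $w(\zeta)\neq0$ and they carry distinct jump data at $\left\{ a,b\right\} $ by (\ref{Green_eq})-(\ref{D_Green_eq}).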
According to the equation (\ref{Green_eq}), $\mathcal{G}^{z}(\cdot,y)$
identifies with the integral kernel of $\left( Q_{0,0}(\mathcal{V})-z\right)
^{-1}$, while, as a consequence of the definitions (\ref{G_z})-(\ref{H_z}) and
the results of the Proposition \ref{Proposition_Jost}, the maps $z\rightarrow
\mathcal{G}^{z}(x,y)$, $z\rightarrow\mathcal{H}^{z}(x,y)$ are meromorphic in
$\mathbb{C}\backslash\mathbb{R}_{+}$ with a branch cut along the positive real
axis and poles, corresponding to the points in $\sigma_{p}\left(
Q_{0,0}(\mathcal{V})\right) $, located on the negative real axis. In
particular, due to the inequality (\ref{w_ineq}), these functions continuously
extend up to the branch cut, both in the limits: $z\rightarrow k^{2}\pm i0$,
with the only possible exception of the point $z=0$.
In the case of positive definite potentials, it is possible to obtain uniform
estimates of $\mathcal{G}^{z}(x,y)$\ and $\mathcal{H}^{z}(x,y)$ up to the
whole branch cut. Next, we assume $\mathcal{V}$ to fulfill the additional
condition
\begin{equation}
\left\langle u,\mathcal{V\,}u\right\rangle _{L^{2}(a,b)}>0\qquad\forall\,u\in
L^{2}(\left( a,b\right) )\,, \label{V_pos}
\end{equation}
and introduce, for $\zeta\in\mathbb{C}^{+}$ and $z=\zeta^{2}$, the functions
\begin{equation}
G^{\zeta}(x,y)=\mathcal{G}^{\zeta^{2}}(x,y)\,;\qquad\partial_{1}^{i}H^{\zeta
}(x,y)=\partial_{1}^{i}\mathcal{H}^{\zeta^{2}}(x,y)\,, \label{GH_zeta
\end{equation}
where the notation $\partial^{0}u=u$ is adopted. These are characterized as follows.
\begin{lemma}
\label{Lemma_Green_ker}Let $\mathcal{V}$ fulfill (\ref{V}) and (\ref{V_pos}).
For all $\left( x,y\right) \in\mathbb{R}^{2}$, $x\neq y$, the maps
$\zeta\rightarrow G^{\zeta}(x,y)$ and $\zeta\rightarrow\partial_{1
^{i}H^{\zeta}(x,y)$, $i=0,1$, defined according to (\ref{GH_zeta}) are
holomorphic in $\mathbb{C}^{+}$ and continuously extend to $\overline
{\mathbb{C}^{+}}$. In particular, for $\zeta=k\in\mathbb{R}$, it results:
$G^{k}\left( \cdot,y\right) ,H^{k}\left( \cdot,y\right) \in\mathcal{C
_{x}^{1}\left( \mathbb{R}\backslash\left\{ y\right\} \,,\ \mathcal{C
_{k}^{0}\left( \mathbb{R}\right) \right) $, while the relations
\begin{equation}
G^{\zeta}(x,y)=e^{i\zeta\left\vert x-y\right\vert }\mathcal{O}\left( \frac
{1}{1+\left\vert \zeta\right\vert }\right) \,,\quad H^{\zeta}(x,y)=e^{i\zeta
\left\vert x-y\right\vert }\mathcal{O}\left( 1\right) \,,\quad\partial
_{1}H^{\zeta}(x,y)=e^{i\zeta\left\vert x-y\right\vert }\mathcal{O}\left(
1+\left\vert \zeta\right\vert \right) \,. \label{Green_ker_bound
\end{equation}
hold with $\mathcal{O}\left( \cdot\right) $ referred to the metric space
$\mathbb{R}^{2}\times\overline{\mathbb{C}^{+}}$.
\end{lemma}
\begin{proof}
The conditions (\ref{V}), (\ref{V_pos}) and the relation (\ref{w_ineq})
prevent $w\left( \zeta\right) $ from having zeroes in $\overline{\mathbb{C}^{+}}\backslash\left\{ 0\right\} $. Computing $w\left( \zeta\right) $, we have
\begin{equation}
w(\zeta)=\chi_{+}(a,\zeta)\partial_{1}\chi_{-}(a,\zeta)-\partial_{1}\chi
_{+}(a,\zeta)\chi_{-}(a,\zeta)\,. \label{w_explicit_a
\end{equation}
Using the exterior conditions (\ref{Jost_sol}), the coefficients $\partial
_{1}^{j}\chi_{-}(a,\zeta)$, $j=0,1$, are explicitly given by
\begin{equation}
\chi_{-}(a,\zeta)=e^{-i\zeta a}\,,\qquad\partial_{1}\chi_{-}(a,\zeta)=-i\zeta
e^{-i\zeta a}\,, \label{w_explicit_a1
\end{equation}
For $x<b$, the function $\chi_{+}(\cdot,\zeta)$ writes as $\chi_{+
(x,\zeta)=e^{i\zeta x}b_{+}(x,\zeta)$, where $b_{+}(\cdot,\zeta)$ solves the
equation (\ref{Int_Jost_eq_rescaled}) and can be represented as the sum of the
Picard series: $\left. b_{+}(\cdot,\zeta)=\sum_{n=0}^{+\infty}b_{+,n
(\cdot,\zeta)\right. $ whose terms are defined by a recurrence relation given
in (\ref{Jost_Picard}). Under the condition (\ref{V}), it has been shown that
this series uniformly converges to $b_{+}\in\mathcal{C}_{x}^{1}\left( \left(
c,b\right) ,\,\mathcal{H}_{\zeta}\left( S_{\eta}\right) \right) $, where
$\left( c,b\right) $ is any interval including the point $a$ (see the proof of
Proposition \ref{Proposition_Jost}); in particular the relations
$\partial_{1}^{j}b_{+}(\cdot,\zeta)=\mathcal{O}\left( 1\right) $, $j=0,1$,
hold with the symbols $\mathcal{O}(\cdot)$ referring to the metric space
$(c,b)\times S_{\eta}$. At $\zeta=0$, the relations (\ref{Jost_Picard}) write
as
\begin{equation}
b_{+,n}(x,0)=
{\displaystyle\int\limits_{x}^{b}}
(t-x)\mathcal{V}(t)b_{+,n-1}(t,0)\,dt\,,\quad n\in\mathbb{N}^{\ast}\,,\ x<b\,.
\end{equation}
Using the conditions: $\left\langle u,\mathcal{V}u\right\rangle _{L^{2
(a,b)}>0$ and $b_{+,0}=1$, an induction argument leads to: $b_{+,n}(x,0)\geq0$
and $b_{+}(x,0)\geq1$. Taking the limit of (\ref{Int_Jost_eq_rescaled1}) as
$\zeta\rightarrow0$, we get
\begin{equation}
\partial_{x}b_{+}(x,0)=-
{\displaystyle\int\limits_{x}^{b}}
\mathcal{V}(t)b_{+}(t,0)\,dt\,,\qquad x<b\,.
\end{equation}
Since $b_{+}>0$ and $\mathcal{V}>0$, at least in a subset of $(a,b)$, we have:
$\partial_{1}b_{+}(a,0)<0\,$. With the notation introduced above, the equation
(\ref{w_explicit_a}) rephrases as
\begin{equation}
w(\zeta)=-\partial_{1}b_{+}(a,\zeta)-2i\zeta\,b_{+}(a,\zeta)\,.
\label{w_explicit}
\end{equation}
Then, according to the conditions: $\partial_{1}b_{+}(a,0)\neq0$, and
$b_{+}(\cdot,\zeta)=\mathcal{O}\left( 1\right) $, we have: $w(\zeta
)=-\partial_{1}b_{+}(a,\zeta)+\mathcal{O}(\zeta)$ which implies $w\left(
0\right) \neq0$.
As a consequence, the function $p\left( \zeta\right) $ defined by
\begin{equation}
p\left( \zeta\right) =\frac{1+\left\vert \zeta\right\vert }{w\left( \zeta\right) }\,,
\end{equation}
is bounded in any bounded set $\mathcal{B}_{R}\left( 0\right) \cap
\overline{\mathbb{C}^{+}}$, $R>0$. Moreover, using the equation
(\ref{Int_Jost_eq_rescaled}), we have
\begin{equation}
b_{+}(a,\zeta)=1+
{\displaystyle\int\limits_{a}^{b}}
\frac{e^{2i\zeta(t-x)}-1}{\zeta}\mathcal{V}(t)b_{+}(t,\zeta)\,dt\,;
\end{equation}
since $b_{+}(\cdot,\zeta)$ is uniformly bounded for $\zeta\in\overline
{\mathbb{C}^{+}}$, and $\left\vert 1/\zeta\right\vert \left\vert
e^{2i\zeta(t-x)}-1\right\vert \leq2/\left\vert \zeta\right\vert $, it follows:
\newline$\lim\nolimits_{\zeta\rightarrow\infty\,,\ \zeta\in\overline
{\mathbb{C}^{+}}}b_{+}(a,\zeta)=1$. Set $M=\sup_{\zeta\in\overline
{\mathbb{C}^{+}}}\left\vert \partial_{1}b_{+}(a,\zeta)\right\vert $, and let
$\tilde{R}>0$ be such that $\left\vert \zeta\right\vert \left\vert
b_{+}(a,\zeta)\right\vert /M>1$ for any $\zeta\in\overline{\mathbb{C}^{+
}\backslash\mathcal{B}_{\tilde{R}}\left( 0\right) $. From the representation
(\ref{w_explicit}) it follows
\begin{equation}
\sup_{\zeta\in\overline{\mathbb{C}^{+}}\backslash\mathcal{B}_{\tilde{R
}\left( 0\right) }\left\vert p\left( \zeta\right) \right\vert \leq\frac
{1}{M}\frac{1+\left\vert \zeta\right\vert }{2\left\vert \zeta\right\vert
\frac{\left\vert b_{+}(a,\zeta)\right\vert }{M}-1\,}\lesssim1\,.
\end{equation}
Then, $p\left( \zeta\right) $ is uniformly bounded for $\zeta
\in\overline{\mathbb{C}^{+}}$ and we can write
\begin{equation}
\left( w\left( \zeta\right) \right) ^{-1}=\mathcal{O}\left( \frac
{1}{1+\left\vert \zeta\right\vert }\right) \,,\label{w_inverse_bound}
\end{equation}
in the sense of the metric space $\overline{\mathbb{C}^{+}}$ (see Definition
\ref{Landau_Notation}).
From the definitions (\ref{G_z})-(\ref{H_z}), (\ref{GH_zeta}) and the result
of Proposition \ref{Proposition_Jost}, the functions $\zeta\rightarrow
G^{\zeta}(x,y)$ and $\zeta\rightarrow\partial_{1}^{i}H^{\zeta}(x,y)$, $i=0,1$,
are meromorphic in $\mathbb{C}^{+}$, while the previous result implies that,
under our assumptions, these maps have no poles in $\mathbb{C}^{+}$ and
continuously extend to the whole real axis. The relations
(\ref{Green_ker_bound}) follow from (\ref{Jost_sol_bound}) by using
(\ref{w_inverse_bound}).
\end{proof}
\subsection{Resolvent analysis.}
The results of the previous sections and, in particular, the Krein-like
formula given in (\ref{krein_1}), allow a detailed resolvent analysis for the
operators $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$. In this connection, we recall
that the maps $z\rightarrow q(z,\mathcal{V})$ and $z\rightarrow\gamma
(e_{i},z,\mathcal{V})$, appearing at the r.h.s. of (\ref{krein_1}), are
holomorphic in $\mathbb{C}\backslash\sigma\left( Q_{0,0}(\mathcal{V})\right)
$, while, from the definitions (\ref{AB_teta1,2})-(\ref{ab_teta1,2}), the
matrix coefficients in $A_{\theta_{1},\theta_{2}}$ and $B_{\theta_{1},\theta_{2}}$ are holomorphic functions of the parameters $\left( \theta
_{1},\theta_{2}\right) $ in the whole $\mathbb{C}^{2}$. Then
\begin{equation}
d(z,\theta_{1},\theta_{2})=\det\left( B_{\theta_{1},\theta_{2}
\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) \,,\label{d_z_teta
\end{equation}
defines a holomorphic function of the variables $\left( z,\theta_{1},\theta_{2}\right) $ in $\left( \mathbb{C}\backslash\sigma\left( Q_{0,0}(\mathcal{V})\right) \right) \times\mathbb{C}^{2}$. Moreover, for any couple $\left(
\theta_{1},\theta_{2}\right) $, the set of singular points
\begin{equation}
\mathcal{S}_{\theta_{1},\theta_{2}}=\left\{ z\in\mathbb{C\,}\left\vert
\ d(z,\theta_{1},\theta_{2})=0\right. \right\} \,,\label{S_teta
\end{equation}
is discrete. As a consequence, the representation (\ref{krein_1}) makes sense
in the dense open set \newline$\mathbb{C}\backslash\left( \sigma\left(
Q_{0,0}(\mathcal{V})\right) \cup\mathcal{S}_{\theta_{1},\theta_{2}}\right)
$. Let us fix $z\in\mathbb{C}\backslash\left( \sigma\left( Q_{0,0
(\mathcal{V})\right) \cup\mathcal{S}_{\tilde{\theta}_{1},\tilde{\theta}_{2
}\right) $, for a given couple $\left( \tilde{\theta}_{1},\tilde{\theta
_{2}\right) \in\mathbb{C}^{2}$; using the expansion
\begin{equation}
d(z,\theta_{1},\theta_{2})=d(z,\tilde{\theta}_{1},\tilde{\theta
_{2})+\mathcal{O}\left( \theta_{1}-\tilde{\theta}_{1}\right) +\mathcal{O
\left( \theta_{2}-\tilde{\theta}_{2}\right) \,,
\end{equation}
it results: $d(z,\theta_{1},\theta_{2})\neq0$ for all $\left( \theta
_{1},\theta_{2}\right) $ in a suitable neighbourhood of $\left(
\tilde{\theta}_{1},\tilde{\theta}_{2}\right) $. This implies that, for any
couple of parameters $\left( \tilde{\theta}_{1},\tilde{\theta}_{2}\right) $,
there exists $z\in\mathcal{\rho}\left( Q_{\tilde{\theta}_{1},\tilde{\theta
}_{2}}(\mathcal{V})\right) $ and a positive constant $\delta$, possibly
depending on $\left( \tilde{\theta}_{1},\tilde{\theta}_{2}\right) $, such
that: $z\in\mathcal{\rho}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V
)\right) $ for all $\left( \theta_{1},\theta_{2}\right) \in\mathcal{B
_{\delta}\left( \left( \tilde{\theta}_{1},\tilde{\theta}_{2}\right)
\right) $. Next, for such a $z$, consider the map $\left( \theta_{1
,\theta_{2}\right) \rightarrow\left( Q_{\theta_{1},\theta_{2}
(\mathcal{V})-z\right) ^{-1}$ defined for $\left( \theta_{1},\theta
_{2}\right) \in\mathcal{B}_{\delta}\left( \left( \tilde{\theta}_{1
,\tilde{\theta}_{2}\right) \right) $. Since $z\notin\mathcal{S}_{\theta
_{1},\theta_{2}}$, the coefficients of the finite rank part at the r.h.s. of
(\ref{krein_1}) are holomorphic w.r.t. $\left( \theta_{1},\theta_{2}\right)
$ and $\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})-z\right) ^{-1}$ forms
an analytic family in $\mathcal{L}\left( L^{2}\left( \mathbb{R}\right)
\right) $. Then, $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ is analytic in the
sense of Kato, w.r.t. the parameters $\left( \theta_{1},\theta_{2}\right) $.
As this result suggests, when $\left( \theta_{1},\theta_{2}\right) $ is
close to the origin of $\mathbb{C}^{2}$, a part of the point spectrum
$\sigma_{p}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) $ is formed
by non-degenerate eigenvalues holomorphically dependent on $\left( \theta
_{1},\theta_{2}\right) $ and converging, in the limit $\left( \theta
_{1},\theta_{2}\right) \rightarrow\left( 0,0\right) $, to the corresponding
points of $\sigma_{p}\left( Q_{0,0}(\mathcal{V})\right) $ (see point
$(ii)$ of Proposition \ref{Proposition_spectrum} below). As an aside, we
notice that, for generic compactly supported potentials, new spectral points
(not converging to $\sigma_{p}\left( Q_{0,0}(\mathcal{V})\right) $) may
possibly arise in a complex neighbourhood of the origin, due to the
interface conditions. Nevertheless, if the additional assumption of positive
potentials (\ref{V_pos}) is adopted, it is possible to prove the identity
$\sigma\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) =\sigma\left(
Q_{0,0}(\mathcal{V})\right) $ provided that $\theta_{i=1,2}$ are small
enough. To establish this point, we need appropriate estimates for the coefficients
of the finite rank part in (\ref{krein_1}).
The relations (\ref{krein_1}) and (\ref{G_z_teta}) can be made explicit by
computing the matrix representation of $\left( B_{\theta_{1},\theta_{2
}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) $ w.r.t. the basis
$\left\{ e_{j}\right\} _{j=1}^{4}$ and (\ref{defect1}). Making use of the
definition (\ref{gamma_q_def}), a direct computation yields
\begin{equation}
\gamma(\cdot,z,\mathcal{V})=
\begin{pmatrix}
1 & & & \\
& -1 & & \\
& & 1 & \\
& & & -1
\end{pmatrix}
\,,\quad\text{with:\ }\left\{
\begin{array}
[c]{l
\gamma(e_{1},z,\mathcal{V})=\mathcal{G}^{z}(x,b)\,;\quad\gamma(e_{2
,z,\mathcal{V})=-\mathcal{H}^{z}(x,b)\,;\\
\\
\gamma(e_{3},z,\mathcal{V})=\mathcal{G}^{z}(x,a)\,;\quad\gamma(e_{4
,z,\mathcal{V})=-\mathcal{H}^{z}(x,a)\,.
\end{array}
\right. \label{gamma_z
\end{equation}
The matrix coefficients of $q(z,\mathcal{V})$ are related to the boundary
values of the functions $\gamma(e_{i},z,\mathcal{V})$, $i=1...4$, as
$x\rightarrow b^{\pm}$ or $x\rightarrow a^{\pm}$. Using the definitions
(\ref{G_z}) and (\ref{H_z}), it follows: $\partial_{1}\mathcal{G
^{z}(x,y)=\mathcal{H}^{z}(y,x)\,$; in particular, the boundary values at
$x\rightarrow y^{\pm}$ are related by
\begin{equation}
\partial_{1}\mathcal{G}^{z}(y^{\pm},y)=\mathcal{H}^{z}(y^{\mp},y)\,.
\end{equation}
Using these relations and the boundary conditions in (\ref{Green_eq
)-(\ref{D_Green_eq}), a direct computation yields
\begin{equation}
q(z,\mathcal{V})=
\begin{pmatrix}
\mathcal{G}^{z}(b,b)\medskip & \frac{1}{2}-\mathcal{H}^{z}(b^{+},b) &
\mathcal{G}^{z}(b,a) & -\mathcal{H}^{z}(b,a)\\
\mathcal{H}^{z}(b^{+},b)\medskip-\frac{1}{2} & -\partial_{1}\mathcal{H
^{z}(b,b) & \mathcal{H}^{z}(a,b) & -\partial_{1}\mathcal{H}^{z}(b,a)\\
\mathcal{G}^{z}(a,b)\medskip & -\mathcal{H}^{z}(a,b) & \mathcal{G}^{z}(a,a) &
-\left( \frac{1}{2}+\mathcal{H}^{z}(a^{-},a)\right) \\
\mathcal{H}^{z}(b,a) & -\partial_{1}\mathcal{H}^{z}(a,b) & \mathcal{H
^{z}(a^{-},a)+\frac{1}{2} & -\partial_{1}\mathcal{H}^{z}(a,a)
\end{pmatrix}
\,. \label{q_z
\end{equation}
\begin{lemma}
\label{Lemma_Krein_coeff}Let the matrix $\left( B_{\theta_{1},\theta_{2
}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) $ be defined according
to the relations (\ref{q_z}), (\ref{AB_teta1,2})-(\ref{ab_teta1,2}) and assume
$\mathcal{V}$ to fulfill the conditions (\ref{V}), (\ref{V_pos}). There exists
$\delta>0$ such that, for all $\left( \theta_{1},\theta_{2}\right)
\in\mathcal{B}_{\delta}\left( \left( 0,0\right) \right) $, $\left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
$ is invertible in $z\in\mathbb{C}/\mathbb{R}_{+}$. The coefficients of the
inverse matrix are holomorphic w.r.t. $\left( z,\theta_{1},\theta_{2}\right)
$ with: $\left( \theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta}\left(
\left( 0,0\right) \right) $, $z\in\mathbb{C}/\mathbb{R}_{+}$; they have
continuous extensions to the branch cut both in the limits $z=k^{2
+i\varepsilon$, $\varepsilon\rightarrow0^{\pm}$.
\end{lemma}
\begin{proof}
Using the notation introduced in (\ref{GH_zeta}), for $z=\zeta^{2}$, $\zeta
\in\mathbb{C}^{+}$, a direct computation leads to
\begin{gather}
\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2
}\right) =\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\label{Krein_coeff_0}\\%
\begin{pmatrix}
\mathcal{\beta}\left( \theta_{2}\right) \left( H^{\zeta}(b^{+},b)-\frac
{1}{2}\right) & -\mathcal{\beta}\left( \theta_{2}\right) \partial
_{1}H^{\zeta}(b,b) & \mathcal{\beta}\left( \theta_{2}\right) H^{\zeta
}(a,b) & -\mathcal{\beta}\left( \theta_{2}\right) \partial_{1}H^{\zeta
}(b,a)\\
\mathcal{\beta}\left( \theta_{1}\right) G^{\zeta}(b,b) & \mathcal{\beta
}\left( \theta_{1}\right) \left( \frac{1}{2}-H^{\zeta}(b^{+},b)\right) &
\mathcal{\beta}\left( \theta_{1}\right) G^{\zeta}(b,a) & -\mathcal{\beta
}\left( \theta_{1}\right) H^{\zeta}(b,a)\\
\mathcal{\beta}\left( -\theta_{2}\right) H^{\zeta}(b,a) & -\mathcal{\beta
}\left( -\theta_{2}\right) \partial_{1}H^{\zeta}(a,b) & \mathcal{\beta
}\left( -\theta_{2}\right) \left( H^{\zeta}(a^{-},a)+\frac{1}{2}\right) &
-\mathcal{\beta}\left( -\theta_{2}\right) \partial_{1}H^{\zeta}(a,a)\\
\mathcal{\beta}\left( -\theta_{1}\right) G^{\zeta}(a,b)\medskip &
-\mathcal{\beta}\left( -\theta_{1}\right) H^{\zeta}(a,b) & \mathcal{\beta
}\left( -\theta_{1}\right) G^{\zeta}(a,a) & -\mathcal{\beta}\left(
-\theta_{1}\right) \left( \frac{1}{2}+H^{\zeta}(a^{-},a)\right)
\end{pmatrix}
\nonumber\\
\qquad\qquad\qquad\qquad\qquad\qquad-
\begin{pmatrix}
\alpha\left( \theta_{2}\right) & & & \\
& \alpha\left( \theta_{1}\right) & & \\
& & \alpha\left( -\theta_{2}\right) & \\
& & & \alpha\left( -\theta_{1}\right)
\end{pmatrix}
\nonumber
\end{gather}
where $\alpha\left( \theta\right) $ and $\mathcal{\beta}\left(
\theta\right) $ are defined by
\begin{equation}
\mathcal{\alpha}\left( \theta\right) =1+e^{\frac{\theta}{2}}\,,\qquad
\mathcal{\beta}\left( \theta\right) =1-e^{\frac{\theta}{2}}\,.
\label{alpha_beta}
\end{equation}
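For later convenience (an elementary expansion, added as a remark), note that for small $\theta$
\[
\alpha\left( \theta\right) =2+\frac{\theta}{2}+\mathcal{O}\left( \theta^{2}\right) \,,\qquad\beta\left( \theta\right) =-\frac{\theta}{2}+\mathcal{O}\left( \theta^{2}\right) \,,
\]
so that every entry of the first matrix in (\ref{Krein_coeff_0}) carries a factor $\beta\left( \pm\theta_{i}\right) =\mathcal{O}\left( \theta_{i}\right) $, while the diagonal part $-\alpha\left( \pm\theta_{i}\right) $ stays close to $-2$.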
As a consequence of Lemma \ref{Lemma_Green_ker}, for positive definite
potentials the above relation rephrases as
\begin{align}
& \left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1
,\theta_{2}}\right) =\label{Krein_coeff}\\
&
\begin{pmatrix}
\mathcal{\beta}\left( \theta_{2}\right) \mathcal{O}\left( 1\right)
-\alpha\left( \theta_{2}\right) & \mathcal{\beta}\left( \theta_{2}\right)
\mathcal{O}\left( 1+\left\vert \zeta\right\vert \right) & \mathcal{\beta
}\left( \theta_{2}\right) e^{i\zeta\left( b-a\right) }\mathcal{O}\left(
1\right) & \mathcal{\beta}\left( \theta_{2}\right) e^{i\zeta\left(
b-a\right) }\mathcal{O}\left( 1+\left\vert \zeta\right\vert \right) \\
\mathcal{\beta}\left( \theta_{1}\right) \mathcal{O}\left( \frac
{1}{1+\left\vert \zeta\right\vert }\right) & \mathcal{\beta}\left(
\theta_{1}\right) \mathcal{O}\left( 1\right) -\alpha\left( \theta
_{1}\right) & \mathcal{\beta}\left( \theta_{1}\right) e^{i\zeta\left(
b-a\right) }\mathcal{O}\left( \frac{1}{1+\left\vert \zeta\right\vert
}\right) & \mathcal{\beta}\left( \theta_{1}\right) e^{i\zeta\left(
b-a\right) }\mathcal{O}\left( 1\right) \\
\mathcal{\beta}\left( -\theta_{2}\right) e^{i\zeta\left( b-a\right)
}\mathcal{O}\left( 1\right) & \mathcal{\beta}\left( -\theta_{2}\right)
e^{i\zeta\left( b-a\right) }\mathcal{O}\left( 1+\left\vert \zeta\right\vert
\right) & \mathcal{\beta}\left( -\theta_{2}\right) \mathcal{O}\left(
1\right) -\alpha\left( -\theta_{2}\right) & \mathcal{\beta}\left(
-\theta_{2}\right) \mathcal{O}\left( 1+\left\vert \zeta\right\vert \right)
\\
\mathcal{\beta}\left( -\theta_{1}\right) e^{i\zeta\left( b-a\right)
}\mathcal{O}\left( \frac{1}{1+\left\vert \zeta\right\vert }\right) &
\mathcal{\beta}\left( -\theta_{1}\right) e^{i\zeta\left( b-a\right)
}\mathcal{O}\left( 1\right) & \mathcal{\beta}\left( -\theta_{1}\right)
\mathcal{O}\left( \frac{1}{1+\left\vert \zeta\right\vert }\right) &
\mathcal{\beta}\left( -\theta_{1}\right) \mathcal{O}\left( 1\right)
-\alpha\left( -\theta_{1}\right)
\end{pmatrix}
\,,\nonumber
\end{align}
with the symbols $\mathcal{O}\left( \cdot\right) $ referred to the metric
space $\overline{\mathbb{C}^{+}}$ and defining holomorphic functions of
$\zeta\in\mathbb{C}^{+}$ with continuous extension to the real axis. Due to the
definition of $\mathcal{\alpha}\left( \theta\right) $, $\mathcal{\beta
}\left( \theta\right) $, the coefficients of $\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) $ are
separately holomorphic w.r.t. $\left( \theta_{1},\theta_{2}\right) \in\mathbb{C}^{2}$ and
$z\in\mathbb{C}/\mathbb{R}_{+}$, and admit, for each couple $\left(
\theta_{1},\theta_{2}\right) $, continuous extensions to the branch cut. In
particular, setting $\zeta=k\in\mathbb{R}_{\pm}$ at the r.h.s. of
(\ref{Krein_coeff}) corresponds to considering the limits of $\left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
_{ij}$ for $z\rightarrow k^{2}\pm i0$ respectively. Making use of this
expression, and taking into account (\ref{alpha_beta}), an expansion of the
determinant follows
\begin{equation}
d(z,\theta_{1},\theta_{2})=4\left( 1+\cosh\frac{\theta_{1}}{2}\right)
\left( 1+\cosh\frac{\theta_{2}}{2}\right) +\mathcal{O}\left( \theta
_{1}\right) +\mathcal{O}\left( \theta_{2}\right) \,, \label{d_exp}
\end{equation}
where $\mathcal{O}\left( \theta_{i}\right) $, being referred to the metric
space $\mathcal{B}_{1}\left( \left( 0,0\right) \right) \times\mathbb{C}$,
defines holomorphic functions w.r.t. $\left( \theta_{1},\theta_{2}\right)
\in\mathcal{B}_{1}\left( \left( 0,0\right) \right) $ and $z\in
\mathbb{C}/\mathbb{R}_{+}$, allowing continuous extensions to the branch cut
in the above-specified sense. According to Definition
\ref{Landau_Notation}, $\mathcal{O}\left( \theta_{i}\right) $ writes as
\begin{equation}
\mathcal{O}\left( \theta_{i}\right) =\theta_{i}\,p(z,\theta_{1},\theta
_{2})\,,
\end{equation}
with $p(z,\theta_{1},\theta_{2})$ uniformly bounded in $\mathcal{B}_{1}\left(
\left( 0,0\right) \right) \times\mathbb{C}$. Therefore $\delta>0$ exists
such that, when $\left( \theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta
}\left( \left( 0,0\right) \right) $, it results $\left\vert d(z,\theta
_{1},\theta_{2})\right\vert >1$ for all $z\in\mathbb{C}/\mathbb{R}_{+}$. In
these conditions, the matrix $\left( B_{\theta_{1},\theta_{2}
\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) $ is invertible and the
coefficients $\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V
)-A_{\theta_{1},\theta_{2}}\right) _{ij}^{-1}$ are separately holomorphic
w.r.t. $\left( \theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta}\left(
\left( 0,0\right) \right) $, $z\in\mathbb{C}/\mathbb{R}_{+}$, having
continuous extensions to the whole branch cut, both in the limits
$z=k^{2}+i\varepsilon$, $\varepsilon\rightarrow0^{\pm}$.
\end{proof}
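As a cross-check of the leading term in (\ref{d_exp}) (our verification, added as a remark): at $\theta_{1}=\theta_{2}=0$ one has $\beta(0)=0$ and $\alpha(0)=2$, hence $B_{0,0}\,q(z,\mathcal{V})-A_{0,0}=-2\cdot1_{\mathbb{C}^{4}}$ and
\[
d(z,0,0)=\det\left( -2\cdot1_{\mathbb{C}^{4}}\right) =16=4\left( 1+\cosh0\right) \left( 1+\cosh0\right) \,,
\]
which agrees with the expansion above; the $\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left( \theta_{2}\right) $ remainder collects all the $\beta$-dependent contributions.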
We are now in a position to develop the spectral analysis for the operators
$Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ starting from the resolvent formula
(\ref{krein_1}).
\begin{proposition}
\label{Proposition_spectrum}Let $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ be
defined according to (\ref{V}), (\ref{Q_teta}). The operator's spectrum is
characterized as follows\medskip:\newline$i)$ For any $\left( \theta
_{1},\theta_{2}\right) \in\mathbb{C}^{2}$, the essential part of the spectrum
is $\sigma_{ess}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right)
=\mathbb{R}_{+}$.\medskip\newline$ii)$ Let $E_{0}$ be an eigenvalue of
$Q_{0,0}(\mathcal{V})$ and assume $\varepsilon_{0}>0$ small enough; for any
fixed $\varepsilon\in\left( 0,\varepsilon_{0}\right) $ there exists
$\delta_{\varepsilon}>0$, depending on $\varepsilon$, s.t. for any $\left(
\theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta_{\varepsilon}}\left(
\left( 0,0\right) \right) \subset\mathbb{C}^{2}$ there exists a unique
non-degenerate and discrete eigenvalue $E\left( \theta_{1},\theta_{2}\right)
\in\sigma\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) \cap
\mathcal{B}_{\varepsilon}(E_{0})$. Moreover, the function $E\left( \theta
_{1},\theta_{2}\right) $ is holomorphic w.r.t. $\left( \theta_{1},\theta
_{2}\right) $ in $\mathcal{B}_{\delta_{\varepsilon}}\left( \left(
0,0\right) \right) $.
If, in addition, $\mathcal{V}$ is assumed to be positive definite, fulfilling
(\ref{V_pos}), then:\newline$iii)$ There exists $\delta>0$ s.t., for all $\left(
\theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta}\left( \left(
0,0\right) \right) $, $\sigma\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) $ is purely absolutely continuous and coincides with the
positive real axis.
\end{proposition}
\begin{proof}
With the notation introduced above, let $z\in\mathbb{C}\backslash\left(
\sigma\left( Q_{0,0}(\mathcal{V})\right) \cup\mathcal{S}_{\theta_{1
,\theta_{2}}\right) $; the representation (\ref{krein_1}) implies that the
difference $\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})-z\right)
^{-1}-\left( Q_{0,0}(\mathcal{V})-z\right) ^{-1}$ is a finite rank operator.
Then, the first statement of the Proposition follows by adapting the Weyl's
theorem to the non-selfadjoint framework (for this point, we refer to
\cite{ReSi4}, Sec.~XIII.4, Lemma~3 and the strong spectral mapping theorem).
In the unperturbed case, $Q_{0,0}(\mathcal{V})$ is a 1D Schr\"{o}dinger
operator with a short range potential. Its spectrum has a purely absolutely
continuous part on the positive real axis, and possible non-degenerate
eigenvalues located on the negative real axis, without accumulation points.
Then, the second statement is a direct consequence of the Kato-Rellich
theorem, since $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ is Kato-analytic
w.r.t. the parameters.
When $\mathcal{V}$ is a positive definite potential, $\sigma\left(
Q_{0,0}(\mathcal{V})\right) =\sigma_{ac}\left( Q_{0,0}(\mathcal{V})\right)
=\mathbb{R}_{+}$, while the point spectrum is empty. The spectrum
$\sigma\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) $ corresponds to
the subset of the complex plane where the map $z\rightarrow\mathcal{G
_{\theta_{1},\theta_{2}}^{z}(x,y)$ (defined in eq. (\ref{G_z_teta})) is not
holomorphic. According to the result of the Lemma \ref{Lemma_Green_ker}, the
functions $\mathcal{G}^{z}(x,y)$ and $\mathcal{H}^{z}(x,y)$, appearing at the
r.h.s. of (\ref{G_z_teta}) are $z$-holomorphic in $\mathbb{C}/\mathbb{R}_{+}$
and continuously extend to the whole branch cut both in the limits
$z=k^{2}+i\varepsilon$, $\varepsilon\rightarrow0^{\pm}$. As shown in Lemma
\ref{Lemma_Krein_coeff}, the same hold for the coefficients of $\left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
^{-1}$, provided that $\left( \theta_{1},\theta_{2}\right) $ is close enough
to the origin in $\mathbb{C}^{2}$. In particular, these are $z$-holomorphic in
$\mathbb{C}/\mathbb{R}_{+}$ and have continuous extensions to the whole branch
cut. Then, for $\mathcal{V}$ defined positive, the map $z\rightarrow
\mathcal{G}_{\theta_{1},\theta_{2}}^{z}(x,y)$ is holomorphic in $\mathbb{C
/\mathbb{R}_{+}$ and have continuous extensions as $z\rightarrow\mathbb{R
_{+}$, both in the limits $z=k^{2}+i\varepsilon$, $\varepsilon\rightarrow
0^{\pm}$. This yields: $\sigma\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V
)\right) =\sigma_{ac}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right)
=\mathbb{R}_{+}$.
\end{proof}
\subsection{\label{Section_Resolvent_2}Generalized eigenfunctions expansion.}
Let $\psi_{-}(\cdot,k,\theta_{1},\theta_{2})$ denote the generalized
eigenfunction of the operator $Q_{\theta_{1},\theta_{2}}\left( \mathcal{V}\right) $, describing an incoming wave of momentum $k$; this is a
solution to the boundary value problem
\begin{equation}
\left\{
\begin{array}
[c]{lll}
\left( -\partial_{x}^{2}+\mathcal{V}\right) u=k^{2}u\,, & & \text{for }
x\in\mathbb{R}\backslash\left\{ a,b\right\} \,,\ k\in\mathbb{R}\,,\\
& & \\
e^{-\frac{\theta_{1}}{2}}u(b^{+},\zeta,\theta_{1},\theta_{2})=u(b^{-},\zeta,\theta_{1},\theta_{2})\,, & & e^{-\frac{\theta_{2}}{2}}u^{\prime
}(b^{+},\zeta,\theta_{1},\theta_{2})=u^{\prime}(b^{-},\zeta,\theta_{1},\theta_{2})\,,\\
& & \\
e^{-\frac{\theta_{1}}{2}}u(a^{-},\zeta,\theta_{1},\theta_{2})=u(a^{+},\zeta,\theta_{1},\theta_{2})\,, & & e^{-\frac{\theta_{2}}{2}}u^{\prime
}(a^{-},\zeta,\theta_{1},\theta_{2})=u^{\prime}(a^{+},\zeta,\theta_{1},\theta_{2})\,,
\end{array}
\right. \label{Jost_eq_teta}
\end{equation}
fulfilling the exterior condition
\begin{equation}
\psi_{-}(x,k,\theta_{1},\theta_{2})\left\vert _{\substack{x<a\\k>0}}\right.
=e^{ikx}+R(k,\theta_{1},\theta_{2})e^{-ikx}\,,\quad\psi_{-}(x,k,\theta
_{1},\theta_{2})\left\vert _{\substack{x>b\\k>0}}\right. =T(k,\theta
_{1},\theta_{2})e^{ikx}\,, \label{gen_eigenfun_ext1}
\end{equation}
and
\begin{equation}
\psi_{-}(x,k,\theta_{1},\theta_{2})\left\vert _{\substack{x<a\\k<0}}\right.
=T(k,\theta_{1},\theta_{2})e^{ikx}\,,\quad\psi_{-}(x,k,\theta_{1},\theta
_{2})\left\vert _{\substack{x>b\\k<0}}\right. =e^{ikx}+R(k,\theta_{1},\theta_{2})e^{-ikx}\,, \label{gen_eigenfun_ext2}
\end{equation}
where $R$ and $T$ are the reflection and transmission coefficients. In the
case $\left( \theta_{1},\theta_{2}\right) =\left( 0,0\right) $, $\psi
_{-}(\cdot,k,0,0)$ is a generalized eigenfunction of the selfadjoint model
$Q_{0,0}\left( \mathcal{V}\right) $. In what follows we adopt the simplified
notation: $\psi_{-}(\cdot,k,0,0)=\psi_{-}(\cdot,k)$. These functions are
expressed in terms of the corresponding Jost's solutions as
\begin{equation}
\psi_{-}(x,k)=\left\{
\begin{array}
[c]{lll
-\frac{2ik}{w\left( k\right) }\chi_{+}(x,k)\,, & & \text{for }k\geq0\,,\\
& & \\
\frac{2ik}{w\left( -k\right) }\chi_{-}(x,-k)\,, & & \text{for }k<0\,,
\end{array}
\right. \label{gen_eigenfun_0
\end{equation}
(e.g. in \cite{Yafa}). In the case of positive-definite potentials, an approach
similar to the one leading to the Krein-like resolvent formula (\ref{krein_AB})
allows one to obtain an expansion of the difference $\left. \psi_{-}(\cdot,k,\theta_{1},\theta_{2})-\psi_{-}(\cdot,k)\right. $ for $\left.
\left( \theta_{1},\theta_{2}\right) \rightarrow\left( 0,0\right) \right.
$. To this aim, we need an explicit expression of the finite rank terms,
appearing at the r.h.s. of (\ref{krein_1}), in the limits where $z$ approaches
the branch cut. This can be done by using the results of Lemmas
\ref{Lemma_Green_ker} and \ref{Lemma_Krein_coeff}. Adopting the notation
introduced in (\ref{GH_zeta}), let us define
\begin{equation}
\left\{ g(e_{i},\zeta,\mathcal{V})\right\} _{i=1}^{4}=\left\{ G^{\zeta
}(\cdot,b)\,,\ -H^{\zeta}(\cdot,b)\,,\ G^{\zeta}(\cdot,a)\,,\ -H^{\zeta
}(\cdot,a)\right\} \,; \label{g_zeta}
\end{equation}
we get, for $\zeta\in\mathbb{C}^{+}$ and $z=\zeta^{2}$, the identity:
$\gamma(e_{i},z,\mathcal{V})=g(e_{i},\zeta,\mathcal{V})$; due to Lemma
\ref{Lemma_Green_ker}, the limits of $g(e_{i},\zeta,\mathcal{V})$ as
$\zeta\rightarrow k\in\mathbb{R}_{\pm}$ exist and correspond to the limits of
$\gamma(e_{i},z,\mathcal{V})$ as $z\rightarrow k^{2}\pm i0$ respectively.
Namely, we have
\begin{equation}
\lim_{z\rightarrow k^{2}\pm i0}\gamma(e_{i},z,\mathcal{V})=\left\{
\begin{array}
[c]{c
\left. g(e_{i},k,\mathcal{V})\right\vert _{k\in\mathbb{R}_{+}}\,,\\
\\
\left. g(e_{i},k,\mathcal{V})\right\vert _{k\in\mathbb{R}_{-}}\,.
\end{array}
\right. \label{g_k
\end{equation}
The coefficients $\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) _{ij}^{-1}$ have been considered in
Lemma \ref{Lemma_Krein_coeff}, where their regularity w.r.t. $z$ and their
extensions to the branch cut have been investigated. To get further insights
on the structure of the inverse matrix, we use the explicit form of $\left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
$ given in (\ref{Krein_coeff_0})-(\ref{Krein_coeff}). In what follows,
$\mathcal{M}\left( \zeta,\theta_{1},\theta_{2}\right) $ denotes the r.h.s.
of (\ref{Krein_coeff_0}):
\begin{equation}
\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2
}\right) =\mathcal{M}\left( \zeta,\theta_{1},\theta_{2}\right)
\,,\qquad\zeta\in\mathbb{C}^{+},\ z=\zeta^{2}\,. \label{M_zeta_teta
\end{equation}
Under the assumption (\ref{V_pos}), the matrix $\mathcal{M}\left( \zeta
,\theta_{1},\theta_{2}\right) $ extends continuously to $\zeta
\in\overline{\mathbb{C}^{+}}$, and taking its limits for $\zeta\rightarrow
k\in\mathbb{R}_{\pm}$ corresponds to considering the limits of $\left(
B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right)
_{ij}$ as $z\rightarrow k^{2}\pm i0$ respectively. This yields
\begin{equation}
\lim_{z\rightarrow k^{2}\pm i0}\left( B_{\theta_{1},\theta_{2}
\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) =\left\{
\begin{array}
[c]{c
\left. \mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \right\vert
_{k\in\mathbb{R}_{+}^{\ast}}\,,\\
\\
\left. \mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \right\vert
_{k\in\mathbb{R}_{-}}\,.
\end{array}
\right. \label{krein_coeff_1
\end{equation}
In particular, making use of (\ref{Krein_coeff}), we have
\begin{align}
\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) & =\label{M_k_teta}\\
&
\begin{pmatrix}
\mathcal{\beta}\left( \theta_{2}\right) \mathcal{O}\left( 1\right)
-\alpha\left( \theta_{2}\right) & \mathcal{\beta}\left( \theta_{2}\right)
\mathcal{O}\left( 1+\left\vert k\right\vert \right) & \mathcal{\beta}\left(
\theta_{2}\right) \mathcal{O}\left( 1\right) & \mathcal{\beta}\left(
\theta_{2}\right) \mathcal{O}\left( 1+\left\vert k\right\vert \right) \\
\mathcal{\beta}\left( \theta_{1}\right) \mathcal{O}\left( \frac
{1}{1+\left\vert k\right\vert }\right) & \mathcal{\beta}\left( \theta
_{1}\right) \mathcal{O}\left( 1\right) -\alpha\left( \theta_{1}\right) &
\mathcal{\beta}\left( \theta_{1}\right) \mathcal{O}\left( \frac
{1}{1+\left\vert k\right\vert }\right) & \mathcal{\beta}\left( \theta
_{1}\right) \mathcal{O}\left( 1\right) \\
\mathcal{\beta}\left( -\theta_{2}\right) \mathcal{O}\left( 1\right) &
\mathcal{\beta}\left( -\theta_{2}\right) \mathcal{O}\left( 1+\left\vert
k\right\vert \right) & \mathcal{\beta}\left( -\theta_{2}\right)
\mathcal{O}\left( 1\right) -\alpha\left( -\theta_{2}\right) &
\mathcal{\beta}\left( -\theta_{2}\right) \mathcal{O}\left( 1+\left\vert
k\right\vert \right) \\
\mathcal{\beta}\left( -\theta_{1}\right) \mathcal{O}\left( \frac
{1}{1+\left\vert k\right\vert }\right) & \mathcal{\beta}\left( -\theta
_{1}\right) \mathcal{O}\left( 1\right) & \mathcal{\beta}\left( -\theta
_{1}\right) \mathcal{O}\left( \frac{1}{1+\left\vert k\right\vert }\right) &
\mathcal{\beta}\left( -\theta_{1}\right) \mathcal{O}\left( 1\right)
-\alpha\left( -\theta_{1}\right)
\end{pmatrix}
\,.\nonumber
\end{align}
From Lemma \ref{Lemma_Krein_coeff}, this matrix is invertible whenever
$\left( \theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta}\left( \left(
0,0\right) \right) $ with $\delta$ small enough; under such a condition,
indeed, the determinant's expansion
\[
\left.
\begin{array}
[c]{l
\det\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) =\det\left(
-A_{\theta_{1},\theta_{2}}\right) +\mathcal{O}\left( \theta_{1}\right)
+\mathcal{O}\left( \theta_{2}\right) \,,\\
\\
\det\left( -A_{\theta_{1},\theta_{2}}\right) =4\left( 1+\cosh\frac
{\theta_{1}}{2}\right) \left( 1+\cosh\frac{\theta_{2}}{2}\right) \,,
\end{array}
\right.
\]
(see the relation (\ref{d_exp})) implies: $\left\vert \det\mathcal{M}\left(
k,\theta_{1},\theta_{2}\right) \right\vert >1$. Then, a direct computation
leads to
\begin{equation}
\mathcal{M}^{-1}\left( k,\theta_{1},\theta_{2}\right) =\frac{1
{\det\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) }\left[ \det\left(
A_{\theta_{1},\theta_{2}}\right) \,diag\left\{ \lambda_{i}\right\}
\,+M(k,\theta_{1},\theta_{2})\right] \,, \label{krein_coeff_inv
\end{equation}
where $diag\left( \lambda_{i}\right) $, the main term in
(\ref{krein_coeff_inv}), is the $\mathbb{C}^{4,4}$ diagonal matrix defined by
the coefficients
\begin{equation}
\left\{ \lambda_{i}\right\} _{i=1}^{4}=\left\{ \frac{-1}{\alpha\left(
\theta_{2}\right) }\,,\ \frac{-1}{\alpha\left( \theta_{1}\right)
}\,,\ \frac{-1}{\alpha\left( -\theta_{2}\right) }\,,\ \frac{-1
{\alpha\left( -\theta_{1}\right) }\right\} \,, \label{coeff
\end{equation}
while the remainder is
\begin{equation}
M(k,\theta_{1},\theta_{2})=
\begin{pmatrix}
\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left( \theta_{2}\right) &
\mathcal{O}\left( \theta_{2}(1+\left\vert k\right\vert )\right) &
\mathcal{O}\left( \theta_{2}\right) & \mathcal{O}\left( \theta
_{2}(1+\left\vert k\right\vert )\right) \\
\mathcal{O}\left( \frac{\theta_{1}}{1+\left\vert k\right\vert }\right) &
\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left( \theta_{2}\right) &
\mathcal{O}\left( \frac{\theta_{1}}{1+\left\vert k\right\vert }\right) &
\mathcal{O}\left( \theta_{1}\right) \\
\mathcal{O}\left( \theta_{2}\right) & \mathcal{O}\left( \theta
_{2}(1+\left\vert k\right\vert )\right) & \mathcal{O}\left( \theta
_{1}\right) +\mathcal{O}\left( \theta_{2}\right) & \mathcal{O}\left(
\theta_{2}(1+\left\vert k\right\vert )\right) \\
\mathcal{O}\left( \frac{\theta_{1}}{1+\left\vert k\right\vert }\right) &
\mathcal{O}\left( \theta_{1}\right) & \mathcal{O}\left( \frac{\theta_{1
}{1+\left\vert k\right\vert }\right) & \mathcal{O}\left( \theta_{1}\right)
+\mathcal{O}\left( \theta_{2}\right)
\end{pmatrix}
\,, \label{krein_coeff_inv1
\end{equation}
Here, the symbols $\mathcal{O}\left( \cdot\right) $ refer to the
metric space $\mathcal{B}_{\delta}\left( \left( 0,0\right) \right)
\times\mathbb{R}$ and, being obtained from the computation of the inverse matrix
$\left( B_{\theta_{1},\theta_{2}}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) ^{-1}$, denote polynomial expressions depending on the functions
$\alpha\left( \pm\theta_{i}\right) $, $\mathcal{\beta}\left( \pm\theta
_{i}\right) $, $G^{\zeta}\left( x,y\right) $, $\partial_{1}^{i}H^{\zeta
}\left( x,y\right) $, with $x,y\in\left\{ a,b\right\} $ and $i=0,1$. Then,
as a consequence of Lemma \ref{Lemma_Green_ker}, these terms are holomorphic
w.r.t. the parameters $\left( \theta_{1},\theta_{2}\right) \in
\mathcal{B}_{\delta}\left( \left( 0,0\right) \right) $ and continuous
w.r.t. $k\in\mathbb{R}$.
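As a quick consistency check of this invertibility condition, note that evaluating the
second relation of the determinant expansion above at the origin gives
\[
\det\left( -A_{0,0}\right) =4\left( 1+\cosh0\right) \left( 1+\cosh0\right) =16\,,
\]
so that, for $\left( \theta_{1},\theta_{2}\right) $ small enough, $\left\vert \det
\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \right\vert $ stays bounded away from
zero uniformly w.r.t. $k\in\mathbb{R}$, in agreement with the bound $\left\vert \det
\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \right\vert >1$ used above.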
\begin{proposition}
\label{Proposition_Krein_gen_eigenfun}Assume $\left( \theta_{1},\theta
_{2}\right) \in\mathcal{B}_{\delta}\left( \left( 0,0\right) \right) $
with $\delta>0$ small enough, and let $\mathcal{V}$ be defined according to
(\ref{V}), (\ref{V_pos}). The solutions $\psi_{-}(\cdot,k,\theta_{1},\theta_{2})$ to the generalized eigenfunction problem (\ref{Jost_eq_teta}),
(\ref{gen_eigenfun_ext1})-(\ref{gen_eigenfun_ext2}) admit the representation
\begin{equation}
\psi_{-}(\cdot,k,\theta_{1},\theta_{2})=\left\{
\begin{array}
[c]{lll
\psi_{-}(\cdot,k)-\sum_{i,j=1}^{4}\left[ \mathcal{M}^{-1}\left( k,\theta
_{1},\theta_{2}\right) B_{\theta_{1},\theta_{2}}\right] _{ij}\left[
\Gamma_{1}\psi_{-}(\cdot,k)\right] _{j}\,g(e_{i},k,\mathcal{V})\,, & &
\text{for }k\geq0\,,\\
& & \\
\psi_{-}(\cdot,k)-\sum_{i,j=1}^{4}\left[ \mathcal{M}^{-1}\left(
-k,\theta_{1},\theta_{2}\right) B_{\theta_{1},\theta_{2}}\right]
_{ij}\left[ \Gamma_{1}\psi_{-}(\cdot,k)\right] _{j}\,g(e_{i},-k,\mathcal{V
)\,, & & \text{for }k<0\,.
\end{array}
\right. \label{gen_eigenfun_Krein
\end{equation}
The functions $\psi_{-}(x,k,\theta_{1},\theta_{2})$ are $\mathcal{C}^{1}$ w.r.t. $x\in\mathbb{R}\backslash\left\{ a,b\right\} $, continuous w.r.t. $k\in\mathbb{R}$,
and holomorphic w.r.t. the parameters $\left( \theta_{1},\theta_{2}\right) $ in $\mathcal{B}_{\delta}\left( \left( 0,0\right)
\right) $.
\end{proposition}
\begin{proof}
We start by considering the case $k\geq0$. According to the definitions of
$\psi_{-}(\cdot,k)$ and $g(e_{i},k,\mathcal{V})$, the function at the r.h.s.
of (\ref{gen_eigenfun_Krein}) solves the equation
\begin{equation}
\left( -\partial_{x}^{2}+\mathcal{V}\right) u=k^{2}u\,,\qquad\text{for
x\in\mathbb{R}\backslash\left\{ a,b\right\} \,,\ k\in\mathbb{R}\,,
\end{equation}
and fulfills the conditions (\ref{gen_eigenfun_ext1}) and
(\ref{gen_eigenfun_ext2}). Set $\psi_{-}(\cdot,k,\theta_{1},\theta_{2})=\phi-\psi$ with
\begin{align}
\phi & =\psi_{-}(\cdot,k)\,,\label{phi}\\
& \nonumber\\
\psi & =\sum_{i,j=1}^{4}\left[ \mathcal{M}^{-1}\left( k,\theta_{1
,\theta_{2}\right) B_{\theta_{1},\theta_{2}}\right] _{ij}\left[ \Gamma
_{1}\phi\right] _{j}\,g(e_{i},k,\mathcal{V})\,.\label{psi
\end{align}
The function $\psi$ can be pointwise approximated by elements of the defect
spaces $\mathcal{N}_{z}$ as $z\rightarrow k^{2}+i0$. With the notation
introduced in (\ref{M_zeta_teta}) and (\ref{g_zeta}), let $\psi_{z}$ be
defined by
\begin{equation}
\psi_{z}=\sum_{i,j=1}^{4}\left[ \mathcal{M}^{-1}\left( \zeta,\theta
_{1},\theta_{2}\right) B_{\theta_{1},\theta_{2}}\right] _{ij}\left[
\Gamma_{1}\phi\right] _{j}\,g(e_{i},\zeta,\mathcal{V})\,,\quad\zeta
\in\mathbb{C}^{+}\,,\ z=\zeta^{2}\,;\label{psi_zeta
\end{equation}
it follows that $\psi_{z}\in\mathcal{N}_{z}$ and $\lim_{z\rightarrow k^{2}+i0}\psi_{z}=\psi$. Since $\psi_{-}(\cdot,k)$ is $\mathcal{C}_{x}^{1}$-continuous
in $\mathbb{R}$, we have $\Gamma_{0}\phi=0$ and the following relations hold:
\begin{align}
\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \Gamma_{0}\left( \phi
-\psi\right) & =-\mathcal{M}\left( k,\theta_{1},\theta_{2}\right)
\Gamma_{0}\psi=-\lim_{z\rightarrow k^{2}+i0}\left( B_{\theta_{1},\theta_{2
}\,q(z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) \Gamma_{0}\psi
_{z}\nonumber\\
& =-\lim_{z\rightarrow k^{2}+i0}\left( B_{\theta_{1},\theta_{2}}\,\Gamma
_{1}\gamma(\cdot,z,\mathcal{V})-A_{\theta_{1},\theta_{2}}\right) \Gamma
_{0}\psi_{z}\nonumber\\
& =-\lim_{z\rightarrow k^{2}+i0}\left( B_{\theta_{1},\theta_{2}}\,\Gamma
_{1}-A_{\theta_{1},\theta_{2}}\Gamma_{0}\right) \psi_{z}=\left(
-B_{\theta_{1},\theta_{2}}\,\Gamma_{1}+A_{\theta_{1},\theta_{2}}\Gamma
_{0}\right) \psi\,.\label{one_k
\end{align}
The $n$-th component of the vector at the l.h.s. of (\ref{one_k}) writes as
\begin{multline*}
\left[ \smallskip\mathcal{M}\left( k,\theta_{1},\theta_{2}\right)
\Gamma_{0}\left( \phi-\psi\right) \right] _{n}=\left[ \smallskip
-\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \Gamma_{0}\psi\right]
_{n}=\\
-\sum_{i,j=1}^{4}\left[ \mathcal{M}\left( k,\theta
_{1},\theta_{2}\right) \Gamma_{0}g(e_{i},k,\mathcal{V})\right] _{n}\left[ \mathcal{M}^{-1}\left( k,\theta_{1},\theta_{2}\right) B_{\theta_{1},\theta_{2}}\right]
_{ij}\left[ \Gamma_{1}\phi\right] _{j}\,.
\end{multline*}
Recalling that $\Gamma_{0}g(e_{i},k,\mathcal{V})=e_{i}$, we get
\begin{multline*}
\left[ \smallskip\mathcal{M}\left( k,\theta_{1},\theta_{2}\right)
\Gamma_{0}\left( \phi-\psi\right) \right] _{n}=\\
-\sum_{i,j=1}^{4}\left( \mathcal{M}\left( k,\theta_{1},\theta_{2}\right)
\right) _{ni}\left[ \mathcal{M}^{-1}\left( k,\theta_{1},\theta_{2}\right)
B_{\theta_{1},\theta_{2}}\right] _{ij}\left[ \Gamma_{1}\phi\right]
_{j}=-\sum_{i,j=1}^{4}B_{nj}\left[ \Gamma_{1}\phi\right] _{j}\,,
\end{multline*}
which implies
\begin{equation}
\mathcal{M}\left( k,\theta_{1},\theta_{2}\right) \Gamma_{0}\left( \phi
-\psi\right) =-B\Gamma_{1}\phi\,.\label{two_k
\end{equation}
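To make the conclusion explicit, one can equate the right-hand sides of
(\ref{one_k}) and (\ref{two_k}); using $\Gamma_{0}\psi=-\Gamma_{0}\left(
\phi-\psi\right) $ (a consequence of $\Gamma_{0}\phi=0$), this gives
\[
-B_{\theta_{1},\theta_{2}}\,\Gamma_{1}\psi+A_{\theta_{1},\theta_{2}}\Gamma
_{0}\psi=-B_{\theta_{1},\theta_{2}}\,\Gamma_{1}\phi\,,\qquad\text{i.e.}\qquad
A_{\theta_{1},\theta_{2}}\Gamma_{0}\left( \phi-\psi\right) =B_{\theta
_{1},\theta_{2}}\,\Gamma_{1}\left( \phi-\psi\right) \,.
\]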
From (\ref{one_k}) and (\ref{two_k}), the interface conditions
\begin{equation}
A_{\theta_{1},\theta_{2}}\Gamma_{0}\psi_{-}(\cdot,k,\theta_{1},\theta
_{2})=B_{\theta_{1},\theta_{2}}\Gamma_{1}\psi_{-}(\cdot,k,\theta_{1},\theta_{2})\,,\label{gen_eigenfun_bc}
\end{equation}
follow. Since these are equivalent to the ones assigned in the equation
(\ref{Jost_eq_teta}), the function defined in (\ref{gen_eigenfun_Krein}) is a
solution to the problem (\ref{Jost_eq_teta}), (\ref{gen_eigenfun_ext1
)-(\ref{gen_eigenfun_ext2}). The case $k<0$ can be treated by a suitable
adaptation of the above arguments.
The regularity of the generalized eigenfunctions w.r.t. the variables
$\left\{ x,k,\theta_{1},\theta_{2}\right\} $ is a consequence of the
representation (\ref{gen_eigenfun_Krein}) and of the properties of the maps
$\psi_{-}(\cdot,k)$, $\mathcal{M}^{-1}\left( k,\theta_{1},\theta_{2}\right)
$, $B_{\theta_{1},\theta_{2}}$, and $g(e_{i},k,\mathcal{V})$ (for this point,
we refer to the corresponding definitions and to the results of the
Proposition \ref{Proposition_Jost} and of Lemmata \ref{Lemma_Green_ker}-\ref{Lemma_Krein_coeff}).
\end{proof}
As a consequence of the above result, an expansion of $\psi_{-}(\cdot
,k,\theta_{1},\theta_{2})$ for small values of $\theta_{i}$ follows.
\begin{corollary}
\label{Corollary_eigenfun_exp}Let $\psi_{-}(\cdot,k,\theta_{1},\theta_{2})$
denote a solution to the generalized eigenfunction problem
(\ref{Jost_eq_teta}), (\ref{gen_eigenfun_ext1})-(\ref{gen_eigenfun_ext2}). Under
the assumptions of Proposition \ref{Proposition_Krein_gen_eigenfun}, the
expansion
\begin{equation}
\psi_{-}(\cdot,k,\theta_{1},\theta_{2})-\psi_{-}(\cdot,k)=\mathcal{O}\left(
\theta_{2}k\right) G^{\sigma k}(\cdot,b)+\mathcal{O}\left( \frac{\theta
_{1}k}{1+\left\vert k\right\vert }\right) H^{\sigma k}(\cdot,b)+\mathcal{O
\left( \theta_{2}k\right) G^{\sigma k}(\cdot,a)+\mathcal{O}\left(
\frac{\theta_{1}k}{1+\left\vert k\right\vert }\right) H^{\sigma k
(\cdot,a)\,. \label{gen_eigenfun_exp
\end{equation}
holds with: $\sigma=\frac{k}{\left\vert k\right\vert }$. The symbols
$\mathcal{O}\left( \cdot\right) $ are defined in the sense of the metric
space $\mathbb{R\times}\mathcal{B}_{\delta}\left( \left( 0,0\right)
\right) $.
\end{corollary}
\begin{proof}
As already noticed, the positivity assumption (\ref{V_pos})
prevents the Jost's function $w(k)$ from having zeroes on the real axis. In
particular, a consequence of the definition (\ref{gen_eigenfun_0}) and of the
relations (\ref{Jost_sol_bound}) is
\begin{equation}
\psi_{-}(x,k)=\mathcal{O}\left( \frac{k}{1+\left\vert k\right\vert }\right)
\,;\qquad\partial_{x}\psi_{-}(x,k)=\mathcal{O}\left( k\right) \,,
\label{gen_eigenfun_0_est
\end{equation}
and a direct computation yields
\begin{equation}
B_{\theta_{1},\theta_{2}}\Gamma_{1}\psi_{-}(\cdot,k)=\left\{ \,\mathcal{O}\left( \theta_{2}k\right) \,,\ \mathcal{O}\left( \frac{\theta_{1}k}{1+\left\vert k\right\vert }\right) \,,\ \mathcal{O}\left( \theta_{2}k\right) \,,\ \mathcal{O}\left( \frac{\theta_{1}k}{1+\left\vert k\right\vert }\right) \right\} \,,
\end{equation}
where the symbols $\mathcal{O}\left( \cdot\right) $ refer to the
metric space $\mathbb{R}\times\mathcal{B}_{\delta}\left( \left( 0,0\right)
\right) $. Making use of this expression and of the relations
(\ref{krein_coeff_inv})-(\ref{krein_coeff_inv1}), we get
\begin{equation}
\mathcal{M}^{-1}\left( \sigma k,\theta_{1},\theta_{2}\right) B_{\theta
_{1},\theta_{2}}\left[ \Gamma_{1}\psi_{-}(\cdot,k)\right] =\left\{
\,\mathcal{O}\left( \theta_{2}k\right) ,\ \mathcal{O}\left( \frac
{\theta_{1}k}{1+\left\vert k\right\vert }\right) \,,\ \mathcal{O}\left(
\theta_{2}k\right) \,,\ \mathcal{O}\left( \frac{\theta_{1}k}{1+\left\vert
k\right\vert }\right) \right\} \,. \label{coeff1
\end{equation}
Then, the expansion (\ref{gen_eigenfun_exp}) follows from the formula
(\ref{gen_eigenfun_Krein}) by taking into account (\ref{coeff1}) and the
definition (\ref{g_zeta}).
\end{proof}
\section{\label{Section_Similarity}Similarity and uniform-in-time estimates
for the dynamical system.}
In what follows, $\mathcal{V}$ is a positive short-range potential. With this
assumption, $Q_{0,0}(\mathcal{V})$ has a purely absolutely continuous spectrum
and the related generalized Fourier transform $\mathcal{F}_{\mathcal{V}}
\begin{equation}
\left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)=\int_{\mathbb{R}
\frac{dx}{\left( 2\pi\right) ^{1/2}}\,\psi_{-}^{\ast}(x,k)\varphi
(x)\,,\qquad\varphi\in L^{2}(\mathbb{R})\,,\label{gen_Fourier
\end{equation}
is a unitary map with range $R\left( \mathcal{F}_{\mathcal{V}}\right)
\mathcal{=}$ $L^{2}(\mathbb{R})$ and an inverse map $\mathcal{F}_{\mathcal{V
}^{-1}$ acting as
\begin{equation}
\left( \mathcal{F}_{\mathcal{V}}^{-1}f\right) (x)=\int\frac{dk}{\left(
2\pi\right) ^{1/2}}\,\psi_{-}(x,k)f(k)\,,
\end{equation}
for all $f\in L^{2}(\mathbb{R})$. Assume in addition the parameters
$\theta_{1},\theta_{2}$ to be close enough to the origin, so that the
expansion (\ref{gen_eigenfun_exp}) holds, and consider the operator
$\mathcal{W}_{\theta_{1},\theta_{2}}$ defined by the integral kernel
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}(x,y)=\int_{\mathbb{R}}\frac{dk}{2\pi
}\,\psi_{-}(x,k,\theta_{1},\theta_{2})\psi_{-}^{\ast
(y,k)\,.\label{W_teta_ker
\end{equation}
The next Proposition shows that $\mathcal{W}_{\theta_{1},\theta_{2}}$ forms an
analytic family of bounded operators w.r.t. $\left( \theta_{1},\theta
_{2}\right) $, while, for fixed values of the parameters, $\mathcal{W
_{\theta_{1},\theta_{2}}$ induces a similarity between $Q_{\theta_{1
,\theta_{2}}(\mathcal{V})$ and $Q_{0,0}(\mathcal{V})$.
\begin{proposition}
\label{Proposition_W_cont}Let $\mathcal{V}$ satisfy the conditions (\ref{V}),
(\ref{V_pos}) and assume $\left( \theta_{1},\theta_{2}\right) \in
\mathcal{B}_{\delta}(\left( 0,0\right) )$ with $\delta>0$ small enough.
Then, the set $\left\{ \mathcal{W}_{\theta_{1},\theta_{2}}\,,\ \left(
\theta_{1},\theta_{2}\right) \in\mathcal{B}_{\delta}(\left( 0,0\right)
)\right\} $ forms an analytic family of bounded operators in $L^{2}(\mathbb{R})$, w.r.t. $\left( \theta_{1},\theta_{2}\right) $, and the
expansion
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}=1+\mathcal{O}\left( \theta_{1}\right)
+\mathcal{O}\left( \theta_{2}\right) \,, \label{W_teta_exp
\end{equation}
holds in the $\mathcal{L}\left( L^{2}(\mathbb{R}),L^{2}(\mathbb{R})\right) $
operator norm. The couple $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$,
$Q_{0,0}(\mathcal{V})$ is intertwined through $\mathcal{W}_{\theta_{1
},\theta_{2}}$ by
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V})\mathcal{W}_{\theta_{1},\theta_{2
}=\mathcal{W}_{\theta_{1},\theta_{2}}Q_{0,0}(\mathcal{V})\,.
\label{W_teta_inter
\end{equation}
\end{proposition}
\begin{proof}
Let us consider the action of $\mathcal{W}_{\theta_{1},\theta_{2}}$ on
$\varphi\in L^{2}(\mathbb{R})$; making use of (\ref{gen_Fourier}) and
(\ref{W_teta_ker}), this writes as
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}\varphi=\int_{\mathbb{R}}\frac{dk}{\left(
2\pi\right) ^{1/2}}\,\psi_{-}(\cdot,k,\theta_{1},\theta_{2})\left(
\mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,, \label{W_teta_act
\end{equation}
and, expressing $\psi_{-}(x,k,\theta_{1},\theta_{2})$ through the expansion
(\ref{gen_eigenfun_exp}), we get
\begin{align}
\mathcal{W}_{\theta_{1},\theta_{2}}\varphi & =\int_{\mathbb{R}}\frac
{dk}{\left( 2\pi\right) ^{1/2}}\,\psi_{-}(\cdot,k)\left( \mathcal{F
_{\mathcal{V}}\varphi\right) (k)+\int_{\mathbb{R}}\frac{dk}{\left(
2\pi\right) ^{1/2}}\,\left[ \mathcal{O}\left( \theta_{2}k\right)
G^{\left\vert k\right\vert }(\cdot,b)+\mathcal{O}\left( \theta_{2}k\right)
G^{\left\vert k\right\vert }(\cdot,a)\right] \left( \mathcal{F
_{\mathcal{V}}\varphi\right) (k)\nonumber\\
& \nonumber\\
& +\int_{\mathbb{R}}\frac{dk}{\left( 2\pi\right) ^{1/2}}\,\left[
\mathcal{O}\left( \frac{\theta_{1}k}{1+\left\vert k\right\vert }\right)
H^{\left\vert k\right\vert }(\cdot,b)+\mathcal{O}\left( \frac{\theta_{1
k}{1+\left\vert k\right\vert }\right) H^{\left\vert k\right\vert
(\cdot,a)\right] \left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,,
\label{W_teta_exp1
\end{align}
where, it is important to remark, the symbols $\mathcal{O}\left(
\cdot\right) $ here denote functions depending only on $k$, $\theta_{1}$ and
$\theta_{2}$, but independent of $x$. Since $\int\frac{dk}{\left(
2\pi\right) ^{1/2}}\,\psi_{-}(\cdot,k)\left( \mathcal{F}_{\mathcal{V
}\varphi\right) (k)=\mathcal{F}_{\mathcal{V}}^{-1}\left( \mathcal{F
_{\mathcal{V}}\varphi\right) $, this equation yields: $\left( \mathcal{W
_{\theta_{1},\theta_{2}}-\mathbb{I}\right) \varphi=I+II$, where
\begin{equation}
I(\varphi)=\int_{\mathbb{R}}\frac{dk}{\left( 2\pi\right) ^{1/2}}\,\left[
\mathcal{O}\left( \theta_{2}k\right) G^{\left\vert k\right\vert
(\cdot,b)+\mathcal{O}\left( \theta_{2}k\right) G^{\left\vert k\right\vert
}(\cdot,a)\right] \left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,,
\label{I
\end{equation}
and
\begin{equation}
II(\varphi)=\int_{\mathbb{R}}\frac{dk}{\left( 2\pi\right) ^{1/2}}\,\left[
\mathcal{O}\left( \frac{\theta_{1}k}{1+\left\vert k\right\vert }\right)
H^{\left\vert k\right\vert }(\cdot,b)+\mathcal{O}\left( \frac{\theta_{1
k}{1+\left\vert k\right\vert }\right) H^{\left\vert k\right\vert
(\cdot,a)\right] \left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,.
\label{II
\end{equation}
In order to obtain the expansion (\ref{W_teta_exp}), $L^{2}$-norm estimates of
the maps defined in (\ref{I}) and (\ref{II}) are needed. We first consider
the case of $I(\varphi)$; let us define $\phi_{\alpha}$ as
\begin{equation}
\phi_{\alpha}(x)=\int_{\mathbb{R}}\frac{dk}{\left( 2\pi\right) ^{1/2
}\,\mathcal{O}\left( k\right) G^{\left\vert k\right\vert }(x,\alpha)\left(
\mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,,\quad\alpha\in\left\{
a,b\right\} \,, \label{Phi_alpha
\end{equation}
with $\mathcal{O}\left( \cdot\right) $ depending only on $k$. The $L^{2}$-norm of $\phi_{\alpha}$ is bounded by
\begin{equation}
\left\Vert \phi_{\alpha}\right\Vert _{L^{2}\left( \mathbb{R}\right)
\leq\left\Vert 1_{\left\{ x\leq a\right\} }\phi_{\alpha}\right\Vert
_{L^{2}\left( \mathbb{R}\right) }+\left\Vert 1_{\left( a,b\right)
\phi_{\alpha}\right\Vert _{L^{2}\left( \mathbb{R}\right) }+\left\Vert
1_{\left\{ x\geq b\right\} }\phi_{\alpha}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\,. \label{Phi_alpha_bound
\end{equation}
For $\alpha=b$, making use of the explicit form of $G^{k}(x,b)$, given by
(\ref{G_z}) for $\zeta=k$, and exploiting the relations (\ref{Jost_sol_bound})
and
\begin{equation}
1_{\left\{ x\leq a\right\} }\chi_{-}(x,k)=e^{-ikx}\,,\qquad1_{\left\{ x\geq
b\right\} }\chi_{+}(x,k)=e^{-ikx}\,, \label{Jost_ext2
\end{equation}
we have
\begin{align}
1_{\left\{ x\leq a\right\} }(x)\mathcal{O}\left( k\right) G^{\left\vert
k\right\vert }(x,b) & =1_{\left\{ x\leq a\right\} }(x)\tau_{1}\left(
k\right) e^{-i\left\vert k\right\vert x}\label{Green_ext1}\\
& \nonumber\\
1_{\left\{ x\geq b\right\} }(x)\mathcal{O}\left( k\right) G^{\left\vert
k\right\vert }(x,b) & =1_{\left\{ x\geq b\right\} }(x)\tau_{2}\left(
k\right) e^{i\left\vert k\right\vert x} \label{Green_ext2
\end{align}
with $\tau_{1},\tau_{2}\in L_{k}^{\infty}\left( \mathbb{R}\right) $. In the
following, $\mathcal{P}$ denotes the parity operator $\mathcal{P}u(t)=u(-t)$; from (\ref{Green_ext1}), we get
\begin{align}
1_{\left\{ x\leq a\right\} }(x)\phi_{b}(x) & =1_{\left\{ x\leq a\right\}
}(x)\int_{\mathbb{R}}\frac{dk}{\left( 2\pi\right) ^{1/2}}\,\tau_{1}\left(
k\right) e^{-i\left\vert k\right\vert x}\left( \mathcal{F}_{\mathcal{V
}\varphi\right) (k)\nonumber\\
& =1_{\left\{ x\leq a\right\} }(x)\left( \mathcal{F}_{0}^{-1}\left(
1_{k<0}\tau_{1}\mathcal{F}_{\mathcal{V}}\varphi+\mathcal{P}\left( 1_{k>0
\tau_{1}\mathcal{F}_{\mathcal{V}}\varphi\right) \right) \right) (x)\,,
\end{align}
where, according to the notation introduced in (\ref{gen_Fourier}),
$\mathcal{F}_{0}$ is the standard Fourier transform. Thus, $1_{\left\{ x\leq
a\right\} }\phi_{b}$ is estimated by
\begin{equation}
\left\Vert 1_{\left\{ x\leq a\right\} }\phi_{b}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\leq\left\Vert \mathcal{F}_{0}^{-1}\left( 1_{k<0}\tau
_{1}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert _{L^{2}\left(
\mathbb{R}\right) }+\left\Vert \mathcal{F}_{0}^{-1}\mathcal{P}\left(
1_{k>0}\tau_{1}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert
_{L^{2}\left( \mathbb{R}\right) }\lesssim\left\Vert \varphi\right\Vert
_{L^{2}\left( \mathbb{R}\right) }\,, \label{est1
\end{equation}
while, for $1_{\left\{ x\geq b\right\} }\phi_{b}$, a similar inequality
follows by using (\ref{Green_ext2}):
\begin{equation}
\left\Vert 1_{\left\{ x\geq b\right\} }\phi_{b}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\leq\left\Vert \mathcal{F}_{0}^{-1}\mathcal{P}\left(
1_{k<0}\tau_{2}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert
_{L^{2}\left( \mathbb{R}\right) }+\left\Vert \mathcal{F}_{0}^{-1}\left(
1_{k>0}\tau_{2}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert
_{L^{2}\left( \mathbb{R}\right) }\lesssim\left\Vert \varphi\right\Vert
_{L^{2}\left( \mathbb{R}\right) }\,. \label{est2
\end{equation}
According to the definition of $G^{k}(x,b)$ for $x<b$, the term $1_{\left(
a,b\right) }\phi_{b}$ writes as
\begin{equation}
1_{\left( a,b\right) }(x)\phi_{b}(x)=1_{\left( a,b\right) }(x)\int
_{\mathbb{R}}dk\,\frac{\mathcal{O}\left( k\right) \chi_{+}(b,\left\vert
k\right\vert )}{w(\left\vert k\right\vert )}\chi_{-}(x,\left\vert k\right\vert
)\left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)=1_{\left( a,b\right)
}(x)\int_{\mathbb{R}}dk\,\chi_{-}(x,\left\vert k\right\vert )\tau
_{3}(k)\left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,, \label{est3_0
\end{equation}
where $\tau_{3}\in L_{k}^{\infty}\left( \mathbb{R}\right) $ is: $\tau
_{3}(k)=\frac{\mathcal{O}\left( k\right) \chi_{+}(b,\left\vert k\right\vert
)}{w(\left\vert k\right\vert )}$. Using the definition (\ref{gen_eigenfun_0})
and the identities
\begin{equation}
\chi_{\pm}(\cdot,-k)=\chi_{\pm}^{\ast}(\cdot,k)\,,\qquad w(-k)=w^{\ast}(k)\,,
\end{equation}
it follows
\begin{align}
1_{k<0}(k)\chi_{-}(x,-k) & =1_{k<0}(k)\frac{w\left( -k\right) }{2ik
\psi_{-}(x,k)\,,\label{Jost_gen_eigenfun1}\\
& \nonumber\\
1_{k\geq0}(k)\chi_{-}(x,k) & =-1_{k\geq0}(k)\frac{w\left( k\right)
{2ik}\psi_{-}(x,-k)\,. \label{Jost_gen_eigenfun2
\end{align}
Take $\tilde{\tau}_{3}(k)=\tau_{3}(k)\frac{w\left( -k\right) }{2ik}$ and
$\hat{\tau}_{3}(k)=\tau_{3}(k)\frac{w\left( k\right) }{-2ik}$; it follows that
$\tilde{\tau}_{3},\hat{\tau}_{3}\in L_{k}^{\infty}\left( \mathbb{R}\right) $,
and the r.h.s. of (\ref{est3_0}) rephrases as
\begin{equation}
1_{\left( a,b\right) }(x)\phi_{b}(x)=1_{\left( a,b\right) }(x)\left[
\int_{k<0}dk\,\psi_{-}(x,k)\tilde{\tau}_{3}(k)\left( \mathcal{F
_{\mathcal{V}}\varphi\right) (k)+\int_{k>0}dk\,\psi_{-}(x,-k)\left(
\hat{\tau}_{3}(k)\left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)\right)
\right] \,. \label{est3_1
\end{equation}
The first term identifies with the inverse Fourier transform of $1_{k<0
\tilde{\tau}_{3}\mathcal{F}_{\mathcal{V}}\varphi$
\begin{equation}
\int_{k<0}dk\,\psi_{-}(\cdot,k)\tilde{\tau}_{3}(k)\left( \mathcal{F
_{\mathcal{V}}\varphi\right) (k)=\mathcal{F}_{\mathcal{V}}^{-1}\left(
1_{k<0}\tilde{\tau}_{3}\mathcal{F}_{\mathcal{V}}\varphi\right) \,,
\label{est3_2
\end{equation}
while, for the second term, we have
\begin{equation}
\int_{k>0}dk\,\psi_{-}(\cdot,-k)\left( \hat{\tau}_{3}(k)\left(
\mathcal{F}_{\mathcal{V}}\varphi\right) (k)\right) =-\int_{k<0}dk\,\psi
_{-}(\cdot,k)\left( \mathcal{P}\left( \hat{\tau}_{3}\mathcal{F
_{\mathcal{V}}\varphi\right) \right) (k)=-\mathcal{F}_{\mathcal{V}
^{-1}\mathcal{P}\left( 1_{k>0}\hat{\tau}_{3}\mathcal{F}_{\mathcal{V}
\varphi\right) \,. \label{est3_3
\end{equation}
The above relations yield the estimate
\begin{equation}
\left\Vert 1_{\left( a,b\right) }\phi_{b}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\leq\left\Vert 1_{\left( a,b\right) }\mathcal{F}_{\mathcal{V}}^{-1}\left( 1_{k<0}\tilde{\tau}_{3}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert _{L^{2}\left( \mathbb{R}\right)
}+\left\Vert 1_{\left( a,b\right) }\mathcal{F}_{\mathcal{V}}^{-1}\mathcal{P}\left( 1_{k>0}\hat{\tau}_{3}\mathcal{F}_{\mathcal{V}}\varphi\right) \right\Vert _{L^{2}\left( \mathbb{R}\right) }\lesssim
\left\Vert \varphi\right\Vert _{L^{2}\left( \mathbb{R}\right) }\,.
\label{est3_4}
\end{equation}
As a consequence of (\ref{est1}), (\ref{est2}) and (\ref{est3_4}) we get
\begin{equation}
\left\Vert \phi_{b}\right\Vert _{L^{2}\left( \mathbb{R}\right)
\lesssim\left\Vert \varphi\right\Vert _{L^{2}\left( \mathbb{R}\right) }\,,
\end{equation}
and a similar computation in the case of $\phi_{a}$ leads to: $\left\Vert
\phi_{a}\right\Vert _{L^{2}\left( \mathbb{R}\right) }\lesssim\left\Vert
\varphi\right\Vert _{L^{2}\left( \mathbb{R}\right) }$. From the definitions
(\ref{I}) and (\ref{Phi_alpha}), it follows
\begin{equation}
\left\Vert I(\varphi)\right\Vert _{L^{2}\left( \mathbb{R}\right)
\lesssim\left\vert \theta_{2}\right\vert \left( \left\Vert \phi
_{a}\right\Vert _{L^{2}\left( \mathbb{R}\right) }+\left\Vert \phi
_{b}\right\Vert _{L^{2}\left( \mathbb{R}\right) }\right) \lesssim\left\vert
\theta_{2}\right\vert \,\left\Vert \varphi\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\,. \label{I_est
\end{equation}
For the map $II(\varphi)$, we introduce $\psi_{\alpha}$ defined as
\begin{equation}
\psi_{\alpha}(x)=\int_{\mathbb{R}}dk\,\mathcal{O}\left( \frac{k}{1+\left\vert
k\right\vert }\right) H^{\left\vert k\right\vert }(x,\alpha)\left(
\mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,,\quad\alpha\in\left\{
a,b\right\} \,, \label{Psi_alpha
\end{equation}
where $\mathcal{O}\left( \cdot\right) $ depends only on $k$. For $\alpha=b$,
the explicit form of $H^{k}(x,b)$, given by (\ref{H_z}) for $\zeta=k$, and the
relations (\ref{Jost_sol_bound}), (\ref{Jost_ext2}), yield
\begin{align}
1_{\left\{ x\leq a\right\} }(x)\mathcal{O}\left( \frac{k}{1+\left\vert
k\right\vert }\right) H^{\left\vert k\right\vert }(x,b) & =1_{\left\{
x\leq a\right\} }(x)\eta_{1}\left( k\right) e^{-i\left\vert k\right\vert
x}\label{H_ext1}\\
& \nonumber\\
1_{\left( a,b\right) }(x)\mathcal{O}\left( \frac{k}{1+\left\vert
k\right\vert }\right) H^{\left\vert k\right\vert }(x,b) & =1_{\left(
a,b\right) }(x)\eta_{3}(k)\chi_{-}(x,\left\vert k\right\vert )\label{H_in}\\
& \nonumber\\
1_{\left\{ x\geq b\right\} }(x)\mathcal{O}\left( \frac{k}{1+\left\vert
k\right\vert }\right) H^{\left\vert k\right\vert }(x,b) & =1_{\left\{
x\geq b\right\} }(x)\eta_{2}\left( k\right) e^{i\left\vert k\right\vert x}
\label{H_ext2
\end{align}
where $\eta_{i=1,2,3}\in L_{k}^{\infty}\left( \mathbb{R}\right) $ are
described by $\mathcal{O}\left( \frac{k}{1+\left\vert k\right\vert }\right)
$. Setting: $\tilde{\eta}_{3}(k)=\eta_{3}(k)\frac{w\left( -k\right) }{2ik}$
and $\hat{\eta}_{3}(k)=\eta_{3}(k)\frac{w\left( k\right) }{-2ik}$ (which,
according to the characterization of $\eta_{3}$, still implies: $\tilde{\eta
}_{3},\hat{\eta}_{3}\in L_{k}^{\infty}\left( \mathbb{R}\right) $), and
proceeding as before, we obtain the decomposition
\begin{align}
\psi_{b} & =1_{\left\{ x\leq a\right\} }\left[ \mathcal{F}_{0
^{-1}\left( 1_{k<0}\eta_{1}\mathcal{F}_{\mathcal{V}}\varphi+\mathcal{P
\left( 1_{k>0}\eta_{1}\mathcal{F}_{\mathcal{V}}\varphi\right) \right)
\right] \nonumber\\
& +1_{\left\{ x\geq b\right\} }\left[ \mathcal{F}_{0}^{-1}\mathcal{P
\left( 1_{k<0}\eta_{2}\mathcal{F}_{\mathcal{V}}\varphi\right) +\mathcal{F
_{0}^{-1}\left( 1_{k>0}\eta_{2}\mathcal{F}_{\mathcal{V}}\varphi\right)
\right] \nonumber\\
& +1_{\left( a,b\right) }\left[ \mathcal{F}_{\mathcal{V}}^{-1}\left(
1_{k<0}\tilde{\eta}_{3}\mathcal{F}_{\mathcal{V}}\varphi\right) -\mathcal{F
_{\mathcal{V}}^{-1}\mathcal{P}\left( 1_{k>0}\hat{\eta}_{3}\mathcal{F
_{\mathcal{V}}\varphi\right) \right] \,. \label{psi_be
\end{align}
This entails: $\left\Vert \psi_{b}\right\Vert _{L^{2}\left( \mathbb{R
\right) }\lesssim\left\Vert \varphi\right\Vert _{L^{2}\left( \mathbb{R
\right) }$, while, with similar computations, the corresponding estimate in
the case of $\psi_{a}$ is obtained. From the definitions (\ref{II}) and
(\ref{Psi_alpha}), it follows that
\begin{equation}
\left\Vert II\right\Vert _{L^{2}\left( \mathbb{R}\right) }\lesssim\left\vert
\theta_{1}\right\vert \left( \left\Vert \psi_{a}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }+\left\Vert \psi_{b}\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\right)
\end{equation}
Then, the above estimates imply
\begin{equation}
\left\Vert II\right\Vert _{L^{2}\left( \mathbb{R}\right) }\lesssim\left\vert
\theta_{1}\right\vert \,\left\Vert \varphi\right\Vert _{L^{2}\left(
\mathbb{R}\right) }\,. \label{II_est
\end{equation}
The expansion (\ref{W_teta_exp}) is a consequence of (\ref{I_est}) and
(\ref{II_est}). Since the symbols in (\ref{I})-(\ref{II}) are holomorphic in
$\left( \theta_{1},\theta_{2}\right) $, the operators $\mathcal{W
_{\theta_{1},\theta_{2}}$ form an analytic family w.r.t. the parameters.
Next, we consider the relation (\ref{W_teta_inter}). Let $\varphi\in D\left(
Q_{0,0}(\mathcal{V})\right) $; using the functional calculus of
$Q_{0,0}(\mathcal{V})$, we have $\left( \mathcal{F}_{\mathcal{V}}\left(
Q_{0,0}(\mathcal{V})\varphi\right) \right) (k)=k^{2}\left( \mathcal{F}_{\mathcal{V}}\varphi\right) (k)$ and, according to (\ref{W_teta_act}), the
r.h.s. of (\ref{W_teta_inter}) writes as
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}Q_{0,0}(\mathcal{V})\varphi=\int
_{\mathbb{R}}dk\,\psi_{-}(\cdot,k,\theta_{1},\theta_{2})k^{2}\left(
\mathcal{F}_{\mathcal{V}}\varphi\right) (k)\,.\label{W_teta_inter1
\end{equation}
To discuss the action of $Q_{\theta_{1},\theta_{2}}(\mathcal{V})\mathcal{W
_{\theta_{1},\theta_{2}}$ over $D\left( Q_{0,0}(\mathcal{V})\right) $, we
use the expansion
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}\varphi=\varphi+I(\varphi)+II(\varphi
)\,.\label{W_teta_exp0
\end{equation}
From the above results, the map $I(\varphi)+II(\varphi)$ can be represented as
\begin{align}
I(\varphi)+II(\varphi) & =1_{\left\{ x\leq a\right\} }\left[
\mathcal{F}_{0}^{-1}\left( 1_{k<0}\mu_{1}\mathcal{F}_{\mathcal{V}
\varphi+\mathcal{P}\left( 1_{k>0}\mu_{1}\mathcal{F}_{\mathcal{V}
\varphi\right) \right) \right] \nonumber\\
& +1_{\left\{ x\geq b\right\} }\left[ \mathcal{F}_{0}^{-1}\mathcal{P
\left( 1_{k<0}\mu_{2}\mathcal{F}_{\mathcal{V}}\varphi\right) +\mathcal{F
_{0}^{-1}\left( 1_{k>0}\mu_{2}\mathcal{F}_{\mathcal{V}}\varphi\right)
\right] \nonumber\\
& +1_{\left( a,b\right) }\left[ \mathcal{F}_{\mathcal{V}}^{-1}\left(
1_{k<0}\mu_{3}\mathcal{F}_{\mathcal{V}}\varphi\right) -\mathcal{F
_{\mathcal{V}}^{-1}\left( 1_{k<0}\left( \mathcal{P}\left( \mu
_{4}\mathcal{F}_{\mathcal{V}}\varphi\right) \right) \right) \right]
\,,\label{W_teta_exp2
\end{align}
where $\mu_{i}\in L_{k}^{\infty}\left( \mathbb{R}\right) $, $i=1,\ldots,4$, are
suitable bounded functions of $k$. Let $u\in L_{k}^{\infty}\left(
\mathbb{R}\right) $ and $\mathcal{V},\mathcal{V}^{\prime}$ any couple of
potentials fulfilling the assumptions; the operators $\mathcal{F
_{\mathcal{V}^{\prime}}^{-1}u\mathcal{F}_{\mathcal{V}}$ and $\mathcal{F
_{\mathcal{V}^{\prime}}^{-1}\mathcal{P}u\mathcal{F}_{\mathcal{V}}$ map
$D\left( Q_{0,0}(\mathcal{V})\right) $ into itself. Then, as a consequence
of (\ref{W_teta_exp0}), (\ref{W_teta_exp2}), the operator $\mathcal{W
_{\theta_{1},\theta_{2}}$ maps $D\left( Q_{0,0}(\mathcal{V})\right) $ into
$D\left( Q(\mathcal{V})\right) $, while, according to (\ref{gen_eigenfun_bc
) and (\ref{W_teta_inter1}), $\mathcal{W}_{\theta_{1},\theta_{2}}\varphi$
fulfills the interface conditions (\ref{B_C_1}) for all $\varphi\in D\left(
Q_{0,0}(\mathcal{V})\right) $; we obtain: $\left. \mathcal{W}_{\theta
_{1},\theta_{2}}\in\mathcal{L}\left( D\left( Q_{0,0}(\mathcal{V})\right)
,D\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V})\right) \right) \right. $.
Moreover, from the relation:\newline $\left. \left( Q_{\theta_{1},\theta
_{2}}(\mathcal{V})-k^{2}\right) \psi_{-}(\cdot,k,\theta_{1},\theta
_{2})=0\right. $, it follows
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V})\mathcal{W}_{\theta_{1},\theta_{2
}\varphi=\int_{\mathbb{R}}dk\,\psi_{-}(\cdot,k,\theta_{1},\theta_{2
)k^{2}\left( \mathcal{F}_{\mathcal{V}}\varphi\right)
(k)\,,\label{W_teta_inter2
\end{equation}
which leads to (\ref{W_teta_inter}).
\end{proof}
\subsection{Proof of Theorem \ref{Theorem1}.\label{Section_Theorem1}}
When the parameters $\theta_{1},\theta_{2}$ are chosen in a suitably small
neighbourhood of the origin, the expansion (\ref{W_teta_exp}) yields
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}=1+\mathcal{O}\left( \theta
_{1}\right) +\mathcal{O}\left( \theta_{2}\right) \,, \label{W_teta_exp3
\end{equation}
and $Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ can be expressed as the conjugated
operator
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V})=\mathcal{W}_{\theta_{1},\theta_{2
}Q_{0,0}(\mathcal{V})\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}\,.
\label{conjugation
\end{equation}
Let us introduce
\begin{equation}
U_{\theta_{1},\theta_{2}}(t)=\mathcal{W}_{\theta_{1},\theta_{2}
U_{0,0}(t)\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}\,, \label{semigroup
\end{equation}
where $U_{0,0}(t)=e^{-itQ_{0,0}(\mathcal{V})}$ is the unitary propagator
associated with $-iQ_{0,0}(\mathcal{V})$. Due to the properties of
$\mathcal{W}_{\theta_{1},\theta_{2}}$, $U_{\theta_{1},\theta_{2}}(t)$ is
holomorphic w.r.t. $\left( \theta_{1},\theta_{2}\right) $, while, for fixed
values of the parameters, the family $\left\{ U_{\theta_{1},\theta_{2
}(t)\right\} _{t\in\mathbb{R}}$ forms a strongly continuous group on
$L^{2}(\mathbb{R})$ and, according to (\ref{conjugation}), (\ref{semigroup}),
we have
\begin{equation}
i\partial_{t}\left( U_{\theta_{1},\theta_{2}}(t)u\right) =Q_{\theta
_{1},\theta_{2}}(\mathcal{V})U_{\theta_{1},\theta_{2}}(t)u\,,
\end{equation}
for all $u\in L^{2}(\mathbb{R})$. This allows us to identify $U_{\theta
_{1},\theta_{2}}(t)$ with the quantum dynamical system generated by
$-iQ_{\theta_{1},\theta_{2}}(\mathcal{V})$. Making use of (\ref{W_teta_exp})
and (\ref{W_teta_exp3}), we get
\begin{equation}
U_{\theta_{1},\theta_{2}}(t)=U_{0,0}(t)+\mathcal{R}\left( t,\theta_{1
,\theta_{2}\right) \,,
\end{equation}
where the remainder term is strongly continuous and uniformly bounded w.r.t.
$t$ in the $L^{2}$-operator norm, allowing the representation: $\mathcal{R
\left( t,\theta_{1},\theta_{2}\right) =\mathcal{O}\left( \theta_{1}\right)
+\mathcal{O}\left( \theta_{2}\right) $.
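A minimal derivation of this uniform-in-time bound, relying only on
(\ref{W_teta_exp}), (\ref{W_teta_exp3}) and the unitarity of $U_{0,0}(t)$, reads as
follows: from (\ref{semigroup}),
\[
U_{\theta_{1},\theta_{2}}(t)-U_{0,0}(t)=\left( \mathcal{W}_{\theta_{1},\theta_{2}}-1\right) U_{0,0}(t)\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}+U_{0,0}(t)\left( \mathcal{W}_{\theta_{1},\theta_{2}}^{-1}-1\right) \,,
\]
and, since $\left\Vert U_{0,0}(t)\right\Vert _{\mathcal{L}\left( L^{2}(\mathbb{R}),L^{2}(\mathbb{R})\right) }=1$ for all $t\in\mathbb{R}$ while $\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}$ is bounded uniformly w.r.t. $\left( \theta_{1},\theta_{2}\right) $ in a small ball, each term at the r.h.s. is of size $\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left( \theta_{2}\right) $ in the $L^{2}$-operator norm, uniformly w.r.t. $t$.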
\subsection{\label{Section_WO}Time dependent wave operators and scattering
systems.}
So far, we have investigated the continuity of the dynamical system generated
by $-iQ_{\theta_{1},\theta_{2}}(\mathcal{V})$ w.r.t. the parameters
$\theta_{i=1,2}$. This has been analyzed by using small-$\theta_{i}$
expansions of the 'stationary wave operators' $\mathcal{W}_{\theta_{1
,\theta_{2}}$ defined in (\ref{W_teta_ker}). In what follows we consider the
scattering problem for the pair $\left\{ Q_{\theta_{1},\theta_{2
}(\mathcal{V}),Q_{0,0}(\mathcal{V})\right\} $ and show that $\mathcal{W
_{\theta_{1},\theta_{2}}$ coincides with a wave operator of this couple. The
next Lemma discusses this point under the assumptions of the Proposition
\ref{Proposition_W_cont}.
\begin{lemma}
\label{Lemma_WO}Let $\mathcal{V}$ fulfill the conditions (\ref{V}),
(\ref{V_pos}) and assume $\left( \theta_{1},\theta_{2}\right) \in
\mathcal{B}_{\delta}(\left( 0,0\right) )$ with $\delta>0$ small enough. Then
\begin{equation}
\text{\emph{s-}}\lim_{t\rightarrow-\infty}e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}e^{-itQ_{0,0}(\mathcal{V})}=\mathcal{W}_{\theta_{1},\theta_{2}}\,. \label{wave_op}
\end{equation}
\end{lemma}
\begin{proof}
Let us introduce the modified transform $\mathcal{F}_{\mathcal{V},\theta
_{1},\theta_{2}}$ defined by
\begin{equation}
\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}f=\int_{\mathbb{R
}dk\,\psi_{-}(x,k,\theta_{1},\theta_{2})\,f(k),\qquad f\in L^{2
(\mathbb{R})\,. \label{gen_Fourier_teta_inverse
\end{equation}
The action of $\mathcal{W}_{\theta_{1},\theta_{2}}$ can be expressed in terms
of $\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}$ and $\mathcal{F
_{\mathcal{V}}$ as $\left. \mathcal{W}_{\theta_{1},\theta_{2}}=\mathcal{F
_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\mathcal{F}_{\mathcal{V}}\right. $,
from which we get: $\left. \mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2
}=\mathcal{F}_{\mathcal{V}}\mathcal{W}_{\theta_{1},\theta_{2}}^{-1}\right. $.
Due to the expansion (\ref{W_teta_exp}), it follows that
\begin{equation}
\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}=\mathcal{F}_{\mathcal{V
}\left( 1+\mathcal{O}\left( \theta_{1}\right) +\mathcal{O}\left(
\theta_{2}\right) \right) \label{gen_Fourier_teta_exp
\end{equation}
in the $L^{2}$-operator norm sense. Making use of the intertwining property,
we have
\begin{equation}
\mathcal{W}_{\theta_{1},\theta_{2}}^{\ast}\left( Q_{\theta_{1},\theta_{2
}(\mathcal{V})\right) ^{\ast}=Q_{0,0}(\mathcal{V})\mathcal{W}_{\theta
_{1},\theta_{2}}^{\ast}\,.
\end{equation}
Since $\left. \mathcal{W}_{\theta_{1},\theta_{2}}^{\ast}=\mathcal{F
_{\mathcal{V}}^{-1}\left( \mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2
}^{-1}\right) ^{\ast}\right. $ and $\left. \left( Q_{\theta_{1},\theta
_{2}}(\mathcal{V})\right) ^{\ast}=Q_{-\theta_{2}^{\ast},-\theta_{1}^{\ast
}(\mathcal{V})\right. $ (see eq. (\ref{Q_teta_adj})), it follows
\begin{equation}
\mathcal{F}_{\mathcal{V}}^{-1}\left( \mathcal{F}_{\mathcal{V},\theta
_{1},\theta_{2}}^{-1}\right) ^{\ast}Q_{-\theta_{2}^{\ast},-\theta_{1}^{\ast
}(\mathcal{V})=Q_{0,0}(\mathcal{V})\mathcal{F}_{\mathcal{V}}^{-1}\left(
\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\right) ^{\ast}\,.
\end{equation}
Let us denote by $A$ the operator of multiplication by $k^{2}$. Using the
functional calculus for $Q_{0,0}(\mathcal{V})$, this operator is represented
by $A=\mathcal{F}_{\mathcal{V}}Q_{0,0}(\mathcal{V})\mathcal{F}_{\mathcal{V}}^{-1}$, and the previous relation rephrases as
\begin{equation}
\left( \mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\right) ^{\ast
}Q_{-\theta_{2}^{\ast},-\theta_{1}^{\ast}}(\mathcal{V})=A\left(
\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\right) ^{\ast}\,.
\end{equation}
Then, since $A$ is selfadjoint (being the multiplication by the real-valued function $k^{2}$) and, by (\ref{Q_teta_adj}), $\left( Q_{-\theta_{2}^{\ast},-\theta_{1}^{\ast}}(\mathcal{V})\right) ^{\ast}=Q_{\theta_{1},\theta_{2}}(\mathcal{V})$, taking the adjoint yields
\begin{equation}
Q_{\theta_{1},\theta_{2}}(\mathcal{V})\mathcal{F}_{\mathcal{V},\theta
_{1},\theta_{2}}^{-1}=\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}A\,.
\label{gen_Fourier_teta_1
\end{equation}
To identify $\mathcal{W}_{\theta_{1},\theta_{2}}$ with the wave operator,
according to the time-dependent definition
\begin{equation}
W_{-}\left( Q_{\theta_{1},\theta_{2}}(\mathcal{V}),Q_{0,0}(\mathcal{V
)\right) =\text{s-}\lim_{t\rightarrow-\infty}e^{itQ_{\theta_{1},\theta_{2
}(\mathcal{V})}e^{-itQ_{0,0}(\mathcal{V})}\,, \label{W_
\end{equation}
it is enough to prove that
\begin{equation}
\lim_{t\rightarrow-\infty}\left\Vert \left( e^{itQ_{\theta_{1},\theta_{2
}(\mathcal{V})}e^{-itQ_{0,0}(\mathcal{V})}-\mathcal{W}_{\theta_{1},\theta_{2
}\right) u\right\Vert _{L^{2}\left( \mathbb{R}\right) }=0\,.
\label{wave_op_condition
\end{equation}
Explicitly, the function in (\ref{wave_op_condition}) reads as
\begin{equation}
\left( e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}e^{-itQ_{0,0
(\mathcal{V})}-\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1
\mathcal{F}_{\mathcal{V}}\right) u\,. \label{wave_op_condition1
\end{equation}
Setting $g=\mathcal{F}_{\mathcal{V}}u$, we have: $e^{-itQ_{0,0}(\mathcal{V
)}u=\mathcal{F}_{\mathcal{V}}^{-1}e^{-itA}g$, and (\ref{wave_op_condition1})
rephrases as
\begin{equation}
e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}\left( \mathcal{F}_{\mathcal{V
}^{-1}e^{-itA}-e^{-itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}\mathcal{F
_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\right) g\,.
\label{wave_op_condition2
\end{equation}
Then, using (\ref{gen_Fourier_teta_1}) and the definitions (\ref{gen_Fourier
), (\ref{gen_Fourier_teta_inverse}), we get
\begin{gather}
\left( e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}e^{-itQ_{0,0
(\mathcal{V})}-\mathcal{W}_{\theta_{1},\theta_{2}}\right) u=e^{itQ_{\theta
_{1},\theta_{2}}(\mathcal{V})}\left( \mathcal{F}_{\mathcal{V}}^{-1
-\mathcal{F}_{\mathcal{V},\theta_{1},\theta_{2}}^{-1}\right) e^{-itA
g\,\medskip\qquad\qquad\qquad\qquad\nonumber\\
=e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}\int_{\mathbb{R}}dk\,\left(
\psi_{-}(\cdot,k)-\psi_{-}(\cdot,k,\theta_{1},\theta_{2})\right) e^{-itk^{2
}g(k),\qquad g=\mathcal{F}_{\mathcal{V}}u\,. \label{wave_op_condition3}
\end{gather}
Under our assumptions, the result of the Corollary
\ref{Corollary_eigenfun_exp} applies and the r.h.s. of
(\ref{wave_op_condition3}) can be further developed through the expansion
(\ref{gen_eigenfun_exp}). This yields
\begin{gather}
\left( e^{itQ_{\theta_{1},\theta_{2}}(\mathcal{V})}e^{-itQ_{0,0
(\mathcal{V})}-\mathcal{W}_{\theta_{1},\theta_{2}}\right) u=\medskip
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\nonumber\\
\int_{\mathbb{R}}dk\,\mathcal{O}\left( \theta_{2}k\right) G^{\sigma k
(\cdot,b)e^{-itk^{2}}g(k)+\int_{\mathbb{R}}dk\,\mathcal{O}\left( \frac
{\theta_{1}k}{1+\left\vert k\right\vert }\right) H^{\sigma k}(\cdot
,b)e^{-itk^{2}}g(k)\medskip\nonumber\\
+\int_{\mathbb{R}}dk\,\mathcal{O}\left( \theta_{2}k\right) G^{\sigma
k}(\cdot,a)e^{-itk^{2}}g(k)+\int_{\mathbb{R}}dk\,\mathcal{O}\left(
\frac{\theta_{1}k}{1+\left\vert k\right\vert }\right) H^{\sigma k
(\cdot,a)e^{-itk^{2}}g(k)\,. \label{wave_op_exp
\end{gather}
where $\sigma=\frac{k}{\left\vert k\right\vert }$, while the functions
$\mathcal{O}\left( \theta_{2}k\right) $ and $\mathcal{O}\left( \frac
{\theta_{1}k}{1+\left\vert k\right\vert }\right) $ are independent of $x$. To
obtain (\ref{wave_op}), it is enough to show that, for all $g\in
\mathcal{C}_{0}^{\infty}\left( \mathbb{R}\right) $, limits of the type
(\ref{wave_op_condition}) hold for each term at the r.h.s. of
(\ref{wave_op_exp}); since the operators involved are bounded uniformly w.r.t.
$t$, the result then extends to all of $L^{2}(\mathbb{R})$ by density. In what
follows, we consider the first contribution of
(\ref{wave_op_exp}), while the other terms can be treated with the same
approach. Since we work with fixed $\left( \theta_{1},\theta_{2}\right) $,
the dependence on these parameters is omitted in what follows. Our aim is to prove
\[
\lim_{\left\vert t\right\vert \rightarrow\infty}\left\Vert \int_{\mathbb{R
}dk\,\mathcal{O}\left( k\right) G^{\sigma k}(\cdot,b)e^{-itk^{2
}g(k)\right\Vert _{L^{2}(\mathbb{R})}=0\,,
\]
when $g\in\mathcal{C}_{0}^{\infty}\left( \mathbb{R}\right) $. We use the
definitions (\ref{G_z}), (\ref{GH_zeta}), with $\zeta=k$, to write
\begin{equation}
\left\Vert \int_{\mathbb{R}}dk\,\mathcal{O}\left( k\right) G^{\sigma
k}(\cdot,b)e^{-itk^{2}}g(k)\right\Vert _{L^{2}(\mathbb{R})}=I+II
\label{wave_op_exp1
\end{equation}
where
\begin{align}
I & =\int_{-\infty}^{b}dx\left\vert \int_{\mathbb{R}}dk\,\frac
{\mathcal{O}\left( k\right) }{w(\left\vert k\right\vert )}\chi
_{-}(x,\left\vert k\right\vert )\chi_{+}(b,\left\vert k\right\vert
)e^{-itk^{2}}g(k)\right\vert ^{2}\,,\label{wave_op_exp1_1}\\
& \nonumber\\
II & =\int_{b}^{+\infty}dx\left\vert \int_{\mathbb{R}}dk\,\frac
{\mathcal{O}\left( k\right) }{w(\left\vert k\right\vert )}\chi
_{+}(x,\left\vert k\right\vert )\chi_{-}(b,\left\vert k\right\vert
)e^{-itk^{2}}g(k)\right\vert ^{2}\,. \label{wave_op_exp1_2
\end{align}
Recall that $\chi_{\pm}$ are defined through the equations
\begin{align}
1_{x<b}(x)\chi_{+}(x,k) & =e^{ikx}+
{\displaystyle\int\limits_{x}^{b}}
\frac{\sin k(t-x)}{k}\mathcal{V}(t)\chi_{+}(t,k)\,dt\,,\qquad1_{x\geq
b}(x)\chi_{+}(x,k)=e^{ikx}\,,\label{Int_Jost_eq_0}\\
& \nonumber\\
1_{x>a}(x)\chi_{-}(x,k) & =e^{-ikx}-
{\displaystyle\int\limits_{a}^{x}}
\frac{\sin k(t-x)}{k}\mathcal{V}(t)\chi_{-}(t,k)\,dt\,,\qquad1_{x\leq
a}(x)\chi_{-}(x,k)=e^{-ikx}\,, \label{Int_Jost_eq_0.1
\end{align}
and introduce the functions
\begin{equation}
\gamma_{+}(x)=
{\displaystyle\int\limits_{x}^{b}}
\left\vert \mathcal{V}(t)\right\vert \,dt\,,\qquad\gamma_{-}(x)=
{\displaystyle\int\limits_{a}^{x}}
\left\vert \mathcal{V}(t)\right\vert \,dt\,. \label{gamma_+-}
\end{equation}
For compactly supported short-range potentials (see (\ref{V})), it follows that
$\gamma_{+}\in L^{2}\left( \left( a,+\infty\right) \right) $ and
$\gamma_{-}\in L^{2}\left( \left( -\infty,b\right) \right) $. As stated in
Proposition \ref{Proposition_Jost}, $\chi_{\pm}$ are uniformly bounded
w.r.t. $x,k\in\mathbb{R}$, and the relations (\ref{Int_Jost_eq_0}),
(\ref{Int_Jost_eq_0.1}) rephrase as
\begin{align}
1_{x<b}(x)\chi_{+}(x,k) & =e^{ikx}+\ell_{+}(x,k)\,,\qquad1_{x\geq b
(x)\chi_{+}(x,k)=e^{ikx}\,,\label{Jost_exp_plus}\\
& \nonumber\\
1_{x>a}(x)\chi_{-}(x,k) & =e^{-ikx}+\ell_{-}(x,k)\,,\qquad1_{x\leq a
(x)\chi_{-}(x,k)=e^{-ikx}\,, \label{Jost_exp_min
\end{align}
with $\ell_{\pm}$ s.t.
\begin{equation}
\left\vert \ell_{\pm}(x,k)\right\vert <\frac{1}{k}\left\Vert \chi_{\pm
}\right\Vert _{L_{x,k}^{\infty}(\mathbb{R}^{2})}\gamma_{\pm}(x)\,.
\label{gamma_+-_est
\end{equation}
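This estimate follows directly from (\ref{Int_Jost_eq_0})-(\ref{Int_Jost_eq_0.1}): for
instance, in the case of $\ell_{+}$, the bound $\left\vert \sin k(t-x)\right\vert \leq1$
gives
\[
\left\vert \ell_{+}(x,k)\right\vert \leq\frac{1}{\left\vert k\right\vert }
{\displaystyle\int\limits_{x}^{b}}
\left\vert \mathcal{V}(t)\right\vert \left\vert \chi_{+}(t,k)\right\vert \,dt
\leq\frac{1}{\left\vert k\right\vert }\left\Vert \chi_{+}\right\Vert
_{L_{x,k}^{\infty}(\mathbb{R}^{2})}\,\gamma_{+}(x)\,,
\]
and the case of $\ell_{-}$ is treated in the same way.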
Plugging these relations into (\ref{wave_op_exp1_1})-(\ref{wave_op_exp1_2})
and using $1_{x>b}(x)\ell_{+}(x,k)=0$, we get
\begin{align}
I & \leq\int_{-\infty}^{b}dx\left\vert \int_{\mathbb{R}}dk\,e^{-i\left\vert
k\right\vert x}e^{-itk^{2}}\frac{\mathcal{O}\left( k\right) e^{i\left\vert
k\right\vert b}}{w(\left\vert k\right\vert )}g(k)\right\vert ^{2
+\int_{-\infty}^{b}dx\left\vert \int_{\mathbb{R}}dk\,\ell_{-}(x,\left\vert
k\right\vert )e^{-itk^{2}}\frac{\mathcal{O}\left( k\right) e^{i\left\vert
k\right\vert b}}{w(\left\vert k\right\vert )}g(k)\right\vert ^{2
\,,\label{wave_op_exp1_1_exp}\\
& \nonumber\\
II & =\int_{b}^{+\infty}dx\left\vert \int_{\mathbb{R}}dk\,e^{i\left\vert
k\right\vert x}e^{-itk^{2}}\frac{\mathcal{O}\left( k\right) }{w(\left\vert
k\right\vert )}\chi_{-}(b,\left\vert k\right\vert )g(k)\right\vert ^{2}\,.
\label{wave_op_exp1_2_1
\end{align}
With the change of variable: $s=-x+b$, the first integral at the r.h.s. of
(\ref{wave_op_exp1_1_exp}) writes as
\begin{equation}
\int_{-\infty}^{b}dx\left\vert \int_{\mathbb{R}}dk\,e^{-i\left\vert
k\right\vert x}e^{-itk^{2}}\frac{\mathcal{O}\left( k\right) e^{i\left\vert
k\right\vert b}}{w(\left\vert k\right\vert )}g(k)\right\vert ^{2}=\int
_{0}^{+\infty}ds\left\vert \int_{\mathbb{R}}dk\,e^{i\left\vert k\right\vert
s-itk^{2}}\left( \frac{\mathcal{O}\left( k\right) }{w(\left\vert
k\right\vert )}g(k)\right) \right\vert ^{2}\,,
\end{equation}
while, setting $s=x-b$, the identity (\ref{wave_op_exp1_2_1}) rephrases as
\begin{equation}
II=\int_{0}^{+\infty}ds\left\vert \int_{\mathbb{R}}dk\,e^{i\left\vert
k\right\vert s}e^{-itk^{2}}\left( \frac{\mathcal{O}\left( k\right)
e^{i\left\vert k\right\vert b}}{w(\left\vert k\right\vert )}\chi
_{-}(b,\left\vert k\right\vert )g(k)\right) \right\vert ^{2}\,.
\end{equation}
Due to the relation $w(k)=\mathcal{O}\left( 1+\left\vert k\right\vert
\right) $ (see the proof of Lemma \ref{Lemma_Green_ker}), the functions
appearing above,
\[
\frac{\mathcal{O}\left( k\right) }{w(k)}g\left( k\right) \text{\quad
and\quad}\chi_{-}(b,\left\vert k\right\vert )g(k)\,,
\]
both belong to $L_{k}^{2}(\mathbb{R})$. Moreover, as a consequence of
(\ref{gamma_+-_est}), it follows that
\begin{equation}
\left\vert \ell_{-}(x,\left\vert k\right\vert )\frac{\mathcal{O}\left(
k\right) e^{i\left\vert k\right\vert b}}{w(\left\vert k\right\vert
)}\right\vert \leq\left\Vert \chi_{-}\right\Vert _{L_{x,k}^{\infty
(\mathbb{R}^{2})}\left\vert \frac{\mathcal{O}\left( k\right) }{kw(\left\vert
k\right\vert )}\right\vert \gamma_{-}(x)\lesssim\gamma_{-}(x)\in L^{2}\left(
\left( -\infty,b\right) \right) \,.
\end{equation}
We get
\begin{align}
I & \leq\int_{0}^{+\infty}ds\left\vert \int_{\mathbb{R}}dk\,e^{i\left\vert
k\right\vert s-itk^{2}}q_{1}(k)\right\vert ^{2}+\int_{-\infty}^{b}dx\left\vert
\int_{\mathbb{R}}dk\,e^{-itk^{2}}q_{2}(k,x)g(k)\right\vert ^{2}\,,\\
& \nonumber\\
II & =\int_{0}^{+\infty}ds\left\vert \int_{\mathbb{R}}dk\,e^{i\left\vert
k\right\vert s-itk^{2}}q_{3}(k)\right\vert ^{2}\,.
\end{align}
where, according to the previous remarks, $q_{1},q_{3}\in L_{k}^{2}
(\mathbb{R})$, while $q_{2}$ satisfies the estimate
\begin{equation}
\left\vert q_{2}\left( k,x\right) \right\vert \leq f(x)\in L^{2}\left(
\left( -\infty,b\right) \right) \,.
\end{equation}
Then, it follows from an application of Lemma 2.6.4 of \cite{Yafa1} that
\begin{equation}
\lim_{t\rightarrow-\infty}\int_{0}^{+\infty}ds\left\vert \int_{\mathbb{R}
}dk\,e^{i\left\vert k\right\vert s-itk^{2}}q_{j}(k)\right\vert ^{2}=0\,,\quad
j=1,3\,. \label{wave_op_exp1_1_lim1}
\end{equation}
Moreover, for $g\in\mathcal{C}_{0}^{\infty}\left( \mathbb{R}\right) $, the
Riemann-Lebesgue Lemma implies
\begin{equation}
\lim_{\left\vert t\right\vert \rightarrow\infty}\int_{\mathbb{R}
}dk\,e^{-itk^{2}}q_{2}(k,x)g(k)=0\,;
\end{equation}
thus, using the estimate
\[
\left\vert \int_{\mathbb{R}}dk\,e^{-itk^{2}}q_{2}(k,x)\,g(k)\right\vert
\lesssim\left\vert f(x)\right\vert \left( \int_{\mathbb{R}}dk\,\left\vert
g(k)\right\vert \right) \lesssim\left\vert f(x)\right\vert \,,
\]
and the dominated convergence theorem, we get
\begin{equation}
\lim_{\left\vert t\right\vert \rightarrow\infty}\left\Vert \int_{\mathbb{R}
}dk\,e^{-itk^{2}}q_{2}(k,x)\,g(k)\right\Vert _{L^{2}\left( -\infty,b\right)
}=0\,. \label{wave_op_exp1_1_lim2}
\end{equation}
\end{proof}
The above result exploits the condition: $\left( \theta_{1},\theta
_{2}\right) \in\mathcal{B}_{\delta}(\left( 0,0\right) )$ which has been
previously introduced to identify the spectra of the operators $Q_{\theta
_{1},\theta_{2}}(\mathcal{V})$ and $Q_{0,0}(\mathcal{V})$, and to compare the
corresponding quantum dynamics. Nevertheless a question is left open: is a
small parameter condition necessary in order that the pair $\left\{
Q_{\theta_{1},\theta_{2}}(\mathcal{V}),Q_{0,0}(\mathcal{V})\right\} $ forms a
complete scattering system? Actually, this restriction does not seem to be
necessary. It has been shown, indeed, that a key point in the development of
the scattering theory for the possibly non-selfadjoint pair $\left\{
H_{0},H_{1}\right\} $ is the existence of the strong limit on the real axis
of the characteristic functions associated with $H_{i=0,1}$ (e.g. in
\cite{Ryz01} and \cite{Ryz1}). In particular, the Theorem 4.1 in \cite{Ryz01}
makes use of this assumption to study the existence of the related wave
operators. According to \cite{Ryz0}, the resolvent formula (\ref{krein_1})
implies that, for any $\left( \theta_{1},\theta_{2}\right) $, the
characteristic function of the operator $Q_{\theta_{1},\theta_{2}}
(\mathcal{V})$ has boundary values a.e. on the real axis (for this point we
refer to the last Proposition in \cite{Ryz0} and to the references therein).
This suggests the possibility of defining the scattering system $\left\{
Q_{\theta_{1},\theta_{2}}(\mathcal{V}),Q_{0,0}(\mathcal{V})\right\} $ without
restrictions on $\left( \theta_{1},\theta_{2}\right) $.
A slightly different approach to the scattering problem consists in
characterizing the scattering matrix for $\left\{ Q_{\theta_{1},\theta_{2}
}(\mathcal{V}),Q_{0,0}(\mathcal{V})\right\} $. In the case of selfadjoint
extensions of a symmetric operator, a relation between the scattering matrix
and the Weyl function, associated with a boundary triple, has been
established in \cite{BeMaNe}, while extensions of results from \cite{BeMaNe}
to certain non-selfadjoint situations (dissipative/accumulative) have been
presented in \cite{BeMaNe1}, \cite{BeMaNe2}. A generalization to the case of
$Q_{\theta_{1},\theta_{2}}(\mathcal{V})$ would provide a useful insight into
the study of the scattering properties of the system $\left\{ Q_{\theta
_{1},\theta_{2}}(\mathcal{V}),Q_{0,0}(\mathcal{V})\right\} $.
\section{\label{Sec_small_h}Further perspectives: the regime of quantum wells
in a semiclassical island.}
Let us introduce the modified operators $Q_{\theta_{1},\theta_{2}}^{h}
(\mathcal{V})$, depending on the small parameter $h\in\left( 0,h_{0}\right)
$, $h_{0}>0$, and defined according to
\begin{equation}
Q_{\theta_{1},\theta_{2}}^{h}(\mathcal{V}):\left\{
\begin{array}
[c]{l}
D\left( Q_{\theta_{1},\theta_{2}}^{h}(\mathcal{V})\right) =\left\{ u\in
H^{2}\left( \mathbb{R}\backslash\left\{ a,b\right\} \right) \,\left\vert
\ \text{(\emph{\ref{B_C_1}}) holds}\right. \right\} \,,\\
\\
\left( Q_{\theta_{1},\theta_{2}}^{h}(\mathcal{V}^{h})\,u\right)
(x)=-h^{2}u^{\prime\prime}(x)+\mathcal{V}^{h}(x)\,u(x)\,,\qquad x\in
\mathbb{R}\backslash\left\{ a,b\right\} \,.
\end{array}
\right. \label{Q_teta_h}
\end{equation}
with $\mathcal{V}^{h}$ $h$-dependent and locally supported on $\left[
a,b\right] $. In the applications perspectives, rather relevant is the case
of a positive and bounded $\mathcal{V}^{h}$ formed by the superposition of a
potential barrier and a collection of potential wells supported on a region of
size $h$ (these are usually referred to as \emph{quantum wells}). Hamiltonians
of this type have been introduced in \cite{FMN2}, where the case: $\theta
_{2}=3\theta_{1}$ is considered, with the purpose of realizing models of
electronic transverse transport through resonant heterostructures. When the
initial state describes incoming charge carriers in the conduction band, the
quantum dynamics of such systems is expected to be driven by the, possibly
non-linear, adiabatic evolution of a finite number of resonant states related
to the shape resonances. This picture, arising in the physical literature, is
confirmed by the analysis presented in \cite{JoPrSj}, \cite{PrSj} concerning
the case of a 1D Schr\"{o}dinger-Poisson selfadjoint model with a double
barrier, and in \cite{FMN3}, where an application involving Hamiltonians of
the type $Q_{\theta_{1},\theta_{2}}^{h}(\mathcal{V})$ is considered.
As previously remarked, the artificial interface conditions allow one to develop
an alternative approach to the adiabatic evolution for shape resonances. In
the particular case where $\theta_{2}=3\theta_{1}$, using suitable exterior
dilations, it is possible to write the evolution problem for the resonant
states of the modified operators $Q_{\theta_{1},\theta_{2}}^{h}(\mathcal{V})$
as a dynamical system of contractions. Then the adiabatic approximations can
be obtained by using a `standard' approach and reasonably weak assumptions on
the regularity-in-time of the potential, as it has been shown in \cite{FMN2},
while a similar strategy would not work in the selfadjoint case, due to the
lack of accretivity of the corresponding complex deformed operator. This
justifies the interest in the operators $Q_{\theta_{1},\theta_{2}}
^{h}(\mathcal{V})$ as models for the electronic quantum transport in the
regime of quantum wells in a semiclassical island, and motivates an extension
of the previous analysis taking into account the role of the scaling parameter
$h$ in the definition of the modified dynamics. Our aim is to obtain, in the
$h$-dependent case, a comparison between the dynamical system modified by
non-mixed interface conditions and the unitary dynamics generated by the
selfadjoint Hamiltonians $Q_{0,0}^{h}(\mathcal{V}^{h})$. Proceeding in this
direction, a first step consists in the study of the Jost solutions, the
generalized eigenfunctions and the Green's kernel associated to operators of
the type $Q_{0,0}^{h}(\mathcal{V})$, which, according to the definition
(\ref{Q_teta_h}), is given by
\begin{equation}
Q_{0,0}^{h}(\mathcal{V}):\left\{
\begin{array}
[c]{l}
D\left( Q_{0,0}^{h}(\mathcal{V})\right) =H^{2}\left( \mathbb{R}\right)
\,\,,\\
\\
\left( Q_{0,0}^{h}(\mathcal{V})\,u\right) (x)=-h^{2}u^{\prime\prime
}(x)+\mathcal{V}(x)\,u(x)\,,\qquad x\in\mathbb{R}\,,
\end{array}
\right. \label{Q_0_h}
\end{equation}
with $\mathcal{V}$, possibly depending on $h$, locally supported on $\left[
a,b\right] $. In what follows, $\chi_{\pm}^{h}\left( \cdot,\zeta
,\mathcal{V}\right) $ denote the solutions of the equatio
\begin{equation}
\left( -h^{2}\partial_{x}^{2}+\mathcal{V}\right) u=\zeta^{2}u\,,
\label{Jost_eq_h1}
\end{equation}
fulfilling the conditions
\begin{equation}
\left. \chi_{+}^{h}\left( \cdot,\zeta,\mathcal{V}\right) \right\vert
_{x>b}=e^{i\frac{\zeta}{h}x}\,,\qquad\left. \chi_{-}^{h}\left( \cdot
,\zeta,\mathcal{V}\right) \right\vert _{x<a}=e^{-i\frac{\zeta}{h}x}\,.
\label{Jost_sol_ext_h}
\end{equation}
It is worthwhile to notice that, in the attempt of extending our approach to
this new setting, all the estimates involved in the proofs depend on $h$ and
exhibit exponential bounds w.r.t. the small parameter. To illustrate this point,
let $h>0$ and consider the functions $\chi_{\pm}^{h}$. The rescaled functions
$b_{\pm}^{h}\left( x,\zeta\right) =e^{\mp i\frac{\zeta}{h}x}\chi_{\pm}
^{h}\left( x,\zeta\right) $ are defined through a Picard iteration procedure
(see, e.g., eq. (\ref{Jost_Picard})). Taking into account the
small-$h$ behaviour of the corresponding rescaled kernels, it results
\begin{equation}
\sup_{x\in\mathbb{R}\,,\ \zeta\in\overline{\mathbb{C}^{+}}}\left\vert b_{\pm
}(x,\zeta)\right\vert \leq e^{\frac{C_{0}}{h^{2}}}\,,\qquad\sup_{x\in
\mathbb{R}\,,\ \zeta\in\overline{\mathbb{C}^{+}}}\left\vert b_{\pm}^{\prime
}(x,\zeta)\right\vert \leq e^{\frac{C_{1}}{h^{2}}}\,,
\end{equation}
where the coefficients $C_{i}$, $i=0,1$, possibly depend on the data $a$, $b$,
and $\left\Vert \mathcal{V}\right\Vert _{L^{1}(a,b)}$. Then, a suitable
rewriting of Krein's formula (\ref{krein_1}) and of the results of the
Lemmata \ref{Lemma_Green_ker} and \ref{Lemma_Krein_coeff} in the $h$-dependent
case would allow one to express the $h$-dependent wave operators as:
$\mathcal{W}_{\theta_{1},\theta_{2}}^{h}=1+\mathcal{O}\left( h\right)
+\mathcal{O}\left( h\right) \,$, provided that
\begin{equation}
\left( \theta_{1},\theta_{2}\right) \in\mathcal{B}_{\rho(h)}\left( \left(
0,0\right) \right) \,,\quad\text{with }\rho(h)=he^{-\frac{\tilde{C}}{h^{2}}}
\label{teta_h}
\end{equation}
and $\tilde{C}>0$ large enough.
Operators defined with the prescription (\ref{teta_h}) appear to be of little
interest from the applications perspective. In this respect, we recall that the
adiabatic theorem obtained in \cite{FMN2} applies with: $\theta_{i}
=c_{i}h^{N_{0}}$, $i=1,2$, for some $N_{0}\in\mathbb{N}$ (see Theorem $7.1$ in
\cite{FMN2}). In this connection, it is important to relax the constraint
expressed by (\ref{teta_h}) in order to obtain small-$h$ expansions of the
wave operators, holding at least in a suitable subspace of $L^{2}$, when the
parameters are assumed to be only polynomially small w.r.t. $h$.
According to the formulas of Section \ref{Section_Resolvent_2}, the key to obtaining
small-$\theta_{i=1,2}$ expansions of the generalized eigenfunctions, and then
of the wave operators, consists in controlling the boundary values of Green's
functions as $z$ approaches the continuous spectrum. Introducing quantum wells
in the model produces resonances with exponentially small imaginary parts as
$h\rightarrow0$. This means that the Green's functions, which are expressed
in terms of the Jost solutions and the Jost function, will be exponentially
large w.r.t. $h$ somewhere in the potential structure when $z$ is close to the
corresponding energies. Nevertheless, their values on the boundary of the
potential support are expected to be, at most, of order $\mathcal{O}\left(
\frac{1}{h^{N_{0}}}\right) $ for a suitable $N_{0}\in\mathbb{N}$; an explicit
example of this mechanism can be found in \cite{FMN3}. Studying the Green
function around a resonant energy requires the introduction of a Dirichlet
problem in order to resolve the spectral singularity and to match the complete
problem with some combination of this spectral problem and the filled-wells
spectral problem. Following \cite{HeSj1}, \cite{Hel}, the Grushin technique
can be used to handle this matching and to obtain resolvent approximations.
Developing this approach is a further perspective of our work.
\begin{description}
\item[Acknowledgement] This work arises from a question addressed by A. Teta
and has largely profited from useful discussions with F. Nier and C.A. Pillet.
The author is also indebted to S. Naboko and H. Neidhardt for their important remarks.
\end{description}
\bigskip
\section{Introduction\label{sec:intro}}
The ``unidentified infrared emission'' (UIE) bands,
a distinct set of spectral features at wavelengths
of 3.3, 6.2, 7.7, 8.6, 11.3 and 12.7$\mum$,
dominate the mid-infrared spectra of many bright
astronomical objects. They are ubiquitously seen in
the interstellar medium (ISM) of our own galaxy and
star-forming galaxies, both near and far,
and account for over 10\% of their total infrared (IR)
luminosity (see Joblin \& Tielens 2011).
Although the exact nature of the carriers
remains unknown,
the UIE bands
are commonly attributed
to polycyclic aromatic hydrocarbon (PAH) molecules
(L\'eger \& Puget 1984, Allamandola et al.\ 1985).
The identification of the UIE bands is important as
they are a useful probe of the cosmic star-formation history,
and their carriers are an essential player in galactic evolution.
Very recently, Kwok \& Zhang (2011; hereafter KZ11)
argue
that the UIE bands
arise from coal- or kerogen-like organic nanoparticles,
consisting
of
chain-like aliphatic hydrocarbon material linking
small units of aromatic rings.
This hypothesis
has potentially
important implications for our understanding of
stellar evolution, interstellar chemistry, and the formation of
our solar system. If confirmed,
it
would establish an important
link among stars at their late evolutionary stages, the ISM,
and the solar system, as the kerogen-like organic matter seen
in meteorites (Derenne \& Robert 2010; Cody et al.\ 2011)
has similar chemical structures as those suggested
by KZ11 for the UIE bands seen in the ISM
and in circumstellar environments around evolved stars
(i.e., planetary nebulae and proto-planetary nebulae).
However, the KZ11 hypothesis of substantially
aliphatic organic matter as the UIE carriers
does not appear to be consistent with the observed strengths of the UIE bands.
As will be elaborated below in \S\ref{sec:ch3.4um}
and \S\ref{sec:ch6.85um}, astronomical observations show that if
aliphatic hydrocarbon units are present in the UIE carriers,
they must be a minor constituent.
Further, their arguments against the PAH model
do not seem to pose a problem (see \S\ref{sec:discussion}).
\section{Constraints on the Aliphatic Fraction
from the 3.4$\mum$ Feature\label{sec:ch3.4um}}
KZ11
argue that the material responsible for
the UIE features has a substantial aliphatic component,
based on the mid-IR spectra of NGC\,7027 (a planetary nebula),
IRAS\,22272+5435 (a protoplanetary nebula), and the Orion bar
(a photodissociation region in the Orion nebula).
They decompose the 3--20$\mum$ spectra of these objects
into three components: the UIE bands,
broad plateaus (several $\mum$ in width)
peaking at 8 and 12$\mum$, and a thermal continuum.
They attribute the broad plateau features
(which account for $\simali$1/3 of the 3--20$\mum$ power
of these objects) to aliphatic branches of the UIE carriers,
similar to the coal model for the UIE bands (Guillois et al.\ 1996).
Recognizing the challenge of the coal model in being
heated to emit the UIE bands
(Puget et al.\ 1995),
KZ11
hypothesize
that the coal-like UIE carriers are nanometer in size
or they are heated by the chemical energy released from
the $\rmH$\,$+$\,$\rmH$\,$\rightarrow$\,$\rmH_2$ reaction
(Duley \& Williams 2011).
Aliphatic hydrocarbon has a
band at 3.4$\mum$ due to the C--H stretching
mode
(Pendleton \& Allamandola 2002).
In some HII regions, reflection nebulae and planetary nebulae
(as well as extragalactic regions,
e.g., see Yamagishi et al.\ 2012, Kondo et al.\ 2012),
the UIE
near 3$\mum$ exhibits a rich spectrum:
the dominant 3.3$\mum$ feature is usually accompanied
by a weaker feature at 3.4$\mum$
along with an underlying plateau
extending out to $\simali$3.6$\mum$.
In some objects, a series of weaker features
at 3.46, 3.51, and 3.56$\mum$ are also seen superimposed
on the plateau, showing a tendency to decrease in strength
with increasing wavelength (see Figure~1
and Geballe et al.\ 1985, Jourdain de Muizon et al.\ 1986,
Joblin et al.\ 1996).
While assignment of the 3.3$\mum$ emission feature to
the aromatic C--H stretch is widely accepted,
the precise identification of the 3.4$\mum$ feature
(and the accompanying weak features at 3.46, 3.51, and 3.56$\mum$
and the broad plateau) remains somewhat controversial.
By assigning the 3.4$\mum$ emission exclusively
to aliphatic C--H, one can place an upper limit
on the aliphatic fraction of the emitters of the UIE features.
Let $I_{3.4}$ and $I_{3.3}$ respectively be the observed intensities
of the 3.4$\mum$ and 3.3$\mum$ emission features.
In interstellar and circumstellar environments,
$I_{3.4}/I_{3.3}$ typically ranges from $\simali$0.06
to $\simali$0.20, depending on the local conditions
(Schutte et al.\ 1993).
Let $A_{3.4}$ and $A_{3.3}$ respectively be the band strengths
of the aliphatic and aromatic C--H bonds.
We take $A_{3.4} = 2.7\times10^{-18}\cm$ per aliphatic C--H bond,
averaged over ethane, hexane, ethyl-benzene,
and methyl-cyclo-hexane (d'Hendecourt \& Allamandola 1986,
Mu\~noz-Caro et al.\ 2001).\footnote{
Typical type II Kerogens have
$A_{3.4} \approx 2.8\times10^{-18}\cm$ per C atom
while the 3.3$\mum$ aromatic feature is barely visible
(see Figure~2 in Papoular 2001). This clearly shows that
kerogen -- at least this type -- is not a good explanation
for the 3.3$\mum$ and 3.4$\mum$
emission
features.
In coals, the 3.3$\mum$ aromatic feature is usually
weaker than the 3.4$\mum$ aliphatic feature
except for those with high ranks (i.e., more evolved,
more ordered, with lower H/C and O/C ratios).
As coal evolves, the progressive release of heteroatoms
(decreasing
H/C and O/C) leads to
formation
of planar clusters of benzene-type rings followed by
stacking of these aromatic sheets to form disordered stacks of
graphitic planes (see Papoular 2001).
As a result of the progressive aromatization,
$A_{3.4}/A_{3.3}$ decreases as coal evolves.
The aliphatic C--H deformation band at 6.85$\mum$
band disappears in highly evolved coal
(e.g., anthracite, see Papoular 2001).
}
We take $A_{3.3} = 4.0\times10^{-18}\cm$ per aromatic C--H bond
for small neutral PAHs (Draine \& Li 2007).
\begin{figure}
\centerline
{
\includegraphics[width=8cm,angle=0]{f1.ps}
}
\caption{\footnotesize
\label{fig:1}
Comparison of the 3.15--3.65$\mum$ {\it emission} spectrum
of the Orion Bar (position 4; black; Sloan et al.\ 1997)
with the optical depth ({\it absorption}) spectra of GCS\,3
(a Galactic center source; blue; Chiar et al.\ 2000).
The weakness of the 3.4$\mum$
feature in the {\it emission}
spectrum indicates that
the {\it aliphatic} component must be minor,
even assuming that the 3.4$\mum$ emission is exclusively
due to aliphatic C--H (i.e., neglecting anharmonicity
and superhydrogenation). In contrast, the
{\it absorption} spectrum
of the diffuse ISM
toward GCS\,3 is
dominated by aliphatic hydrocarbon.
}
\end{figure}
Let $N_{\rm H,aliph}$ and $N_{\rm H,arom}$ respectively
be the numbers of aliphatic and aromatic C--H bonds
in the emitters of the 3.3$\mum$ UIE feature.
We obtain
$N_{\rm H,aliph}/N_{\rm H,arom}\approx\left(I_{3.4}/I_{3.3}\right)
\times\left(A_{3.3}/A_{3.4}\right)\approx0.30$,
taking $I_{3.4}/I_{3.3}$\,=\,0.2
[KZ11
estimate $I_{3.4}/I_{3.3}\approx0.22$
for NGC\,7027, and $I_{3.4}/I_{3.3}\approx0.19$ for the Orion bar].
We assume that one aliphatic C atom corresponds to
2.5 aliphatic C--H bonds (intermediate between methylene -CH$_2$
and methyl -CH$_3$) and one aromatic C atom corresponds to
0.75 aromatic C--H bond (intermediate between benzene C$_6$H$_6$
and coronene C$_{24}$H$_{12}$).
Therefore, in the UIE carriers the ratio of the number of C atoms
in aliphatic units to that in aromatic rings is
$N_{\rm C,aliph}/N_{\rm C,arom}\approx 0.30\times\left(0.75/2.5\right)
= 0.09$, showing that the aliphatic component is only a minor part of
the UIE emitters.
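The arithmetic above is easily checked; the short script below (an
illustrative numerical check only, not part of the original analysis) uses the
band ratio, band strengths, and adopted numbers of C--H bonds per C atom
quoted in the text.
\begin{verbatim}
# Numerical check of the aliphatic-to-aromatic carbon ratio (values from the text)
I34_over_I33 = 0.20            # observed I(3.4)/I(3.3), upper end of typical range
A33, A34 = 4.0e-18, 2.7e-18    # band strengths per aromatic / aliphatic C-H bond [cm]
H_per_C_aliph = 2.5            # C-H bonds per aliphatic C (between -CH2 and -CH3)
H_per_C_arom = 0.75            # C-H bonds per aromatic C (between C6H6 and C24H12)

NH_ratio = I34_over_I33 * A33 / A34                  # N(H,aliph)/N(H,arom) ~ 0.30
NC_ratio = NH_ratio * H_per_C_arom / H_per_C_aliph   # N(C,aliph)/N(C,arom) ~ 0.09
print(NH_ratio, NC_ratio)
\end{verbatim}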
KZ11
take $I_{3.4}/I_{3.3}\approx1.88$
for the protoplanetary nebula IRAS\,22272+5435
(but much smaller $I_{3.4}/I_{3.3}$ ratios have also been
reported for this source; see Goto et al.\ 2003).
So far only
a few
sources (exclusively protoplanetary nebulae)
are reported to have $I_{3.4}/I_{3.3}$\,$\gtsim$\,1
(Hrivnak et al.\ 2007). They are atypical UIE sources:
their UIE spectra have most of the power emitted from two broad bands
peaking at $\simali$8$\mum$ and $\simali$11.8$\mum$,
while typical UIE spectra have distinctive peaks at 7.7, 8.6,
and 11.3$\mum$ (see Tokunaga 1997).\footnote{
Overall, the IR spectra of coals or kerogens
resemble that of atypical sources (e.g., some
protoplanetary nebulae; see Guillois et al.\ 1996).
They do not resemble the UIE features
seen in the interstellar sources except for highly
evolved (i.e., highly aromatized) coals
(see Papoular 2001).
}
We note that $N_{\rm C,aliph}/N_{\rm C,arom}$\,=\,0.09
is an upper bound as the 3.4$\mum$ emission feature could
also be due to anharmonicity of the aromatic C--H stretching
mode (Barker et al.\ 1987).
Let $\nu$ be the vibrational quantum number.
In a harmonic oscillator, the level spacing is constant;
the $\Delta\nu$\,=\,1 transition between high $\nu$ levels
results in the same spectral line
as for the $\nu$\,=\,1$\rightarrow$0 transition.
Anharmonicity decreases the spacing between the higher $\nu$ levels,
and the $\Delta\nu$\,=\,1 transitions between higher $\nu$ levels
occur at longer wavelengths. The anharmonicity model explains
the weaker features (at 3.40$\mum$, 3.51$\mum$, ...)
as ``hot bands''
($\nu$\,=\,2$\rightarrow$1, $\nu$\,=\,3$\rightarrow$2, ...)
of the 3.3$\mum$ fundamental $\nu$\,=\,1$\rightarrow$0
aromatic C--H stretching mode.
The 3.4$\mum$ emission feature could also be
due
in part to ``superhydrogenated'' PAHs in which some peripheral
C atoms have two H atoms (see Figure~2).
The extra H atom converts the originally aromatic ring into
an aliphatic ring. This creates two aliphatic C--H stretching bands:
one due to the symmetric and the other
to the asymmetric C--H stretching modes.
These bands would fall near 3.4$\mum$ and 3.5$\mum$,
with the former more intense than the latter,
consistent with astronomical observations (Bernstein et al.\ 1996).
The 3.4$\mum$ feature may also result from aliphatic sidegroups
attached as functional groups to PAHs
(see Figure~2; Duley \& Williams 1981, Pauzat et al.\ 1999,
Wagner et al.\ 2000).
The C--H stretching modes of methyl (-CH$_3$),
methylene (-CH$_2$-), and ethyl (-CH$_2$CH$_3$)
sidegroups on PAHs fall near the weaker satellite features
associated with the 3.3$\mum$ band.
All these possibilities (i.e., anharmonicity,
superhydrogenation, and aliphatic sidegroups)
probably contribute to the 3.4$\mum$ emission,
the extent of each depending on conditions
in the local environment.
Sandford (1991) argued that the satellite features
at 3.40, 3.46, 3.51, and 3.56$\mum$ in NGC\,7027
cannot be predominantly due to aliphatic sidegroups on PAHs.
KZ11
note that the 3.4$\mum$ aliphatic
C--H stretching mode is commonly observed in {\it absorption}
in the diffuse ISM. If the UIE carriers have the same
mixed aromatic-aliphatic structure as the bulk
of the hydrocarbon material, then in heavily obscured regions,
both the 3.3$\mum$ band and the 3.4$\mum$ band would show up
in {\it absorption}, with the 3.4$\mum$ absorption band
much {\it weaker} than the 3.3$\mum$ absorption band.
However, astronomical observations have actually shown
the opposite (see Figure~1): the 3.4$\mum$ absorption band
is much {\it stronger} than the 3.3$\mum$ absorption band
(e.g. in the Galactic center source GCS 3,
the 3.4$\mum$ absorption band is stronger than
the 3.3$\mum$ band by a factor of 35; Chiar et al.\ 2000).
Therefore, the bulk of the 3.4$\mum$ absorber in the ISM
must be hydrocarbon material in the larger grains,
evidently more strongly aliphatic than the UIE carriers
(Dartois et al.\ 2007).
\section{Constraints
from the 6.85$\mum$ Feature\label{sec:ch6.85um}}
In addition to the 3.4$\mum$ C--H stretching mode,
aliphatic hydrocarbon materials also have two C--H
deformation bands at 6.85$\mum$ and 7.25$\mum$.\footnote{
One may argue that in the KZ11-type coal- or kerogen-like
material, the aliphatic C--H bands may not occur
at the same wavelengths as for pure aliphatics
or PAHs with simple aliphatic sidegroups:
the aliphatic H atoms occupy a
broad range of
local chemical environments, subject to hydrogen
bonding perturbations by nearby O and S atoms.
Such interactions could conceivably shift the C--H frequencies
from their ``normal'' aliphatic positions.
However, laboratory measurements have shown that the aliphatic
C--H bands in coal or kerogen do occur at 3.4$\mum$ and 6.85$\mum$,
displaying little wavelength shift compared to that of pure aliphatics
(see Papoular 2001).
}
These two bands have been observed in weak absorption
in the diffuse ISM (Chiar et al.\ 2000).
They are also seen in emission in interstellar
and circumstellar UIE sources.
Their strengths (relative to the nearby 7.7$\mum$
C--C stretching band) also allow an estimate of
the aliphatic fraction of the UIE carrier.\footnote{
Coals or kerogens do not exhibit a distinct band
at 7.7$\mum$ and thus one cannot infer
their aliphatic fractions from
$I_{6.85}/I_{7.7}$.
}
\begin{figure}
\begin{center}
\includegraphics[width=16cm,angle=0]{f2.ps}
\caption{\footnotesize
\label{fig:fig2}
Examples
of ``superhydrogenated'' PAHs
with methyl (-CH$_3$) aliphatic sidegroups.
In addition to anharmonicity, superhydrogenation
and methyl-like aliphatic sidegroups attached to PAHs
may contribute to
the weak 3.4$\mum$
emission feature accompanying the 3.3$\mum$ feature.
}
\end{center}
\end{figure}
Let $I_{6.85}$ and $I_{7.7}$
be the observed intensities
of the 6.85$\mum$ and 7.7$\mum$ emission features.
Let $A_{6.85}$ and $A_{7.7}$
be the strengths
of the 6.85$\mum$ aliphatic C--H band
and the 7.7$\mum$ aromatic C--C band.
We take $A_{6.85} = 2.3\times10^{-18}\cm$
per CH$_2$ or CH$_3$ group,
an average of
that measured
for methylcylcohexane
($A_{6.85} = 3.0\times10^{-18}\cm$ per CH$_3$ group;
d'Hendecourt \& Allamandola 1986)
and for
hydrogenated amorphous carbon
($A_{6.85} = 1.5\times10^{-18}\cm$
per CH$_2$ or CH$_3$ functional group;
Dartois \& Mu\~noz-Caro 2007).\footnote
Typical type II Kerogens have
$A_{6.85} \approx 3.6\times10^{-19}\cm$ per C atom
(Papoular 2001).
If 15\% of the C atoms in kerogens are in aliphatic form,
for kerogens we would have $A_{6.85} \approx 2.4\times10^{-18}\cm$
per aliphatic C atom.
This is close to that adopted in this work:
$A_{6.85} \approx 2.3\times10^{-18}\cm$
per aliphatic C atom.
}
We take $A_{7.7} = 5.4\times10^{-18}\cm$ per C atom
for charged aromatic molecules (Draine \& Li 2007).
Let $N^\prime_{\rm C,aliph}$ and $N^\prime_{\rm C,arom}$
respectively be the numbers of aliphatic and aromatic C atoms
in the emitters of the 6--8$\mum$ UIE bands.
Let $B_\lambda\left(T\right)\propto
\lambda^{-3}/\left[\exp\left(hc/\lambda kT\right)-1\right]$
be the Planck function at wavelength $\lambda$ and temperature $T$
(where $h$ is
Planck's
constant, $c$ is the speed of light,
and $k$ is
Boltzmann's
constant),
with $B_{6.85}/B_{7.7}\approx0.9\pm0.2$
for $330<T<1000\K$.
Then $N^\prime_{\rm C,aliph}/N^\prime_{\rm C,arom}\approx
\left(I_{6.85}/I_{7.7}\right)\times\left(A_{7.7}/A_{6.85}\right)
\times\left(B_{7.7}/B_{6.85}\right)\approx0.10$
for NGC 7027 ($I_{6.85}/I_{7.7}\approx0.039$)
and $N^\prime_{\rm C,aliph}/N^\prime_{\rm C,arom}\approx0.14$
for the Orion bar ($I_{6.85}/I_{7.7}\approx0.053$).
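The same arithmetic can be checked numerically. The short script below (again
only an illustrative check) uses the band strengths quoted above and the mid
value $B_{6.85}/B_{7.7}=0.9$, and also verifies the quoted range of the
Planck-factor ratio over $330<T<1000\K$.
\begin{verbatim}
import numpy as np

# Planck-factor ratio B(6.85)/B(7.7) for B_lambda ~ lambda^-3/(exp(hc/lambda k T)-1)
hc_over_k = 14388.0                              # micron K
T = np.linspace(330.0, 1000.0, 100)
B = lambda lam: lam**-3 / np.expm1(hc_over_k / (lam * T))
print(B(6.85) / B(7.7))                          # ranges over ~0.7-1.1, i.e. 0.9 +/- 0.2

A77, A685 = 5.4e-18, 2.3e-18                     # band strengths [cm] per C / per CH_n
B_ratio = 0.9                                    # adopted mid value of B(6.85)/B(7.7)
for name, I_ratio in [("NGC 7027", 0.039), ("Orion bar", 0.053)]:
    NC = I_ratio * (A77 / A685) / B_ratio        # N'(C,aliph)/N'(C,arom)
    print(name, round(NC, 2))                    # ~0.10 and ~0.14
\end{verbatim}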
KZ11
derived $I_{6.85}/I_{7.7}\approx1.43$
for IRAS\,22272+5435, but the observed spectrum and
decomposition fit support a significantly smaller value.
We conclude that the carriers of the 6--8$\mum$ UIE bands
are predominantly aromatic, with $<$\,15\% of the C atoms
in aliphatic form. The aliphatic fraction, while still small,
appears to be higher than estimated for the 3.3--3.4$\mum$ band carriers,
consistent with increased aromatization of the smallest particles,
which are heated to the highest temperatures.
\section{Discussion\label{sec:discussion}}
KZ11 attribute the broad plateau emission
around 8 and 12$\mum$ to the aliphatic component
of the UIE carriers. They hypothesize that the clustering
of aromatic rings may break up the simple methyl- or
methylene-like sidegroups and hence the aliphatic
components may take many other forms
(e.g., -CH=CH$_2$, -CH=CH-, C=CH$_2$, C=C-H).
They speculate that the in-plane and out-of-plane bending modes
of these sidegroups may combine to form the plateau.
We note that the PAH model naturally accounts for the so-called
``plateau'' emission through the combined wings of
the C--C and C--H bands.
We also note that the clustering of aromatic rings and
aliphatic chains would be accompanied by forming new C--C bonds
and losing H atoms. Laboratory measurements of coals
have shown that
lowering
the H content leads to
aromatization (see Papoular 2001).
KZ11
claim that the PAH hypothesis
postulates that the UIE emission is excited exclusively
by far-UV photons, and that this is inconsistent with
observation of UIE emission in reflection nebulae
excited by cool stars (Sellgren et al.\ 1990).
However, Li \& Draine (2002)
explicitly
considered
the excitation of PAHs by longer-wavelength photons,
and showed that the light from even relatively cool stars
can
excite UIE emission, consistent with observations.
The excitation of PAHs by visible or even near-IR photons
with wavelengths up to $\simali$1--2$\mum$ has been further
experimentally verified (Mattioda et al.\ 2005).
KZ11
note the constancy of UIE band ratios
in regions (e.g. the Carina nebula) where the radiation intensity
changes by orders of magnitude. This is {\it precisely} what one
expects if the emission comes from single-photon heating of
nanoparticles [see Figure 13 of Li \& Draine (2001),
Figure 4b of Li \& Draine (2002),
Figure 1f of Draine \& Li (2007)].
KZ11
note that of the more than 160 molecules
identified in circumstellar and interstellar environments,
none is a PAH. This is true, but also not surprising because
the mid-IR UIE bands -- the major observational information --
are representative of functional groups and do not fingerprint
individual PAH molecules.\footnote{
The far-IR bands are more sensitive to
the skeletal characteristics of a molecule,
and hence are more diagnostic of
the molecular identity and more powerful for
chemical identification of unknown species.
In principle, far-IR spectroscopy could be
used to test the KZ11 hypothesis:
the KZ11-type material has an extremely ``floppy''
structure compared to the more rigid PAHs,
and therefore there would be many low frequency
skeletal bends and very low-frequency
pseudo-rotations about bond axes.
However, there is little information on
the far-IR spectroscopy of coal or kerogen.
Even for PAHs, this information is very limited
(e.g., see Joblin et al.\ 2009, Zhang et al.\ 2010).
}
KZ11
argue that the carrier
of the UIR features cannot be a ``pure aromatic compound''.
Proponents of the identification of the astronomical UIE features
as coming from PAHs do not claim that the emitting material is
``pure aromatic compound'', as strictly defined by a chemist.
The astronomical material may well include
a {\it minor} aliphatic component,
as well as defects, substituents (e.g., N in place of C),
partial dehydrogenation, and sometimes superhydrogenation
(Tielens 2008).
Some of the nanoparticles may be multilayer aggregates of PAHs.
KZ11
state that PAH molecules have strong
and narrow absorption features in the UV whereas the search
for characteristic absorption features of PAHs superposed
on the interstellar extinction curves was not successful
(e.g., see Clayton et al.\ 2003).
For individual {\it small} PAHs, this is true.
However, in the PAH hypothesis it is natural to
expect that there will be a large number of distinct
species present in the ISM, and no single UV band may
be strong enough to be identified in the UV.
This also explains why laboratory-measured spectra
of {\it individual} PAHs do not precisely match
the observed UIE features in band widths and peak wavelengths,
while {\it combined} laboratory spectra of neutral PAHs
and their ions can successfully reproduce the UIE bands
associated with many different interstellar objects
(Allamandola et al.\ 1999).
There are, in fact, over 400 diffuse interstellar bands (DIBs)
in the optical that remain to be identified
(Sarre 2006, Salama et al.\ 2011).
Many of these may eventually be found to be produced
by specific PAHs, but at this time we lack the laboratory
spectroscopy to make the identifications.
The lack of identification of any specific PAH
is not a fatal problem for the PAH hypothesis,
at least at this time. As we develop a better knowledge
of the gas-phase spectroscopy of the larger PAHs,
this story may change.
If the DIBs are electronic transitions of PAHs,
they hold great promise for identifying
specific PAH molecules, as the electronic transitions
are more characteristic of a specific PAH molecule
than the mid-IR C--H and C--C vibrational bands.
The strong interstellar 217.5\,nm extinction bump
is likely to be a {\it blend} of $\pi$\,--\,$\pi^{\ast}$
absorption bands from the entire population of PAHs,
with the fine structures from individual PAH molecules smoothed out.
Furthermore, internal conversion may lead to extreme broadening of
the UV absorption bands in larger PAHs, which may account for
absence of recognizable absorption features shortward of 200\,nm.
This has been demonstrated both experimentally and theoretically.
L\'eger et al.\ (1989) measured the absorption spectra of mixtures
of over 300 neutral PAH species with $\simali$12--28 C atoms.
Joblin et al.\ (1992) measured the absorption spectra of
neutral PAH mixtures containing $\simali$14--44 C atoms.
All these spectra have a strong UV feature around 217.5\,nm
(but relatively broader than the interstellar bump).
While it is true that
laboratory PAH samples
do not precisely reproduce the observed profile of
the 217.5\,nm feature (L\'eger et al.\ 1989, Joblin et al.\ 1992),
this is probably due to the fact that laboratory studies
are generally limited to small PAHs while interstellar PAHs
are much larger (e.g. models that reproduce
the $\simali$3--20$\mum$ UIE bands have most of
the PAH mass in PAHs with $>$\,100 C atoms,
see Li \& Draine 2001, Draine \& Li 2007).
Indeed, Steglich et al.\ (2010) showed that larger PAHs
provide better fits to the observed 217.5\,nm feature.
Cecchi-Pestellini et al.\ (2008) also showed that a weighted
sum of 50 neutral and ionized PAHs in the size range of
$\simali$10--66 C atoms
can
reproduce the 217.5\,nm
extinction bump observed in various environments.
\section{Conclusion}\label{sec:summary}
We examine the hypothesis of mixed aromatic-aliphatic
organic matter as the UIE carriers.
We place an upper limit on the aliphatic fraction of
the UIE carriers based on the observed
weak
intensities of
the 3.4$\mum$ and 6.85$\mum$ emission features.
By attributing them {\it exclusively} to
aliphatic C--H stretch and aliphatic C--H deformation,
we derive the fraction of carbon atoms
in aliphatic form to be $<$\,15\%.
We conclude that the UIE emitters are predominantly
aromatic: PAHs
dominate the principal UIE bands.
Our expectation is that confirmation will not come
until we have laboratory spectroscopy of PAH candidates
in the gas phase that precisely match some of the observed DIBs.
\acknowledgments
AL is supported in part by NSF AST-1109039.
BTD is supported in part by NSF AST-1008570.
We thank R.\ Glaser, M.\ K\"ohler, S.\ Kwok,
and the anonymous referee for helpful comments.
\section{Stellar mass completeness}
\label{sec:completeness}
\begin{figure}
\includegraphics[width=\columnwidth]{images/stellar_mass_completeness.pdf}
\caption{Histogram of stellar masses in the \texttt{L100Ref} simulation for different DMO subhalo mass limits from the matched subhalo list.
The grey shaded area designates the stellar mass resolution limit ($M_{\star} \,/\, \mathrm{M_{\odot}} > 1.8 \times 10^{8}$, or 100 star particles at the initial baryon mass).}
\label{fig:stellar_mass_completeness}
\end{figure}
Before model training we pre-select haloes based on their dark matter properties only, to ensure the same selection can be applied to any DMO simulation the model is applied to.
This is intended to avoid a situation where a model is applied to haloes with properties that were not present in the training set.
Since the selection is done on DMO properties only, we here check whether galaxies below the resolution limit in the hydro simulation are included, and the incompleteness of galaxies above the resolution limit.
\fig{stellar_mass_completeness} shows a histogram of stellar mass in the \texttt{L100Ref} simulation for different DMO subhalo mass cuts.
For even the strictest subhalo mass limit there are large numbers of subhaloes with stellar masses below the resolution limit; this suggests their baryonic properties are highly unresolved.
However, the important quantity is the completeness at fixed stellar mass.
For a subhalo mass limit of $M_{\mathrm{subhalo}} \,/\, \mathrm{M_{\odot}} > 10^{10}$ the completeness is greater than 95\% above the stellar mass resolution limit ($M_{\star} \,/\, \mathrm{M_{\odot}} > 1.8 \times 10^{8}$, approximately equal to 100 star particles at the initial baryon mass, i.e. ignoring stellar evolution mass loss), and 100\% complete above $5 \times 10^{9} \, \mathrm{M_{\odot}}$.
We use a subhalo mass limit of $M_{\mathrm{subhalo}} \,/\, \mathrm{M_{\odot}} > 10^{10}$ throughout the rest of the text.
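The completeness quoted above can be verified directly from the matched
subhalo catalogue. The following minimal sketch illustrates the calculation;
the file names are placeholders for the matched DMO subhalo masses and hydro
stellar masses, and the thresholds are those adopted in the text.
\begin{verbatim}
import numpy as np

# Placeholder arrays: DMO subhalo masses and matched hydro stellar masses [Msun]
m_subhalo = np.load("matched_dmo_subhalo_mass.npy")
m_star = np.load("matched_stellar_mass.npy")

mstar_lim = 1.8e8    # ~100 star particles at the initial baryon mass
msub_lim = 1e10      # DMO subhalo mass cut adopted in the text

resolved = m_star > mstar_lim
selected = m_subhalo > msub_lim
completeness = (resolved & selected).sum() / resolved.sum()
print(f"completeness above the stellar mass limit: {100 * completeness:.1f}%")
\end{verbatim}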
\section{Isotonic Fits to a Single Feature}
\label{sec:shamcomp}
In order to provide a qualitative assessment of the ERT model we choose to fit a simple model to the relationship between each predictor and a \textit{single} feature.
We use subhalo mass and $V_{\mathrm{max}}$ as our chosen features, since these are commonly used in SHAM approaches.
We fit each relation with an Isotonic regression model, which ensures monotonicity.
We do this for the training set, and evaluate the performance on the test set.
Each relation, and the corresponding fits, are shown in \fig{sham_comparison}.
The percentage of galaxies where the predicted value is within 0.2 dex of the true value is quoted in each panel.
In each case this percentage is lower than that achieved with the ERT model.
We also show the Pearson correlation coefficient for the ERT model as well as the Isotonic regression model for each feature in \fig{pearson_appendix}.
The ERT model outperforms the Isotonic regression model for all predictors, though the performance is comparable using $V_{\mathrm{max}}$ for the stellar mass, stellar velocity dispersion, and stellar metallicity.
This is expected from the strong correlation between the predictor and $V_{\mathrm{max}}$ in each of these cases, shown in \fig{sham_comparison}.
\fig{feature_importance_predictors} also shows that these three predictors are particularly dependent on $V_{\mathrm{max}}$, whereas other predictors have greater contributions from other features.
It is also interesting to see that subhalo mass is the more accurate predictor for gas mass, black hole mass and star formation rate compared to $V_{\mathrm{max}}$, which highlights that using one or the other feature in a SHAM approach may not lead to optimised predictions for all galaxy features -- the ML approach, on the other hand, simply incorporates all features, and chooses the best for each predictor.
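A minimal sketch of this single-feature baseline, using scikit-learn's
\texttt{IsotonicRegression} to enforce monotonicity, is given below for
illustration; the input arrays are placeholders, and this is not necessarily
the exact implementation used to produce the figures.
\begin{verbatim}
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Placeholder arrays: log10 subhalo mass (feature) and log10 stellar mass (predictor)
x_train, y_train = np.load("train_logmsub.npy"), np.load("train_logmstar.npy")
x_test, y_test = np.load("test_logmsub.npy"), np.load("test_logmstar.npy")

iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(x_train, y_train)
y_pred = iso.predict(x_test)

frac = np.mean(np.abs(y_pred - y_test) < 0.2)        # fraction within 0.2 dex
rho = np.corrcoef(y_pred, y_test)[0, 1]              # Pearson correlation coefficient
print(f"within 0.2 dex: {100 * frac:.1f}%, Pearson r = {rho:.2f}")
\end{verbatim}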
In \fig{clustering_appendix} we show the impact of using the Isotonic regression model (using subhalo mass as the feature) on the projected correlation function and the GSMF.
The GSMF is mostly reproduced, as expected due to the strong correlation between feature and predictor.
However, the projected correlation function (for $11 < M_{\star} \,/\, \mathrm{M_{\odot}} < 11.5$) shows a deficit in the normalisation compared to the ERT model, particularly on small scales.
One explanation is that high mass satellite galaxies, which are not common in the training set, may be more common in the larger P-Millennium volume.
The ERT model then handles these objects better than the Isotonic model, utilising other features that are more important in these environments (\textit{e.g.} the satellite flag).
\begin{figure}
\includegraphics[width=\columnwidth]{images/sham_comparison_L0050N0752.pdf}
\caption{
Relations between features commonly used in SHAM approaches (Subhalo mass and $V_{\mathrm{max}}$; $x$--axis) and each predictor ($y$--axis).
Each panel shows a 2D histogram of the distribution (blue) alongside a fitted monotonic linear relation (black line).
The percentage of galaxies where the predicted value is within 0.2 dex of the true value is quoted in each panel.
}
\label{fig:sham_comparison}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/pearson_isotonic_comparison.pdf}
\caption{
Pearson correlation coefficient for the ERT model (\texttt{L050AGN+ZoomAGN}) as well Isotonic regression models trained using subhalo mass and $V_{\mathrm{max}}$.
Each predictor is shown on the $x$-axis.
}
\label{fig:pearson_appendix}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/clustering_appendix_M_DM.png}
\includegraphics[width=\columnwidth]{images/gsmf_pmillennium_sham.png}
\caption{
Predictions for the projected correlation function (top panel) and galaxy stellar mass function (bottom panel) using the Isotonic regression model (using subhalo mass; brown lines), compared to the ERT model (blue) with all features.
}
\label{fig:clustering_appendix}
\end{figure}
\section{Application to DMO simulations}
\label{sec:pmill_predictions}
\begin{figure*}
\includegraphics[width=\textwidth]{images/clustering.png}
\caption{
Projected correlation function in bins of stellar mass; the mass range is indicated in each column.
The results from \texttt{L100Ref} are shown in blue, and the \texttt{L050AGN+Zoom} machine learning model predictions on P-Millennium are shown in orange.
Observational results from GAMA \citep{farrow_galaxy_2015} are shown in grey.
Errors are estimated using jackknife resampling of each simulation volume.
}
\label{fig:clustering}
\end{figure*}
A key aim of the model is to produce predictions for distribution functions and clustering statistics for much larger volumes than can be achieved using periodic hydrodynamic simulations.
To this end we test how well our model produces the two-point galaxy correlation function (2PCF), galaxy stellar mass function (GSMF), star-forming sequence, stellar mass -- metallicity relation and the stellar mass -- black hole relation in independent DMO volumes, including the $(800 \, \mathrm{Mpc})^3$ P-Millennium simulation.
\subsection{The Two-Point Galaxy Correlation Function}
Galaxy clustering measurements provide a powerful means of testing gravity and cosmological parameters, including the contribution of dark energy, as well as the impact of galaxy bias on galaxy formation and evolution.
One of the key statistics for measuring clustering is the spherically averaged two-point correlation function (2PCF) \citep{peebles_large-scale_1980}, defined as
\begin{equation}
\xi(r) = \frac{1}{\langle n \rangle} \frac{\mathrm{d}P}{\mathrm{d}V} - 1,
\end{equation}
where $\langle n \rangle$ is the mean comoving number density of galaxies, and $\mathrm{d}P/\mathrm{d}V$ is the probability of finding a galaxy in volume $\mathrm{d}V$ at comoving distance $r$ from another galaxy.
For redshift surveys, where the line-of-sight distance is inaccessible, this is often split into projected and line-of-sight distance components, which can be used to estimate the \textit{projected} correlation function \citep{davis_survey_1983},
\begin{equation}
w_{\mathrm{p}} (r_{\mathrm{p}}) = 2 \int^{\pi_{\mathrm{max}}}_{0} \xi(r_{\mathrm{p}}, \pi) \, \mathrm{d} \pi,
\end{equation}
where $\pi_{\mathrm{max}}$ is the maximum distance along the line-of-sight.
Since $w_{\mathrm{p}} (r_{\mathrm{p}})$ is robust against redshift space distortion effects it is better suited for comparisons with simulations.
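For illustration, the sketch below implements the natural estimator of $w_{\mathrm{p}}(r_{\mathrm{p}})$ for a periodic box, with analytic random pair counts; it is an $\mathcal{O}(N^2)$ brute-force version intended only to make the definition concrete, and production measurements would use an optimised pair-counting code.
\begin{verbatim}
import numpy as np

def projected_cf(pos, boxsize, rp_bins, pi_max, n_pi_bins=20):
    """Natural estimator of w_p(r_p) in a periodic box; z is the line of sight."""
    N = len(pos)
    d = pos[None, :, :] - pos[:, None, :]
    d -= boxsize * np.round(d / boxsize)          # minimum-image convention
    iu = np.triu_indices(N, k=1)                  # each pair counted once
    rp = np.hypot(d[..., 0][iu], d[..., 1][iu])
    pi = np.abs(d[..., 2][iu])
    pi_bins = np.linspace(0.0, pi_max, n_pi_bins + 1)
    DD, _, _ = np.histogram2d(rp, pi, bins=[rp_bins, pi_bins])
    # analytic random pair counts for a periodic volume
    n_pairs = 0.5 * N * (N - 1)
    vol = np.pi * np.diff(rp_bins**2)[:, None] * 2.0 * np.diff(pi_bins)[None, :]
    RR = n_pairs * vol / boxsize**3
    xi = DD / RR - 1.0                            # xi(rp, pi)
    return 2.0 * np.sum(xi * np.diff(pi_bins)[None, :], axis=1)
\end{verbatim}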
Simulation studies of galaxy clustering are typically carried out on large scales with DMO simulations or relatively lower resolution hydrodynamical simulations \citep[\textit{e.g.} BAHAMAS;][]{mccarthy_bahamas_2017}, and on smaller scales using high-resolution hydrodynamical simulations, which can resolve the baryonic feedback effects on haloes \citep[see][]{van_daalen_impact_2014}.
We here see how well our machine learning model can provide predictions on both large and small scales \textit{simultaneously} by applying the model to the large-volume P-Millennium simulation.
We estimate errors on our clustering statistics using jackknife resampling of each simulation volume \citep[for details, see][]{artale_small-scale_2017}.
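The jackknife procedure can be sketched as follows: the volume is divided into $n_{\rm side}^3$ subvolumes, the statistic is re-measured with each subvolume omitted in turn, and the scatter of the leave-one-out estimates gives the error. The choice of $n_{\rm side}=3$ below is purely illustrative.
\begin{verbatim}
import numpy as np

def jackknife(pos, boxsize, statistic, n_side=3):
    """Leave-one-subvolume-out jackknife errors for a clustering statistic."""
    cell = np.clip((pos // (boxsize / n_side)).astype(int), 0, n_side - 1)
    label = cell[:, 0] * n_side**2 + cell[:, 1] * n_side + cell[:, 2]
    est = np.array([statistic(pos[label != i]) for i in range(n_side**3)])
    n = len(est)
    mean = est.mean(axis=0)
    err = np.sqrt((n - 1) / n * np.sum((est - mean)**2, axis=0))
    return mean, err

# e.g. mean_wp, err_wp = jackknife(pos, 100.0, lambda p: projected_cf(p, 100.0, rp_bins, 20.0))
\end{verbatim}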
\fig{clustering} shows the projected 2PCF measured on the \texttt{L100Ref} simulation, the \texttt{L050AGN+Zoom} model applied to the P-Millennium simulation, and compared to observational results from GAMA \citep{farrow_galaxy_2015} in different stellar mass bins.
As shown in \cite{artale_small-scale_2017}, the \texttt{L100Ref} simulation is in good agreement with the observational constraints on small scales up to stellar masses of $10^{11} \, \mathrm{M_{\odot}}$.
However, on larger scales ($r_{\mathrm{p}} > 3 h^{-1} \mathrm{Mpc}$) there is a deficit in the normalisation, attributed to finite-volume effects; the smaller periodic boxes do not sample the largest modes in the power spectrum.
There are also too few galaxies above a stellar mass of $10^{11} \, \mathrm{M_{\odot}}$ in \texttt{L100Ref} to obtain robust clustering statistics.
The \texttt{L050AGN+Zoom} model, applied to the much larger volume P-Millennium simulation, shows no such deficit at the largest scales.
We are in fact able to make predictions out to scales of $100 \, h^{-1} \, \mathrm{Mpc}$, a factor of ten larger than achievable with the periodic simulations.
The model is also able to make predictions for the clustering of the most massive galaxies, $> 10^{11} \, \mathrm{M_{\odot}}$, since there are sufficient numbers of these galaxies to produce reliable statistics.
There is, however, a small deficit in the normalisation at the smallest scales in the lower mass bins for the \texttt{L050AGN+Zoom} model (outside the estimated errors).
This may be due to a number of effects, one being the lower resolution of the P-Millennium simulation, which may lead to sub-structures on small scales being smoothed out.
To test the impact of this we applied the \texttt{L050AGN+Zoom} model to the DMO $100 \, \mathrm{Mpc}$ box (using the same initial conditions as the \texttt{L100Ref} simulation), which has a mass resolution $\sim 10 \times$ higher.
This is shown in \fig{clustering}; at the largest scales the model shows the same deficit as the \texttt{L100Ref} simulation, due to the smaller box size.
However, at small scales there is the same deficit as in the \texttt{L050AGN+Zoom} model applied to P-Millennium.
This confirms that it is not resolution effects leading to the lower amplitude.
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\columnwidth]{images/gsmf_comparison.png} \\
\includegraphics[width=\columnwidth]{images/gsmf_pmillennium.png}
\end{multicols}
\caption{The galaxy stellar mass function (GSMF).
Both panels show the GSMF from the \texttt{L100Ref} (orange) and \texttt{L050AGN} (green) simulation sets for comparison, as well as observational constraints from \protect\cite{baldry_galaxy_2012}.
Lines are dotted where there are fewer than 10 galaxies per bin.
\textit{Left:} the predicted GSMF on the $(100 \, \mathrm{Mpc})^3$ DMO volume from machine learning models trained on the \texttt{L050AGN} (purple, dashed) and \texttt{L050AGN+Zoom} (purple, solid) simulation sets.
\textit{Right:} the predicted GSMF on the $(800 \; \mathrm{Mpc})^3$ P-Millennium DMO simulation, from machine learning models trained on the \texttt{L050AGN} (blue, dashed) and \texttt{L050AGN+Zoom} (blue, solid) simulation sets.}
\label{fig:gsmf_pmillennium}
\end{figure*}
An alternative explanation is the well-known effect of baryons on their host dark matter haloes \citep[\textit{e.g.}][]{velliscig_intrinsic_2015,schaller_baryon_2015}.
This may not only affect the masses of haloes, but also their mass distribution, changing the substructure on small scales, and hence the clustering measurement \citep{van_daalen_impact_2014,hellwing_effect_2016}.
To test whether this is causing the lower normalisation at small scales, we extract a catalogue of features from the full hydro simulation (\texttt{L100Ref}) and use these as inputs to the \texttt{L050AGN+Zoom} model.
We emphasise that these `halo' features contain the contribution from both baryons and dark matter, but are otherwise identical to the features from a DMO simulation.
The predicted clustering for this hybrid model application is shown in the second panel of \fig{clustering}; the normalisation matches that of the \texttt{L100Ref} simulation, confirming that it is indeed baryonic effects causing the lower normalisation on small scales.
We stress that this is not strictly a fair use of the machine learning model, as it was trained on haloes from a DMO simulation, and as such the predictions should be taken with some caution.
However, we argue this is a relatively `clean' test of the impact of baryons on the halo, and the knock-on effect on the clustering.
Other effects may also contribute to the deficit, such as differences in the parameters of the halo finder between DMO and hydro simulations, and between simulations of different resolution.
However, it seems clear that baryonic effects on haloes are a key contributor.
A similar effect at small scales has been seen in semi-analytic models applied to DMO simulations \citep{farrow_galaxy_2015,contreras_galaxydark_2015}.
The machine learning model presented here allows us to cleanly test this effect on identical haloes.
We also compared the model predictions for the projected correlation function against those using a single subhalo feature (subhalo mass) to predict the stellar mass, applied to the P-Millennium volume.
The normalisation is underestimated in this simple model compared to the GAMA measurements, and this is particularly pronounced in the highest stellar mass bin.
Full details are provided in \app{shamcomp}.
\section{Discussion \& Conclusions}
\label{sec:conc}
We have demonstrated the effectiveness of machine learning methods in modelling the complex relationships between galaxies and their host haloes by training a machine learning model to directly learn this mapping.
By combining hydro and DMO simulations we avoid baryonic effects on haloes that would bias predictions.
And by using a training set consisting of both periodic and zoom simulations of galaxy clusters, we include rare environments that may not be present in typical periodic simulations, allowing the model to be applied to much larger volume dark matter-only simulations, increasing the dynamic range, and allowing the evaluation of clustering statistics over much larger scales.
Our conclusions are as follows:
\begin{itemize}
\item The model successfully predicts the stellar mass, stellar velocity dispersion and black hole mass, and provides reasonable predictions for the star formation rate, stellar metallicity and total gas mass.
Even where the stellar metallicity shows some dispersion in the prediction, the overall distribution is recovered.
\item Star formation rates and gas masses are biased low due to the effect of quiescent, gas-poor galaxies, and some suggestions for improving this are put forward, including the use of historical halo features.
\item Adding features representing the local density leads to a negligible increase in the predictive accuracy for most properties, except the gas mass, which shows significant improvement, particularly in cluster environments.
\item We apply the trained model to the \mbox{\sc{P-Millennium}}\ simulation and analyse the projected two-point correlation function.
We are able to predict the clustering of galaxies out to much larger scales than in the periodic hydro simulations ($> 10 \, h^{-1} \, \mathrm{Mpc}$), as well as analyse the clustering of rarer, high-mass galaxies, and find that \mbox{\sc{EAGLE}}\ is in good agreement with observational constraints from GAMA on large scales.
On smaller scales we conclude that baryonic effects on haloes affect the clustering statistics.
\item The predicted galaxy stellar mass function is in excellent agreement with that given by the periodic hydro simulations at low and intermediate masses, and extends the relation to higher masses.
\item The black hole -- stellar mass and stellar mass -- metallicity relations are well reproduced, though with less scatter, as seen in other machine learning models.
\item The normalisation of the star-forming sequence is slightly under-predicted at the characteristic mass, reflecting both the lower normalisation in the training data and the lower predicted stellar masses on the test set.
However, the general form is in good agreement.
\item $V_{\mathrm{max}}$ is the most important feature in all simulation sets. Measures of the local environment, such as the satellite flag, host halo mass and local density, do not show high importance in any of the models.
\end{itemize}
We stress that our model is not intended as a replacement of traditional galaxy formation models: it is in fact wholly reliant on such models to train from.
It does, however, provide a means of expanding the predictions from such models to much larger periodic volumes.
These larger volumes are useful for a number of science questions.
Galaxy clustering is a particularly important application we have demonstrated here, allowing us to test the clustering statistics of high-resolution hydrodynamic simulations in the high-mass, large-separation regime.
As demonstrated by \cite{jo_machine-assisted_2019}, additional features, such as the halo merger history and its assembly and formation time, are expected to have a significant positive impact on the prediction accuracy.
Whilst we have found that features describing the local environment are not highly important, additional parameters describing, for example, the tidal shear \citep[\textit{e.g.}][]{lucie-smith_interpretable_2019}, may also encode more useful information for the machine to learn from.
It may also be possible to make predictions at multiple redshifts simultaneously by providing the machine with the scale factor, as demonstrated in \cite{moster_galaxynet_2020}.
The \textsc{C-EAGLE} sample provides a wealth of training data on rich cluster environments; however, those environments at the opposite end of the overdensity distribution, extreme underdensities or \textit{cosmic voids}, are less well sampled in our training set.
Void regions do not have as obvious an effect on their constituent galaxies' properties as rich cluster environments, where galaxy mergers are far more common and extreme processes such as ram-pressure stripping occur; however, noticeable effects are still seen in voids in the fiducial periodic \textsc{EAGLE} volumes \citep{paillas_baryon_2016,xu_galaxy_2020}.
Larger, more significantly underdense regions are, as for overdense regions, not well sampled in the periodic volumes; however, such voids are an important constituent of the universe, making up $\sim$ 60\% of the cosmic volume \citep{pan_cosmic_2012}.
In future work we will use resimulations of a range of overdensities down to low redshift to better populate this region of overdensity space, reducing generalization errors for galaxies in these environments.
We have focused on six key baryonic properties, but other baryonic properties are simple to add, as well as the emission properties of galaxies if combined with post-processing pipelines.
This will allow for the construction of extremely large lightcones \citep[as demonstrated in][using their empirical modelling plus simulation-calibrated approach]{hearin_generating_2020}, necessary for making predictions for wide-field surveys from the upcoming Roman and Euclid space-based observatories \citep{potter_pkdgrav3_2017}.
To this end, in future work we will explore predictions during the epoch of reionisation, where we will leverage the \mbox{\sc{Flares}}\ simulations \citep{lovell_first_2021}.
A unique aspect of \mbox{\sc{Flares}}\ is that it consists of resimulations of a range of overdensities, providing training data in extreme over- and \textit{under}-dense environments, which may aid predictions of galaxy properties across all environments.
\section{Feature exploration}
\label{sec:feature_exploration}
\begin{figure}
\includegraphics[width=\columnwidth]{images/feature_importance_predictors_L0050N0752.png}
\caption{Matrix showing the relative importance ($0\rightarrow1$, low to high importance) of each feature ($y$-axis) for each predictor quantity ($x$-axis), for the \texttt{L050AGN+Zoom} model.
The importance is normalised by the maximum for each predictor.
}
\label{fig:feature_importance_predictors}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/feature_importance_all.png}
\caption{Relative feature importance as described by ERT, across all features simultaneously.
\texttt{L050AGN} (blue) and \texttt{L050AGN+Zoom} (orange) machine learning models are shown, including additional features describing the local density on different scales ($\rho(R)$).
}
\label{fig:feature_importance_all}
\end{figure}
Feature importance in ERT can be evaluated from the relative position of a given feature in the tree; the closer to the root node in the ensemble of trees, the higher the importance.
In order to evaluate the feature importance for each predictor, we re-train the model on each predictor individually.
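As an illustrative sketch of this procedure (with placeholder arrays rather than our matched catalogues, and assumed hyperparameters), the per-predictor importances can be extracted and normalised as follows:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Placeholder arrays standing in for the matched DMO features and
# hydro predictors (shapes only; not the real catalogues).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # M_sub, R_1/2, v, v_max, R_vmax,
                                 # E_p, M_crit200, satellite flag
Y = rng.normal(size=(5000, 6))   # M_star, M_gas, M_BH, v_disp, SFR, Z_star

importance = np.zeros((X.shape[1], Y.shape[1]))
for j in range(Y.shape[1]):
    # Re-train a separate ERT instance on each predictor individually
    ert = ExtraTreesRegressor(n_estimators=200, random_state=0)
    ert.fit(X, Y[:, j])
    imp = ert.feature_importances_        # impurity-based, sums to 1
    importance[:, j] = imp / imp.max()    # normalise by the maximum
                                          # for each predictor
\end{verbatim}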
\fig{feature_importance_predictors} shows a matrix of each predictor against each feature, coloured by their relative importance.
The order of relative importance is generally the same for all predictors.
$V_{\mathrm{max}}$ is by far the most important feature; \cite{kamdar_machine_2016-1} attributed a similarly high importance to $V_{\mathrm{max}}$ in their machine learning model trained on Illustris.
A number of other studies have highlighted the importance of $V_{\mathrm{max}}$ for predicting baryonic properties.
\cite{matthee_origin_2017} showed that, in \mbox{\sc{EAGLE}}, $V_{\mathrm{max}}$ is a key predictor of the stellar mass, more so than the halo mass.
\cite{chaves-montero_subhalo_2016} use a subhalo abundance matching technique to test the recovery of the clustering of galaxies in \mbox{\sc{EAGLE}}\ and find a similarly strong dependence on $V_{\mathrm{max}}$.
The circumgalactic medium mass fraction, at fixed halo mass, has also been shown to correlate strongly with $V_{\mathrm{max}}$ (when parametrised as a ratio with the virial circular velocity, closely related to the halo binding energy), in both \mbox{\sc{EAGLE}}\ and Illustris \citep{davies_gas_2018,davies_quenching_2020,oppenheimer_feedback_2019}; the authors of these studies argue that a high $V_{\mathrm{max}}$ corresponds to an early collapse time for a halo, which leads to greater black hole growth, which in turn ejects more of the circumgalactic medium mass.
This has a large impact on the later baryonic properties of the galaxy, such as its star formation history and morphology.
This explains the strong importance of $V_{\mathrm{max}}$ in our feature set for the majority of our baryonic predictors.
Interestingly, for the gas mass, the half-mass radius is instead the most important feature.
$V_{\mathrm{max}}$ is still of high importance, but at a similar level to the subhalo mass and total halo mass ($M_{\mathrm{crit},200}$).
This suggests that the size of the underlying dark matter halo is closely related to its current gas mass.
A similarly strong correlation between (HI) gas mass and size has been found observationally, though with the stellar component rather than dark matter \citep{catinella_galex_2012}.
The peculiar velocity is the least important feature for all predictors, as expected.
Interestingly, features that encode the local halo environment, such as $M_{\mathrm{crit,200}}$ and its status as a satellite or central, are also two of the least important features.
This suggests that the properties of the subhalo itself mostly determine the baryonic properties; however, this does not necessarily mean that `nature' rather than `nurture' is the dominant evolutionary process.
Instead, other subhalo features may encode environmental information, \textit{e.g.} satellites are clear outliers in the $M_{200} - M_{\mathrm{subhalo}}$ plane.
We also evaluated the effect of including local density features, $\rho(R)$.
\fig{feature_importance_all} shows the feature importance for all predictors, in the \texttt{L050AGN} and \texttt{L050AGN+Zoom} machine learning models.
None of these local features dominates the feature importance, but the density on intermediate scales ($R = [2,4] \; \mathrm{Mpc}$) has a higher importance than on the smallest and largest scales ($R = [1,8] \; \mathrm{Mpc}$, respectively).
The order of feature importance is otherwise mostly preserved.
\section{Ideas}
\begin{itemize}
\item{ Page 7 of \cite{reddick_connection_2013} shows common clustering statistics used to evaluate SHAM models. Apply the same to ML model.}
\item{Assembly Bias is the dependence of the clustering of dark matter haloes on properties other than their mass. It provides a tension with HOD models which only use halo mass as a parameter determining halo occupation. Could use ML to quantify assembly bias, and identify the parameters needed to counteract it.}
\item{ Stellar mass / Halo mass relation. \cite{mitchell_evolution_2016} look at evolution of halo-galaxy relation over time. Compare to other models, in particular abundance matching models \citep{behroozi_comprehensive_2010}.}
\item{Directly compare with \cite{simha_testing_2012}, who compare a SHAM with a hydrodynamic simulation. They investigate whether adding a parameter to the SHAM can increase the predictive accuracy. Could present the paper in the same vein.}
\item{\cite{simha_testing_2012} look at luminosity functions in a SHAM, comparing to true SPH values (modelled through SPS). Find greater spread, but still centered on true values.}
\item{\cite{simha_testing_2012} investigate whether SHAM can be used to predict rest frame UV luminosities of high-z galaxies, given that these are dependent on SFR, rather than stellar mass. }
\item{See \cite{weinberg_baryon_2008} for a similar investigation}
\end{itemize}
\section{Introduction}
\label{sec:intro}
Cosmological hydrodynamic simulations self-consistently model the evolution of baryonic and cold dark matter, and the subsequent hierarchical assembly of galaxies in a $\Lambda$CDM universe \citep{benson_galaxy_2010, somerville_physical_2015}.
A number of projects, such as \mbox{\sc{EAGLE}}\ \citep{schaye_eagle_2015}, \textsc{Illustris} \citep{vogelsberger_introducing_2014}, \textsc{Illustris-TNG} \citep{pillepich_simulating_2018}, \textsc{Mufasa} \citep{dave_mufasa:_2016} and \textsc{Simba} \citep{dave_simba:_2019} have had reasonable success at reproducing key galaxy distribution functions in the low-redshift Universe, such as the galaxy stellar mass function.
They are typically run within large periodic volumes, $\sim$100 Mpc on a side, and have mass resolutions of order $\sim 10^6 \, \mathrm{M_{\odot}}$.
This is sufficient to resolve the internal structure of galaxies, but still coarse enough to necessitate the use of subgrid models for small-scale stellar and black hole processes.
It is currently computationally infeasible to run simulations at this resolution in substantially larger volumes\footnote{The \textsc{BlueTides} simulation \citep{feng_bluetides_2016} is one of the largest high-resolution hydrodynamic simulations, with a volume 400 $h^{-1} \, \mathrm{Mpc}$ on a side, but was only run to $z = 7$.}.
This is an issue for certain science questions, since the volumes typically simulated ($\sim (100 \; \mathrm{Mpc})^3$) do not contain large numbers of rare, overdense environments, or of galaxies with unusual growth histories (e.g. starbursts).
For example, \mbox{\sc{EAGLE}}\ contains only seven clusters at $z = 0$, and these are all relatively low mass ($ M_{200,\mathrm{crit}} \,/\, \mathrm{M_{\odot}} < 10^{14.5}$).
In order to simulate rare environments that are not represented in smaller-scale periodic volumes, another approach is to use `zoom' simulations \citep{katz_hierarchical_1993,tormen_structure_1997}.
These use initial conditions selected from a much larger dark matter only (DMO) simulation, of order $\sim (1 \; \mathrm{Gpc})^3$ in volume, and then resimulate a smaller region from this volume with full hydrodynamics.
Large scale tidal forces are preserved by simulating the rest of the volume with low resolution dark matter only particles.
This approach has been used successfully to simulate cluster environments with the \mbox{\sc{EAGLE}}\ code \citep{barnes_cluster-eagle_2017,bahe_hydrangea_2017}.
However, since zooms only simulate a small region of interest they have a number of drawbacks compared to periodic simulations.
They cannot be used to predict mean distribution functions \textit{directly}, since they are, by construction, biased regions.
One means of circumventing this limitation is to use multiple zoom simulations of differing environments, and weight the relative abundance of each simulation based on its relative total matter overdensity.
This technique was first demonstrated with the \textsc{GIMIC} simulations \citep{crain_galaxies-intergalactic_2009}, and recently used in the \mbox{\sc{Flares}}\ simulations to make predictions for the abundance of galaxies during the epoch of reionisation \citep{lovell_first_2021,vijayan_first_2021}.
Another drawback is that zooms cannot be used to self-consistently predict aspects of the large scale structure, such as the clustering of galaxies, since they are by construction non-representative, small volume regions of the Universe.
Large periodic volumes are the only means of studying these kinds of spatial statistics \citep[\textit{e.g.} the BAHAMAS project;][]{mccarthy_bahamas_2017}, but these large volumes cannot currently be simulated at the high resolution necessary to model internal galaxy structures.
This limits what can be achieved with high-resolution hydrodynamic simulations.
$N$-body DMO simulations predict the distribution of matter as a result of gravitational interactions only, and are therefore significantly cheaper computationally than simulations including the gas hydrodynamics.
They are therefore less demanding to run accurately in large volumes, allowing them to be used to explore the large scale structure (LSS).
There are also a number of approaches to modelling galaxy evolution that are much simpler than running a full hydro simulation, using semi-analytic or phenomenological models to populate haloes in DMO simulations.
The host halo has a significant impact on the properties of a galaxy; haloes are the cradles within which galaxies form, and continue to influence the evolution of a galaxy throughout its lifetime \citep{wechsler_connection_2018}.
Understanding the relationship between galaxy properties and the properties of their host haloes is an important factor in understanding galaxy formation and evolution, and in the subsequent building of these kinds of galaxy evolution models.
Semi Analytic Models (SAMs) explicitly assume a close relationship between a galaxy and its host-halo.
They treat the complicated physics of galaxy formation with approximate, physically-motivated analytical models, applied \textit{ex post facto} to $N$-body dark matter only simulations \citep[for a review, see][]{baugh_primer_2006}.
The halo properties, and their merging history, provide the input parameters for such models, which have successfully reproduced a number of distribution functions simultaneously \citep[\textit{e.g.}][]{gonzalez-perez_new_2014, henriques_galaxy_2015, henriques_l-galaxies_2020}.
Subhalo Abundance Matching (SHAM) models also rely explicitly on the galaxy-halo relationship, populating dark matter haloes from simulations with rank ordered galaxies from observations.
Such models have been used to constrain the stellar mass - halo mass relation \citep[\textit{e.g.}][]{behroozi_comprehensive_2010,moster_constraints_2010,moster_galactic_2013,legrand_cosmos-ultravista_2019}, though it has been noted that the efficacy of such methods is highly dependent on the observational selection function \citep{stiskalek_dependence_2021}.
Both these approaches are capable of modelling galaxy evolution over very large volumes, allowing predictions for the clustering of galaxies as well as their evolution in rare, overdense environments.
They have also been used in combination with hydrodynamic simulations in order to highlight potential issues \citep[\textit{e.g.} for satellites where mergers lead to mass loss;][]{simha_testing_2012}, and SAMs have even been explicitly calibrated to reproduce hydrodynamic simulations \citep{neistein_hydrodynamical_2012,mitchell_how_2021}, allowing an investigation into the effects of changes to specific coefficients in the model.
Machine learning methods continue to grow in popularity in all areas of astronomy \citep[see][]{ball_data_2010,fluke_surveying_2020}, and a number of recent papers have explored how they can be used in combination with simulations to emulate galaxy properties, analogous to a SHAM or SAM model.
In a pioneering paper, \cite{xu_first_2013} used the Millennium simulation, coupled with a SAM, to predict the number of galaxies in a given halo using support vector machines and \textit{k}-nearest neighbour algorithms.
Later, \cite{kamdar_machine_2016} showed how tree based methods can be trained to learn additional properties of the baryon-halo relationship directly from an existing SAM.
They used dark matter properties from each halo as features, and baryonic properties as predictors, and trained the machine to learn the mapping between the two.
They then followed this up by applying the same technique to the \textsc{Illustris} hydrodynamic simulation \citep{kamdar_machine_2016-1}.
\cite{agarwal_painting_2018} presented a similar model applied to the \textsc{MUFASA} simulation.
Using the more recent \textsc{Illustris-TNG} simulation, \cite{jo_machine-assisted_2019} presented a similar model, and then applied this trained model to the much larger DMO MultiDark-Planck simulation.
A novel addition to their model was historical halo features (extracted from the halo merger tree), which allowed the model to broadly reproduce key distribution functions, though we note that they do not present tests in the high halo mass regime ($> 10^{14} \; \mathrm{M_{\odot}}$).
\cite{sullivan_using_2018} used artificial neural networks to better predict the baryon fraction of haloes at high-redshift using both dark matter and baryonic properties from their \textsc{Ramses-RT} radiative transfer simulations.
Most recently, a number of hybrid approaches have been presented: \cite{moews_hybrid_2020} combined the results of an equilibrium model with machine learning on the \textsc{Simba} simulations, and \cite{hearin_generating_2020} combined empirical modelling with simulation outputs from a SAM to populate large DMO volumes with galaxies.
\cite{icaza-lizaola_sparse_2021} demonstrated, using a sparse regression approach, that halo angular momentum has little impact on the stellar-halo mass relation.
Finally, a number of approaches have demonstrated predictions for baryonic properties of the cosmic web not necessarily linked to discrete subhaloes \citep[\textit{e.g.}][]{sinigaglia_bias_2020}.
In this paper we build on these previous works, by combining the results of both periodic and zoom cosmological simulations from the \mbox{\sc{EAGLE}}\ project to train a machine learning model to learn the relationship between galaxy baryonic properties and their host dark matter haloes.
Our approach is unique in two ways.
Firstly, we match subhaloes from each hydrodynamic simulation with those in a DMO counterpart (simulated from the same initial conditions), in order to avoid the effect of baryons on the host dark matter halo \citep{schaller_baryon_2015}.
This allows the model to be directly applied to an independent DMO simulation, without leading to biases in the predictions due to differences in the dark matter features.
Secondly, we address the issue of \textit{generalization error}.
Machine learning methods are a powerful set of techniques for making predictions on data that look similar to the data on which they are trained, but fail when presented with new data that lies outside of the bounds of the original training data.
This presents a problem for models trained on smaller periodic volumes, since such volumes will not contain the massive clusters present in larger DMO simulations, and hence any model trained on these volumes will not provide good predictions for galaxies in overdense environments.
We avoid this by including clusters from the \mbox{\sc{C-EAGLE}}\ project \citep{barnes_cluster-eagle_2017,bahe_hydrangea_2017} in our training set.
This allows us to apply the trained model to the much larger volume $(800 \; \mathrm{Mpc})^3$ \mbox{\sc{P-Millennium}}\ simulation \citep{baugh_galaxy_2019}, and predict distribution functions of key baryonic properties within this enormous volume, extending the dynamic range, as well as allowing predictions of clustering statistics on larger scales and for higher mass haloes.
The method is shown diagrammatically in \fig{ML_paper_figure}.
Whilst often negatively perceived as a `black box', many machine learning methods in fact provide a wealth of insights into the form of their predictive model, and the weight given to their input parameters.
This presents an opportunity to learn, in an unbiased manner, what parameters best explain the galaxy-halo connection.
We train the model with a range of dark matter properties, and explore the relative predictive power of each one on the baryonic properties.
Hydrodynamic simulations represent the cutting edge of cosmological modelling; machine learning methods could provide a practical way of extracting quantitative information on the modelled relationships.
All of these insights can be used to inform future analytic, semi-analytic and hydrodynamic model development.
This paper is laid out as follows.
In \sec{sims} we present the simulations used to train the model, as well as our algorithm for matching subhaloes between the hydro and DMO runs.
\sec{ml_methods} details the machine learning methods used, as well as our choice of features and predictors.
\sec{test_results} details our results on test sets, including the effect of including density information.
\sec{pmill_predictions} presents our results on independent DMO simulations, including the P-Millennium simulation, and \sec{feature_exploration} shows our feature exploration analysis.
Finally, in \sec{conc} we discuss our results and summarise our conclusions.
Throughout, we assume a (flat) Planck year 1 cosmology \citep[$\Omega_{\mathrm{m}} = 0.307$, $\Omega_{\Lambda} = 0.693$, $h = 0.6777$, ][]{planck_collaboration_planck_2014} and a Chabrier stellar initial mass function \citep[IMF;][]{chabrier_galactic_2003}.
\section*{Acknowledgements}
The authors wish to thank the anonymous referee for a detailed report that improved this manuscript.
We also wish to thank Daniel Farrow for providing the GAMA clustering data, John Helly for help reading the P-Millennium data, and Aswin Vijayan, Rob Crain, Joop Schaye and Scott Kay for helpful comments and discussions.
Thanks also to the \mbox{\sc{EAGLE}}\ team for their efforts in developing the \mbox{\sc{EAGLE}}\ simulation code.
We acknowledge the following open source software packages used in the analysis: \textsf{scipy} \citep{2020SciPy-NMeth}, \textsf{Astropy} \citep{robitaille_astropy:_2013} , \textsf{matplotlib} \citep{Hunter:2007}, \textsf{seaborn} \citep{waskom_seaborn_2021} and \textsf{halotools} \citep[][v0.7]{hearin_forward_2017}.
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).
The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1.
DiRAC is part of the National e-Infrastructure.
PAT acknowledges support from the STFC (grant number ST/P000525/1).
CCL acknowledges support from the Royal Society under grant RGF/EA/181016.
CMB acknowledges support from STFC grant ST/T000244/1.
MS is supported by the Netherlands Organisation for Scientific Research (NWO) through VENI grant 639.041.749.
YMB gratefully acknowledges funding from the NWO through VENI grant number 639.041.751.
\section*{Data Availability}
The public \mbox{\sc{EAGLE}}\ database can be used to access the subhalo properties for the periodic hydrodynamic simulations in this paper \citep{mcalpine_eagle_2016}.
Other data underlying this article will be shared on reasonable request to the corresponding author.
The code used to train and analyse the models, and produce all plots, is made available at \href{https://github.com/christopherlovell/ML-cosmo}{github.com/christopherlovell/ML-cosmo}.
\bibliographystyle{mnras}
\section{Machine Learning Methods}
\label{sec:ml_methods}
\subsection{Extremely Randomised Trees}
\label{sec:ert}
We used the \textsc{Scikit-Learn} \citep{pedregosa_scikit-learn:_2011} implementation of Extremely Randomised Trees \citep[ERT;][]{geurts_extremely_2006}, a tree based ensemble method.
ERT is demonstrably effective in this domain compared to other popular machine learning methods \citep{kamdar_machine_2016,jo_machine-assisted_2019}.
To understand what makes ERT such an effective learner, first consider a single decision tree.
Decision trees are typically constructed top down, numerically evaluating all splits for each feature using a cost function.
The best split (lowest cost) is chosen at each level.
Some of the issues seen with Decision Trees, particularly overfitting, can be alleviated by ensembling many different trees trained on subsets of the data.
Random Forests extend this idea by, at each split, randomly limiting the feature space from which splits can be made (within individual trees not all of the data is used, but over the whole ensemble it is).
This increases the variance between individual trees, i.e. their diversity, by preventing strong features from dominating every tree.
ERT also introduces another layer of randomness; each split is chosen at random from the range of values available for each feature.
Bad splits are still rejected, but the extra layer of randomness encourages exploration of the full feature space, creating more `weak' learners for use in the ensemble.
At each iteration, only the best split from the subset of features is chosen, and the iterative procedure continues until a leaf node condition is reached.
Within ERT the mean squared error (MSE) is used to evaluate each split.
To quantify the effective fit of each model, and to discriminate between models, we used both MSE and the Pearson correlation coefficient ($\rho$), defined below:
\begin{equation}
\rho = \frac{\mathrm{cov} (X_{\mathrm{predicted}}, X_{\mathrm{test}})}{\sigma_{X_{\mathrm{predicted}}} \, \sigma_{X_{\mathrm{test}}}} \;\;.
\end{equation}
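For concreteness, a minimal sketch of fitting an ERT regressor and computing these two metrics with \textsc{Scikit-Learn} is shown below; the data and hyperparameters are placeholders for illustration only, not those used for our final models.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the real features and predictors come from
# the matched subhalo catalogues.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = X[:, 3] + 0.1 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

ert = ExtraTreesRegressor(n_estimators=500, random_state=0)
ert.fit(X_train, y_train)
y_pred = ert.predict(X_test)

mse = np.mean((y_pred - y_test) ** 2)
rho = np.cov(y_pred, y_test)[0, 1] / (np.std(y_pred, ddof=1) *
                                      np.std(y_test, ddof=1))
\end{verbatim}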
\subsection{Features \& Predictors}
\label{sec:features_predictors}
\begin{figure*}
\includegraphics[width=\textwidth]{images/joint_plots_L0050N0752.pdf}
\caption{Predicted (from the machine learning model) against the true baryonic properties on the test set from the \texttt{L050AGN+Zoom} simulation set.
Clockwise from top left: stellar mass, gas mass, black hole mass, star formation rate, stellar metallicity, and stellar velocity dispersion.
The vertical bar to the left of each panel, separated from the rest of the distribution, corresponds to galaxies with a true value of zero for the corresponding predictor (see \sec{features_predictors}).
The fraction of galaxies whose predicted property is within 0.2 dex of the true value is quoted at the top left of each panel.
}
\label{fig:joint_plots_L0050N0752}
\end{figure*}
We chose our features from the properties of the DMO haloes and their host FOF haloes.
Some features are expected to be of greater importance for predicting certain baryonic properties; we explore this in \sec{feature_exploration}.
The selected subhalo features are as follows: total subhalo mass ($M_{\mathrm{sub}}$), half mass radius ($R_{1/2}$), peculiar velocity ($v$), maximum circular velocity ($v_{\mathrm{max}}$), radius of maximum circular velocity ($R_{v_{\mathrm{max}}}$), potential energy ($E_{\mathrm{p}}$), FOF group mass ($M_{\mathrm{crit,200}}$), and finally a boolean feature that specifies whether the subhalo is a satellite or a central.
Since we wish to evaluate the impact of environment we also include additional features to quantify this.
As a simple measure of environment we calculated the density of dark matter within spheres centred on a given subhalo in the DMO simulation.
We ran a periodic KD-tree search for neighbouring particles, then calculated the density on different scales, $R = [1,2,4,8] \, \mathrm{Mpc}$, to quantify both the small and large scale environment.
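A minimal sketch of this calculation, using a periodic KD-tree from \textsc{scipy}, is given below; the box size, particle mass and positions are placeholder values.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

boxsize = 50.0                 # Mpc (placeholder)
m_dm = 9.7e6                   # DM particle mass in M_sun
rng = np.random.default_rng(0)
dm_pos = rng.uniform(0, boxsize, size=(100000, 3))   # DMO particles
sub_pos = rng.uniform(0, boxsize, size=(500, 3))     # subhalo centres

tree = cKDTree(dm_pos, boxsize=boxsize)   # periodic boundary conditions
density = {}
for R in [1, 2, 4, 8]:                    # Mpc
    n = tree.query_ball_point(sub_pos, r=R, return_length=True)
    density[R] = n * m_dm / (4.0 / 3.0 * np.pi * R**3)   # M_sun / Mpc^3
\end{verbatim}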
We indicate in the text where these additional features are included in a given training set.
More dark matter features are available in the subfind catalogues, and additional features could be calculated from the particle information (such as the large scale tidal torque), but we limited our chosen features to those above as they are present in both the \mbox{\sc{EAGLE}}\ and \mbox{\sc{P-Millennium}}\ catalogues.
Combinations of features may also lead to better predictive accuracy; we will explore this systematically in future work.
We predict six baryonic properties: the stellar mass, gas mass, black hole mass, stellar velocity dispersion, star formation rate and stellar metallicity.
The stellar mass and gas mass are taken from the central 30 kpc of each subhalo to allow better comparison with observations.
We transform all of these predictors into log space, which has been shown to improve the prediction accuracy due to the typically large dynamic range of cosmological properties \citep{jo_machine-assisted_2019}.
If the value is zero, we set it to some small value, determined by the resolution limit where appropriate,
\begin{align*}
M_{\star} \,/\, \mathrm{M_{\odot}} &\geqslant 1 \times 10^5 \\
M_{\mathrm{gas}} \,/\, \mathrm{M_{\odot}} &\geqslant 5 \times 10^5 \\
M_{\bullet} \,/\, \mathrm{M_{\odot}} &\geqslant 2 \times 10^4 \\
\mathrm{SFR} \,/\, \mathrm{M_{\odot} \; yr^{-1}} &\geqslant 1 \times 10^{-4} \\
\mathrm{Z_{*}} &\geqslant 5 \times 10^{-7} \\
v_{\star,\mathrm{disp}} \,/\, \mathrm{km \; s^{-1}} &\geqslant 3 \;\;.
\end{align*}
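In code, this transformation simply floors each predictor at the values listed above before taking the base-10 logarithm, e.g.:
\begin{verbatim}
import numpy as np

floors = {'Mstar': 1e5, 'Mgas': 5e5, 'MBH': 2e4,
          'SFR': 1e-4, 'Zstar': 5e-7, 'vdisp': 3.0}

def to_log_space(values, key):
    # Floor zero (or sub-floor) values before moving to log space
    return np.log10(np.maximum(values, floors[key]))

mstar = np.array([0.0, 3e7, 1.2e10])     # example stellar masses
print(to_log_space(mstar, 'Mstar'))      # [5.0, 7.48, 10.08]
\end{verbatim}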
\subsection{Training}
\label{sec:training}
We train our model on all haloes with a dark matter mass (as measured in the \textsc{DMO} simulation) $M_{\mathrm{sub}}\,/\, \mathrm{M_{\odot}} \geqslant 1 \times 10^{10}$.
The completeness of our selection with respect to stellar mass is shown in detail in \app{completeness}.
By applying our selection to the dark matter properties we can use the same thresholds when applying the model to independent dark matter only simulations.
We split our data into training and test sets, 80-20\% respectively.
All hyperparameter optimisation, parameter scaling, and training is done on the training set, and only final model assessment is performed on the test set.
For each feature set, the hyperparameters of the ERT instance are chosen through an exhaustive grid search.
For each set of hyperparameters, \textit{k}-fold cross validation is performed \citep{stone_cross-validatory_1974} with $k = 10$ folds, and the coefficient of determination, $R^{2}$, is used to discriminate between models,
\begin{equation}
R^{2} = 1 - \frac{\sum_{i} (X^{i}_{\mathrm{test}} - X^{i}_{\mathrm{predicted}})^{2}}{\sum_{i} (X^{i}_{\mathrm{test}} - X_{\mathrm{mean,train}})^{2}} \;\;.
\end{equation}
We standardise all of our features and predictors by subtracting the mean and scaling to unit variance.
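The sketch below illustrates this training procedure with \textsc{Scikit-Learn}; the hyperparameter grid and input arrays are placeholders rather than the values used for the final models.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = rng.normal(size=(2000, 8)), rng.normal(size=2000)   # placeholders

# 80 / 20 train-test split; scaling and model selection use training data only
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)     # zero mean, unit variance
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Exhaustive grid search, 10-fold cross validation, scored on R^2
param_grid = {'n_estimators': [100, 300],
              'min_samples_split': [2, 5, 10]}
grid = GridSearchCV(ExtraTreesRegressor(random_state=0),
                    param_grid, cv=10, scoring='r2')
grid.fit(X_train_s, y_train)
print(grid.best_params_, grid.score(X_test_s, y_test))
\end{verbatim}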
\subsection{The Galaxy Stellar Mass Function}
\begin{figure*}
\includegraphics[width=\textwidth]{images/pmillennium_dfs.png}
\caption{The black hole -- stellar mass relation (left), stellar mass -- metallicity relation (middle) and star forming sequence (right).
Observational constraints for each relation are shown, from \protect\cite{mcconnell_revisiting_2013}, \protect\cite{gallazzi_ages_2005} and \protect\cite{bauer_galaxy_2013}, respectively.
The relation in the \texttt{L100Ref} (orange), \texttt{L050AGN} (green) and \texttt{L050AGN+Zoom} (red) simulation sets is shown, as well as the predicted relation from the \texttt{L050AGN+Zoom} machine learning model applied to the P-Millennium simulation (blue).
The median is given by the solid line in each case, and the $16^{\mathrm{th}}-84^{\mathrm{th}}$ percentile range is shown by the coloured shaded region.
The dashed black line in the right panel shows the cut used for passive galaxies, $\mathrm{sSFR} < 10^{-11} \, \mathrm{yr^{-1}}$.
}
\label{fig:pmillennium_dfs}
\end{figure*}
The left panel of \fig{gsmf_pmillennium} shows the \texttt{L050AGN} model run on the \texttt{L100Ref} DMO simulation.
We compare to the GSMF from the hydrodynamic \texttt{L100Ref} simulation, and it is clear that the high mass end of the GSMF is not reproduced.
Whilst there are parameter differences between the models, it is not expected that the AGNdT9 model would fail to produce any $10^{12} \, \mathrm{M_{\odot}}$ galaxies in a $(100 \, \mathrm{Mpc})^3$ volume.
In fact, the predictions broadly follow the model used for training, \texttt{L050AGN}, though they underestimate the abundance of galaxies at the high-mass end ($> 10^{11} \, \mathrm{M_{\odot}}$).
This additional underestimate is likely the result of a lack of training data at the high-mass end, due to the low number of high-mass galaxies in the \texttt{L050AGN} volume.
However, if we use the \texttt{L050AGN+Zoom} model we get much better agreement with the \texttt{L100Ref} simulation at the high-mass end.
This demonstrates the effect of including the \mbox{\sc{C-EAGLE}}\ zoom regions; the model is able to learn the baryonic properties of galaxies in the cluster regions, which are not present in \texttt{L050AGN}.
Predictions at lower stellar masses are also consistent with both \texttt{L100Ref} and \texttt{L050AGN} down to $\sim 10^8 \; \mathrm{M_{\odot}}$, the approximate resolution limit of the original simulations \citep{schaye_eagle_2015}, and where our predictions are approximately complete (see \app{completeness}).
We now turn our attention to the much larger \mbox{\sc{P-Millennium}}\ DMO simulation.
The right panel of \fig{gsmf_pmillennium} shows predictions for the \texttt{L050AGN} and \texttt{L050AGN+Zoom} models on this volume, and whilst the former still completely misses the high mass end, the model including zooms is able to predict stellar masses out to $\sim 10^{12} \, \mathrm{M_{\odot}}$.
This extends the dynamic range of the GSMF beyond that accessible to the \texttt{L100Ref} hydrodynamic simulation, and improves the statistics significantly.
This is a significant achievement of the model -- it is able to successfully extend the predictive range beyond that achievable with periodic hydrodynamic simulations.
At lower stellar masses the predictions are consistent with both \texttt{L100Ref} and \texttt{L050AGN}.
The predictions at the high mass end are also in broad agreement with the observational constraints from \cite{baldry_galaxy_2012}.
The P-Millennium simulation is lower resolution than those used for training, which may impact the predicted properties of galaxies, particularly those close to the resolution limit.
To test the impact of resolution, we applied the \texttt{L050AGN+Zoom} model to a lower resolution $(100 \, \mathrm{Mpc})^3$ DMO run, with 8 times fewer particles.
The predictions for the GSMF were identical, which confirms that differing resolution has no impact on the predicted properties; as long as the haloes are resolved, the halo features used for prediction are robust.
\subsection{The black hole -- stellar mass relation}
We have demonstrated how the model is able to predict stellar masses with high accuracy, and produce a GSMF for the \mbox{\sc{P-Millennium}}\ simulation volume.
We now explore other key baryonic distribution functions.
\fig{pmillennium_dfs} shows the black hole -- stellar mass relation, the stellar mass -- metallicity relation, and the star-forming sequence.
Each panel shows the relation in the \texttt{L100Ref}, \texttt{L050AGN} \& \texttt{L050AGN+Zoom} simulations, as well as the predicted relation for our \texttt{L050AGN+Zoom} model, with fiducial feature set, run on the \mbox{\sc{P-Millennium}}\ simulation.
The black hole -- stellar mass relation shows a rapid increase in the black hole mass above $M_{\star} \sim 10^{10} \; \mathrm{M_{\odot}}$, though the exact mass at which the relation turns upwards is dependent on the simulation.
In \texttt{L050AGN+Zoom} the increase is at a higher mass compared to the two periodic simulations.
This is not due to any parameter differences, since \texttt{L050AGN} has identical parameters, but may be due to the cluster environment delaying black hole accretion by starving the central regions of a galaxy of gas.
Though \cite{van_son_galaxies_2019} note an excess of `black hole monster galaxies' in cluster environments due to tidal stripping, this is a sub-dominant population compared to the main relation, so it does not increase the normalisation of the black hole -- stellar mass relation in these environments.
The model predictions lie between the periodic and zoom relations, which is perhaps expected since both environments are providing training data from which the machine is making its predictions.
Overall the relation is predicted remarkably well, and the predictions extend the dynamic range to higher stellar and black hole masses than those achievable in \texttt{L100Ref} \& \texttt{L050AGN}.
At these higher masses the model is in good agreement with the observational results of \cite{mcconnell_revisiting_2013}, though the scatter at fixed stellar mass is still underpredicted \citep[as seen in][]{schaye_eagle_2015}.
\subsection{The stellar mass -- metallicity relation}
Predictions from the model for the stellar mass -- metallicity relation show similar behaviour.
The model predictions lie between the relations from the periodic and zoom simulation sets at high stellar masses ($M_{\star} \,/\, \mathrm{M_{\odot}} > 10^{10}$), but closely follow the predictions below this, except at the very lowest stellar masses.
The scatter in both these relations is much tighter for the model predictions than in the original simulation sets.
This is a reflection of the deterministic nature of the machine learning prediction, combined with the relatively limited feature set, which has been discussed in a number of previous works \citep[\textit{e.g.}][]{kamdar_machine_2016-1,moews_hybrid_2020}.
Historical halo features, such as the formation and assembly time, may help to increase the diversity of baryonic properties at fixed stellar mass.
However, the predictions still lie within the uncertainties on observational constraints from \cite{gallazzi_ages_2005} at all stellar masses.
\subsection{The star forming sequence}
Finally, the right panel of \fig{pmillennium_dfs} shows the star-forming sequence, excluding passive galaxies ($\mathrm{sSFR} < 10^{-11} \, \mathrm{yr^{-1}}$).
As shown in \fig{violins_SFR_L0050N0752} the model tends to underpredict SFRs of star forming galaxies, and this is reflected in the star-forming sequence, where the normalisation at $M_{*} \,/\, M_{\odot} = 10^{11}$ is lower than that in the original simulation sets, by up to 0.8 dex compared to \texttt{L050AGN+Zoom}.
The scatter at fixed stellar mass is comparable to the simulation sets ($\pm 0.25 \, \mathrm{dex}$), though this may be partly due to truly quiescent galaxies in the simulation sets that have residual star formation when predicted in the model.
In general, however, the star forming sequence is broadly reproduced, is in good agreement both above and below the characteristic mass, and lies within the uncertainties on observational constraints from \cite{bauer_galaxy_2013}.
\section{Predicting Baryonic Properties from Dark Matter Properties}
\label{sec:test_results}
\begin{figure}
\includegraphics[width=\columnwidth]{images/violins_L0050N0752.png}
\caption{Violin plots showing the distribution of predicted baryonic properties (green) from the machine learning model against the true values (orange) in the \texttt{L050AGN+Zoom} simulation set.
Dashed and dotted lines show the median and upper/lower quartiles of each distribution, respectively.
Each distribution is a kernel density estimate of the true underlying distribution, which may smooth some features, particularly where the distribution is discontinuous (\textit{e.g.} galaxies with zero gas mass).
Clockwise from top left: stellar mass, black hole mass, stellar velocity dispersion, and stellar metallicity.
}
\label{fig:violins_L0050N0752}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/violins_SFR_L0050N0752.png}
\caption{Violin plots showing the distribution of predicted SFR (green) from the machine learning model against the true SFR (orange) in the \texttt{L050AGN+Zoom} simulation set.
The left plot shows the total distribution, which is heavily skewed toward quiescent galaxies, since the sample is dominated by low mass galaxies that are artificially quenched.
The central plot shows the distribution ignoring those galaxies with zero SFR in the test set.
The right plot shows only the \textit{predicted} SFR for all galaxies with zero SFR in the test set (note that this violin is symmetric as only a single property is plotted).
}
\label{fig:violins_SFR_L0050N0752}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/violins_gas_L0050N0752.png}
\caption{As for \fig{violins_SFR_L0050N0752}, but showing the distribution of total gas mass.
}
\label{fig:violins_gas_L0050N0752}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{images/fit_comparison.pdf}
\caption{Comparison of fit accuracy described by the Pearson correlation coefficient ($\rho_{\mathrm{pearson}}$), measured on the test set, against the number of haloes in the training set, for each of the baryonic predictors.
The \texttt{L100Ref}, \texttt{L050AGN} and \texttt{L050AGN+Zoom} simulation sets are shown in orange, blue and green, respectively, where bullet markers show results with the fiducial feature set, and star markers show results including all local density features (see \sec{density}).
}
\label{fig:fit_comparison}
\end{figure*}
We first present results for the \texttt{L050AGN+ZoomAGN} model, with the fiducial feature set (excluding environmental features).
\fig{joint_plots_L0050N0752} shows the predicted against the true value for the six baryonic properties in the test set.
Figures \ref{fig:violins_L0050N0752}, \ref{fig:violins_SFR_L0050N0752} \& \ref{fig:violins_gas_L0050N0752} compare the predicted and true distribution of these properties in the test set as violin plots\footnote{Bin width of the kernel density estimate is calculated using Scott's rule \citep{scott_optimal_1979}.}.
Together, these figures show how accurate the model predictions are, and how well the cosmic distribution is reproduced.
We also quote the fraction of galaxies where the predicted value is within 0.2 dex of the true value; for the stellar velocity dispersion this is as high as 97\%, but even for the gas mass, which has the lowest prediction accuracy, this is still close to two thirds of the sample (65\%).
This is comparable to the accuracy achieved in \cite{neistein_hydrodynamical_2012} in their SAM trained on a hydro simulation, though we push our predictions to lower stellar masses.
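For reference, this statistic is simply the fraction of test-set galaxies whose log-space prediction lies within 0.2 dex of the true value, e.g.:
\begin{verbatim}
import numpy as np

def frac_within_dex(log_pred, log_true, tol=0.2):
    # Fraction of galaxies predicted to within `tol` dex of the true value
    return np.mean(np.abs(log_pred - log_true) < tol)
\end{verbatim}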
To qualitatively demonstrate the accuracy of the ERT model, we compare to predictions utilising a single feature (subhalo mass or $V_{\mathrm{max}}$), analogous to a SHAM approach.
For this single feature we fit an isotonic regression model\footnote{see \href{https://scikit-learn.org/stable/modules/generated/sklearn.isotonic.IsotonicRegression.html}{here} for details on the Isotonic regression model employed.} between the feature and each predictor (for the whole dataset, not just training).
This model ensures monotonicity, and broadly fits each predictor well considering the simplicity of the model.
We again quote the fraction of galaxies where the predicted value from this simple relation is within 0.2 dex of the true value.
The ERT model shows greater accuracy for all predictors compared to the Isotonic model, whether subhalo mass or $V_{\mathrm{max}}$ is used.
This is particularly the case for the gas mass (49\% where $V_{\mathrm{max}}$ is used, compared to 65\% for the ERT model).
Full details on the Isotonic fits, and comparison to the predicted GSMF and projected correlation function, are provided in \app{shamcomp}.
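A sketch of this single-feature baseline is given below; the $V_{\mathrm{max}}$ values and mock stellar masses are generated purely for illustration.
\begin{verbatim}
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
vmax = rng.uniform(30, 300, size=5000)               # km/s (mock values)
log_mstar = (8 + 2.5 * np.log10(vmax / 100)
             + rng.normal(0, 0.3, size=5000))        # mock relation

iso = IsotonicRegression(out_of_bounds='clip')       # enforces monotonicity
iso.fit(vmax, log_mstar)
log_mstar_pred = iso.predict(vmax)

within_02dex = np.mean(np.abs(log_mstar_pred - log_mstar) < 0.2)
\end{verbatim}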
The model predicts both the stellar mass and stellar velocity dispersion remarkably well; however, there is more structure in the joint plots for the other properties.
Predictions for the stellar metallicity show a greater spread than the other values, perhaps unsurprisingly given its known complex dependence on the star formation history; however, the violin plot shows that the overall distribution is recovered.
Black hole masses in \mbox{\sc{EAGLE}}\ are dominated by newly formed black holes at the seed mass ($10^{5} \; M_{\odot}$), as more haloes reach the mass-threshold for black hole seeding.
The model is able to capture these, and does a reasonable job of predicting the masses of more massive black holes.
The relations for the total gas mass and SFR are more complicated.
There are a large number of galaxies with zero star formation, and the right panel of \fig{violins_SFR_L0050N0752} shows that the model predicts a range of SFRs for these galaxies, though the majority are limited to $< 3 \times 10^{-3} \; \mathrm{M_{\odot} \, yr^{-1}}$.
To see how well the model predicts the distribution of star forming galaxies, we show in the middle panel of \fig{violins_SFR_L0050N0752} the distribution of SFR ignoring quiescent galaxies.
It is clear that the model underpredicts the SFR for most galaxies.
This may be due to the quiescent galaxies biasing the predictions for other haloes, as well as ERT predicting a smooth distribution of SFRs when a discontinuous distribution would be more appropriate.
The SFR is also known to be more strongly dependent on the assembly history; including features that encode this may lead to better predictions, which we discuss in \sec{conc}.
\fig{violins_gas_L0050N0752} shows that, as for the SFR, there is a reasonably tight relation for the total gas mass, except where galaxies have zero gas mass.
These galaxies make up a large proportion of all subhaloes, and the model fails to predict low gas masses for these galaxies, instead predicting a wider range of gas masses, as can be seen in the right panel of \fig{violins_gas_L0050N0752}.
This suggests that the physics that causes the evacuation of gas from low mass haloes is not encoded in the provided dark matter parameters.
However, the overall distribution, when renormalised, better reproduces that seen in the test set compared to the SFR.
To demonstrate the impact of adding the \mbox{\sc{C-EAGLE}}\ clusters to our training set, we compare the prediction accuracy against models trained only on the periodic volumes.
\fig{fit_comparison} shows the Pearson correlation coefficient for the \texttt{L100Ref}, \texttt{L050AGN} and \texttt{L050AGN+Zoom} models.
Adding the zoom regions leads to a large increase in the training set size, but this has no significant positive effect on the predictive accuracy for any of the predictors.
In fact, for the gas mass, black hole mass and stellar metallicity the predictive accuracy is actually worse.
This may be due to the unique impact of the cluster environment on these three particular baryonic properties of galaxies, for example through the effect of ram pressure stripping and fly-by interactions.
So while there is more data for the machine to learn from, the relationship represented is more complicated than that present in the periodic volumes, and therefore more difficult to predict.
We stress that in order to make predictions for larger boxes, it is essential to include these environments in the training set, and that a lower predictive accuracy compared to the periodic volumes is not necessarily indicative of a poorer model.
This does not mean that additional training data cannot improve the predictive accuracy: $\rho_{\mathrm{pearson}}$ calculated for \texttt{L100Ref} is higher than for \texttt{L050AGN} for all baryonic properties, showing the advantage of a larger training set where the underlying distribution of galaxy properties is broadly similar.
\subsection{The effect of including local density in the feature set}
\label{sec:density}
We add four features for the local density calculated within spheres with radii $R = [1,2,4,8] \, \mathrm{Mpc}$.
\fig{fit_comparison} also shows the impact of including these additional features on the predictive accuracy for the \texttt{L050AGN}, \texttt{L100Ref} and \texttt{L050AGN+Zoom} simulation sets.
Including density information has a minor positive impact on the predictive accuracy for all predictors in almost all simulation sets, though the quantitative impact is small in most cases.
The largest impact is seen for the gas mass, with an increase in $\rho_{\mathrm{pearson}}$ of approximately $+0.05$ for the periodic simulation sets, and $+0.07$ for the \texttt{L050AGN+Zoom} simulation set.
This fits with the hypothesis suggested above that environmental effects operating in clusters lead to poor predictions for the gas mass when environmental features are not included.
Such features are important for accurately predicting specific baryonic properties.
In summary, our model is capable of predicting a range of baryonic properties with reasonable accuracy, and successfully reproduces their cosmic distributions.
We now show how the model can be applied to independent, larger DMO volumes, and the impact of including the zoom regions on the predicted relations.
\section{Comparing with Semi-Analytic Models}
Figure \ref{fig:eagle_trained_gsmf} shows the galaxy stellar mass function for the \cite{henriques_galaxy_2015} SAM and our model. The model successfully reproduces a similar stellar mass function across the full mass range.
It is important to reiterate that the SAM and our prediction are completely independent, and that the prediction is based on a very different interpretation of the galaxy physics than the SAM. Any discrepancies between the two can be attributed either to this fundamental difference, or to the model's failure to reproduce the relationship.
\begin{figure}
\includegraphics[width=\columnwidth]{images/gsmf_henriques_fit.png}
\caption{Galaxy stellar mass function for the matched \mbox{\sc{EAGLE}}\ test set against the predicted stellar mass. Both mass functions are smoothed with a Gaussian of standard deviation 0.1 dex.}
\label{fig:eagle_trained_gsmf}
\end{figure}
\section{Simulations}
\label{sec:sims}
\subsection{The \mbox{\sc{EAGLE}}\ \& \mbox{\sc{C-EAGLE}}\ simulations}
\label{sec:eagle_sim}
\begin{figure*}
\includegraphics[width=\textwidth]{images/ML_paper_figure.pdf}
\caption{Diagram showing the simulations (approximately to scale) used throughout this work, and the features and predictors used for training the machine learning model.
At the top are the \mbox{\sc{C-EAGLE}}\ zoom simulations; each image shows the distribution of dark matter (left) in the DMO simulations, and the gas (right) in the full hydro simulation, centred on the centre of potential of the most massive FOF group in each simulation, within a radius $r = 15 \times R_{\mathrm{crit},200}$.
Below these are the cubic periodic \texttt{L100Ref} and \texttt{L050AGN} simulations, again showing the dark matter (left) and gas (right).
In the centre are tables detailing the features from the DMO simulations (left) and the predictors from the hydro simulations (right).
At the bottom is a cropped image of the dark matter distribution in the P-Millennium simulation, to which the trained machine learning model is applied to predict the baryonic properties of its haloes.
}
\label{fig:ML_paper_figure}
\end{figure*}
\begin{table*}
\begin{tabular}{ccccccccc}
Simulation & Prefix & Volume ($\mathrm{Mpc}^3$) & $N_{\mathrm{halo}} (> 10^{10} \, \mathrm{M_{\odot}})$ & $N_{\mathrm{matched}}$ & $N_{\mathrm{train}}$ & $N_{\mathrm{test}}$ & $C_{\mathrm{visc}}$ & $\Delta T$ \\
\hline
Reference L0100N1504 & \texttt{L100Ref} & $100^{3}$ & $88\,173$ & $86\,861$ & $69\,615$ & $17\,246$ & $2 \pi$ & $10^{8.5}$ \\
AGNdT9 L0050N0752 & \texttt{L050AGN} & $50^{3}$ & $11\,423$ & $11\,265$ & $9\,031$ & $2\,231$ & $2 \pi \times 10^2$ & $10^{9}$ \\
\mbox{\sc{C-EAGLE}}\ & \texttt{ZoomAGN} & $202.7^{3}$ & $373\,275$ & $364\,408$ & - & - & $2 \pi \times 10^2$ & $10^{9}$ \\
\mbox{\sc{C-EAGLE}}\ + L050AGN & \texttt{L050AGN+ZoomAGN} & $203.7^{3}$ & $384\,698$ & $375\,673$ & $300\,770$ & $74\,903$ & $2 \pi \times 10^2$ & $10^{9}$ \\
\end{tabular}
\caption{Details on each simulation set. The columns provide (1) the name or description of the simulation set, (2) the prefix used throughout this paper, (3) the total volume, (4) the number of subhaloes with mass $> 10^{10} \, \mathrm{M_{\odot}}$, (5) the number of those haloes matched between the hydro and DMO simulations (see \sec{matching}), (6) the number of subhaloes in the training set, (7) the number of subhaloes in the test set, (8) the value of the viscosity parameter, and (9) the value of the $\Delta T$ parameter.}
\label{tab:simulations}
\end{table*}
\begin{figure*}
\includegraphics[width=\textwidth]{images/match_statistics.png}
\caption{Fraction of subhaloes from each DMO simulation matched with a counterpart in the hydro simulation, binned by total subhalo mass. \texttt{L050AGN} and \texttt{L100Ref} are shown in blue and orange, respectively, and each zoom from \texttt{ZoomAGN} is shown as a black dashed line.
}
\label{fig:match_statistics}
\end{figure*}
The \mbox{\sc{EAGLE}}\ project is a suite of cosmological hydrodynamic simulations \citep{schaye_eagle_2015,crain_eagle_2015} employing subgrid models for feedback from stars and AGN.
\mbox{\sc{EAGLE}}\ has been shown to accurately reproduce many observed relations, including the galaxy stellar mass function, galaxy sizes, quenched fractions, gas content and black hole masses \citep{trayford_colours_2015,trayford_optical_2017,lagos_molecular_2015,bahe_distribution_2016,crain_eagle_2017,furlong_size_2017,mcalpine_link_2017} at a range of redshifts \citep[\textit{e.g.}][]{furlong_evolution_2015}.
A number of different resolutions and volumes make up the \mbox{\sc{EAGLE}}\ simulation suite.
In this work we use the `fiducial' resolution simulations, with gas particle mass $m_{g} = 1.8 \times 10^6 \; \mathrm{M_{\odot}}$, dark matter particle mass $9.7 \times 10^{6} \; M_{\odot}$, and a physical softening length of $0.7 \; \mathrm{kpc}$.
Haloes in the simulation are first identified through a Friends-Of-Friends (FOF) halo finder, and then split into self-bound child objects with SUBFIND \citep{dolag_substructures_2009}.
Cluster-Eagle \citep[or \mbox{\sc{C-EAGLE}},][]{barnes_cluster-eagle_2017,bahe_hydrangea_2017}, uses the \mbox{\sc{EAGLE}}\ model to simulate cluster environments using the `zoom' re-simulation technique \citep{katz_hierarchical_1993, tormen_structure_1997}.
Thirty clusters at $z=0$ (shown in \fig{ML_paper_figure}), with a range of halo masses ($14 < \mathrm{log_{10}(M_{200} \,/\, M_{\odot})} < 15.51$), are selected from a $(3.2 \, \mathrm{Gpc})^3$ `parent' DMO simulation \citep{barnes_redshift_2017}.
The clusters are resimulated at an identical resolution to the fiducial periodic \mbox{\sc{EAGLE}}\ simulation.
Full details on the selected clusters are provided in \cite{barnes_cluster-eagle_2017}.
\mbox{\sc{C-EAGLE}}\ uses the AGNdT9 calibration of the \mbox{\sc{EAGLE}}\ model \citep{schaye_eagle_2015}, which, compared to the fiducial Reference model, uses a higher value for $C_{\mathrm{visc}}$, which controls the sensitivity of the BH accretion rate to the angular momentum of the gas, and a higher gas temperature increase from AGN feedback, $\Delta T$.
A larger $\Delta T$ leads to fewer, more energetic feedback events, whereas a lower $\Delta T$ leads to more continual heating.
\cite{schaye_eagle_2015} show that AGNdT9 predicts X-ray luminosities and hot gas fractions in galaxy groups in better agreement with observational constraints, though with some discrepancies on cluster scales \citep{barnes_cluster-eagle_2017}.
\tab{simulations} details the simulations used in this work, and any combinations.
\texttt{L100Ref} is a $(100 \, \mathrm{Mpc})^3$ periodic volume (shown in \fig{ML_paper_figure}) run with the Reference model parameters; the hydro simulation contains $1504^3$ dark matter and $1504^3$ gas particles.
\texttt{L050AGN} is a smaller, $(50 \, \mathrm{Mpc})^3$ periodic volume (shown in \fig{ML_paper_figure}) run at the same resolution as \texttt{L100Ref} but with the AGNdT9 model parameters; it contains $752^3$ dark matter and $752^3$ gas particles.
\texttt{L050AGN+ZoomAGN} is a combination of \texttt{L050AGN} with the zoom cluster regions from \mbox{\sc{C-EAGLE}}.
We also match with DMO counterparts to each of these simulations, run using the same initial conditions; the match is described in \sec{matching}.
We use the snapshot corresponding to $z = 0.101$ in all simulations.
Throughout the rest of this text, whenever we refer to a \textit{model} we are referring to a \textit{machine learning} model (unless otherwise stated) trained on the matched hydro-DMO simulations indicated in the name.
The simulations are all referred to explicitly as \textit{simulations} to distinguish them from the machine learning models.
\subsection{Matching between hydrodynamic and dark matter only simulations}
\label{sec:matching}
Including baryons can lead to significant alterations to the underlying dark matter haloes \citep{weinberg_baryon_2008}.
For example, \cite{schaller_baryon_2015} demonstrate that, in the \mbox{\sc{EAGLE}}\ simulation, the halo centres are more `cuspy' in the presence of stars.
In order to apply our trained model to DMO simulations it is necessary to avoid these effects, as they will bias any predictions based on the dark matter features.
We achieve this by matching subhaloes in each hydrodynamic simulation to their counterparts in DMO simulations, and use the properties of the matched haloes in the DMO simulation as our features.
The galaxy properties that a given halo would have if hydrodynamics had been included are then predicted.
Each DMO simulation is run from the same initial conditions, but with all of the mass in collisionless dark matter particles rather than split between baryonic and dark matter species.
Aside from this, all cosmological and numerical parameters are identical.
We perform the match using the approach of \cite{schaller_baryon_2015}.
We first find the 50 most bound dark matter particles in a subhalo in the hydro simulation, and search for haloes in the \textsc{DMO} simulation that have 50\% or more of these same particles (matched on particle ID).
We then perform the same match in reverse (subhaloes in the DMO matched with subhaloes in the hydro simulation).
Those haloes that match bijectively are linked.
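The logic of the bijective match is sketched below; the dictionaries mapping subhalo indices to their most-bound and full particle-ID sets are assumed inputs, shown here with toy values and far fewer than 50 particles.
\begin{verbatim}
def match_direction(src_most_bound, dst_particles, frac=0.5):
    # For each source subhalo, find the destination subhalo containing
    # at least `frac` of its most-bound particle IDs
    links = {}
    for i, bound in src_most_bound.items():
        bound = set(bound)
        best, n_best = None, 0
        for j, pids in dst_particles.items():
            n = len(bound & pids)
            if n > n_best:
                best, n_best = j, n
        if best is not None and n_best >= frac * len(bound):
            links[i] = best
    return links

# Toy inputs (the real catalogues use the 50 most-bound DM particles)
hydro_mb = {0: [1, 2, 3, 4], 1: [10, 11, 12, 13]}
hydro_ids = {0: {1, 2, 3, 4, 5}, 1: {10, 11, 12, 13, 14}}
dmo_mb = {0: [1, 2, 3, 7], 1: [10, 11, 12, 20]}
dmo_ids = {0: {1, 2, 3, 7, 8}, 1: {10, 11, 12, 20, 21}}

fwd = match_direction(hydro_mb, dmo_ids)   # hydro -> DMO
bwd = match_direction(dmo_mb, hydro_ids)   # DMO -> hydro
matched = {h: d for h, d in fwd.items() if bwd.get(d) == h}
print(matched)                             # {0: 0, 1: 1}
\end{verbatim}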
\fig{match_statistics} shows the fraction of haloes matched from the DMO simulation at a given DMO halo mass for the two periodic simulations (\texttt{L100Ref} and \texttt{L050AGN}) as well as each of the \mbox{\sc{C-EAGLE}}\ clusters.
We also detail the total number of haloes and the number of matched haloes for each simulation set in \tab{simulations}.
More than 95\% of subhaloes with $M_{\mathrm{subhalo}} > 10^{10} \, \mathrm{M}_{\odot}$ are matched bijectively across all simulations.
We hence choose to train our model only on subhaloes with masses above this threshold (see \sec{training} for details).
By using a threshold dependent only on the DMO properties we can use a similar threshold in any target DMO simulation (subject to the existing resolution constraints of that simulation).
It is noticeable that there is a larger fraction of subhaloes at the high-mass end ($M_{\mathrm{subhalo}} \,/\, M_{\odot} > 5 \times 10^{12}$) that are not matched, in both the periodic and zoom simulations.
We inspected these cases individually and found that, where a single halo is identified in the DMO simulation, the halo finder splits it into multiple individual haloes in the baryonic simulation.
Missing these haloes reduces the size of our training set, which is particularly unfortunate at the high-mass end where the number of haloes is already low; however, we do not expect this to lead to biases in our predictions due to the already heterogeneous nature of our training set.
\subsection{The \mbox{\sc{P-Millennium}}\ simulation}
\mbox{\sc{P-Millennium}}\ is a large DMO simulation ($800 \; \mathrm{cMpc}$ on a side; particle mass $1.06 \times 10^8 \, \mathrm{M_{\odot}}$) using the same \cite{planck_collaboration_planck_2014} cosmology as \mbox{\sc{EAGLE}}.
\cite{baugh_galaxy_2019} first presented the simulation, and demonstrated its use as a parent volume for the \mbox{\sc{Galform}}\ model, in order to predict the atomic hydrogen content of galaxies.
\cite{safonova_rosella_2020} also used \mbox{\sc{P-Millennium}}\ as a parent simulation for a SHAM model, generating mock catalogues.
\mbox{\sc{P-Millennium}}\ uses the same FOF and Subfind structure finders as the \mbox{\sc{EAGLE}}\ simulation project, which means the features can be used directly for any model trained on \mbox{\sc{EAGLE}}.
We present our predictions using \mbox{\sc{P-Millennium}}\ in \sec{pmill_predictions}.
\section{Introduction}
The human visual system can perceive the same canonical color of an object even under different illuminants.
This feature can be mimicked by computational color constancy, an essential task in the camera pipeline that processes raw sensor signals to sRGB images.
Conventional methods~\cite{buchsbaum1980spatial,FinlaysonT04,FuntS10,land1971lightness,WeijerGG07} utilize statistical properties of the scene to cope with this ill-posed problem, such as the most widely used gray world assumption.
Such statistical methods, however, often fail where their assumptions are violated in complex scenes.
Until recently, deep learning based methods~\cite{HuWL17,Qian0NKM17,XuLHLQ20,YuCWQZJ20} have been applied to the color constancy problem and achieve considerable quality improvements on challenging scenes.
Yet, this ill-posed and sensor-dependent task still suffers from the difficulty of collecting massive paired data for supervised training.
When learning with insufficient training data, a common issue frequently encountered is the possibility of learning spurious correlations~\cite{Vigen} or undesirable biases from data~\cite{TorralbaE11}: misleading features that work for most training samples but do not always hold in general.
For instance, previous research has shown that a deep object-recognition model may rely on the spuriously correlated background instead of the foreground object to make predictions~\cite{abs-2006-09994} or be biased towards object textures instead of shapes~\cite{GeirhosRMBWB19}.
In the case of color constancy, outdoor scenes often have higher correlations with high color temperature illuminants than indoor scenes.
Thus, deep learning models may focus on scene related features instead of illuminant related features. This leads to a decision behavior that tends to predict high color temperature illuminants for outdoor scenes, but suffers high error on outdoor scenes under low color temperature illuminants.
This problem becomes worse when the sparsity of data increases.
To avoid learning such spurious correlations, one may seek to regularize deep learning models to learn scene-invariant, illuminant-dependent representations.
As illustrated in Fig.~\ref{fig:motivation_v2}, in contrast to the image classification problem, the representations of the same scene under different illuminants should be \emph{far} from each other.
On the contrary, the representation of different scenes under the same illuminant should be \emph{close} to each other.
Therefore, we propose to learn such desired representations by contrastive learning~\cite{simclr,He0WXG20,HjelmFLGBTB19}, a framework that learns general and robust representations by comparing similar and dissimilar samples.
However, conventional self-supervised contrastive learning often generates easy or trivial contrastive pairs that are not very useful for learning generalized feature representations~\cite{SupContrastKhosla}.
To address this issue, a recent work~\cite{simclr} has demonstrated that strong data augmentation is crucial for conducting successful contrastive learning.
Nevertheless, previous data augmentations that have been shown effective for image classification may not be suitable for color constancy.
Here we illustrate some of them.
First, most previous data augmentations in contrastive learning are designed for high-level vision tasks (e.g., object recognition) and seek illuminant invariant features, which can be detrimental for color constancy.
For example, color dropping converts an sRGB image to a gray-scale one, making the color constancy task even more difficult.
Moreover, the color constancy task works best in the linear color space where the linear relationship to scene radiance is preserved.
This precludes the use of non-linear color jittering augmentations, e.g., contrast, saturation, and hue.
To this end, we propose \textit{CLCC: Contrastive Learning for Color Constancy}, a novel color constancy framework with contrastive learning.
For the purpose of color constancy, effective positive and negative pairs are constructed by exploiting the label information,
while novel color augmentations are designed based on color domain knowledge~\cite{AfifiB19iccv,RawtorawBrown14,ColorReprKaraimerB18}.
Built upon a previous state-of-the-art~\cite{HuWL17}, CLCC provides an additional 17.5$\%$ improvement (mean angular error decreases from 2.23 to 1.84) on a public benchmark dataset~\cite{cheng2014illuminant}, achieving state-of-the-art results without increasing model complexity.
Besides accuracy improvement, our method also allows deep learning models to effectively acquire robust and generalized representations even when learning from small training datasets.
\paragraph{Contribution}
\begin{itemize}
\item We introduce CLCC, a fully supervised contrastive learning framework for the task of color constancy. By leveraging label information, CLCC generates more diverse and harder contrastive pairs to effectively learn feature representations aiming for better quality and robustness.
\item A novel color augmentation method that incorporates color domain knowledge is proposed.
\item We improve the previous state-of-the-art deep color constancy model without increasing model complexity.
\item CLCC encourages learning illuminant-dependent features rather than spurious scene content features irrelevant for color constancy, making our model more robust and generalized, especially in data-sparse regions.
\end{itemize}
\section{Related Work}
\subsection{Contrastive learning}
Contrastive learning is a framework that learns general and robust feature representations by comparing similar and dissimilar pairs.
Inspired from noise contrastive estimation (NCE) and N-pair loss~\cite{GutmannH10,MnihK13,Sohn16}, remarkable improvements on image classification are obtained in several recent works~\cite{simclr,He0WXG20,HjelmFLGBTB19,LoweOV19,abs-1906-05849,abs-1807-03748,WuXYL18}.
Particularly, a mutual information based contrastive loss, InfoNCE~\cite{abs-1807-03748} has become a popular choice for contrastive learning (see \cite{McAllesterS20,PooleOOAT19} for more discussion).
Furthermore, recent works~\cite{BachmanHB19,BeyerZOK19,ChuangRL0J20,abs-1905-09272,SupContrastKhosla,abs-2005-10243} have shown that leveraging supervised labels not only improves learning efficiency by alleviating sampling bias (and hence reducing the need for large batch size training) but also improves generalization by learning task-relevant features.
\subsection{Data augmentation}
Data augmentations such as random cropping, flipping, and rotation have been widely used in classification ~\cite{HeZRS16,SandlerHZZC18}, object detection~\cite{LinGGHD17}, and semantic segmentation~\cite{ChenZPSA18} to improve model quality.
Various works rely on manually designed augmentations to reach their best results~\cite{simclr,SatoNY15}. To ease such efforts, strategy search~\cite{CubukZMVL19,CubukZSL20} or data synthesis~\cite{abs-1712-04621,abs-1711-00648} have been used to improve data quality and diversity.
However, popular data augmentation strategies for image recognition~\cite{simclr,CubukZMVL19,KalantariR17,RedmonDGF16} (e.g., color channel dropping, color channel swapping, HSV jittering) may not be suitable for the color constancy task.
Thus, we incorporate color domain knowledge~\cite{AfifiB19iccv,ColorReprKaraimerB18,RawtorawBrown14} to design data augmentation suitable for contrastive learning on color constancy.
\subsection{Color constancy}
Color constancy is a fundamental low-level computer vision task that has been studied for decades. In general, current research can be divided into learning-free and learning-based approaches.
The former ones use color histogram and spatial information to estimate illuminant~\cite{buchsbaum1980spatial,FinlaysonT04,FuntS10,land1971lightness,WeijerGG07}.
Despite the efficiency of these methods, they do not perform well on challenging scenes with ambiguous color pixels.
The latter ones adopt data-driven approaches that learn to estimate illuminant from training data~\cite{Barnard00,BarronT17,Finlayson13,FuntX04,JozeD12}. These learning-based approaches outperform learning-free methods and have become popular in both academic and industry fields. In addition, recent works have shown that features learned from deep neural networks are better than hand-crafted ones~\cite{KrizhevskySH12,LiYWZH18,RedmonDGF16}.
Consequently, deep learning based color constancy research has gradually received more and more attention. Recently, FC4 uses ImageNet-pretrained backbones~\cite{HuWL17,KrizhevskySH12} to prevent over-fitting and estimate illuminant with two additional convolutional layers. RCC-Net~\cite{Qian0NKM17} uses a convolutional LSTM to extract features in both spatial and temporal domains to estimate illuminants.
C4~\cite{YuCWQZJ20} proposes a cascaded, coarse-to-fine network for color constancy, stacking three SqueezeNets to improve model quality.
To mitigate the issue that the learned representation suffers from being sensitive to image content, IGTN ~\cite{XuLHLQ20} introduces metric learning to learn scene-independent illuminant features.
From a different perspective, most learning based methods strongly bind to a single sensor's spectral sensitivity and thus cannot be generalized to other camera sensors without fine-tuning.
Several works~\cite{AfifiB19,JuarezPBLSM20,XiaoG020} have attempted to resolve this issue by training on multiple sensors simultaneously. We note that multi-sensor training is out of the scope of this work, hence we do not compare to this line of research.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig/overview.pdf}
\end{center}
\vspace*{-5mm}
\caption{An overview of our CLCC: Besides the main color constancy task, we propose to incorporate contrastive learning to learn generalized and illuminant-dependent feature representations.}
\label{fig:overview_architecture}
\end{figure}
\section{Preliminaries}
\paragraph{Image formation model}
A raw-RGB image can be viewed as a measurement of scene radiance within a particular range of spectrum from a camera sensor:
\begin{align}
\mathbf{I}_{\mathrm{raw}}(\mathbf{x})=\int_{\omega} R_{c}(\lambda) S(\mathbf{x}, \lambda) L(\lambda) d \lambda
\label{eqn:imgformationmodel}
\end{align}
where $\lambda$ denotes the wavelength, $\omega \in [380, 720]$ (nm) is the visible spectrum, $R_{c}$ is the spectral sensitivities of the sensor's color channel $c \in \{r,g,b\}$. The term $S(\mathbf{x}, \lambda)$ denotes the scene's material reflectance at pixel $\mathbf{x}$ and $L(\lambda)$ is the illuminant in the scene, assumed to be spatially uniform. Notably, $\mathbf{I}_{\mathrm{raw}}$ values are linearly proportional to the scene radiance, making color constancy easier to work with.
\paragraph{Color space conversions}
Usually $\mathbf{I}_{\mathrm{raw}}$ undergoes two color space conversions in the camera pipeline:
\begin{align}
\mathbf{I}_{\mathrm{sRGB}} = \mathcal{G}_{\mathrm{XYZ} \rightarrow \mathrm{sRGB}} (
\mathcal{F}_{\mathrm{raw} \rightarrow \mathrm{XYZ}}(
\mathbf{I}_{\mathrm{raw}}
)
)
\label{eqn:colorspaceconversion}
\end{align}
where $\mathcal{F}(\cdot)$ involves linear operations including white balance and full color correction. $\mathcal{F}(\cdot)$ maps a sensor-specific raw-RGB to a standard perceptual color space such as CIE XYZ. $\mathcal{G}(\cdot)$ involves non-linear photo-finishing procedures (e.g., contrast, hue, saturation) and eventually maps XYZ to the sRGB color space (we refer to \cite{Karaimer2016ASP} for a complete overview of camera imaging pipeline).
\paragraph{White balance and full color correction}
Given $\mathbf{I}_{\mathrm{raw}}$, white balance (WB) aims to estimate the scene illuminant $L = [L_r, L_g, L_b]$, i.e., the color of a neutral material captured with a physical color checker placed in the scene. Knowing that a neutral material equally reflects spectral energy at every wavelength regardless of different illuminants, we can apply a $3\times3$ diagonal matrix $\mathbf{M}_{\mathrm{WB}}$ with the diagonal entries $[L_g/L_r, 1, L_g/L_b]$ on $\mathbf{I}_{\mathrm{raw}}$ to obtain a white-balanced image $\mathbf{I}_{\mathrm{WB}}$:
\begin{align}
\mathbf{I}_{\mathrm{WB}} = \mathbf{I}_{\mathrm{raw}} \mathbf{M}_{\mathrm{WB}}
\label{eqn:WB}
\end{align}
After WB, a neutral material should appear achromatic (i.e., ``gray'').
Because WB only corrects achromatic colors, a $3\times3$ full color correction matrix $\mathbf{M}_{\mathrm{CC}}$ is further applied to correct chromatic colors (in practice, the chromatic patches with known CIE XYZ values on a color checker). Note that $\mathbf{M}_{\mathrm{CC}}$ is illuminant-specific due to the error introduced by the estimated $\mathbf{M}_{\mathrm{WB}}$:
\begin{align}
\mathbf{I}_{\mathrm{XYZ}} = \mathbf{I}_{\mathrm{WB}} \mathbf{M}_{\mathrm{CC}}
\label{eqn:CC}
\end{align}
Such $\mathbf{I}_{\mathrm{XYZ}}$ is sensor-agnostic since the illuminant cast is completely removed for both achromatic and chromatic colors.
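For illustration, the two linear corrections above can be written in a few lines of NumPy. This is a minimal sketch with our own variable names and a toy illuminant, not code from an actual camera pipeline; it assumes images are stored as $H \times W \times 3$ arrays with colors as row vectors.

\begin{verbatim}
import numpy as np

def white_balance(raw, L):
    """Apply the diagonal WB matrix built from the illuminant L = [Lr, Lg, Lb]."""
    Lr, Lg, Lb = L
    M_wb = np.diag([Lg / Lr, 1.0, Lg / Lb])
    return raw @ M_wb                      # raw: (H, W, 3), row-vector convention

def color_correct(img_wb, M_cc):
    """Apply the 3x3 illuminant-specific full color correction matrix."""
    return img_wb @ M_cc

# Toy example: a neutral pixel under a warm illuminant becomes achromatic after WB.
L = np.array([0.8, 0.6, 0.3])
raw = np.array([[[0.8, 0.6, 0.3]]])
print(white_balance(raw, L))               # -> [[[0.6, 0.6, 0.6]]]
\end{verbatim}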
\section{Methodology}
We start with our problem formulation and review conventional self-supervised contrastive learning in Section \ref{sec:formulation}. Next, we introduce CLCC, our fully-supervised contrastive learning framework for color constancy in Section \ref{sec:clcc}. Finally, we describe our color augmentation for contrastive pair synthesis in Section \ref{sec:coloraug}. How these sections fit together is illustrated in Fig.~\ref{fig:overview_architecture}.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{fig/method.pdf}
\end{center}
\vspace*{-5mm}
\caption{The proposed formation for contrastive pairs and color augmentation.}
\label{fig:clcc}
\end{figure*}
\subsection{Formulation}
\label{sec:formulation}
\paragraph{The learning problem}
Our problem setting follows the majority of learning-based color constancy research which only focuses on the white balance step of estimating the illuminant $L$ from the input raw image $\mathbf{I}_{\mathrm{raw}}$:
\begin{align}
\hat{L} = f_{\phi}(h_{\theta}(
\mathbf{I}_{\mathrm{raw}}
))
\label{eqn:problemformulation}
\end{align}
where $h_{\theta}$ is the feature extractor that produces visual representations for $\mathbf{I}_{\mathrm{raw}}$, $f_{\phi}$ is the illuminant estimation function, and $\hat{L}$ is the estimated illuminant. Both $h_{\theta}$ and $f_{\phi}$ are parameterized by deep neural networks with arbitrary architecture design, where $\theta$ and $\phi$ can be trained via back-propagation.
\paragraph{The learning objectives}
The overall learning objective can be decomposed into two parts: (1) illuminant estimation for color constancy and (2) contrastive learning for better representations (as shown in Fig.~\ref{fig:overview_architecture}):
\begin{align}
\mathcal{L}_{\mathrm{total}} =
\lambda \mathcal{L}_{\mathrm{illuminant}} +
\beta \mathcal{L}_{\mathrm{contrastive}}
\label{eqn:learningobjective}
\end{align}
For the illuminant estimation task, we use the commonly used angular error as:
\begin{align}
\mathcal{L}_{\mathrm{illuminant}} =
\arccos{(\frac
{\hat{L} \cdot L}
{\lVert \hat{L} \rVert \cdot \lVert L \rVert}
)}
\label{eqn:angularloss}
\end{align}
where $\hat{L}$ is the estimated illuminant and $L$ is the ground-truth illuminant.
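A minimal PyTorch-style sketch of this loss is given below; it is an illustration rather than the authors' implementation, and the clamping constant is our addition for numerical stability (the result is in radians; multiply by $180/\pi$ to report degrees).

\begin{verbatim}
import torch
import torch.nn.functional as F

def angular_error_loss(pred, gt, eps=1e-7):
    """Mean angular error between predicted and ground-truth illuminants.

    pred, gt: (B, 3) tensors of RGB illuminant vectors.
    """
    pred = F.normalize(pred, dim=-1)
    gt = F.normalize(gt, dim=-1)
    cos = (pred * gt).sum(dim=-1).clamp(-1 + eps, 1 - eps)  # avoid NaN gradients
    return torch.acos(cos).mean()
\end{verbatim}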
Datasets for color constancy are relatively small because it is difficult to collect training data with corresponding ground-truth illuminants.
Consequently, a deep learning model trained with only the supervision of $\mathcal{L}_{\mathrm{illuminant}}$ usually does not generalize well.
Therefore, we propose to use contrastive learning, which helps to learn a color constancy model that generalizes better even with a small training dataset.
Details of the contrastive learning task are described as follows.
\paragraph{The contrastive learning framework}
The proposed CLCC is built upon the recent work SimCLR~\cite{simclr}.
Therefore, we discuss self-supervised contrastive learning for color constancy in this section, and then elaborate on our extended fully-supervised contrastive learning in the next section.
The essential building blocks of contrastive learning are illustrated here:
\begin{itemize}
\setlength \itemsep{-0.0em}
\item A stochastic data augmentation $t(\cdot) \sim \mathcal{T}$ that augments a sample image $\mathbf{I}$ to a different \textit{view} $t(\mathbf{I})$. Note that $t(\cdot)$ is required to be \textit{label-preserving}, meaning that $\mathbf{I}$ and $t(\mathbf{I})$ still share the same ground-truth illuminant $L$.
\item A feature extraction function $h_\theta$ that extracts the \textit{representation} of $t(\mathbf{I})$. $h_\theta$ is further used for downstream color constancy task as defined in the Eq.~(\ref{eqn:problemformulation}).
\item A feature projection function $g_\psi$ that maps the representation $h_{\theta}(t(\mathbf{I}))$ to the \textit{projection} $\mathbf{z}$ that lies on a unit hypersphere.
$g_\psi$ is typically only required when learning representations and thrown away once the learning is finished.
\item A similarity metric function $s(\cdot)$ that measures the similarity between latent projections $(\mathbf{z}_i, \mathbf{z}_j)$.
\item Contrastive pair formulation: \textit{anchor} $\mathbf{I}$, \textit{positive} $\mathbf{I}^{+}$ and \textit{negative} $\mathbf{I}^{-}$ samples jointly compose the positive pair $(\mathbf{I}, \mathbf{I}^{+})$ and the negative pair $(\mathbf{I}, \mathbf{I}^{-})$ for contrastive learning. For the color constancy task, a positive pair should share the same illuminant label $L$, while a negative pair should have different ones.
\item A contrastive loss function $\mathcal{L}_{\mathrm{contrastive}}$ that aims to maximize the similarity between the projection of the positive pair $(\mathbf{z}, \mathbf{z}^{+})$ and minimize the similarity between that of the negative pair $(\mathbf{z}, \mathbf{z}^{-})$ in the latent projection space.
\end{itemize}
\paragraph{Self-supervised contrastive learning}
Given two random training images $\mathbf{I}_i$ and $\mathbf{I}_j$ with different scene content, one can naively form a positive contrastive pair with two randomly augmented views of the same image $(t(\mathbf{I}_i), t'(\mathbf{I}_i^{+}))$, and a negative contrastive pair with views of two different images $(t(\mathbf{I}_i), t'(\mathbf{I}_j^{-}))$.
Such a naive formulation introduces two potential drawbacks.
One is \textit{sampling bias}, the potential to sample a false negative pair that shares very similar illuminants (i.e., $L_i \simeq L_j$).
The other is the \textit{lack of hardness}: the positive $t(\mathbf{I}_i^{+})$ is derived from the same image as the anchor $t(\mathbf{I}_i)$ and thus shares similar scene content. This alone suffices for a neural network to easily distinguish it from the negative $t'(\mathbf{I}_j^{-})$, which has apparently different scene content. Hence, as suggested by~\cite{simclr}, one should seek strong data augmentations to regularize such a learning shortcut.
To alleviate sampling bias and increase the hardness of contrastive pairs, we propose to leverage label information, extending self-supervised contrastive learning into fully-supervised contrastive learning, where the essential data augmentation is specifically designed to be label-preserving for the color constancy task.
\subsection{\textbf{\textit{CLCC}}: Contrastive learning for color constancy}
\label{sec:clcc}
We now describe our realization of each component in the proposed fully-supervised contrastive learning framework, as depicted in Fig.~\ref{fig:clcc}.
\paragraph{Contrastive pair formulation}
Here, we define $\mathbf{I}_{\mathrm{XA}}$ as a linear raw-RGB image captured in the scene $\mathrm{X}$ under the illuminant $L_{\mathrm{A}}$. Let us recapitulate our definition that a positive pair should share an identical illuminant while a negative pair should not. Therefore, given two randomly sampled training images $\mathbf{I}_{\mathrm{XA}}$ and $\mathbf{I}_{\mathrm{YB}}$,
we construct our contrastive pairs as follows:
\begin{itemize}
\setlength \itemsep{-0.0em}
\item An easy positive pair $(t(\mathbf{I}_{\mathrm{XA}}), t'(\mathbf{I}_{\mathrm{XA}}^{+}))$---with an identical scene $\mathrm{X}$ and illuminant $L_{\mathrm{A}}$.
\item An easy negative pair $(t(\mathbf{I}_{\mathrm{XA}}), t'(\mathbf{I}_{\mathrm{YC}}^{-}))$---with different scenes ($\mathrm{X}$, $\mathrm{Y}$) and different illuminants ($L_{\mathrm{A}}$, $L_{\mathrm{C}}$).
\item A hard positive pair $(t(\mathbf{I}_{\mathrm{XA}}), t'(\mathbf{I}_{\mathrm{YA}}^{+}))$---with different scenes ($\mathrm{X}$, $\mathrm{Y}$) but an identical illuminant $L_{\mathrm{A}}$.
\item A hard negative pair $(t(\mathbf{I}_{\mathrm{XA}}), t'(\mathbf{I}_{\mathrm{XC}}^{-}))$---with an identical scene $\mathrm{X}$ but different illuminants ($L_{\mathrm{A}}$, $L_{\mathrm{C}}$).
\end{itemize}
$\mathbf{I}_{\mathrm{YC}}$, $\mathbf{I}_{\mathrm{YA}}$ and $\mathbf{I}_{\mathrm{XC}}$ are synthesized by replacing one scene's illuminant to another.
Note that we define the novel illuminant $L_{\mathrm{C}}$ as the interpolation or extrapolation between $L_{\mathrm{A}}$ and $L_{\mathrm{B}}$, thus we do not need a redundant hard negative sample $\mathbf{I}_{\mathrm{XB}}$.
More details are explained in Section \ref{sec:coloraug}.
$t$ is a stochastic perturbation-based, illuminant-preserving data augmentation composed by \textit{random intensity}, \textit{random shot noise}, and \textit{random Gaussian noise}.
\paragraph{Similarity metric and contrastive loss function}
Once the contrastive pairs are defined in the image space, we use $h_\theta$ and $g_\psi$ to encode those views $t(\cdot)$ to the latent projection space $\mathbf{z}$. Our contrastive loss can be computed as the sum of InfoNCE losses for properly elaborated contrastive pairs:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mathrm{contrastive}} &=
\mathcal{L}_{\mathrm{NCE}}(\mathbf{z}_{\mathrm{XA}},\mathbf{z}_{\mathrm{XA}}^{+},\mathbf{z}_{\mathrm{YC}}^{-})\\ &+
\mathcal{L}_{\mathrm{NCE}}(\mathbf{z}_{\mathrm{XA}},\mathbf{z}_{\mathrm{XA}}^{+},\mathbf{z}_{\mathrm{XC}}^{-})\\ &+
\mathcal{L}_{\mathrm{NCE}}(\mathbf{z}_{\mathrm{XA}},\mathbf{z}_{\mathrm{YA}}^{+},\mathbf{z}_{\mathrm{YC}}^{-})\\ &+
\mathcal{L}_{\mathrm{NCE}}(\mathbf{z}_{\mathrm{XA}},\mathbf{z}_{\mathrm{YA}}^{+},\mathbf{z}_{\mathrm{XC}}^{-})
\label{eqn:contrastiveloss}
\end{aligned}
\end{equation}
The InfoNCE loss $\mathcal{L}_{\mathrm{NCE}}$ can be computed as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mathrm{NCE}} =
-\log \left[\frac
{\exp (s^{+} / \tau)}
{\exp (s^{+} / \tau) + \sum_{n=1}^{N} \exp(s^{-} / \tau)}
\right]
\label{eqn:infonceloss}
\end{aligned}
\end{equation}
where $s^{+}$ and $s^{-}$ are the cosine similarity scores of positive and negative pairs respectively:
\begin{equation}
\begin{aligned}
s^{+} &= s(\mathbf{z}, \mathbf{z}^{+})\\
s^{-} &= s(\mathbf{z}, \mathbf{z}^{-})
\label{eqn:similarity}
\end{aligned}
\end{equation}
Equation \eqref{eqn:infonceloss} can be viewed as performing an $(N+1)$-way classification, realized by a cross-entropy loss with $N$ negative pairs and $1$ positive pair.
$\tau$ is the temperature scaling factor.
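The InfoNCE term can be sketched as follows; this is a generic single-anchor implementation assuming the projections are $L_2$-normalized so that cosine similarity reduces to a dot product, and it is not the exact training code.

\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(z, z_pos, z_negs, tau=0.87):
    """InfoNCE loss for one anchor.

    z, z_pos: (D,) projections of the anchor and the positive.
    z_negs:   (N, D) projections of the N negatives.
    """
    z, z_pos = F.normalize(z, dim=-1), F.normalize(z_pos, dim=-1)
    z_negs = F.normalize(z_negs, dim=-1)
    logits = torch.cat([(z * z_pos).sum().view(1), z_negs @ z]) / tau
    # (N+1)-way cross-entropy with the positive pair as class 0
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
\end{verbatim}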
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{fig/coloraug.pdf}
\end{center}
\vspace*{-5mm}
\caption{An illustration of our proposed color augmentation. The left hand side shows the generation of positive/negative samples by swapping pre-existing illuminants from a pair of images via estimated color mapping matrices $\mathbf{M}_{\mathrm{AB}}$ and $\mathbf{M}_{\mathrm{BA}}$. The right hand side shows the augmented samples with novel illuminants via interpolation ($w = +0.5$) and extrapolation ($w = -1.5$ and $w = +1.5$) using the detected color checkers $\mathbf{C}_{\mathrm{A}}$ and $\mathbf{C}_{\mathrm{B}}$.}
\label{fig:coloraug}
\end{figure*}
\subsection{Raw-domain Color Augmentation}
\label{sec:coloraug}
The goal of our proposed color augmentation is to synthesize
more diverse and harder positive and negative samples by manipulating illuminants such that the color constancy solution space is better constrained.
As shown in Fig.~\ref{fig:coloraug}, given two randomly sampled pairs $(\mathbf{I}_{\mathrm{XA}}, L_{\mathrm{A}})$ and $(\mathbf{I}_{\mathrm{YB}}, L_{\mathrm{B}})$ from the training data, we go through the following procedure to synthesize $\mathbf{I}_{\mathrm{YC}}$, $\mathbf{I}_{\mathrm{YA}}$, and $\mathbf{I}_{\mathrm{XC}}$, as defined in Section~\ref{sec:clcc}.
\paragraph{Color checker detection}
We extract the 24 linear raw-RGB colors $\mathbf{C}_{\mathrm{A}} \in \mathbb{R}^{24 \times 3}$ and $\mathbf{C}_{\mathrm{B}} \in \mathbb{R}^{24 \times 3}$ of the color checker from $\mathbf{I}_{\mathrm{XA}}$ and $\mathbf{I}_{\mathrm{YB}}$, respectively, using an off-the-shelf color checker detector.
\paragraph{Color transformation matrix}
Given $\mathbf{C}_{\mathrm{A}}$ and $\mathbf{C}_{\mathrm{B}}$, we can solve for a linear mapping $\mathbf{M}_{\mathrm{AB}} \in \mathbb{R}^{3 \times 3}$ that transforms $\mathbf{C}_{\mathrm{A}}$ to $\mathbf{C}_{\mathrm{B}}$ by any standard least-squares method. The inverse mapping $\mathbf{M}_{\mathrm{BA}}$ is simply $\mathbf{M}_{\mathrm{AB}}^{-1}$. Accordingly, we can augment $\mathbf{I}_{\mathrm{XB}}$ and $\mathbf{I}_{\mathrm{YA}}$ as:
\begin{equation}
\begin{aligned}
\mathbf{I}_{\mathrm{XB}} &= \mathbf{I}_{\mathrm{XA}} \mathbf{M}_{\mathrm{AB}}\\
\mathbf{I}_{\mathrm{YA}} &= \mathbf{I}_{\mathrm{YB}} \mathbf{M}_{\mathrm{BA}}
\end{aligned}
\end{equation}
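A minimal NumPy sketch of this step follows; the $24 \times 3$ row-vector convention for the checker colors and the variable names are assumptions consistent with the equations above, not the exact implementation.

\begin{verbatim}
import numpy as np

def solve_color_mapping(C_a, C_b):
    """Least-squares 3x3 matrix M_ab such that C_a @ M_ab approximates C_b.

    C_a, C_b: (24, 3) linear raw-RGB colors of the detected color checkers.
    """
    M_ab, *_ = np.linalg.lstsq(C_a, C_b, rcond=None)
    return M_ab

# Swapping the pre-existing illuminants between two scenes:
# M_ab = solve_color_mapping(C_a, C_b); M_ba = np.linalg.inv(M_ab)
# I_xb = I_xa @ M_ab
# I_ya = I_yb @ M_ba
\end{verbatim}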
\paragraph{Novel illuminant synthesis}
The above augmentation procedure produces novel samples $\mathbf{I}_{\mathrm{XB}}$ and $\mathbf{I}_{\mathrm{YA}}$, but using only pre-existing illuminants $L_{\mathrm{A}}$ and $L_{\mathrm{B}}$ from the training data.
To synthesize a novel sample $\mathbf{I}_{\mathrm{XC}}$ under a novel illuminant $L_{\mathrm{C}}$ that \emph{does not exist in the training dataset},
we can synthesize $\mathbf{C}_{\mathrm{C}}$ by channel-wise \emph{interpolating} or \emph{extrapolating} from the existing $\mathbf{C}_{\mathrm{A}}$ and $\mathbf{C}_{\mathrm{B}}$ as:
\begin{equation}
\begin{aligned}
\mathbf{C}_{\mathrm{C}} = (1-w) \mathbf{C}_{\mathrm{A}} + w \mathbf{C}_{\mathrm{B}}
\end{aligned}
\label{eqn:C_C}
\end{equation}
where $w$ can be randomly sampled from a uniform distribution over an appropriate range $[w_{min}, w_{max}]$. Note that $w$ should not be close to zero, to avoid yielding a false negative sample $\mathbf{I}_{\mathrm{XC}} = \mathbf{I}_{\mathrm{XA}}$ for contrastive learning.
To more realistically synthesize $\mathbf{I}_{\mathrm{XC}}$ (i.e., more accurate on chromatic colors), we need the full color transformation matrix $\mathbf{M}_{\mathrm{AC}}$ that maps $\mathbf{I}_{\mathrm{XA}}$ to $\mathbf{I}_{\mathrm{XC}}$:
\begin{equation}
\begin{aligned}
\mathbf{I}_\mathrm{XC} &= \mathbf{I}_{\mathrm{XA}} \mathbf{M}_{\mathrm{AC}}\\
\mathbf{I}_\mathrm{YC} &= \mathbf{I}_{\mathrm{YB}} \mathbf{M}_{\mathrm{BC}}
\end{aligned}
\label{eqn:I_AC}
\end{equation}
where $\mathbf{M}_{\mathrm{AC}}$ can be efficiently computed from the identity matrix $\mathbbm{1}$ and $\mathbf{M}_{\mathrm{AB}}$ without solving least-squares as:
\begin{equation}
\begin{aligned}
\mathbf{M}_\mathrm{AC} &= (1-w) \mathbbm{1} + w \mathbf{M}_{\mathrm{AB}}\\
\mathbf{M}_\mathrm{BC} &= w \mathbbm{1} + (1-w) \mathbf{M}_{\mathrm{BA}}\\
\end{aligned}
\label{eqn:M_AC}
\end{equation}
Equation \eqref{eqn:M_AC} can be derived from Eq.\eqref{eqn:C_C} and Eq.\eqref{eqn:I_AC}.
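In code, the novel-illuminant mappings amount to the following short sketch (the variable names are ours):

\begin{verbatim}
import numpy as np

def novel_illuminant_mappings(M_ab, w):
    """Mappings towards the novel illuminant L_c = (1 - w) L_a + w L_b."""
    I3 = np.eye(3)
    M_ac = (1 - w) * I3 + w * M_ab                  # scene X: A -> C
    M_bc = w * I3 + (1 - w) * np.linalg.inv(M_ab)   # scene Y: B -> C
    return M_ac, M_bc

# w is sampled away from zero, e.g. uniformly from [-5.0, -0.3] or [+0.3, +5.0]
\end{verbatim}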
\paragraph{From full color mapping to neutral color mapping}
Our synthesis method could be limited by the performance of color checker detection.
When the color checker detection is not successful, the full colors $\mathbf{C}_{\mathrm{A}}$ and $\mathbf{C}_{\mathrm{B}}$ could be reduced to the neutral ones $L_{\mathrm{A}}$ and $L_{\mathrm{B}}$,
meaning that the color transformation matrix $\mathbf{M}_{\mathrm{AB}}$ is reduced from a full matrix to a diagonal matrix.
This is also equivalent to first performing WB on $\mathbf{I}_{\mathrm{XA}}$ with $L_{\mathrm{A}}$ and subsequently performing an inverse WB with $L_{\mathrm{B}}$.
We provide an ablation study of this simplified version in our experiments, where we term the full color mapping \textit{Full-Aug} and the simplified neutral color mapping \textit{WB-Aug}.
We show that even though chromatic colors cannot be correctly mapped, \textit{WB-Aug} still obtains a performance improvement over the baseline.
\begin{figure}[htbp]
\centering
\subfloat[(a) On the NUS-8 dataset, CLCC achieves the best results and the most light-weighted model of all comparable methods.]{\includegraphics[width=0.45\textwidth]{fig/NUS-8-parameters-compare.png}}
\label{fig:NUS8Results}
\\[-0.35em]
\subfloat[(b) On the Gehler dataset, without increasing model complexity, CLCC improves SqueezeNet-FC4 to achieve comparable results.]{\includegraphics[width=0.45\textwidth]{fig/Gehler-parameters-compare.png}}
\label{fig:GehlerResults}
\\
\caption{Model complexity versus mean angular error. CLCC improves SqueezeNet-FC4 by 0.39 (17.5\%) on the NUS-8 dataset as the state-of-the-art, and by 0.21 (12.5\%) on the Gehler dataset as a comparable method.}
\label{fig:compare_with_other_methods}
\end{figure}
\section{Experiment}
\subsection{Network training}
Following FC4~\cite{HuWL17}, we use ImageNet-pretrained SqueezeNet as the backbone and add a non-linear projection head, a three-layer MLP with $512$ hidden units, for contrastive learning. Note that the projection head is thrown away once the learning is finished.
We use Adam~\cite{KingmaB14} optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.999$.
The learning rate is $0.0003$ and batch size is $16$.
We use dropout~\cite{SrivastavaHKSS14} with probability of $0.5$ and $L_2$ weight decay of $0.000057$ for regularization.
The loss weights for the illuminant estimation and contrastive learning heads $(\lambda, \beta)$ in the learning objective~(\ref{eqn:learningobjective}) are $(0.1, 1.0)$ for the first $5000$ epochs and $(1.0, 0.1)$ for the remaining $5000$ epochs.
The number of negative samples $N$ is $12$ and the temperature scaling factor $\tau$ is $0.87$ for InfoNCE loss~(\ref{eqn:infonceloss}).
Note that we do not train our illuminant estimation head with contrastive pairs. They are only used for training the contrastive learning head as depicted in Fig.~\ref{fig:overview_architecture}.
\subsection{Data augmentation}
We follow the default data augmentations used by FC4 with several differences. We resize the crop to $256\times256$ to speed up training. For perturbation-based augmentations in contrastive learning, the range of the intensity gain is $[0.8, 1.2]$, and the ranges of the standard deviations of Gaussian noise and shot noise are $[0, 0.04]$ and $[0.02, 0.06]$ for $[0,1]$-normalized images, respectively. The $(w_{min}, w_{max})$ for novel color synthesis~(\ref{eqn:C_C}) are $(-5.0, -0.3)$ and $(+0.3, +5.0)$.
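A sketch of these perturbation-based augmentations with the ranges quoted above is given below; modelling shot noise as Gaussian noise with a signal-dependent standard deviation is our simplifying assumption, and the function is illustrative rather than the exact augmentation code.

\begin{verbatim}
import numpy as np

def perturb(img, rng):
    """Illuminant-preserving perturbation of a [0, 1]-normalized raw image."""
    gain = rng.uniform(0.8, 1.2)                     # random intensity
    sigma_g = rng.uniform(0.0, 0.04)                 # Gaussian noise std
    sigma_s = rng.uniform(0.02, 0.06)                # shot noise strength
    out = img * gain
    out = out + rng.normal(0.0, sigma_g, img.shape)  # additive Gaussian noise
    # shot noise approximated as Gaussian with signal-dependent std (assumption)
    out = out + rng.normal(0.0, 1.0, img.shape) * sigma_s \
              * np.sqrt(np.clip(out, 0, None))
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
\end{verbatim}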
\subsection{Dataset and evaluation metric}
There are two standard public datasets for color constancy task: the reprocessed~\cite{GehlerDataset} Color Checker Dataset~\cite{GehlerRBMS08} (termed as the Gehler dataset in this paper) and the NUS-8 Dataset~\cite{cheng2014illuminant}. The Gehler dataset has $568$ linear raw-RGB images captured by $2$ cameras and the NUS-8 dataset has $1736$ linear raw-RGB images captured by $8$ cameras.
In color constancy studies, three-fold cross validation is widely used for both datasets. Several standard metrics are reported in terms of angular error in degrees: the mean, median, and tri-mean of all errors, the mean of the lowest $25\%$ of errors, and the mean of the highest $25\%$ of errors.
\begin{center}
\begin{table}[t]
\resizebox{\columnwidth}{!}{
\begin{tabular}{ l c c c c c }
\hline
& Mean & Median & Tri. & Best-25\% & Worst-25\% \\
\hline
White-Patch~\cite{brainard1986analysis} & 10.62 & 10.58 & 10.49 & 1.86 & 19.45 \\
Edge-based Gamut~\cite{Barnard00} & 8.43 & 7.05 & 7.37 & 2.41 & 16.08 \\
Pixel-based Gamut~\cite{Barnard00} & 7.70 & 6.71 & 6.90 & 2.51 & 14.05 \\
Intersection-based Gamut~\cite{Barnard00} & 7.20 & 5.96 & 6.28 & 2.20 & 13.61 \\
Gray-World~\cite{buchsbaum1980spatial} & 4.14 & 3.20 & 3.39 & 0.90 & 9.00 \\
Bayesian~\cite{GehlerRBMS08} & 3.67 & 2.73 & 2.91 & 0.82 & 8.21 \\
NIS~\cite{GijsenijG11} & 3.71 & 2.60 & 2.84 & 0.79 & 8.47 \\
Shades-of-Gray~\cite{FinlaysonT04} & 3.40 & 2.57 & 2.73 & 0.77 & 7.41 \\
1st-order Gray-Edge~\cite{WeijerGG07} & 3.20 & 2.22 & 2.43 & 0.72 & 7.36 \\
2nd-order Gray-Edge~\cite{WeijerGG07} & 3.20 & 2.26 & 2.44 & 0.75 & 7.27 \\
Spatio-spectral (GenPrior)~\cite{ChakrabartiHZ12} & 2.96 & 2.33 & 2.47 & 0.80 & 6.18 \\
Corrected-Moment (Edge)~\cite{Finlayson13} & 3.03 & 2.11 & 2.25 & 0.68 & 7.08 \\
Corrected-Moment (Color)~\cite{Finlayson13} & 3.05 & 1.90 & 2.13 & 0.65 & 7.41 \\
Cheng et al.~\cite{cheng2014illuminant} & 2.92 & 2.04 & 2.24 & 0.62 & 6.61 \\
CCC (dist+ext)~\cite{Barron15} & 2.38 & 1.48 & 1.69 & 0.45 & 5.85 \\
Regression Tree~\cite{ChengPCB15} & 2.36 & 1.59 & 1.74 & 0.49 & 5.54 \\
DS-Net (HypNet + SelNet)~\cite{ShiLT16} & 2.24 & 1.46 & 1.68 & 0.48 & 6.08 \\
FFCC-4 channels~\cite{BarronT17} & 1.99 & \second{1.31} & \second{1.43} & \first{\textbf{0.35}} & 4.75 \\
\hline
AlexNet-FC4~\cite{HuWL17} & 2.12 & 1.53 & 1.67 & 0.48 & 4.78 \\
SqueezeNet-FC4~\cite{HuWL17} & 2.23 & 1.57 & 1.72 & 0.47 & 5.15 \\
IGTN (vanilla triplet loss)~\cite{XuLHLQ20} & 2.02 & 1.36 & - & 0.45 & 4.70 \\
IGTN (no triplet loss)~\cite{XuLHLQ20} & 2.28 & 1.64 & - & 0.51 & 5.20 \\
IGTN (no learnable histogram)~\cite{XuLHLQ20} & 2.15 & 1.52 & - & 0.47 & 5.28 \\
IGTN (full)~\cite{XuLHLQ20} & 1.85 & \first{\textbf{1.24}} & - & \second{0.36} & 4.58 \\
C4~\cite{YuCWQZJ20} & 1.96 & 1.42 & 1.53 & 0.48 & \second{4.40} \\
\hline
CLCC w/ Full-Aug & \first{\textbf{1.84}} & \second{1.31} & \first{\textbf{1.42}} & 0.41 & \first{\textbf{4.20}} \\
\hline
\end{tabular}
}
\caption{Angular error of various methods on the NUS-8 dataset. CLCC achieves the best results on the mean, tri-mean, and worst-25\% metrics, and comparable results on the others. Notably, although IGTN achieves the best result on the median metric, its model complexity is the largest.}
\label{NUS8Results}
\end{table}
\end{center}
\begin{center}
\begin{table}[t]
\resizebox{\columnwidth}{!}{
\begin{tabular}{ l c c c c c c }
\hline
& Mean & Median & Tri. & Best-25\% & Worst-25\% & Extra data \\
\hline
Gray World~\cite{buchsbaum1980spatial} & 6.36 & 6.28 & 6.28 & 2.33 & 10.58 & \\
General Gray World~\cite{WeijerGG07} & 4.66 & 3.48 & 3.81 & 1.00 & 10.58 & \\
White Patch~\cite{brainard1986analysis} & 7.55 & 5.68 & 6.35 & 1.45 & 16.12 & \\
Shades-of-Gray~\cite{FinlaysonT04} & 4.93 & 4.01 & 4.23 & 1.14 & 10.20 & \\
Spatio-spectral (GenPrior)~\cite{ChakrabartiHZ12} & 3.59 & 2.96 & 3.10 & 0.95 & 7.61 & \\
Cheng et al.~\cite{cheng2014illuminant} & 3.52 & 2.14 & 2.47 & 0.50 & 8.74 & \\
NIS~\cite{GijsenijG11} & 4.19 & 3.13 & 3.45 & 1.00 & 9.22 & \\
Corrected-Moment (Edge)~\cite{Finlayson13}& 3.12 & 2.38 & - & 0.90 & 6.46 & \\
Corrected-Moment (Color)~\cite{Finlayson13} & 2.96 & 2.15 & - & 0.64 & 6.69 & \\
Exemplar~\cite{JozeD12} & 3.10 & 2.30 & - & - & - & \\
Regression Tree~\cite{ChengPCB15} & 2.42 & 1.65 & 1.75 & 0.38 & 5.87 & \\
CNN~\cite{BiancoCS17} & 2.36 & 1.95 & - & - & - & \\
CCC (dist+ext)~\cite{Barron15} & 1.95 & 1.38 & 1.22 & 0.35 & 4.76 & \\
DS-Net(HypNet+SelNet)~\cite{ShiLT16} & 1.90 & 1.12 & 1.33 & 0.31 & 4.84 & \\
FFCC-4 channels~\cite{BarronT17} & 1.78 & 0.96 & 1.14 & 0.29 & 4.29 & \\
FFCC-2 channels~\cite{BarronT17} & 1.67 & 0.96 & 1.13 & 0.26 & 4.23 & +S \\
FFCC-2 channels~\cite{BarronT17} & 1.65 & \first{\textbf{0.86}} & 1.07 & 0.24 & 4.44 & +M\\
FFCC-2 channels~\cite{BarronT17} & 1.61 & \first{\textbf{0.86}} & \second{1.02} & \first{\textbf{0.23}} & 4.27 & +S +M\\
\hline
AlexNet-FC4~\cite{HuWL17} & 1.77 & 1.11 & 1.29 & 0.34 & 4.29 & \\
SqueezeNet-FC4~\cite{HuWL17} & 1.65 & 1.18 & 1.27 & 0.38 & 3.78 & \\
IGTN (vanilla triplet loss)~\cite{XuLHLQ20} & 1.73 & 1.09 & - & 0.31 & 4.25 & \\
IGTN (no triplet loss)~\cite{XuLHLQ20} & 1.78 & 1.13 & - & 0.34 & 4.31 & \\
IGTN (no learnable histogram)~\cite{XuLHLQ20} & 1.85 & 1.10 & - & 0.31 & 4.91 & \\
IGTN (full)~\cite{XuLHLQ20} & 1.58 & 0.92 & - & 0.28 & 3.70 & \\
C4 ~\cite{YuCWQZJ20} & \first{\textbf{1.35}} & \second{0.88} & \first{\textbf{0.99}} & 0.28 & \first{\textbf{3.21}} & \\
\hline
CLCC w/ Full-Aug & \second{1.44} & 0.92 & 1.04 & \second{0.27} & \second{3.48} & \\
\hline
\end{tabular}
}
\caption{Angular error of various methods on the Gehler dataset. The use of semantic data or meta-data is denoted by ``S'' or ``M''. The results show that SqueezeNet-FC4 combined with our approach, which keeps the same model complexity and uses no meta-data, achieves comparable performance.}
\label{GehlerResults}
\end{table}
\end{center}
\subsection{Evaluation}
\paragraph{Quantitative evaluation}
Following the evaluation protocol, we perform three-fold cross validation on the NUS-8 and the Gehler datasets.
We compare our performance with previous state-of-the-art approaches.
As shown in Fig.~\ref{fig:compare_with_other_methods}a, the proposed CLCC is able to achieve state-of-the-art mean angular error on the NUS-8 dataset, 17.5$\%$ improvements compared to FC4 with similar model size.
Other competitive methods, such as C4 and IGTN, use much more model parameters ($3\times$ and more than $200\times$) but give worse mean angular error.
Table~\ref{NUS8Results} shows comprehensive performance comparisons with recent methods on the NUS-8 dataset~\cite{cheng2014illuminant}. Our CLCC provides significant improvements over the baseline network SqueezeNet-FC4 across all scoring metrics and reaches the best mean metric, as well as the best worst-25\% metric.
This indicates that the proposed fully-supervised contrastive learning not only improves overall performance when massive training data is unavailable, but also improves robustness via effective contrastive pair construction. For the Gehler dataset, as shown in Fig.~\ref{fig:compare_with_other_methods}b, our CLCC stays competitive, with less than a $0.1$ performance gap behind the best performing approach C4~\cite{YuCWQZJ20}, whose model size is $3\times$ larger. Table~\ref{GehlerResults} shows the detailed performance of state-of-the-art methods on the Gehler dataset. Methods achieving better scores than CLCC either require substantially more complexity (C4) or utilize supplemental data (FFCC).
C4 has three times more parameters than ours, which may facilitate memorizing more sensor-specific features.
FFCC needs meta-data from the camera to reach the best median metric. If no auxiliary data is used, CLCC performs better than FFCC-4 channels on all metrics.
\paragraph{Ablation for color augmentation}
Recall that our proposed color augmentation methods for contrastive learning include Full-Aug and WB-Aug, as described in Section~\ref{sec:coloraug}. As shown in Table~\ref{ablation_augmentation}, even when the color checker is not successfully detected for full color mapping (Full-Aug), the reduced neutral color mapping (WB-Aug) still significantly decreases the mean angular error from $2.23$ to $1.93$ and the worst-case error from $5.15$ to $4.30$ over the SqueezeNet-FC4 baseline, substantial relative improvements of $13.5\%$ and $16.5\%$, respectively.
Furthermore, when Full-Aug is used, the mean angular error decreases further from $1.93$ to $1.84$, an additional relative improvement of $5.1\%$. This shows that correctly mapped chromatic colors for synthesizing contrastive pairs improve the quality of contrastive learning, resulting in an improved model.
\begin{table}[th]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcccc}
\hline
& Mean & Median & Best-25\% & Worst-25\% \\
\hline
FC4~\cite{HuWL17} & 2.23 & 1.57 & 0.47 & 5.15 \\
$+$ CLCC w/ WB-Aug & 1.93 & 1.38 & 0.44 & 4.30 \\
$+$ CLCC w/ Full-Aug & 1.84 & 1.31 & 0.41 & 4.20 \\
\hline
\end{tabular}
}
\caption{The results show CLCC is able to improve SqueezeNet-FC4 quality by contrastive learning with two different data augmentations on the NUS-8 dataset.}
\label{ablation_augmentation}
\end{table}
\begin{figure}[th]
\begin{center}
\includegraphics[width=\linewidth]{fig/perclustererror.png}
\end{center}
\caption{Per-cluster error metrics on the Gehler dataset. We show that CLCC achieves better performance on all clusters, especially the worst-case performance (i.e. worst-25\%). Notably, in the sparse data regime (cluster colored with \colorbox{pink!80}{pink} that contains only $16$ data points), CLCC trades best-case performance (i.e., best-25\%) with the worst-case one, leading to better robustness (i.e., lower test error standard deviation).}
\label{fig:quantitative_robustness}
\end{figure}
\paragraph{Worst-case robustness}
In this section, we are also interested in whether CLCC improves worst-case robustness. To examine robustness at a finer-grained level, we evaluate our model on $K$ groups of data on the Gehler dataset, obtained by clustering the illuminant labels with K-means; here we set $K = 5$. Each group represents different scene contents under similar illuminants. As shown in Fig.~\ref{fig:quantitative_robustness}, CLCC improves on all scoring metrics in all clusters (except for best-$25\%$ in the pink cluster). Remarkably, as the amount of data in a cluster decreases from high (e.g., the purple cluster) to low (e.g., the pink cluster), as shown by the data distribution on the top-left of Fig.~\ref{fig:quantitative_robustness}, the improvement in worst-case performance increases. In particular, in the region that suffers from data sparsity (e.g., $16$ data points in the pink cluster), CLCC significantly reduces the worst-case error from $3.24$ to $2.31$, a $28.7\%$ relative improvement.
This finding supports our contrastive learning design, which aims to learn illuminant-dependent features that are robust and invariant to scene content.
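This per-cluster analysis can be reproduced with a few lines of scikit-learn, as sketched below; the sketch assumes NumPy arrays of ground-truth illuminants and per-image angular errors and is not the exact evaluation script.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def per_cluster_errors(illuminants, errors, k=5, seed=0):
    """Cluster test images by illuminant label and report per-cluster statistics."""
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(illuminants)
    stats = {}
    for c in range(k):
        e = np.sort(errors[labels == c])
        q = max(1, len(e) // 4)
        stats[c] = {"count": len(e), "mean": e.mean(),
                    "best25": e[:q].mean(), "worst25": e[-q:].mean()}
    return stats
\end{verbatim}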
\section{Conclusion}
In this paper, we present CLCC, a contrastive learning framework for color constancy.
Our framework differs from conventional self-supervised contrastive learning on the novel fully-supervised construction of contrastive pairs, driven by our novel color augmentation.
We improve considerably over a previous strong baseline, achieving state-of-the-art or competitive results on two public benchmark datasets, without additional computational cost.
Our design of contrastive pairs allows the model to learn better illuminant features that are particularly robust to worst cases in data-sparse regions.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:giris}
In recent years, thanks to advances in natural language processing (NLP), a subfield of deep learning, as well as in the deep learning literature in general, significant progress has been made on problems such as machine translation, semantic analysis, and text classification.
With the word2vec method developed by Mikolov et al. \cite{Miokolov:2013}, words can be represented as multi-dimensional numerical vectors, called word embeddings, which capture the order in which words co-occur with other words in a sentence and the probabilities of such co-occurrences. Word-embedding-based text classification methods have been proposed in studies such as \cite{Kilimci:2019} and \cite{Sel:2019}.
Similarly, with the doc2vec method \cite{Mikolov:2014}, developed by Mikolov et al. in parallel with word2vec, it is possible to represent sentences and paragraphs as vectors as well. Combined with a classification method, doc2vec can also be used as a document classification model.
In \cite{Tang:2015}, Tang et al. showed that using a long short-term memory (LSTM) model for classifying sentences collected from Twitter outperforms syntactic parsers and external sentiment lexicons.
In \cite{Ayata:2017}, Ayata et al. classified texts written on Twitter on a per-sector basis, using vectors produced by a word embedding model together with support vector machine and random forest classifiers.
In \cite{Ertugrul:2018}, Ertuğrul et al. trained a bidirectional LSTM (bi-LSTM) model on sentence embedding vectors to classify films from their textual plot summaries, and showed that this model performs better than recurrent neural networks and logistic regression.
In \cite{Ciftci:2018}, Çiftçi et al. compared the LSTM method with analysis methods such as logistic regression and Naive Bayes in the field of Turkish sentiment analysis, and showed that the LSTM method yields better results.
Over the past few years, the much more recent BERT model \cite{Devlin:2018} has become a widely preferred language model for solving various natural language processing problems. There are recent studies, such as \cite{Gao:2019} and \cite{Acikalin:2020}, on the use of BERT for sentiment analysis.
In this work, we study the chat conversations between visitors of the AdresGezgini A.Ş. website (hereafter referred to simply as the company) and the company's customer representatives. In order to make sense of the queries posed by page visitors in these conversations, we manually labeled these sentences and used this data to train three classification models built with doc2vec, LSTM, and BERT, whose uses were exemplified above. We compared the Doc2Vec and LSTM models with the much more recent BERT model.
\section{The Dataset Used} \label{sec:data}
The chat conversations that took place between customer representatives and site visitors through the chat interface on the company website between 2014 and 2020 were examined in detail, and 13794 chat conversations that proceed in a proper dialogue format were identified. Then, the input sentences entered by visitors in these conversations were extracted and manually labeled with 10 predefined categories. The questions received by our company, which operates in the field of digital advertising, are generally about digital advertising services such as advertising on different platforms, having a website designed, SEO, and so on. Some example sentences labeled with categories can be seen in Table~\ref{tablo:ornekler}.
\begin{table}[h!]
\centering
\caption{\textsc{Example sentences and their assigned labels (visitor queries shown in the original Turkish)}}
\label{tablo:ornekler}
\begin{tabular}{|l|l|}
\hline
\textbf{Label} & \textbf{Example Sentence}\\
\hline
\hline
Category 0 & sitemin trafiğini arttırmak için reklam vermeyi düşünüyorum\\ & internet üzerinden reklamlar vermek istiyorum\\
& farklı yerlerden nasıl reklam verebiliyoruz\\
\hline
Category 1 & bizim internet sayfamız yok nasıl web adresi verebilirim\\
& şirketim için web sitesi yaptırmak istiyorum\\
& web tasarımı yaptıracağım yardımcı olabilir misiniz\\
\hline
Category 2 & google şirket reklamı vermek istiyordum \\
& google üzerinden reklam vereceğim\\
& arama sonuçlarında üst sırada çıkmak istiyorum\\
\hline
Category 3 & facebook reklamlarını kullanmak istiyoruz\\
& şirketim için facebookta tanıtım yapacağım\\
& facebook reklam vermek istiyorum\\
\hline
Category 4 & mağazamın ınstagram hesabı için reklam vermeyi düşünüyorum\\ & instagram reklamları hakkında bilgi alacağım\\
& instagramda nasıl tanıtım yapabiliriz\\
\hline
Category 5 & youtube üst sırada çıkmak için reklam vermek istiyorum\\
& youtube reklam nasıl oluyor\\
& internetten youtube a reklam açacağım\\
\hline
Category 6 & seo hizmetiniz hakkında bilgi verebilir misiniz\\
& siteme seo yaptıracağım\\
& parayla seo yapıyor musunuz \\
\hline
Category 7 & benimle iletişime geçer misiniz\\
& müşteri temsilcisiyle görüşeceğim\\
& beni arar mısınz hemen\\
\hline
Category 8 & iyi günler dükkanımı haritaya kaydetmek istiyorum\\
& google harita kaydı nasıl yapılıyor\\
& işletme harita kaydı oluşturacağım\\
\hline
Category 9 & ücretlendirme politikanız hakkında bilgi almak istiyordum\\ & fiyatlar hakkında bilgi alabilir miyim\\
& reklam ücretleriniz ne kadar\\
\hline
\end{tabular}
\end{table}
The distribution of our data across the labels is shown in Figure~\ref{img:dataDist}. As can be seen from the figure, the distribution of the dataset across the labels is highly imbalanced. Setting aside 30\% of the total dataset for testing, we studied the methods detailed in the next section using the remaining 9651 training samples.
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{data}
\caption{Distribution of the samples in the dataset across the labels. As can be seen from the plot, the data distribution is highly imbalanced.}
\label{img:dataDist}
\end{figure}
\section{Proposed Methods} \label{sec:methods}
\subsection{The Doc2Vec Method} \label{subsec:doc2vec}
The doc2vec method proposed in Mikolov's work \cite{Mikolov:2014} uses a given text corpus and probability computations based on how the words, sentences, and paragraphs in the corpus co-occur, allowing us to represent the text samples in the corpus as multi-dimensional numerical vectors, just as in word embeddings \cite{Miokolov:2013}.
In our work, we trained a doc2vec model using the training dataset. We then used the resulting embedding vector obtained for each sentence sample to train a multinomial logistic regression (MNLR) classification model.
Before training the doc2vec model, the dataset was stripped of punctuation, uppercase letters were converted to lowercase, and stop words were removed using the NLTK library. When used with this dataset, the performance of this model was observed to be low.
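A minimal sketch of this pipeline with gensim and scikit-learn is given below. The hyperparameters (vector size, number of epochs) and the variable names \texttt{train\_texts} and \texttt{train\_labels} are illustrative assumptions rather than the exact settings of our experiments.

\begin{verbatim}
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# train_texts: pre-processed sentences, train_labels: integer category labels
docs = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(train_texts)]
d2v = Doc2Vec(docs, vector_size=100, min_count=2, epochs=40)

X_train = [d2v.dv[i] for i in range(len(train_texts))]   # one vector per sentence
clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(X_train, train_labels)

# Inference: infer a vector for an unseen sentence, then classify it.
vec = d2v.infer_vector("google reklam vermek istiyorum".split())
predicted_category = clf.predict([vec])[0]
\end{verbatim}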
\subsection{The LSTM Method} \label{subsec:lstm}
The long short-term memory (LSTM) architecture, first proposed by Hochreiter et al. in \cite{Hochreiter:1997}, plays an important role in learning temporal dependencies in data.
When using this model, 6872 distinct frequently used words in our dataset were identified and each was mapped to an integer (tokenization). Taking into account the word count of the longest sentence in the dataset (250), we padded shorter sentences with a $<PAD>$ token to obtain training inputs of length 250. We connected an embedding layer, which converts the words represented by the integers in our training inputs into 100-dimensional embedding vectors, to an LSTM layer through a dropout layer, and trained this architecture with the labels corresponding to our inputs. The performance of this architecture, which consists of approximately 5 million parameters, was observed to be worse than the BERT model described in the next section, but better than the doc2vec model.
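A sketch of this architecture in Keras is shown below. The vocabulary size, sequence length, and embedding dimension follow the description above, while the LSTM width and dropout rate are illustrative assumptions.

\begin{verbatim}
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 6872, 250, 10

tokenizer = Tokenizer(num_words=VOCAB_SIZE, oov_token="<UNK>")
tokenizer.fit_on_texts(train_texts)
X = pad_sequences(tokenizer.texts_to_sequences(train_texts),
                  maxlen=MAX_LEN, padding="post")        # index 0 acts as <PAD>
y = np.array(train_labels)

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB_SIZE + 1, output_dim=100),
    layers.Dropout(0.5),                                  # assumed dropout rate
    layers.LSTM(128),                                     # assumed LSTM width
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, validation_split=0.1, epochs=10, batch_size=64)
\end{verbatim}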
\subsection{The BERT Method} \label{subsec:bert}
BERT, a transformer-based language model that recently entered the natural language processing literature with the work of Devlin et al. \cite{Devlin:2018}, has started to be preferred over earlier models for solving various problems. Within the scope of our study, we also tested the performance of the BERT model on our classification problem, alongside the Doc2Vec and LSTM methods described in Sections~\ref{subsec:doc2vec} and \ref{subsec:lstm}, respectively. Since solutions built with BERT require a pre-trained model, this approach can be viewed as transfer learning. Accordingly, we chose the BERT-BASE-TURKISH-UNCASED model \cite{Schweter:2020}, pre-trained on the Turkish language. This model was trained on a rather large Turkish corpus of about 35GB. There are fine-tuned versions of this model that achieve successful results in Turkish NLP. We tried the model on our own classification problem after a fine-tuning stage performed with our own dataset. As we show in the next section, we obtained the best test performance with this model.
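A sketch of the fine-tuning step with the HuggingFace \texttt{transformers} library is shown below; the checkpoint identifier (\texttt{dbmdz/bert-base-turkish-uncased}) and the training hyperparameters are assumptions for illustration, and in practice the data would be iterated over in mini-batches.

\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "dbmdz/bert-base-turkish-uncased"   # assumed id of the pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=10)

enc = tokenizer(train_texts, padding=True, truncation=True,
                max_length=64, return_tensors="pt")
labels = torch.tensor(train_labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):                       # a few epochs suffice for this task
    optimizer.zero_grad()
    out = model(**enc, labels=labels)        # use mini-batches for the full dataset
    out.loss.backward()
    optimizer.step()
\end{verbatim}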
\section{Experimental Results}
We attempted to solve the classification problem defined in Section~\ref{sec:giris} with the three methods described in Section~\ref{sec:methods}. In this section, we compare the classification performance of the methods we tried. The classification performance of the methods is shown as confusion matrices in Figure~\ref{img:confmxs} and as a table in Table~\ref{tablo:yontemler}. We computed the performance separately for the training and test datasets. In addition, we computed the F1 metric, which is used to measure performance in categorical classification problems, separately for each label class. Although a single F1 score is sufficient to report the performance of binary classification methods, in multi-class settings such as ours the F1 score must be computed separately for each class. In this case, two basic approaches are used to compute an overall F1 score. If the number of samples in the dataset is roughly equal for each class, the arithmetic mean of the per-class F1 scores can be used as a metric; in Table~\ref{tablo:yontemler} we report this score as the average F1 score. However, for datasets with an imbalanced sample distribution such as ours, taking a weighted average of the per-class F1 scores, with weights based on the number of samples, gives a much more accurate picture. In our table, we report this score as the weighted average F1 score.
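Both aggregate scores can be computed directly with scikit-learn, as sketched below; \texttt{y\_true} and \texttt{y\_pred} stand for the per-sample ground-truth and predicted category indices.

\begin{verbatim}
from sklearn.metrics import f1_score

per_class_f1 = f1_score(y_true, y_pred, average=None)        # one score per category
avg_f1       = f1_score(y_true, y_pred, average="macro")     # unweighted mean
weighted_f1  = f1_score(y_true, y_pred, average="weighted")  # weighted by class support
\end{verbatim}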
\begin{figure*}[h!]
\centering
\subfloat[][Confusion matrix of the Doc2Vec method]{\includegraphics[width=0.32\textwidth]{CMDOC2VEC.pdf}\label{img:confmxdoc2vec}}
\subfloat[][Confusion matrix of the LSTM method]{\includegraphics[width=0.32\textwidth]{CMLSTM.pdf}\label{img:confmxlstm}}
\subfloat[][Confusion matrix of the BERT method]{\includegraphics[width=0.32\textwidth]{CMBERT.pdf}\label{img:confmxbert}}
\caption{Color-coded confusion matrices of the models described in Section~\ref{sec:methods}. Darker colors represent higher accuracy percentages. \protect\subref{img:confmxdoc2vec} Confusion matrix of the Doc2Vec method described in Section~\ref{subsec:doc2vec}. \protect\subref{img:confmxlstm} Confusion matrix of the LSTM method described in Section~\ref{subsec:lstm}.
\protect\subref{img:confmxbert} Confusion matrix of the BERT method described in Section~\ref{subsec:bert}.
}
\label{img:confmxs}
\end{figure*}
\begin{table*}[h!]
\centering
\caption{\textsc{Accuracy Comparison of the Proposed Methods}}
\label{tablo:yontemler}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Training Accuracy} & \textbf{Test Accuracy} & \textbf{Avg. F1 Score} & \textbf{Weighted Avg. F1 Score}\\
\hline
\hline
Doc2Vec + MNLR (Section \ref{subsec:doc2vec}) & 0.67 & 0.64 & 0.60 & 0.28\\
\hline
LSTM (Section \ref{subsec:lstm}) & 0.97 & 0.90 & 0.90 & 0.80\\
\hline
BERT (Section \ref{subsec:bert}) & \textbf{0.97} & \textbf{0.93} & \textbf{0.93} & \textbf{0.87} \\
\hline
\end{tabular}
\end{table*}
Doc2Vec yöntemi, her ne kadar uzun cümlelerin ve paragrafların anlamsal olarak sınıflandırılmasında başarılı bir yöntem olsa da, buradaki gibi kısa konuşma cümlelerinin sınıflandırılmasında düşük performans göstermiştir. Yöntemin eğitim ve test veri setleri için hesaplanan doğruluk oranları yüzde 60 dolaylarında çıkmıştır. Yine benzer şekilde F1 skorlarının, özellikle ağırlıklı F1 skoru değerinin çok düşük olduğu gözlemlenmiştir.
The architecture proposed as part of the LSTM method described in Section \ref{subsec:lstm} produced considerably better accuracy on this problem than the Doc2Vec method. The learning curves over the first 10 epochs are shown in Figure \ref{img:lstmTraining}. Since the other methods used in our study required no more than 3 iterations of training for this problem, learning curves are reported only for this model. The LSTM model also reached relatively high values for both F1 scores. However, as the table shows, the approach built on the more recent BERT model, in which a pre-trained BERT model is fine-tuned \cite{Schweter:2020}, reached the highest accuracy. Although the LSTM method attained roughly the same accuracy as BERT on the training data set, BERT was more successful on the test data set. Accordingly, BERT also produced the best F1 scores among the three methods.
To show the performance of the methods for each individual label, the confusion matrices used in the F1 computations can be examined, which makes the relative performance of the methods visually apparent. Inspecting the confusion matrices plotted for each method in Figure \ref{img:confmxs}, where higher accuracy is shown with darker colors, the values along the diagonal of the BERT confusion matrix are higher, and hence darker, than those of the other methods. As the weighted F1 scores reported in Table \ref{tablo:yontemler} already indicate, BERT is the method with the highest accuracy for every label class.
\begin{figure*}
\centering
\subfloat[][Loss Curve of the LSTM Method]{\includegraphics[width=0.45\textwidth]{LSTM-Loss.pdf}\label{img:lstmLoss}}
\subfloat[][Accuracy Curve of the LSTM Method]{\includegraphics[width=0.45\textwidth]{LSTM-Accuracy.pdf}\label{img:lstmAcc}}
\caption{Curves obtained by training the LSTM model described in Section \ref{subsec:lstm} for 10 epochs. \protect\subref{img:lstmLoss} Loss as a function of the epoch for the training and test data sets. \protect\subref{img:lstmAcc} Accuracy as a function of the epoch for the training and test data sets. }
\label{img:lstmTraining}
\end{figure*}
\section{Conclusions}
Classifying short sentences that do not contain similar words can easily be accomplished with traditional methods. We found that turning to deep learning methods is a sound approach for correctly classifying short sentences that belong to different categories yet contain very similar words. The analyses carried out here show that classification with BERT performs this task with higher accuracy than the other methods examined. In future work, the performance of the same models can be compared over many more categories and over sentences and sentence groups of different lengths.
For the implementation we used the Python language and its relevant libraries, together with the Jupyter Notebook environment for editing and execution. The code can be cloned from our GitHub repository \cite{AdresGezgini:2021} and easily tested on similar automatic labeling problems.
\section*{Acknowledgment}
This work was carried out within the scope of project no.\ 3190585, ``A General-Purpose Chatbot Application Capable of Generating Meaningful Dialogue with Machine Learning'', supported under the TÜBİTAK TEYDEB 1501 program.
\bibliographystyle{IEEEbib}
\section{Introduction}
The last few decades have seen growing interest in cosmic magnetic fields on several
fronts~\cite{Vachaspati:2020blt}. Several ideas have been proposed that can generate magnetic fields
in cosmology, some of which are directly tied to known particle
physics~\cite{Vachaspati:1991nm,Cornwall:1997ms,Vachaspati:2001nb} and its possible
extensions~\cite{Joyce:1997uy,Forbes:2000gr,Stevens:2012zz,Miniati_2018,Vachaspati:2020blt}.
The magneto-hydrodynamical (MHD) evolution of cosmological magnetic fields is now understood quite
well on the basis of analytical arguments~\cite{Banerjee:2004df,Jedamzik:2010cy}
and direct simulations~\cite{Brandenburg:2017neh}.
There are claims for an indirect lower bound on the cosmological magnetic field
strength~\cite{Neronov_Vovk_2010_science,Essey:2010nd,Dolag:2010ni,Wood:2017kcy,Biteau:2018tmv},
though not without debate~\cite{Broderick:2011av,Arlen:2012iy}, and more direct
evidence~\cite{Chen:2014rsa}.
Concurrently there are claims of a parity violating signature that can be used to
measure the magnetic field helicity spectrum~\cite{Tashiro:2013ita,Chen:2014qva} though
with no significant detections as yet~\cite{asplund2020measurement,kachelriess2020searching}.
In parallel to these developments, motivated by heavy-ion collision
experiments~\cite{Kharzeev:2011vv}, there has been renewed interest in chiral
effects in plasmas, namely the chiral-magnetic \cite{Vilenkin:1980fu} and
chiral-vortical \cite{Vilenkin:1979ui} effects (CME and CVE respectively).
The CME and CVE have also been applied to the evolution of cosmological and
astrophysical magnetic fields
\cite{Joyce:1997uy,Boyarsky:2011uy,Tashiro:2012mf,Dvornikov:2012rk,Dvornikov:2013bca,
Dvornikov:2016jth,Brandenburg:2017rcb,Masada:2018swb}.
In this paper we discuss how CME and CVE can effectively arise
in standard cosmology with standard particle interactions due to the parity-violating
decays of heavy leptons and quarks.
The basic idea is that the standard model has a number of unstable particles
that decay at various cosmological epochs, primarily due to the weak interactions.
Since the weak interactions violate parity, the decay products are chiral and
this provides a net particle helicity to the cosmological medium.
The net particle helicity in principle leads to electric currents via the CME that
can generate magnetic helicity. However, accounting only for decays of standard
model particles, the net particle helicity is too small to significantly affect
cosmological magnetic fields and their helicity.
We start by describing the physical effect in some detail in the context of the tau lepton in
Sec.~\ref{physics}, where we also estimate the induced electric currents.
We find an upper bound to the magnetic helicity that can be generated due to
chiral effects in Sec.~\ref{maghel}.
Our conclusions are summarized and discussed in Sec.~\ref{conclusions}.
\section{Chirality production in tau decays}
\label{physics}
To illustrate the physics of the effect, in this section we will discuss the decay of tau leptons
in the background of a magnetic field and fluid vorticity. Except for small differences, the
physics carries over to the case of decays of other particles.
\subsection{Particle decay}
\label{particledecays}
Tau leptons decay into electrons (or muons) and neutrinos
\begin{equation}
\tau^- \to e^- + \nu_\tau + {\bar \nu}_e
\label{taudec}
\end{equation}
and anti-tau into positrons and neutrinos
\begin{equation}
\tau^+ \to e^+ + {\bar \nu}_\tau + \nu_e
\label{antitaudec}
\end{equation}
These decays violate parity since they proceed primarily by the weak interactions.
Therefore the tau predominantly decays into a relativistic left-handed electron, while
an anti-tau decays into a relativistic right-handed positron.
Due to the lepton asymmetry of the universe there are more taus than anti-taus,
and the cosmological medium gains net left-handed chirality as taus decay.
The decay product electrons are chiral since they are produced by the weak
interactions, but chirality is not preserved for massive particles. Instead, as emphasized
in Ref.~\cite{Grabowska:2014efa} in the context of supernovae and neutron stars,
chirality is nearly equal to helicity for ultrarelativistic particles, so it is
better to think of the final electrons as being in a definite helicity state. Helicity can only
change due to particle interactions. We shall adopt this view in what follows.
The $\tau$ mass is $m_\tau = 1777~{\rm MeV}$ and the $\tau$ lifetime in its rest frame is
$\tau_\tau= 2.9\times 10^{-13}~{\rm s}$. However, the decaying taus are constantly reproduced
by reactions inverse to
(\ref{taudec}), (\ref{antitaudec}),\footnote{Tau-particles are also produced and destroyed in
scattering reactions like $\tau^- + {\nu}_e \to e^- + \nu_\tau$. We disregard them in what follows,
since they do not change the order of magnitude of the effect.} so the number density of taus, $n_\tau$,
remains comparable to that of photons until the time
\begin{equation}
t_\tau \sim 10^{-7}~{\rm s},
\label{ttaudecay}
\end{equation}
when the cosmic temperature drops to $T\sim m_\tau$. At later times $n_\tau$ decreases exponentially.
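As a rough consistency check (a sketch only; the $O(1)$ prefactor and the effective number of relativistic degrees of freedom $g_*$ are assumed), the standard radiation-era relation $t \simeq 0.3\, g_*^{-1/2}\, m_{\rm P}/T^2$ indeed gives $t \sim 10^{-7}~{\rm s}$ when $T$ drops to $\sim m_\tau$:
\begin{verbatim}
# Rough radiation-era estimate of the cosmic time at T ~ m_tau.
hbar   = 6.58e-25    # GeV s, converts 1/GeV to seconds
m_P    = 1.22e19     # Planck mass in GeV
m_tau  = 1.777       # GeV
g_star = 80.0        # assumed relativistic degrees of freedom near T ~ 2 GeV

t = 0.3 / g_star**0.5 * m_P / m_tau**2 * hbar
print(t)   # ~ 1e-7 s, consistent with the estimate above
\end{verbatim}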
The particle helicity density, $n_\chi$, is produced in tau decays and is dissipated by
helicity flipping scatterings and due to the chiral anomaly. The latter is proportional
to $\alpha^3 B^2$~\cite{Figueroa:2017hun}, where $\alpha \approx 1/137$ is the fine structure
constant and $B$ the magnetic field strength, and is much slower than helicity flipping
scatterings for vanishing or weak magnetic fields. We will ignore the anomalous flipping
for now but will discuss it in Sec.~\ref{Bgen} when we consider the effect of particle
chirality on the generation of magnetic fields.
The evolution of particle helicity density can be described by the kinetic equation in the
relaxation time approximation,
\begin{equation}
\frac{d}{dt} (a^3 n_\chi) =
\frac{a^3}{\tau_d} (\delta n_\tau - \delta n_\tau^{\rm eq}) - \frac{a^3 n_\chi}{\tau_\chi},
\label{1}
\end{equation}
where
\begin{equation}
\delta n_\tau = n_\tau^+ - n_\tau^-,
\end{equation}
$n_\tau^-$ and $n_\tau^+$ are the densities of tau and anti-tau particles, respectively,
$\delta n_\tau^{\rm eq}$ is the equilibrium value of $\delta n_\tau$,
$\tau_d \sim (T/m_\tau)\tau_\tau$
is the decay time of taus (assuming that $T>m_\tau$ and with time dilation taken into
account) and $\tau_\chi^{-1}$ is the electron
helicity flipping rate. At $T\gg m_e$, the helicity flipping rate
is suppressed by a factor $m_e^2/T^2$ compared to the scattering rate
$\alpha T$~\cite{Boyarsky:2020cyk} (earlier estimates of the scattering rate
were suppressed by another factor of $\alpha$~\cite{Grabowska:2014efa}),
\begin{equation}
\tau_\chi\sim \frac{1}{\alpha T} \frac{T^2}{m_e^2}.
\label{tauchi}
\end{equation}
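A rough numerical sketch (with $O(1)$ factors dropped and the number of relativistic species $N$ assumed) confirms that this helicity-flipping time is much shorter than the Hubble time $t \sim m_{\rm P}/(\sqrt{N}\, T^2)$ for temperatures between $\sim m_\tau$ and $\sim 100$~GeV, which is the regime used below:
\begin{verbatim}
# tau_chi / t_Hubble for a few temperatures (GeV); O(1) factors are dropped.
alpha = 1 / 137.0
m_e   = 0.511e-3     # electron mass in GeV
m_P   = 1.22e19      # Planck mass in GeV
N     = 100.0        # assumed number of relativistic species

for T in (1.777, 10.0, 100.0):
    tau_chi  = T / (alpha * m_e**2)          # helicity-flip time, in 1/GeV
    t_hubble = m_P / (N**0.5 * T**2)         # cosmic time, in 1/GeV
    print(T, tau_chi / t_hubble)             # << 1 throughout this range
\end{verbatim}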
The excess of anti-taus over taus, $\delta n_\tau$, decreases due to tau decay and
is described by the equation,
\begin{equation}
\frac{d}{dt} (a^3 \delta n_\tau)=\frac{a^3}{\tau_d} (\delta n_\tau^{\rm eq} - \delta n_\tau) .
\label{2}
\end{equation}
At temperatures below the electroweak phase transition, $T\lesssim T_{\rm EW}\sim 100$~GeV, we have
$\tau_d \ll t$, where $t$ is the cosmic time\footnote{This is easily verified using the relation
$t\sim m_{\rm P}/\sqrt{N} T^2$, where $m_{\rm P}$ is the Planck mass and $N$ is the number of particle species
in equilibrium.}.
This means that the equilibrium density of taus establishes very quickly (compared to the Hubble time),
and the approximate solution of (\ref{2}) is
$\delta n_\tau\approx \delta n_\tau^{\rm eq}$. Inserting \eqref{2} in \eqref{1} and then
using $\delta n_\tau\approx \delta n_\tau^{\rm eq}$ we have
\begin{equation}
\frac{d}{dt} (a^3 n_\chi) = -\frac{d}{dt}\left(a^3 \delta n_\tau^{\rm eq}\right) - \frac{a^3 n_\chi}{\tau_\chi}.
\label{nchieq}
\end{equation}
With a given $\delta n_\tau^{\rm eq}$, this equation can be solved in quadratures, but we shall instead find
an approximate solution. Since we are in the regime where $\tau_\chi \ll t$, the term on the left-hand side
can be neglected and we obtain
\begin{equation}
n_\chi\approx -\tau_\chi T^3\frac{d}{dt}\left(\frac{\delta n_\tau^{\rm eq}}{T^3}\right),
\label{nchi}
\end{equation}
where we have used $aT\approx {\rm const}$.
Once we determine the equilibrium excess of anti-taus over taus, denoted by $\delta n_\tau^{\rm eq}$,
we can estimate the chirality density of the universe due to tau decays using \eqref{nchi}.
\subsection{Equilibrium density}
The equilibrium density $\delta n_\tau^{\rm eq}$ is given by
\begin{equation}
\delta n_\tau^{\rm eq}=\frac{1}{2\pi^2}\int_0^\infty dp p^2 \left[f\left(\frac{E-\mu_\tau}{T}\right)
-f\left(\frac{E+\mu_\tau}{T}\right) \right],
\label{integral}
\end{equation}
where $f(x)=(e^x +1)^{-1}$ is the Fermi distribution, $E=\sqrt{p^2 +m_\tau^2}$, and $\mu_\tau$ is the
chemical potential of $\tau$ particles. At $T\gg m_\tau, \mu_\tau$ we can expand the integrand in
Eq.~(\ref{integral}) in powers of $m_\tau^2/p^2$ and $\mu_\tau/T$. The integrations are then easily
performed and we find
\begin{equation}
\delta n_\tau^{\rm eq}\approx \frac{\mu_\tau T^2}{6} \left(1-\frac{3m_\tau^2}{2\pi^2 T^2}\right).
\label{3}
\end{equation}
We assume that the baryon and/or lepton asymmetry of the universe was generated at $T\gg T_{EW}$ by
some interactions beyond the Standard Model, for example by $(B-L)$-violating leptoquark decays. This
asymmetry was then redistributed between the Standard Model leptons and quarks by sphaleron processes,
so at $T\ll T_{EW}$ we expect the chemical potentials of light baryons and leptons to be of the order
$\mu/T\sim \eta_B$ ~\cite{Kuzmin:1987wn,Harvey:1990qw}, where $\eta_B \sim 10^{-9}$ is the observed
baryon to photon ratio. In the high-temperature regime, when $T$ is large compared to all
relevant particle masses, we have $\mu_\tau /T\approx {\rm const}$, with a mass correction
${\cal O}(m^2/T^2)$~\cite{Bochkarev:1989kp}. Then Eq.~(\ref{3}) becomes
\begin{equation}
\frac{\delta n_\tau^{\rm eq}}{T^3}\approx C\eta_B - K\eta_B\frac{m_\tau^2}{T^2},
\label{33}
\end{equation}
where $C$ and $K$ are ${\cal O}(1)$ numerical constants\footnote{This estimate assumes that
taus are the heaviest particles present in equilibrium at temperature $T$. If a heavier particle is
present in equilibrium, it too will contribute to the mass correction and may change the estimate.}.
The mass correction term in (\ref{33}) can be qualitatively understood as follows.
As the temperature decreases, it becomes energetically favorable to transfer the conserved $\tau$-lepton
number from $\tau$-particles to $\tau$-neutrinos. The excess $\tau$-lepton number is also decreased as
a result~\cite{Bochkarev:1989kp}.
Substituting Eq.~(\ref{33}) in (\ref{nchi}) we obtain
\begin{equation}
n_\chi\approx -3K\eta_B \tau_\chi m_\tau^2 {\dot T}.
\end{equation}
With ${\dot T}=-T/2t$, $t\sim m_{\rm P}/T^2$ and $\tau_\chi$ from Eq.~(\ref{tauchi}), this gives (omitting numerical factors)
\begin{equation}
n_\chi \sim \frac{\eta_B m_\tau^2}{\alpha m_e^2}\frac{T}{m_{\rm P}} n_\gamma ,
\label{nchi2}
\end{equation}
where $n_\gamma\sim T^3$ is the photon number density.
This estimate was derived assuming $T\gg m_\tau$, but it still applies at $T\sim m_\tau$.
Reactions (\ref{taudec}), (\ref{antitaudec}) remain in equilibrium when $T$ drops well below
$m_\tau$. In this regime, $\delta n_\tau$ and $n_\chi$ decrease exponentially.
Similar formulae can be written down for the decay of other unstable particles. The largest
helicity is injected by the decay of the heaviest particle into the lightest particle and at
the highest temperature.
\section{Magnetic helicity}
\label{maghel}
As noted in Ref.~\cite{Brandenburg:2017rcb},
the maximum magnetic helicity that can be obtained from particle helicity can be
derived from the chiral anomaly equation, which can be written as a conservation
law,
\begin{equation}
n_\chi + \frac{4\pi}{\alpha} h = {\rm constant},
\end{equation}
where $h = \langle {\bf A}\cdot {\bf B} \rangle$ is the magnetic helicity. Assuming
that the initial magnetic helicity and the final particle helicity vanish, we get
\begin{equation}
h_{\rm max} = \frac{\alpha}{4\pi} n_\chi
\sim \frac{\eta_B m_\tau^2}{4\pi m_e^2}\frac{T}{m_{\rm P}} n_\gamma
\end{equation}
where we have used \eqref{nchi2}. We compare $h_{\rm max}$ to magnetic
helicity that could be induced due to baryogenesis~\cite{Cornwall:1997ms,Vachaspati:2001nb},
\begin{equation}
h_B \sim \frac{\eta_B}{\alpha} n_\gamma \sim 10^{-5} \, {\rm cm}^{-3}
\sim 10^{-45}\, {\rm G}^2\, {\rm Mpc}
\end{equation}
where we have used the known cosmic baryon number density and are
using natural units.
Then
\begin{equation}
h_{\rm max} \sim \frac{\alpha m_\tau^2}{4\pi m_e^2}\frac{T}{m_{\rm P}} h_B
\sim 10^{-10} h_B
\end{equation}
where we have used $T \sim 100\, {\rm GeV}$ in the numerical estimate. Even if the
decay of top quarks with mass $\sim 175\, {\rm GeV}$ to down quarks with mass
$\sim 1\, {\rm MeV}$ is considered, $h_{\rm max} \sim 10^{-6} h_B$. Comparing
to observations, even with the most conservative lower bound of $10^{-19}\, {\rm G}$
on Mpc scales, we get an estimate for the observed helicity $\sim 10^{-38}\, {\rm G}^2\, {\rm Mpc}$, many orders of magnitude larger than $h_{\rm max}$.
\section{Conclusions}
\label{conclusions}
We have shown that the decays of certain {\it standard model} particles can lead to a chiral
cosmological medium around the epoch of the electroweak phase transition. The final
result for the chiral asymmetry due to tau-lepton decays is given in \eqref{nchi2}. However,
the asymmetry is suppressed by the baryon to entropy ratio ($\eta_B \sim 10^{-9}$) and
the effect on magnetic field helicity generation is very weak as we have shown in
Sec.~\ref{maghel}. Nonetheless it is of interest that the cosmological medium
was chiral at the earliest moments even within the standard model of particle physics.
\section{Acknowledgements}
We thank the participants of the Nordita workshop on ``Quantum Anomalies
and Chiral Magnetic Phenomena'', especially Axel Brandenburg and Kohei Kamada
for feedback. We also thank Matt Baumgart, Cecilia Lunardini, Igor Shovkovy, and
Tracy Slatyer for discussions. TV's work is supported by the U.S. Department of Energy,
Office of High Energy Physics, under Award No.~DE-SC0019470 at Arizona State
University. AV is supported by the National Science Foundation Award No.~1820872.
\bibliographystyle{aps}
\section{Introduction}
By ``string'' we mean a sequence of symbols from some alphabet $\mathcal{A}$. There are many natural examples of strings. This document is a string in the Roman alphabet. We conceptualize DNA and computer code as strings either in the alphabet of nucleotides, or ones and zeros. Some strings are more complicated than others. For example the infinite decimal expansion of the number 0 is simple. In genetics, genes often occur in complex regions of the genetic code, whereas other regions of DNA may be monotonous or repetitious.
Formally, if $s$ is a sequence of symbols from $\mathcal{A}$, let $p_s(n)$ denote, for a positive integer $n$, the number of distinct substrings of $s$ of length $n$. This assumes that the length of $s$ is at least $n$. By basic combinatorics,
\begin{equation}
1 \leq p_s(n) \leq |\mathcal{A}|^n
\label{eqn:basiccomplexity}
\end{equation}
The function $p_s(\cdot)$ as described above is a commonly used measure of complexity, with applications in dynamical systems, automata theory, biology, and other domains \cite{berthe2010combinatorics}. It features as a principal object in some well known theorems, such as the Morse-Hedlund theorem (see Section \ref{complexVC}.)
There is also a tradition of measuring the complexity of set families. For example characterizing the families of events for which frequency converges uniformly to probability (across the family) has been of interest in statistics since the 1970's \cite{dudley2014uniform}. This viewpoint has found applications in the theoretical foundations of machine learning by way of the Vapnik Chervonenkis dimension. Surprisingly a somewhat parallel investigation in mathematical logic examining uniformly definable families has turned up interesting connections to the ``wildness'' of the semantics of a formal theory \cite{simon2015guide}.
The formal description of Vapnik Chervonenkis dimension is as follows. Consider a set $U$ and a collection of subsets $\mathcal{C} \subseteq \{c : c \subseteq U\}$. For a given $B \subseteq U$, we define $\mathcal{C} \cap B = \{c\cap B: c \in \mathcal{C}\}$. The following is a complexity function somewhat analogous to the string complexity function:
\begin{equation}
m_{\mathcal{C}}(n) = \sup \{|\mathcal{C} \cap B| : B\subseteq U \text{ and } |B|=n\}
\label{eqn:vccomplexity}
\end{equation}
It was independently discovered by Sauer, Vapnik and Chervonenkis, and Shelah and Perles that for any $\mathcal{C}$, either $m_{\mathcal{C}}(n)$ is always equal to its maximum possible value of $2^n$, or else the function is bounded by a polynomial in $n$. For a given $\mathcal{C}$, the largest $n$ for which $m_{\mathcal{C}}(n) = 2^n$ is known as its Vapnik-Chervonenkis dimension (if $m_{\mathcal{C}}(n) = 2^n$ for every $n$, the VC dimension is infinite). It is standard to refer to $B$ as \textit{shattered} if $\mathcal{C}\cap B$ equals the power set of $B$. Observe that the VC dimension of $\mathcal{C}$ is the size of the largest shattered subset of $U$ \cite{sauer1972density,shelah1972combinatorial,vapnik2015uniform}. If the VC dimension of $\mathcal{C}$ is $d$, then $m_{\mathcal{C}}(n) = O(n^d)$.
Valiant discovered that, in the context of a learning problem, hypothesis spaces with finite VC dimension coincide with the hypothesis spaces that are distribution-free learnable (in the Probably-Approximately-Correct model of learning) \cite{valiant1984theory}. In model theory, Shelah, Laskowski and many others have investigated first order theories in which all partitioned formulas have finite VC dimension \cite{laskowski1992vapnik}. The idea of the VC dimension of a binary string is in some ways implicit in model theory, as explained in Section \ref{Smt}. However this is not necessarily obvious, and the emphasis here is not on the model theoretic properties of structures, only in strings of finite VC dimension.
On the combinatorics side, many achievements have been made in characterizing the complexity of arrangements of geometrical objects in a way that essentially relates to VC dimension, for example Radon, Cover, Basu and many others \cite{matousek2013lectures}. Work has also been done on concept classes relating to the sets of positivity for neural networks and other sophisticated learning machines (for example Sontag, Macintyre and many others \cite{sontag1998vc,karpinski1997polynomial}). However nothing seems to have been written about VC classes corresponding to binary strings.
In model theory there has been work done that relates directly to this paper. In particular many authors have made deep contributions to understanding when the structure $(\mathbb{N},+,P)$ or $(\mathbb{Z},+,P)$ is stable or NIP when $P$ is a unary predicate (please see Section \ref{Smt} for attributions and definitions). A model theorist would understand this paper as investigating the relation $P(x+y)$ in and of itself, independent of the stability theoretic properties of the structure in which it lies. Therefore this is not a model theory paper, and the questions we answer are not discussed in the model theoretic literature. Namely determining VC dimension for particular strings in heuristic ways, dimension preserving alterations of strings, the topological and dynamical nature of strings of finite VC dimension when considered in aggregate, and what we call here ``prime strings" which are combinatorially extreme vis a vis their length and VC dimension.
\subsection{The structure of this paper}
In Section \ref{Sintro} we give the central definitions, including the VC dimension of finite strings. In Section \ref{Sexamples} we give a number of examples of strings of both finite and infinite VC dimension. We investigate a string that shares a self-similarity quality with the Cantor set, and show that it lies on the extreme end of VC complexity. We also investigate strings that are more like Sidon sets (on the other end of the complexity spectrum). It arises that $p_s(n)$ gives surprisingly little information on the VC dimension of a string (and \textit{vice-versa}). We conjecture that Sturmian strings (which are of minimal non-trivial complexity with respect to $p_s(n)$) can have infinite VC dimension.
In Section \ref{Svccomp}, we first study real numbers with binary representations of finite VC dimension. Theorem \ref{Tcantorlike} establishes that real numbers of uniformly bounded VC dimension are distributed in a Cantor set like fashion; we also show that there are uncountably many such numbers. We investigate the set of reals of finite VC dimension and establish its basic topological properties in Theorem \ref{Ttop}. We conjecture that the reals of finite VC dimension constitute an uncountable subfield of $\mathbb{R}$, analogous to the constructible numbers or computable numbers.
We then move on to a more detailed examination of binary strings of finite VC dimension. Theorem \ref{Talt} establishes that the VC dimension of a string can be bounded in terms of the number of alternations between 0 and 1, echoing many related results in model theory. We then investigate the strings of finite VC dimension as a dynamical system and show that the bi-infinite strings of uniformly bounded VC dimension are a non-sofic shift space (Theorem \ref{Tsofic}).
Then we turn to investigate so-called prime strings, which are strings of a given VC dimension that are atomic in a certain sense. We completely characterize the prime strings of low dimension (0,1, and 2) and prove that prime strings of dimension 3 are fundamentally more complex. Along the way we show that prime strings do not constitute a regular language in the hierarchy of formal languages.
The last section ties this paper to work in model theory.
\section{From strings to set families}\label{Sintro}
In this section we explain how to encode a binary string as a set family, and consequently equip binary strings with a VC dimension. This involves investigating three different but similar notions of dimensionality. We then give an application of these definitions to a particular string, and analyze the relationship between VC complexity (of a string) and the complexity as usually defined.
Let $S$ be a binary string (a string in the alphabet $\mathcal{A} = \{0,1\}$), and $\mathcal{S}(S)$ the set of its finite substrings. If $S$ is infinite, then by default we understand $S$ to be infinite only on the right (we will discuss bi-infinite strings in Section \ref{Sbi}). Recall that substrings, unlike subsequences, are contiguous. Let $s \in \mathcal{S}(S)$. We let $s_i$ denote the digit in the $i$th position of $s$ (indexing is zero based). We will associate $s$ with a subset of natural numbers $n(s) = \{i: s_i = 1\}$. Using this technique we can associate $S$ with a subset family $\mathfrak{S} = \{n(s):s \in \mathcal{S}(S)\}$. From here we can define the VC dimension of $\mathfrak{S}$ in the usual way. By abuse of notation, the VC dimension of a binary string S refers to the VC dimension of $\mathfrak{S}$.
\textbf{Example}:
If $S = 011$, $\mathcal{S}(S) = \{<>,0,1,01,11,011\}$ and $\mathfrak{S} = \{\{\},\{0\},\{1\},\{0,1\},\{1,2\}\}$. The set $\{0,1\}$ is shattered and VC($\mathfrak{S}$) = 2.
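For concreteness, the following brute-force sketch computes this VC dimension directly from the definition (it is exponential in the length of the string and is intended only for short strings):
\begin{verbatim}
from itertools import combinations

def vc_dim(s):
    # Brute force over the family {n(t) : t a substring of s}.
    subs = {s[i:j] for i in range(len(s) + 1) for j in range(i, len(s) + 1)}
    family = [{i for i, c in enumerate(t) if c == "1"} for t in subs]
    best = 0
    for size in range(1, len(s) + 1):
        shattered = any(
            len({frozenset(f & set(b)) for f in family}) == 2 ** size
            for b in combinations(range(len(s)), size))
        if not shattered:
            break
        best = size
    return best

print(vc_dim("011"))   # 2, as in the example above
\end{verbatim}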
This way of assigning VC dimension to a binary string is perhaps the most natural, though there are other notions. Note that any concept class (ie set system) can be regarded as consisting of binary strings, and a concept class derived from a binary string can be thought of as a special kind of concept class that can be assembled to form a single object (ie the string).
We now consider a ``sliding window'' method of assigning a VC dimension to a string. Given a string $S$, let $\mathcal{S}_w(S)$ denote the substrings of $S$ of length $w$. We can imagine sliding a width $w$ window along the string and recording which substrings are observed. $\mathcal{S}_w(S) = \{s \in \mathcal{S}(S): |s| = w\}$. This leads to its own set family $\mathfrak{S}_w = \{n(s) : s \in \mathcal{S}_w(S)\}$.
\textbf{Example}:
If $S = 011$, $\mathcal{S}_2(S) = \{01,11\}$ and $\mathfrak{S}_2 = \{\{1\},\{0,1\}\}$.
The sliding window dimension of $S$ is defined to be $SWdim(S) = \max \{VCdim(\mathfrak{S}_w): w \in \mathbb{N} \}$. If no maximum exists then we say $SWdim(S) = \infty$.
Often we only care whether a certain measure of complexity is finite or infinite. From this point of view it is unlikely to greatly matter exactly which notion of string complexity is selected. We show below that the notions of SWdim and VCdim for binary strings differ in value by at most one.
\begin{proposition} For any binary string $S$, $SWdim(S) \leq VCdim(S)$.
\end{proposition}
\begin{proof}
For a given $w$, $\mathcal{S}_w(S) \subseteq \mathcal{S}(S)$. Therefore for each $w$, $VCdim(\mathfrak{S}_w) \leq VCdim(\mathfrak{S})$. Thus $\max \{VCdim(\mathfrak{S}_w) : w \in \mathbb{N}\} \leq VCdim(\mathfrak{S})$.
\end{proof}
\begin{proposition} For any binary string, if $VCdim(S) \geq d+1$ then $SWdim(S) \geq d$.
\end{proposition}
\begin{proof} Suppose $\mathfrak{S}$ shatters the set of indices $\{i_0,...,i_d\}$. Let $\mathfrak{S}^{i_d} = \{n(s) : s \in \mathcal{S}(S), s_{i_d} = 1\}$.
Then $\mathfrak{S}^{i_d}$ shatters $\{i_0,...,i_{d-1}\}$. Moreover every $s$ in $\mathcal{S}(S)$ such that $s_{i_d} = 1$ must have $|s| \geq {i_d}+1$.
Then $\mathfrak{S}^{i_d}_{i_d+1}$ also shatters $\{i_0,...,i_{d-1}\}$.
Therefore $\mathfrak{S}_{i_d+1}$ shatters $\{i_0,...,i_{d-1}\}$, and this proves $SWdim(S) \geq d$. \end{proof}
The lower bound in Corollary \ref{C1} is seen to be tight by the examples provided above. The upper bound can be established using $S=01100$.
\begin{corollary}\label{C1}
The above two propositions together show that $$VCdim(S)-1 \leq SWdim(S) \leq VCdim(S).$$
In particular, $SWdim(S)=\infty \iff VCdim(S)=\infty$.
\end{corollary}
To ease the transition to further generalizations, we introduce one additional notion of string complexity.
Given a string $S$, a $d$-mask on $S$ is a set of subsequences of $S$ of the form $\langle S_{i_0+t},S_{i_1+t},\ldots,S_{i_{d-1}+t}\rangle$, where $t$ varies over the index set (in this case the natural numbers). The indexes must be distinct -- for convenience we can assume $i_0 < i_1 < \cdots < i_{d-1}$. We ignore sequences resulting from values of $t$ that give nonsensical indexes (too big or too small).
A $d$-mask is said to be \textit{full} if it contains all binary sequences of length $d$.
\textbf{Example}:
$S = 101001$.
Then one 2-mask might be $\{\langle S_{0+t},S_{2+t}\rangle : t = 0,1,2,3\}$. Explicitly this is $\{\langle 11\rangle,\langle 00\rangle,\langle 10\rangle,\langle 01 \rangle\}$. This is a full 2-mask.
The mask dimension $Mdim(S) =\max \{d : S\text{ has a full $d$-mask}\}$. If there is no maximum then we say $Mdim(S) = \infty$.
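A full $d$-mask can likewise be searched for by brute force; the following sketch checks the definition directly and recovers the mask dimension of the example above:
\begin{verbatim}
from itertools import combinations

def full_mask_exists(S, idxs):
    # Is {<S[i+t] : i in idxs> : t = 0, 1, ...} all of {0,1}^len(idxs)?
    seen = {tuple(S[i + t] for i in idxs) for t in range(len(S) - max(idxs))}
    return len(seen) == 2 ** len(idxs)

def mask_dim(S):
    # Largest d admitting a full d-mask (brute force over index tuples).
    best = 0
    for d in range(1, len(S) + 1):
        if not any(full_mask_exists(S, idxs)
                   for idxs in combinations(range(len(S)), d)):
            break
        best = d
    return best

print(mask_dim("101001"))   # 2: the 2-mask above is full, no index triple works
\end{verbatim}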
\begin{proposition}\label{SWeqM} For all binary strings $S$, $Mdim(S) = SWdim(S)$.
\end{proposition}
\begin{proof}
Suppose there is a full $d$-mask of the form $\{\langle S_{i_0+t},S_{i_1+t},\ldots,S_{i_{d-1}+t} \rangle : t \in \mathbb{N}\}$. Without loss we may assume that $i_0 = 0$. Let $w=i_{d-1}+1$ (assuming $i_0 < i_1 < \cdots < i_{d-1}$). Then for every $t$ there is some $s$ in $\mathcal{S}(S)_w$ such that $n(s) \cap \{i_0,i_1,\ldots,i_{d-1}\} = \{i_j : S_{i_j+t}=1\}$. Because the $d$-mask is full, $\mathfrak{S}_w$ shatters $\{i_0,i_1,\ldots,i_{d-1}\}$.
Conversely suppose $\mathfrak{S}_w$ shatters $\{i_0,i_1,\ldots,i_{d-1}\}$. For any $d$ length binary sequence there is some $s\in \mathcal{S}(S)_w$ such that $\langle s_{i_0},s_{i_1},\ldots,s_{i_{d-1}} \rangle$ realizes the sequence. If $t$ is the starting index of $s$ in $S$ then $\langle S_{i_0+t},S_{i_1+t},\ldots,S_{i_{d-1}+t}\rangle = \langle s_{i_0},s_{i_1},\ldots,s_{i_{d-1}} \rangle$. Thus $\langle S_{i_0+t},S_{i_1+t},\ldots,S_{i_{d-1}+t}\rangle$ where $t$ varies over the index set gives a full $d$-mask.
\end{proof}
The following is easy, but interesting: a string is infinitely complex iff it has an infinitely complex tail.
\begin{lemma}\label{Ltail} A binary string S has finite VC dimension iff it has a suffix of finite VC dimension.
\end{lemma}
\begin{proof} From left to right is obvious. For right to left, let $S = xy$ where $y$ is a suffix of finite VC dimension $d$. If $S$ had infinite VC dimension then it would have arbitrarily large mask dimension, so in particular a full $(|x|+d+1)$-mask. The last $d+1$ indexes of this mask form a full $(d+1)$-mask, and since the mask uses $|x|+d+1$ distinct indexes, these last $d+1$ indexes are all at least $|x|$. Every position $i+t$ with $i \geq |x|$ and $t \geq 0$ lies in $y$, so $y$ itself admits a full $(d+1)$-mask, contradicting $VCdim(y) = d$.
\end{proof}
\section{Examples and comparisons with $p_s(n)$}\label{Sexamples}
Having labored through the above definitions and propositions, we now explore in some depth a particular example.
\subsection{The Cantor string}
We inductively define a string which is somewhat like the Cantor set. We call this construction the Cantor string.
Let $0^k$ denote a string of $k$ zeros for $k \in \mathbb{N}$.
Let $S^{(0)} = 1$. Given $S^{(i)}$ for $i \in \mathbb{N}$, define $S^{(i+1)} = S^{(i)}0^{|S^{(i)}|}S^{(i)}$. That is, $S^{(i+1)}$ is the concatenation of $S^{(i)}$, $|S^{(i)}|$ many zeros, and $S^{(i)}$ again.
The first few examples of $S^{(i)}$ are given below.
$S^{(0)} = 1$
$S^{(1)} = 101$
$S^{(2)} = 101000101 $
$S^{(3)} = 101000101000000000101000101 $
$S^{(4)} = 101000101000000000101000101000000000000000000000000000101000101000000000101000101 $
Note that $S^{(i)}$ is a proper initial segment of $S^{(i+1)}$ for all $i$. The \textit{Cantor string}, $S^{(\omega)}$ is the infinite binary string that has all $S^{(i)}, i \in \mathbb{N}$, as a proper initial segment.
\begin{lemma}\label{cantorindexes} The indexes in $S^{(\omega)}$ where a 1 occurs are precisely the indexes of the form $2\cdot\sum_{k\in K}3^k$ for some finite $K \subseteq \mathbb{N}$ (possibly empty).
\end{lemma}
\begin{proof} This is an easy inductive argument on $i$ for the $S^{(i)}$.
\end{proof}
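The recursion and the index characterization are easy to check by machine; the following short sketch generates $S^{(4)}$ and verifies that its 1's sit exactly at the indexes described in the lemma:
\begin{verbatim}
def cantor_string(k):
    # S^(k): start from "1" and apply S -> S + 0^{|S|} + S  k times.
    s = "1"
    for _ in range(k):
        s = s + "0" * len(s) + s
    return s

def special_indexes(bound):
    # Numbers 2 * sum_{k in K} 3^k below bound, for finite K (possibly empty).
    out, p = {0}, 1
    while 2 * p < bound:
        out |= {x + 2 * p for x in out if x + 2 * p < bound}
        p *= 3
    return out

S4 = cantor_string(4)
ones = {i for i, c in enumerate(S4) if c == "1"}
print(ones == special_indexes(len(S4)))   # True, as the lemma predicts
\end{verbatim}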
Just as the Cantor set shows that sparsity and cardinality are independent, the Cantor string shows that a sparse string can have high complexity.
\begin{proposition} The Cantor string has infinite VC dimension.
\end{proposition}
\begin{proof} It suffices to show that the string has infinite mask dimension. Let $d \in \mathbb{N}$ be given. We will show that $S^{(\omega)}$ admits a full $d$-mask. The indexes we will use are of the form $i_l := 2\cdot 3^l$ for $l=1,2,\cdots,d$. Let $s$ be a finite proper initial substring of $S^{(\omega)}$ which is long enough that all the indexes referenced below exist.
Let $[d]$ denote the set $\{1,2,\ldots,d\}$. Let $A \subseteq [d]$ be given. We will construct a value $t$ such that for all $l \in [d]$, $s_{2(3^l)+t} = 1 \iff l \notin A$.
Define $t = 2(\sum_{a\in A} 3^a)$. Consider $2(3^l)+t = 2(3^l+\sum_{a \in A}3^a)$. Observe that $3^l+\sum_{a \in A}3^a$ is a sum of distinct powers of $3$ iff $l \notin A$.
Thus, by Lemma \ref{cantorindexes}, $s_{2(3^l)+t} = 1 \iff l \notin A$. This means that we have constructed a full $d$-mask. Since $d$ was arbitrary, the dimension is infinite.
\end{proof}
\subsubsection{The complexity function and the Cantor string}\label{complexVC}
We say that a string is \textit{aperiodic} if it is not periodic, meaning it does not factor as the infinite product of some word. A string is \textit{not eventually periodic} if no suffix of the string is periodic.
Consider the following well-known result \cite{morse1938symbolic}.
\begin{lemma}[Morse--Hedlund theorem]\label{LMHT} A sequence that is not eventually periodic has a strictly increasing complexity function.
\end{lemma}
The authors of the above theorem characterized the simplest strings that are not eventually periodic as being Sturmian. A string $S$ is \textit{Sturmian} if $p_s(n) = n+1$ for all $n$.
The Cantor string is clearly aperiodic, since it contains arbitrarily long substrings of the form $10^{3^k}1$. For this same reason, the Cantor string is not even eventually periodic. It is also unbalanced, in the sense that the Hamming weight of substrings of length $n$ takes on multiple (more than two) values for large $n$. Sturmian strings are characterized by being balanced and not eventually periodic, and so the Cantor string is not Sturmian. However the complexity function is still linearly bounded.
\begin{theorem} When $x$ is the Cantor string, $p_x(n) = 2n-1$ for all $n>1$.
\end{theorem}
\begin{proof}
We use the strings $S^{(k)}$ as in the definition of the Cantor string. First observe that $p_x(1) = 2$. We claim that for $n \geq 2$, $p_x(n) = 2n-1$. Let $k$ be maximal so that $n>3^{k-1}$. Because of the recursive structure of the Cantor string, in order to determine $p_x(n)$ it suffices to consider $p_w(n)$ where $w=0^{n}S^{(k)}0^{n-1}$. Recall that $S^{(k)} = S^{(k-1)}0^{3^{k-1}}S^{(k-1)}$. We will consider the distinct words taken on by a window of length $n$ as it progresses across $w$. More precisely, we imagine that we have a sequence of binary $n$-tuples denoted by $b$, where $b(t)=\langle w_{0+t},w_{1+t},\ldots,w_{n-1+t}\rangle$ as $t$ ranges over $\{0,1,2,\ldots,n+3^k-1\}$.
First note $b(0)=0^n$. This and all subsequent words are distinct until $t=n+3^{k-1}$. At this stage, graphically, the leftmost index of $b(t)$ is positioned at the start of the middle 0's in $S^{(k)}$. We then encounter words previously seen for the next $2\cdot 3^{k-1}-n+3^{k-1}$ values of $t$ (incrementing sequentially). Graphically, at this stage, the rightmost index of $b(t)$ is moving into the long field of zeros on the right after $S^{(k)}$. Then the next $n-(3^{k-1}+1)$ values of $t$ again yield words that are previously unseen.
The total number of steps performed is $n+3^k-1$, which is one step for each value of $t$. The total number of distinct words encountered is $n+3^{k-1}+n-(3^{k-1}+1) = 2n-1$.
\end{proof}
The apparent contradiction between infinite VC dimension and tame string complexity will be addressed in Section \ref{Svccomp}.
\subsection{The Thue-Morse sequence}
By the \textit{parity function} we mean the function $f:\mathbb{N} \rightarrow \{0,1\}$ such that $f(n)=0$ iff the binary representation of $n$ has an even number of 1's. The Thue-Morse sequence \cite{allouche1999ubiquitous} is the sequence for which the $n$th symbol is $f(n)$. Here we show that the Thue-Morse sequence has infinite mask dimension (and hence infinite VC dimension). We must show that the sequence admits a full $d$-mask for all $d$. Let $d \in \mathbb{N}$ be given. Construct a binary matrix $M$ of dimensions $d \times 2^{d+1}$ in the following way. We assume some canonical mapping $g : \mathcal{P}([d]) \rightarrow \{1,2,...,2^d\}$, where $\mathcal{P}([d])$ is the powerset of $\{1,2,3,\ldots,d\}$.
We conceive of $M$ as being composed of $d$ rows and $2^d$ two-digit columns. Then for $A \in \mathcal{P}([d])$, $2g(A)$ will index the first (ie. leftmost) digit of the column corresponding to $A$, and $2g(A)+1$ will index the second digit. For each $i \in [d]$, the row $i$ of $M$ will have a 1 precisely in the columns $\{2g(A)+1 : i \in A\}$.
We now conceptualize the rows of $M$ as integers $a_1,a_2,\ldots,a_{d}$ where the row $i$ of $M$ specifies the binary digits of $a_i$.
Let $A \subseteq [d]$ be given. Let $b_A = 2^{2g(A)+1}$, the integer whose binary expression is 1 precisely in index $2g(A)+1$.
Then $f(b_A + a_i) = 0 \iff i \in A$. Indeed, each $a_i$ has exactly $2^{d-1}$ ones (one for each $A$ containing $i$), an even number for $d \geq 2$, so $f(a_i)=0$ (the case $d=1$ is trivial); if $i \notin A$ then adding $b_A$ sets an empty bit and flips the parity, while if $i \in A$ the addition carries the 1 at position $2g(A)+1$ into the empty even position $2g(A)+2$ and leaves the parity unchanged. This gives a full $d$-mask on the Thue-Morse sequence. Therefore the Thue-Morse sequence has infinite mask (and hence VC) dimension.
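The construction above produces rather large shift values; as a small sanity check, a brute-force search in the style of the mask-dimension sketch of Section \ref{Sintro} already finds a full 3-mask in a short prefix of the Thue-Morse sequence, for instance at the offsets $(0,1,3)$:
\begin{verbatim}
from itertools import combinations

def thue_morse(n):
    # t(k) = parity of the number of 1's in the binary representation of k.
    return [bin(k).count("1") % 2 for k in range(n)]

def full_mask_exists(S, idxs):
    seen = {tuple(S[i + t] for i in idxs) for t in range(len(S) - max(idxs))}
    return len(seen) == 2 ** len(idxs)

T = thue_morse(64)
print(full_mask_exists(T, (0, 1, 3)))    # True: a full 3-mask with small offsets
print(any(full_mask_exists(T, idxs)      # True as well, found by brute force
          for idxs in combinations(range(16), 3)))
\end{verbatim}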
\subsection{Powers of two and Golomb rulers}\label{Spowersof2}\label{Ssidon}
The facts given in this section relate to recent results in the model theory literature, and nothing in this section is essentially new. Please see Section \ref{Smodelcon} for a discussion. Sidon sets were explicitly used in the context of VC dimension by \cite{aschenbrenner2016vapnik} and the sequel.
Let $S$ be the infinite binary string with a 1 precisely in indexes that are powers of two.
$$S = 011010001000000010000000000000001\ldots$$
Observe that the index distance between any two 1's is of the form $2^k-2^l = \sum_{k>j\geq l} 2^j$.
Thus if $2^k-2^l = 2^m-2^n$ then (viewing the numbers in binary) it is clear that $k=m$ and $l=n$.
This implies that there can be no full 3-mask. If there were such a mask $\langle S_{i_0+t},S_{i_1+t},S_{i_2+t}\rangle$ then there must be:
\begin{enumerate}
\item Some $t_1$ such that $\langle S_{i_0+t_1},S_{i_1+t_1},S_{i_2+t_1}\rangle = \langle 1,1,1\rangle$
and
\item Some $t_2$ such that $\langle S_{i_0+t_2},S_{i_1+t_2},S_{i_2+t_2}\rangle = \langle 1,1,0\rangle $
\end{enumerate}
Clearly $t_1 \neq t_2$. But by the above discussion there is at most a unique $t_1$ such that $S_{i_0+t_1}=S_{i_1+t_1}=1$. Therefore there is no full 3-mask.
On the other hand there is a full 2-mask (namely any two adjacent indexes), so the mask dimension is 2. The VC dimension is also 2: to shatter three indexes $j_0<j_1<j_2$ one would need substrings realizing both of the patterns $(1,1,1)$ and $(1,0,1)$ on them, and each of these requires a pair of 1's of $S$ at distance $j_2-j_0$; since every distance is realized by at most one pair of 1's, both patterns would have to arise from windows starting at the same position of $S$, which is impossible because the symbol at offset $j_1$ there is fixed.
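This can be checked directly on a finite prefix; the following sketch (a finite check only, but consistent with the argument above) confirms a full 2-mask and finds no full 3-mask in the prefix:
\begin{verbatim}
from itertools import combinations

def powers_of_two_string(length):
    # 1 exactly at the power-of-two indexes 1, 2, 4, 8, ...
    return "".join("1" if i > 0 and (i & (i - 1)) == 0 else "0"
                   for i in range(length))

def mask_patterns(S, idxs):
    return {tuple(S[i + t] for i in idxs) for t in range(len(S) - max(idxs))}

S = powers_of_two_string(40)
print(len(mask_patterns(S, (0, 1))) == 4)      # True: a full 2-mask
print(any(len(mask_patterns(S, idxs)) == 8     # False: no full 3-mask here
          for idxs in combinations(range(len(S)), 3)))
\end{verbatim}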
The $S$ above, when interpreted as the binary expansion of a real number, is a Fredholm constant. This shows that the binary decimal representation of transcendental numbers can have finite VC dimension.
The proof of the VC dimension of powers of two essentially used only the property that the difference between any pair of 1's is unique. There is a general term for strings with this property, namely Golomb rulers (or Sidon sets) \cite{o2004complete}.
Let $A \subset \mathbb{N}$. Let $\Delta_A(d) = |\{(a,b)\in A^2: b-a=d, b>a \}|$. We say that $A$ is a Sidon set if $\max_{d \in \mathbb{N}} \Delta_A(d) = 1$. We will also say that $A$ is a nearly Sidon set if $\max_{d \in \mathbb{N}} \Delta_A(d)$ exists (i.e.\ is finite). Any binary string $S$ can be understood as a subset of $\mathbb{N}$, namely $n(S) = \{i : S_i = 1\}$. We say $S$ is (nearly) Sidon spaced if $n(S)$ is a (nearly) Sidon set.
$S$ is said to be eventually (nearly) Sidon spaced if $S$ is infinite and has a (nearly) Sidon spaced suffix.
\begin{proposition}
Any of the following conditions imply that a binary string $S$ is nearly Sidon spaced.
\begin{enumerate}
\item $S$ is Sidon spaced.
\item $S$ is eventually Sidon spaced.
\item $S$ is eventually nearly Sidon spaced.
\end{enumerate}
\end{proposition}
\begin{proof}
This fact is straightforward.
\end{proof}
The following are examples of Sidon spaced sequences:
\begin{enumerate}
\item The binary string $S$ with 1 precisely in positions $\{q^n : n \in \mathbb{N}\}$ for a positive integer $q>1$.
\item The binary string $S$ with 1 precisely in positions $\{n! : n \in \omega\}$.
\item Any increasing sequence $\langle u_i : i\in \mathbb{N}\rangle$ of positive integers such that $u_{i+1} \geq 2u_i$ for all $i$.
\end{enumerate}
\begin{proposition}\label{P:sidon}
Any Sidon spaced binary string $S$ based on a sufficiently large Sidon set has VC dimension 2.
\end{proposition}
\begin{proof}
This is essentially the same as the proof for powers of two.
\end{proof}
In fact any nearly Sidon spaced sequence has finite VC dimension. The reason is that for $S$ to have a full $d$ mask implies $\Delta_{n(S)}(k)\geq 2^{d-2}$ for at least one $k \in \mathbb{N}$. Thus if $S$ has infinite VC dimension then $S$ cannot be nearly Sidon spaced.
\subsection{The number of Sidon sets in $\mathbb{N}$}
A natural question is to ask the cardinality of all strings of a certain VC dimension. In this subsection we prepare to answer this question, which is formally resolved in Theorem \ref{T:uncountable}. We will basically show that all strings of any VC dimension can be encoded as strings of VC dimension 2.
Consider that for binary strings in general, flipping 1's to 0's can increase VC dimension. For example the simple string 1* can be made to have any desired VC dimension by introducing 0's at certain indexes. However for Sidon sets this is not the case, because subsets of Sidon sets are also Sidon sets. Removing elements introduces no new differences or sums, and so the property of being (nearly) Sidon is preserved.
This property provides an easy way to see that there are uncountably many Sidon sets. Suppose $S$ is a binary string that has 1 precisely at indexes in the set $n(S)=\{i:S_i=1\}$. Let $2^S$ denote the binary string for which $n(2^S)=\{2^i:S_i=1\}$. In other words $2^S$ has a 1 in index $i$ if and only if $i=2^j$ and $S_j=1$.
\begin{proposition}
For any binary string $S$, $VCdim(2^S)\leq 2$.
\end{proposition}
\begin{proof}
The string $2^S$ is Sidon spaced.
\end{proof}
The above proposition is interesting, because $2^S$ is in some sense just as complex as $S$. However this complexity is not something that we can capture with the relation $P(x+y)$ (in the language of Section \ref{Smodelcon}.) On the other hand the VC dimension of the relation $P(2^{x+y})$ would be capable of detecting ``exponential level'' complexity in $2^S$. There is no reason to stop at first powers, and this discussion could go on to towers of exponentiation or other fast growing functions. From a certain viewpoint, complexity depends on the expressive power of the observer.
\begin{proposition}
For any binary strings $S$ and $T$, $2^S=2^T \iff S=T$.
\end{proposition}
\begin{proof}
This is obvious.
\end{proof}
\begin{corollary}\label{Cuncountable}
There are uncountably many Sidon spaced sequences.
\end{corollary}
\begin{proof}
Let $2^{\mathbb{N}}$ represent the space of all possible binary strings. This set is uncountable. Then by the previous proposition $\{2^S: S\in 2^\mathbb{N}\}$ is also uncountable. All of these are Sidon spaced.
\end{proof}
\section{VC dimension and substring diversity}\label{Svccomp}
The Cantor string example shows that for a string to have finite VC dimension it is not sufficient that the complexity function be polynomially (or even linearly) bounded. However the complexity function $p_S(n)$ can give information about VC dimension in extreme cases. For example if the complexity function is superpolynomial (on the one hand) or constantly bounded (on the other) then the VC dimension of the associated sequence is determined as either infinite or finite (respectively). When we establish the finitude of VC dimension for strings in this section, the results are implicit in some model theoretic work, for example \cite{POINT1}.
\begin{lemma}\label{Lspol} Suppose that the complexity function $p_S(n)$ for a string S is superpolynomial. Then S has infinite VC dimension.
\end{lemma}
\begin{proof}
Suppose by way of contradiction that $VCdim(S)=d$ for an integer $d$. Then $VCdim(\mathfrak{S}_w) \leq d$ for every $w$, so by Sauer's Lemma $|\mathcal{S}_w(S)| = |\mathfrak{S}_w|$ is bounded by a polynomial of degree $d$ in $w$. Since $p_S(w) = |\mathcal{S}_w(S)|$, this contradicts the assumption that $p_S$ is superpolynomial.
\end{proof}
\begin{corollary} Suppose that $S$ is a binary string of finite VC dimension and let $\mathfrak{L}=\mathcal{S}(S)$ denote the formal language consisting of substrings of $S$. Then $\mathfrak{L}$ is a sparse language.
\end{corollary}
\begin{proof}
This is just a rephrasing of the above Lemma.
\end{proof}
\begin{lemma}\label{Lperiod} A periodic string $S$ with period $m \in \mathbb{N}$ has mask dimension at most $m$.\end{lemma}
\begin{proof}
Let $\{\langle S_{i_0+t},S_{i_1+t},\ldots,S_{i_{d-1}+t} \rangle : t \in \mathbb{N}\}$ be a full $d$-mask for some integer $d$. Suppose that $i_p \equiv i_q \mod m$ for some $p,q \in \{0,\ldots,d-1\}$. Then $i_p + t \equiv i_q + t\mod m$, and thus $S_{i_p+t}=S_{i_q+t}$ for all $t$. Because the $d$-mask is full, we must have $i_p$ distinct modulo $m$ for all $p \in \{0,\ldots,d-1\}$. Thus $d \leq m$.
\end{proof}
\begin{corollary} We can draw the following conclusions from Lemma \ref{Lperiod}.
\begin{enumerate}
\item If $S$ has period $m$ then $VCdim(S) \leq m+1$.
\item If $S$ is eventually periodic, then $VCdim(S)$ is finite.
\item If $p_s(n)$ is bounded by a constant then $VCdim(S)$ is finite.
\item The binary representation of any rational number has finite VC dimension.
\end{enumerate}
\end{corollary}
\begin{proof}
The statement (1) follows from Lemma \ref{Lperiod} together with Corollary \ref{C1} and Proposition \ref{SWeqM}. The statement (2) follows from (1) and Lemma \ref{Ltail}. The statement (3) follows from Lemma \ref{LMHT} and (2), since bounded complexity forces the string to be eventually periodic. The statement (4) follows from (2), since the binary expansion of a rational number is eventually periodic.
\end{proof}
There is a very natural question that we have not answered:
\textbf{Question}: Can a Sturmian string have infinite VC dimension?
One would think that the answer to this question is ``yes.'' The Cantor string suggests that complexity of a string can grow very slowly provided that the radius of shattered sets of size $d$ grows exponentially in $d$. A priori we could do this even with minimal non-trivial complexity. But we haven't managed to determine the VC dimension of any Sturmian string to date.
\subsection{The VC dimension of real numbers}
We first state some obvious facts for convenience of reference.
\begin{lemma}\label{Lmonotone}
Let $S$ be a binary string and $S'$ a substring of $S$. Then $VCdim(S) \geq VCdim(S')$.
\end{lemma}
\begin{lemma}\label{Lfinite} Let $S$ be an infinite binary string of VC dimension $d$ for $d \in \mathbb{N}$. Then $S$ has a proper initial substring of the same VC dimension.
\end{lemma}
\begin{lemma}\label{Laddzero} If $S$ is a finite binary string and $0^*$ is the zero string, then $VCdim(S0^*)=VCdim(S)$.
\end{lemma}
We now make a few remarks on strings that arise as binary representations of reals. Rational numbers of the form $\frac{p}{2^k}$ for integers $p,k$ may have more than one base 2 representation. For example the real number 1 has the binary representations $10^*$ and $01^*$. In this case we will choose the representation that ends in $0^*$. For the purposes of computing VC dimension, we will ignore any decimal point or negative sign.
A number is said to be normal in base $b$ if, for every positive integer $n$, all possible $n$-digit substrings have density $b^{-n}$ .
\begin{corollary} For any real number normal to base 2, the base 2 representation has infinite VC dimension.
\end{corollary}
Borel proved \cite{ian1993borel} that the set of real numbers normal to every base has full Lebesgue measure. Therefore almost all real numbers have infinite VC dimension with respect to their base 2 representations. This implies that the real numbers of finite VC dimension are measure zero.
It is conjectured that for any algebraic irrational, the complexity function is maximal, that is $p_x(n) = b^n$, where $b$ is the base of the representation \cite{adamczewski2007complexity}. If this conjecture is true then by Lemma \ref{Lspol} the binary decimal representation of any algebraic irrational has infinite VC dimension.
For a string $S$ and nonnegative integers $i<j$, let $S^{([i:j])}$ denote the substring of $S$ defined by $\langle S_i,S_{i+1},\ldots,S_{j-1}\rangle$. We also write $S^{([i:\infty])}$ for the suffix of $S$ beginning at index $i$ (inclusive). If we write $S^{([i:-j])}$ this means the same as $S^{([i:n])}$ where $n=len(S)-j$. If $S$ and $T$ are binary strings, define $L_i^{T,S} =S^{([0:i])}T^{([i:\infty])}$. The \textit{straight line} from $T$ to $S$ is the sequence of strings $\mathfrak{L}(T,S) = \{L_i^{T,S}:i \in \mathbb{N}\}$. Define $P_i^{T,S} = S^{([0:i])}T$. The \textit{push} from $T$ to $S$ is $\mathfrak{P}(T,S) = \{P_i^{T,S}: i \in \mathbb{N}\}$. If we refer to a limit of $P_i$ or $L_i$ we are referring to the limit of the corresponding real numbers (via binary representation). For a binary sequence $S$ we let $\mathfrak{r}(S)$ denote the corresponding real number in [0,1). By $0^*$ we mean the binary string with $0$ at all indexes.
\begin{proposition}\label{Pshoveline}
Let $S,T$ be binary strings with VC dimension $d_S,d_T \in \mathbb{N}\cup\{\infty\}$ (respectively). The following hold.
\begin{enumerate}
\item For every $i \in \mathbb{N}$, $VCdim(P_i^{T,S})\geq d_T$.
\item For every $i \in \mathbb{N}$, $VCdim(L_i^{0^*,S})\leq d_S$.
\item $\lim_{i \to \infty} P_i^{T,S} = \mathfrak{r}(S)$
\item $\lim_{i \to \infty} L_i^{T,S} = \mathfrak{r}(S)$
\end{enumerate}
\end{proposition}
\begin{proof}
The statement (1) follows from Lemma \ref{Lmonotone}. The statement (2) follows from lemmas \ref{Lmonotone} and \ref{Laddzero}. Statements (3) and (4) are obvious.
\end{proof}
We now consider what happens in the limit to sets of real numbers with various VC complexity assumptions. Interestingly, VC dimension can drop arbitrarily in a limit, but it cannot go up. The following propositions make this precise.
\begin{proposition}
For any $d \in \mathbb{N}\cup \{\infty \}$ there is a sequence $\langle r_i : i \in \mathbb{N}\rangle$ of real numbers such that $VCdim(r_i)\geq d$ for all $i$ and $\lim_{i \to \infty} r_i = 0$.
\end{proposition}
\begin{proof}
Let $r_0$ be a real number of VC dimension $d$ and $S$ its binary representation. Let $r_i$ be the real number whose binary representation is $0^iS$ for $i>0$. Then $\lim_{i\to\infty} r_i = 0$ and $VCdim(r_i)\geq d$ for all $i$.
\end{proof}
\begin{proposition}\label{Plimits}
For any $d \in \mathbb{N}$ and any sequence $\langle r_i : i \in \mathbb{N}\rangle$ of real numbers such that $\forall i\, VCdim(r_i) \leq d$, if $r^* = \lim_{i \to \infty} r_i$ then $VCdim(r^*)\leq d$.
\end{proposition}
\begin{proof}
Let $S$ denote the binary representation of $r^*$. Suppose, by way of contradiction, that $VCdim(S) > d$. Let $S'$ be a finite initial substring of $S$ such that $VCdim(S')>d$. Let $n = len(S')$. There is some $i$ such that $|r_i-r^*| < \frac{1}{2^n}$. Then $S'$ is also an initial substring of the binary representation of $r_i$. But then $VCdim(r_i)>d$ $\rightarrow \leftarrow$.
\end{proof}
For $d\in \mathbb{N}$, Let $\mathfrak{V}_{\leq d} = \{r\in \mathbb{R}: VCdim(r) \leq d\}$. Let $\mathfrak{V}_{< \infty}$ denote the reals of finite VC dimension.
\begin{proposition}\label{Pperf}
Let $r \in \mathbb{R}$. Then the following hold.
\begin{enumerate}
\item There is a sequence of real numbers in $\mathfrak{V}_{< \infty}$ that converges to $r$. Additionally, if $d=VCdim(r)$ is an integer, then there is a sequence of real numbers in $\mathfrak{V}_{\leq d}$ that converges to $r$.
\item There is a sequence of real numbers in $\mathbb{R}\setminus \mathfrak{V}_{< \infty}$ that converges to $r$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $T$ be such that $r=\mathfrak{r}(T)$. Then $\mathfrak{L}(0^*,T)$ witnesses the statement (1).
Let $S$ be any binary string of infinite VC dimension. Then $\mathfrak{P}(S,T)$ witnesses the statement (2).
\end{proof}
The following theorem shows that $\mathfrak{V}_{\leq d}$ is topologically very similar to the Cantor set.
\begin{theorem}\label{Tcantorlike}
For all $d\in\mathbb{N}$, $\mathfrak{V}_{\leq d}$ is closed, totally separated, perfect, and nowhere dense in $\mathbb{R}$. It also has Lebesgue measure 0.
\end{theorem}
\begin{proof}
By Proposition \ref{Plimits}, every convergent sequence in $\mathfrak{V}_{\leq d}$ has a limit in $\mathfrak{V}_{\leq d}$. Therefore it is closed. By Proposition \ref{Pperf}, every $r\in\mathfrak{V}_{\leq d}$ is a limit point for a sequence in $\mathfrak{V}_{\leq d}$. Therefore $\mathfrak{V}_{\leq d}$ is perfect. Given any $r \in \mathfrak{V}_{\leq d}$ let $S$ be its binary representation and take $n > d$ (possibly $n=\infty$). For any proper initial substring $S'$ of $S$ there is a binary word $w$ such that $S'w$ has VC dimension $n$. This shows that for any open $U$ with $r \in U$, there are real numbers of VC dimension $n>d$. If $x,y \in \mathfrak{V}_{\leq d}$ are distinct, there is some $z$ strictly between them with $z \notin \mathfrak{V}_{\leq d}$. Thus $\mathfrak{V}_{\leq d}$ is totally separated as witnessed by the open sets $(-\infty,z)\cup(z,\infty)$.
To see that $\mathfrak{V}_{\leq d}$ is nowhere dense, let $U \subseteq \mathbb{R}$ be an open set. Let $r \in U$ be arbitrary. By Proposition \ref{Pperf} (2) there is a sequence of reals of VC dimension $> d$ converging to $r$. Therefore there is some $t\in U\setminus \mathfrak{V}_{\leq d}$. Since $\mathfrak{V}_{\leq d}$ is closed, there is an open set $V$ such that $t\in V$ and $\mathfrak{V}_{\leq d} \cap V=\emptyset$. Then $V\cap U$ shows that $\mathfrak{V}_{\leq d}$ is not dense in $U$.
The theorem of Borel on normal numbers, already mentioned, shows that $\mathfrak{V}_{\leq d}$ has Lebesgue measure 0.
\end{proof}
\begin{corollary}
For any $d\in \mathbb{N}$, set $\mathfrak{V}_{\leq d} \cap [0,1]$ is a Stone space.
\end{corollary}
\begin{proof}
The set is closed and bounded, therefore compact. It is obviously Hausdorff. It is totally separated by Theorem \ref{Tcantorlike}.
\end{proof}
\begin{lemma}\label{Lvc1} If $S$ is a binary string and $VCdim(S)=1$ then $S$ is of one of the following forms. Each of these may be followed by arbitrarily many zeros, which do not affect the VC dimension.
\begin{enumerate}
\item $0^a1$, $a\geq 1$
\item $1^a$, $a \geq 1$ (possibly $a$ is infinite)
\item $0^a(10^b)^c1$, $0 \leq a \leq b$, $b\geq 1$, $c\geq 1$ (possibly $c$ is infinite).
\end{enumerate}
\end{lemma}
\begin{proof}
The VC dimension is one in each case by easy inspection. In form (3) note that if $a > b$ then the VC dimension is 2. Observe that any nonzero binary string has a string of form (1) or (2) as a prefix. We will show that any attempt to extend one of these forms while preserving VC dimension results in another instance of one of the three given forms.
First consider extending a string of form (1). Suppose $S=0^a1$ for $a\geq 1$. Then $VCdim(S1)=2$. The extension $S0^b$ for a nonnegative integer $b$ is trivial. The extension $S0^b1$ results in a string of type (3) if $b\geq a$, and $VCdim(S0^b1)=2$ otherwise. We deal with the type (3) possibility below.
Consider extending a string of form (2). Suppose $S=1^a$ for $a \geq 1$. The only nontrivial extension for this form is $S0^b1$ for $b \geq 1$, and $VCdim(S0^b1)=2$.
Now consider extending a string of form (3). Suppose $S=0^a(10^b)^c1$, where $0 \leq a < b$, and $c\geq 1$. Then $VCdim(S1)=2$. If we let $T=S0^d1=0^a(10^b)^c10^d1$, then the tail of $T$ will be of the form $10^b10^d1.$ But to avoid increasing the VC dimension, we must have $b=d$, yielding another string of form (3).
\end{proof}
\begin{theorem}\label{T:uncountable}
The set $\mathfrak{V}_{\leq d}$ is uncountable iff $d\geq 2$.
\end{theorem}
\begin{proof}
The right to left direction follows easily from Corollary \ref{Cuncountable}. To show countability for $\mathfrak{V}_{\leq 1}$, we first note that there is a unique string of VC dimension 0, namely $0^*$. Lemma \ref{Lvc1} completes the argument by showing that the strings of VC dimension 1 are of finitely many forms, each with countably many instances.
\end{proof}
\begin{theorem}\label{Ttop}
The following all hold in the standard topology on $\mathbb{R}$.
\begin{enumerate}
\item $\mathfrak{V}_{< \infty}$ is not closed.
\item $\mathfrak{V}_{< \infty}$ is not open.
\item $\mathfrak{V}_{< \infty}$ is dense and codense in $\mathbb{R}$.
\item $\mathfrak{V}_{< \infty}$ is uncountable and co-uncountable in $\mathbb{R}$.
\item $\mathfrak{V}_{< \infty}$ has Lebesgue measure 0.
\item $\mathfrak{V}_{< \infty}$ is meagre.
\end{enumerate}
\end{theorem}
\begin{proof}
Every real (of any VC dimension) is, on the one hand, the limit of reals of finite VC dimension and, on the other, the limit of reals of infinite VC dimension. Therefore the closure of $\mathfrak{V}_{< \infty}$ and the closure of its complement both equal $\mathbb{R}$, so $\mathfrak{V}_{< \infty}$ is both dense and codense. Neither $\mathfrak{V}_{< \infty}$ nor its complement is closed under limits, so neither is closed (or open) in $\mathbb{R}$. Because $\mathfrak{V}_{< \infty} = \bigcup_{d\in\mathbb{N}} \mathfrak{V}_{\leq d}$ and each $\mathfrak{V}_{\leq d}$ is nowhere dense, $\mathfrak{V}_{< \infty}$ is meagre in $\mathbb{R}$. Because $\mathfrak{V}_{\leq 2}$ is uncountable, $\mathfrak{V}_{< \infty}$ is uncountable. The fact that it is null follows from the theorem of Borel previously mentioned; since $\mathfrak{V}_{< \infty}$ is null, its complement has full measure and is therefore uncountable as well.
\end{proof}
\textbf{Question}: Is $\mathfrak{V}_{< \infty}$ a subfield of $\mathbb{R}$, intermediate between $\mathbb{Q}$ and $\mathbb{R}$? This subfield of ``simple'' reals would be analogous to the constructible or computable numbers. But unlike those examples, it would be uncountable. It is not hard to show that $(\mathfrak{V}_{< \infty},\oplus)$ is a subgroup of $(\mathbb{R},\oplus)$, where $\oplus$ denotes XOR on binary representations. However, the ``carry bits'' of ordinary addition seem to leave just enough uncertainty for the sum of two elements of $\mathfrak{V}_{< \infty}$ to violate closure.
\subsection{The structure of strings of finite VC dimension}\label{Sbi}
There is a structural consequence of the topological discussion from the previous section. We present this as Proposition \ref{Pstruct}. Then we prove some facts about strings of finite VC dimension, some of which relate to symbolic dynamics. In particular, we work toward Theorem \ref{Talt}, which shows that VC dimension is bounded by the number of alternations in a string, and Theorem \ref{Tsofic}, which shows that, while strings of VC dimension at most $d$ form a shift space (when bi-infinite), this shift space is not sofic when $d>1$.
\begin{proposition}\label{Pstruct}
Consider a finite binary string $S$ of VC dimension $d \in \mathbb{N}$ with $d>0$. We assume without loss that $S$ ends in a 1. Suppose $VCdim(S^{([0:-1])})<d$. Then for some integer $k$, $VCdim(S^{([0:-1])}01^k) = d$.
\end{proposition}
\begin{proof}
Consider $r=\mathfrak{r}(S)$. This point is not in the closed set $\mathfrak{V}_{\leq d-1}$, and therefore there is some open interval $U$ containing $r$ such that every $t\in U$ has $VCdim(t)\geq d$. For all sufficiently large integers $k$, the real number corresponding to the binary string $S^{([0:-1])}01^k$ will be inside $U$. Therefore, for all sufficiently large $k$, $VCdim(S^{([0:-1])}01^k) \geq d$. By Lemma \ref{Lpis}, there is some $k^+$ such that $VCdim(S^{([0:-1])}01^{k^+}) = d$.
\end{proof}
\textbf{Example:} The string 011 has VC dimension 2 and 010 has VC dimension 1. The string 01011 has VC dimension 2 again.
\begin{proposition} Prepending or postpending a symbol to a binary string increases the VC dimension by at most 1.
\end{proposition}
\begin{proof}
The argument is the same as in the proof of Lemma \ref{Lpis}.
\end{proof}
\begin{lemma}\label{Lprepend}
Suppose $S$ is a binary string. Then for all $l \in \mathbb{N}$ it is the case that $VCdim(0^lS)\leq VCdim(S)+2$.
\end{lemma}
\begin{proof}
Without loss we may assume that $VCdim(S)$ is finite, say $d\in \mathbb{N}$. Let $S'$ be the reverse of $S$. The mask dimension of $S'$ is the same as the mask dimension of $S$, which is at most $d$. Let $S'' = S'0^l$, and $S'''$ the reverse of $S''$. Then $S'''=0^lS$, and
$$VCdim(S)+2 \geq Mdim(S)+2=Mdim(S')+2 \geq VCdim(S')+1 = VCdim(S'')+1$$
$$VCdim(S'') +1 \geq Mdim(S'') + 1= Mdim(S''') + 1 \geq VCdim(S''')=VCdim(0^lS).$$
\end{proof}
The proof technique used in Lemma \ref{Lprepend} can be generalized to prepending or postpending $0^l$ or $1^l$ to any binary string $S$. It may be necessary to take the bitwise complement of a string as part of this process. Bitwise complements are easily seen to preserve mask dimension.
\begin{corollary}\label{Cbuild}
Suppose $S$ is a binary string. Then for all $l \in \mathbb{N}$ and $c\in\{0,1\}$ it is the case that $VCdim(c^lS)\leq VCdim(S)+2$ and $VCdim(Sc^l)\leq VCdim(S)+2$.
\end{corollary}
\begin{proof}
The proof of Lemma \ref{Lprepend}, \textit{mutatis mutandis}.
\end{proof}
Note that $VCdim(0)=0$, and $VCdim(011)=2$, showing that the bound is tight.
For a binary string $S$, let $\textbf{alt}(S)$ denote the number of maximal contiguous blocks of $1$'s appearing in $S$. This is a measure of the number of alternations between 0 and 1 that occur in $S$. It may be infinite.
\begin{theorem}\label{Talt}
If $S$ is a binary string then $VCdim(S) \leq 2\textbf{alt}(S)$.
\end{theorem}
\begin{proof}
If $S$ is a string of 1's then the bound obviously holds, since $VCdim(S)=1$ and $\textbf{alt}(S)=1$. Otherwise, build $S$ from the left: begin with its leading (possibly empty) block of 0's, which has VC dimension 0, and then alternately append the maximal blocks of 1's and 0's of $S$. By Corollary \ref{Cbuild}, appending each block of 1's increases the VC dimension by at most 2, while appending 0's on the right does not affect the VC dimension at all. Since there are $\textbf{alt}(S)$ blocks of 1's, we conclude that $VCdim(S) \leq 2\textbf{alt}(S)$.
\end{proof}
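The alternation count used here is straightforward to compute. The following Python sketch (illustrative only; the function name is ours) counts the maximal blocks of 1's in a finite string.
\begin{verbatim}
from itertools import groupby

def alt(s):
    # number of maximal contiguous blocks of 1's in the finite string s;
    # by the theorem above, VCdim(s) <= 2 * alt(s)
    return sum(1 for symbol, _ in groupby(s) if symbol == "1")

# e.g. alt("0010111") == 2 and alt("11111") == 1
\end{verbatim}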
When we investigate statements about $\mathfrak{V}_{\leq d}$ and $\mathfrak{V}_{< \infty}$ in the remainder of this section, we regard the elements of both of these sets, without loss of generality, as members of $2^\mathbb{N}$. That is, the corresponding strings can be taken to be infinite on the right, extending with zeros if necessary; adding trailing zeros to a string does not affect its VC dimension.
\begin{corollary}\label{Cshift}
The set of strings $\mathfrak{V}_{<\infty}$ is closed under shifts.
\end{corollary}
\begin{proof}
This is a consequence of Lemma \ref{Lprepend}.
\end{proof}
This means that $\mathfrak{V}_{<\infty}$ is a subshift. It is clearly not a subshift of finite type (i.e., characterized by a finite set of forbidden words); in fact, any finite word occurs in some string of finite VC dimension. Thus $\mathfrak{V}_{<\infty}$ is not closed (or open) in the product topology on $2^{\mathbb{N}}$. On the other hand, $\mathfrak{V}_{\leq d}$ is not closed under shifts (consider the strings 11 and 011). However, it is topologically closed in $2^{\mathbb{N}}$ with the product topology.
Let $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ denote the set of bi-infinite binary strings of VC dimension at most $d$.
\begin{proposition}
{$\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ is a shift space.}
\end{proposition}
\begin{proof}
The VC dimension of a string depends only on the set of its finite substrings. For bi-infinite strings this set is not affected by shifts, and hence the VC dimension is not affected either. Thus the set is closed under shifts. The set $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ is easily seen to be closed in the product topology on $2^{\mathbb{Z}}$. Therefore it is a shift space.
\end{proof}
Note that both $\mathfrak{V}_{\leq d}$ and $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ are characterized by omitting finite strings of dimension $d+1$. Thus finite strings of VC dimension $d+1$ form a forbidden list for both sets. We might ask whether $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ is a shift of finite type (meaning it is characterized by a finite set of forbidden substrings). For $d=0$ this is clearly true, because the forbidden list can simply be $\mathcal{F}=\{1\}$. In Theorem \ref{T2prime} we explicitly show that $\mathfrak{V}_{\leq 1}^{\Leftrightarrow}$ is not a shift of finite type. In fact when $d > 1$, $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ does not even satisfy a weaker condition, known as being \textit{sofic}. We will not directly define a sofic shift space, but rather use an equivalent condition (see \cite{louidor2013independence}).
Let $B(\mathfrak{V}_{\leq d}^{\Leftrightarrow})$ denote the set of finite substrings that occur in any element of $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$.
For a word $w\in B(\mathfrak{V}_{\leq d}^{\Leftrightarrow})$, the \textit{follower set} of $w$ in $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$, denoted by $F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w)$, is defined
by
$$F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w) = \{z\in B(\mathfrak{V}_{\leq d}^{\Leftrightarrow}) : wz\in B(\mathfrak{V}_{\leq d}^{\Leftrightarrow})\}.$$
A $\mathbb{Z}$ shift space (such as $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$) is sofic if and only if it has only finitely many follower sets. We will show that when $d>1$, $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ has infinitely many distinct follower sets. First we need two technical lemmas.
The following lemma basically establishes that it is possible for a symbol in a binary string to be so distant from the rest of the string that it cannot affect the VC dimension. We use a left-infinite run of $0$'s (denoted by $0^{\infty}$) in the argument.
\begin{lemma}\label{Lmadness}
Suppose that a binary string $S$ is of the form $0^{\infty}T1$ where
\begin{enumerate}
\item $VCdim(S)=d+1$ for some $d\in \mathbb{N}$, $d>1$.
\item $VCdim(0^{\infty}T) = d$.
\end{enumerate}
Then there is some $N \in \mathbb{N}$ such that for all $l>N$, $VCdim(0^{\infty}T0^l1)=d$.
\end{lemma}
\begin{proof}
Let $k$ be the length of $T$, and $N=2k$. We claim this is sufficient. For suppose $l > N$ and $VCdim(0^{\infty}T0^l1)=d+1$. Then some $A\subseteq \mathbb{N}$ is shattered with $|A|=d+1$. Without loss, $0\in A$. There is some $A_0 \subseteq A$ that is traced by $0^{\infty}T0^l1$ but not traced by $0^{\infty}T$. We now argue that $A_0$ contains an element $c>k$.
Because $0^{\infty}T$ is left infinite and $T$ without loss begins with 1, $A_0$ is not a singleton, as all singletons are already traced. If $|A_0|\geq 2$, $\max{A_0} > k$, otherwise $A_0$ would be traced by $0^{\infty}T$. Therefore $\exists c \in A$, $c>k$. Because $d>1$, there are $a,b \in A$ with $a<b<c$, and $a=0$.
Because $c>k$ and $len(T)=k$, any substring $w$ of $0^{\infty}T0^l1$ tracing a non-singleton set including $c$ must be a suffix. The length of the suffix tracing $\{0,c\}$ must be $c+1$. But the length of the suffix tracing $\{0,b,c\}$ must also be $c+1$. Therefore these are the same suffix $\rightarrow\leftarrow$.
Therefore $VCdim(0^{\infty}T0^l1)=d$.
\end{proof}
The proof shows that in fact $0^{2\,len(T)+1}$ suffices in place of $0^{\infty}$ in the above lemma.
\begin{lemma}\label{Linsert}
Let $S$ be a binary string. For an integer $p>1$, let $S^{[p]}$ be derived from $S$ by replacing each symbol $c$ in $S$ with $0^{p-1}c$. Then $VCdim(S)=VCdim(S^{[p]})$.
\end{lemma}
\begin{proof}
It is clear that $VCdim(S^{[p]})\geq VCdim(S)$. We now show $VCdim(S^{[p]})\leq VCdim(S)$. Suppose $S^{[p]}$ shatters a set of size $k$. Then there is some $A\subseteq \mathbb{N}$ shattered by $S^{[p]}$ with $0\in A$, $|A|=k$ and where $A$ consists only of multiples of $p$. This $A$ can be shattered even restricting to substrings of the form $S^{[p]([i:j])}$ where $i$ is a multiple of $p$. In fact if $i \not \equiv 0 \mod p$, the subset of $\mathbb{N}$ corresponding to $S^{[p]([i:j])}$ contains no multiples of $p$. But then $S$ can shatter a set of size $k$ as well.
\end{proof}
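The dilation $S^{[p]}$ used in this lemma has an immediate implementation; the following Python sketch is illustrative only.
\begin{verbatim}
def dilate(s, p):
    # replace each symbol c of s by 0^(p-1) c, as in the lemma above;
    # the lemma asserts that this leaves the VC dimension unchanged
    return "".join("0" * (p - 1) + c for c in s)

# e.g. dilate("011", 3) == "000001001"
\end{verbatim}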
\begin{theorem}\label{Tsofic}
$\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ is not sofic if $d>1$.
\end{theorem}
\begin{proof}
Let $d>1$. Suppose a binary string $S$ is of the form $0^{2k+1}T1$ where
the following hold: $k=len(T)$ for $k\in \mathbb{N}$, $VCdim(S)=d+1$, $VCdim(0^{2k+1}T) = d$.
Define a sequence of numbers $a_1,a_2,\ldots$ recursively by $a_1 = 1$ and $a_{i+1} = 2ka_i+2$. For each $i =1,2,3,\ldots$, define $w_i = 0^{(2k+1)a_i}T^{[a_i]} = (0^{2k+1}T)^{[a_i]}$. By Lemma \ref{Linsert}, $VCdim(w_i)=d$ for all $i \in \mathbb{N}\setminus \{0\}$. Therefore $w_i \in B(\mathfrak{V}_{\leq d}^{\Leftrightarrow})$ for all $i$.
Suppose $i < j$. We claim $F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_i) \neq F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_j)$. In particular we will argue that $0^{a_j-1}1 \in F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_i) \setminus F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_j)$. By the definition of $S$ and Lemma \ref{Linsert}, $VCdim(w_j0^{a_j-1}1)=VCdim((0^{2k+1}T1)^{[a_j]})=d+1$. Therefore $0^{a_j-1}1 \notin F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_j)$.
On the other hand $w_i = 0^{(2k+1)a_i}T^{[a_i]}$ and $len(T^{[a_i]})=ka_i$. But $a_j-1 > 2ka_i$. Therefore, by Lemma \ref{Lmadness}, $$VCdim(w_i0^{a_j-1}1)=VCdim(0^{(2k+1)a_i}T^{[a_i]}0^{a_j-1}1)=d,$$
and $0^{a_j-1}1 \in F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_i)$. Thus $F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_i) \neq F_{\mathfrak{V}_{\leq d}^{\Leftrightarrow}}(w_j)$ and consequently $\mathfrak{V}_{\leq d}^{\Leftrightarrow}$ is not sofic.
\end{proof}
\section{Prime strings}
Let $d$ be a nonnegative integer and $S$ a binary string. We say that $S$ is $d$-prime if $VCdim(S)=d$ and $VCdim(S') < d$ whenever $S'$ is a proper substring of $S$. In this section we seek to determine the prime strings for various $d$ and to analyze the properties of prime strings in general.
A consequence of Lemma \ref{Lmonotone} is that a finite string $S$ is $d$-prime iff the VC dimension of $S$ is $d$ and the dimension decreases when either its leftmost or rightmost symbol is removed.
We now establish the existence of prime strings, and show that all strings contain at least one prime substring of the same VC dimension.
\begin{proposition}\label{Pss} Let $S$ be a string of VC dimension $d \in \mathbb{N}$. Then $S$ has a $d$-prime string as a substring.
\end{proposition}
\begin{proof} We can assume without loss that $S$ is finite. The following algorithm will produce a prime substring of $S$. First remove as many symbols from the right of $S$ as possible while preserving the VC dimension. Then remove as many symbols from the left of the remainder as possible while preserving the VC dimension. Let $S'$ denote the result of this process. Then $VCdim(S')=d$. Suppose, by way of contradiction, that $S'$ has a proper substring of VC dimension $d$. Then, by Lemma \ref{Lmonotone}, $S'$ can have a digit removed either from the left or right while preserving the VC dimension. This contradicts the process that produced $S'$ in the first place.
\end{proof}
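For concreteness, the greedy trimming just described can be sketched in a few lines of Python; here \texttt{dim} stands for any routine computing the VC dimension of a finite binary string (a brute-force version of such a routine is sketched later in this section), so the snippet is purely illustrative.
\begin{verbatim}
def prime_substring(s, dim):
    # Greedy trimming from the proof of Proposition (Pss).
    # `dim` is an assumed callable returning the VC dimension of a
    # finite binary string such as "0010111".
    d = dim(s)
    # remove symbols from the right while the dimension is preserved
    while len(s) > 1 and dim(s[:-1]) == d:
        s = s[:-1]
    # then remove symbols from the left while the dimension is preserved
    while len(s) > 1 and dim(s[1:]) == d:
        s = s[1:]
    return s
\end{verbatim}
For example, applied to the string $00101110$ this procedure returns $0010111$, the $3$-prime string analyzed later in this section.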
\begin{lemma}\label{Lpis} Let $S$ be a possibly infinite binary string of VC dimension $d+1$ for $d \in \mathbb{N}$. Then $S$ contains a proper initial substring of VC dimension $d$.
\end{lemma}
\begin{proof} By Lemma \ref{Lfinite} we can assume without loss that $S$ is finite. Let $S^+$ be the shortest proper initial substring of $S$ of VC dimension $d+1$. Let $S^-$ be $S^+$ with the rightmost symbol removed. Then $VCdim(S^-) \leq d$. We claim that also $VCdim(S^-) \geq d$.
Let $A \subseteq \mathbb{N}$ be a set of size $d+1$ that is shattered by $S^+$. Let $a = \max(A)$ and $A^- = A\setminus \{a\}$.
We claim that $A^-$ is shattered by $S^-$. Let $A_0 \subseteq A^-$. There is some substring $s$ of $S^+$ which cuts out $A_0 \cup \{a\}$ from $A$.
Then $s_a = 1$ and without loss $a$ is the last index in $s$. Let $s^-$ be $s$ without the rightmost symbol, and note that $s^-$ is a substring of $S^-$. Thus $A_0$ is cut out from $A$ by a substring of $S^-$. Since $A_0$ was arbitrary, $A^-$ is shattered by substrings of $S^-$. Therefore $VCdim(S^-) \geq d$, whence $VCdim(S^-)=d$.
\end{proof}
\begin{proposition} Let $S$ be a possibly infinite binary string of VC dimension $d+1$ for $d \in \mathbb{N}$. Then $S$ contains a $d$-prime substring. In particular, every $(d+1)$-prime string contains a $d$-prime string as a substring.
\end{proposition}
\begin{proof} First apply Lemma \ref{Lpis} to get a proper initial substring of VC dimension $d$. Then apply Proposition \ref{Pss} to get a $d$-prime substring.
\end{proof}
Notice that the $d$-prime strings are not closed under complement. For example $011$ is $2$-prime, but $100$ does not even have VC dimension 2. This is essentially because of the downward closure of the substring concept and the way we associate substrings with subsets in the definition of string VC dimension. This is not the case for mask dimension: mask dimension is easily seen to be preserved under complementation for any string and under reversal for finite strings.
We now consider prime strings of VC dimension 1, 2, and 3.
\subsubsection{VC dimension 1}
There is a unique prime string $S = 1 $.
\subsubsection{VC dimension 2}
\begin{theorem}\label{T2prime} Let $k,d$ be nonnegative integers. All 2-prime strings are either of the form
\begin{enumerate}
\item $10^k10^d1$ where $k < d$, or
\item $0^{d+1}10^d1$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $S$ be a 2-prime string. We may assume that $S$ is finite and ends with 1. By definition, we must have $VCdim(S^{([0:-1])})=1$. Therefore $S^{([0:-1])}$ is of one of the three forms presented in Lemma \ref{Lvc1}. Viewed another way, $S$ must result from appending $0^d1$ to one of the forms in Lemma \ref{Lvc1}, for some nonnegative integer $d$. The forms from Lemma \ref{Lvc1} are as follows.
\begin{enumerate}
\item $0^a1$, $a\geq 1$
\item $1^a$, $a \geq 1$
\item $0^a(10^b)^c1$, $0 \leq a \leq b$, $b\geq 1$, $c\geq 1$
\end{enumerate}
We now try appending $0^d1$ to each of these forms in a series of cases (respectively).
Case 1: Suppose $S=0^a10^d1$ for $a \geq 1$, $d \geq 0$. To achieve $VCdim(S)=2$ it is necessary and sufficient that $d<a$. Then in order for $S$ to be prime we must have $a=d+1$ (else we could remove a zero on the left while preserving dimension). This gives a 2-prime string of form (2) from the statement of the theorem.
Case 2: Suppose $S=1^a0^d1$ for $a \geq 1$. To achieve $VCdim(S)=2$, it is necessary and sufficient that $a>1$ and $d>0$. Then for $S$ to be prime, we must have $S=110^d1$, which is form (1) from the statement of the theorem.
Case 3: Suppose $S=0^a(10^b)^c10^d1$ where $0 \leq a \leq b$, $b\geq 1$, and $c\geq 1$. To achieve $VCdim(S)=2$, it is necessary and sufficient that $d\neq b$. If $d< b$ then it must be that $S=0^{d+1}10^d1$, which is form (2) in the statement of the theorem. If $d > b$ then we must have $S=10^b10^d1$, which is form (1) in the statement of the theorem.
\end{proof}
It is natural to wonder where $d$-prime strings lie in the hierarchy of formal languages. One could pose similar questions about finite strings of bounded VC dimension in general.
\begin{corollary}
The language of 2-prime strings is not regular.
\end{corollary}
\begin{proof}
Suppose by way of contradiction that the language is regular. Then by the Pumping Lemma, all sufficiently long 2-prime strings can be written as a product of words $xyz$ such that $xy^nz$ is still 2-prime for all $n\in \mathbb{N}\setminus\{0\}$ (with $y$ nonempty). But consider a 2-prime string of the form $S = 0^{d+1}10^d1$, where $d$ is sufficiently large for the Pumping Lemma to apply. Suppose $S=xyz$. Then $y$ either does or does not contain a 1. If $y$ does contain a 1, then $xy^2z$ is not 2-prime because it begins with 0 but contains more than two 1's. If $y$ does not contain a 1 then $xy^2z$ is not 2-prime because the 0's are imbalanced. Therefore the language cannot be regular.
\end{proof}
\subsubsection{VC dimension 3}
The language of 3-prime strings is much more complex than in the case of dimension 2. This is owing to the comparative lack of structure in strings of VC dimension 2, which form an uncountable (and co-uncountable) set (by Theorem \ref{T:uncountable}). In particular, the proof strategy of Theorem \ref{T2prime} cannot be adapted to the case of 3-prime strings, because dimension 2 strings have no simple characterization.
The prime strings of VC dimension 2 divide clearly into two infinite families, and have a bounded number of alternations between 0 and 1. However in the case of dimension 3 this is not the case. In this section we will analyze the shortest 3-prime string $S=0010111$ (found through exhaustive search). This string has an associated family of 3-prime strings, of which it is the simplest member. These are of the form $00(10)^k 111$, $k \geq 1$. This family provides examples of 3-prime strings with unboundedly many alternations. Other examples of 3-prime families discovered experimentally are presented (without proof) in Table \ref{Ttprime}.
\begin{table}
\begin{center}
\begin{tabular}{|l|}
\hline
$S=01111^k0^l10^m1$, where $l \leq k+1, m,k\geq 1$ \\
\hline
$S=0^{2k+1}10^{k-1}10^k10^{k-1}1$, where $k\geq 1$ \\
\hline
$S=1101010101(01)^k0000010000011$, where $k\geq 0$\\
\hline
\end{tabular}
\caption{Some examples of 3-prime families. }
\label{Ttprime}
\end{center}
\end{table}
In order to simplify the proof of the 3-primeness of $S$ and its family members, we introduce some new terminology.
\begin{definition}
Given a binary string $S$ of length $n$, the \textit{right rays} of $S$ are the substrings of the form $S^{([i:n])}$ for $0 \leq i < n$. Let $\mathfrak{R}(S)$ denote the set of right rays of $S$.
\end{definition}
\begin{definition} Consider $A\subseteq \mathbb{N}$, consisting of elements $\{a_1 < a_2 < \cdots < a_n\}$. The \textit{telescope} of $A$ is the family of subsets $\{\{a_1,...,a_i\} : i \leq n\}$.
\end{definition}
\begin{definition}
Given a substring $s$ of $S$, the telescope of $s$, denoted $\mathcal{T}_s$, is the telescope of the associated set $n(s) = \{i \in \mathbb{N}: s_i = 1\}$.
\end{definition}
\begin{proposition}
Given a string $S$, the set family associated with $S$ as in the definition of VC dimension on strings is
$$\mathfrak{S}=\bigcup_{s \in \mathfrak{R}(S)} \mathcal{T}_s.$$
\end{proposition}
\begin{proof}
This is a clear consequence of the definition of the VC dimension of a binary string.
\end{proof}
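For finite strings this proposition gives a direct, if exponential, way to compute the associated set family and hence the VC dimension by brute force. The following Python sketch is only illustrative (the function and variable names are ours, not part of the development); it can also serve as the dimension routine \texttt{dim} assumed in the sketch following Proposition \ref{Pss}.
\begin{verbatim}
from itertools import combinations

def telescope(s):
    # initial segments of the 1-positions of s (empty segment included)
    ones = [j for j, c in enumerate(s) if c == "1"]
    return [frozenset(ones[:i]) for i in range(len(ones) + 1)]

def set_family(s):
    # the family from the proposition: union of telescopes of right rays
    fam = set()
    for i in range(len(s)):
        fam.update(telescope(s[i:]))
    return fam

def shattered(a, fam):
    # a is shattered if its 2^|a| subsets all arise as intersections
    return len({c & a for c in fam}) == 2 ** len(a)

def vc_dim(s):
    fam = set_family(s)
    d = 0
    for k in range(1, len(s) + 1):
        if any(shattered(frozenset(a), fam)
               for a in combinations(range(len(s)), k)):
            d = k
    return d

# consistent with the examples in the text:
# vc_dim("011") == 2, vc_dim("010") == 1, vc_dim("0010111") == 3
\end{verbatim}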
\begin{proposition} The family of strings $F_k=00(10)^k 111$, $k \geq 1$ consists only of 3-prime strings.
\end{proposition}
\begin{proof}
There are three things to show.
\begin{enumerate}
\item When $S=00(10)^k 111$, $VCdim(S)=3$ for all $k \geq 1$.
\item When $S=0(10)^k 111$, $VCdim(S)\leq 2$ for all $k \geq 1$.
\item When $S=00(10)^k 11$, $VCdim(S)\leq 2$ for all $k \geq 1$.
\end{enumerate}
In the case of (1) we only need to show that the VC dimension is at least 3, because (3) together with Lemma \ref{Lpis} implies that the VC dimension cannot be 4 or greater.
Proof of (1):
Let $S=00(10)^k 111$ for $k\geq 1$, and let $m=len(S)=5+2k$. The subsets of $\mathbb{N}$ corresponding to $\mathfrak{R}(S)$ are described in the following table. The substrings corresponding to the sets are shown in the column headers, and each set depends on the row index $i$.
\begin{table}[h]
\begin{tabular}{|c|c|c|}
\hline
$i$& $S^{([-3-2i+1:m])}$ & $S^{([-3-2i:m])}$ \\
\hline \hline
0 & $\{0,1\}$ & $\{0,1,2\}$ \\
\hline
1 & $\{1,2,3\}$ & $\{0,2,3,4\}$ \\
\hline
2 & $\{1,3,4,5\}$ & $\{0,2,4,5,6\}$ \\
\hline
3 & $\{1,3,5,6,7\}$ & $\{0,2,4,6,7,8\}$ \\
\hline
4 & $\{1,3,5,7,8,9\}$ & $\{0,2,4,6,8,9,10\}$ \\
\hline
5 & $\{1,3,5,7,9,10,11\}$ & $\{0,2,4,6,8,10,11,12\}$ \\
\hline
$\vdots$ & $\vdots$ & $\vdots$ \\
\hline
$k-1$ & $\{\text{odds to }2k-3\text{ inclusive },2k-2,2k-1\}$ & $\{\text{evens to }2k-2\text{ inclusive },2k-1,2k\}$ \\
\hline
$k$ & $\{\text{odds to }2k-1\text{ inclusive },2k,2k+1\}$ & $\{\text{evens to }2k\text{ inclusive },2k+1,2k+2\}$ \\
\hline
\end{tabular}
\end{table}
Additionally, the following two sets are included, along with the sets $n(S^{([m:m])})=\emptyset$ and $n(S^{([-1:m])})=\{0\}$.
$$P = n(S^{([1:m])})=\{1,3,5,\ldots,2k+1,2k+2,2k+3\},$$
$$U = n(S^{([0:m])})=\{2,4,6,\ldots,2k+2,2k+3,2k+4\}.$$
We will argue that the above set system shatters $A=\{0,2k-1,2k\}$. In fact $\{0,2l-1,2l\}$ is shattered for $l=1,...,k$ though we omit the proof.
For each subset of $A$ we give the relevant set named above which traces it. The rows and columns refer to the table above.
\vspace{0.2cm}
\begin{tabular}{|l|l|}
\hline
Subset & Realization \\
\hline \hline
$\{\}$ & $S^{([m:m])}$ \\
\hline
$\{0\}$ & $S^{([-1:m])}$ \\
\hline
$\{2k-1\}$ & $P$ \\
\hline
$\{2k\}$ & $U$ \\
\hline
$\{0,2k-1\}$ & truncating row $k-1$ column 1 so that $2k-1$ is the maximum element \\
\hline
$\{0,2k\}$ & row $k$ column 1 \\
\hline
$\{2k-1,2k\}$ & row $k$ column 0 \\
\hline
$\{0,2k-1,2k\}$ & row $k-1$ column 1 \\
\hline
\end{tabular}
\vspace{0.2cm}
This concludes the proof of (1).
For (2) and (3) we give only a sketch. First enumerate the sets realized from right rays. Then suppose, toward a contradiction, that a set $A$ of size 3 is shattered. Work through the various cases concerning whether $A$ contains only odds, only evens, or a mix of odds and evens. In each case it will be clear that there is some subset of $A$ which is not traced.
\end{proof}
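Using the \texttt{vc\_dim} sketch given after the proposition on right rays and telescopes, the characterization of $d$-primality and the exhaustive search that located $0010111$ can be expressed directly; again this is only an illustrative brute-force sketch, with names of our choosing.
\begin{verbatim}
from itertools import product

def is_prime_string(s, d):
    # s is d-prime iff it has VC dimension d and removing either the
    # leftmost or the rightmost symbol drops the dimension
    # (cf. the remark after Lemma Lmonotone)
    return (vc_dim(s) == d
            and vc_dim(s[:-1]) < d
            and vc_dim(s[1:]) < d)

def shortest_prime_strings(d, max_len=12):
    # exhaustive search over strings of increasing length
    for n in range(1, max_len + 1):
        found = [s for s in ("".join(w) for w in product("01", repeat=n))
                 if is_prime_string(s, d)]
        if found:
            return found
    return []

# shortest_prime_strings(3) should include "0010111",
# the shortest 3-prime string reported above
\end{verbatim}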
We are uncertain how complex the language of 3-prime strings may be.
\textbf{Question}: Which level of the hierarchy of formal languages do $d$-prime strings occupy?
\section{Model theory and generalizations}\label{Smt}
The study of binary strings is equivalent to the study of subsets of $\mathbb{N}$, or of $\mathbb{Z}$ if the strings are infinite in both directions. The complexity of subsets of integers is a topic that has been deeply investigated in model theory. These results allow for the quick determination of the finitude of VC dimension for a broad collection of binary strings. In this section we will do two things. First we will put the results of the previous sections in a model theoretic context and show that many of the results in the previous sections have analogues in structures other than $\mathbb{Z}$ and $\mathbb{N}$. Then we will review results from the model theory literature that establish finitude of VC dimension for a wide assortment of strings. See \cite{marker2006model} for a survey of model theory.
In the context of model theory a \textit{language} is a set of symbols which denote abstract functions, predicates, and constants. For example, the language of groups is $L=\{+,0\}$ and the language of ordered rings is $L=\{+,\cdot,<\}$. The equality relation is implicitly included. If $L$ is a language, then there is an associated set of well formed first order formulas in $L$. A universe of objects together with an interpretation of the elements of $L$ is known as an $L$-structure (or model). For example, $N=(\mathbb{N},+,\cdot)$ is a model in the language of rings. An $L$-model provides a semantics for the $L$-formulas.
In the presence of a model, formal expressions in a language take on a truth value. If a formal $L$-sentence $\varphi$ is true in an $L$-model $M$, we write $M \models \varphi$, which we read as ``$M$ models $\varphi$''. For example, $\mathbb{N} \models \forall x \forall y (xy = yx)$ with respect to standard multiplication. We refer to $M$ both as the model and as the universe of the model, by abuse of notation. The first order theory of the $L$-structure $M$ is the set of first order $L$-formulas modeled by $M$.
We say that a language $L'$ is an expansion of a language $L$ if $L \subseteq L'$. If $\varphi(x_0,x_1,\ldots,x_m)$ is an $L$-formula we can partition its variables such that some are construed as parameters. When we write $\varphi(\bar{x};\bar{y})$ we mean an $L$-formula with variables $\bar{x}$ of some arity $|\bar{x}|$ and parameter variables $\bar{y}$ with some arity $|\bar{y}|$. If $M$ is an $L$-structure we use $M^{\bar{x}}$ and $M^{\bar{y}}$ to refer to the $|\bar{x}|$-tuples and $|\bar{y}|$-tuples in $M$, respectively. We define, for any $\bar{b} \in M^{\bar{y}}$, $\varphi(M,\bar{b}) = \{\bar{a}\in M^{\bar{x}}: M \models \varphi(\bar{a},\bar{b})\}$. For any $X \subseteq M^{\bar{x}}$ and $Y \subseteq M^{\bar{y}}$, we have the set system $\mathcal{C}^{\varphi(\bar{x},\bar{y})}_{X,Y} = \{\varphi(X,\bar{b}) : \bar{b} \in Y\}$. The VC dimension of $\varphi(\bar{x},\bar{y})$ with respect to the theory of $M$ is defined to be the VC dimension of $\mathcal{C}^{\varphi(\bar{x},\bar{y})}_{X,Y}$ where $X=M^{\bar{x}}$ and $Y=M^{\bar{y}}$. This is first order definable and a property of the theory of $M$. A first order $L$-theory $T$ is said to be NIP (for ``not the independence property'') if every partitioned $L$-formula has finite VC dimension. A guide to NIP theories can be found in \cite{simon2015guide}.
We now show that what we have called mask dimension on binary strings is simply the VC dimension of certain formulas in Presburger arithmetic expanded by a predicate.
Let $M$ be an $L$-structure where $L$ is an expansion of the language with a single binary operator, denoted $+$. Let $P(x)$ be a predicate on $M$. To connect with binary strings, use the convention that $P(a)$ is identified with $1$ if $M \models P(a)$ for $a \in M$, and $P(a)$ is identified with 0 if $M \models \neg P(a)$. A $d$-mask on $(M,+,P)$ is a set of sequences of the form $\langle P(a_1+t),P(a_2+t),\ldots,P(a_d+t)\rangle$ for fixed $a_1,...,a_d \in M$ as $t$ varies in $M$. A $d$-mask on $(M,+,P)$ is said to be full if $|\{\langle P(a_1+t),P(a_2+t),\ldots,P(a_d+t)\rangle : t \in M\}| = 2^d$. The mask dimension of the predicate $P$ is the maximum $d$ such that a full $d$-mask on $(M,+,P)$ exists. If there is no maximum, we say that the mask dimension is $\infty$.
If $M=\mathbb{N}$ and $+$ is standard addition, then $P \subseteq \mathbb{N}$ and $P$ can be identified with a binary string. The mask dimension of this string as defined in Section \ref{Sintro} is what we have defined as the mask dimension of $(M,+,P)$ in the previous paragraph. Note that this is also the VC dimension of $\varphi(x;y) = P(x+y)$ in the theory of $M$.
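For a finite binary string, the mask dimension just described can likewise be computed by brute force. The Python sketch below regards the string as a subset of $\mathbb{N}$ that is $0$ beyond its end; it is purely illustrative (exponential running time, names of our choosing).
\begin{verbatim}
from itertools import combinations

def mask_dim(s):
    # brute-force mask dimension of a finite binary string s,
    # regarded as a predicate P on (N, +) with P(i) = 0 for i >= len(s)
    n = len(s)
    P = lambda i: i < n and s[i] == "1"
    d = 0
    while True:
        full = False
        for a in combinations(range(n), d + 1):
            # translates t beyond n only repeat the all-zero pattern
            patterns = {tuple(P(ai + t) for ai in a) for t in range(n + 1)}
            if len(patterns) == 2 ** (d + 1):
                full = True
                break
        if not full:
            return d
        d += 1

# e.g. mask_dim("011") == 2
\end{verbatim}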
We can see from this description that all set systems arising from binary strings are self dual, in the sense that $P(x+y)=P(y+x)$. This is a fairly strong restriction on the kinds of set systems that can arise from binary strings.
Observe that the entire discussion about mask dimension of a definable set can be applied to more general situations through the notion of the VC dimension of $P(x+y)$. Below we show that the Cantor set has infinite mask dimension.
\begin{proposition} Let $M = (\mathbb{R},+,P)$ and $P(\mathbb{R})$ be the Cantor set. Then $(\mathbb{R},+,P)$ has infinite mask dimension.
\end{proposition}
\begin{proof}
An element $r \in \mathbb{R}$ is in the Cantor set iff there is $A \subseteq \mathbb{N}$ such that $r = 2\sum_{a \in A} 3^{-a}$.
Let $d \in \mathbb{N}$ be given.
Consider a $d$-mask of the form $\langle P(a_1+t),P(a_2+t),\ldots,P(a_d+t)\rangle$ where $a_i = 2\cdot 3^{-i}$ for $i =1,2,\ldots,d$.
We claim that this $d$-mask is full. Let $A \subseteq [d]$ be given.
Define $t= 2\sum_{a \in A}3^{-a}$.
Now consider $a_i + t = 2(3^{-i}+ \sum_{a \in A} 3^{-a})$. We have $M \models P(a_i + t) \iff i \notin A$. This gives a full $d$-mask. Since $d$ was arbitrary, the Cantor set has infinite mask dimension.
\end{proof}
It was shown in \cite{hieronymi2018interpreting} using model theoretic techniques that an expansion of $(\mathbb{R},+,<)$ that defines a Cantor set is not NIP. The order relation is not necessary (see above) for the standard ternary Cantor set. The paper however deals with a more abstract definition of a Cantor set, namely a subset of $\mathbb{R}$ that is nonempty, compact, and has neither isolated nor interior points.
It is certain that something equivalent to the following is known, but we cannot find a reference.
\begin{proposition}\label{Pgroup} Suppose $R=(R,+)$ is a group. Suppose that $P$ names a subgroup of $R$. Then the mask dimension of $(R,+,P)$ is one unless $R=P$ in which case it is zero.
\end{proposition}
\begin{proof}
Let $G$ be the subgroup named by $P$. Let $r_1,r_2 \in R$, and consider a 2-mask $\langle P(r_1+t),P(r_2+t)\rangle$. For this mask to be full, there must be four values of $t$, each realizing a different group membership condition on $r_1+t$ and $r_2+t$. In particular, there must be some $t_{11}$ such that $r_1+t_{11}\in G$ and $r_2+t_{11}\in G$, as well as some $t_{01}$ such that $r_1+ t_{01} \notin G$ and $r_2+t_{01} \in G$. We show that this is not possible. Without loss $R\setminus G$ is nonempty.
Suppose that there is some $t_{11} \in R$ such that $r_1+t_{11} = g_1, r_2+t_{11} = g_2$, with $g_1,g_2 \in G$.
Then $g_1-r_1 = g_2 - r_2$, and $g_1-g_2 = r_1 -r_2$. Therefore $r_1 - r_2 \in G$.
Now suppose that there is some $t_{01} \in R, h \in R \setminus G$, and $g \in G$ such that $r_1 + t_{01} = h$, and $r_2 + t_{01} = g$.
Then $h-g = r_1 - r_2 \in G.$
Therefore $h-g \in G$ and $h-g+g \in G$. Then $h \in G$ $\rightarrow \leftarrow$. This shows that the mask dimension of $P$ is less than two.
Suppose that there is some $h \in R\setminus G$. Let $g \in G$. Then $g+0 \in G$, and $g+h \notin G$ (since $g+h$ is in the coset $G+h$, disjoint from $G$). Therefore a full 1-mask exists.
Finally suppose that $R = G$. Then there is no full 1-mask.
\end{proof}
Proposition \ref{Pgroup} gives many examples of pairs $(M,+,P)$ for which $P$ is of mask dimension 1.
The example $(\mathbb{R},+,\mathbb{Q})$ shows in particular that a dense/codense subset of $\mathbb{R}$ can have small complexity. On the other hand a dense/codense subset of $\mathbb{R}$ can have infinite complexity, as the following shows.
\begin{proposition} $(\mathbb{R},+,P)$ has infinite mask dimension where $P$ is the union of sets of the form $\mathbb{Q}+\sum_{i\in A} \pi^i$ for finite subsets $A \subseteq \mathbb{N}$, and $\pi$ a nonalgebraic constant.
\end{proposition}
\begin{proof}
We identify $P$ with its interpretation $P(\mathbb{R})$. Because $P \supseteq \mathbb{Q}$, $P$ is dense in $\mathbb{R}$. Because $P$ is countable it is also codense. Let $d \in \mathbb{N}$ be given. Let $a_i = \pi^i$ for $i=1,2,...,d$. Fix any $A \subseteq [d]$ and let $t = \sum_{i \in A} \pi^i$. Then $a_i + t \in P \iff i \notin A$. This gives a full $d$-mask.
\end{proof}
Note that the definition of a Sidon set generalizes to abelian groups other than $(\mathbb{Z},+)$. In these more general settings, predicates $P$ which realize near Sidon sets will still have finite mask dimension, essentially by the same argument from Section \ref{Ssidon}. This is somewhat implicit in results from \cite{VPD2} and \cite{POINT1}.
\subsubsection{Model theoretic connections}\label{Smodelcon}
In this section we survey the model theory literature for results that give information about classes of binary strings (and generalizations) with finite VC dimension. Often in model theory a goal is to show that a structure has combinatorial properties, such as NIP. A stronger condition than NIP is \textit{stability}. Many authors have given conditions on $P$ under which $(\mathbb{Z},+,P)$ is a stable or NIP structure. Either of these conclusions implies that every formula, including $P(x+y)$, has finite VC dimension, and hence that the binary string corresponding to $P$ has finite VC dimension.
There is deep work by a number of authors that examines groups $G$, subsets $A \subseteq G$, and the VC dimension of the set family $\{gA: g \in G\}$. This is of course equivalent to studying models $(G,+,A)$ of finite mask dimension where $(G,+)$ is a group (only written in multiplicative notation). Examples of work in this vein include \cite{conant2018structure, conant2018pseudofinite, conant2020approximate}, and \cite{terry2020quantitative}.
Model theorists have been aware for some time that when $A \subseteq \mathbb{N}$ satisfies certain sparsity conditions, $(\mathbb{Z},+,A)$ has finite mask dimension. Some work on this topic includes \cite{conant2019stability,POINT1,conant2018multiplicative}, and \cite{conant2020weakly}.
Hawthorne has done work on examining the relation between automatic sequences and finite VC dimension \cite{hawthorne2020automata}.
In \cite{kaplan2017decidability} it is shown (as a consequence of the Green-Tao theorem) that if $P$ names the primes in $\mathbb{N}$, then for all $d$, if $A=[d]$ then for every $A_0 \subseteq A$ there is an arithmetic progression $\{t_0i+t_1: i \in A\}$ such that $P(t_0i+t_1) \iff i\in A_0$.
Consequently $|\{\langle P(t_0+t_1),P(2t_0+t_1),\ldots,P(dt_0+t_1) \rangle : t_0,t_1 \in \mathbb{N}\} | = 2^d$. This is a kind of ``affine" mask dimension on the set of primes, which is shown to be infinite.
\textbf{Question:} What is the mask dimension of the binary string with a 1 at index $i$ precisely if $i$ is prime?
It follows from the work of \cite{POINT1,VPD2} that the sequence with a 1 precisely in indexes corresponding to Fibonacci numbers has finite VC dimension. These authors also first established basically all of the facts in Section \ref{Spowersof2}. There has been some interesting work relating NIP theories to dynamics on bi-infinite binary strings through the automorphism group \cite{mofidi2018some}.
\bibliographystyle{amsplain}
\small
\subsubsection{\textbf{Feature Smoothing Block}} FSB is a stack of $1 \times 1$ Convolution, Batch Normalization, and ReLU layers. This block reduces the dimensionality of the input features or smooths them.
\par
\subsubsection{\textbf{Feature Interpolator Block}} At each stage of ResNet-$50$, the spatial size of the features is reduced by half in order to increase the receptive field. Hence, the output features of the deeper layers need to be upsampled in order to merge them with the features produced by shallower layers. FIB performs this operation using bilinear interpolation. Although a deconvolution layer could also be used for this purpose, we avoid it because of the extra learnable parameters.
\par
\subsubsection{\textbf{Context Extraction Block}} The utility of this block is to increase feature separability by gathering various levels of context. It performs a concatenation operation which combines diverse features from earlier stages to embed uniqueness in the features (Fig. \ref{fig_customized_neural_links}).
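For illustration, the three blocks above can be summarized by the following PyTorch-style sketch; the framework, channel counts, and layer arrangement are placeholders rather than our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FSB(nn.Module):
    """Feature Smoothing Block: 1x1 Conv -> BatchNorm -> ReLU (sketch)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

def FIB(x, size):
    """Feature Interpolator Block: parameter-free bilinear upsampling."""
    return F.interpolate(x, size=size, mode="bilinear", align_corners=False)

def CEB(features):
    """Context Extraction Block: concatenate multi-stage features
    (all assumed to share the same spatial size)."""
    return torch.cat(features, dim=1)
\end{verbatim}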
\subsection{\textbf{Instance Detection and Segmentation}}
The semantic segmentation approach assigns a common label to all instances of a class and does not differentiate between them. In ARC'$17$, every item had exactly one instance and therefore semantic segmentation was sufficient for recognition by treating each item as a different category. However, multiple instances are quite common in practice. In such cases, Mask-RCNN \cite{maskrcnn} cannot be employed because box regression is hard to learn in short training durations ($\sim30$ minutes) as compared to semantic segmentation, and it is also computationally slower. Hence, we extend our system to perform instance detection and segmentation while adding minimal computational overhead.
\par
To achieve this, first, pixel-wise semantic labels for an input image are obtained by the child. For each item class, pixel-wise masked images are computed using the predicted labels, i.e., $n$ masked images for $n$ classes. All such images are then fed to the tutor network, which predicts bounding boxes for all instances of the class, similar to Fig. \ref{fig_auto_anno_results}f. The predicted boxes serve as the rectangular boundaries for each instance while the labels predicted by the child serve as the segmentation mask (Fig. \ref{fig_comptetition_runs}).
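A minimal sketch of this pipeline is given below; \texttt{child} and \texttt{tutor} are assumed callables that return, respectively, a per-pixel label map and a list of boxes for a masked image, and all names and signatures are illustrative only.
\begin{verbatim}
import numpy as np

def detect_instances(image, child, tutor, num_classes):
    # child(image): HxW integer label map (0 = background), from the child net
    # tutor(masked): list of (x1, y1, x2, y2) boxes for one masked image
    labels = child(image)
    instances = []
    for c in range(1, num_classes + 1):
        mask = (labels == c)
        if not mask.any():
            continue
        masked = image * mask[..., None]   # keep only pixels of class c
        for box in tutor(masked):          # one box per instance of class c
            instances.append({"label": c, "box": box, "mask": mask})
    return instances
\end{verbatim}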
\begin{figure}[t]
\centering
\subfloat[]
{
\begin{tikzpicture}
\FPeval{\txthgt}{0.4}
\FPeval{\radiicornr}{0.15}
\node (ex) [scale = 0.95]{
\begin{tikzpicture}[node distance=5.0ex,scale=0.5]
\node (resnet50) [rectangle, fill=cyan!90!magenta,label={[above]\tiny Resnet-$50$},minimum width=6.5ex,minimum height=17ex,yshift=-7.0ex]{};
\node (stage1) [rectangle, fill=black!10!white,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny Conv$1\_3$};
\node (stage2) [rectangle, fill=black!10!white, below of=stage1,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny Conv$2\_3$};
\node (stage3) [rectangle, fill=black!10!white, below of=stage2,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny Conv$3\_4$};
\node (stage4) [rectangle, fill=black!10!white, below of=stage3,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny Conv$4\_6$};
\node (stage5) [rectangle, fill=black!10!white, below of=stage4,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny Conv$5\_3$};
\node (fib1) [rectangle, fill=skyblue!100!white, below of=stage5,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fmb1) [rectangle, fill=gray!60!white, left of= fib1, below of=stage5, xshift=1ex,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny $+$};
\node (fsb1) [rectangle, fill=violet!40!white, below of=fmb1,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib2) [rectangle, fill=skyblue!100!white, below of=fsb1,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fmb2) [rectangle, fill=gray!60!white, left of=fib2, xshift=0.9ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny $+$};
\node (fsb2) [rectangle, fill=violet!40!white, below of=fmb2,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib3) [rectangle, fill=skyblue!100!white, below of=fsb2,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fmb3) [rectangle, fill=gray!60!white, left of=fib3, xshift=0.9ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny $+$};
\node (fsb3) [rectangle, fill=violet!40!white, below of=fmb3,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib4) [rectangle, fill=skyblue!100!white, right of=fib3, xshift=6.5ex,yshift=6.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fsb4) [rectangle, fill=violet!40!white, right of=fsb3, xshift=10.6ex ,yshift=6.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib5) [rectangle, fill=skyblue!100!white, right of=fib4, xshift=-1.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fsb5) [rectangle, fill=violet!40!white, right of=fsb4, xshift=-1.3ex,text height=\txthgt ex ,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib6) [rectangle, fill=skyblue!100!white, right of=fib5, xshift=-1.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fsb6) [rectangle, fill=violet!40!white, right of=fsb5, xshift=-1.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (fib7) [rectangle, fill=skyblue!100!white, right of=fib6, xshift=-1.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FIB};
\node (fsb7) [rectangle, fill=violet!40!white, right of=fsb6, xshift=-1.3ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\node (flb) [rectangle, fill=purple!50!white, below of=fib1, xshift = 1ex, yshift=-12.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny CEB};
\node (fsb8) [rectangle, fill=violet!40!white, below of=flb,yshift=1.5ex,text height=\txthgt ex,rounded corners=\radiicornr mm]{\tiny FSB};
\draw [arrow] (stage1) -- (stage2);
\draw [arrow] (stage2) -- (stage3);
\draw [arrow] (stage3) -- (stage4);
\draw [arrow] (stage4) -- (stage5);
\draw [arrow] (stage5) -- (fib1);
\draw [arrow] (stage4) -| (fmb1);
\draw [arrow] (fib1) -- (fmb1);
\draw [arrow] (fmb1) -- (fsb1);
\draw [arrow] (fsb1) -- (fib2);
\draw [arrow] (stage3) -| (fmb2);
\draw [arrow] (fib2) -- (fmb2);
\draw [arrow] (fmb2) -- (fsb2);
\draw [arrow] (fsb2) -- (fib3);
\draw [arrow] (stage2) -| (fmb3);
\draw [arrow] (fib3) -- (fmb3);
\draw [arrow] (fmb3) -- (fsb3);
\draw [arrow] (stage5) -| (fib4);
\draw [arrow] (fib4) -- (fsb4);
\draw [arrow] (stage4) -| (fib5);
\draw [arrow] (fib5) -- (fsb5);
\draw [arrow] (stage3) -| (fib6);
\draw [arrow] (fib6) -- (fsb6);
\draw [arrow] (stage2) -| (fib7);
\draw [arrow] (fib7) -- (fsb7);
\draw [arrow] (fsb3) |- (flb);
\draw [arrow] (fsb4) -| (flb);
\draw [arrow] (fsb5) |- ($(flb.east) - (0, -1ex)$);
\draw [arrow] (fsb6) |- (flb);
\draw [arrow] (fsb7) |- ($(flb.east) - (0, 1ex)$);
\draw [arrow] (flb) -- (fsb8);
\end{tikzpicture}
};
\end{tikzpicture}
\label{fig_customized_neural_links}
}
\hspace*{0.5ex}
\subfloat[]
{
\FPeval{\radiicornr}{0.15}
\begin{tikzpicture}
\FPeval{\width}{16}
\FPeval{\height}{12}
\FPeval{\imwidth}{2}
\FPeval{\imheight}{2}
\FPeval{\rotimwidth}{15.4}
\FPeval{\rotimheight}{0.75*\rotimwidth}
\node (rot_plat) [rectangle, minimum width=\width ex,minimum height=\height ex, xshift=0 ex]{\includegraphics[width=\rotimwidth ex,height=\rotimheight ex]{images/rot_plat_2.jpg}};
\node (rect_nodplat) at(rot_plat) [draw=white!70!black,minimum width = 17ex, minimum height=11ex, rounded corners=0.6mm,line width=0.1pt, yshift=-1ex]{};
\node (rot_plat_text) [rectangle, below of=rot_plat, xshift=-0.5ex,yshift= 0.5ex]{\tiny Rotating platform};
\node (online_images) [block, below of=rot_plat, fill=red!60!white,minimum width=\width ex,minimum height=0.1*\height ex, xshift=0 ex,yshift=-5ex,rounded corners=\radiicornr mm]{\tiny Image Aquisition};
\foreach \i in {1,...,3}
{
\FPeval{\del}{\i*\imwidth}
\FPeval{\imname}{clip(\i+35)}
\node (online_image_\i) [rectangle,below of=online_images, xshift=-\del ex -0.5, yshift=3.5 ex]{\includegraphics[width=\imwidth ex,height=\imheight ex]{images/\imname.png}};
}
\node (auto_anno) [block, fill=red!60!white, below of=online_images,minimum width=\width ex,minimum height=0.1*\height ex, yshift=0.5 ex,rounded corners=\radiicornr mm]{\tiny Tutor};
\foreach \i in {1,...,3}
{
\FPeval{\del}{\i*\imwidth}
\FPeval{\nodnum}{clip(\i+4)}
\FPeval{\imname}{clip(\i+35)}
\node (online_image_\nodnum) [rectangle,right of=auto_anno, xshift=-\del ex -6.5 ex, yshift=-2.8 ex]{\includegraphics[width=\imwidth ex,height=\imheight ex]{images/\imname.png}};
}
\foreach \i in {1,...,3}
{
\FPeval{\del}{\i*\imwidth}
\FPeval{\imname}{clip(\i+35)}
\node (online_seg_\i) [rectangle,right of=auto_anno, xshift=-\del ex + 1.9 ex, yshift= -2.8 ex]{\includegraphics[width=\imwidth ex,height=\imheight ex]{images/coloured_mask_\imname.png}};
}
\node (synthetic_clutter) [block, fill=red!60!white,below of=auto_anno, minimum width=\width ex,minimum height=0.1*\height ex, yshift=0.5 ex,rounded corners=\radiicornr mm]{\tiny Synthetic cluttering};
\foreach \i in {1,...,3}
{
\FPeval{\del}{\i*\imwidth}
\node (clutter_image_\i) [rectangle,right of=synthetic_clutter, xshift=-\del ex - 6.5 ex, yshift=-2.9 ex]{\includegraphics[width=\imwidth ex,height=\imheight ex]{images/clutter_\i.jpg}};
}
\foreach \i in {1,...,3}
{
\FPeval{\del}{\i*\imwidth}
\node (clutter_seg_\i) [rectangle,right of=synthetic_clutter, xshift=-\del ex + 1.9 ex, yshift= -2.9 ex]{\includegraphics[width=\imwidth ex,height=\imheight ex]{images/clutter_coloured_mask_\i.png}};
}
\node (training) [block, fill=red!60!white, below of=synthetic_clutter,minimum width=\width ex,minimum height=0.1*\height ex, yshift=0.5 ex,rounded corners=\radiicornr mm]{\tiny Child};
\node (human_input) [rectangle,above of=auto_anno,minimum height=0.1*\height ex, yshift=-3ex, xshift=4ex]{\tiny class label};
\node () [below of=training, yshift=1ex]{};
\draw [arrow] (online_images) -- (auto_anno);
\draw [arrow] (auto_anno) -- (synthetic_clutter);
\draw [arrow] (synthetic_clutter) -- (training);
\draw [arrow] (human_input.south) -- ($(auto_anno.north)+(4 ex,0)$);
\draw [<->] (rot_plat.south) -- (online_images.north);
\end{tikzpicture}
\label{fig_state_transition}
}
\caption{(a) Child with the customized CNN head, (b) state transition in the proposed online learning system}
\end{figure}
\subsection{\textbf{Dataset}}
We collect $12000$ single instance images of $40$ items provided by Amazon a priori, referred to as the \texttt{known-set}. Then we manually generate their mask and box annotations. During ARC'$17$, each task had a \texttt{competition-set} comprising known and novel items in equal proportion; the \texttt{competition-set} contained $20$ items for the stow-task and $32$ items each for the pick-task and the final stow-pick task. The images of the novel items were collected within the available $45$ minutes. As mentioned previously, $120$ images are obtained over two revolutions of the platform, and each image was additionally rotated by $-10^{\circ}$ and $10^{\circ}$ in order to approximately match the $300$ images already collected per known item.
\subsection{\textbf{Semi Supervised Labeling}}
We split the set of $12000$ images into two sets of $8000$ and $4000$ images, i.e., $200$ train and $100$ test images per item. We train the network architecture (Fig. \ref{fig_customized_neural_links}) for class-agnostic mask annotation and the Single Shot Multi-Box Detector (SSD) \cite{ssd} for box annotation. For both of them, we use comprehensive data-augmentation, i.e., random hue, saturation, brightness, and contrast, each with a selection probability of $0.5$, random rotation between $-10^{\circ}$ and $10^{\circ}$, Gaussian blur with $\sigma=3$, and a crop size of $512\times 512$. The training hyper-parameters are set to $learning~rate (\eta) = 0.001$, $learning~rate~policy = step$, $gamma = 0.1$, $momentum = 0.90$ and $weight~decay = 0.0001$.
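For illustration, such an augmentation pipeline could be assembled as in the following torchvision-style sketch for the image stream only; the jitter magnitudes and blur kernel size are placeholders, this is not the exact framework or code used in our training, and geometric transforms must of course be applied jointly to the image and its annotation.
\begin{verbatim}
import torchvision.transforms as T

augment = T.Compose([
    # photometric jitters, each applied with probability 0.5
    T.RandomApply([T.ColorJitter(hue=0.05)], p=0.5),
    T.RandomApply([T.ColorJitter(saturation=0.2)], p=0.5),
    T.RandomApply([T.ColorJitter(brightness=0.2)], p=0.5),
    T.RandomApply([T.ColorJitter(contrast=0.2)], p=0.5),
    # random rotation between -10 and +10 degrees
    T.RandomRotation(degrees=10),
    # Gaussian blur with sigma = 3 (kernel size is a placeholder)
    T.GaussianBlur(kernel_size=7, sigma=3.0),
    # random 512 x 512 crop
    T.RandomCrop(512),
])
\end{verbatim}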
\par
In general, the mAP score is preferred to assess bounding box prediction accuracy. However, in our case, we are more interested in assessing the overlap between the predicted and ground truth boxes due to the presence of only one class (item). Thus, instead of mAP, we report the mIoU \cite{voc} score for bounding box prediction, and the same is also reported for semantic segmentation. The impact of comprehensive data-augmentation on the tutor can be seen clearly in Table \ref{tab_autoanno}.
\subsection{\textbf{Clutter Synthesis and Quick Child Learning}}
We capture $200$ real cluttered images and manually annotate them to evaluate the performance of quick learning for semantic segmentation. Our focus in this experiment remains on quick learning and therefore we do not include analysis on large datasets (e.g. \cite{cityscapes}). We evaluate the child network against the state-of-the-art PSPNet \cite{pspnet} by training both of them for $35$ minutes with a batch size of $5$ on the given computing platform. We freeze the Batch-Normalization \cite{bn} parameters of the backbone, while they are learnt for all the Batch-Normalization layers of the CNN head. We set the training hyper-parameters $learning~rate~policy = step$, $gamma = 0.1$, $momentum = 0.90$ and $weight~decay = 0.0001$, and use a multinomial softmax loss in which multiple classes compete against each other. The ResNet-$50$ backbone is pretrained for class-agnostic segmentation (tutor). To analyze the effect of synthetic clutter and learning rate, we adopt two learning rates $(\eta) \in \{ 0.01, 0.001\}$ and, for each of them, the child and PSPNet are trained with varying amounts of clutter.
\par
Table \ref{tab_segmentation_perfm} shows the mIoU of both architectures on the mentioned real cluttered images. It can be seen that the higher learning rate has adverse effects on PSPNet, whereas the child network performs significantly better. With synthetic cluttering, both networks perform well; however, the child network outperforms PSPNet by a visible margin. A grid size of $3\times3$ does not add much to the accuracy, because all the items in the cluttered images are almost isolated. As the grid size is increased, actual clutter and occlusion situations appear in the cluttered images, resulting in improved mIoU scores, which can also be verified qualitatively from Fig. \ref{fig_seg_results}. All the segmentation masks are thresholded at $90\%$ confidence in order to demonstrate how quickly the network can achieve high confidence. Row-$2$ column-(i) marks the presence of small misclassified patches in the output of PSPNet, whereas these are rarely present in the case of the child.
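The grid-based synthetic cluttering referred to above can be illustrated by the following sketch; it is an illustrative reconstruction with placeholder sizes and names, not our exact generation code, and the items are assumed to be smaller than the canvas.
\begin{verbatim}
import random
import numpy as np

def synthesize_clutter(items, grid=(4, 4), canvas=(512, 512)):
    # items: list of (rgb_crop, binary_mask, class_id) for masked single items
    H, W = canvas
    image = np.zeros((H, W, 3), dtype=np.uint8)
    labels = np.zeros((H, W), dtype=np.int32)
    cell_h, cell_w = H // grid[0], W // grid[1]
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            crop, mask, cid = random.choice(items)
            h, w = mask.shape
            # jitter the cell position so neighbouring items overlap/occlude
            y = gy * cell_h + random.randint(-cell_h // 2, cell_h // 2)
            x = gx * cell_w + random.randint(-cell_w // 2, cell_w // 2)
            y, x = int(np.clip(y, 0, H - h)), int(np.clip(x, 0, W - w))
            region = mask.astype(bool)
            image[y:y + h, x:x + w][region] = crop[region]
            labels[y:y + h, x:x + w][region] = cid
    return image, labels
\end{verbatim}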
\par
The quantitative evaluation of instance detection remains the same as that of Table \ref{tab_autoanno} because the tutor for box-annotation is employed for instance detection. Fig. \ref{fig_comptetition_runs} shows the qualitative results for instance detection and segmentation. The images were acquired during our actual stow and pick task runs in ARC'$17$.
\begin{figure}[t]
\vspace*{0.3ex}
\centering
\FPeval{\width}{5ex}
\FPeval{\height}{5ex}
\FPeval{\shift}{0.4ex}
\begin{tikzpicture}
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=0 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=1 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_gnd_truth_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=2 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_01_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=3 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_FPN_01_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=4 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_001_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=5 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_FPN_001_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=6 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_cluttered_01_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=7 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_FPN_cluttered_01_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=8 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_cluttered_001_\i.png}};
\foreach \i in {1,...,4}
\node (node_\i) [rectangle, xshift=9 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/seg_results_exp/foscam_clutter_mask_FPN_cluttered_001_\i.png}};
\FPeval{\subfloatcaptiony}{0-4.5}
\node (node_1) [rectangle, xshift=0 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center ]{\footnotesize (a)};
\node (node_2) [rectangle, xshift=1 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex,anchor=north,align=center]{\footnotesize (b)};
\node (node_3) [rectangle, xshift=2 * (\width+\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (c)};
\node (node_4) [rectangle, xshift=3 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (d)};
\node (node_5) [rectangle, xshift=4 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (e)};
\node (node_6) [rectangle, xshift=5 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (f)};
\node (node_7) [rectangle, xshift=6 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (g)};
\node (node_8) [rectangle, xshift=7 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (h)};
\node (node_9) [rectangle, xshift=8 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (i)};
\node (node_10) [rectangle, xshift=9 * (\width +\shift),yshift=\subfloatcaptiony * (\height+\shift), text width= 1*\width-1ex, anchor=north,align=center]{\footnotesize (j)};
\FPeval{\ycaptionup}{5.5}
\FPeval{\ycaptiondown}{4.3}
\node (node_3_up) [rectangle, below of =node_3,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny PSPNet};
\node (node_4_up) [rectangle,below of =node_4,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny Child};
\node (node_5_up) [rectangle,below of =node_5,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny PSPNet};
\node (node_6_up) [rectangle,below of =node_6,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny Child};
\node (node_7_up) [rectangle,below of =node_7,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny PSPNet};
\node (node_8_up) [rectangle,below of =node_8,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny Child};
\node (node_9_up) [rectangle,below of =node_9, yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny PSPNet};
\node (node_10_up) [rectangle,below of =node_10,yshift=\ycaptionup ex, text width= 1*\width-1ex, anchor=north,align=center]{\tiny Child};
\node (node_3_up_eta) [rectangle, below of =node_3,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.01$};
\node (node_4_up_eta) [rectangle, below of =node_4,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.01$};
\node (node_5_up_eta) [rectangle, below of =node_5,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.001$};
\node (node_6_up_eta) [rectangle, below of =node_6,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.001$};
\node (node_7_up_eta) [rectangle, below of =node_7,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.01$};
\node (node_8_up_eta) [rectangle, below of =node_8,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.01$};
\node (node_9_up_eta) [rectangle, below of =node_9,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.001$};
\node (node_10_up_eta) [rectangle, below of =node_10,yshift=\ycaptiondown ex, anchor=north,align=center]{\tiny $\eta$= $.001$};
\node (input) [rectangle,below of =node_1,yshift=\ycaptionup ex,anchor=north,align=center]{\tiny Input};
\node (image) [rectangle,below of =node_1,yshift=\ycaptiondown ex,anchor=north,align=center]{\tiny image};
\node (ground) [rectangle,below of =node_2,yshift=\ycaptionup ex,anchor=north,align=center]{\tiny Ground};
\node (truth) [rectangle,below of =node_2,yshift=\ycaptiondown ex,anchor=north,align=center]{\tiny truth};
\end{tikzpicture}
\caption{Qualitative segmentation results on real cluttered images: (c)-(f) training without synthetic clutter, and (g)-(j) training with synthetic clutter.}
\label{fig_seg_results}
\vspace*{-0.3ex}
\end{figure}
\subsection{\textbf{Amazon Robotics Challenge, 2017}}
\subsubsection{\textbf{Suppressed Misclassification and Open Workspace}}
The on-spot training was done for both the novel items and the known items, restricted to the competition set. This strategy allowed the child to penalize the loss function approximately uniformly across items. On the other hand, the teams who used networks pretrained on known items faced the issue of small misclassified patches. This happened because their networks were biased towards the known items, which led to novel items being confused with known ones. In our case, such patches were significantly suppressed and were observed only once during the pick task. Moreover, our system was also robust to ambient lighting, which allowed us to keep the workspace open and unconstrained (Fig. \ref{fig_full_system}) in contrast to the other teams \cite{nimbro2017}, \cite{acrvvision2017}, \cite{mit2017}.
\subsubsection{\textbf{Inspection Free Self Learning System}}
The improved system starts learning as soon as an item is placed on the platform. It continuously monitors the data acquisition process to track the number of items processed, and generates synthetic clutter only for the items whose images and ground truths are already available. In contrast, our vision system in ARC'$17$ couldn't take advantage of instance detection and online learning. The clutter generation process was, however, performed at runtime, i.e., the child never encounters a cluttered image twice. Typically, during a training run the child could learn from $\sim67000$ images in $35$ minutes, which is near real time. In addition, all the components of our system were inspection free, in contrast to Team NimbRo and ACRV, who manually monitored their data acquisition process and performed corrections in case of erroneous ground truth segmentation masks.
\subsubsection{\textbf{Statistical Analysis with other teams}}
Our system generated data for itself in just $45$ minutes and exhibited no mislabeling of any item. Due to accurate visual perception, we achieved the highest grasp success rates, even higher than the winners. Table \ref{tab_ARC} shows the item grasp success rate for the top-$5$ teams in ARC'$17$. Our team is highlighted in \textcolor{blue}{blue}.
\begin{table}[h]
\centering
\caption{\scriptsize Grasping performance of the Top-$5$ teams in ARC'$17$}
\begin{tabular}{c|c|c|c}
\hline
\multirow{2}{*}{Team} & \multicolumn{3}{c}{Grasp Success Rate} \\ \cline{2-4}
& Stow task & Pick task & Final task \\ \hline
ACRV & $58.00 ~\%$ & $66.00 ~\%$ & $ 62.50 ~\% $ \\
NimbRo Picking & $11.11~\%$ &$ 68.40 ~\%$&$ 56.80~\%$ \\
Nanyang & $38.80 ~\%$ & $\mathbf{100.0} ~\%$ &$ 53.80~\% $ \\
\textcolor{blue}{IITK-TCS} & \textcolor{blue}{$\mathbf{78.26 ~\%}$} & \textcolor{blue}{$\mathbf{100.0}~\%$} &$ \textcolor{blue}{\mathbf{79.20}~\%} $ \\
MIT-Princeton & $59.37~ \%$ & $39.00 ~\%$ &$ 64.70~\% $ \\
\hline
\end{tabular}
\label{tab_ARC}
\end{table}
\begin{figure}[t]
\vspace*{0.3ex}
\centering
\FPeval{\width}{13ex}
\FPeval{\height}{8ex}
\FPeval{\shift}{0.4ex}
\begin{tikzpicture}
\foreach \i in {0,1}
\node (node_\i) [rectangle, xshift=1 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/official_runs/img_\i.jpg}};
\foreach \i in {0,1}
\node (segnode_\i) [rectangle, xshift=2 * (\width +\shift),yshift=-\i * (\height+\shift)]{
\includegraphics[width=\width,height=\height]{images/official_runs/seg_\i.jpg}};
\FPeval{\heightpick}{16.4ex}
\FPeval{\yshiftpick}{0.4ex}
\foreach \i in {7,...,7}
\node (node_\i) [rectangle, xshift= 3* (\width +\shift),yshift=-(\i -7) * (\heightpick+\yshiftpick)-4.25ex]{
\includegraphics[width=\width,height=\heightpick]{images/official_runs/img_\i.jpg}};
\foreach \i in {7,...,7}
\node (segnode_\i) [rectangle, xshift=4 * (\width +\shift),yshift=-(\i-7) * (\heightpick+\yshiftpick)-4.25ex]{
\includegraphics[width=\width,height=\heightpick]{images/official_runs/seg_\i.jpg}};
\FPeval{\subfloatcaptiony}{0-5.5}
\end{tikzpicture}
\caption{Instance detection and segmentation results on the images collected during our ARC'$17$ competition runs}
\label{fig_comptetition_runs}
\vspace*{-0.3ex}
\end{figure}
\subsubsection{\textbf{Image Acquisition}}
The image acquisition process is facilitated by a rotating platform equipped with $5\times$ FOSCAM FI9903P HD RGB LAN cameras mounted at different viewing angles. We divide one revolution ($\sim10~sec$) into $12$ parts, which results in $12 \times 5 = 60$ images per revolution. For each object, we repeat the process for two revolutions in order to capture all views of an item. With these settings, images can be collected at a rate of $\sim360$ images per minute (Table \ref{tab_data_aqs}). As an alternative to the platform, our system can also be taught by showing an item to the camera by hand.
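For concreteness, a minimal sketch of this acquisition schedule is given below; the \texttt{rotate\_to} and \texttt{capture} calls are hypothetical placeholders for the platform and camera drivers rather than the actual API used in our system.
\begin{verbatim}
# Minimal sketch of the acquisition schedule described above.
# `rotate_to` and `capture` are hypothetical driver callbacks, not
# the actual platform/camera API used in our system.
import time

NUM_CAMERAS = 5         # FOSCAM FI9903P HD RGB LAN cameras
STOPS_PER_REV = 12      # one revolution divided into 12 parts
REVOLUTIONS = 2         # two revolutions per item
SECONDS_PER_REV = 10.0  # ~10 s per revolution

def acquire_item(rotate_to, capture):
    """Collect 12 x 5 = 60 images per revolution for a single item."""
    images = []
    for rev in range(REVOLUTIONS):
        for stop in range(STOPS_PER_REV):
            rotate_to((rev * STOPS_PER_REV + stop) * 360.0 / STOPS_PER_REV)
            for cam in range(NUM_CAMERAS):
                images.append(capture(cam))
            time.sleep(SECONDS_PER_REV / STOPS_PER_REV)
    return images  # 120 images in ~20 s, i.e. ~360 images per minute
\end{verbatim}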
\par
\subsubsection{\textbf{Ground Truth Generation}}
The acquired images are sent to the tutor to obtain the corresponding class-agnostic mask and box annotations. These annotations are combined with priorities $\mathbf{p_{m} = p_{b}}$ (Fig. \ref{fig_priorities_auto_anno}) to obtain a final mask, which is then filled with a numeric id (class label) provided either by a human or by computerized file storage. Owing to its real-time speed, the ground truth is generated as soon as the images are acquired; accomplishing the same task manually would require approximately $1$-$2$ hours of human effort. Our system is capable of generating fully annotated images at $\sim10$-$15$ FPS (Table \ref{tab_data_aqs}). This speed can be increased further by using high-speed cameras to avoid the motion blur caused by the rotating platform.
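As an illustration of the fusion step, the following sketch combines the two tutor outputs with equal priorities; the $0.5$ weights and the $0.9$ threshold are assumptions made purely for exposition and are not necessarily the values used by our tutor.
\begin{verbatim}
# Sketch of fusing the class-agnostic mask and box annotations with
# equal priorities p_m = p_b.  The 0.5 weights and the 0.9 threshold
# are illustrative assumptions, not the exact values of our tutor.
import numpy as np

def fuse_annotations(mask_prob, box_mask, class_id,
                     p_m=0.5, p_b=0.5, threshold=0.9):
    """mask_prob: HxW floats in [0,1]; box_mask: HxW binary box region."""
    combined = p_m * mask_prob + p_b * box_mask.astype(np.float32)
    ground_truth = np.zeros(combined.shape, dtype=np.int32)
    ground_truth[combined >= threshold] = class_id  # fill with numeric id
    return ground_truth
\end{verbatim}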
\par
\subsubsection{\textbf{Clutter Synthesis}}
The images and corresponding ground truths obtained from the previous step are used to synthesize cluttered scenes. A total of $24$ threads are responsible for this task, and all of them remain live for as long as the training runs. The clutter is generated at a rate of $\sim5\times24=120$ FPS, and in this process most of the time is spent on image decoding. This cost can be reduced by prefetching all available images into RAM and continuously discarding the cluttered images that have already been used in the training process, which increases the scene synthesis throughput (Table \ref{tab_clutter}).
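A simplified sketch of the threaded synthesis loop is shown below; the pasting rule and the queue-based hand-off are illustrative simplifications of our occlusion-aware synthesis, not its exact implementation.
\begin{verbatim}
# Sketch of the 24-thread clutter synthesis loop.  Single-object
# images, masks and ids are prefetched into RAM; the paste rule below
# is a simplification of the occlusion-aware synthesis.
import queue, random, threading
import numpy as np

NUM_THREADS = 24

def synthesize(cutouts, canvas_shape=(480, 640)):
    """Paste a few (image, mask, id) cutouts onto an empty canvas."""
    canvas = np.zeros(canvas_shape + (3,), dtype=np.uint8)
    labels = np.zeros(canvas_shape, dtype=np.int32)
    for img, mask, obj_id in random.sample(cutouts, min(5, len(cutouts))):
        canvas[mask] = img[mask]   # later objects occlude earlier ones
        labels[mask] = obj_id
    return canvas, labels

def start_pool(cutouts, maxsize=256):
    out_queue, stop = queue.Queue(maxsize=maxsize), threading.Event()
    def worker():
        while not stop.is_set():
            out_queue.put(synthesize(cutouts))
    for _ in range(NUM_THREADS):
        threading.Thread(target=worker, daemon=True).start()
    return out_queue, stop
\end{verbatim}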
\par
\subsubsection{\textbf{Child Learning}}
The child CNN is replicated across all the available GPUs. The child is fed with both single-class images (obtained from the platform) and multi-class images (cluttered scenes). The selection between single-class and multi-class images is done at random with a probability ratio of $1:3$. The timing performance of the child learning is provided in Table \ref{tab_data_aqs}. On the specified server, it can learn at $32$ FPS, i.e., $4\times8$ (batch size $\times$ GPUs).
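The selection step can be sketched as follows, where \texttt{clutter\_queue} is assumed to be a queue filled by the clutter synthesis threads; the sketch only illustrates the $1:3$ sampling rule.
\begin{verbatim}
# Sketch of the random 1:3 selection between single-class (platform)
# images and multi-class (synthetic clutter) images fed to the child.
import random

def next_training_sample(single_class_pool, clutter_queue):
    if random.random() < 1.0 / 4.0:            # ratio 1 : 3
        return random.choice(single_class_pool)
    return clutter_queue.get()
\end{verbatim}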
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{related_work}
\section{Learning Framework}
\label{algorithm}
\input{algorithm}
\subsection{\textbf{Semi Supervised Labeling}}
\input{automatic_annotation}
\subsection{\textbf{Occlusion Aware Scene Synthesis}}
\input{synthetic_clutter_generation}
\subsection{\textbf{Customized CNN Head}}
\label{subsec_customized_nw}
\input{customized_deep_network}
\section{Online Learning}
\label{implementation}
\input{implementation}
\section{Experiments}
\label{experiments}
\input{experiments}
\section{Conclusion}
\input{discussion}
\nocite{*}
\bibliographystyle{ieeetr}
\section{Uniform functions and differentiation theorems}\label{Topological STDs}
In this section, we consider questions of the following forms: Given an appropriate system $(X, \mathcal{B}, \mu, T)$, are there $f \in L^\infty(X, \mu)$ for which \linebreak
$\left( \frac{1}{\mu(F_k)} \int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu \right)_{k = 1}^\infty$ converges for \emph{all} sequences $(F_k)_{k = 1}^\infty$ of measurable sets of positive measure? On the other hand, are there restrictions we can place on $(X, \mathcal{B}, \mu, T)$ to ensure that $\left( \frac{1}{\mu(F_k)} \int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu \right)_{k = 1}^\infty$ converges for \emph{all} such sequences $(F_k)_{k = 1}^\infty$ \emph{and} all $f \in C(X)$? The answer to the former question will be centered around the notion of a uniform function (defined below), and the answer to the latter question will be centered around unique ergodicity.
Let $X$ be a compact metrizable space with Borel $\sigma$-algebra $\mathcal{B}$, and let $T : X \to X$ be a homeomorphism. Then $(X, T)$ is uniquely ergodic if and only if the sequence $ \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right)_{k = 1}^\infty$ converges in $C(X)$ to a constant function for all $f \in C(X)$ \cite[Theorem 10.6]{EisnerOperators}, and when this happens, the sequence converges to $\int f \mathrm{d} \mu$, where $\mu$ is the unique ergodic $T$-invariant Borel probability measure. Thus if $(F_k)_{k = 1}^\infty$ is \emph{any} sequence of measurable sets of positive measure, then $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{F_k} \left( T^i f \right) \to \int f \mathrm{d} \mu$ for all $f \in C(X)$, since $\alpha_{F_k}$ (see Notation \ref{Defining alpha} below) is a bounded functional on $L^{\infty}(X, \mu)$. Fix $\epsilon > 0$, and choose $K \in \mathbb{N}$ such that
$$k \geq K \Rightarrow \left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \leq \epsilon.$$
Then if $k \geq K$, we have
\begin{align*}
\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{F_k} \left( T^i f \right) \right| & = \left| \alpha_{F_k} \left( \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right| \\
& \leq \left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
& \leq \epsilon .
\end{align*}
More generally, if we have any dynamical system $(Y, \mathcal{A}, \nu, S)$, we can call a function $g \in L^\infty(Y, \nu)$ \emph{uniform} if $\frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \to \int g \mathrm{d} \nu$ in $L^\infty$. Let $\mathscr{U}(Y, \mathcal{A}, \nu, S) \subseteq L^\infty (Y, \nu)$ denote the space of all uniform functions on $(Y, \mathcal{A}, \nu, S)$. If $g$ is uniform, then for any sequence $(G_k)_{k = 1}^\infty$ of measurable sets of positive measure, we have
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(G_k)} \int_{G_k} S^i g \mathrm{d} \nu \to \int g \mathrm{d} \nu ,$$
meaning that essentially any differentiation problem of the type that interests us will behave exceptionally well for that $g$.
Whenever $(X, T)$ is a uniquely ergodic system, we have \linebreak $C(X) \subseteq \mathscr{U}(X, \mathcal{B}, \mu, T)$, since
$$\left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \leq \left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_{C(X)} .$$
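The following standard example, included only as an illustration, exhibits the simplest instance of this situation.
\begin{Ex}
Let $X = \mathbb{R} / \mathbb{Z}$ with Lebesgue measure $\mu$, and let $T x = x + \theta \pmod 1$ for an irrational $\theta$, so that $(X, T)$ is uniquely ergodic. For the character $f(x) = e^{2 \pi \mathrm{i} x}$, summing a geometric series gives
$$\left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{j = 0}^{k - 1} T^j f \right\|_\infty = \frac{1}{k} \left| \frac{1 - e^{2 \pi \mathrm{i} k \theta}}{1 - e^{2 \pi \mathrm{i} \theta}} \right| \leq \frac{2}{k \left| 1 - e^{2 \pi \mathrm{i} \theta} \right|} \stackrel{k \to \infty}{\to} 0 ,$$
since $\int f \mathrm{d} \mu = 0$. Thus $f$ is uniform, and the same computation applies to every character $x \mapsto e^{2 \pi \mathrm{i} m x}$ with $m \in \mathbb{Z} \setminus \{0\}$.
\end{Ex}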
We collect here a few results about some more general differentiation problems. We first demonstrate a general characterization theorem for uniform functions.
\begin{Thm}\label{Uniformity and STDs}
Let $(Y, \mathcal{A}, \nu, S)$ be an ergodic dynamical system, and let $g \in L^\infty (Y, \nu)$. Then $g$ is uniform if and only if for all sequences $(G_k)_{k = 1}^\infty$ in $\mathcal{A}$ of measurable sets of positive measure,
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(G_k)} \int_{G_k} S^i g \mathrm{d} \nu \to \int g \mathrm{d} \nu .$$
\end{Thm}
\begin{proof}
$(\Rightarrow)$ If $g$ is uniform, then
\begin{align*}
\left| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(G_k)} \int_{G_k} S^i g \mathrm{d} \nu \right| & = \left| \frac{1}{\nu(G_k)} \int_{G_k} \left( \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right) \mathrm{d} \nu \right| \\
& \leq \frac{1}{\nu(G_k)} \int_{G_k} \left\| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right\|_\infty \\
& = \left\| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right\|_\infty \\
& \to 0 .
\end{align*}
$(\Leftarrow)$ Suppose that $g$ is not uniform, and set $h_k = \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g $. Then $\limsup_{k \to \infty} \left\| h_k \right\|_\infty > 0$. Breaking $h_k$ into its real part $h_k^{\textrm{Re}}$ and imaginary part $h_k^{\textrm{Im}}$ tells us that either $\limsup_{k \to \infty} \left\| h_k^{\textrm{Re}} \right\|_\infty > 0$, or \linebreak$\limsup_{k \to \infty} \left\| h_k^{\textrm{Im}} \right\|_\infty > 0$. Suppose without loss of generality that \linebreak$\limsup_{k \to \infty} \left\| h_k^{\textrm{Re}} \right\|_\infty > 0$, and set $\epsilon_0 = \limsup_{k \to \infty} \left\| h_k^{\textrm{Re}} \right\|_\infty$. Then $\left\| h_{k}^{\textrm{Re}} \right\|_\infty > \frac{\epsilon_0}{2}$ for infinitely many $k \in \mathbb{N}$, and for each such $k$ at least one of the inequalities
$$\nu \left( \left\{ y \in Y : h_{k}^{\textrm{Re}}(y) \geq \frac{\epsilon_0}{2} \right\} \right) > 0, \; \nu \left( \left\{ y \in Y : h_{k}^{\textrm{Re}}(y) \leq - \frac{\epsilon_0}{2} \right\} \right) > 0$$
attains. In particular, at least one of the two inequalities attains for infinitely many $k \in \mathbb{N}$; assume without loss of generality that $I = \left\{ k \in \mathbb{N} : \nu \left( \left\{ y \in Y : h_{k}^{\textrm{Re}}(y) \geq \frac{\epsilon_0}{2} \right\} \right) > 0 \right\}$ is an infinite set.
Construct a sequence $\left( G_k \right)_{k = 1}^\infty$ by letting $G_k = \left\{ y \in Y : h_{k}^{\textrm{Re}}(y) \geq \frac{\epsilon_0}{2} \right\}$ for all $k \in I$, and $G_k = Y$ for $k \in \mathbb{N} \setminus I$. Then for every $k \in I$, we have
\begin{align*}
\left| \frac{1}{\nu(G_k)} \int_{G_k} h_k \mathrm{d} \nu \right| & \geq \left| \frac{1}{\nu(G_k)} \int_{G_k} h_k^{\textrm{Re}} \mathrm{d} \nu \right| \\
& = \frac{1}{\nu(G_k)} \int_{G_k} h_k^{\textrm{Re}} \mathrm{d} \nu \\
& \geq \frac{1}{\nu(G_k)} \int_{G_k} \frac{\epsilon_0}{2} \mathrm{d} \nu \\
& = \frac{\epsilon_0}{2} .
\end{align*}
Therefore, there exist infinitely many $k \in \mathbb{N}$ such that
$$\left| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(G_k)} \int_{G_k} S^i g \mathrm{d} \nu \right| = \left| \frac{1}{\nu(G_k)} \int_{G_k} h_k \mathrm{d} \nu \right| \geq \frac{\epsilon_0}{2},$$
meaning that $\left| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(G_k)} \int_{G_k} S^i g \mathrm{d} \nu \right| \not \to 0$.
\end{proof}
Because we will so frequently be considering averages of functions over sets of positive measures, it will benefit us to introduce the following notation.
\begin{Not}\label{Defining alpha}
Let $(X, \mathcal{B}, \mu)$ be a probability space. When $F \in \mathcal{B}$ is a set of positive measure $\mu(F) > 0$, we denote by $\alpha_F$ the state on $L^\infty (X, \mu)$ given by
\begin{align*}
\alpha_F (f) & : = \frac{1}{\mu(F)} \int_F f \mathrm{d} \mu .
\end{align*}
\end{Not}
Theorem \ref{Uniformity and STDs} hints at why we consider spatial-temporal differentiations of $L^\infty$ functions instead of, for example, differentiations of $L^p$ functions for $p \in [1, \infty)$. One might plausibly propose that if we have a uniquely ergodic dynamical system $(X, \mathcal{B}, \mu, T)$, then we can observe that for all $f \in C(X)$, all spatial-temporal differentiations converge to $\int f \mathrm{d} \mu$. We could then try to extend this convergence to all of $L^1(X, \mu)$, since $C(X)$ is $L^1$-dense in $L^1(X, \mu)$. However, we know that a uniquely ergodic dynamical system can still have non-uniform $L^\infty$ functions (in fact, any ergodic dynamical system over a non-atomic standard probability space will have them, as seen in Proposition \ref{Existence of non-uniform functions}), so this cannot be right. The catch is that for measurable $F$ of nonzero measure, the functional $\alpha_F : f \mapsto \frac{1}{\mu(F)} \int_F f \mathrm{d} \mu$ is of norm $1$ with respect to $L^\infty$, but the same can't be said relative to $L^p$ for $p \in [1, \infty)$. As such, the ``natural'' choice of function for a spatial-temporal differentiation is an $L^\infty$ function.
A similarly plausible but misguided attempt to establish convergence results of spatial-temporal differentiations for all $f \in L^\infty(X, \mu)$ could be through the concept of uniform sets. In \cite[Theorem 1]{HanselRaoult}, it was established that if $\mathcal{B}$ is separable with respect to the metric $(A, B) \mapsto \mu(A \Delta B)$, then there exists a dense $T$-invariant subalgebra $\mathcal{B}' \subseteq \mathcal{B}$ of sets such that $\chi_B$ is uniform for all $B \in \mathcal{B}'$. Again, one might propose that we could use a density argument to extend convergence results on spatial-temporal differentiations to functions $\chi_A$ for all $A \in \mathcal{B} \supseteq \mathcal{B}'$. But again, Theorem \ref{Uniformity and STDs} tells us that this would be tantamount to proving that all $L^\infty$ functions are uniform, and we know that there can exist non-uniform $L^\infty$ functions.
Other results are possible regarding topological dynamical systems, as we show below.
\begin{Lem}
Let $f \in L^\infty(X, \mu)$ be a nonnegative function, where $(X, \mathcal{B}, \mu, T)$ is a dynamical system. Then the sequence $\left( \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \right)_{k = 1}^\infty$ is convergent, and
$$\lim_{k \to \infty} \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty = \inf_{k \in \mathbb{N}} \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty .$$
\end{Lem}
\begin{proof}
Let $a_k = \left\| \sum_{i = 0}^{k - 1} T^i f \right\|_\infty$. Then the sequence $(a_k)_{k = 1}^\infty$ is subadditive. This follows since if $k, \ell \in \mathbb{N}$, then
\begin{align*}
a_{k + \ell} & = \left\| \sum_{i = 0}^{k + \ell - 1} T^i f \right\|_\infty \\
& \leq \left\| \sum_{i = 0}^{k - 1} T^i f \right\|_\infty + \left\| \sum_{i = k}^{k + \ell - 1} T^i f \right\|_\infty \\
& = \left\| \sum_{i = 0}^{k - 1} T^i f \right\|_\infty + \left\| T^k \sum_{i = 0}^{\ell - 1} T^i f \right\|_\infty \\
& = \left\| \sum_{i = 0}^{k - 1} T^i f \right\|_\infty + \left\| \sum_{i = 0}^{\ell - 1} T^i f \right\|_\infty \\
& = a_k + a_\ell .
\end{align*}
The result then follows from the Subadditivity Lemma.
\end{proof}
\begin{Def}
For nonnegative $f \in L^\infty(X, \mu)$, set
$$\Gamma(f) : = \lim_{k \to \infty} \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty .$$ We call this value $\Gamma(f)$ the \emph{gauge} of $f$.
\end{Def}
This $\Gamma(f)$ satisfies the inequality $\Gamma(f) \geq \int f \mathrm{d} \mu$, since
\begin{align*}
\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f & \leq \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
\Rightarrow \int f \mathrm{d} \mu = \int \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \mathrm{d} \mu & \leq \int \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \mathrm{d} \mu \\
& = \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
\Rightarrow \int f \mathrm{d} \mu & \leq \inf_{k \in \mathbb{N}} \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
& = \Gamma(f) .
\end{align*}
\begin{Def}
Let $X$ be a compact metric space, and let $C_{\mathbb{R}}(X)$ denote the (real) space of real-valued continuous functions on $X$ endowed with the uniform norm $\| \cdot \|_{C(X)}$. Let $T : X \to X$ be a homeomorphism, and let $\mathcal{M}_T(X)$ denote the family of all $T$-invariant Borel probability measures on $X$. A measure $\mu \in \mathcal{M}_T(X)$ is called \emph{$f$-maximizing} for some $f \in C_{\mathbb{R}}(X)$ if $\int f \mathrm{d} \mu = \sup_{\nu \in \mathcal{M}_T(X)} \int f \mathrm{d} \nu$. We denote by $\mathcal{M}_{\mathrm{max}}(f)$ the space of all $f$-maximizing measures.
\end{Def}
The definition of maximizing measures is due to Jenkinson \cite[Definition 2.3]{Jenkinson}. The definition is topological in nature, in the sense that it is defined with reference to a homeomorphism on a compact metric space prior to any other measure that metric space might possess. A result of Jenkinson \cite[Proposition 2.4]{Jenkinson} tells us that for every $f \in C_{\mathbb{R}}(X)$, we have
\begin{enumerate}
\item $\mathcal{M}_{\mathrm{max}}(f) \neq \emptyset$,
\item $\mathcal{M}_{\mathrm{max}}(f)$ is a compact metrizable simplex, and
\item the extreme points of $\mathcal{M}_{\mathrm{max}}(f)$ are exactly the ergodic $f$-maximizing measures. In particular, every $f \in C_{\mathbb{R}}(X)$ admits an ergodic $f$-maximizing measure.
\end{enumerate}
For every nonnegative $f \in C_{\mathbb{R}}(X)$, let $\mu_f$ denote an ergodic maximizing measure for $f$. We claim that $\Gamma(f) \leq \int f \mathrm{d} \mu_f$. To prove this, we note that \linebreak$\left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \leq \max_{x \in X} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f (x)$ (here we use that $f \geq 0$), where the maximum exists because $X$ is compact and $f \in C_{\mathbb{R}}(X)$ is continuous. Choose $x_k \in X$ such that $\max_{x \in X} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) = \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x_k)$. Let $\delta_{x_k}$ denote the Borel point-mass probability measure
$$\delta_{x_k}(A) = \begin{cases}
1 & x_k \in A \\
0 & x_k \not \in A
\end{cases}$$
Let $\mu_{x_k} = \frac{1}{k} \sum_{i = 0}^{k - 1} \delta_{T^i x_k}$, so that $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f (x_k) = \int f \mathrm{d} \mu_{x_k}$.
Since the space of Borel probability measures on $X$ is compact in the weak* topology on $C(X)^*$, there exists a subsequence $(\mu_{x_{k_n}})_{n = 1}^\infty$ of $(\mu_{x_k})_{k = 1}^\infty$ converging to a Borel probability measure $\mu'$. We claim that $\mu'$ is $T$-invariant, since if $g \in C(X)$, then
\begin{align*}
\left| \int T g \mathrm{d} \mu' - \int g \mathrm{d} \mu' \right| & \leq \left| \int T g \mathrm{d} \mu' - \int T g \mathrm{d} \mu_{x_{k_n}} \right| \\
& + \left| \int T g \mathrm{d} \mu_{x_{k_n}} - \int g \mathrm{d} \mu_{x_{k_n}} \right| \\
& + \left| \int g \mathrm{d} \mu_{x_{k_n}} - \int g \mathrm{d} \mu' \right| \\
& \leq \left| \int T g \mathrm{d} \mu' - \int T g \mathrm{d} \mu_{x_{k_n}} \right| \\
& + \left| \frac{g \left( T^{k_n} x_{k_n} \right) - g(x_{k_n})}{k_n} \right| \\
& + \left| \int g \mathrm{d} \mu_{x_{k_n}} - \int g \mathrm{d} \mu' \right| \\
& \leq \left| \int T g \mathrm{d} \mu' - \int T g \mathrm{d} \mu_{x_{k_n}} \right| \\
& + \frac{2 \|g\|_{C(X)}}{k_n} \\
& + \left| \int g \mathrm{d} \mu_{x_{k_n}} - \int g \mathrm{d} \mu' \right| \\
& \to 0 .
\end{align*}
Therefore $\mu'$ is a $T$-invariant Borel probability measure on $X$ such that $\int f \mathrm{d} \mu' = \lim_{n \to \infty} \max_{x \in X} \frac{1}{k_n} \sum_{i = 0}^{k_n - 1} T^i f(x) \geq \Gamma(f)$. But if $\mu_f$ is $f$-maximal, then
$$\Gamma(f) \leq \int f \mathrm{d} \mu' \leq \int f \mathrm{d} \mu_f .$$ Under certain conditions, however, we can achieve equality here.
\begin{Lem}\label{Assani Lemma 3}
Let $(X, \mathcal{B}, \mu)$ be a probability space, where $X$ is a compact metric space with Borel $\sigma$-algebra on $X$ denoted by $\mathcal{B}$. Let $T : X \to X$ be a homeomorphism. If $\mu$ is strictly positive, and $f \in C_{\mathbb{R}}(X)$ is nonnegative, then
$$\Gamma(f) = \int f \mathrm{d} \mu_f .$$
\end{Lem}
\begin{proof}
First, we claim that if $X$ is compact and $\mu$ is strictly positive, then $\left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty = \sup_{x \in X} \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right|$. Assume for contradiction that $\left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty < \sup_{x \in X} \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right|$. Then there exists $R > 0$ such that $R < \sup_{x \in X} \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right|$ and
$$\mu \left( \left\{ x \in X : \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right| > R \right\} \right) = 0 .$$
But that set is nonempty and open, so it must have positive measure, a contradiction.
Therefore $\left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty = \sup_{x \in X} \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right|$ for all $k \in \mathbb{N}$. Using this, together with the $T$-invariance of $\mu_f$, we can bound $\int f \mathrm{d} \mu_f$ by
\begin{align*}
\int f \mathrm{d} \mu_f & = \int \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \mathrm{d} \mu_f \\
& \leq \sup_{x \in X} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \\
& = \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
\Rightarrow \int f \mathrm{d} \mu_f & \leq \inf_{k \in \mathbb{N}} \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty \\
& = \Gamma(f) ,
\end{align*}
establishing the opposite inequality.
\end{proof}
\begin{Lem}\label{Assani Lemma 4}
Suppose that $(X, \mathcal{B}, \mu, T)$ consists of a compact metric space $X$ with Borel $\sigma$-algebra $\mathcal{B}$ and a strictly positive probability measure $\mu$ that is ergodic with respect to a homeomorphism $T: X \to X$. Then the system $(X, T)$ is uniquely ergodic if and only if $\Gamma(f) = \int f \mathrm{d} \mu$ for all nonnegative $f \in C_{\mathbb{R}}(X)$.
\end{Lem}
\begin{proof}
$(\Rightarrow)$ If $(X, T)$ is uniquely ergodic, then in particular $\mu = \mu_f$ for all nonnegative $f \in C_{\mathbb{R}}(X)$. Therefore by the previous lemma, we have
$$\int f \mathrm{d} \mu = \int f \mathrm{d} \mu_f = \Gamma(f) .$$
$(\Leftarrow)$ If $(X, T)$ is \emph{not} uniquely ergodic, then we know that $\mathcal{M}_T(X)$ is not a singleton, and thus contains another ergodic measure $\nu$. By a result of Jenkinson \cite[Theorem 3.7]{Jenkinson}, we know that there exists a real-valued $f \in C(X)$ such that $\nu = \mu_f$ is the \emph{unique} $f$-maximizing measure. We may assume without loss of generality that $f$ is nonnegative, since otherwise we can replace $f$ with $f - \inf_{x \in X} f(x)$, which changes neither the maximizing measures nor the difference $\int f \mathrm{d} \nu - \int f \mathrm{d} \mu$. Since we claimed that $\nu$ was the unique $f$-maximizing measure, we can conclude in particular that
\begin{align*}
\int f \mathrm{d} \mu & < \int f \mathrm{d} \nu \\
& = \int f \mathrm{d} \mu_f \\
& = \Gamma(f) .
\end{align*}
\end{proof}
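Before turning to the main theorem, we record a standard example illustrating Lemma \ref{Assani Lemma 3} and Lemma \ref{Assani Lemma 4}; it is included only as an illustration and is not used in the sequel.
\begin{Ex}
Let $X = \{0, 1\}^{\mathbb{Z}}$ be the full shift with the left shift map $T$ and the Bernoulli $\left( \frac{1}{2}, \frac{1}{2} \right)$ measure $\mu$, which is ergodic and strictly positive, while $(X, T)$ is far from uniquely ergodic. Take $f = \chi_{[0]}$, the (continuous) indicator of the clopen cylinder $\{ x \in X : x_0 = 0 \}$, so that $\int f \mathrm{d} \mu = \frac{1}{2}$. On the cylinder of points whose coordinates $x_0, \ldots, x_{k - 1}$ all vanish, a set of measure $2^{-k} > 0$, we have $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f = 1$, so $\left\| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_\infty = 1$ for every $k \in \mathbb{N}$, and hence $\Gamma(f) = 1 > \frac{1}{2} = \int f \mathrm{d} \mu$. In particular, $f$ is not uniform. The unique $f$-maximizing measure is the point mass at the fixed point $(\ldots, 0, 0, 0, \ldots)$, and $\int f \mathrm{d} \mu_f = 1 = \Gamma(f)$, in accordance with Lemma \ref{Assani Lemma 3}.
\end{Ex}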
\begin{Thm}\label{Assani Theorem}
Suppose that $(X, \mathcal{B}, \mu, T)$ consists of a compact metric space $X$ with Borel $\sigma$-algebra $\mathcal{B}$ and a probability measure $\mu$ that is ergodic with respect to a homeomorphism $T: X \to X$. Then the following results are related by the implications (1)$\Rightarrow$(2)$\Rightarrow$(3). Further, if $\mu$ is strictly positive, then (3)$\Rightarrow$(1).
\begin{enumerate}
\item $(X, T)$ is uniquely ergodic.
\item For every sequence of Borel-measurable sets $(F_k)_{k = 1}^\infty$ of positive measure, and for every $f \in C(X)$, the limit $\lim_{k \to \infty} \alpha_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right)$ exists and is equal to $\int f \mathrm{d} \mu$, where $\alpha_\cdot$ is as defined in Notation \ref{Defining alpha}.
\item For every sequence of open sets $(U_k)_{k = 1}^\infty$ of positive measure, and for every $f \in C(X)$, the limit $\lim_{k \to \infty} \alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right)$ exists and is equal to $\int f \mathrm{d} \mu$, where $\alpha_\cdot$ is as defined in Notation \ref{Defining alpha}.
\end{enumerate}
\end{Thm}
\begin{proof}
(1)$\Rightarrow$(2): If $(X, T)$ is uniquely ergodic, then \linebreak $\left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_{C(X)} \stackrel{k \to \infty}{\to} 0$, so
\begin{align*}
\left| \int f \mathrm{d} \mu - \alpha_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right| & = \left| \alpha_{F_k} \left( \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right| \\
& \leq \left\| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right\|_{C(X)} \\
& \stackrel{k \to \infty}{\to} 0 .
\end{align*}
(2)$\Rightarrow$(3): Trivial, since an open set is automatically Borel.
$\neg$(1)$\Rightarrow \neg$(3): Suppose $(X, T)$ is \emph{not} uniquely ergodic, and that $\mu$ is strictly positive. Then Lemma \ref{Assani Lemma 4} tells us that there exists nonnegative $f \in C_{\mathbb{R}}(X)$ for which $\Gamma(f) > \int f \mathrm{d} \mu$. Let $L$ be such that $\int f \mathrm{d} \mu < L < \Gamma(f)$, and consider the open set
$$U_k = \left\{ x \in X : \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) > L \right\} .$$
By the proof of Lemma \ref{Assani Lemma 3}, we know that
$$\Gamma(f) = \inf_{k \in \mathbb{N}} \max_{x \in X} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) = \inf_{k \in \mathbb{N}} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x_k) ,$$ where $x_k \in U_k$ for all $k \in \mathbb{N}$. Therefore $U_k$ is a nonempty open set, and since $\mu$ is strictly positive, that means $\mu(U_k) > 0$. Therefore
\begin{align*}
\alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) & \geq \alpha_{U_k} \left( L \right) \\
& = L \\
\Rightarrow \liminf_{k \to \infty} \alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) & \geq L \\
& > \int f \mathrm{d} \mu .
\end{align*}
\end{proof}
\begin{Thm}\label{Pathological connected differentiation}
Suppose that $(X, \mathcal{B}, \mu, T)$ consists of a compact connected metric space $X = (X, \rho)$ with Borel $\sigma$-algebra $\mathcal{B}$ and a probability measure $\mu$ that is ergodic with respect to a homeomorphism $T: X \to X$. Suppose further that $\mu$ is strictly positive, but $(X, T)$ is not uniquely ergodic. Then there exists a sequence $(U_k)_{k = 1}^\infty$ of nonempty open subsets of $X$ and a nonnegative continuous function $f \in C(X)$ such that the sequence $\left( \alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is not Cauchy. Furthermore, if $\mu$ is atomless, then we can choose the sequence $(U_k)_{k = 1}^\infty$ such that $\mu \left( U_k \right) \searrow 0$.
\end{Thm}
\begin{proof}
Lemma \ref{Assani Lemma 4} tells us that there exists nonnegative $f \in C_{\mathbb{R}}(X)$ for which $\Gamma(f) > \int f \mathrm{d} \mu$. Let $L, M \in \mathbb{R}$ such that $\int f \mathrm{d} \mu < L < M < \Gamma(f)$, and consider the open sets
\begin{align*}
V_k & = \left\{ x \in X : \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) > M \right\} , \\
W_k & = \left\{ x \in X : \int f \mathrm{d} \mu < \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) < L \right\} .
\end{align*}
By the proof of Lemma \ref{Assani Lemma 3}, we know that $V_k \neq \emptyset$, so let $x_k \in V_k$. We also know that there exists $z_k \in X$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(z_k) \leq \int f \mathrm{d} \mu$, since if $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(z) > \int f \mathrm{d} \mu$ for all $z \in X$, then by compactness $\int \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu > \int f \mathrm{d} \mu$, contradicting the $T$-invariance of $\mu$. Applying the Intermediate Value Theorem to the continuous function $x \mapsto \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x)$ on the connected space $X$, there then exists $y_k \in W_k$. Construct $(U_k)_{k = 1}^\infty$ as
$$U_k = \begin{cases}
V_k , & \textrm{$k$ odd} \\
W_k , & \textrm{$k$ even} .
\end{cases}$$
Then
\begin{align*}
\limsup_{k \to \infty} \alpha_{U_{2k - 1}} \left( \frac{1}{2k - 1} \sum_{i = 0}^{2k - 2} T^i f \right) & \geq \limsup_{k \to \infty} \alpha_{U_{2k - 1}} \left( M \right) \\
& = M , \\
\liminf_{k \to \infty} \alpha_{U_{2k}} \left( \frac{1}{2k} \sum_{i = 0}^{2k - 1} T^i f \right) & \leq \liminf_{k \to \infty} \alpha_{U_{2k}} \left( L \right) \\
& = L .
\end{align*}
Therefore
$$\liminf_{k \to \infty} \alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \leq L < M \leq \limsup_{k \to \infty} \alpha_{U_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) .$$
Moreover, if $\mu$ is atomless, then we can choose $\left(U_k \right)_{k = 1}^\infty$ so that $\mu \left( U_k \right) \to 0$ by letting $U_k$ be a ball of sufficiently small radius contained in $V_k$ (if $k$ is odd) or $W_k$ (if $k$ is even). The above calculations can be carried out in the same way.
\end{proof}
In Theorem \ref{Pathological Differentiation}, we construct an example of a Bernoulli shift $(X, \mathcal{B}, \mu, T)$ where there exists $(x, f) \in X \times C(X)$ such that the sequence \linebreak $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ not only does not converge to $\int f \mathrm{d} \mu$, as in Theorem \ref{Assani Theorem}, but such that it does not converge at all. Theorem \ref{Pathological connected differentiation} does not encompass that example, since subshifts are a priori totally disconnected.
In the next result, we will be making use of the Jewett-Krieger Theorem in a specific formulation. This is the formulation originally proven by Jewett in \cite{Jewett} under the assumption that the transformation was weakly mixing; Bellow and Furstenberg later demonstrated in \cite{BellowFurstenberg} that the parts of Jewett's argument which relied on the weakly mixing property could be proven under the weaker assumption of ergodicity. The version of the Jewett-Krieger Theorem we will be using is as follows.
\begin{JK}
Given an invertible ergodic system $(Y, \mathcal{A}, \nu, S)$ on a standard probability space $(Y, \mathcal{A}, \nu)$, there exists an essential isomorphism $h : (Y, \mathcal{A}, \nu, S) \to \left(2^\omega, \mathcal{B}, \mu, T \right)$ (where $2^\omega$ denotes the Cantor space) such that $\left( 2^\omega, T \right)$ is a strictly ergodic system.
\end{JK}
The following result provides some structure statements about the space $\mathscr{U}(Y, \mathcal{A}, \nu, S)$ of uniform functions.
\begin{Thm}
Let $(Y, \mathcal{A}, \nu)$ be a standard probability space, and $S : Y \to Y$ an ergodic automorphism. Then $\mathscr{U}(Y, \mathcal{A}, \nu, S)$ is a closed $S$-invariant subspace of $L^\infty (Y, \nu)$ that is closed under complex conjugation, and contains a unital $S$-invariant C*-subalgebra $A$ which is dense in $L^1(Y, \nu)$. This $A$ is isomorphic as a C*-subalgebra to $C \left( 2^\omega \right)$.
\end{Thm}
\begin{proof}
First, we prove that $\mathscr{U}(Y, \mathcal{A}, \nu, S)$ is a closed $S$-invariant subspace of $L^\infty(Y, \nu)$. The fact that it is a subspace of $L^\infty(Y, \nu)$ is clear, so suppose $f \in \operatorname{cl} \left( \mathscr{U}(Y, \mathcal{A}, \nu, S) \right)$, and fix $\epsilon > 0$. Then there exists $g \in \mathscr{U}(Y, \mathcal{A}, \nu, S)$ such that $\left\|f - g \right\|_\infty \leq \frac{\epsilon}{3}$. Choose $K \in \mathbb{N}$ such that $k \geq K \Rightarrow \left\| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right\|_\infty \leq \frac{\epsilon}{3}$. Then for $k \geq K$,
\begin{align*}
\left\| \int f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i f \right\|_\infty & \leq \left\| \int f \mathrm{d} \nu - \int g \mathrm{d} \nu \right\|_\infty \\
& + \left\| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right\|_\infty \\
& + \left\| \frac{1}{k} \sum_{i = 0}^{k - 1} S^i (g - f) \right\|_\infty \\
& \leq \left\| f - g \right\|_\infty + \left\| \int g \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i g \right\|_\infty + \left\| f - g \right\|_\infty \\
& \leq \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon .
\end{align*}
Thus $f \in \mathscr{U}(Y, \mathcal{A}, \nu, S)$. Now, we claim that if $f \in \mathscr{U}(Y, \mathcal{A}, \nu, S)$, then $Sf, S^{-1}f \in \mathscr{U}(Y, \mathcal{A}, \nu, S)$. We compute
\begin{align*}
\left\| \int S f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i (Sf) \right\|_\infty & = \left\| \int f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i (Sf) \right\|_\infty \\
& = \left\| \int f \mathrm{d} \nu - \left( \frac{1}{k} \sum_{i = 0}^{k - 1} S^i f \right) + \frac{1}{k} \left( f - S^k f \right) \right\|_\infty \\
& \leq \left\| \int f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i f \right\|_\infty + \frac{2 \| f \|_\infty}{k} \\
& \stackrel{k \to \infty}{\to} 0.
\end{align*}
An analogous argument will show that $S^{-1} f \in \mathscr{U}(Y, \mathcal{A}, \nu, S)$. To see that $\mathscr{U}(Y, \mathcal{A}, \nu, S)$ is also closed under complex conjugation, we see that
\begin{align*}
\left\| \int \overline{f} \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i \overline{f} \right\|_\infty & = \left\| \overline{\int f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i f} \right\|_\infty \\
& = \left\| \int f \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i f \right\|_\infty .
\end{align*}
Finally, we prove that $\mathscr{U}(Y, \mathcal{A}, \nu, S)$ contains a unital $S$-invariant C*-algebra $A$ that's dense in $L^1(Y, \nu)$. By the Jewett-Krieger Theorem, we know there exists an essential isomorphism $\phi : (Y, \mathcal{A}, \nu, S) \to \left(2^\omega, \mathcal{B}, \mu, T\right)$, where $\left( 2^\omega, \mathcal{B}, \mu, T \right)$ is uniquely ergodic. Let $A = \Phi \left( C \left( 2^\omega \right) \right)$, where $\Phi : L^\infty \left( 2^\omega , \mu \right) \to L^\infty(Y, \nu)$ is the pullback of $\phi$. Since $C \left( 2^\omega \right)$ is dense in $L^1 \left( 2^\omega , \mu \right)$, we can infer that $A = \Phi \left( C \left( 2^\omega \right) \right)$ is dense in $L^1(Y, \nu)$. Since continuous functions in a uniquely ergodic system are uniform, it follows that the functions of $A$ are uniform.
Because $\mu$ is strictly positive, we know that $C \left( 2^\omega \right)$ is isomorphic to its copy in $L^\infty \left( 2^\omega , \mu \right)$ (see proof of Lemma \ref{Assani Lemma 3}), so this map $\Phi$ is an isomorphism between $C \left( 2^\omega \right) \subsetneq L^\infty \left( 2^\omega , \mu \right)$ and $A = \Phi \left( C \left( 2^\omega \right) \right)$.
\end{proof}
\begin{Prop}
Suppose that $(X, \mathcal{B}, \mu, T)$ consists of a compact metric space $X = (X, \rho)$ with Borel $\sigma$-algebra $\mathcal{B}$ and a probability measure $\mu$ that is ergodic with respect to a homeomorphism $T: X \to X$, where $X$ is connected. Suppose further that $\exists F \in \mathcal{B}$ such that $0 < \mu(F) < 1$. Then there exists $f \in \mathscr{U}(X, \mathcal{B}, \mu, T) \setminus C(X)$.
\end{Prop}
\begin{proof}
By the Jewett-Krieger Theorem, there exists an essential isomorphism $h : (X, \mathcal{B}, \mu, T) \to \left( X', \mathcal{B}', \mu', T' \right)$, where $X' = 2^\omega$ and $\left( X', T' \right)$ is uniquely ergodic. The topological space $2^\omega$ admits a basis $\mathcal{G}$ of clopen sets. We claim that there exists $G \in \mathcal{G}$ such that $0 < \mu'(G) < 1$.
Assume for contradiction that $\mu'(G) \in \{0, 1\}$ for all $G \in \mathcal{G}$. If $(E_k)_{k = 1}^\infty$ is some sequence in $\mathcal{B}'$ of sets for which $\mu'(E_k) \in \{0, 1\}$, then
\begin{align*}
\mu' \left( \bigcup_{k = 1}^\infty E_k \right) & = \max_{k \in \mathbb{N}} \mu'(E_k) & \in \{0, 1\}, \\
\mu' \left( \bigcap_{k = 1}^\infty E_k \right) & = \min_{k \in \mathbb{N}} \mu' (E_k) & \in \{0, 1\}, \\
\mu'(X \setminus E_1) & = 1 - \mu(E_1) & \in \{0, 1\} .
\end{align*}
But since $\mathcal{G}$ generates $\mathcal{B}'$, this would imply that $\mu'(E) \in \{0, 1\}$ for all $E \in \mathcal{B}'$, a contradiction.
Therefore, there exists $G_0 \in \mathcal{B}'$ clopen such that $0 < \mu'(G_0) < 1$. Set $g = \chi_{G_0} \in C \left( X' \right) \subseteq \mathscr{U} \left( X', \mathcal{B}', \mu', T' \right)$, and let $f = g \circ h$. Then $f \in \mathscr{U}(X, \mathcal{B}, \mu, T)$. But since $f$ takes values in $\{0, 1\}$, and $\mu(\{x \in X : f(x) = 1 \}) \not \in \{0, 1\}$, we must conclude that $f \in \mathscr{U} \left( X, \mathcal{B}, \mu, T \right) \setminus C(X)$.
\end{proof}
We conclude this section by remarking that in most situations, we'll have $\mathscr{U}(Y, \mathcal{A}, \nu, S) \neq L^\infty(Y, \nu)$. We cite here a special case of a result of N. Ormes.
\begin{Lem}
Suppose $(Y, \mathcal{A}, \nu)$ is a non-atomic standard probability space, and $S : Y \to Y$ is an ergodic automorphism. Then there exists a minimal homeomorphism $T : 2^\omega \to 2^\omega$ and an affine homeomorphism $p : [0, 1] \to \mathcal{M}_T \left( 2^\omega \right)$ for which $\left( 2^\omega , \mathcal{B} , p(0) , T \right)$ is essentially isomorphic to $(Y, \mathcal{A}, \nu, S)$, where $\mathcal{B}$ here denotes the Borel $\sigma$-algebra on $2^\omega$.
\end{Lem}
\begin{proof}
This is a special case of \cite[Corollary 7.4]{Ormes}, where we specifically consider the Choquet simplex $[0, 1]$.
\end{proof}
Since $\left( 2^\omega, T \right)$ is not uniquely ergodic, it follows that there exists $f_0 \in C \left( 2^\omega \right)$ such that $\left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_0 \right)_{k = 1}^\infty$ does not converge uniformly to the constant $\int f_0 \mathrm{d} \left( p(0) \right)$. Since $\left( 2^\omega , T \right)$ is minimal, and the support of $p(0)$ is a nonempty $T$-invariant compact subset of $2^\omega$, it follows that $p(0)$ is strictly positive, and so the uniform norm on $C \left( 2^\omega \right)$ coincides with the $L^\infty \left( 2^\omega , p(0) \right)$ norm on $C \left( 2^\omega \right)$. As such, it follows that
$$\left\| \int f_0 \mathrm{d} (p(0)) - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_0 \right\|_\infty = \sup_{x \in 2^\omega} \left| \int f_0 \mathrm{d} (p(0)) - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_0(x) \right| \stackrel{k \to \infty}{\not \to} 0 .$$ Let $\phi : (Y, \mathcal{A}, \nu, S) \to \left( 2^\omega , \mathcal{B}, p(0), T \right)$ be an essential isomorphism, and let $\Phi : L^\infty \left( 2^\omega , p(0) \right) \to L^\infty (Y, \nu)$ be the pullback of $\phi$. Then
$$\left\| \int \left( \Phi f_0 \right) \mathrm{d} \nu - \frac{1}{k} \sum_{i = 0}^{k - 1} S^i \left( \Phi f_0 \right) \right\|_\infty = \left\| \int f_0 \mathrm{d} (p(0)) - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_0 \right\|_\infty \stackrel{k \to \infty}{\not \to} 0 .$$
Therefore $\Phi f_0 \in L^\infty(Y, \nu) \setminus \mathscr{U}(Y, \mathcal{A}, \nu, S)$.
The following proposition summarizes this discussion.
\begin{Prop}\label{Existence of non-uniform functions}
Suppose $(Y, \mathcal{A}, \nu)$ is a non-atomic standard probability space, and $S : Y \to Y$ is an ergodic automorphism. Then $\mathscr{U}(Y, \mathcal{A}, \nu, S) \neq L^\infty (Y, \nu)$.
\end{Prop}
\section{Non-expansive maps}\label{Non-expansive}
In this section, as well as in Section \ref{Lipschitz}, we investigate for a certain class of dynamical systems $(X, \mathcal{B}, \mu, T)$ what can be said about the convergence properties of $\left( \frac{1}{\mu(F_k)} \int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu \right)_{k = 1}^\infty$ for $f \in C(X)$ when we consider a ``probabilistically generic'' sequence $(F_k)_{k = 1}^\infty$. In other words, we investigate in some sense a ``typical'' behavior of \linebreak$\left( \frac{1}{\mu(F_k)} \int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu \right)_{k = 1}^\infty$ for $f \in C(X)$, and find sufficient conditions for this differentiation to converge almost surely to $\int f \mathrm{d} \mu$ for all $f \in C(X)$.
Let $X = (X, \rho)$ be a compact metric space, and $T : X \to X$ a $1$-Lipschitz map, i.e. such that $\rho(Tx, Ty) \leq \rho(x, y)$ for all $x, y \in X$. Let $\mathcal{B}$ denote the Borel $\sigma$-algebra on $X$, and $\mu$ a $T$-invariant, ergodic Borel probability measure on $X$. Then $(X, T)$ has topological entropy $0$, and thus $(X, \mathcal{B}, \mu, T)$ automatically has measure-theoretic entropy $0$, in particular finite entropy \cite[Lemma 1]{Goodman71}. By the Krieger Generator Theorem \cite[2.1]{KriegerGenerator}, the ergodic system admits a finite measurable partition $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$ of $X$ such that $\left\{ T^i E_d : i \in \mathbb{Z}, d \in \mathcal{D} \right\}$ generates the $\sigma$-algebra $\mathcal{B}$, where $\mathcal{D}$ is a finite indexing set. We call $\mathcal{E}$ a \emph{generator} of $(X, \mathcal{B}, \mu, T)$.
Let $d_i : X \to \mathcal{D}$, $i \in \mathbb{Z}$, be the measurable random variables uniquely determined by the relation
$$x \in T^{-i} E_{d_i(x)} ,$$
or equivalently
$$T^i x \in E_{d_i(x)} .$$
Given a word $\mathbf{a} = (a_0, a_1, \ldots, a_{\ell - 1}) \in \mathcal{D}^\ell$, we define the \emph{cylinder associated to $\mathbf{a}$} by
$$[a_0, a_1, \ldots, a_{\ell - 1}] : = \bigcap_{i = 0}^{\ell - 1} T^{-i} E_{a_i} .$$
We also define the \emph{rank-$k$ cylinder associated to $x \in X$} by
$$C_k(x) : = [d_0(x), d_1(x), \ldots, d_{k - 1}(x)] = \bigcap_{i = 0}^{k - 1} T^{-i} E_{d_i(x)} .$$
Equivalently, we can define $C_k(x)$ to be the element of $\lor_{i = 0}^{k - 1} T^{-i} \mathcal{E}$ containing $x$.
We note here that $\mu(C_k(x)) > 0$ for all $k \in \mathbb{N}$ for almost all $x \in X$, since
\begin{align*}
\{ x \in X : \exists k \in \mathbb{N} \textrm{ s.t. } \mu(C_k(x)) = 0 \} & = \bigcup_{k \in \mathbb{N}} \left\{ x \in X : \mu(C_k(x)) = 0 \right\} \\
& = \bigcup_{k \in \mathbb{N}} \left( \bigcup_{ \mathbf{d} \in \mathcal{D}^k \textrm{ s.t. } \mu([\mathbf{d}]) = 0 } [\mathbf{d}] \right)
\end{align*}
is a countable union of null sets.
Suppose further that $\operatorname{diam} (C_k(x)) \to 0$ for almost all $x \in X$. Our main result for this section is the following.
\begin{Thm}\label{Non-expansinve random cylinders}
Let $X = (X, \rho)$ be a compact metric space, and $T : X \to X$ a $1$-Lipschitz map, i.e. such that $\rho(Tx, Ty) \leq \rho(x, y)$ for all $x, y \in X$. Let $\mathcal{B}$ denote the Borel $\sigma$-algebra on $X$, and $\mu$ a $T$-invariant, ergodic Borel probability measure on $X$. Let $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$ be a finite measurable partition of $X$ which generates $\mathcal{B}$, and let $C_k(x)$ be the element of $\lor_{i = 0}^{k - 1} T^{-i} \mathcal{E}$ containing $x$. Suppose further that $\operatorname{diam}(C_k(x)) \to 0$ for almost all $x \in X$. Then the set of $x \in X$ such that
$$\frac{1}{\mu(C_k(x))} \int_{C_k(x)} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \mathrm{d} \mu \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Thm}
\begin{proof}
Since $X$ is compact metrizable, we know that $C(X)$ is a separable vector space, so let $\{ f_n \}_{n \in \mathbb{N}}$ be a countable set in $C(X)$ such that $\overline{\operatorname{span}}\{ f_n \}_{n \in \mathbb{N}} = C(X)$, where the closure is taken in the uniform norm on $C(X)$. Let
$$S_n = \left\{ x \in X : \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f_n \right) \to \int f_n \mathrm{d} \mu \right\} .$$
We claim that $\mu(S_n) = 1$.
Let $x \in X$ such that $\operatorname{diam}(C_k(x)) \to 0$, that $\mu(C_k(x)) > 0$ for all $k \in \mathbb{N}$, and such that $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_n (x) \to \int f_n \mathrm{d} \mu$. By the Birkhoff ergodic theorem \cite[Theorem 1.5]{Walters}, the set of all such $x$ is of full measure. Fix $\epsilon > 0$. Since $f_n$ is uniformly continuous, we know there exists $\delta > 0$ such that $\rho(x_1, x_2) \leq \delta \Rightarrow |f_n(x_1) - f_n(x_2)| \leq \frac{\epsilon}{2}$. Choose $K_1 \in \mathbb{N}$ such that $\operatorname{diam}(C_k (x)) \leq \delta$ for all $k \geq K_1$. Choose $K_2 \in \mathbb{N}$ such that $k \geq K_2 \Rightarrow \left| \int f_n \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_n (x) \right| \leq \frac{\epsilon}{2}$. Let $K = \max \{ K_1, K_2 \}$, and suppose that $k \geq K$. Then
\begin{align*}
& \left| \int f_n \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left(T^i f_n\right) \right| \\
& \leq \left| \int f_n \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_n(x) \right| + \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_n (x) - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f_n \right) \right| \\
& \leq \frac{\epsilon}{2} + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| T^i f_n(x) - \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i f_n \mathrm{d} \mu \right| \\
& = \frac{\epsilon}{2} + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i f_n(x) - T^i f_n \mathrm{d} \mu \right| \\
& = \frac{\epsilon}{2} + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \frac{1}{\mu \left( T^i C_k(x) \right)} \int_{T^i C_k(x)} T^i f_n(x) - f_n \mathrm{d} \mu \right| \\
& \leq \frac{\epsilon}{2} + \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu \left( T^i C_k(x) \right)} \int_{T^i C_k(x)} \left| T^i f_n(x) - f_n \right| \mathrm{d} \mu \\
& \leq \frac{\epsilon}{2} + \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu \left( T^i C_k(x) \right)} \int_{T^i C_k(x)} \frac{\epsilon}{2} \mathrm{d} \mu \\
& = \epsilon ,
\end{align*}
since $\operatorname{diam} \left( T^i C_k(x) \right) \leq \operatorname{diam} (C_k(x)) \leq \operatorname{diam} (C_{K_1}(x)) \leq \delta$. Thus if \linebreak$\mu(C_k(x))> 0$ for all $k \in \mathbb{N}$, if $\operatorname{diam}(C_k(x)) \to 0$, and if $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_n (x) \to \int f_n \mathrm{d} \mu$, then $x \in S_n$. Thus $\mu(S_n) = 1$ for all $n \in \mathbb{N}$, and so $\mu \left( \bigcap_{n \in \mathbb{N}} S_n \right) = 1$.
We claim now that if $x \in S = \bigcap_{n \in \mathbb{N}} S_n$, then $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$ for all $f \in C(X)$. Fix $x \in S, f \in C(X), \epsilon > 0$. Then there exist $N \in \mathbb{N}$ and $z_1, \ldots, z_N \in \mathbb{C}$ such that
$$\left\| f - \sum_{n = 1}^{N} z_n f_n \right\|_\infty < \frac{\epsilon}{3} .$$
Choose $L_1, \ldots, L_N \in \mathbb{N}$ such that
$$k \geq L_n \Rightarrow \left| \int f_n \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f_n \right) \right| < \frac{\epsilon}{3 N \max \{ |z_1|, \ldots, |z_N|, 1 \}} .$$
Abbreviate $g = \sum_{n = 1}^N z_n f_n$, and let $L = \max \{ L_1, \ldots, L_N \}$. Then if $k \geq L$, then
\begin{align*}
& \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right| \\
& \leq \left| \int f \mathrm{d} \mu - \int g \mathrm{d} \mu \right| + \left| \int g \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i g \right) \right| \\
& + \left| \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i (g - f) \right) \right| \\
& \leq \left\| f - g \right\|_\infty + \sum_{n = 1}^N |z_n| \left| \int f_n \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f_n \right) \right| \\
& + \frac{1}{k} \sum_{i = 0}^{k - 1} \left\| g - f \right\|_\infty \\
& \leq \epsilon .
\end{align*}
Thus $x \in S \Rightarrow \lim_{k \to \infty} \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) = \int f \mathrm{d} \mu$ for all $f \in C(X)$. Since $\mu(S) = 1$, this concludes the proof.
\end{proof}
\begin{Rmk}
We remark that the cylindrical structure of the $C_k(x)$ was not essential to our proof of Theorem \ref{Non-expansinve random cylinders}. Rather, the important feature of $(C_k(x))_{k = 1}^\infty$ was that their diameter went to $0$ as $k \to \infty$. To demonstrate this fact, we consider the scenario where we replace the $C_k(x)$ with balls around $x$ of radius decreasing to $0$, and note that the technique of proof is remarkably similar to that used to prove Theorem \ref{Non-expansinve random cylinders}.
\end{Rmk}
\begin{Thm}\label{Non-expansive random balls}
Let $X = (X, \rho)$ be a compact metric space, and $T : X \to X$ a $1$-Lipschitz map, i.e. such that $\rho(Tx, Ty) \leq \rho(x, y)$ for all $x, y \in X$. Let $\mathcal{B}$ denote the Borel $\sigma$-algebra on $X$, and $\mu$ a $T$-invariant, ergodic Borel probability measure on $X$. Let $(r_k)_{k = 1}^\infty$ be a non-increasing sequence of positive numbers $r_k > 0$ such that $\lim_{k \to \infty} r_k = 0$. Let $B_k(x) = \{ y \in X : \rho(x, y) < r_k \}$. Then the set of $x \in X$ such that
$$\frac{1}{\mu(B_k(x))} \int_{B_k(x)} \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \mathrm{d} \mu \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Thm}
\begin{proof}
First we will prove that for an arbitrary $f \in C(X)$, the set of all $x \in X$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$ is of full measure. Fix $\epsilon > 0$, and choose $\delta > 0$ such that $\rho(x, y) < \delta \Rightarrow |f(x) - f(y)| < \frac{\epsilon}{2}$ (where we invoke the uniform continuity of $f$). Choose $K_1 \in \mathbb{N}$ such that $r_{K_1} < \delta$. Then if $k \geq K_1 , i \in [0, k - 1]$, we have that $y \in B_k(x) \Rightarrow \rho(x, y) < \delta \Rightarrow \rho\left( T^i x, T^i y \right) < \delta$. Let $x \in \operatorname{supp}(\mu), k \geq K_1$. Then
\begin{align*}
\left| T^i f(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| & = \left| T^i f(x) - \frac{1}{\mu(B_k(x))} \int_{B_k(x)} T^i f \mathrm{d} \mu \right| \\
& = \left| T^i f(x) - \frac{1}{\mu\left(T^i B_k(x)\right)} \int_{T^i B_k(x)} f \mathrm{d} \mu \right| \\
& = \left| \frac{1}{\mu \left(T^i B_k(x) \right)} \int_{T^i B_k(x)} \left( T^i f(x) - f \right) \mathrm{d} \mu \right| \\
& \leq \frac{1}{\mu \left(T^i B_k(x) \right)} \int_{T^i B_k(x)} \left| T^i f(x) - f \right| \mathrm{d} \mu \\
& \leq \frac{\epsilon}{2} .
\end{align*}
Let $x \in X \cap \operatorname{supp}(\mu)$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \stackrel{k \to \infty}{\to} \int f \mathrm{d} \mu$. Choose $K_2 \in \mathbb{N}$ such that
$$k \geq K_2 \Rightarrow \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right| < \frac{\epsilon}{2}.$$
Then if $k \geq \max \{ K_1, K_2 \}$, then
\begin{align*}
& \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left(T^i f\right) \right| \\
& \leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \right| + \left| \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f (x) - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \right| \\
& \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} \\
& = \epsilon .
\end{align*}
The Birkhoff Ergodic Theorem then tells us that the set \linebreak
$ \left\{ x \in X : \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f(x) \stackrel{k \to \infty}{\to} \int f \mathrm{d} \mu \right\}$
is of full measure, and so we can intersect it with the support of $\mu$ to get another set of full measure.
We can now use an argument almost identical to that used in the proof of Theorem \ref{Non-expansinve random cylinders} to prove this present theorem. Let $\{ f_n \}_{n \in \mathbb{N}}$ be a countable set in $C(X)$ such that $\overline{\operatorname{span}}\{ f_n \}_{n \in \mathbb{N}} = C(X)$, where the closure is taken in the uniform norm on $C(X)$. Let
$$S_n = \left\{ x \in X \cap \operatorname{supp}(\mu) : \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f_n \right) \to \int f_n \mathrm{d} \mu \right\} .$$
As we have already shown, each $S_n$ is of full measure, and thus so is $\bigcap_{n \in \mathbb{N}} S_n$. From here, appealing to the fact that these $\{ f_n \}_{n \in \mathbb{N}}$ generate $C(X)$, we can prove the present theorem.
\end{proof}
\begin{Rmk}
If $\mu(\{x\}) = 0$ for all $x \in X$, then $\mu(B_k(x)) \to 0$ for every $x \in X$.
\end{Rmk}
\begin{Ex} Theorem \ref{Non-expansive random balls} ceases to be true if we drop the hypothesis that our system is ergodic. Let $X = (X, \rho)$ be a compact metric space, and let $T : X \to X$ be the identity map $T = \operatorname{id}_X$ on $X$. Let $\mu$ be any non-atomic Borel probability measure on $X$ (which is automatically $\operatorname{id}_X$-invariant) that is strictly positive. Fix $x_0 \in X$ and let $f(x) = \rho(x, x_0)$. Let $B_k(x_0) = \{ x \in X : \rho(x, x_0) < 1/k \}$.
We claim that $\int f \mathrm{d} \mu > 0$, but $$\alpha_{B_k(x_0)}\left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \stackrel{k \to \infty}{\to} 0 .$$
First, we observe that $\mu(B_k(x_0)) > 0$ for all $k \in \mathbb{N}$, since $x_0 \in \operatorname{supp}(\mu)$. However, since $\bigcap_{k = 1}^{\infty} B_k(x_0) = \{x_0\}$, we know that $\mu(B_k(x_0)) \to 0$. Therefore there exists $K \in \mathbb{N}$ such that $0 < \mu (B_K(x_0)) < \mu(B_1(x_0))$. Since $f$ is a nonnegative function, we can then conclude that
\begin{align*}
\int f \mathrm{d} \mu & \geq \int_{B_1(x_0) \setminus B_K(x_0)} f \mathrm{d} \mu \\
& \geq \mu(B_1(x_0) \setminus B_K(x_0)) \frac{1}{K} \\
& > 0 .
\end{align*}
Then
\begin{align*}
\alpha_{B_k(x_0)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) & = \alpha_{B_k(x_0)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} f \right) \\
& = \alpha_{B_k(x_0)}(f) \\
& = \frac{1}{\mu(B_k(x_0))} \int_{B_k(x_0)} f \mathrm{d} \mu .
\end{align*}
However, we can also say that $\left| \alpha_{B_k(x_0)}(f) \right| \leq 1/k$, since
\begin{align*}
\left|\alpha_{B_k(x_0)}(f) \right| & = \left| \frac{1}{\mu(B_k(x_0))} \int_{B_k(x_0)} f \mathrm{d} \mu \right| \\
& \leq \frac{1}{\mu(B_k(x_0))} \int_{B_k(x_0)} |f| \mathrm{d} \mu \\
& \leq \frac{1}{\mu(B_k(x_0))} \int_{B_k(x_0)} \frac{1}{k} \mathrm{d} \mu \\
& = \frac{1}{k} .
\end{align*}
Thus $\alpha_{B_k(x_0)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \stackrel{k \to \infty}{\to} 0 \neq \int f \mathrm{d} \mu$.
Thus for \emph{every} $x_0 \in X$ there exists $f_{x_0} \in C(X)$ such that
$$\limsup_{k \to \infty} \left| \int f_{x_0} \mathrm{d} \mu - \alpha_{B_k(x_0)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f_{x_0} \right) \right| > 0 .$$
This example highlights that the assumption that $T$ is ergodic cannot simply be dropped from Theorem \ref{Non-expansive random balls}.
\end{Ex}
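For concreteness, the following short Python sketch illustrates the example numerically. The specific choices ($X = [0, 1]$ with the Lebesgue measure, $x_0 = 0$, $f(x) = |x - x_0|$) are ours and are made only for illustration; the snippet is not part of the formal argument.
\begin{verbatim}
import numpy as np

# Illustration only: X = [0, 1], mu = Lebesgue measure (sampled by Monte Carlo),
# T = identity, x_0 = 0, f(x) = |x - x_0|.  These concrete choices are assumptions
# made for this sketch, not data from the text.
rng = np.random.default_rng(0)
sample = rng.random(10**6)          # i.i.d. points distributed like mu

x0 = 0.0
f = lambda t: np.abs(t - x0)

integral_f = f(sample).mean()        # Monte Carlo estimate of int f dmu (= 1/2 here)
for k in [10, 100, 1000, 10000]:
    ball = sample[np.abs(sample - x0) < 1.0 / k]   # points of B_k(x_0)
    # Since T = id, the ergodic average of f is f itself, so the spatial-temporal
    # average over B_k(x_0) is just the ball average of f.
    print(f"k={k:6d}  ball average of f ~ {f(ball).mean():.5f}"
          f"   vs   int f dmu ~ {integral_f:.5f}")
# The ball averages decay like 1/(2k) while the integral stays near 1/2.
\end{verbatim}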
\begin{Rmk}
In this section, as well as in Sections \ref{Lipschitz} and \ref{Bernoulli shifts and probability}, we focus on continuous functions $f \in C(X)$. Our reason for this is that we can study these $f$ in relation to the topological properties of $(X, T)$.
\end{Rmk}
\section{Lipschitz maps and subshifts}\label{Lipschitz}
Let us consider a compact \emph{pseudo}metric space $X = (X, p)$, and $T : X \to X$ a map that is Lipschitz of constant $L > 1$, i.e. $p(Tx, Ty) \leq L \cdot p(x, y)$. Recall that a pseudometric is distinguished from a metric by the fact that we do not assume that a pseudometric distinguishes points, i.e. we do not assume that $p(x, y) = 0 \Rightarrow x = y$. Suppose that $(X, \mathcal{B}, \mu, T)$ is of finite entropy, and thus admits a generator $\mathcal{E}$. Suppose further that for almost all $x \in X$ there exists a constant $\gamma = \gamma_x \in [1, \infty)$ such that
$$\operatorname{diam}(C_k(x)) \leq \gamma \cdot L^{-k} \; (\forall k \in \mathbb{N}) .$$
We pause to remark on two points. The first is that our consideration of pseudometric spaces is not generality for generality's sake. As we will see later in this section, this consideration of pseudometric spaces will be useful for studying certain metric spaces. The second is that this class of examples is not a direct generalization of the class considered in Section \ref{Non-expansive}. Though every $1$-Lipschitz map is of course Lipschitz for every constant $L > 1$, our condition on $\operatorname{diam}(C_k(x))$ is stronger here, since we ask not just that $\operatorname{diam}(C_k(x))$ go to $0$, but that it do so exponentially.
Since we are working in the slightly unorthodox setting of pseudometric spaces rather than metric spaces, we will prove that one of the strong properties of compact metric spaces is also true of compact \textit{pseudo}metric spaces, namely that every continuous function is uniformly continuous. The proof is essentially identical to the "textbook" argument for compact metric spaces. We doubt this is a new result, but we could not find a reference for it, so we prove it here.
\begin{Lem}
Let $(X, p)$ be a compact pseudometric space. Then every continuous function $f : X \to \mathbb{C}$ is uniformly continuous.
\end{Lem}
\begin{proof}
Fix $\epsilon > 0$. Then for every $x \in X$ there exists $\delta_x > 0$ such that $p(x, y) < \delta_x \Rightarrow |f(x) - f(y)| < \frac{\epsilon}{2}$. Then the family $\mathcal{U} = \left\{ B \left( x, \frac{\delta_x}{2} \right) \right\}_{x \in X}$ is an open cover of $X$, so there exists a finite subcover $\mathcal{U}' = \left\{ B \left( x_j, \frac{\delta_{x_j}}{2} \right) \right\}_{j = 1}^n$ of $X$.
Let $\delta' = \min_{1 \leq j \leq n} \frac{\delta_{x_j}}{2}$, and suppose that $x, y \in X$ such that $p(x, y) < \delta'$. Then there exists $x_j \in X$ such that $p(x, x_j) < \frac{\delta_{x_j}}{2}$, since $\mathcal{U}'$ is a cover of $X$. Then
\begin{align*}
p(x_j, y) & \leq p(x_j, x) + p(x, y) \\
& < \frac{\delta_{x_j}}{2} + \frac{\delta_{x_j}}{2} \\
& = \delta_{x_j} .
\end{align*}
Therefore $p(x_j, x) < \frac{\delta_{x_j}}{2} < \delta_{x_j}, p(x_j, y) < \delta_{x_j}$, so $|f(x) - f(x_j)| < \frac{\epsilon}{2} , |f(y) - f(x_j)| < \frac{\epsilon}{2}$.
Thus
\begin{align*}
|f(x) - f(y)| & \leq |f(x) - f(x_j)| + |f(x_j) - f(y)| \\
& < \frac{\epsilon}{2} + \frac{\epsilon}{2} \\
& = \epsilon .
\end{align*}
Therefore $\delta' > 0$ is such that $p(x, y) < \delta' \Rightarrow |f(x) - f(y)| < \epsilon$. Thus we have shown that $f$ is uniformly continuous.
\end{proof}
Now we are able to both state and prove the first main result of this section.
\begin{Prop}\label{Random Lipschitz Cylinders}
Let $(X, p)$ be a compact pseudometric space, and let $T : X \to X$ be an $L$-Lipschitz homeomorphism on $X$ with respect to $p$, where $L > 1$. Suppose $\mu$ is a regular Borel probability measure on $X$ such that $T$ is ergodic with respect to $\mu$. Let $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$ be a generator of $(X, T)$ such that for almost all $x \in X$ there exists $\gamma_x \in \mathbb{R}$ such that $\operatorname{diam}(C_k(x)) \leq \gamma_x \cdot L^{-k}$ for all $k \in \mathbb{N}$. Fix $f \in C(X)$. Then
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for almost all $x \in X$.
\end{Prop}
\begin{proof}
Our goal is to show that for every $\epsilon > 0$, there exists some $K \in \mathbb{N}$ such that if $k \geq K$, we have
\begin{align*}
\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right| \\
\leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| + \left| \frac{1}{k} \sum_{i = 0}^{k - 1} \left( \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right) \right| \\
\leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \\
\leq \epsilon .
\end{align*}
We will accomplish this by bounding the terms
$$\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| , \; \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| $$
by $\epsilon$.
We will start with bounding the latter term. We claim that if $x \in X$ is such that $\mu(C_k(x)) > 0$ and $\operatorname{diam}(C_k(x)) \leq \gamma_x \cdot L^{-k}$ for all $k \in \mathbb{N}$, then for every $\epsilon > 0$, there exists $K_1 \in \mathbb{N}$ such that
$$k \geq K_1 \Rightarrow \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right) (x) - \alpha_{C_k(x)} \left( T^i f \right) \right| < \frac{\epsilon}{2} .$$
To prove this, choose $\delta > 0$ such that $p(y, z) < \delta \Rightarrow |f(y) - f(z)| < \frac{\epsilon}{4}$. Let $\kappa \in \mathbb{N}$ be such that $\gamma_x \cdot L^{-\kappa} < \delta$. Then for $k > \kappa$,
\begin{align*}
\frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| & = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] \\
& + \frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] .
\end{align*}
We will estimate these two terms separately, bounding each by $\frac{\epsilon}{4}$. Beginning with the former, we observe that if $x, y \in C_k(x)$, then
$$p \left( T^{i} x, T^i y \right) \leq L^{i} p(x, y) \leq L^i \cdot \gamma_x \cdot L^{-k} = \gamma_{x} \cdot L^{i - k} .$$
In particular, this means that if $i - k \leq - \kappa$, then $\left| \left( T^i f \right)(x) - f(z) \right| < \frac{\epsilon}{4}$ for all $z = T^i y \in T^i C_{k}(x)$, so
\begin{align*}
& \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] \\
& = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \frac{1}{\mu(C_k(x))} \int_{C_k(x)} \left( \left( T^i f \right)(x) - T^i f \right) \mathrm{d} \mu \right| \right] \\
& \leq \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \frac{1}{\mu\left(T^i C_k(x)\right)} \int_{T^i C_k(x)} \left| \left( T^i f \right) (x) - f \right| \mathrm{d} \mu \right] \\
& \leq \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \frac{1}{\mu\left(T^i C_k(x)\right)} \int_{T^i C_k(x)} \frac{\epsilon}{4} \mathrm{d} \mu \right] \\
& = \frac{k - \kappa + 1}{k} \frac{\epsilon}{4} \\
& \leq \frac{\epsilon}{4} .
\end{align*}
On the other hand, we can estimate
$$\frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] \leq \frac{2 \kappa}{k} \| f \|_\infty .$$
Choose $K_1 > \kappa$ such that $\frac{2 \kappa \left\| f \right\|_\infty}{K_1} < \frac{\epsilon}{4}$. Then if $k \geq K_1$, we have
\begin{align*}
\frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| & = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] \\
& + \frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \right] \\
& \leq \frac{\epsilon}{4} + \frac{\epsilon}{4} \\
& = \frac{\epsilon}{2} .
\end{align*}
Now suppose further that $x \in X$ is such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \stackrel{k \to \infty}{\to} \int f \mathrm{d} \mu$. Choose $K_2 \in \mathbb{N}$ such that $k \geq K_2 \Rightarrow \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| < \frac{\epsilon}{2}$. Then for $k \geq \max \{ K_1, K_2 \}$, we have
\begin{align*}
\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right| & \leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| \\
& + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{C_k(x)} \left( T^i f \right) \right| \\
& \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} \\
& = \epsilon .
\end{align*}
Since the set of $x \in X$ for which this calculation could be performed is of full measure, the proposition follows.
\end{proof}
From here, we get the following corollary.
\begin{Cor}
Let $(X, \rho)$ be a compact metric space, and let $T : X \to X$ be an $L$-Lipschitz homeomorphism on $X$ with respect to $\rho$, where $L > 1$. Suppose $\mu$ is a regular Borel probability measure on $X$ such that $T$ is ergodic with respect to $\mu$. Let $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$ be a generator of $(X, T)$ such that for almost all $x \in X$ exists $\gamma_x \in \mathbb{R}$ such that $\operatorname{diam}(C_k(x)) \leq \gamma_x \cdot L^{-k}$ for all $k \in \mathbb{N}$. Then the set of $x \in X$ such that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Cor}
\begin{proof}
Let $\{ f_n \}_{n \in \mathbb{N}}$ be a countable set in $C(X)$ such that \linebreak$C(X) = \overline{\operatorname{span}} \{ f_n \}_{n \in \mathbb{N}}$. By the previous result, for each $n \in \mathbb{N}$ the set of $x \in X$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f_n \right) \stackrel{k \to \infty}{\to } \int f_n \mathrm{d} \mu$ is of full measure. We can then extend to all of $C(X)$ in the same manner as we did in the proof of Theorem \ref{Non-expansinve random cylinders}.
\end{proof}
\subsection{Two-sided subshifts and systems of finite entropy}
This brings us to the matter of (two-sided) subshifts. Let $\mathcal{D}$ be a finite discrete set, and let $T : \mathcal{D}^\mathbb{Z} \to \mathcal{D}^\mathbb{Z}$ be the map $(Tx)_n = x_{n + 1}$, called the \emph{left shift}. We call $X \subseteq \mathcal{D}^\mathbb{Z}$ a \emph{subshift} if $X$ is compact and $T X = X$. Assume that $\mu$ is a Borel probability measure on $X$ with respect to which $T$ is ergodic.
In a shift space, we will always take our generator to be the family $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$ of sets $E_d = \{ x \in X : x_0 = d \} , d \in \mathcal{D}$. We claim that for almost all $x \in X$, we have
$$\lim_{k \to \infty} \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) = \int f \mathrm{d} \mu$$
for all $f \in C(X)$. First, we want to establish the following lemma.
\begin{Lem}\label{Cylinders dense}
Let $(X, \mathcal{B}, \mu, T)$ be a subshift, where $X \subseteq \mathcal{D}^\mathbb{Z}$. The family
$$\mathcal{F} = \left\{ T^n \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} : (a_0, a_1, \ldots, a_{\ell - 1}) \in \mathcal{D}^\ell , \ell \in \mathbb{N}, n \in \mathbb{Z} \right\}$$
generates $C(X)$ in the sense that its span is dense in $C(X)$ with respect to the uniform norm.
\end{Lem}
\begin{proof}
We claim that every $f \in C(X)$ can be approximated uniformly by elements of $\operatorname{span} \mathcal{F}$. We will begin by demonstrating the result for real $f \in C(X)$, then extrapolate the result to all complex-valued $f \in C(X)$.
For $\ell \in \mathbb{N}$, set
\begin{align*}
& A(a_{-\ell + 1}, a_{-\ell + 2}, \ldots, a_{-1}, a_0, a_1, \ldots, a_{\ell - 2}, a_{\ell - 1}) \\
& = \{ x \in X : x_{j} = a_j \; \forall j \in [-\ell + 1, \ell - 1] \} \\
& = T^{\ell - 1} [a_{-\ell + 1}, a_{-\ell + 2}, \ldots, a_{-1}, a_0, a_1, \ldots, a_{\ell - 2}, a_{\ell - 1}]
\end{align*}
and let
\begin{align*}
g_\ell & = \sum_{\mathbf{a} \in \mathcal{D}^{2 \ell - 1}} \min \left\{ f(y) : y \in A(\mathbf{a}) \right\} \chi_{A(\mathbf{a})} \\
& = \sum_{\mathbf{a} \in \mathcal{D}^{2 \ell - 1}} \min \left\{ f(y) : y \in A(\mathbf{a}) \right\} T^{- \ell + 1} \chi_{[\mathbf{a}]} \\
& \in \operatorname{span} \mathcal{F} .
\end{align*}
Here terms with $A(\mathbf{a}) = \emptyset$ are interpreted as $0$. We claim that $g_\ell \to f$ uniformly. The sequence $\ell \mapsto g_\ell$ is monotonic increasing. Moreover, we claim that it converges pointwise to $f$. To see this, let $x \in X$ and consider $g_\ell(x)$. For each $\ell \in \mathbb{N}$ there exists $y^{(\ell)} \in X$ such that $g_\ell(x) = f\left( y^{(\ell)} \right)$, and since $y_j^{(\ell)} = x_j$ for all $j \in [-\ell + 1, \ell - 1]$, we can conclude that $y^{(\ell)} \to x$; by continuity of $f$, it follows that $g_\ell(x) = f \left( y^{(\ell)} \right) \to f(x)$. Thus $g_\ell \nearrow f$ pointwise. Dini's Theorem then gives us uniform convergence. Therefore, if $f \in C(X)$ is real-valued, then $f \in \overline{\operatorname{span}} \mathcal{F}$. On the other hand, any complex-valued function $f \in C(X)$ can be expressed as the sum of its real and imaginary parts, and we can apply this argument to both of those parts separately.
\end{proof}
\begin{Thm}\label{Metric result for subshifts}
Let $X \subseteq \mathcal{D}^\mathbb{Z}$ be a subshift, and let $\mu$ be a Borel probability measure on $X$ with respect to which the left shift $T$ is ergodic. Then the set of all $x \in X$ such that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Thm}
\begin{proof}
Our first step is to show that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \to \int \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu$$
for all finite strings $\mathbf{a} = (a_0, a_1, \ldots, a_{\ell - 1}) \in \mathcal{D}^\ell$. Let $p$ be the pseudometric on $X$ given by
$p(x, y) = 2^{- \min \{ n \geq 0 : x_n \neq y_n \}} ,$
where $\min (\emptyset) = + \infty$ and $2^{-\infty} = 0$.
We claim that the function $\chi_{[\mathbf{a}]}$ is continuous with respect to the topology of $p$, and that $(X, \mathcal{B}, \mu, T)$ satisfies the hypotheses of Proposition \ref{Random Lipschitz Cylinders} for $L = 2$. A straightforward calculation shows that $T$ is $2$-Lipschitz and that $\operatorname{diam} \left([\mathbf{a}]\right) \leq 2 \cdot 2^{-\ell}$ for all $\ell \in \mathbb{N}, \mathbf{a} \in \mathcal{D}^\ell$. Therefore, if
$$R_\mathbf{a} = \left\{ x \in X : \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \to \int \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu \right\},$$
then $\mu(R_\mathbf{a}) = 1$ for all $\mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, and so $R = \bigcap_{\mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell} R_\mathbf{a}$ is of full measure. We now claim that if $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \to \int \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu$, then $$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i T^n \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \to \int \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu $$
for all $n \in \mathbb{Z}$.
It will suffice to prove the result for $n = \pm 1$ and extend to all $n \in \mathbb{Z}$ by induction. To prove the claim for $n = 1$, we observe that
\begin{align*}
\left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i (T f) \right) \right) - \left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right) & = \frac{1}{k} \alpha_{C_k(x)} \left( T^k f - f \right) \\
\Rightarrow \left| \left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i (T f) \right) \right) - \left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right) \right| & \leq \frac{2}{k} \| f \|_\infty \\
& \to 0 .
\end{align*}
A similar calculation tells us that
$$\left| \left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i \left( T^{-1} f \right) \right) \right) - \left( \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \right) \right| \leq \frac{2}{k} \| f \|_\infty \to 0 ,$$
verifying the claim for $n = - 1$.
Thus if $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$, then a straightforward induction argument will show that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i \left( T^n f \right) \right) \to \int f \mathrm{d} \mu = \int T^n f \mathrm{d} \mu $$
for all $n \in \mathbb{Z}$.
In particular, this means that if $x \in R$, then $\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$ for all $f \in \mathcal{F}$. Since the span of $\mathcal{F}$ is dense in $C(X)$, this means that if $x \in R$, then
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C(X)$.
\end{proof}
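As a sanity check on the two facts invoked in the proof, the following small Python snippet (an illustration of ours, using randomly generated finite windows of $\{0,1\}$-sequences) verifies on samples that the left shift is $2$-Lipschitz for the pseudometric $p(x, y) = 2^{-\min\{n \geq 0 : x_n \neq y_n\}}$ and that points of a common length-$\ell$ cylinder are at $p$-distance at most $2^{-\ell}$.
\begin{verbatim}
import random

# Illustration only: finite windows of {0,1}-sequences stand in for elements of
# the subshift; p is computed from the visible window.
N = 60
random.seed(1)

def p(x, y):
    for n in range(min(len(x), len(y))):
        if x[n] != y[n]:
            return 2.0 ** (-n)
    return 0.0                      # no disagreement on the visible window

shift = lambda x: x[1:]             # left shift, truncated to the window

for _ in range(1000):
    x = [random.randint(0, 1) for _ in range(N)]
    y = [random.randint(0, 1) for _ in range(N)]
    assert p(shift(x), shift(y)) <= 2.0 * p(x, y)      # 2-Lipschitz bound
    if x[:5] == y[:5]:                                 # same length-5 cylinder
        assert p(x, y) <= 2.0 ** (-5)                  # cylinder diameter bound
print("2-Lipschitz and cylinder-diameter bounds hold on all sampled pairs")
\end{verbatim}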
We turn now to apply Theorem \ref{Metric result for subshifts} to a slightly broader context. Let $(Y, \mathcal{A}, \nu, S)$ be an invertible ergodic system with finite entropy. Then the system admits a finite generator $\mathcal{E} = \{E_d\}_{d \in \mathcal{D}}$. For each $y \in Y, i \in \mathbb{Z}$, let $e_i(y) \in \mathcal{D}$ be the element of $\mathcal{D}$ such that $y \in S^{-i} E_{e_i(y)}$, or equivalently such that $S^i y \in E_{e_i(y)}$. Define the \emph{$k$-length cylinder} corresponding to $y$ by
$$F_k(y) = \bigcap_{i = 0}^{k - 1} S^{-i} E_{e_i(y)} .$$
We define a map $\phi : Y \to \mathcal{D}^\mathbb{Z}$ by
$$\phi(y) = (e_i(y))_{i \in \mathbb{Z}} .$$
We call this map $\phi$ the \emph{itinerary map} on $Y$ induced by $\mathcal{E}$. Let $T$ be the standard left shift on $\mathcal{D}^\mathbb{Z}$. The itinerary map commutes with the left shift in the sense that the following diagram commutes:
$$
\begin{tikzcd}
Y \arrow[d, "\phi"] \arrow[r, "S"] & Y \arrow[d, "\phi"] \\
\mathcal{D}^\mathbb{Z} \arrow[r, "T"] & \mathcal{D}^\mathbb{Z}
\end{tikzcd}
$$
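The following brief Python sketch (our own illustration; the irrational rotation and the two-set partition are assumptions made purely for the example) checks the commutation $\phi \circ S = T \circ \phi$ on a finite window of indices.
\begin{verbatim}
import math

# Illustration only: S is the rotation y -> y + alpha (mod 1) on [0, 1), which is
# invertible, and the partition is E_0 = [0, 1/2), E_1 = [1/2, 1).
alpha = math.sqrt(2) - 1
S = lambda y: (y + alpha) % 1.0

def symbol(y):
    return 0 if y < 0.5 else 1        # e(y): which partition element contains y

def itinerary(y, i_min, i_max):
    # finite window (e_i(y)) for i_min <= i <= i_max, where e_i(y) is the
    # symbol of S^i y
    return [symbol((y + i * alpha) % 1.0) for i in range(i_min, i_max + 1)]

y = 0.123
window_y = itinerary(y, -5, 6)        # phi(y) restricted to indices -5..6
window_Sy = itinerary(S(y), -5, 5)    # phi(Sy) restricted to indices -5..5

# phi(S y) should be the left shift of phi(y): (phi(Sy))_i = (phi(y))_{i+1}.
assert window_Sy == window_y[1:]
print("phi(S y) agrees with T phi(y) on the sampled window")
\end{verbatim}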
We can now state the following corollary.
\begin{Cor}
Let $(Y, \mathcal{A}, \nu, S)$ be an invertible ergodic system with finite entropy and finite generator $\mathcal{E} = \{E_d\}_{d \in \mathcal{D}}$. Let $\mathbb{A} \subseteq L^\infty(Y, \nu)$ be the subspace
$$\mathbb{A} = \overline{\operatorname{span}} \left\{ S^n \chi_{\bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j}} : n \in \mathbb{Z}, \ell \in \mathbb{N}, d_j \in \mathcal{D} \right\} .$$
Then the set of $y \in Y$ such that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(F_k(y))} \int_{F_k(y)} S^i g \mathrm{d} \nu \to \int g \mathrm{d} \nu$$
for all $g \in \mathbb{A}$ is of full measure.
\end{Cor}
\begin{proof}
Endow $\mathcal{D}^\mathbb{Z}$ with the pushforward measure $\mu(B) = \nu \left( \phi^{-1} B \right)$. Since $\phi^{-1}[d] = E_d \in \mathcal{A}$ for all $d \in \mathcal{D}$, the measure $\mu$ is Borel. We also observe that $F_k(y) = \phi^{-1} C_k(\phi(y))$ and that $\bigcap_{i = 0}^{k - 1} S^{-i} E_{d_i} = \phi^{-1} [d_0, d_1, \ldots, d_{k - 1}]$. Let $B \subseteq \mathcal{D}^\mathbb{Z}$ be the set of all $x \in \mathcal{D}^\mathbb{Z}$ such that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{C_k(x)} \left( T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C\left(\mathcal{D}^\mathbb{Z}\right)$, which we know by the previous theorem to be of full measure in $\mathcal{D}^\mathbb{Z}$, and let $A = \phi^{-1} B$. Then if $y \in A$ and $d_0, d_1, \ldots, d_{\ell - 1} \in \mathcal{D}$, then
\begin{align*}
& \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(F_k(y))} \int_{F_k(y)} S^i \chi_{\bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j}} \mathrm{d} \nu \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(F_k(y))} \nu \left( F_k(y) \cap S^{-i} \bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j} \right) \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu \left( \phi^{-1} C_k(\phi(y)) \right)} \nu \left( \phi^{-1} \left( C_k(\phi(y)) \cap T^{-i} [d_0, d_1, \ldots, d_{\ell - 1}] \right) \right) \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu (C_k(\phi(y))) } \mu \left( C_k(\phi(y)) \cap T^{-i} [d_0, d_1, \ldots, d_{\ell - 1}] \right) \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu (C_k(\phi(y))) } \int_{C_k(\phi(y))} T^{i} \chi_{[d_0, d_1, \ldots, d_{\ell - 1}]} \mathrm{d} \mu \\
& \to \int \chi_{[d_0, d_1, \ldots, d_{\ell - 1}]} \mathrm{d} \mu \\
& = \mu([d_0, d_1, \ldots, d_{\ell - 1}]) \\
& = \nu \left( \bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j} \right) \\
& = \int \chi_{\bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j}} \mathrm{d} \nu ,
\end{align*}
since $\chi_{[d_0, d_1, \ldots, d_{\ell - 1}]} \in C \left( \mathcal{D}^\mathbb{Z} \right)$.
By an argument similar to that employed in the proof of Theorem \ref{Metric result for subshifts}, we can extrapolate that if $y \in A$, then
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(F_k(y))} \int_{F_k(y)} S^i S^n \chi_{\bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j}} \mathrm{d} \nu \to \int S^n \chi_{\bigcap_{j = 0}^{\ell - 1} S^{-j} E_{d_j}} \mathrm{d} \nu$$
for $n \in \mathbb{Z}$. By density, it follows that if $g \in \mathbb{A}$, then \linebreak $\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\nu(F_k(y))} \int_{F_k(y)} S^i g \mathrm{d} \nu \to \int g \mathrm{d} \nu$ for all $y \in A$, and $A$ is a set of full measure.
\end{proof}
\subsection{Pathological differentiation problems and relations to symbolic distributions}
In Theorem \ref{Metric result for subshifts}, we demonstrated that
$$\alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \stackrel{k \to \infty}{\to} \int f \mathrm{d} \mu \; (\forall f\in C(X)) $$
for almost all $x \in X$. We take this opportunity to demonstrate that the "almost all" caveat is indispensable, as there can exist $x \in X$ for which $\alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \not \to \int f \mathrm{d} \mu$ for certain $f \in C(X)$. This is related to the shift not being uniquely ergodic, which we discussed in more detail in Section \ref{Topological STDs}. In fact, we even claim the sequence $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^{\infty}$ can fail to be \emph{Cauchy} for certain pairs $(x, f) \in X \times C(X)$.
\begin{Thm}\label{Pathological Differentiation}
Let $X = \mathcal{D}^\mathbb{Z}$ be a Bernoulli shift with symbol space \linebreak $\mathcal{D} = \{0, 1, \ldots, D - 1\}$, $D \geq 2$, equipped with a Borel probability measure $\mu$ such that $\mu([d]) \neq 0$ for all $d \in \mathcal{D}$. Let $f = \chi_{[0]}$, and let $T$ be the left shift. Then there exists an uncountable subset $S \subseteq X$ such that $x, y \in S \Rightarrow x_j = y_j \; \left( \forall j \leq 0 \right)$, and such that the sequence $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is not Cauchy for all $x \in S$.
\end{Thm}
\begin{proof}
We first compute $\alpha_{[x_0, x_1, \ldots, x_{k - 1}]} \left(T^i f \right)$ for $0 \leq i \leq k - 1$ as follows. We see
\begin{align*}
\alpha_{[x_0, x_1, \ldots, x_{k - 1}]} \left(T^i f \right) & = \frac{1}{\mu([x_0, x_1, \ldots, x_{k - 1}])} \int_{[x_0, x_1, \ldots, x_{k - 1}]} T^i \chi_{[0]} \mathrm{d} \mu \\
& = \frac{1}{\mu([x_0, x_1, \ldots, x_{k - 1}])} \int_{\bigcap_{j = 0}^{k - 1} T^{-j} [x_j]} \chi_{T^{-i} [0]} \mathrm{d} \mu \\
& = \frac{1}{\mu([x_0, x_1, \ldots, x_{k - 1}])} \mu \left( \left( \bigcap_{j = 0}^{k - 1} T^{-j} [x_j] \right) \cap T^{-i}[0] \right) \\
& = \delta(x_i, 0) ,
\end{align*}
where $\delta(\cdot, \cdot)$ refers here to the Kronecker delta. Thus if $x = (x_j)_{j \in \mathbb{Z}} \in X$, then
\begin{align*}
\alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) & = \frac{\# \{ i \in [0 , k - 1] : x_i = 0 \}}{k} & (\dagger) \end{align*}
The identity $(\dagger)$ implies that if there exists $x \in X$ such that \linebreak $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is not Cauchy, then we can build our set $S$. For $x, y \in X$, write $x \sim y$ if $x_j = y_j$ for all $j \leq 0$, and the set $\{ j \in \mathbb{N} : x_j \neq y_j \}$ has density $0$. This is an equivalence relation. We claim that if $x \sim y$, then $\left| \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) - \alpha_{C_k(y)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right| \stackrel{k \to \infty}{\to} 0$. By $(\dagger)$, we know that
\begin{align*}
\left| \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) - \alpha_{C_k(y)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right| & = \left|\frac{1}{k} \sum_{i = 0}^{k - 1} (\delta(x_i, 0) - \delta(y_i, 0)) \right| \\
& \leq \frac{1}{k} \sum_{i = 0}^{k - 1} |(\delta(x_i, 0)-\delta(y_i, 0))| \\
& \leq \frac{\# \{ i \in [0, k - 1] : x_i \neq y_i \}}{k} \\
& \stackrel{k \to \infty}{\to} 0 .
\end{align*}
Therefore, we can let $S$ be the equivalence class of $x$ under $\sim$. To see that this $S$ is uncountable, let $E \subseteq \mathbb{N}$ be an infinite subset of density $0$. Then every subset $F \subseteq E$ also has density $0$. For each $F \subseteq E$, let $x^F \in X$ be a sequence such that $x_j^F = x_j$ for $j \not \in F$ and $x_j^F \neq x_j$ for $j \in F$. Since $E$ has uncountably many subsets, and $x \sim x^F$ for all $F \subseteq E$, we have shown that the equivalence class of $x$ under $\sim$ is uncountable. So, assuming that $x \in X$ is such that $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is not Cauchy, we can let $S = \{ y \in X : x \sim y \}$.
Our next order of business is to construct some such $x$. The identity $(\dagger)$ also helps us construct an $x \in X$ for which $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is not Cauchy. Construct $x = (x_j)_{j \in \mathbb{Z}} \in X$ as follows. For brevity, let $c_n = \sum_{p = 1}^n 2^p$. Set
$$x_j = \begin{cases}
0 & j < 0 \\
1 & j = 0 \\
0 & 0 < j \leq 2 \\
1 & 2 < j \leq 6 \\
0 & 6 < j \leq 14 \\
1 & 14 < j \leq 30 \\
\vdots \\
0 & c_{2n} < j \leq c_{2n + 1} \\
1 & c_{2n + 1} < j \leq c_{2n + 2} \\
0 & c_{2n + 2} < j \leq c_{2n + 3} \\
\vdots
\end{cases}$$
In plain language, this sequence begins with $0$ for $j < 0$, a $1$ at $j = 0$, then $2^1$ terms of $0$, then $2^2$ terms of $1$, then $2^3$ terms of $0$, then $2^4$ terms of $1$, and so on. We claim that $\liminf_{k \to \infty} \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \neq \limsup \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right)$. Sampling along the subsequence $k_n = c_{2n} + 1$, we get
\begin{align*}
\alpha_{C_{c_{2n} + 1}(x)} \left( \frac{1}{c_{2n} + 1} \sum_{i = 0}^{c_{2n}} T^i f \right) & = \frac{2 + 8 + 32 + \cdots + 2^{2n - 1}}{1 + 2 + 4 + 8 + \cdots + 2^{2n}} & = \frac{\frac{1}{2}\sum_{p = 1}^n 4^p}{1 + \sum_{q = 1}^{2n} 2^q} \\
& = \frac{1}{3} \cdot \frac{4^n - 1}{4^n - \frac{1}{2}} & \stackrel{n \to \infty}{\to} \frac{1}{3} ,
\end{align*}
where the limit follows upon dividing the numerator and the denominator by $4^n$. On the other hand, looking at the subsequence $k_n = c_{2n - 1} + 1$, we get
\begin{align*}
\alpha_{C_{c_{2n - 1} + 1}(x)} \left( \frac{1}{c_{2n - 1} + 1} \sum_{i = 0}^{c_{2n - 1}} T^i f \right) & = \frac{2 + 8 + 32 + \cdots + 2^{2n - 1}}{1 + 2 + 4 + 8 + \cdots + 2^{2n - 1}} & = \frac{\frac{1}{2}\sum_{p = 1}^n 4^p}{1 + \sum_{q = 1}^{2n - 1} 2^q} \\
& = \frac{1}{3} \cdot \frac{4^{n} - 1}{\frac{1}{2} 4^{n} - \frac{1}{2}} & \stackrel{n \to \infty}{\to} \frac{2}{3} .
\end{align*}
Thus we can say
$$
\liminf_{k \to \infty} \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \leq \frac{1}{3} < \frac{2}{3} \leq \limsup_{k \to \infty} \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) .
$$
Therefore the sequence $\left( \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \right)_{k = 1}^\infty$ is divergent, and thus not Cauchy.
\end{proof}
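The oscillation exhibited in the proof can be checked numerically. The short Python sketch below (ours, purely illustrative) builds the one-sided part $x_0, x_1, \ldots$ of the sequence constructed above and evaluates the frequency of the symbol $0$ along the two subsequences $k = c_{2n} + 1$ and $k = c_{2n - 1} + 1$, which approach $1/3$ and $2/3$ respectively.
\begin{verbatim}
# Illustration only: reproduce the prefix x_0, x_1, ... of the sequence from the
# proof and compute the frequency of 0 along the two sampled subsequences.

def build_prefix(M):
    x = [1]                          # x_0 = 1
    block, bit = 1, 0                # then 2^1 zeros, 2^2 ones, 2^3 zeros, ...
    while len(x) < M:
        x.extend([bit] * (2 ** block))
        block += 1
        bit = 1 - bit
    return x[:M]

c = lambda m: 2 ** (m + 1) - 2       # c_m = 2 + 4 + ... + 2^m

for n in range(3, 9):
    k_even = c(2 * n) + 1            # k = c_{2n} + 1    (frequencies tend to 1/3)
    k_odd = c(2 * n - 1) + 1         # k = c_{2n-1} + 1  (frequencies tend to 2/3)
    x = build_prefix(k_even)
    freq_even = x[:k_even].count(0) / k_even
    freq_odd = x[:k_odd].count(0) / k_odd
    print(f"n={n}:  {freq_even:.4f} (at c_2n + 1),   {freq_odd:.4f} (at c_2n-1 + 1)")
\end{verbatim}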
\begin{Rmk}
Theorem \ref{Pathological Differentiation} is not encompassed by Theorem \ref{Pathological connected differentiation}, since a subshift is a priori totally disconnected.
\end{Rmk}
This calculation adequately sets up the following result.
\begin{Thm}\label{Normal shift points}
Let $(X, \mathcal{B}, \mu, T)$ be an ergodic subshift, with $X \subseteq \mathcal{D}^\mathbb{Z}$, and let $x \in X$. Then the following statements about $x \in X$ are equivalent.
\begin{enumerate}
\item For all $f \in C(X)$, the limit $$\lim_{k \to \infty} \alpha_{C_k(x)}\left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right)$$
exists and is equal to $\int f \mathrm{d} \mu$.
\item For all words $(a_0, a_1, \ldots, a_{\ell - 1}) \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, the limit $$\lim_{k \to \infty} \alpha_{C_k(x)}\left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right)$$
exists and is equal to $\mu([a_0, a_1, \ldots, a_{\ell - 1}])$.
\item For all words $(a_0, a_1, \ldots, a_{\ell - 1}) \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, the limit
$$\lim_{k \to \infty} \frac{\# \{ i \in [0, k - \ell] : x_i = a_0, x_{i + 1} = a_1, \ldots, x_{i + \ell - 1} = a_{\ell - 1} \}}{k}$$
exists and is equal to $\mu([a_0, a_1, \ldots, a_{\ell - 1}])$.
\item For all words $(a_0, a_1, \ldots, a_{\ell - 1}) \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, the limit
$$\lim_{k \to \infty} \frac{\# \{ i \in [0, k - 1] : x_i = a_0, x_{i + 1} = a_1, \ldots, x_{i + \ell - 1} = a_{\ell - 1} \}}{k}$$
exists and is equal to $\mu([a_0, a_1, \ldots, a_{\ell - 1}])$.
\end{enumerate}
\end{Thm}
\begin{proof}
Lemma \ref{Cylinders dense} tells us that (1)$\iff$(2). That (3)$\iff$(4) comes from the observation that the absolute difference between the two sequences is at most $\frac{\ell - 1}{k}$. To establish (2)$\iff$(3), we compute $\alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right)$ for $i \in [0, k - \ell]$ as follows.
\begin{align*}
\alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) & = \frac{1}{\mu([x_0, x_1, \ldots, x_{k - 1}])} \int_{[x_0, x_1, \ldots, x_{k - 1}]} \chi_{T^{-i} [a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu \\
& = \begin{cases}
1 & x_i = a_0, x_{i + 1} = a_1, \ldots, x_{i + \ell - 1} = a_{\ell - 1} , \\
0 & \textrm{otherwise}
\end{cases} .
\end{align*}
Therefore
\begin{align*}
& \frac{1}{k} \sum_{i = 0}^{k - \ell} \alpha_{C_k(x)} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \\
& = \frac{\# \{ i \in [0, k - \ell] : x_i = a_0, x_{i + 1} = a_1, \ldots, x_{i + \ell - 1} = a_{\ell - 1} \}}{k} .
\end{align*}
Finally, we observe that
\begin{align*}
\left| \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) - \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - \ell} T^i f \right) \right| & = \left| \alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = k - \ell + 1}^{k - 1} T^i f \right) \right| \\
& \leq \frac{\ell - 1}{k} \| f \|_\infty .
\end{align*}
Therefore the end behaviors of $\left( \alpha_{C_k(x)}\left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \right)_{k = 1}^\infty$ and \linebreak $\left( \alpha_{C_k(x)}\left( \frac{1}{k} \sum_{i = 0}^{k - \ell} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \right)_{k = 1}^\infty$ are identical, i.e. one converges iff the other converges, and if they converge, then they converge to the same value. But then, as has already been established, we know that
\begin{align*}
\left( \alpha_{C_k(x)}\left( \frac{1}{k} \sum_{i = 0}^{k - \ell} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \right)_{k = 1}^\infty \\
= \left( \frac{\# \{ i \in [0, k - \ell] : x_i = a_0, x_{i + 1} = a_1, \ldots, x_{i + \ell - 1} = a_{\ell - 1} \}}{k} \right)_{k = 1}^\infty ,
\end{align*}
demonstrating that (2)$\iff$(3).
\end{proof}
Theorem \ref{Normal shift points} gives us an alternate proof of Theorem \ref{Metric result for subshifts}. Applying the Birkhoff Ergodic Theorem to the functions $\chi_{[\mathbf{a}]}$ tells us that almost all $x \in X$ satisfy $\frac{1}{k} \sum_{i = 0}^{k - 1} T^i \chi_{[\mathbf{a}]}(x) \stackrel{k \to \infty}{\to } \mu([\mathbf{a}])$ for all strings $\mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$. But this is exactly condition (4) from Theorem \ref{Normal shift points}. Moreover, this result gives us a more concrete characterization of the "set of full measure" that Theorem \ref{Metric result for subshifts} alludes to.
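As a concrete illustration of condition (4), the following Python sketch (ours; the alphabet and probability vector are arbitrary choices) draws a long i.i.d.\ $\{0,1\}$-valued sequence and compares empirical word frequencies with the product measure of the corresponding cylinders.
\begin{verbatim}
import numpy as np
from itertools import product

# Illustration only: Bernoulli measure on {0,1}^Z with p(0) = 0.3, p(1) = 0.7.
rng = np.random.default_rng(2)
prob = {0: 0.3, 1: 0.7}
k = 200_000
x = rng.choice([0, 1], size=k, p=[prob[0], prob[1]])

for word in product([0, 1], repeat=3):
    l = len(word)
    hits = sum(1 for i in range(k - l + 1) if tuple(x[i:i + l]) == word)
    mu_word = float(np.prod([prob[a] for a in word]))
    print(f"word {word}: empirical frequency {hits / k:.4f}   mu([a]) = {mu_word:.4f}")
\end{verbatim}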
Before concluding, we demonstrate that Proposition \ref{Random Lipschitz Cylinders} does not hinge on the cylinder structure of $X$.
\begin{Thm}
Let $(X, \rho)$ be a compact metric space, and let $T : X \to X$ be an $L$-Lipschitz homeomorphism on $X$ with respect to $\rho$, where $L > 1$. Suppose $\mu$ is a regular Borel probability measure on $X$ such that $T$ is ergodic with respect to $\mu$. Let $(r_k)_{k = 1}^\infty$ be a sequence of positive numbers $r_k > 0$ for which there exists a constant $\gamma \in \mathbb{R}$ with $r_k \leq \gamma \cdot L^{-k}$ for all $k \in \mathbb{N}$. Let $B_k(x) = \{ y \in X : \rho(x, y) < r_k \}$. Then the set of $x \in X$ such that
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Thm}
\begin{proof}
Since $C(X)$ is separable, it will suffice to show that given some fixed $f \in C(X)$, we have
$$\frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \stackrel{k \to \infty}{\to } \int f \mathrm{d} \mu$$
for almost all $x \in X$. Our method of proof will closely resemble our proof of Proposition \ref{Random Lipschitz Cylinders}.
Our goal is to show that for every $\epsilon > 0$ there exists some $K \in \mathbb{N}$ such that if $k \geq K$, we have
\begin{align*}
\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \right| \\
\leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| + \left| \frac{1}{k} \sum_{i = 0}^{k - 1} \left( \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right) \right| \\
\leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \\
\leq \epsilon .
\end{align*}
We will accomplish this by bounding the terms
$$\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| , \; \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| $$
by $\epsilon$.
We will start with bounding the latter term. We claim that if $x \in X$ is such that $\mu(B_k(x)) > 0$ for all $k \in \mathbb{N}$ (which holds for every $x \in \operatorname{supp}(\mu)$), then for every $\epsilon > 0$, there exists $K_1 \in \mathbb{N}$ such that
$$k \geq K_1 \Rightarrow \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right) (x) - \alpha_{B_k(x)} \left( T^i f \right) \right| < \frac{\epsilon}{2} .$$
To prove this, choose $\delta > 0$ such that $\rho(y, z) < \delta \Rightarrow |f(y) - f(z)| < \frac{\epsilon}{4}$. Let $\kappa \in \mathbb{N}$ be such that $\gamma \cdot L^{-\kappa} < \delta$. Then for $k > \kappa$,
\begin{align*}
\frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| & = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] \\
& + \frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] .
\end{align*}
We will estimate these two terms separately, bounding each by $\frac{\epsilon}{4}$. Beginning with the former, we observe that if $y \in B_k(x)$, then
$$\rho \left( T^{i} x, T^i y \right) \leq L^{i} \rho(x, y) < L^i \cdot \gamma \cdot L^{-k} = \gamma \cdot L^{i - k} .$$
In particular, this means that if $i - k \leq - \kappa$, then $\left| \left( T^i f \right)(x) - f(z) \right| < \frac{\epsilon}{4}$ for all $z = T^i y \in T^i B_{k}(x)$, so
\begin{align*}
& \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] \\
& = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \frac{1}{\mu(B_k(x))} \int_{B_k(x)} \left( \left( T^i f \right)(x) - T^i f \right) \mathrm{d} \mu \right| \right] \\
& \leq \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \frac{1}{\mu\left(T^i B_k(x)\right)} \int_{T^i B_k(x)} \left| \left( T^i f \right) (x) - f \right| \mathrm{d} \mu \right] \\
& \leq \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \frac{1}{\mu\left(T^i B_k(x)\right)} \int_{T^i B_k(x)} \frac{\epsilon}{4} \mathrm{d} \mu \right] \\
& = \frac{k - \kappa + 1}{k} \frac{\epsilon}{4} \\
& \leq \frac{\epsilon}{4} .
\end{align*}
On the other hand, we can estimate
$$\frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] \leq \frac{2 \kappa}{k} \| f \|_\infty .$$
Choose $K_1 > \kappa$ such that $\frac{2 \kappa \left\| f \right\|_\infty}{K_1} < \frac{\epsilon}{4}$. Then if $k \geq K_1$, we have
\begin{align*}
\frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| & = \frac{1}{k} \left[ \sum_{i = 0}^{k - \kappa} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] \\
& + \frac{1}{k} \left[ \sum_{i = k - \kappa + 1}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \right] \\
& \leq \frac{\epsilon}{4} + \frac{\epsilon}{4} \\
& = \frac{\epsilon}{2} .
\end{align*}
Now suppose further that $x \in X$ is such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \stackrel{k \to \infty}{\to} \int f \mathrm{d} \mu$. Choose $K_2 \in \mathbb{N}$ such that $k \geq K_2 \Rightarrow \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| < \frac{\epsilon}{2}$. Then for $k \geq \max \{ K_1, K_2 \}$, we have
\begin{align*}
\left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \alpha_{B_k(x)} \left( T^i f \right) \right| & \leq \left| \int f \mathrm{d} \mu - \frac{1}{k} \sum_{i = 0}^{k - 1} \left( T^i f \right)(x) \right| \\
& + \frac{1}{k} \sum_{i = 0}^{k - 1} \left| \left( T^i f \right)(x) - \alpha_{B_k(x)} \left( T^i f \right) \right| \\
& \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} \\
& = \epsilon .
\end{align*}
Since the set of $x \in X$ for which this argument can be carried out is of full measure, the theorem follows.
\end{proof}
Before looking at a more general family of differentiation problems, we want to take a moment to observe that if $(X, \mathcal{B}, T, \mu)$ is an ergodic system whose (measure-theoretic) entropy $h(T, \mu)$ is positive, then we automatically have that $\mu(C_k(x)) \stackrel{k \to \infty}{\to} 0$: by the Shannon-McMillan-Breiman Theorem \cite[Theorem 6.2.1]{D&K}, it follows that for $\mu$-almost every $x \in X$ there exists $K = K_x \in \mathbb{N}$ such that
$$k \geq K \Rightarrow - \frac{1}{k} \log \mu (C_k(x)) \geq \frac{h(T, \mu)}{2} .$$
Then if $k \geq K$, we have
\begin{align*}
- \frac{1}{k} \log \mu(C_k(x)) & \geq \frac{h(T, \mu)}{2} \\
\Rightarrow \log \mu(C_k(x)) & \leq - \frac{h(T, \mu)}{2} k & < 0 \\
\Rightarrow \mu(C_k(x)) & \leq \left( e^{- \frac{h(T, \mu)}{2}} \right)^k & \stackrel{k \to \infty}{\to} 0 .
\end{align*}
On the other hand, whether $\mu(B_k(x)) \stackrel{k \to \infty}{\to} 0$ depends on whether $(X, \mathcal{B}, \mu)$ contains atoms. If $\mu(\{x\}) = 0$ for all $x \in X$, then $\mu(B_k(x)) \stackrel{k \to \infty}{\to} 0$.
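A small numerical illustration (ours, for a Bernoulli shift with an arbitrarily chosen probability vector) of the exponential decay just derived: along a typical point, $-\frac{1}{k}\log\mu(C_k(x))$ stabilizes near the entropy.
\begin{verbatim}
import numpy as np

# Illustration only: Bernoulli shift on three symbols with probability vector p;
# for this measure mu(C_k(x)) = p(x_0) p(x_1) ... p(x_{k-1}).
rng = np.random.default_rng(3)
p = np.array([0.2, 0.5, 0.3])
h = -np.sum(p * np.log(p))                 # entropy of the Bernoulli measure

x = rng.choice(len(p), size=100_000, p=p)  # coordinates of a typical point
log_mu_Ck = np.cumsum(np.log(p[x]))        # log mu(C_k(x)) for k = 1, 2, ...
for k in [10, 100, 1000, 10_000, 100_000]:
    print(f"k={k:6d}:  -(1/k) log mu(C_k(x)) = {-log_mu_Ck[k - 1] / k:.4f}"
          f"   (h = {h:.4f})")
\end{verbatim}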
\section{Random cylinders in a Bernoulli shift - a probabilistic approach}\label{Bernoulli shifts and probability}
In this section, we consider problems similar to those addressed in Sections \ref{Non-expansive} and \ref{Lipschitz}, where we take some $(X, \mathcal{B}, \mu, T)$ with specified properties (in this case, we assume the system is Bernoulli), and seek to establish conditions under which for a randomly chosen sequence $(F_k)_{k = 1}^\infty$ of sets of positive measure, the sequence $\left( \frac{1}{\mu(F_k)} \int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu \right)_{k = 1}^\infty$ converges almost surely to $\int f \mathrm{d} \mu$ for all $f \in C(X)$.
We provide now an alternate proof of a special case of Theorem \ref{Metric result for subshifts}. Though the result proved is lesser in scope, we include it for the reason that the proof provided here has a decidedly more probabilistic flavor than the proof provided of Theorem \ref{Metric result for subshifts} in Section \ref{Lipschitz}. This method of proof also proves slightly more versatile, as it allows us to consider randomly chosen sequences of cylinders which are not necessarily nested.
In this section, $X = \mathcal{D}^\mathbb{Z}$ is a Bernoulli shift on a finite alphabet $\mathcal{D}$ with probability vector $\mathbf{p} = (p(d))_{d \in \mathcal{D}}$, and $\mu$ is the Borel probability measure on $X$ induced by $\mathbf{p}$. We begin by proving a lemma to which we assign a whimsical title.
\begin{Lem}[The Even Stronger Law of Large Numbers]
Let $(Y, \mathcal{A}, \nu)$ be a probability space, and let $(k_n)_{n = 1}^\infty$ be a sequence in $\mathbb{N}$ such that $\sum_{n = 1}^\infty k_n^{-2} < \infty$. Let $(\zeta_{i, n})_{0 \leq i \leq k_n - 1, n \in \mathbb{N}}$ be a family of $L^\infty$ real random variables satisfying the following conditions.
\begin{enumerate}
\item There exists $C \in [1, \infty)$ such that $\| \zeta_{i, n} \|_\infty \leq C$ for all $0 \leq i \leq k_n - 1, n \in \mathbb{N}$.
\item $\int \zeta_{i, n} \mathrm{d} \nu = m$ for all $0 \leq i \leq k_n - 1 , n \in \mathbb{N}$, where $m$ is a constant.
\item For each $n \in \mathbb{N}$, the subfamily $\{ \zeta_{i, n} \}_{i = 0}^{k_n - 1}$ is mutually independent.
\end{enumerate}
Then
$$\frac{1}{k_n} \sum_{i = 0}^{k_n - 1} \zeta_{i, n} \stackrel{n \to \infty}{\to} m$$
almost surely.
\end{Lem}
\begin{proof}
For the sake of brevity, abbreviate
$$S_n = \sum_{i = 0}^{k_n - 1} \zeta_{i, n},$$
and assume without loss of generality that $m = 0$ (else, we can just consider $\hat{\zeta}_{i, n} = \zeta_{i, n} - m$). Given $\epsilon > 0$, set
$$E_{n, \epsilon} = \{ y \in Y : |S_n(y)|/k_n \geq \epsilon \} = \{ y \in Y : |S_n(y)| \geq k_n \epsilon \} .$$
Then Chebyshev's inequality tells us that
$$\nu(E_{n, \epsilon}) \leq \frac{1}{(k_n \epsilon)^4} \int S_n^4 \mathrm{d} \nu .$$ Then
$$\int S_n^4 \mathrm{d} \nu = \sum_{r, s, t, u = 0}^{k_n - 1} \int \zeta_{r, n} \zeta_{s, n} \zeta_{t, n} \zeta_{u, n} \mathrm{d} \nu .$$
This sum consists of terms of the forms
\begin{enumerate}
\item $\int \zeta_{r, n}^4 \mathrm{d} \nu$
\item $\int \zeta_{r, n}^2 \zeta_{s, n}^2 \mathrm{d} \nu$
\item $\int \zeta_{r, n}^3 \zeta_{s, n} \mathrm{d} \nu$
\item $\int \zeta_{r, n}^2 \zeta_{s, n} \zeta_{t, n} \mathrm{d} \nu$
\item $\int \zeta_{r, n} \zeta_{s, n} \zeta_{t, n} \zeta_{u, n} \mathrm{d} \nu$
\end{enumerate}
where $r , s , t , u$ are distinct. We assert that the terms of the third, fourth, and fifth forms all vanish by virtue of independence. This leaves $k_n$ terms of the first form and $3k_n (k_n - 1)$ terms of the second form. Thus there are $k_n + 3 k_n (k_n - 1)$ terms of absolute value $\leq C^4$. Thus
\begin{align*}
\int S_n^4 \mathrm{d} \nu & \leq \left( 3k_n^2 - 2 k_n \right) C^4 \\
& \leq 3 k_n^2 C^4 \\
\Rightarrow \nu(E_{n, \epsilon}) & \leq \frac{3 k_n^2 C^4}{(k_n \epsilon)^4} \\
\Rightarrow \sum_{n = 1}^\infty \nu(E_{n, \epsilon}) & \leq \frac{3 C^4}{\epsilon^4}\sum_{n = 1}^\infty k_n^{-2} \\
& < \infty .
\end{align*}
By the Borel--Cantelli Lemma, it follows that $\nu \left( \bigcap_{N = 1}^\infty \bigcup_{n = N}^\infty E_{n, \epsilon} \right) = 0$. But $$\bigcap_{N = 1}^\infty \bigcup_{n = N}^\infty E_{n, \epsilon} = \left\{ y \in Y : \limsup_{n \to \infty} \left| \frac{S_n(y)}{k_n} \right| \geq \epsilon \right\} ,$$
so we can conclude that
$$\nu \left( \left\{ y \in Y : \limsup_{n \to \infty} \left| \frac{S_n(y)}{k_n} \right| > 0 \right\} \right) = \nu \left( \bigcup_{K = 1}^\infty \left( \bigcap_{N = 1}^\infty \bigcup_{n = N}^\infty E_{n, \frac{1}{K}} \right) \right) = 0 .$$
Thus $\frac{S_n}{k_n} \to m$ almost surely.
\end{proof}
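The lemma is easy to probe by simulation. The Python snippet below (ours; the distribution, the constant $m$, and the row lengths $k_n = n$ are arbitrary choices satisfying the hypotheses, since $\sum_n n^{-2} < \infty$) generates independent bounded rows with common mean $m$ and reports how far the worst row average beyond a given index strays from $m$.
\begin{verbatim}
import numpy as np

# Illustration only: rows of k_n = n independent variables, uniform on [m-1, m+1],
# so they are bounded by C = |m| + 1 and have mean m.
rng = np.random.default_rng(4)
m = 0.25
row_errors = []
for n in range(1, 2001):
    row = m + rng.uniform(-1.0, 1.0, size=n)   # the n-th row zeta_{0,n}, ..., zeta_{n-1,n}
    row_errors.append(abs(row.mean() - m))     # |S_n / k_n - m|
for N in [10, 100, 1000, 2000]:
    print(f"max of |S_n/k_n - m| over sampled rows with n >= {N}: "
          f"{max(row_errors[N - 1:]):.4f}")
\end{verbatim}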
Now we apply this to estimating $$\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu .$$ Fix a word $\mathbf{a} = (a_0, a_1, \ldots, a_{\ell - 1}) \in \mathcal{D}^\ell$. We are going to consider a sequence of families of discrete random variables on $X$ given by
\begin{align*}
\xi_{i, k}^{\mathbf{a}} (x) & = \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \\
& = \frac{\mu \left( C_k(x) \cap T^{-i} [\mathbf{a}] \right)}{\mu(C_k(x))} . & (0 \leq i \leq k - 1)
\end{align*}
Each random variable is bounded in $L^\infty(X, \mu)$ by $1$. We claim that they also have a shared mean $\int \xi_{i, k}^{\mathbf{a}} \mathrm{d} \mu = \mu([\mathbf{a}])$.
\begin{align*}
& \int \xi_{i, k}^{\mathbf{a}} \mathrm{d} \mu \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \left( \prod_{h = 0}^{k - 1} p(d_h) \right) \alpha_{[d_0, d_1, \ldots, d_{k - 1}]} \left( T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \right) \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \left( \prod_{h = 0}^{k - 1} p(d_h) \right) \frac{1}{\prod_{h = 0}^{k - 1} p(d_h)} \int_{[d_0, d_1, \ldots, d_{k - 1}]} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \int_{[d_0, d_1, \ldots , d_{k - 1}]} T^i \chi_{[a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \int_{[d_0, d_1, \ldots , d_{k - 1}]} \chi_{T^{-i} [a_0, a_1, \ldots, a_{\ell - 1}]} \mathrm{d} \mu \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \mu \left( [d_0, d_1, \ldots, d_{k - 1}] \cap T^{-i} [a_0, a_1, \ldots, a_{\ell - 1}] \right) \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \mu \left( [d_0, d_1, \ldots, d_{k - 1}] \cap \bigcup_{c_0, c_1, \ldots, c_{i - 1}} [c_0, c_1, \ldots, c_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \right) \\
& = \sum_{\vec{d} \in \mathcal{D}^k} \mu \left( [d_0, d_1, \ldots, d_{k - 1}] \cap [d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \right)
\end{align*}
To compute this value, we look at two cases: where $i + \ell \leq k$, and where $i + \ell \geq k$.
If $i + \ell \leq k$, then
\begin{align*}
[d_0, d_1, \ldots, d_{k - 1}] \cap [d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \\
= \begin{cases}
[d_0, d_1, \ldots, d_{k - 1}] & d_i = a_0, d_{i + 1} = a_1, \ldots, d_{i + \ell - 1} = a_{\ell - 1} \\
\emptyset & \textrm{otherwise}
\end{cases}
\end{align*}
This means that $d_0, d_1, \ldots, d_{i - 1}$, as well as $d_{i + \ell}, \ldots, d_{k - 1}$ are "free". Thus
\begin{align*}
\sum_{ \vec{d} \in \mathcal{D}^k} \mu \left( [d_0, d_1, \ldots, d_{k - 1}] \cap [d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \right) \\
= \sum_{ (d_0, \ldots, d_{i - 1}, d_{i + \ell}, \ldots, d_{k - 1}) \in \mathcal{D}^{k - \ell}} \mu([d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}, d_{i + \ell}, \ldots, d_{k - 1}]) \\
= \sum_{ (d_0, \ldots, d_{i - 1}, d_{i + \ell}, \ldots, d_{k - 1}) \in \mathcal{D}^{k - \ell}} \left( p(d_0) p(d_1) \cdots p(d_{i - 1}) \right) \left( p(a_0) p(a_1) \cdots p(a_{\ell - 1}) \right) \left( p(d_{i + \ell}) \cdots p(d_{k - 1}) \right) \\
= \mu([a_0, a_1, \ldots, a_{\ell - 1}]) .
\end{align*}
On the other hand, if $i + \ell \geq k$, then
\begin{align*}
[d_0, d_1, \ldots, d_{k - 1}] \cap [d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \\
= \begin{cases}
[d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] & d_i = a_0, \ldots, d_{k - 1} = a_{k - i - 1} \\
\emptyset & \textrm{otherwise}
\end{cases}
\end{align*}
leaving $d_0, d_1, \ldots, d_{i - 1}$ "free". Thus
\begin{align*}
\sum_{\vec{d} \in \mathcal{D}^k} \mu \left( [d_0, d_1, \ldots, d_{k - 1}] \cap [d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}] \right) \\
= \sum_{(d_0, \ldots, d_{i - 1}) \in \mathcal{D}^i} \mu([d_0, d_1, \ldots, d_{i - 1}, a_0, a_1, \ldots, a_{\ell - 1}]) \\
= \sum_{(d_0, \ldots, d_{i - 1}) \in \mathcal{D}^i} p(d_0) p(d_1) \cdots p(d_{i - 1}) p(a_0) p(a_1) \cdots p(a_{\ell - 1}) \\
= \mu([a_0, a_1, \ldots, a_{\ell - 1}]) .
\end{align*}
Thus in either case, we have $\int \xi_{i, k}^{\mathbf{a}} \mathrm{d} \mu = \mu([\mathbf{a}])$.
Now, for fixed $k$, the family $\left\{\xi_{i, k}^{\mathbf{a}} \right\}_{i = 0}^{k - 1}$ is not necessarily independent, but we can break it up into arithmetic subsequences which are. Consider the families $\left\{ \xi_{m \ell + j, k}^\mathbf{a} \right\}_{m = 0}^{\lfloor k / \ell \rfloor - 1}$ for $j \in \{0, 1, \ldots, \ell - 1\}$. Then these subfamilies are independent, so the Even Stronger Law Of Large Numbers tells us that $\frac{1}{\lfloor k / \ell \rfloor} \sum_{m = 0}^{\lfloor k / \ell \rfloor - 1} \xi_{m \ell + j, k}^\mathbf{a} \to \mu([\mathbf{a}])$ almost surely as $k \to \infty$. Now we calculate
\begin{align*}
& \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \xi_{i, k}^\mathbf{a}(x) \\
& = \frac{\ell \lfloor k / \ell \rfloor}{k} \left[ \frac{1}{\ell} \sum_{j = 0}^{\ell - 1} \frac{1}{\lfloor k / \ell \rfloor} \sum_{m = 0}^{\lfloor k / \ell \rfloor - 1} \xi_{m \ell + j, k}^\mathbf{a}(x) \right] + \frac{1}{k} \sum_{i = \ell \lfloor k / \ell \rfloor}^{k - 1} \xi_{i, k}^{\mathbf{a}} (x) \\
& \stackrel{\textrm{almost surely}}{\to} (1) \left[ \frac{1}{\ell} \sum_{j = 0}^{\ell - 1} \mu([\mathbf{a}]) \right] + 0 \\
& = \mu([\mathbf{a}]) \\
& = \int \chi_{[\mathbf{a}]} \mathrm{d} \mu .
\end{align*}
Taking a countable intersection over $\mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, we can conclude that the set $B$ of all $x \in X$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \to \int \chi_{[\mathbf{a}]} \mathrm{d} \mu$ for all words $\mathbf{a}$ is of full measure. We can further conclude that if $x \in B$, we have $\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x))} \int_{C_k(x)} T^i T^n \chi_{[\mathbf{a}]} \mathrm{d} \mu \to \int T^n \chi_{[\mathbf{a}]} \mathrm{d} \mu$ for all words $\mathbf{a}$ and $n \in \mathbb{Z}$, arguing as in the proof of Theorem \ref{Metric result for subshifts}. Since $\operatorname{span} \left\{ T^n \chi_{[\mathbf{a}]} : \mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell, n \in \mathbb{Z} \right\}$ is dense in $C(X)$, we can conclude the following special case of Theorem \ref{Metric result for subshifts}.
\begin{Prop}\label{Bernoulli Prop}
Let $X = \mathcal{D}^\mathbb{Z}$ be a Bernoulli shift, and let $\mu$ be the associated measure. Endow $X$ with the generator $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$, where $E_d = \{ x \in X : x_0 = d \}$. Then the set of all $x \in X$ such that
$$\alpha_{C_k(x)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full measure.
\end{Prop}
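The random variables $\xi^{\mathbf{a}}_{i, k}$ are simple enough to simulate directly. The Python sketch below (ours; the probability vector and the word $\mathbf{a}$ are arbitrary) evaluates $\frac{1}{k}\sum_{i=0}^{k-1}\xi^{\mathbf{a}}_{i,k}(x)$ along a sampled point of a Bernoulli shift and compares it with $\mu([\mathbf{a}])$, in line with Proposition \ref{Bernoulli Prop}.
\begin{verbatim}
import numpy as np

# Illustration only: Bernoulli shift on {0,1} with p(0) = 0.4, word a = (0, 1).
# For i + l <= k, xi^a_{i,k}(x) is the indicator that a occurs at position i of x;
# for i + l > k only the first k - i letters of a are constrained by C_k(x), and
# the remaining letters contribute their probabilities.
rng = np.random.default_rng(5)
prob = {0: 0.4, 1: 0.6}
a = (0, 1)
l = len(a)
mu_a = prob[a[0]] * prob[a[1]]

def xi(x, i, k):
    if i + l <= k:
        return 1.0 if tuple(x[i:i + l]) == a else 0.0
    visible = k - i
    if tuple(x[i:k]) != a[:visible]:
        return 0.0
    return float(np.prod([prob[d] for d in a[visible:]]))

for k in [100, 1000, 10000, 100000]:
    x = rng.choice([0, 1], size=k, p=[prob[0], prob[1]])
    avg = np.mean([xi(x, i, k) for i in range(k)])
    print(f"k={k:6d}:  (1/k) sum_i xi(x) = {avg:.4f}   (mu([a]) = {mu_a:.4f})")
\end{verbatim}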
However, this technique lends itself to another result that is not encompassed by Theorem \ref{Metric result for subshifts}. We have looked at spatial-temporal differentiation problems where we are differentiating with respect to the cylinders $C_k(x)$ of a randomly chosen $x \in X$. The next result considers instead the situation where we randomly choose a \emph{sequence} $(x_k)_{k = 1}^\infty$ in $X$ and differentiating with respect to the sequence $(C_k(x_k))_{k = 1}^\infty$.
\begin{Thm}
Let $X = \mathcal{D}^\mathbb{Z}$ be a Bernoulli shift, and let $\mu$ be the associated measure. Endow $X$ with the generator $\mathcal{E} = \{ E_d \}_{d \in \mathcal{D}}$, where $E_d = \{ x \in X : x_0 = d \}$. Consider the countably infinite product probability space $\left(X^\infty, \mathcal{B}^\infty, \mu^\infty \right) = \prod_{k \in \mathbb{N}} (X, \mathcal{B}, \mu)$. Then the set of all $(x_k)_{k = 1}^\infty \in X^\infty$ such that
$$\alpha_{C_k(x_k)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C(X)$ is of full $\mu^\infty$-measure.
\end{Thm}
\begin{proof}
Our method is very similar to the method used for Proposition \ref{Bernoulli Prop}. Let $\mathbf{x} = (x_k)_{k = 1}^\infty \in X^\infty$ denote a sequence in $X$.
Fix a word $\mathbf{a} = (a_0, a_1, \ldots, a_{\ell - 1}) \in \mathcal{D}^\ell$. We are going to consider a sequence of families of discrete random variables in $X$ given by
\begin{align*}
\zeta_{i, k}^{\mathbf{a}} (\mathbf{x}) & = \frac{1}{\mu(C_k(x_k))} \int_{C_k(x_k)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \\
& = \frac{\mu \left( C_k(x_k) \cap [\mathbf{a}] \right)}{\mu(C_k(x_k))} . & (0 \leq i \leq k - 1)
\end{align*}
Each random variable $\zeta_{i, k}^{\mathbf{a}}$ is bounded in $L^\infty(X, \mu)$ by $1$. By a calculation identical to the one used to prove Proposition \ref{Bernoulli Prop}, we can conclude that $\int \zeta_{i, k}^{\mathbf{a}} \mathrm{d} \mu = \mu([\mathbf{a}])$.
As before, for fixed $k$, the family $\left\{\zeta_{i, k}^{\mathbf{a}} \right\}_{i = 0}^{k - 1}$ is not necessarily independent, but we can break it up into arithmetic subsequences which are. Consider the families $\left\{ \zeta_{m \ell + j, k}^\mathbf{a} \right\}_{m = 0}^{\lfloor k / \ell \rfloor - 1}$ for $j \in \{0, 1, \ldots, \ell - 1\}$. Then these families are independent, and so the Even Stronger Law Of Large Numbers tells us that $\frac{1}{\lfloor k / \ell \rfloor} \sum_{m = 0}^{\lfloor k / \ell \rfloor - 1} \zeta_{m \ell + j, k}^\mathbf{a} \to \mu([\mathbf{a}])$ almost surely. Now we calculate
\begin{align*}
& \frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x_k))} \int_{C_k(x_k)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \\
& = \frac{1}{k} \sum_{i = 0}^{k - 1} \zeta_{i, k}^\mathbf{a}(\mathbf{x}) \\
& = \frac{\ell \lfloor k / \ell \rfloor}{k} \left[ \frac{1}{\ell \lfloor k / \ell \rfloor} \sum_{i = 0}^{k - 1} \zeta_{i, k}^\mathbf{a}(\mathbf{x}) \right] \\
& = \frac{\ell \lfloor k / \ell \rfloor}{k} \left[ \frac{1}{\ell} \sum_{j = 0}^{\ell - 1} \frac{1}{\lfloor k / \ell \rfloor} \sum_{m = 0}^{\lfloor k / \ell \rfloor - 1} \zeta_{m \ell + j, k}^\mathbf{a}(\mathbf{x}) \right] + \frac{\sum_{i = \ell \lfloor k / \ell \rfloor}^{k - 1} \zeta_{i, k}^{\mathbf{a}} (\mathbf{x})}{k} \\
& \stackrel{\textrm{almost surely}}{\to} (1) \left[ \frac{1}{\ell} \sum_{j = 0}^{\ell - 1} \mu([\mathbf{a}]) \right] + 0 \\
& = \mu([\mathbf{a}]) \\
& = \int \chi_{[\mathbf{a}]} \mathrm{d} \mu .
\end{align*}
Again, taking a countable intersection over $\mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell$, we can conclude that the set $B$ of all $\mathbf{x} \in X^\infty$ such that $\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x_k))} \int_{C_k(x_k)} T^i \chi_{[\mathbf{a}]} \mathrm{d} \mu \to \int \chi_{[\mathbf{a}]} \mathrm{d} \mu$ for all words $\mathbf{a}$ is of full $\mu^\infty$-measure. We can further conclude that if $\mathbf{x} \in B$, then $\frac{1}{k} \sum_{i = 0}^{k - 1} \frac{1}{\mu(C_k(x_k))} \int_{C_k(x_k)} T^i T^n \chi_{[\mathbf{a}]} \mathrm{d} \mu \to \int T^n \chi_{[\mathbf{a}]} \mathrm{d} \mu$ for all words $\mathbf{a}$ and $n \in \mathbb{Z}$. Since $\operatorname{span} \left\{ T^n \chi_{[\mathbf{a}]} : \mathbf{a} \in \bigcup_{\ell = 1}^\infty \mathcal{D}^\ell, n \in \mathbb{Z} \right\}$ is dense in $C(X)$, we can conclude that if $\mathbf{x} \in B$, then
$$\alpha_{C_k(x_k)} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \to \int f \mathrm{d} \mu$$
for all $f \in C(X)$.
\end{proof}
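As a purely numerical illustration of the statement (not needed for the proof), the following Python sketch estimates the spatial-temporal averages for a randomly drawn sequence $(x_k)$ in a fair-coin Bernoulli shift; here we assume the convention that $C_k(x_k)$ fixes the coordinates $0, \ldots, k-1$ of $x_k$, and all names in the sketch are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                 # P(x_i = 1) for the Bernoulli measure (assumption)
a = (1, 0, 1)           # the word defining the cylinder [a]; mu([a]) = 0.125
ell = len(a)

def ratio(xk, i):
    # mu( C_k(x_k) \cap T^{-i}[a] ) / mu( C_k(x_k) ), with C_k(x_k)
    # taken to fix coordinates 0 .. k-1 of x_k
    k = len(xk)
    val = 1.0
    for j in range(ell):
        c = i + j
        if c < k:                          # coordinate fixed by the cylinder
            if xk[c] != a[j]:
                return 0.0
        else:                              # free coordinate: contributes its probability
            val *= p if a[j] == 1 else 1 - p
    return val

for k in (10, 100, 1000, 10000):
    xk = rng.integers(0, 2, size=k)        # the k-th point of the random sequence
    print(k, np.mean([ratio(xk, i) for i in range(k)]))   # tends to mu([a])
\end{verbatim}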
\section{Introduction}
In recent years, many-body-perturbation theory, specifically in Hedin's $GW$
approximation\cite{Hedin65,Hedin69}, where $G$ is the one-electron Green's function and
$W$ the screened Coulomb interaction, has emerged as the method of choice to calculate
meaningful quasiparticle excitation energies as opposed to Kohn-Sham density functional
one-electron energies. In particular, this yields more accurate band structures and band
gaps in good agreement with experiment, typically within $\sim$0.1 eV depending
somewhat on the details of the implementation and the material.\cite{MvSQSGWprl}
For localized levels, such as point defects, on the other hand, excitations are usually
calculated by means of a $\Delta$SCF approach from differences between two total
energies. More precisely for point defects, it is now standard procedure to calculate
the energies of formation as function of Fermi level position (\textit{i.e.\ } the electron chemical
potential) in the gap and then to determine the transition energies, which are the
crossing points of these energies of formation from one charge state to the other. On
the other hand, it has been pointed out that the $GW$ quasiparticle energies for a
defect system can be directly related to ``vertical'' excitations, meaning excitations
which keep the structure unchanged. Apart from excitonic effects, their differences
represent the optical transitions for example for transferring an electron from a defect
level to the conduction band or from the valence band to a defect
level.\cite{Rinke09,Lany10GWVO} For example, transferring an electron from a defect
level to the conduction band minimum (CBM), changes the defect from one charge state
$q$ to another $q+1$, with an electron in a delocalized conduction band state. Thus the
difference in CBM and defect level quasiparticle energies calculated at the fixed
geometry of the $q$ state, is the ``vertical'' excitation energy for this process.
Subsequently, one may add relaxation energies of the defect to lowest energy geometry
within a given charge state. Thus, the thermodynamic transition state level of the
$q/q+1$ transition may be obtained from a total energy relaxation within a given charge
state calculated at the DFT level combined with the quasiparticle excitation energies
calculated at the $GW$ level. Apart from the excitonic effect, the ``vertical''
transition is often directly of interest as an approximation to the optical transition.
In practice, defect levels are usually calculated in supercells using periodic boundary
conditions, and the defect levels turn into defect bands. So, the above relation gives
a renewed incentive to take defect one-electron band structures seriously, rather than
dismissing them as irrelevant Kohn-Sham eigenvalues, provided they are calculated at the
$GW$ quasiparticle level.
Nonetheless, the $GW$ method has not yet found widespread applications in defect
studies. This is at least in part due to the large computational effort required for
$GW$ calculations. In particular, the latter is challenging for the large supercells
needed to represent defects adequately. Thus, there is a need for improving the
efficiency of the $GW$ approach, eventually at the cost of some simplification, to make
it applicable to larger systems.
On the other hand, in many defect calculations, the infamous gap underestimate of the
typical semi-local generalized gradient approximation (GGA) or local density
approximations (LDA), can lead to serious errors in defect calculations. Defect levels,
which should be in the gap may end up in the band continuum. This also affects the total
energies and therefore transition levels if one considers charge states in which that
defect level (now a resonance in the band) is given an extra charge because in the
calculation that charge is actually placed in a delocalized state at the bottom of the
band rather than in the defect level. Thus there seem to be some advantages to starting from a more accurate quasiparticle one-electron theory such as $GW$. Total energies within the QS$GW$ approach are in principle calculable at the random phase approximation
(RPA) level by means of an adiabatic connection approach\cite{Kotani07}, but are
difficult to converge because they require a sum over unoccupied bands.
It would seem desirable to at least build into the calculation of the defect, the
correct band structure of the host, overcoming thereby the band gap problem. Several
such approaches have been used in the past: for example, Christensen\cite{Christensen84}
advocated using a $\delta$-function corrected potential in which a $\delta$-function
placed at the cation site and some interstitial sites raises the $s$-like partial waves at those sites in energy; since these $s$-like orbitals form a predominant part of the conduction band, this artificially corrects the gap. LDA+U with several orbital
dependent $U$ parameters was also shown to provide an effective way to mimic the
corrected band structure.\cite{Paudel08VO,Boonchun2011,Skachkov16} Non-local external potentials
(NLEP) of the form $\Delta V_{\alpha,l}^\mathrm{NLEP}$ adjusted to reproduce the
conduction band structure were introduced by Lany and Zunger.\cite{LanyNLEP} Modified
pseudopotentials were also used for this purpose.\cite{Segev07}
The question thus arises to what extent we can decouple the host band structure effects
from the defect. Our goal with this project was to explore whether the $GW$ self-energy
of a defect system could be constructed from that of the host and the defect site itself
or its immediate neighborhood without having to carry out the expensive full $GW$
calculation for the large unit cell required to adequately represent a defect.
The most prevalent approach nowadays to incorporate the gap corrections beyond
semi-local functionals, is to use a hybrid functional such as the Heyd-Scuseria-Ernzerhof (HSE).\cite{HSE03,HSE06,PBEh} That approach significantly improves the gaps by including a fraction of the exact exchange operator, usually cut off beyond a certain range. By adjusting the fraction of exchange included, the gap can be tuned. While this
approach correctly incorporates the gap correction, it is less clear that it also
adjusts both band edges individually and/or obeys the generalized Koopmans theorem (GKT)
\cite{Lanypolaron,LanyGKTNO} for different defects simultaneously with the host band
structure. Furthermore it is also a relatively expensive approach with computational
effort well beyond that of a semi-local calculation. The approach we present here to
construct the $GW$ self-energy is also applicable to the non-local exact exchange and
hence, could also make that approach more efficient.
In this paper, we show that the QS$GW$ self-energy matrix can be expanded in
atom-centered orbitals, such as the linearized muffin-tin orbitals (LMTO). If these are
chosen sufficiently localized, then the self-energy matrix can be represented in real
space within a finite range. One might envision doing this also with maximally localized
Wannier functions. This then offers new opportunities to approximately construct the
self-energy of a system by partitioning the system in sub parts and constructing the
self-energy by a cut-and-paste approach. In particular, we apply this here to point
defects. We first construct the self-energy of the host in a supercell from that of the
primitive cell. In a second step, we replace the part of the self-energy matrix related
to the defect atom and its near neighborhood in terms of the self-energy of a smaller
supercell containing the defect for which a $GW$ calculation is more readily feasible. We
validate the accuracy of the approach with the well-studied
case of the As$_\mathrm{Ga}$ defect.
\section{Computational Approach}
\subsection{GW background}
In many-body-perturbation theory quasiparticle excitation energies are given by the
equation
\begin{eqnarray}
\left[ -\frac{1}{2}\nabla^2 + v_N({\bf r})+ v_H({\bf r})\right]\Psi_i({\bf r})
\nonumber \\
+\int d^3r' \Sigma_{xc}({\bf r},{\bf r}',E_i)\Psi_i({\bf r}')=E_i\Psi_i({\bf r}),
\label{eqqp}
\end{eqnarray}
where we use Hartree atomic units ($\hbar=e=m_e=1)$, $v_N$ is the nuclear potential and
$v_H$ the Hartree potential, $\Sigma_{xc}$ the exchange-correlation self-energy and $\Psi_i$ is the quasiparticle wave function. In
the $GW$ approximation, the latter is calculated from $\Sigma(12)=iG(12)W(1^+2)$ where
$1$ is a shorthand for $\{{\bf r}_1,\sigma_1,t_1\}$, \textit{i.e.\ } position, spin and time of the
particle 1, $1^+$ means $\lim_{\delta\rightarrow+0} t_1+\delta$, $G(12)$ is the
one-electron Green's function and $W(12)$ the screened Coulomb interaction. The exact
one particle Green's function is defined by
$G(12)=-i\langle N|T[\psi(1),\psi^\dagger(2)]|N\rangle$, with $T$ the time-ordering
operator, $\psi(1)$ the annihilation field operator and $|N\rangle$ the $N$-electron
ground state. The screened Coulomb interaction is given by
$W(12)=v(12)+\int d(34) v(3)P(34) W(42)$ and $P(12)=-iG(12)G(21)$ is the irreducible
polarization propagator. In practice, it is usually obtained starting from some
effective independent-particle Hamiltonian,
\begin{equation}
H^0= -\frac{1}{2}\nabla^2 + v_N({\bf r})+ v_H({\bf r})+v_{xc}({\bf r})
\end{equation}
with for example the local density approximation (LDA) exchange-correlation potential
$v_{xc}({\bf r})$. The Green's function $G^0$ is then constructed from the eigenvalues $\epsilon_i$ and
eigenfunctions $\psi_i$ of
\begin{equation}
H^0\psi_i({\bf r})=\epsilon_i\psi_i({\bf r})
\end{equation}
as follows
\begin{equation}
G^0({\bf r},{\bf r}',\omega)=\sum_i \frac{\psi_i({\bf r})\psi_i^*({\bf r}')}
{\omega-\epsilon_i+i\delta\,\mathrm{sgn}(\epsilon_i-\mu)},
\end{equation}
with $\mu$ the chemical potential and $\omega$ the energy variable of the
Green's function.
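As an elementary illustration of this construction (our own schematic snippet, not part of any production code), the diagonal of $G^0(\omega)$ in the eigenbasis can be assembled directly from a list of eigenvalues:
\begin{verbatim}
import numpy as np

def g0_diagonal(omega, eigs, mu, delta=1e-3):
    # 1 / ( omega - eps_i + i*delta*sgn(eps_i - mu) ) for each eigenvalue eps_i
    eigs = np.asarray(eigs, dtype=float)
    return 1.0 / (omega - eigs + 1j * delta * np.sign(eigs - mu))

# toy example: two occupied and two empty levels (invented numbers, eV)
print(g0_diagonal(0.3, [-2.0, -1.0, 1.5, 2.5], mu=0.0))
\end{verbatim}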
These LDA eigenstates form a convenient basis set in which the Green's function
$G^0_{ij}(\omega)=\delta_{ij}(\omega-\epsilon_i\pm i\delta)^{-1}$ is diagonal. For a
solid, the states are labeled by $i=\{n,{\bf k}\}$ with $n$ a band index and ${\bf k}$
the point in the Brillouin zone. The screened ($W^0$) and bare ($v$) Coulomb
interactions and the polarization $P^0$ on the other hand are expressed in an auxiliary
basis set of Bloch functions. In the LMTO implementation of the $GW$ method, these are
constructed from products of angular momentum partial waves\cite{Aryasetiawan94} inside
the spheres and plane-waves confined to the interstitial space, which is afterwards
reduced to avoid linear dependence and rotated so as to diagonalize the bare Coulomb
interaction.\cite{Kotani07,Kotanijpsj,Friedrich10} We label them
$E_{{\bf q}\mu}({\bf r})$ and the matrix of the Coulomb interactions in terms of them is
then written
\begin{equation}
v_{\mu\nu}({\bf q})=\int d^3r d^3r' E_{{\bf q}\mu}^*({\bf r})
\frac{1}{|{\bf r}-{\bf r}'|}E_{{\bf q}\nu}({\bf r}')
\end{equation}
and
\begin{equation}
W^0_{\mu\nu}({\bf q},\omega)=\left[ 1- v_{\mu\lambda}({\bf q})
P^0_{\lambda\kappa}({\bf q},\omega)\right]^{-1}v_{\kappa\nu}({\bf q})
\end{equation}
is obtained from a matrix inversion once $P^0_{\mu\nu}({\bf q},\omega)$ is known. The
latter is also calculated directly in terms of the $\psi_{n{\bf k}}$ and eigenvalues
$\epsilon_{n{\bf k}}$.\cite{Kotani07} In the above equation, summation convention over
repeated indices is understood. In most other works, plane waves are used instead as
basis functions. Let's write the rotation from the auxiliary functions $E^{\bf q}_\mu$
to the LDA eigenstates as
$\langle \psi_{{\bf k}n}|\psi_{{\bf k}-{\bf q}n'}E^{\bf q}_\mu\rangle$. Using this rotation matrix we can express $W$ as:
\begin{eqnarray}
W^0_{nn'm{\bf k}}({\bf q},\omega)=\sum_{\mu\nu}
\langle\psi_{{\bf k}n}|\psi_{{\bf k}-{\bf q},n'}E^{\bf q}_\mu\rangle
W^0_{\mu\nu}({\bf q},\omega)\nonumber \\
\langle E_\nu^{\bf q}\psi_{{\bf k}-{\bf q}n'}|\psi_{{\bf k}m}\rangle
\end{eqnarray}
The self-energy matrix is then given by
\begin{eqnarray}
\Sigma_{nm}({\bf k},\omega)=\frac{i}{2\pi}\int d\omega' \sum_{\bf q}\sum_{n'}
G^0_{nn'}({\bf k}-{\bf q},\omega-\omega') \nonumber \\
W^0_{nn'm;{\bf k}}({\bf q},\omega')e^{i\delta\omega'}
\end{eqnarray}
in which we recognize the schematic $\Sigma=iGW$ but which makes it clear that to
obtain the energy and {\bf k}-space dependent form, a triple convolution is involved over energy $\omega'$, ${\bf q}$ and band index $n'$.
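For orientation, the matrix inversion that produces $W^0$ from $v$ and $P^0$ at a given $({\bf q},\omega)$ is elementary linear algebra; the following Python fragment (our own toy example, with a random symmetric matrix standing in for the Coulomb matrix and a trivial stand-in for $P^0$) illustrates only the bookkeeping:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4                                        # size of the reduced product basis (toy value)
v = rng.standard_normal((n, n)); v = v @ v.T # stand-in for the bare Coulomb matrix
P0 = -0.1 * np.eye(n)                        # stand-in for P^0(q, omega)

W0 = np.linalg.solve(np.eye(n) - v @ P0, v)  # W^0 = [1 - v P^0]^{-1} v, as above
print(np.allclose(W0, W0.T))                 # W^0 inherits the symmetry of v here
\end{verbatim}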
The self-energy matrix $\Sigma_{nm}({\bf k},\omega)$ is energy dependent and in principle contains information not only on the quasiparticle energies but also on the satellites and incoherent parts of the one-electron excitation spectrum. In the QS$GW$ method, we now reduce this to a non-local but energy-independent and Hermitian matrix,
\begin{equation}
\tilde{\Sigma}_{nm}({\bf k})=\frac{1}{2}\mathrm{Re}
\left[\Sigma_{nm}({\bf k},\epsilon_{n{\bf k}})
+\Sigma_{nm}({\bf k},\epsilon_{m{\bf k}})\right]
\end{equation}
We now define
$\Delta\tilde{\Sigma}_{nm}({\bf k})=\tilde{\Sigma}_{nm}({\bf k})-v^{xc}_{nm}({\bf k})$
where we subtract the matrix element of the LDA exchange-correlation potential taken
between the Bloch eigenstates. We can then add this correction to the
exchange-correlation potential to the $H^0$ Hamiltonian and re-diagonalize the latter to
find new independent particle eigenvalues and eigenstates and repeat the cycle of
calculating $\tilde{\Sigma}$. At the convergence of this iteration, the eigenvalues of
the final $H^0$, which we will call $H^\mathrm{QSGW}$, are identical to the
quasiparticle energies. We may view this as finding a $G^0W^0$ perturbation theory
solution of Eq.(\ref{eqqp}) starting from $H^0$ but refining $H^0$ so the perturbation
becomes negligible. In this sense the quasiparticle energies are self-consistent and
independent of the starting approximation but they are still real and do not provide
information on the lifetime of the actual quasiparticle states.
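To make the structure of this cycle concrete, the following toy Python sketch (our own illustration with an invented model self-energy; it is not the Questaal implementation) iterates the quasiparticlization of an $\omega$-dependent $\Sigma$ for a $3\times 3$ model Hamiltonian:
\begin{verbatim}
import numpy as np

def quasiparticlize(sigma, eigs):
    # Sigma~_nm = (1/2) Re[ Sigma_nm(eps_n) + Sigma_nm(eps_m) ]
    n = len(eigs)
    s = np.array([[sigma(eigs[i])[i, j] + sigma(eigs[j])[i, j]
                   for j in range(n)] for i in range(n)])
    return 0.5 * s.real

h_bare = np.diag([-1.0, 0.5, 2.0])   # stands in for T + v_N + v_H (toy numbers, eV)

def sigma(omega):                    # invented omega-dependent model self-energy
    return 0.1 * np.eye(3) / (omega - 5.0)

eigs = np.diag(h_bare).copy()
for it in range(20):                 # QSGW-style self-consistency on the eigenvalues
    eigs, vecs = np.linalg.eigh(h_bare + quasiparticlize(sigma, eigs))
print(eigs)                          # converged quasiparticle energies of the toy model
\end{verbatim}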
\subsection{Bloch-function and real space representation}
The Bloch functions of the $H^\mathrm{QSGW}$ are now known as an expansion in the LMTO
basis set.
\begin{equation}
|\psi_{n{\bf k}}\rangle=\sum_{{\bf R}i} |\chi_{{\bf R}i}^{\bf k}\rangle
b_{n{\bf R}i}^{\bf k}
\end{equation}
so we can re-express the self-energy correction matrix as
$\Delta\tilde{\Sigma}_{{\bf R}i,{\bf R}'i'}({\bf k})$. Here ${\bf R}$ labels the sites in
the unit cell and $i$ the muffin-tin orbitals. The latter are labeled by angular
momentum quantum numbers $(l,m)$ as well as a third index, labeling the choice of
smoothed Hankel function decay and/or local orbital (confined to the muffin-tin sphere).
See Ref. \onlinecite{Kotani07} for a full description of the full potential (FP)-LMTO method used.
Finally, performing an inverse Bloch sum or Fourier transform, we obtain
$\Delta\tilde\Sigma_{{\bf R},i;{\bf R}'+{\bf T},i'}$ fully expressed in real space,
where ${\bf T}$ are the lattice translation vectors.
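The inverse Bloch sum itself is just a discrete Fourier transform over the ${\bf k}$-mesh. A schematic helper (our own notation; sigma_k is a list of self-energy matrices on a regular mesh, kpts the corresponding ${\bf k}$-vectors) might read:
\begin{verbatim}
import numpy as np

def to_real_space(sigma_k, kpts, tvecs):
    # Delta-Sigma(T) = (1/N_k) sum_k exp(-i k.T) Delta-Sigma(k), one matrix per T
    nk = len(kpts)
    return {tuple(T): sum(np.exp(-1j * np.dot(k, T)) * sigma_k[ik]
                          for ik, k in enumerate(kpts)) / nk
            for T in tvecs}

# toy check: a k-independent Sigma must give Sigma at T = 0 and zero elsewhere
kpts = [np.array([0.0, 0.0, 0.0]), np.array([np.pi, 0.0, 0.0])]
tvecs = [np.array([0, 0, 0]), np.array([1, 0, 0])]
print(to_real_space([np.array([[2.0]]), np.array([[2.0]])], kpts, tvecs))
\end{verbatim}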
Next, let us consider the same self-energy of the bulk system but represented in a
supercell. Obviously there is a one-to-one mapping
$\{{\bf R},{\bf T}\}\leftrightarrow\{{\bf R}_S,{\bf T}_S\}$ between positions ${\bf R}$
in the primitive cell with lattice vectors ${\bf T}$ to the positions inside the
supercell, ${\bf R}_S$, and the superlattice's lattice vectors, where
${\bf T}_S=\sum_i^3 n_i{\bf A}_i$ with $n_i$ integers, and the superlattice is defined
by ${\bf A}_i=\sum_j N_{ij}{\bf a}_j$ with $N_{ij}$ a set of integers. Thus, we can
obtain $\Delta\tilde\Sigma_{{\bf R}_S,i;{\bf R}_S'+{\bf T}_S,i'}$ by a simple relabeling
procedure. In practice these are stored for every ${\bf R}_S$ using a neighbor table out
to some maximum distance $|{\bf R}_S'+{\bf T}_S-{\bf R}_S|\le d_{max}$.
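The relabeling itself is a folding of each absolute position ${\bf R}+{\bf T}$ back into the supercell, which in fractional coordinates of the superlattice reduces to a floor operation. A minimal sketch (our own helper, with A the $3\times3$ matrix whose columns are the superlattice vectors) is:
\begin{verbatim}
import numpy as np

def fold_into_supercell(R, T, A):
    # map a primitive-cell site at r = R + T to (R_S, T_S) with r = R_S + T_S
    r = np.asarray(R, float) + np.asarray(T, float)
    frac = np.linalg.solve(A, r)        # fractional coordinates w.r.t. the superlattice
    n = np.floor(frac + 1e-9)           # integer superlattice translation (with tolerance)
    T_S = A @ n
    return r - T_S, T_S

# example: 2x2x2 supercell of a cubic cell with a = 1 (toy numbers)
print(fold_into_supercell([0.25, 0.25, 0.25], [3.0, 0.0, 1.0], 2.0 * np.eye(3)))
\end{verbatim}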
\begin{figure}
\includegraphics[width=8cm]{o4.pdf}
\caption{Schematic illustration of the self-energy editor cut-and-paste method. The
left shows the $64^{8d}$ supercell with defect atoms shown as black
spheres and host atoms as open circles and the atoms within a range
$d_{max}$ from the defect atom indicated by the dashed circle are red. The
target system $64^{1d}$ with 1 defect is shown on the right. In this
example, we assume only the defect atom itself comprises the defect region
and the atoms within the range $d_{max}$ contribute to the self-energy in
real space. The self-energy $\tilde\Sigma_{{\bf R},{\bf R}'}$ of the atom
pairs corresponding to the red atoms connected to the defect atom in the
target $64^{1d}$ cell are replaced by those from the $64^{8d}$ supercell,
shown on the left.\label{figsupcel}}
\end{figure}
\subsection{Self-energy cut-and-paste approach}
The overall scheme for constructing the defect cell and its self-energy is as follows.
For example, let us consider a 64 atom supercell to model the defect. We then start by
creating the self-energy matrix of the perfect supercell (the host) from the self-energy
in the primitive cell in the above form in real space and labeled according to the 64
atom cell $\{{\bf R}_S,{\bf T}_S\}$ scheme. Once we have the self-energy matrix in the
host supercell, we need to replace it by that for the defect within a certain range
$d_{def}$ from the defect atom. For that purpose we construct a smaller supercell
containing the defect, which we ultimately plan to use, \textit{e.g.\ } an 8 atom cell and carry out
a self-consistent DFT calculation for it and subsequently a QS$GW$ calculation of its
self-energy. We then transform the self-energy of this defect cell again to a new 64
atom cell by a similar relabeling step. Let us call this the $64^{8d}$ cell
where 64 indicates the number of atoms in the cell and
the superscript indicates
the cell contains 8 defects. Next, we
modify the host 64 atom cell by inserting the defect. We then carry out a
self-consistent calculation for it at the DFT level and construct its self-energy by the
following cut-and-paste method. The first step of the method is to create a blank placeholder in memory that holds, for each atom, the self-energy blocks connecting it to each of its neighbors, together with the atom types and their orbitals. We then copy the corresponding self-energy into it
from the host 64 atom cell and subsequently replace it by that of the defect $64^{8d}$
cell for atoms within a certain distance $d_{def}$ from the defect site. The rest of the
atoms are left unchanged as host atoms. Each copy step only happens according to the
neighbor table up to a maximum range $d_{max}$. The copying of the self-energy orbital
blocks happens by pairs. Only half of the $\Sigma_{{\bf R_S}i;{\bf R}_S'+{\bf T}_S,i'}$
matrix elements need to be constructed because of the hermiticity. Once assembled in
real space it can be Fourier transformed back according to the periodicity of the
supercell, to find $\Sigma_{{\bf R}_Si;{\bf R}_S'i'}({\bf k}_S)$. Finally, we then need
to carry out just one DFT self-consistent step in which the thus assembled estimate of
the self-energy is added to the $H^0$ DFT Hamiltonian and we can evaluate its band
structure. The scheme is illustrated in Fig. \ref{figsupcel}.
It is clear that for the scheme to work, $d_{max}$ must fit within the small defect
containing supercell, so that the self-energy in the final cell for a pair of atoms of
which one is within the defect range $d_{def}$ does not have a neighbor of the wrong
type, in other words, it must still be a host like atom as in the final cell, not
another defect atom.
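In pseudocode form, and with hypothetical data structures of our own (the actual implementation stores orbital blocks in the Questaal format), the assembly step can be summarized as follows: dictionaries keyed by atom pairs from the neighbor table hold the blocks of $\Delta\tilde\Sigma$, and the defect-cell blocks simply overwrite the host blocks for pairs whose first atom lies within $d_{def}$ of the defect site.
\begin{verbatim}
def assemble_sigma(pairs, host_blocks, defect_blocks, dist_to_defect, d_def):
    # pairs          : (atom, neighbor) labels within d_max (neighbor table)
    # host_blocks    : (atom, neighbor) -> block from the perfect-crystal supercell
    # defect_blocks  : (atom, neighbor) -> block from the small defect supercell
    # dist_to_defect : atom -> distance from the defect site
    sigma = {}
    for (a, b) in pairs:
        if dist_to_defect[a] <= d_def and (a, b) in defect_blocks:
            sigma[(a, b)] = defect_blocks[(a, b)]   # paste the defect-cell block
        else:
            sigma[(a, b)] = host_blocks[(a, b)]     # keep the host block
    return sigma

# toy usage with scalar "blocks"
pairs = [("def", "n1"), ("n1", "n2")]
host = {("def", "n1"): 0.1, ("n1", "n2"): 0.2}
dfct = {("def", "n1"): 0.5}
print(assemble_sigma(pairs, host, dfct, {"def": 0.0, "n1": 2.4, "n2": 4.0}, d_def=1.0))
\end{verbatim}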
Furthermore, we need to allow for relaxation of the atoms near the defect. Therefore the
atoms are mapped between the different cells based on their atom numbering and
connectivity, not on the basis of their exact position. In principle, one could first
relax the atoms in the large cell with the defect at the DFT level and then do the
mapping to the perfect crystal and small defect cell even if their atomic positions do
not perfectly match. Alternatively we could assume that the self-energy is not too
sensitive to the relaxation and hence keep the self-energy fixed after our initial
cut-and-paste operation and afterwards, relax (or relax again) the positions in the
presence of that fixed self-energy. It is important here to remember that the
self-energy provides only a correction to the electronic structure beyond the DFT
Hamiltonian. The main defect-induced changes are already contained at the DFT level.
\subsection{Computational details}
The method has been implemented in the LMTO and QSGW suite of codes, named Questaal
(Quasiparticle Electronic Structure and Augmented LMTOs)\cite{questaalpaper,questaal}.
The basis sets are specified in terms of the angular momentum cutoffs and smoothed
Hankel functions. For the initial tests on GaAs with the As$_\mathrm{Ga}$ antisite
defect, we used a rather minimal basis set as specified along with the results. This
leads to an overestimate of the band gap but is convenient for our present purpose of
demonstrating the validity of the approach. Lattice positions are optimized for the
crystal cells with defects. Details of the supercells chosen in the cut-and-paste
approach are given along with the test results. In the final Sec. \ref{largebs} we use a larger basis set to achieve accurate comparison to experiment.
\section{Results}
\subsection{Gap convergence with self-energy real space cutoff} \label{gapcon}
We start by testing the core idea of a finite range self-energy for bulk GaAs. Table
\ref{tabgap} shows the band gap of GaAs in QS$GW$ as function of the cutoff $d_{max}$
used in the real space representation of the self-energy. We can see that as soon as
$d_{max}/a>1$ with $a$ the lattice constant, or a cluster of about 30 atoms is included,
the gaps become reasonable, although the convergence is not uniform and it takes till a
cluster of about 450 atoms or $d_{max}/a\approx 3$ to get absolute convergence. We note
that in an $8$ atom supercell the distance between defects is only 1 cubic lattice
constant and thus we have to restrict the $d_{max}/a\approx 1$, but nonetheless, this
already will be shown to give quite reasonable results for the self-energy.
The short-distance nature of the self-energy in real space is illustrated in
Fig~\ref{fig:selfvsd}. We here show the trace of each block of the self-energy matrix
for pairs $\{{\bf R}_S,{\bf R}^\prime_S+{\bf T}_S\}$ as a function of their separation
distance. While the self-energy operator $\tilde\Sigma({\bf r},{\bf r}^\prime)$ itself
in principle falls off as $1/|{\bf r}-{\bf r}^\prime|$, being a screened exchange type
term, the matrix elements
$\Delta\tilde\Sigma_{{\bf R}_Si;{\bf R}^\prime_S+{\bf T}_Si^\prime}$ fall off much
faster because they are dominated by the overlap of the corresponding basis function
orbitals. The on-site elements are clearly seen to be 2-3 orders of magnitude larger
than the inter-site elements with nearest and second nearest neighbors. They oscillate
somewhat as we go to further neighbors and their localization could be further improved
by means of more localized screened muffin-tin-orbitals
such as the jigsaw orbitals proposed in
Ref. \onlinecite{questaalpaper}. We may also notice that the on-site self-energy of As
in the perfect crystal or in the defect site are very close to each other. The essential
point in correcting the self-energy matrix of the perfect crystal represented in the
supercell, which already incorporates the host band gap change, is to replace that of
the Ga atom by an As atom. The inter-site elements of the self-energy are so much
smaller that they play only a minor role and this explains why we can restrict the range
of the self-energy matrix elements rather severely with a small $d_{max}$. This provides
{\sl a-posteriori} also an explanation for why schemes such as LDA+U
\cite{Paudel08VO,Boonchun2011} for modifying the band structure or other local on-site
corrections \cite{Christensen84,LanyNLEP} have had considerable success in adjusting the
gap in defect calculations.
\begin{figure}
\includegraphics[width=8cm]{selfvsd.pdf}
\caption{The trace of the self-energy submatrix for each atom-atom pair,
        $\Tr[\Sigma_{{\bf R}_Si;{\bf R}_S'+{\bf T}_Si'}]$, plotted as a
        function of the distance $|{\bf R}_S'+{\bf T}_S-{\bf R}_S|$.
The basis atoms of primitive GaAs crystal
(Ga$_p$ and As$_p$), the defect (As$_{Ga}$) from the 64 atom cell and its
nearest neighbor (As$_{nn}$) are chosen to illustrate the drastic drop
in energy corrections with distance. Both structures are in $q=0$ state.
In the inset, the onsite energy corrections of the As atoms are seen
more clearly.\label{fig:selfvsd}}
\end{figure}
\begin{table}[h]
\caption{Convergence of the band gap with $d_{max}$ in GaAs with basis set Ga: $spd$,
          As: $spd$. $a=10.66$ Bohr is the lattice constant in the zinc blende
          structure. The LDA gap of GaAs with this basis set is 0.50 eV and
the k-space QS$GW$ gap is 2.35 eV.\label{tabgap}}
\begin{ruledtabular}
\begin{tabular}{lrc}
$d_{max}/a$ & $\# neighbors$ & gap (eV) \\\hline
0.6 & 5 & 1.84 \\
0.8 & 17 & 1.76 \\
1.0 & 29 & 2.09 \\
1.2 & 47 & 2.26 \\
1.4 & 87 & 2.30 \\
1.6 & 147 & 2.29 \\
1.8 & 191 & 2.30 \\
2.0 & 275 & 2.32 \\
2.2 & 345 & 2.32 \\
2.4 & 417 & 2.33 \\
2.6 & 457 & 2.34 \\
3.0 & 461 & 2.35 \\
$\infty$ & & 2.35
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Basic properties of the As$_{Ga}$ antisite defect} \label{asga}
Next, we check the viability of the scheme for the case of the As$_\mathrm{Ga}$ antisite
in GaAs. This is a well-studied defect, known as the EL2 defect, or at least closely
related to it.\cite{Bachelet83,Dabrowski} In the $q=0$ state, it has a single $a_1$
defect level filled in the gap and a single $t_2$ empty resonance just above the CBM.
The excited state $a_1^1t_2^1$ is two-fold degenerate and both its
$S=1$ and $S=0$ configurations are
orthogonal to the ground
state $a_1^2t_2^0$. This degeneracy leads to a symmetry breaking distortion, such that
the antisite As atom is pushed through the interstitial position and the initial single
point defect turns into a Ga-vacancy plus As-interstitial $V_{Ga}+As_i$ defect complex.
Due to this displacement the system might get trapped in a metastable state
at lower energy than the excited state but at the distorted geometry.
This metastable state is labeled $1a^02a^2$ where now the levels are labeled according to
the $C_{3v}$ distorted geometry. The ground state
in this notation is $1a^22a^0$ and the excited
state is $1a^12a^1$. In other words, $1a$ in $C_{3v}$ derives
from the $a_1$ in $T_d$ while $2a$ derives from $t_2$.\cite{Dabrowski}
This state is associated with the
photoquenching behavior of the EL2 defect in Ref. \onlinecite{Dabrowski}. Therefore, the
optical excitation energy from the $a_1$ to the $t_2$ level is of interest and can be
directly related to the corresponding QS$GW$ levels since it occurs from the ground state
$q=0$ geometry. Furthermore, the Green's function calculations of Bachelet
\textit{et al.\ }\cite{Bachelet83} provide detailed information on the position of the Kohn-Sham
defect levels of $a_1$ and $t_2$ symmetry in two charge states. We identify these two
defect levels in defect supercell band structure and compare our results with Ref.
\onlinecite{Bachelet83} and \onlinecite{Dabrowski}.
The ground state defect level of neutral EL2 is well known experimentally and
we discuss this further in Sec. \ref{largebs}.
\subsection{Application of the method to As$_\mathrm{Ga}$.} \label{appasga}
In Fig.~\ref{fig64lda} we show the band structure of a 64 atom supercell containing an
As$_\mathrm{Ga}$ antisite defect at its origin in the LDA and for $q=0$ state. In this
study, we used a minimal basis set consisting of a single $\kappa$ and smoothing radius $R_{sm}$ per $spd$ orbital set on both Ga and As. The gap is thereby underestimated at 0.65 eV in LDA and overestimated at 2.25 eV in QS$GW$. This should facilitate recognizing the
defect level. Nonetheless, we see in Fig.~\ref{fig64lda} that the electronic structure
of the defect and even the gap are barely recognizable. The defect band is the highest
occupied band but is seen to be so much broadened that, in combination with the LDA gap
underestimate, it touches the valence band maximum (VBM) at the $\Gamma$-point. The
conduction band minimum (CBM) occurs 0.65 eV above it at the $\Gamma$-point in
LDA, while a converged basis set would give an even lower LDA gap of only 0.51 eV.
Clearly, the band structure of this system cannot be examined accurately with DFT level
calculations.
\begin{figure}
\includegraphics[width=8cm]{64LDA.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in $q=0$ state in GaAs in 64 atom cell
in LDA. This supercell has simple cubic form and the high-symmetry
${\bf k}$-points are $\Gamma=(0,0,0)$, $X=(1,0,0)$, $M=(1,1,0)$ and
$R=(1,1,1)$ in units $2\pi/a_S$ with $a_S=2a$ the supercell lattice
constant.\label{fig64lda}}
\end{figure}
Next, we construct the defect in an 8 atom cell and perform a QS$GW$ calculation for it.
From the corresponding band structure shown in Fig.~\ref{fig8gw} it is clear that this
cell is much too small to adequately represent the defect electronic structure. In this
figure, it is not even clear which is the defect band and which is the lowest conduction
band. On the other hand, we will show that this cell is sufficient to reconstruct the
self-energy components in the immediate neighborhood of the defect.
\begin{figure}
\includegraphics[width=8cm]{8GW.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in 8 atom GaAs cell in QS$GW$. This
supercell is also a simple cubic but with lattice constant $a$. The
high-symmetry points are labeled the same as in Fig. \ref{fig64lda} but
correspond to a Brillouin zone twice as large in each direction.
\label{fig8gw}}
\end{figure}
In Fig.~\ref{fig64gw}, we show the band structure of the 64 atom cell obtained by means
of our self-energy cut-and-paste approach with the dashed red lines. The defect region
contains only the defect atom itself and the self-energy range $d_{max}$ was set to one
lattice constant (5.64 \AA). In the same figure we show the fully self-consistent QS$GW$
results in the same 64 atom cell with the solid black lines, after aligning the two at
the VBM. The dispersion of this defect band is about 0.7 eV and results from
periodically repeated defects in the 64 atom supercells, so from defects that are
2$a_{cell}$ or 11.28 \AA\ apart. Comparing with the band structure in a larger cell in
Fig. \ref{fig:216} we can see that the defect band width is mostly reduced at $R$ and
$\Gamma$ but the top of the band near $X$ and $M$ stays the same. We therefore identify
the eigenvalue near its maximum with the isolated defect level. The fact that the defect
band dispersion does not follow the expected form
$E_d+2t[\cos{(k_xa)}+\cos{(k_ya)}+\cos{(k_za)}]$ for a simple isotropic $s$-band in a
nearest-neighbor simple cubic lattice, where the maximum would be at $R$ and the minimum
at $\Gamma$ indicates that the interactions between defect states are not isotropic
because of the underlying crystal structure. The filled defect band is now clearly
detached from the VBM and it occurs at about 1.12 eV above the VBM. Its position above
the VBM and even its dispersion is in excellent agreement between the cut-and-paste and
fully self-consistent results. Furthermore, this defect level position above the VBM is
in good agreement with Bachelet's Green's function calculation\cite{Bachelet83} of an
isolated defect, which gives 1.23 eV for the $q=0$ state.
The CBM is also clearly seen (overestimated as 2.25 eV above the VBM by our small basis
set) in the full self-consistent calculation. This gap is somewhat smaller in the
cut-and-paste case (1.66 eV). This is because the range cutoff $d_{max}$ applied to
$\Sigma$ reduces the gap. Second, in this 64 atom cell, we do not yet approach the
dilute limit where the perfect crystal gap with this cutoff (2.09 eV) would be
recovered. This can be attributed to the $t_2$ defect level interacting with the CBM. In
both cases, we can identify the second defect level, the $t_2$ state, which here is a
resonance and can be recognized as a flat band near $X$ at about 1.55 eV above the Fermi
level which coincides with the top of the $a_1$ defect band, in direct QS$GW$
calculation and 1.07 eV in the cut-and-paste method. This defect level lies 0.42 eV
above the CBM in direct QS$GW$ calculation and 0.60 eV in the cut-and-paste approach.
The energy splitting between the $t_2$ and $a_1$ states, taken as the energy difference at the $X$-point for the reasons given above, is hence 1.55 eV in the direct calculation and 1.06 eV in our approach. Dabrowski \textit{et al.\ }\cite{Dabrowski} report this splitting as 1.18 eV
experimentally and as 0.97 eV at the DFT level calculations, while in the Green's
function calculation it is 0.87 eV \cite{Bachelet83}. Our result for this splitting of
the $t_2-a_1$ level is thus comparable in accuracy with the previous calculations and
in fact closer to the experimental value, which, as mentioned earlier, is important for
understanding the optical behavior of this defect. Hence, the cut-and-paste method
is found to be a viable approach.
\begin{figure}
\includegraphics[width=8cm]{comparisons_q0/def_64_8_vs_qsgw.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in $q=0$ state in GaAs in 64 atom cell:
fully QS$GW$ (solid black lines), with self-energy constructed by
cut-and-paste approach (red dashed lines) using only the defect atom as
defect region aligned at the VBM. The zero of energy is the Fermi level for
the defect system in the full QS$GW$ case for the neutral defect.
\label{fig64gw}}
\end{figure}
Next, we test whether replacing only the self-energy related to the defect atom itself
is sufficient or whether we need to include a larger defect cluster region. In our
cut-and-paste method, including the nearest neighbors in the defect region corresponds
to taking the nearest neighbor of the defect atom as another center for the
$\Delta\tilde\Sigma_{{\bf R}_S,{\bf R}^\prime_S+{\bf T}_S}$ to be taken from the small
defect containing (8 atom) cell. This approach could be problematic, because it extends
the range of this $\Delta\tilde\Sigma$ with ${\bf R}_S$ being a nearest neighbor of the
defect beyond its unit cell, hence includes pairs connecting this atom to other defect
atoms, while in the dilute limit, or in the large supercell with a single defect, there
should only be one defect atom. However, because the off-site matrix elements fall off
rather quickly, this is found not to be a serious problem. With this approach, we
observe that the valence and defect bands remain in the same position, but the CBM value
drops by 0.14 eV, which deviates from the QS$GW$ calculation as expected. These results are shown in the supplemental information.
\begin{figure}
\includegraphics[width=8cm]{comparisons_q0/def_64_32_vs_def_64_8.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in $q=0$ state in GaAs in 64 atom cell:
$64_8$ scheme (solid black lines), and $64_{32}$ scheme (red dashed lines),
both with self-energy constructed by cut-and-paste approach using only the
defect atom as defect region, aligned at the VBM. The zero of energy is the
Fermi level for the defect system in $64_8$ scheme.\label{8to32}}
\end{figure}
Clearly, in order to allow us to include more neighbors than the defect atom itself in a
safe way, we would need to enlarge the size of the cell from which the defect and its
neighbors' self-energy is extracted. This might then also allow us to increase the
$d_{max}$ or range of the self-energy cutoff. To further test the convergence of our
scheme we now consider a somewhat larger cell than the 8 atom cell to extract the defect
atom and its neighbors' self-energy. For this purpose, we incorporate the defect in a 32
atom supercell of GaAs. The $d_{max}$ value is set to the lattice constant of this cell,
1.73$a$ of the conventional cell, which is then also the nearest defect distance.
Building the host cell using this small cell does not provide considerable improvement
for the cut-and-paste method. This is expected since we already know that the
self-energy matrix elements fall off rapidly with intersite distance. Furthermore, the
exact QS$GW$ calculation of this cell is now a more expensive calculation and not so
much is gained by increasing the size to a 64 atom cell using the cut-and-paste
approach. Nonetheless, we test its performance to check the convergence of our
cut-and-paste approach in terms of the size of the defect region and self-energy cutoff
distance $d_{max}$. The results of this scheme are summarized in
Table~\ref{tablesumsmall} with the label 64$_{32}$. Different defect region descriptions
and charge states were also considered. In Fig.~\ref{8to32}, we compare the
two cut-and-paste band structures of the defect containing 64 atom cell in the $q=0$
state, built from the 8 atom cell (64$_{8}$) and from the 32 atom cell (64$_{32}$). The
almost identical bands show that there is nothing to be gained in the accuracy of the
method by extracting the defect atom self-energy from a 32-atom compared to an 8 atom
cell, while the former is furthermore significantly more computationally expensive. We conclude that
the most effective approach for the cut-and-paste method is building the defect
containing host cell from the 8 atom cell and it is sufficient to define the defect
region as the defect atom itself. The results obtained by this approach are in good
agreement with our exact QS$GW$ calculations and previous studies in the literature.
\begin{figure}
\includegraphics[width=8cm]{comparisons_q2/def_64_8_2_vs_qsgw2.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in $q=2$ state in GaAs in 64 atom cell:
fully QS$GW$ (solid black lines), with self-energy constructed by
cut-and-paste approach (red dashed lines) using only the defect atom as
defect region, aligned at the VBM. The zero of energy is the Fermi level
for the VBM in the full QS$GW$ case for the doubly ionized defect.
\label{fig64gw+2}}
\end{figure}
Next we want to address whether the cut-and-paste approach works also for charged defect
states and correctly describes the trend with charge observed in the Green's function
calculations. In Fig.~\ref{fig64gw+2} we show the band structure of the same cell, but
with the defect in $q=2$ state. In this case, the exact QS$GW$ band gap does not change,
but the defect level moves closer to the VBM. The cut-and-paste approach yields a 0.05 eV
larger band gap. Although the defect band dispersion is again reproduced faithfully, the
position of defect level deviates by about 0.2 eV between the cut-and-paste and exact
calculation. This is still an acceptable precision of the cut-and-paste method and
indicates that the nearly perfect agreement seen earlier for the $q=0$ state is perhaps
somewhat coincidental.
On the other hand, the difference between the two charge states regarding the defect
level position agrees with prior work and has a clear physical meaning. It can be
related to the different final geometries after atomic relaxations in different charge
states. For the $q=0$ state, we observe outward breathing of nearest As atoms, such that
they move further from the defect atom by about 4\% of the nearest neighbor distance.
Dabrowski \textit{et al.\ }\cite{Dabrowski} reported similar lattice relaxation effects.
On the other hand, in the $q=2$ state, the nearest neighbor
distance remains the same as in the perfect crystal.
\begin{figure}
\includegraphics[width=8cm]{comparisons_q0/def_216_8_vs_def_64_8.pdf}
\caption{Band structure of As$_\mathrm{Ga}$ in $q=0$ state: $64_8$ scheme (solid
black lines), and $216_8$ scheme (red dashed lines), aligned at the VBM.
Defect region is defined only with the defect atom. Dispersion of $a_1$
level vanishes as defect-defect distance increases, in a fashion explained
with the tight-binding model above. Note that the Brillouin zones of the
216 and 64 atom cells have the same symmetry labeling but the latter is
larger and contains fewer bands. The $\Gamma-X$ distance of the 216 atom
cell is scaled down by a factor 2/3 compared with the 64 atom one and
similarly for the other directions.\label{fig:216}}
\end{figure}
One important advantage of our method is that once the exact QS$GW$ calculation for the
small cell is done, moving to larger host cells and calculating the defect properties in
the dilute limit is a straightforward procedure and one can achieve $GW$-level
accuracies for the cost of an LDA calculation for even larger cells. Although we do not
include the exact calculations for these larger cells in this work, we analyze a 216
atom cell, \textit{i.e.\ } a $3\times3\times3$ supercell of the conventional simple cubic 8 atom
cell. In Fig.~\ref{fig:216} we see that in the 216 atom cell the dispersion of the defect
band is almost zero, indicating we are close to the dilute limit. Furthermore, this
allows us to better evaluate the nature of the defect band dispersion. As mentioned
earlier, the top of the defect band which is flat between $X$ and $M$ remains the same
as in the smaller 64 and 32 atom cells but the band width is reduced at $\Gamma$. This
helps us to identify the top of the defect band with the dilute limit defect level.
\begin{table}[h!]
\caption{The energy band gap, $E_g$, the defect level at the $X$-point
w.r.t.\ VBM, $a^X_{1_{VBM}}$, the defect level splitting at the $X$-point,
$(t_2-a_1)^X$ and
the $a_1$ defect band width $w(a_1)$, are presented for various schemes in
$q=0$ (top half) and in $q=2$ (bottom half) states. The subscript numbers
represent the small cell atom number which was used to build up the final
cell in the cut-and-paste method.\label{tablesumsmall}}
\begin{tabular}{lcccc}
& $E_g$ & $a^X_{1_{VBM}}$ & $(t_2-a_1)^X$ & $w(a_1)$\\\hline
Dabrowski\cite{Dabrowski} & & 0.6 & 0.97 & \\
Bachelet\cite{Bachelet83} (LDA) & 0.7 & 1.23 & 0.87 & \\
64 atom QS$GW$ & 2.25 & 1.12 & 1.55 & 0.62 \\\hline
64$^{\dagger}_8$ & 1.66 & 1.19 & 1.07 & 0.68 \\
64$^{\ast}_8$ & 1.52 & 1.19 & 1.04 & 0.69 \\
64$^{\dagger}_{32}$ & 1.69 & 1.01 & 1.02 & 0.67 \\
64$^{\ast}_{32}$ & 1.64 & 0.98 & 1.03 & 0.68 \\
216$^{\dagger}_{8}$ & 1.72 & 1.14 & 1.24 & 0.19 \\
216$^{\ast}_{8}$ & 1.67 & 1.14 & 1.23 & 0.19 \\\hline\hline
Bachelet & & 0.69 & 0.99 & \\
64 atom QS$GW$ & 2.24 & 0.69 & 1.60 & 0.60 \\\hline
64$^{\dagger}_8$ & 1.71 & 1.06 & 0.95 & 0.67 \\
64$^{\ast}_8$ & 1.58 & 1.07 & 0.94 & 0.67 \\
64$^{\dagger}_{32}$ & 1.63 & 0.92 & 0.96 & 0.67 \\
64$^{\ast}_{32}$ & 1.64 & 0.92 & 0.95 & 0.67 \\
216$^{\dagger}_{8}$ & 1.70 & 0.61 & 1.24 & 0.16 \\
216$^{\ast}_{8}$ & 1.65 & 0.62 & 1.24 & 0.16 \\\hline
\end{tabular}
\begin{tablenotes}
\small
\item $^{\dagger}$ Defect region is defect atom itself.
\item $^{\ast}$ Defect region is defect atom and its nearest neighbors.
\end{tablenotes}
\end{table}
It is important to point out that obtaining this 216 atom cell band structure only required
an additional LDA calculation of this cell. Once the 8 atom cell $GW$ calculation is
done, moving to even larger cells only requires the LDA calculation of the desired
single defect containing supercell. This provides an immense reduction in computational
cost. For instance, we can compare the computation time of exact QS$GW$ calculations for
8 atom, 32 atom and 64 atom cells. The convergence parameters for these calculations are
kept the same and the {\bf k}-point meshes are kept equivalent by taking a smaller set
with approximately the same grid-density in the larger cells. The number of processors
for each calculation is adjusted proportionally to the {\bf k}-point mesh grid. Under these circumstances, the fully self-consistent QS$GW$ calculation was completed in 0.6 hours for the 8 atom cell, 26.4 hours for the 32 atom cell, and 181 hours for the 64 atom cell. In
other words, the computing time scales like $N^\eta$ with $\eta\approx1.8$ and $N$ the
number of atoms or $3\eta$ for the scaling with the linear size of the supercell. On the
other hand, the LDA calculation of the 64 atom cell took less than half an hour. For
this particular 64$_8$ scheme, the bottleneck of cut-and-paste method was the 8 atom
cell QS$GW$ calculation. For larger single defect cells, the LDA calculation might
become the bottleneck due to the large number of atoms; however, the computational cost would still be insignificant compared to the QS$GW$ calculation of the same cell. For
instance, the LDA calculation of the host cell in our 216$_{8}$ scheme was performed
using resources comparable to those of the 64$_8$ LDA step, and this step took around 6
hours. Note that all the atoms are fully relaxed in this step, and one could reduce the
computation time even further by only letting the first few nearest neighboring atoms
relax.
\subsection{Large basis sets and comparison to experiment} \label{largebs}
To test the fidelity of the theory, we finally make a careful comparison against the experimentally observed neutral EL2 deep donor
level ($E_{v}{+}0.75$\,eV).\cite{Dabrowski,Weber82} This defect level
was first identified with the As$_\mathrm{Ga}$ antisite by Weber \textit{et al.\ }\cite{Weber82} based on Electron
Paramagnetic Resonance (EPR) and activation of the unpaired spin $+1$
charge state from the neutral state by an optical transition
to the conduction band, which was found to occur at 0.75 eV. This was later
also confirmed by thermionic emission and optical studies.\cite{Omling86,Samuelson86}
In view of the experimental gap of 1.52 eV at low temperature,
this places the defect level almost exactly at mid gap.
We revisit
the neutral EL2 level with a reasonably well converged basis, and also include spin-orbit coupling (SOC). We repeat the
procedure described above, using a 32-atom supercell, with an \emph{spdfspd} basis on both the Ga and As, and local
orbitals to include the Ga $3d$ in the valence. The basis also includes \emph{sp} ``floating orbitals'' (smoothed
Hankel functions without augmentation spheres~\cite{questaalpaper}) centered at the two high-symmetry interstitial sites
along the [111] line. The QS\emph{GW} bandgap (1.7\,eV, including SOC) is slightly less than that of a fully converged
basis (1.8\,eV). This includes a reduction in the bandgap of 0.11\,eV from SOC. The gap reduction is expected, as the
split-off valence band at $\Gamma$ is 0.33\,eV below the VBM, both experimentally and in QS\emph{GW}.
We note that even with this well-converged basis set the bandgap is overestimated, because QS\emph{GW} under-screens $W$: the RPA polarizability omits electron-hole
attractions that connect the electron and hole parts of the bubble. If such attractions are included, \textit{e.g.\ } via ladder
diagrams, the bandgap is reduced to a value very close to the observed gap. It was discovered soon after QS\emph{GW}
was first formulated that, empirically, the RPA dielectric constant $\epsilon_\infty$ is uniformly 20\% too small for a
wide range of insulators, strongly correlated or not~\cite{Chantis06a}; see in particular Fig. 1 in
Ref.~\cite{Bhandari18}. Adding ladder diagrams greatly improves on $\epsilon(\omega)$, as has been known from
pioneering work in the groups of Louie\,\cite{Rohlfing98b} and Reining\,\cite{Albrecht98}. It has recently shown that
improving $W$ with ladders, and using this $W$ in the QS\emph{GW} self-consistency cycle, almost completely eliminates
the overestimate of bandgaps in weakly correlated semiconductors, and it also corrects for the underestimate of
$\epsilon_\infty$ \cite{cunningham21}. As an alternative, a hybrid approach has long been used, mixing LDA and
QS\emph{GW}~\cite{Chantis06a}. Kotani and his coworkers showed that a hybrid of 80\% QS\emph{GW} and 20\% LDA yields
uniformly good bandgaps in many weakly correlated semiconductors ~\cite{Deguchi16}.
The hybrid approach is an inexpensive, albeit \emph{ad hoc} way to mimic the
effect of ladder diagrams, and we use it here to refine our estimate of the neutral EL2 level.
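Explicitly, the hybrid we use amounts to mixing the static QS$GW$ potential with the LDA one (written here in our own notation),
\begin{equation}
\tilde{V}_{xc}=0.8\,\tilde{\Sigma}+0.2\,v_{xc}^{\mathrm{LDA}},
\end{equation}
so that a single, empirically fixed mixing fraction controls the reduction of the gap.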
\begin{figure}
\includegraphics[width=7cm]{dos-128.pdf}
\caption{Density-of states of a 128 atom supercell of GaAs with a single As antisite for an unrelaxed lattice (black),
and for a lattice whose nearest neighbors were relaxed (red). Energy zero corresponds to the Fermi level in both
cases, and sits at the top of the (doubly occupied) midgap EL2 level. It has a dispersion of about 0.3\,eV in the 128 atom cell.
\label{figdos}}
\end{figure}
Fig.\,\ref{figdos} shows the density-of-states (DOS) of the 128 supercell within QS\emph{GW} for two scenarios: the black data
is the DOS for an ideal (unrelaxed) structure, while the red data shows the effect of relaxing the four nearest
neighbors around the As\textsubscript{Ga} only. Limiting relaxations to nearest neighbors simplifies the embedding
procedure, and tests showed that more complete relaxations made minor further changes. It is seen that the lattice
relaxation induces a shift in the defect level, moving it about 0.3\,eV closer to the valence band.
To estimate the EL2 energy in the limit of an infinite cell,
the center of gravity of the band was calculated. It is found to be
close to the Fermi level at the top of the defect band for the
neutral charge state, about 0.11 eV below $E_F$. Thus, for higher precision
we here subtract this 0.11 eV from $E_F$ to obtain the defect level.
This would reduce the $a_1$ defect levels in Table \ref{tablesumsmall}
by 0.11 eV.
The VBM and CBM are inferred from the
energies where the DOS touches zero (see labels in Fig.\,\ref{figdos}), so that the EL2 relative to either can be
computed. The results are displayed in Table~\ref{tabbigel2}.
\begin{table}[h]
\caption{Band center of neutral EL2 defect, in eV, relative to the valence band maximum, embedded in a 128 atom cell.
A reasonably well converged basis was used, with spin orbit coupling included. Two calculations are shown:
the first assuming no lattice relaxation, and the second including it, as described in the text.
Also shown is the bandgap. \label{tabbigel2}}
\begin{ruledtabular}
\begin{tabular}{ccccc}
 & \multicolumn{2}{c}{QSGW} & \multicolumn{2}{c}{80\%QSGW+20\%LDA} \\
structure & EL2 & gap &\qquad EL2 & gap \\
unrelaxed & 1.13 & 1.69 &\qquad 1.02 & 1.44 \\
relaxed & 0.83 & 1.68 &\qquad 0.74 & 1.42
\end{tabular}
\end{ruledtabular}
\end{table}
As Table~\ref{tabbigel2} shows, EL2 is predicted to be at $E_{v}{+}0.83$\,eV and $E_{c}{-}0.85$\,eV. It is slightly too
far from both the VBM and the CBM, compared to experiment, because the QS\emph{GW} gap is too large. It is not \emph{a priori} obvious how much this level would shift if QS\emph{GW} were of high enough fidelity to yield the experimental gap,
\textit{e.g.\ } by adding ladder diagrams to $W$. It is possible in principle to do this by carrying out the calculation with
ladders in $W$, but here we take the hybrid 80\%QS\emph{GW}+20\%LDA approach as a simple alternative. The result is shown in
Table~\ref{tabbigel2}. The hybrid approach reduces the level by about 0.1\,eV, while the gap itself is reduced by
$\sim$0.25\,eV. This is consistent with the EL2 being composed of roughly equal measures of host valence band and
conduction band character and its position almost exactly in the middle of the gap. Finally, the predicted EL2 energy ($E_{v}{+}0.74$\,eV) is in excellent agreement with the
observed value $E_{v}{+}0.75$\,eV, while the bandgap (1.42\,eV) is also close to the room-temperature experimental gap.
(Ideally the QS\emph{GW} gap should be $\sim$1.5\,eV, the zero-temperature gap of GaAs. Indeed, it does come out very
close to 1.5\,eV when a fully converged basis is used, whether ladders are added or the hybrid-$\Sigma$ approach is
used.)
Even with a slightly less complete basis set, including $spdfspd$ and
Ga-$3d$ local orbitals, but omitting the floating orbitals, we found
that the defect band top in a 64 atom cell, using the $64_8$ cut-and-paste
approach, lies at 0.78 eV above the VBM when the latter is corrected by SOC,
with a host
gap of 1.7 eV. Given that the center of gravity of the defect band DOS lies
slightly lower, this would become 0.67 eV.
Even in this calculation, however, the conduction
band minimum in the defect cell lies at $\sim$1.35 eV above the VBM, indicating that the
$t_2$ resonance in the conduction band affects the conduction band minimum.
This effect is reduced only by going to even larger supercells.
\section{Conclusion}
We have shown that the $GW$ self-energy matrix represented in a real-space basis-set is
short-ranged, and the contribution to the quasiparticle energy correction to DFT
eigenvalues is significant only for the first few nearest neighbors of a specific atom.
We introduce a cut-and-paste method for defect calculations at the $GW$ level that
exploits this property. We demonstrate the method using a well-known single point
defect, namely the As$_{Ga}$ antisite in a GaAs crystal. The main correction to the
defect band structure compared to LDA in our method is incorporating the host perfect
crystal gap correction via the perfect crystal self-energy, which requires a trivial
cost because it just amounts to a relabeling of the self-energy matrix according to the
supercell description of the atomic sites. After detailed examination, we conclude that
an 8 atom cell is sufficient to extract the defect atom's self-energy which is then used
to replace the defect atom self-energy in the final supercell. We observe almost perfect
agreement between the fully self-consistent QS$GW$ defect bands and the cut-and-paste
method, in terms of the valence bands, the $a_1$ defect level position and the defect
band dispersion. There is a small disagreement between our method and the full QS$GW$
calculations, in terms of unoccupied levels. This is caused mainly by the range cutoff
$d_{max}$ applied to $\Sigma$, which slightly reduces the gap from its converged QS$GW$
value. The main advantage of the method is that it allows us to obtain $GW$-level
accuracy results for large defect supercells at essentially the cost of an LDA
calculation for the latter. This allowed us to carefully monitor the defect band
dispersion and identify the dilute limit isolated defect level more precisely. We found
good agreement for the defect level positions with previous studies of this system,
although the previous studies were not at the $GW$-level. Inspecting the band structures
of the defect system in detail allowed us to identify not only the obvious defect level
in the gap of $a_1$ symmetry but also the excited defect $t_2$ symmetry resonance and
provided accurate information on the optical transition between these levels, which has
previously been recognized as an important step in activating a metastable state of this
defect. Our calculation also agrees with previous work on the change in the defect level as a
function of charge state. Overall, the cut-and-paste method significantly reduces the
computational cost of $GW$-level calculations, with a small loss in accuracy.
Finally, to establish that QS\emph{GW} is able to predict defect levels with
a fidelity comparable to its ability to predict energy bands of bulk materials,
we benchmark the embedding approach against the experimentally measured neutral EL2 level. We show
the discrepancies with experiments are small, and closely track the known discrepancies
for weakly correlated periodic systems.
\acknowledgments{This work was supported by the US Department of Energy Basic Energy
Sciences (DOE-BES) under grant number DE-SC0008933. The calculations were
performed on the High Performance Computing Resource in the Core Facility
of Advanced Research Computing at Case Western Reserve University.
M.v.S. was supported by the U.S. Department of Energy, Office of Science,
Basic Energy Sciences, under award FWP ERW7246.}
\section{Introduction} \label{sec:intro}
Classical T Tauri stars \citep[CTTS;][]{Joy1945} represent a key stage of star and planet formation, where the stellar photosphere is newly visible but the star retains its circumstellar disk, a remnant of its formation within the surrounding molecular cloud. The star and disk interact as the star irradiates the disk and the disk accretes onto the star, assembles planets, and photoevaporates, driving complex, co-dependent evolution over 5-10 Myr (for single and wide binary stars; close binary disk lifetimes are closer to 1 Myr; \citealt{Haisch2001, Armitage2003, Cieza2007, Cieza2009, Kraus2012a}).
Circumstellar disks have dust and gas components; the gas disk extends inward to the magnetospheric truncation radius, which occurs within a few R$_{\star}$ of the photosphere \citep{Koenigl1991, Akeson2005, Johnstone2014}. The gaseous component of circumstellar disks is thought to fall onto the surface of the host star via magnetospheric accretion, where gas from the disk funnels onto the stellar surface along magnetic field lines \citep[e.g.,][]{Uchida1985, Bertout1988, Koenigl1991, Calvet1998}. The process of gas accretion onto the star produces continuum emission in excess of the stellar radiation in the optical and ultraviolet \citep[UV; e.g.,][]{Basri1990, Hartigan1991, Calvet1998}, which is called veiling in reference to its apparent reduction of the depth of absorption lines in a continuum-normalized spectrum.
The dust disk is inwardly truncated around $R \sim 0.1$ au \citep{Muzerolle2003, Eisner2005, Eisner2007, Dullemond2010, Davies2020} because of dust sublimation, which is thought to occur at temperatures around 1,500-2,000 K \citep{Pollack1994} and results in a raised rim of dust at the dust sublimation radius that produces a large fraction of the infrared (IR) flux \citep{Folha2001, Johns-Krull2001, Muzerolle2003, Eisner2005, Eisner2007, Dullemond2010, Davies2020}. The absorption and re-radiation of incident stellar flux at the inner dust disk edge produces an IR continuum excess.
Clarity regarding the dynamics, time scale, and detailed physics of dust evolution in circumstellar disks is vital for an understanding of planetesimal and planet formation, which occurs from the dust in the disk. Over time, dust and planetesimals settle into the midplane of the disk and radially migrate inward toward the star \citep[e.g.,][]{Artymowicz1993, Korycansky1993, Ward1997}, partially setting a timescale for dust dissipation, while the dust sublimation radius sets the minimum radius for \textit{in situ} rocky planet formation. Features such as the mass accretion rate onto the star dictate the total luminosity irradiating the inner disk edge, and therefore impact disk evolution by controlling the location of the dust sublimation radius. Thus, understanding gas accretion, disk properties, and the relationship between those attributes of a system is vital for a clear understanding of the planet formation process.
Because the accretion process impacts both the host star and the circumstellar disk, the optical and IR excesses should be correlated. This hypothesis regarding the link between optical and NIR excess was tested by \citet{Hartigan1991}, who found that K-L and K-N IR excesses were both correlated with optical veiling, supporting the inner disk structure model and the relationship between disk excess in the IR and accretion continuum excess in the optical. Other work pushed blueward in the IR, finding that there was excess at the IYJH bands \citep{Cieza2005, Fischer2011} which originated from an unknown source but had a spectrum that was consistent with a warm-hot blackbody ($\sim$2200-5000K; \citealt{Fischer2011}). However, \citet{McClure2013} investigated the near-infrared (NIR) excess and found that their spectral energy distribution (SED) fits did not require the presence of the warm-hot blackbody component, instead finding that a two- to three-component fit consisting of blackbodies at hot (gas emission; $\sim$8000K), warm (dust sublimation; $\sim$1600K) and possibly cool (dust emission; $\sim$800K) temperatures was sufficient to describe the NIR excess emission.
Thus, although the relationship between K band excess and optical veiling is relatively well-established, and it is known that J and H excesses are correlated with K excess \citep[e.g.,][]{Cieza2005}, the relationship between optical veiling (as a proxy for gas accretion) and the NIR SED (as a proxy for the radius and structure of the inner dust disk wall) is still not well understood. This is partially because past studies of these relations in the NIR have either been large heterogeneously observed and characterized samples with less detailed analysis, or labor-intensive in-depth analyses of a small number of systems. Additionally, well-calibrated all-sky NIR photometry is only recently available via WISE \citep{Wright2010}, and establishing the photospheric SEDs of disk-hosting stars (to measure the optical and NIR excesses) has required recent advances in our knowledge of young stars' photospheric emission \citep[e.g.,][]{Herczeg2014}.
Another potential complicating factor in past analyses of disk and accretion properties in CTTS is multiplicity. Stellar multiplicity is ubiquitous: $\sim$50\% of Sun-like stars in the field are binaries \citep{Duquennoy1991, Raghavan2010}, with multiplicity fractions decreasing to $\sim$25\% for M stars in the field \citep[e.g.,][]{Winters2019}. Multiplicity rates for young star-forming regions are typically close to double those of the field \citep[e.g.,][]{Duchene2013}. Multiplicity impacts disk formation and evolution by decreasing the timescale for disk dissipation \citep[e.g.,][]{Cieza2009, Kraus2012a} and reducing disk masses \citep[e.g.,][]{Jensen1996, Harris2012, Zurlo2021}, which could affect disk emission and may impact accretion rate, which is correlated with disk mass \citep{Manara2016m}. Past works have typically attempted to screen out binaries, but undetected binaries will still bias measurements of properties such as stellar age, mass, and spectral type \citep[e.g.,][]{Furlan2020, Sullivan2021}. Beyond simple removal from samples because of potential biases, information about the disk emission from known binaries may inform our understanding of circumstellar disk structure and planet formation in binary star systems. Thus, it is important to not only identify binaries to facilitate their removal from samples of single stars, but also to study the properties of the binary star disk host population in their own right.
To study the relationships between NIR excesses and optical veiling in both single and binary stars, we have used a sample of $\sim$ 160 PMS stars in Taurus with homogeneously measured properties from \citealt{Herczeg2014} (hereafter HH14), including bolometric luminosity, extinction, spectral type, age, and optical veiling ($r_{7510}$), and a young age with only moderate dispersion ($\tau \sim 1-2$ Myr; e.g., \citealt{Krolikowski2021}). We have cross-matched the HH14 Taurus sample with the 2MASS \citep{Skrutskie2006} and ALLWISE \citep{Wright2010} surveys to construct near- to mid-IR ($1.2< \lambda < 22 \mu$m) SEDs and determine the disk-host status of all the stars in our sample. We have also cross-matched our sample with several high-contrast imaging surveys of Taurus members to identify any binaries in the HH14 sample that had not been identified at the time of that analysis.
Using this large homogeneously-characterized sample, we have explored relationships between optical veiling and NIR excess, constructed disk SEDs for our sample, and examined differences in disk and accretion properties between single and binary stars. Section \ref{sec:sample} discusses our sample selection, and Section \ref{sec:excess} describes our calculation of the NIR excess. Section \ref{sec:veiling correlations} presents the relationships we found between optical veiling and NIR excess, Section \ref{sec:disc} discusses our results, and Section \ref{sec:conclusion} presents our conclusions.
\section{Sample Selection}\label{sec:sample}
\subsection{Initial Sample Selection}
We began with the TTS sample from HH14, which is a relatively homogeneous collection of moderate-resolution spectroscopic observations of 281 stars from the Taurus ($\tau \sim 1$ Myr; e.g., \citealt{Krolikowski2021}), Lupus ($\tau \sim 1-3$ Myr; \citealt{Galli2020}), Ophiuchus ($\tau \sim 2-6$ Myr; \citealt{Esplin2020}), TW Hydra ($\tau \sim 8-10$ Myr; \citealt{Venuti2019}), and MBM 12 ($\tau \sim 2$ Myr; \citealt{Luhman2001}) associations. To reduce the age spread in the population used for our analysis, we restricted the sample to Taurus targets. To ensure high-quality photometry, we removed known binaries (as marked in the HH14 target table) that had separate spectral observations but were unresolved in 2MASS, but retained systems that were either resolved in both photometry and spectroscopy or were unresolved in both.
The observations and data reduction for the selected Taurus sample are described in detail in HH14, but are briefly described here for completeness. HH14 obtained low-resolution spectra on the Hale 200-inch telescope at Palomar Observatory using the Double Spectrograph \citep[DBSP;][]{Oke1982} and on the Keck I telescope at Keck Observatory using the Low-Resolution Imaging Spectrograph \citep[LRIS;][]{Oke1995, McCarthy1998} between November 2006 and December 2008. The spectra were reduced and flux calibrated using spectrophotometric standards. Spectra of known binary stars with separations of $\rho < 5\arcsec$ were extracted simultaneously using PSF fitting of a reference source to extract one component's flux and then the other.
HH14 calculated the mass, spectral type, bolometric luminosity L$_{bol}$, and optical excess (veiling) at 7510 \AA\ (denoted as $r_{7510}$), among other properties of their sample. For clarity, we note here that $r_{\lambda} = \frac{F_{acc}}{F_{phot}}$ at a given wavelength $\lambda$, and can be measured as a fractional decrease in the depth of spectral lines relative to the normalized continuum, which is equivalent to comparing an observed, flux-calibrated spectrum to combined photospheric and accretion spectrum models.
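As an explicit illustration of this definition (our own worked example, not a quantity from HH14), adding a featureless accretion continuum $F_{acc}$ to the photospheric flux $F_{phot}$ reduces the continuum-normalized depth of a photospheric absorption line from $d_{phot}$ to
\begin{equation*}
d_{obs} = \frac{d_{phot}}{1 + r_{\lambda}}, \qquad \mathrm{so} \qquad r_{\lambda} = \frac{d_{phot}}{d_{obs}} - 1;
\end{equation*}
a line whose depth is halved relative to a veiling-free template therefore corresponds to $r_{\lambda} = 1$.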
To construct our preliminary sample we selected all HH14 targets that fit the above criteria and were not marked as ``continuum'' objects (objects without visible spectral lines). We were left with 175 targets. If an object had multiple veiling measurements we took the median of the values. HH14 found that the veiling did not change by more than a factor of three for systems with multiple veiling measurements, and that strongly veiled systems never became weakly veiled (or vice versa). We assume that the median is roughly representative of the veiling value, but note that there may be some scatter in the veiling-NIR excess relation because of variable veiling.
HH14 performed their calculations based on an assumed distance to each system, typically derived using a parallax for a nearby star. The majority of our targets were included in the \textit{Gaia} EDR3 catalog \citep{Gaia2016, Gaia2021}, so when possible we used the distances calculated by inverting the \textit{Gaia} parallax. 14 systems did not have a \textit{Gaia} match, and for those systems we used the HH14 distance. In HH14 the assumed distance for each system was dependent on its location in Taurus: the assumed distance was 131 pc for stars near the Lynds 1495 complex \citep{Torres2012}; 147 pc for stars near T Tau \citep{Loinard2007}; 161 pc for stars near HP Tau \citep{Torres2009}; and 140 pc for all other Taurus objects. Because the calculation of NIR excesses was dependent on the system luminosity, which was calculated by HH14 using their original distances, we rescaled the luminosity to the correct distance for systems with \textit{Gaia} parallaxes.
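Explicitly, because the observed flux is fixed, this rescaling is the standard inverse-square correction,
\begin{equation*}
L_{bol,\,new} = L_{bol,\,HH14} \left( \frac{d_{Gaia}}{d_{HH14}} \right)^{2},
\end{equation*}
where $d_{HH14}$ is the distance originally assumed by HH14 for that region of Taurus.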
The names, 2MASS source names, veiling values, distances, and $A_{V}$\ values for our targets are listed in Table \ref{tab:source_params}.
\subsection{Identifying NIR Photometry}
\begin{figure*}
\gridline{\fig{CMD_ex.pdf}{0.45\linewidth}{}
\fig{CMD_unex.pdf}{0.45\linewidth}{}
}
\gridline{\fig{HRD_HH14.pdf}{0.45\linewidth}{}
}
\caption{Top: The observed (left) and de-reddened (right) color-magnitude diagrams plotted as $M_{J}$ vs. 2MASS $J-K$ for the input sample, where the color coding corresponds to the optical veiling value. The arrow on the upper right of each figure denotes a reddening vector for $A_{V}$ = 1 using the \citet{Cardelli1989} reddening law. Binaries are denoted with `x' symbols, while single stars are points. Isochrones from the MIST models spanning ages of 0.1-10 Myr are underlaid on the figures. The majority of points lie to the right of the isochrones, indicating that they are redder than expected due to the presence of a circumstellar disk. The high-veiling stars typically have large $J-K$ excesses. The stars still fall on the red side of the isochrones after being de-reddened because of the circumstellar disks. Bottom: A Hertzsprung-Russell Diagram of the input sample, plotted using the HH14 measured luminosity and a temperature derived by converting the HH14 spectral types to $T_{eff}$ using the \citet{Pecaut2013} SpT-$T_{eff}$ conversion. The color bar again indicates the veiling for each star; binaries are `x' markers, and single stars are points. Many of the stars are very young, and the high-veiling stars are dispersed among the sample.}
\label{fig:CMD_compare}
\end{figure*}
To select the NIR photometry for our analysis, we crossmatched the HH14 Taurus sample with the 2MASS \citep{Skrutskie2006} and ALLWISE \citep{Wright2010} surveys to find the measured NIR magnitudes ($JHK_{S}W1W2W3W4$) for each target. Although our focus was on the NIR 2MASS filters ($JHK_{s}$), we included the two short-wavelength WISE filters ($\lambda_{eff, W1} = 3.6 \mu$m and $\lambda_{eff, W2} = 4.5 \mu$m) because they are analogous to the L and M bands but did not extend our analysis further into the MIR. However, we used the WISE W4 ($\lambda_{eff, W4} = 22 \mu$m) band to identify disk-bearing stars as described in the following section. After cross-matching with 2MASS and WISE, we retained 163 stars with individual 2MASS and WISE photometry entries corresponding to each optical spectrum from HH14. The $JHK_{S}W1W2$ photometry for each source is listed in Table \ref{tab:source_params}.
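A minimal sketch of such a positional crossmatch (using astropy; the array names and the matching tolerance shown here are illustrative, not the exact values used) is:
\begin{verbatim}
from astropy.coordinates import SkyCoord
import astropy.units as u

# Coordinates of the HH14 Taurus targets and of the 2MASS/ALLWISE catalog rows
targets = SkyCoord(ra=ra_hh14 * u.deg, dec=dec_hh14 * u.deg)
catalog = SkyCoord(ra=ra_cat * u.deg, dec=dec_cat * u.deg)

# Nearest-neighbor match; keep matches within a chosen tolerance
idx, sep2d, _ = targets.match_to_catalog_sky(catalog)
good = sep2d < 2.0 * u.arcsec
\end{verbatim}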
Figure \ref{fig:CMD_compare} shows some of the properties of our sample. The top row shows two color-magnitude diagrams in $M_{J}$ vs. J-K space, with MIST isochrones \citep{Paxton2011, Paxton2013, Paxton2015, Dotter2016, Choi2016} ranging from 0.1 to 10 Myr underlaid in gray. The single stars are marked as points and the binary stars have `x' markers, and all points are color coded by their veiling value. The top left panel shows the systems before correction for extinction, and the top right panel shows the systems after they have been de-reddened. In both panels many systems fall redward of the isochrones because of the presence of a circumstellar disk producing NIR excess. The systems with high veiling typically have large J-K excesses even after being corrected for reddening, indicating that they are disk hosts.
The bottom panel of Figure \ref{fig:CMD_compare} shows the sample on an HR diagram using the system luminosities from HH14 \edit1{(after rescaling to the \textit{Gaia} distance)}, and a temperature derived from the \citet{Pecaut2013} spectral type-$T_{eff}$ conversion, which we used throughout this work. Isochrones ranging from 0.1-10 Myr are again underlaid on the figure, and the markers and color scheme are the same as in the top row of the figure. The high-veiling sources are mixed with the remainder of the population. We did not remove the weak-lined T Tauri stars HH14 used as spectral type templates from the sample, so some targets appear much older than others.
\subsection{Identifying Binary Companions}
Multiplicity, especially close stellar companions (separation $\rho < 50$ au, or $\rho < 0.35 \arcsec$ at the typical 145 pc Taurus distance), can greatly complicate disk structure and accretion dynamics \citep[e.g.,][]{Tofflemire2017}. Thus, we wished to identify multiple stars in our sample to explore whether multiplicity impacted the relationship between veiling and NIR excess, and to remove them from our analysis if necessary.
To identify binaries in our sample, we crossmatched our sample with the binaries identified in high-resolution surveys by \citet{Kraus2011}, \citet{Kraus2012b}, \citet{Schaefer2014}, and \citet{Daemgen2015}. Because we did not search for spectroscopic binaries, our measured multiplicity is a lower limit, but should include the majority of binaries in the sample. We found that 71 of our 163 targets were resolved multiples, and thus measured a multiplicity fraction of $f_{binary} \sim 0.44$, which is slightly lower than the multiplicity fraction of $f_{binary} \sim$ 60\% measured for Taurus \citep{Kraus2011, Duchene2013, Daemgen2015}. This is likely because we had previously removed targets that are known wide-separation binaries with separate spectroscopic observations but unresolved photometry from 2MASS and WISE, which artificially reduced the measured binary fraction. This also meant that our binary sample was comprised mostly of very close ($\rho < 0.5\arcsec$) or very wide ($\rho > 5\arcsec$) binaries. Half of the binary sample (37 out of 71 systems) had a wider separation ($\rho > 0.35\arcsec$, roughly 50 au) than is expected to affect the disk properties. The multiplicity status and source for multiplicity determination of each target is listed in Table \ref{tab:source_params}.
\section{NIR Excess Calculation}\label{sec:excess}
After identifying the 2MASS and WISE magnitudes for each object, we de-reddened the magnitudes by converting the A$_{V}$ measured by HH14 to A$_{\lambda}$ using the reddening law of \citet{Cardelli1989}, which defines $A_{J}/A_{V} = 0.282, A_{H}/A_{V} = 0.19, A_{K_{s}}/A_{V} = 0.114$, and $A_{W1}/A_{V} = 0.056$. In W2, W3, and W4, where the \citet{Cardelli1989} extinction law was not defined, we assumed the extinction coefficient was effectively zero. Although there are other extinction law determinations that extend further into the infrared \citep[e.g.,][]{Davenport2014}, we used the \citet{Cardelli1989} law to match HH14.
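A minimal sketch of the de-reddening step, with the extinction ratios quoted above (the dictionary and function names are ours):
\begin{verbatim}
# A_lambda / A_V from Cardelli et al. (1989); W2-W4 treated as effectively zero
A_RATIO = {'J': 0.282, 'H': 0.19, 'Ks': 0.114, 'W1': 0.056,
           'W2': 0.0, 'W3': 0.0, 'W4': 0.0}

def deredden(observed_mags, A_V):
    """Return de-reddened magnitudes, m_0 = m_obs - (A_lambda/A_V) * A_V."""
    return {band: m - A_RATIO[band] * A_V for band, m in observed_mags.items()}
\end{verbatim}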
To calculate the intrinsic magnitudes for our sample, we began with the bolometric luminosity values measured by HH14 from their flux-calibrated spectra. We converted $L_{bol}$ to an absolute bolometric magnitude by asserting that $M_{bol} = -2.5\log_{10}(L_{bol}/L_{\odot}) + M_{bol, \odot}$, where $M_{bol, \odot}$ = 4.74. \edit1{Using the appropriate distance for each system, we converted the absolute bolometric magnitude $M_{bol}$ to an apparent magnitude $m_{bol}$.}
\begin{figure}
\gridline{\fig{example_SED_HD-283572.pdf}{0.95\linewidth}{}
}
\gridline{\fig{example_SED_LkCa-4.pdf}{0.95\linewidth}{}
}
\gridline{\fig{example_SED_V1075-Tau.pdf}{0.95\linewidth}{}
}
\caption{Infrared SEDs for several example systems without disks. The markers and color scheme are the same as in Figure \ref{fig:source_SEDs}. These stars do not have a significant IR excess. The remainder of the disk-free SEDs are shown in a Figure Set in the online version of this paper.}
\label{fig:nodisk_source_SEDs}
\end{figure}
We converted each $m_{bol}$ to a J magnitude using the \citet{Pecaut2013} J-band bolometric correction as $J = m_{bol} - BC_{J}$. Then, we inferred the remaining predicted photospheric magnitudes ($HK_{s}W1W2W3W4$) using the appropriate intrinsic colors from \citet{Pecaut2013}. Finally, we subtracted the predicted intrinsic magnitudes from the de-reddened observed magnitudes to measure the excess in each photometric band.
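The chain from bolometric luminosity to per-band excesses can be sketched as follows (our own illustration; the bolometric correction and intrinsic colors would be interpolated from the \citet{Pecaut2013} tables at each star's spectral type):
\begin{verbatim}
import numpy as np

M_BOL_SUN = 4.74

def photospheric_mags(L_bol, d_pc, BC_J, colors):
    """Predicted photospheric JHKsW1W2 magnitudes from L_bol (in L_sun),
    distance (pc), the J-band bolometric correction, and intrinsic colors,
    e.g. colors = {'J-H': ..., 'H-Ks': ..., 'Ks-W1': ..., 'W1-W2': ...}."""
    M_bol = -2.5 * np.log10(L_bol) + M_BOL_SUN
    m_bol = M_bol + 5.0 * np.log10(d_pc / 10.0)
    J = m_bol - BC_J
    H = J - colors['J-H']
    Ks = H - colors['H-Ks']
    W1 = Ks - colors['Ks-W1']
    W2 = W1 - colors['W1-W2']
    return {'J': J, 'H': H, 'Ks': Ks, 'W1': W1, 'W2': W2}

# Excess in magnitudes (negative values indicate flux above the photosphere):
# excess[band] = dereddened_mags[band] - photospheric_mags(...)[band]
\end{verbatim}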
Figure \ref{fig:nodisk_source_SEDs} shows spectral energy distributions (SEDs) of several systems without disks, including the observed system fluxes, the intrinsic stellar fluxes, and the excess (disk) fluxes, which for these disk-free systems are typically close to zero. To demonstrate the error on the SED calculation caused by observational uncertainties in $A_{V}$, bolometric luminosity, and spectral type, we also plotted 100 samples of a BT-Settl model spectrum \citep{Allard2003, Barber2006, Allard2011, Caffau2011, Allard2012, Allard2013} calculated with perturbed values of those parameters drawn from a normal distribution with a standard deviation equal to the quoted measurement error from HH14. From visual inspection of all system SEDs we identified nine sources with incorrect $A_{V}$\ measurements and two systems (DG Tau and IQ Tau) with otherwise peculiar SEDs. We removed these 11 systems from our analysis, because the SEDs indicated that we could not calculate an accurate NIR excess. Systems that were removed from the analysis because of anomalous SEDs are marked with a note in Table \ref{tab:source_params}. Figure Set 1 in the online version of this paper shows SEDs for all the disk-free systems in our sample.
The measurement error on $A_{V}$, L$_{bol}$, and SpT also affects the inferred NIR excess by altering the predicted intrinsic magnitude for a system. To assess the error on the NIR excess we used a bootstrap analysis. Using 1000 samples for each system, we randomly drew a spectral type, luminosity, and extinction from a normal distribution with a mean equal to the published value for each system and a FWHM equal to the error on the measurement given by HH14: $\sigma_{SpT} = 0.5$ subclass; $\sigma_{L_{bol}} = 10\%$; $\sigma_{A_{V}} = 0.3$mag. Using the perturbed values, we inferred a NIR excess using the procedure described above, and took the error on each excess to be the standard deviation of the 1000 bootstrapped draws. To calculate the error on the color excess we added the bootstrapped errors for the two relevant NIR bands in quadrature.
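A minimal sketch of this bootstrap (the helper \texttt{infer\_excess} is a placeholder for the de-reddening and photosphere-prediction steps described above; for simplicity the quoted errors are used directly as the widths of the normal distributions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_excess_error(spt, L_bol, A_V, n_draws=1000,
                           sig_spt=0.5, frac_L=0.10, sig_AV=0.3):
    """Propagate SpT, L_bol, and A_V uncertainties into the NIR excesses."""
    draws = []
    for _ in range(n_draws):
        spt_i = rng.normal(spt, sig_spt)
        L_i = rng.normal(L_bol, frac_L * L_bol)
        AV_i = rng.normal(A_V, sig_AV)
        draws.append(infer_excess(spt_i, L_i, AV_i))  # one excess per band
    return np.std(draws, axis=0)
\end{verbatim}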
\subsection{Infrared Disk Indicators}
As part of our sample selection process we wished to only select sources with disks, as those are the systems that should be accreting significantly. To identify disks we could not simply select sources with nonzero veiling, because young stars have variable accretion, meaning that stars with disks may be in a quiescent accretion phase and thus have no measured veiling. A more reliable indicator of disk presence is mid-IR excess, usually characterized by a K$_{s} - [24]$ or [24] $\mu$m excess, where [24] is the Spitzer 24 $\mu$m band.
\begin{figure*}
\gridline{\fig{example_SED_2M-0415+2909.pdf}{0.45\linewidth}{}
\fig{example_SED_CIDA-12.pdf}{0.45\linewidth}{}
}
\gridline{\fig{example_SED_DO-Tau.pdf}{0.45\linewidth}{}
\fig{example_SED_GK-Tau-A.pdf}{0.45\linewidth}{}
}
\gridline{\fig{example_SED_IS-Tau.pdf}{0.45\linewidth}{}
\fig{example_SED_LkHa-358.pdf}{0.45\linewidth}{}
}
\caption{Infrared SEDs for several example systems with disks. A BT-Settl model spectrum of the appropriate temperature, with solar metallicity and surface gravity of $\log(g) = 4.5$, is plotted in black, with the results from 100 MCMC draws shown in gray. The observed (orange points), intrinsic (green `x'), and excess (blue `x') fluxes are plotted in units of erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. The example spectrum has fluxes calculated using the \edit1{HH14 temperature, \textit{Gaia} distance, and rescaled L$_{bol}$}. The orange F$_{7510}$ point was measured by HH14 and corrected for excess/veiling, and extincted in this work using the measured A$_{V}$ from HH14 and the \citet{Cardelli1989} extinction law, with R$_{V}$ = 3.1. The remainder of the disk-host star SEDs are shown in a Figure Set in the online version of this paper.}
\label{fig:source_SEDs}
\end{figure*}
We used two methods to identify disk-bearing stars in our sample. First, we assumed that the WISE W4 band, with $\lambda_{cen} = 22 \mu$m, was analogous to the Spitzer 24$\mu$m band. We used a disk excess metric from \citet{Rebull2010}, which found that stars with disks have $K_{s} < 14$ and $K_{s} - [24] > 1$, where we replaced [24] with W4. Using this metric, we identified 133 stars with disks. We crossmatched the remaining 28 stars with the \citet{Rebull2010} Spitzer survey of Taurus, since Spitzer was more sensitive at 24 $\mu$m than WISE was at 22 $\mu$m, but did not find any additional disk-hosting stars. Thus, the final sample of disk-bearing stars consisted of 133 systems. The disk host status of each target is listed in Table \ref{tab:source_params}.
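The disk criterion adopted from \citet{Rebull2010}, with W4 substituted for the Spitzer [24] band, reduces to a simple cut (a sketch; the function name is ours):
\begin{verbatim}
def has_disk(Ks, W4):
    """Rebull et al. (2010)-style disk criterion with WISE W4 in place of [24]."""
    return (Ks < 14.0) and (Ks - W4 > 1.0)
\end{verbatim}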
Figure \ref{fig:nodisk_source_SEDs} shows example SEDs for three systems that do not have an IR disk excess, while Figure \ref{fig:source_SEDs} shows SEDs of several representative stars that host disks. SEDs for all stars in the sample are contained in Figure Sets 1 and 2 in the online version of this paper. The disk-free stars are fit well by both the intrinsic colors calculated using \citet{Pecaut2013} and the model spectra calculated using the BT-Settl models, and show little to no excess across the full wavelength range of the SED. In contrast, the disk host star SEDs all show excess that is orders of magnitude larger than the intrinsic stellar photospheric flux at long wavelengths, and often have significant excesses, with fluxes comparable to the intrinsic stellar brightness, across the full SED.
\section{Correlations Between Optical Veiling and NIR Excesses} \label{sec:veiling correlations}
\subsection{Relationships Between Veiling and NIR Colors}
Any correlation between optical veiling and NIR excess is driven by the relation between accretion in two different regimes of the star-disk system. The accretion shock on the stellar surface produces UV and optical veiling, so changes in optical veiling should be caused by changes in accretion. In the NIR, the emission is (at least predominantly) caused by the inner rim of the dust disk, and changes to the environment of the inner disk, such as by accretion variability, should cause changes in the disk emission. Thus, quantifying and understanding the relationship between these two seemingly disparate processes informs our understanding of accretion and its potential effects on the circumstellar disk environment. The most easily observable relations are the correlations between IR color excesses and veiling, which were the focus of past work \citep[e.g.,][]{Hartigan1990, Kenyon1995}. Thus, we began by exploring those correlations in the NIR for our sample.
The color excess describes how red a source is by measuring the flux ratio between two points of its SED. A larger color excess indicates a larger flux ratio, which can be produced by either a larger absolute excess or a steeper SED slope; because color is a relative measure, it is independent of the disk luminosity. For the same reason, the color excess does not depend directly on the stellar luminosity, since it does not require removing the underlying stellar flux. To quantify the relationship between NIR color excess and optical veiling, we calculated linear regressions for the color excess as a function of the optical veiling $r_{7510}$. To explore any possible differences between single and binary star relations, we calculated separate regressions for each subsample of stars with disks.
We also calculated separate regressions for samples including or excluding the data points where $r_{7510} = 0$. The sample that includes the points with $r_{7510} = 0$ encompasses the range of possible outcomes for disk-bearing stars, from (nearly) quiescent accretion to accretion luminosity nearly comparable to the stellar photosphere. However, we found that the linear regression for that sample had a slope that was driven by stars with $r_{7510} = 0$, and hence with quiescent accretion. Therefore, we also wished to investigate the relationship between veiling and NIR color excess for only actively accreting systems.
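A minimal sketch of the two regression cases (variable and function names are ours; the quoted fits were performed separately for the single-star and binary subsamples):
\begin{verbatim}
import numpy as np
from scipy import stats

def fit_excess_vs_veiling(r7510, excess, drop_zero=False):
    """Linear fit of (color or single-filter) excess against optical veiling,
    optionally excluding quiescent systems with r_7510 = 0."""
    r = np.asarray(r7510, dtype=float)
    y = np.asarray(excess, dtype=float)
    if drop_zero:
        keep = r > 0
        r, y = r[keep], y[keep]
    res = stats.linregress(r, y)
    return res.slope, res.intercept, res.pvalue, res.stderr
\end{verbatim}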
\begin{figure*}
\gridline{\fig{JHexcess_nice.pdf}{0.45\linewidth}{}
\fig{HKexcess_nice.pdf}{0.45\linewidth}{}
}
\gridline{\fig{KW1excess_nice.pdf}{0.45\linewidth}{}
\fig{W1W2excess_nice.pdf}{0.45\linewidth}{}
}
\caption{Veiling-excess correlations for NIR colors including points where $r_{7510} = 0$. The binaries are shown in teal with `x' markers, and the single stars are shown as magenta dots. The best-fit lines for the binary and single star regressions are shown as a dot-dash dark blue and solid black line, respectively. The magenta and teal lines underlaid on the figure indicate the error on the best-fit line, calculated using 200 bootstrapped values for the slope and y-intercept of each line. The magenta and teal text on each figure is the equation of the best-fit line for the single and binary stars, respectively, calculated from a linear regression. The binary star slopes are shallower than the single star slopes, indicating that the binary star SEDs are not as red (e.g., not as steeply sloped) as the single stars. }
\label{fig:veiling_color_corr}
\end{figure*}
\begin{figure*}
\plottwo{compare_color_slopes.pdf}{compare_color_slopes_nor0.pdf}
\caption{The slopes of the best fit line relating the optical veiling and color excess for binary and single stars (teal and magenta, respectively), where the error bars are the RMS error on the slope, with (left) and without (right) systems with r$_{7510} = 0$. \textbf{Left:} There are no statistically significant differences between the single and binary slopes for the regression including quiescent disk hosts (systems where $r_{7510} = 0$). \textbf{Right:} There are significant differences between the single and binary slopes for \edit1{H-K and K-W1}. When excluding the $r_{7510} = 0$ points, the binary slopes are shallower than when the $r_{7510} = 0$ points are included in the regression.}
\label{fig:color_slopes}
\end{figure*}
Figure \ref{fig:veiling_color_corr} shows plots of color excess versus $r_{7510}$ for each filter combination including points where $r_{7510} = 0$. For each color we fit a linear trend to the binary and single stars separately. The best-fit lines from each linear regression are plotted in Figure \ref{fig:veiling_color_corr}, along with the data points used in the calculation. The thin magenta and teal lines indicate the error on the linear fit, visualized by 200 bootstrapped lines with parameters \edit1{drawn using the covariance matrix from the linear fit}. We found that almost all of the linear regressions (except the binary W1-W2 relation) were statistically significant and consistent with a non-zero slope (p-value $<$ 0.05), indicating that there is a correlation between color excess and optical veiling in J-H, H-K, and K-W1 colors, as well as W1-W2 for single stars. There is no significant difference in slope between binary and single stars for any color in the regressions that include $r_{7510} = 0$ points, as illustrated by the left panel of Figure \ref{fig:color_slopes}.
After excluding $r_{7510} = 0$ points, all the measured slopes are shallower. The discrepancy between the single and binary stars grows moving redward, and H-K and K-W1 have slopes that differ more than the mutual uncertainty. Shallower slopes in $r_{7510}$ vs. color excess space indicate that the slope of the mean binary SED is shallower than the mean of the slope of the single star SED, especially at longer wavelengths in actively-accreting systems.
\begin{deluxetable*}{cCCCCCCC}
\tablecaption{Best-fit line parameters, P-values, and $\rho$ values for each NIR color \label{tab:color_params}}
\setcounter{table}{1}
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{Color} & \colhead{Slope} & \colhead{Y-intercept} & \colhead{P-value} & \colhead{Spearman $\rho$} & \colhead{Slope} & \colhead{Y-intercept} & \colhead{Y-intercept}\\
\colhead{} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{(no $r=0$)} & \colhead{(no $r=0$)} & \colhead{$f(r_{7510}-\overline{r_{7510}})$\tablenotemark{a}}\\
}
\startdata
\multicolumn{8}{c}{Single Stars}\\
\hline
J-H & 0.80 $\pm$ 0.16 & 0.16$\pm$ 0.03 & $<0.0001$ & 0.49 & 0.73 $\pm$ 0.25 & 0.18$\pm$ 0.06 & 0.23$\pm$ 0.04\\
H-K & 1.06 $\pm$ 0.13 & 0.04$\pm$ 0.02 & $<0.0001$ & 0.66 & 1.00 $\pm$ 0.18 & 0.06$\pm$ 0.04 & 0.12$\pm$ 0.03\\
K-W1 & 1.56 $\pm$ 0.22 & 0.14$\pm$ 0.04 & $<0.0001$ & 0.63 & 1.30 $\pm$ 0.29 & 0.23$\pm$ 0.07 & 0.24$\pm$ 0.05\\
W1-W2 & 0.88 $\pm$ 0.21 & 0.26$\pm$ 0.04 & $<0.0001$ & 0.54 & 0.71 $\pm$ 0.17 & 0.31$\pm$ 0.04 & 0.33$\pm$ 0.04\\
\hline
\multicolumn{8}{c}{Binary Stars}\\
\hline
J-H & 0.55 $\pm$ 0.19 & 0.20$\pm$ 0.03 & 0.001 & 0.42 & 0.53 $\pm$ 0.20 & 0.20$\pm$ 0.04 & 0.24$\pm$ 0.03\\
H-K & 0.92 $\pm$ 0.21 & 0.11$\pm$ 0.03 & $<0.0001$ & 0.56 & 0.67 $\pm$ 0.26 & 0.17$\pm$ 0.05 & 0.22$\pm$ 0.04\\
K-W1 & 1.51 $\pm$ 0.50 & 0.30$\pm$ 0.07 & $<0.0001$ & 0.68 & 0.41 $\pm$ 0.64 & 0.58$\pm$ 0.12 & 0.61$\pm$ 0.09\\
W1-W2 & 0.71 $\pm$ 0.36 & 0.34$\pm$ 0.05 & 0.0001 & 0.49 & 0.04 $\pm$ 0.40 & 0.51$\pm$ 0.07 & 0.52$\pm$ 0.06\\
\enddata
\tablenotetext{a}{The y-intercept of the line calculated using a regression with the x-values centered at the mean of the veiling values, which is $\overline{r_{7510}} = 0.072$.}
\end{deluxetable*}
The numerical values (slope, Y-intercept, and P-value) of each linear regression are listed in Table \ref{tab:color_params}, and the equation of the best-fit line is also shown on each panel of Figure \ref{fig:veiling_color_corr} for binaries and single stars. Table \ref{tab:color_params} presents \edit1{three} values for the linear fit y-intercept \edit1{and two slope values}. The first value (denoted as $f(r_{7510})$) was calculated using the data without any modification. However, this calculation introduces a covariance between the slope and y-intercept errors, artificially reducing the measured error on the y-intercept. Thus, we also calculated the y-intercept and its associated error after shifting the data so that the veiling values were centered at 0 (i.e., after subtracting the mean veiling value $\overline{r_{7510}} = 0.072$ from all veiling measurements). This additional y-intercept and associated error are denoted as $f(r_{7510} - \overline{r_{7510}})$ in Table \ref{tab:color_params}. \edit1{Finally, we also show the slope and y-intercept calculated when performing the linear fit while excluding the $r_{7510} = 0$ points.}
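For reference, this is the standard least-squares identity: centering the abscissa leaves the slope unchanged and only shifts the intercept,
\begin{equation*}
y = m\,r_{7510} + b = m\left(r_{7510} - \overline{r_{7510}}\right) + b', \qquad b' = b + m\,\overline{r_{7510}},
\end{equation*}
and with a centered abscissa the estimated slope and intercept are uncorrelated, so the error quoted on $b'$ is free of the slope-intercept covariance.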
The p-values found via the linear regression indicate whether the correlation between NIR color and veiling is statistically significant, but do not quantify the strength of the correlations: a small p-value does not necessarily imply a strong correlation. For example, a weak correlation within a large data set (N$\rightarrow \infty$) could have a very small p-value because the large number of samples drives down the statistical uncertainty. \edit1{To assess the strength of each correlation, we calculated a Spearman $\rho$ coefficient for each sample of binaries and single stars. The $\rho$ values for single and binary stars in each color are listed in Table \ref{tab:color_params}. For single stars, H-K vs. $r_{7510}$ is the strongest correlation, with $\rho_{S, H-K} = 0.66$. For binary stars, the strongest correlation is K-W1, with $\rho_{B, K-W1} = 0.68$.}
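A one-line sketch of this calculation (the array names are placeholders for the per-subsample veiling and color-excess values):
\begin{verbatim}
from scipy import stats

rho_single, p_single = stats.spearmanr(r7510_single, color_excess_single)
rho_binary, p_binary = stats.spearmanr(r7510_binary, color_excess_binary)
\end{verbatim}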
To illustrate the differences in linear regression best-fit slope between each color, and the differences between single and binary stars, Figure \ref{fig:color_slopes} shows the linear regression best-fit slope as a function of color for J-H, H-K, K-W1, and W1-W2. There are no statistically significant differences between the single and binary star best-fit slopes in the sample including $r_{7510} = 0$ points. In the regressions performed while excluding $r_{7510} = 0$ points, the difference between the single and binary slopes is only outside the mutual uncertainties in W1-W2. On average, the binary star slope is (83 $\pm$ 10)\% of the single star slope when including $r_{7510} = 0$ points, and (44 $\pm$ 25)\% when excluding $r_{7510} = 0$ points. The W1-W2 slopes are shallower than those of other colors because by W1 and W2 the disk excess SED begins to flatten out (e.g., Figure \ref{fig:source_SEDs}), meaning that the dynamic range of the color excess decreases.
The binary star fits generally have smaller $\rho$ coefficients than the single star fits, indicating that the binary excess is less predictive of veiling than the single star excess. This is likely because there are very few high-veiling binary stars, so the binary fit is dominated more by the scatter in the low-veiling and no-veiling sources than the single star fit, which draws on more high-veiling sources.
\subsection{Relationships Between Veiling and NIR Excesses}
Past work on this topic has focused on color excesses instead of single-filter excesses because calculating absolute excesses requires a robust understanding of the SED of the underlying stellar photosphere, which has historically been difficult to constrain, especially outside of the optical wavelength regime. However, individual filter excesses, when accessible, provide more specific information about the absolute, rather than the relative, disk SED. Single-filter excesses trace the disk SED, and correlations between NIR absolute excess and optical veiling reflect and quantify the relationship between the accretion column and the dust disk. Individual filter excesses also remove the zero-point ambiguity of color excesses, which are relative measurements and do not account for the J band excess caused by the disk SED \citep{Cieza2005, Fischer2011}.
After calculating the NIR color excess, we used the \citet{Pecaut2013} bolometric correction $BC_{J}$ to calculate absolute excesses in all five filters used in this analysis. Then, we calculated a linear regression comparing each excess against the optical veiling separately for the samples of single and binary stars with disks. We calculated regressions both including and excluding systems with $r_{7510} = 0$, to examine whether quiescent disk hosts significantly impacted the relationship between NIR excess and veiling. However, we found that there was not a significant difference between the slopes calculated with and without the $r_{7510} = 0$ systems, so we concentrate on the results including the veiling-free systems.
\begin{figure*}
\plotone{mag_excess_nice.pdf}
\caption{Veiling-excess correlations for NIR filters. Binaries are shown as teal `x' markers, with the best-fit line from the linear regression overplotted as a dark blue dashed line. Single stars are shown as magenta points, with the best-fit line from a linear regression shown as the black solid line. The thin teal and magenta lines show the error on the binary and single star best-fit lines, respectively, visualized using 200 draws \edit1{from the covariance matrix returned by the linear fit}. The error bars are not plotted because they are smaller than the points. The text on each figure is the equation of the best-fit line for the single or binary stars, calculated using a linear regression. The slope of the best-fit line increases substantially moving redward across the SED.}
\label{fig:veiling_excess_corr}
\end{figure*}
The best-fit lines from each linear regression are plotted in Figure \ref{fig:veiling_excess_corr}, along with the data points used in the calculation and 200 bootstrapped samples \edit1{drawn from the covariance matrix returned from the linear fit}. The binary and single star parameters were calculated separately and are shown as teal and magenta lines, respectively. The equation of the best-fit line for the single and binary stars is also shown on each panel. We note that the excess is shown as magnitudes in excess of the photosphere, so a more negative number indicates a larger excess, and a negative slope to the linear fit indicates a positive correlation between stronger veiling and stronger NIR excess.
\begin{deluxetable*}{cCCCCCCC}
\tablecaption{Best-fit line parameters, $\rho$ values, and P-values for each NIR magnitude \label{tab:mag_params}}
\setcounter{table}{2}
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{Filter} & \colhead{Slope} & \colhead{Y-intercept} & \colhead{P-value} & \colhead{Spearman $\rho$} & \colhead{Slope} & \colhead{Y-intercept} & \colhead{Y-intercept}\\
\colhead{} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{$f(r_{7510})$} & \colhead{(no $r=0$)} & \colhead{(no $r=0$)} & \colhead{$f(r_{7510}-\overline{r_{7510}})$\tablenotemark{a}}\\
}
\startdata
\multicolumn{8}{c}{Single Stars}\\
\hline
J & -1.78$\pm$0.40 & 0.00$\pm$0.07 & 0.001 & -0.37 & -1.82$\pm$0.62 & 0.01$\pm$0.14 & -0.13$\pm$0.04\\
H & -2.57$\pm$0.41 & -0.16$\pm$0.07 & 0.0001 & -0.45 & -2.55$\pm$0.61 & -0.17$\pm$0.14 & -0.35$\pm$0.04\\
K & -3.64$\pm$0.44 & -0.20$\pm$0.07 & $<0.0001$ & -0.60 & -3.55$\pm$0.65 & -0.23$\pm$0.15 & -0.47$\pm$0.04\\
W1 & -5.20$\pm$0.55 & -0.35$\pm$0.09 & $<0.0001$ & -0.67 & -4.85$\pm$0.77 & -0.46$\pm$0.18 & -0.72$\pm$0.04\\
W2 & -6.07$\pm$0.69 & -0.61$\pm$0.12 & $<0.0001$ & -0.66 & -5.56$\pm$0.88 & -0.77$\pm$0.20 & -1.05$\pm$0.04\\
\hline
\multicolumn{8}{c}{Binary Stars}\\
\hline
J & -1.01$\pm$0.51 & -0.05$\pm$0.07 & $>0.05$ & -0.10 & -1.63$\pm$0.75 & 0.11$\pm$0.14 & -0.13$\pm$0.06\\
H & -1.55$\pm$0.57 & -0.25$\pm$0.08 & $>0.05$ & -0.26 & -2.15$\pm$0.77 & -0.09$\pm$0.14 & -0.36$\pm$0.06\\
K & -2.47$\pm$0.64 & -0.36$\pm$0.09 & 0.01 & -0.33 & -2.83$\pm$0.82 & -0.27$\pm$0.15 & -0.54$\pm$0.06\\
W1 & -3.98$\pm$0.81 & -0.66$\pm$0.11 & $<0.0001$ & -0.60 & -3.23$\pm$0.98 & -0.85$\pm$0.18 & -0.94$\pm$0.06\\
W2 & -4.69$\pm$1.09 & -0.99$\pm$0.15 & $<0.0001$ & -0.60 & -3.27$\pm$1.27 & -1.36$\pm$0.23 & -1.33$\pm$0.06\\
\enddata
\tablenotetext{a}{The y-intercept of the line calculated using a regression with the x-values centered at the mean of the veiling values, which is $\overline{r_{7510}} = 0.072$.}
\end{deluxetable*}
Table \ref{tab:mag_params} shows the relevant values from the linear fit, including the y-intercept calculated using a linear regression with the data points centered at the origin \edit1{and the linear parameters for a fit excluding the $r_{7510} = 0$ points}, as discussed in Section 4.1. From the linear regression we find that all filters have significant p-values (p$<$0.05) for both single and binary stars, indicating that the data are correlated, except for the J \edit1{and H} band binary relations. To assess the strength of the correlation between veiling and individual filter excess \edit1{we calculated the Spearman $\rho$ coefficient, also listed for each fit in Table \ref{tab:mag_params}. Because the data are in magnitudes, a more negative $\rho$ corresponds to a stronger positive correlation between veiling and NIR excess. We find that the binary star fits are systematically more weakly correlated than single stars, and that the W1 filter shows the strongest correlation between veiling and NIR excess, with Spearman $\rho$ values of -0.67 and -0.60 for single and binary stars, respectively.}
\begin{figure*}
\plottwo{compare_mag_slopes.pdf}{compare_mag_slopes_nor0.pdf}
\caption{The slopes of the best fit line to the relationship between absolute NIR excess and optical veiling for binary and single stars (teal and magenta, respectively), where the error bars are the RMS error on the slope with (left) and without (right) $r_{7510} = 0$ systems included. \textbf{Left:} When including $r_{7510} = 0$ points in the linear regression, there are no statistically significant differences between the single and binary slopes across all wavelengths. \textbf{Right:} After excluding the $r_{7510} = 0$ points, the differences between the single and binary sample slopes remain statistically insignificant except in W2.}
\label{fig:mag_slopes}
\end{figure*}
Figure \ref{fig:mag_slopes} shows the slopes of the binary and single star linear regressions as a function of wavelength both including (left) and excluding (right) $r_{7510} = 0$ systems. As the circumstellar disk becomes the dominant source of flux in the SED, the dynamic range of the excess increases, causing the slope of the best-fit line to increase across the SED. When the $r_{7510} = 0$ points are included, although some of the binary and single star slopes have overlapping error bars, on average the binary star slope is slightly shallower than the single star slope ($m_{binary} \approx 0.7 m_{single}$, although the relative difference is clearly not constant; Figure \ref{fig:mag_slopes}). When the $r_{7510} = 0$ points are excluded from the regression, the slopes are slightly shallower, and the differences in slope between binary and single stars become more significant toward the red rather than remaining relatively constant as in the case including the $r_{7510} = 0$ points. However, the single and binary stars only disagree by more than the mutual uncertainty in the reddest filter (W2).
\begin{figure*}
\plottwo{excess_SED_flux.pdf}{excess_SED_fbol.pdf}
\caption{Left: The SED points of the mean circumstellar disk for single (magenta) and binary (teal) stars in our sample, produced by converting each filter excess into a flux excess. The error bars show the RMS scatter of the sample, and the gray lines underlaid show each individual system's SED. There is large rms scatter among all the systems, but the mean binary disk flux is slightly larger than the single disk flux. Right: The same figure as on the left, but with all filter fluxes normalized by each system's bolometric flux. Because increased F$_{bol}$ should cause a larger dust truncation radius at the same temperature, increasing the amount of disk emission, this normalization attempts to remove that dependence. Across the middle of the SED binary disks remain brighter than single stars, but at each end of the SED binaries are fainter than singles.}
\label{fig:flux_SED}
\end{figure*}
To better demonstrate the structure of the disk SED, we converted the magnitude excesses for each system into flux excesses, and plotted the resulting mean and standard deviation of each filter, shown in the left panel of Figure \ref{fig:flux_SED}. The gray lines underlaid on the figure show the SEDs for individual systems. The error bars do not show the error on the measurement, but rather show the RMS spread in the distribution of samples. The SED peak falls at the H band for both single and binary stars, the location of the peak emission of a blackbody at $T_{eff} \approx 1800$ K, consistent with the expected dust sublimation temperature of $\sim 1500-2000$ K. The mean binary star disk SED is typically slightly brighter (F$_{binary} \approx 1.2$ F$_{single}$) than the single star SED, except in J band, where F$_{binary} \approx 0.8$ F$_{single}$. However, a Kolmogorov-Smirnov test \citep[K-S test;][]{Kolmogorov1933, Smirnov1939, Massey1951, Hodges1958} finds that the p-value for most filters is greater than 0.05 (the standard cutoff for statistical significance), indicating that there is no statistically significant difference in the disk flux distributions between single and binary stars.
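The comparison in each filter reduces to a two-sample K-S test (a sketch; the array names are placeholders for the per-filter disk fluxes of the two subsamples):
\begin{verbatim}
from scipy import stats

# p > 0.05: the single- and binary-star disk flux distributions are
# statistically indistinguishable in this filter
ks_stat, p_value = stats.ks_2samp(disk_flux_single, disk_flux_binary)
\end{verbatim}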
The right panel of Figure \ref{fig:flux_SED} shows the same data as the left panel, but normalized by the bolometric flux of each host star \edit1{(calculated using the rescaled $L_{bol}$ and \textit{Gaia} distance)}. The emitting area of the inner edge of the disk should increase with stellar flux, because more radiation will move the dust sublimation radius to larger distances, so normalizing by F$_{bol}$ attempts to correct for system-to-system variations in brightness and emitting area, allowing more direct comparison. In the normalized case, binary disks remain brighter than single disks in $KW1W2$ by $\sim 25\%$ but are fainter in $JH$ and $W3W4$ by $\sim 20\%$. From K-S tests, the normalized disk flux distributions are still statistically indistinguishable.
\section{Discussion} \label{sec:disc}
To understand the relationship between optical and infrared excess in pre-main sequence stars, we have assembled a large, homogeneously characterized sample of Taurus members using the measurements of HH14, calculated the infrared color and single-filter excesses for single and close binary stars using 2MASS and WISE observations ($JHK_{s}W1W2$ filters and J-H, H-K, K-W1, and W1-W2 colors) and investigated possible relations between the optical veiling and the NIR excess in both individual filters and NIR colors.
We found that NIR excess (JHKW1W2 filters, or $1.1 < \lambda < 4.5 \mu$m) is correlated with optical veiling across all bands and colors. Our finding that NIR and optical excesses are related is consistent with previous work \citep[e.g.,][]{Hartigan1990, Hartigan1991, Kenyon1995, Cieza2005}, and extends the comparison into the bluer region of the NIR (J and H bands, or $\lambda <2.2 \mu$m), where the origins of the NIR emission are still not known \citep{Cieza2005, Fischer2011, McClure2013}. We found that H-K is the NIR color that is most strongly correlated with veiling. This is consistent with a dust sublimation origin for some of the NIR excess, because H-K is the color associated with the peak of a blackbody at the dust sublimation temperature, and thus should have the largest sensitivity to emission arising from the inner edge of the dust disk. We further discuss the relationship between veiling and NIR excess in Section \ref{sec:nir disc}.
We found that the W1 single-filter excess is most strongly correlated with veiling. This is because the relation is nearly as steep as the W2-veiling relationship, but has significantly less scatter, likely because W1 is closer to the disk SED peak and so is brighter than W2. We found that many of the systems that meet the \citet{Rebull2010} disk host criteria have no veiling measured, and we discuss these systems in Section \ref{sec:noveil}. We found that after converting to fluxes, there is no significant difference between single and binary star disk SEDs. We discuss the binary subsample in Section \ref{sec:binary_disc}. We also found that in a sample of active accretors (systems with r $>$ 0) the binary relation is significantly shallower than the single star relation at long wavelengths, indicating that in actively accreting binaries the inner disk rim emission is not as strongly correlated with accretion as it is for single stars or when also including quiescent binary stars.
\subsection{The Relationship Between NIR Excesses and Optical Veiling}\label{sec:nir disc}
The observed correlation between NIR excess and optical veiling means that larger disk flux is either directly or indirectly related to a higher stellar accretion rate. The relation between larger excess and higher veiling indicates that sources with more active accretion also produce more NIR emission, both in relative (i.e., colors) and absolute (i.e., disk flux) measures. Because colors indicate the flux ratio between two SED points, the correlation between NIR colors and optical excess corresponds to a brighter and/or steeper SED. Physically, this relates to the temperature and luminosity of the inner disk. Similarly, disk fluxes and absolute excesses correspond to the total emitting area of the inner disk, because the dust sublimation temperature is roughly constant, so an increase in disk flux at a given wavelength can only be produced by a larger emitting area.
The two dimensions over which to increase the disk emission area are the dust disk inner radius and the scale height of the inner disk. The dust disk inner radius is set by the total flux emitted from the system, of which the accretion luminosity is a small fraction ($L_{acc} \sim 0.1 L_{\star}$; \citealt{Alcala2014, Alcala2017, Robinson2019}), and stellar optical variability does not appear to induce NIR variability \citep{Flaherty2012}, so the NIR variations do not appear to be caused by an increase in the dust disk inner radius. Thus, \citet{Flaherty2012} concluded that the NIR variability they observed was caused by increases in the height of the inner rim of the dust disk, which is inflated because of increased internal heating \citep{Dullemond2001, Isella2005}. Possible mechanisms \citet{Flaherty2012} suggested for increasing scale heights include changes to the stellar magnetic field, X-ray flares, and companions perturbing the disk.
\citet{Espaillat2019} did not observe a correlation between accretion rate and X-ray flux, but did observe a correlation between the accretion rate $\dot{M}$ and the MIR variability of GM Aur. Because of this correlation, \citet{Espaillat2019} suggest that increasing surface density of optically thin dust in the inner disk increases both disk emission and accretion rates by inducing additional mass loading onto the stellar magnetosphere. However, the timescale of this correlation was observed to be on the order of days to weeks, and therefore may not fully explain the correlation we observe between accretion rate and disk emitting area in asynchronous observations. The persistence of this correlation suggests that there is a relatively steady-state process causing both the NIR excess and the average veiling, even though they are variable on short timescales. One possibility is that more massive dust disks have larger inner walls due to increased internal pressure, while disk mass is correlated with accretion rate \citep{Manara2016mdot}, causing increased veiling. Synchronous observations with longer time baselines will help clarify the origin of the asynchronous correlation between veiling and NIR excess. Performing a similar analysis of additional star forming regions, such as the well-defined sample of disk hosts in Upper Scorpius \citep{Luhman2012,Esplin2018} or the Lupus sample of \citet{Alcala2017}, could explore whether these relationships and statistics are shared between regions of different ages and with different environments.
\subsection{NIR Excesses in Veiling-Free Stars}\label{sec:noveil}
\begin{figure*}
\plotone{mag_rsplit_nice.pdf}
\caption{Histograms of the single-filter NIR excess distribution for disk-hosting sources with veiling values of zero (blue), $0 < r \leq 0.2$ (orange), and $r > 0.2$ (black). \edit1{The text on each figure shows the skew value for each histogram, color-coded appropriately.} The zero-veiling sources are typically centered around an excess value of zero for all filters, while the higher-veiling sources have progressively larger average excess. The asymmetry \edit1{(skewness)} of all histograms increases with increasing wavelength, showing that the veiling-free systems still have excess flux from a disk. We note again here that in absolute NIR excesses, a negative value indicates a flux excess.}
\label{fig:noveil_hist}
\end{figure*}
We have found that NIR excess and veiling are correlated for stars that meet the \citet{Rebull2010} criteria for being a disk host. However, there were many disk host stars in our sample with little or no veiling measured, and that there was significant scatter in the excesses measured for the zero-veiling sources, indicating that a source with no veiling could still have a large amount of NIR excess.
We measured NIR excesses consistent with the presence of a disk in 133 of our 163 systems. Of those 133 systems, 64 ($\sim 50\%$) had veiling measurements of $r_{7510} = 0$. The systems with no veiling are coeval with systems that show both veiling and a disk ($\tau_{r = 0} = 3.14$ Myr; $\tau_{r \neq 0} = 3.22$ Myr), indicating that these systems are not necessarily older than their veiled counterparts, although we caution that ages are difficult to infer for young stars and are not by themselves indicators of evolutionary stage.
\citet{Natta2006} measured the accretion properties of a sample of Ophiuchus PMS stars using hydrogen emission lines in the NIR (Pa$\beta$ and Br$\gamma$), and similarly found that $\sim$50\% of their targets exhibited signs of hosting a disk but did not have measurable accretion. The optical continuum excess (veiling) and the NIR emission lines therefore identify the same relative numbers of systems with significant accretion and systems with disks, suggesting that both methods have a similar level of sensitivity to accretion. The strength of optical veiling and the equivalent width of NIR emission lines are correlated \citep[e.g.,][]{Muzerolle1998, Calvet2004, Alcala2014, Alcala2017}, and have similar origins: the optical/UV emission arises from the accretion shock on the stellar surface, while the NIR emission lines are produced in the accretion column \citep{Calvet2004}. The similar fraction of systems lacking each of these signatures while still hosting disks implies that there is little to no accretion occurring within the magnetospheric truncation radius (i.e., neither in the accretion column nor in the accretion shock) in these systems, which is consistent with the physical origins of the NIR emission lines and optical veiling. However, the lack of measurable accretion at one epoch does not necessarily mean that accretion has permanently ceased.
The discrepancy between the numbers of systems with disks and systems with measured veiling may result from some disks beginning to evolve, causing accretion to fall below detectable levels, or from the intrinsic variability of accretion producing large changes in veiling. For example, \citet{Venuti2014} observed veiling variations on the order of 0.5 dex on timescales of weeks. More direct indicators of accretion also vary rapidly: accretion rates derived from H$\alpha$ can vary on timescales of hours to days \citep{Mundt1982, Johns1995, Costigan2012, Sousa2016, Sousa2021}. It seems most likely that the observed high fraction of veiling-free disk hosts is caused by a combination of these factors.
As circumstellar disks evolve, the slope of the MIR SED increases more rapidly with wavelength as dust grains settle and thus intercept less stellar radiation \citep{Dullemond2004} and a central, more transparent cavity clears because of grain growth and possibly planet/planetesimal formation in the inner disk \citep{Dullemond2005}. The combination of these factors means that a more evolved circumstellar disk should continue emitting at longer wavelengths even after the inner disk has begun to dissipate.
Figure \ref{fig:noveil_hist} shows histograms of the magnitudes of excess in each NIR filter used in our analysis for systems with $r_{7510} = 0$, $0 < r_{7510} \leq 0.2$, and $r_{7510} > 0.2$. The majority of systems without measured veiling have little to no NIR excess, with most of the distributions centered around 0 excess. Moving through the NIR to longer wavelengths, the distributions become progressively less symmetric \edit1{(e.g., when assessed using a skew test, we found the skewness increases with increasing wavelength, as demonstrated by the values listed on Figure \ref{fig:noveil_hist})}, indicating that systems without veiling have increasing amounts of excess emission at progressively longer wavelengths, as would be expected from more evolved disks. This behavior is in contrast to the higher-veiling systems, which show a similar but much less dramatic asymmetry toward longer wavelengths that begins earlier than in the $r_{7510} = 0$ case. The disk-clearing process is rapid \citep[e.g.,][]{Simon1995, Wolk1996, Andrews2005, Luhman2010, Alexander2014}, leading to small numbers of observed transitional disks \citep[e.g.,][]{Kenyon1995, Luhman2010}, meaning that we would not expect nearly 50\% of our sample to be transitional disks. However, it is unclear if there is another explanation for the asymmetric NIR excess distribution for the veiling-free sample.
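As an illustration of the statistic quoted above, the per-filter skewness (and a test of its significance) can be computed directly from the excess distributions. The following is a minimal sketch using \texttt{scipy}; the dictionary layout and array names are placeholders rather than the actual data structures used in this work.
\begin{verbatim}
import numpy as np
from scipy import stats

def excess_skew_by_band(excesses):
    """Skewness and skew-test p-value of each single-filter excess distribution.

    `excesses` maps a filter name to an array of absolute NIR excesses;
    the dictionary layout and the names are placeholders, not the actual
    data structures used in this work.
    """
    out = {}
    for band, vals in excesses.items():
        vals = np.asarray(vals, dtype=float)
        vals = vals[np.isfinite(vals)]             # drop missing photometry
        out[band] = (stats.skew(vals), stats.skewtest(vals).pvalue)
    return out

# Toy usage with synthetic numbers standing in for the r_7510 = 0 subsample;
# the tail toward negative values mimics a flux excess in our sign convention.
rng = np.random.default_rng(0)
demo = {"J": rng.normal(0.0, 0.1, 60),
        "W2": rng.normal(0.0, 0.1, 60) - rng.exponential(0.1, 60)}
print(excess_skew_by_band(demo))
\end{verbatim}
Note that, with the sign convention used here (a negative excess indicates a flux excess), a population of disks appears as a tail toward negative values and therefore as an increasingly negative skew toward longer wavelengths.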
In addition to disk evolution impacting accretion and disk properties, accretion itself is not a steady-state process, and stochastic variations in accretion cause photometric and veiling variability. For example, HH14 measured veiling values that changed by up to a factor of three on timescales of days, while \citet{Venuti2014} found that the UV/optical excess rarely varied by more than 0.5 dex on timescales of a week. Similarly, \citet{Fang2020} found that large changes in optical veiling are rare but that variability is common. Changes in accretion, and thus in veiling, may also significantly affect the number of systems with measured veiling relative to the number of systems with disks. Without time-domain observations of our sample of nominally veiling-free sources it is impossible to determine the relative contributions of variability or evolutionary stage to the observed lack of veiling in nearly half of our sample.
\subsection{The Impact of Photometric Variability on NIR Excesses}
Young stars are photometrically variable, which compounds with veiling variability to introduce scatter into our asynchronous comparisons between NIR colors and optical veiling. However, the amplitude of photometric variability is almost always $\Delta \text{mag} < 1$ mag, and typically closer to $\Delta \text{mag} < 0.2$ mag \citep{Bouvier1995, Carpenter2001, Herbst2002, Cody2013, Sousa2016} in both the optical and the NIR. Some young stars, such as RW Aur A+B, AA Tau, and V409 Tau in Taurus, as well as the UXor and EXor classes of variable stars, are known to have much greater variability \citep[up to or greater than $\sim 4$ mag;][]{McLaughlin1946, Herbig1950, Bouvier1999, Beck2001, Herbig2008, Bouvier2013, Rodriguez2013, Holoien2014, Rodriguez2015, Rodriguez2016, Lamzin2017}; however, such systems are rare \citep{Herbst1994, Grankin2007, Herbig2008} and so we do not expect them to significantly impact our results.
\subsection{The Effect of Multiplicity on NIR and Optical Excess}
\label{sec:binary_disc}
We have combined a large sample of homogeneously characterized Taurus members with adaptive optics multiplicity surveys. We did not modify our sample based on any additional binary properties (e.g., disk properties or separation), meaning that the binary sample includes systems with a wide variety of configurations and disk properties. For example, CoKu Tau/4 is a very close binary with a cleared central cavity created by the binary tidal forces, creating an SED that resembles a transition disk \citep{Ireland2008}. Another example of unusual binary configuration is DF Tau, and likely others, where only one star in the binary has a disk \citep{Allen2017}.
Our compilation of Taurus binaries allowed us to quantify the differences in inner disk and accretion properties between single and close binary stars for the first time, which are expected to occur because of the impact of the secondary companion on the evolution of the disk(s) of binary systems. We found that binary star NIR excesses are not as strongly correlated with veiling as single star NIR excesses, and that the differences between single stars and binary stars are most significant in significantly accreting systems where $r_{7510} > 0$. In this subsection we explore the implications of the differences and similarities between the single and binary star samples.
\begin{figure*}
\gridline{\fig{jhexcess_hist.pdf}{0.45\linewidth}{}
\fig{hkexcess_hist.pdf}{0.45\linewidth}{}
}
\gridline{\fig{kw1excess_hist.pdf}{0.45\linewidth}{}
\fig{w1w2excess_hist.pdf}{0.45\linewidth}{}
}
\gridline{
\fig{veiling_hist.pdf}{0.45\linewidth}{}
}
\caption{Top two rows: Histograms of color excesses for single and binary stars. A K-S test cannot distinguish the binary and single star excess distributions, indicating that they are consistent with being drawn from the same underlying population. Bottom row: Histogram of veiling values for single and binary stars. The two samples are likewise indistinguishable by a K-S test, indicating that they are consistent with being drawn from the same underlying distribution.}
\label{fig:bin_single_compare}
\end{figure*}
Because we measured the veiling-color excess relation separately for single and binary stars and found different slopes for each fit, we wished to explore whether the individual data sets used to construct the relation were different between single and binary stars. Figure \ref{fig:bin_single_compare} shows histograms of the color excesses (top two rows) and veiling (bottom row) of binaries and single stars separately. We found by K-S tests (which are sensitive to the center of a distribution) and Anderson-Darling tests (which are sensitive to the edges of a distribution) that the single and binary star samples had statistically indistinguishable color excess and veiling distributions (e.g., p$_{K-S}$ = 0.35 and 0.2 for H-K excess and veiling, respectively; p$_{A-D} > 0.25$ for all parameters). This suggests that the binary and single star disk SED distributions are consistent with one another, meaning that any differences between the single and binary star relations are likely not driven by differences in the input distributions.
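For reference, the two-sample comparison described above can be reproduced with standard \texttt{scipy} routines; the sketch below is illustrative only, and the array names are placeholders. Note that \texttt{scipy} caps the reported Anderson-Darling significance level at 0.25, which is why we quote it as a lower limit.
\begin{verbatim}
import numpy as np
from scipy import stats

def compare_single_binary(single_vals, binary_vals):
    """Two-sample K-S and Anderson-Darling comparison of one quantity
    (a color excess or the veiling) between single and binary stars."""
    s = np.asarray(single_vals, dtype=float)
    b = np.asarray(binary_vals, dtype=float)
    s, b = s[np.isfinite(s)], b[np.isfinite(b)]
    ks = stats.ks_2samp(s, b)          # sensitive to the centers of the distributions
    ad = stats.anderson_ksamp([s, b])  # sensitive to the tails/edges
    # scipy floors/caps the A-D significance level at 0.001 and 0.25,
    # which is why we quote p_AD > 0.25 rather than an exact value.
    return ks.pvalue, ad.significance_level

# Toy usage with synthetic H-Ks excesses (placeholder values only).
rng = np.random.default_rng(1)
print(compare_single_binary(rng.normal(0.0, 0.3, 80), rng.normal(0.0, 0.3, 40)))
\end{verbatim}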
We found that the binary star veiling-color excess and veiling-filter excess Pearson r values (Tables \ref{tab:color_params} and \ref{tab:mag_params}) are smaller than those of the single stars, indicating that NIR excess is more weakly correlated with veiling for binary stars. This may be caused by a physical property of binaries, but could also be a sign that the dynamic range of binary excesses is smaller than that of single stars: a larger dynamic range will increase the apparent level of covariance between two parameters.
\begin{figure}
\plotone{flux_vs_filter_veiling.pdf}
\caption{A scatter plot of the average ratio between single and binary star disk fluxes plotted against wavelength and normalized by stellar bolometric flux, color coded by the central veiling value in each $\Delta r_{7510} = 0.1$ bin. The systems have been binned for ease of interpretation. Low-veiling binary disks are systematically brighter than low-veiling single star disks, while high-veiling binary disks are systematically fainter than single star disks, although there is a large amount of RMS scatter in most bins.}
\label{fig:flux_filter_ratios}
\end{figure}
Referring to Figure \ref{fig:veiling_excess_corr}, it appears that at small veiling values the single and binary star samples both have comparable amounts of excess, but at high veiling values binaries have less excess than single stars. To confirm this, Figure \ref{fig:flux_filter_ratios} shows the average ratio between binary and single star disk fluxes, normalized by stellar bolometric flux, as a function of wavelength, binned by veiling value in increments of 0.1. At low veiling values the binary disk SEDs are typically brighter than the single star disks, but at large veiling values the binary disks are fainter than the single disks. The fainter binary disks at high veiling values drive down the disk excess values, reducing the dynamic range of the relations, causing the shallower slope observed in the optical veiling-NIR excess relations (Figures \ref{fig:veiling_color_corr} and \ref{fig:veiling_excess_corr}). There are very few binary or single star systems at high veiling values ($N \leq 5$ for $r > 0.3$), meaning that it is difficult to draw conclusions from this sample. A larger sample of high-veiling sources would help determine whether this effect is caused by random outliers or physical properties of binaries.
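For clarity, the binning used to build Figure \ref{fig:flux_filter_ratios} can be summarized with the short sketch below (one filter at a time); the function and array names are placeholders, and the fluxes are assumed to be pre-normalized by the stellar bolometric flux as described above.
\begin{verbatim}
import numpy as np

def binned_flux_ratio(r_single, f_single, r_binary, f_binary, dr=0.1):
    """Mean binary-to-single disk-flux ratio in veiling bins of width dr.

    r_* hold r_7510 veiling values, f_* hold disk fluxes in one filter,
    already normalized by the stellar bolometric flux; all names are
    placeholders for the quantities described in the text.
    """
    r_single, f_single = np.asarray(r_single, float), np.asarray(f_single, float)
    r_binary, f_binary = np.asarray(r_binary, float), np.asarray(f_binary, float)
    edges = np.arange(0.0, max(r_single.max(), r_binary.max()) + dr, dr)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ratios = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        fs = f_single[(r_single >= lo) & (r_single < hi)]
        fb = f_binary[(r_binary >= lo) & (r_binary < hi)]
        ratios.append(fb.mean() / fs.mean() if len(fs) and len(fb) else np.nan)
    return centers, np.array(ratios)
\end{verbatim}
The bin width of $\Delta r_{7510} = 0.1$ mirrors the binning shown in the figure.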
It appears that in our diverse sample of binaries there is little, if any, difference in disk and accretion properties between single and binary stars. The similar distributions of NIR excesses and SED properties between binary and single stars in our sample suggest that the impacts of multiplicity do not create substantial differences between the total populations of binary and single stars. However, many of the effects of multiplicity are only present at small separations (e.g., reduced disk lifetimes only occur at separations $\rho\lesssim 50$ au; \citealt{Cieza2009, Kraus2012a}). Our binary analysis was limited by a small range of veiling values and by a relatively small sample size that did not allow us to investigate the effects of different binary properties on disk attributes. A consistently observed and characterized sample with more systems and a larger range of veiling values is needed to robustly investigate the impacts of multiplicity on inner dust disk properties. For example, similar analyses to HH14 and this work in a larger star-forming region could incorporate a much larger binary sample, and such a work could investigate disk properties at different binary separations and mass ratios, which is an important next step in understanding the effect of multiplicity on inner disk properties.
\section{Conclusions}\label{sec:conclusion}
To explore the relationship between NIR excess and optical veiling in T Tauri stars, we have assembled an archival sample of previously-characterized Taurus stars from HH14. We combined the measured properties of the sample with 2MASS and WISE NIR photometry to construct NIR SEDs for our sources and to identify disk hosts, and we used several direct-imaging surveys to determine multiplicity for all the stars in the sample.
Using this data set, we found that NIR excess (JHK$_{s}$W1W2) is correlated with optical veiling in both colors and magnitudes of excess, suggesting that there is a relationship between the accretion process and the properties of the dust disk's inner wall. The relationship persists in asynchronous measurements, implying that the relationship, if it is causal, is relatively steady-state. One possible explanation is that a more massive inner disk, which could cause a higher inner disk wall, is related to higher accretion rates, causing both increased NIR emission and increased optical veiling.
We found that the $\sim$50\% of our disk-host sample with no measured veiling showed progressively more asymmetrical NIR excess distributions at redder wavelengths, indicating that they still have circumstellar material (consistent with their disk-host status), but that it may be cooler than the stars with nonzero veiling. This indicates that the inner disk is not as irradiated when there is no veiling, meaning that accretion luminosity may play a significant role in irradiating the inner disk in these sources. Alternatively, some of these objects may be transitional or pre-transitional disks, which have optically thin inner disks while their dust disks remain optically thick at larger radii, and have little or no accretion. Future work investigating the timescale of optical veiling variability and its connection to NIR disk emission is needed to fully understand this effect in the large fraction of apparently veiling-free disk host stars.
We found that binary stars do not show significantly different disk properties than single stars, but note that many of our binary systems were wide binaries, which are known to have similar properties as single stars. Although our high-veiling binary sample suffered from small number statistics, the few high-veiling binaries in our sample had fainter disks than single stars. Future work should focus on a larger sample of close binary stars with a wider range of veiling values to better understand any possible impacts of close companions on inner disk and accretion properties.
\acknowledgements
We thank Neal Evans, Stella Offner, and Ben Tofflemire for their helpful feedback on the contents of this paper. We thank the referee for their useful comments and suggestions. K.S. acknowledges that this material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1610403. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI : 10.26093/cds/vizier). The original description of the VizieR service was published in 2000, A\&AS 143, 23. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\software{numpy \citep{Harris2020}, matplotlib \citep{Hunter2007}, scipy \citep{Virtanen2020}, astropy \citep{Astropy2013, Astropy2018}, astroquery \citep{Astroquery2019}
}
\chapter{A more detailed spectral analysis}\label{a:better}
\section{From Remark \ref{r:better2}(i) to Remark \ref{r:better}(c)}
Let us assume the validity of Remark \ref{r:better2}(i) and prove Remark \ref{r:better}(c). Let $m_0\ge 2$ be the integer such that
\begin{align}\label{z1}
&{\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\} \neq \emptyset \, ,
\\
&{\rm spec}\, (L_{\text{st}}, U_{m}) \cap \{{\rm Re}\, z > 0\} = \emptyset \,
\quad
\text{for any $m>m_0$} \, .
\end{align}
We show that Remark \ref{r:better}(c) holds with $m = m_0$.
For any $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ we denote by $V_z:= P_z(L^2_{m_0})$ the image of the Riesz projector
\begin{equation*}
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1} dw \, ,
\end{equation*}
where $\gamma$ parameterizes the boundary of a ball containing $z$ and no other eigenvalues of $L_{\text{st}}$.
It is enough to show that $P_z(U_{km_0}) = \{0\}$ for any $k\in \mathbb{Z}\setminus\{-1, 1\}$, $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ since it gives
$$V_z = P_z(U_{m_0}\cup U_{-m_0}) \subset U_{m_0} \cup U_{-m_0}\, ,$$
where the second inclusion follows from the fact that $U_m$ is always an invariant space of $L_{\text st}$.
If $k>1$, from \eqref{z1} we know that $z\notin {\rm spec}\, (L_{\text{st}}, U_{km_0})$, hence $P_z(U_{km_0})$ is trivial. If $k<-1$,
we reduce to the previous situation by observing that $P_z(U_{km_0}) = \overline{ P_{\bar z}(U_{-km_0})}$.
\section{Proof of Remark \ref{r:better2}(i)} In order to show this point, given Lemma \ref{l:almost-final-2}, we just need to prove the following statement.
\begin{lemma}\label{l:almost-final-1}
For every fixed $\Xi\in \mathscr{C}$ there is $M_0>0$ such that $\mathscr{U}_m$ is empty for every $m\geq M_0$.
\end{lemma}
Indeed, given the conclusion above we infer that $\mathscr{U}_m$ is empty for every $m\geq m_a$ and it thus suffices to select $m_0$ as the largest integer strictly smaller than $m_a$.
Before coming to the proof of Lemma \ref{l:almost-final-1} we state an auxiliary fact which will be used in the argument and which can be readily inferred from the computations in Step 1 of the proof of Lemma \ref{l:will-apply-Rouche}.
\begin{lemma}\label{l:operator-B_z}
For every $0 < \sigma < \tau < 1$ there is a constant $C$ (depending only upon $\sigma$ and $\tau$) such that $B_z := \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ is a bounded operator from $C^\sigma$ to $C^\tau$ for every $z$ with ${\rm Im}\, z>0$ and
\[
\|B_z\|_{\mathcal{L} (C^\sigma, C^\tau)} \leq C\, .
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:almost-final-1}] The proof will be by contradiction and thus, assuming that the statement is false, we can select:
\begin{itemize}
\item[(i)] a sequence $\{m_j\}\subset [1, \infty[$ with $m_j\to \infty$;
\item[(ii)] a sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$;
\item[(iii)] and a sequence $\{\psi_j\}\subset L^2 (\mathbb R)$ solving the equation
\begin{equation}\label{e:eigenvalue-equation-20}
-\frac{d^2 \psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $\{z_j\}$ is bounded and that every cluster point must be an element of $[0, \Xi (-\infty)]$. Otherwise, along a subsequence (not relabeled) the points $z_j$ would stay at a positive distance from $[0, \Xi (-\infty)]$, and we would get the estimate
\[
\sup \left\|\frac{A}{\Xi-z_j}\right\|_{L^\infty} =: C_0 < \infty\, .
\]
By scalar multiplying \eqref{e:eigenvalue-equation-20} by $\psi_j$ and taking the real part of the resulting equation we then conclude
\[
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) \leq C_0 \int |\psi_j|^2\, ,
\]
which is clearly not possible because $C_0 < m_j^2$ for sufficiently large $j$ (and $\psi_j$ is nontrivial).
Up to subsequences we can thus assume that $z_j$ converges to some $z_0 \in [0, \Xi (-\infty)]$.
\medskip
{\bf Step 2.} We next analyze the cases $z_0 =0$ and $z_0 = \Xi (-\infty)$. The argument is similar to that used in Section \ref{s:3+4} in case (C). Let us argue first for $z_0=0$. We observe that $\Xi^{-1} |A|$ belongs to $L^1 (]-\infty, N])$ for any fixed $N$ and that, likewise, $|\Xi-z_j|^{-1} |A|$ have a uniform $L^1$ bound on any $]-\infty, N]$. We can then
use Lemma \ref{l:ODE2} to normalize $\psi_j$ so that it is asymptotic to $e^{m_j t}$ and also to write
\[
\psi_j (t) = e^{m_j t} (1+z_j (t))
\]
with
\[
|z_j (t)| \leq \exp \left(\frac{1}{m_j} \int_{-\infty}^N \frac{|A|}{|\Xi-z_j|}\right) -1 \qquad
\mbox{for all $t\leq N$.}
\]
In particular, we have $|z_j (t)|\leq \frac{C(N)}{m_j}$ on $]-\infty, N]$. We next scalar multiply \eqref{e:eigenvalue-equation-20} by $\psi_j$ and take the imaginary part to conclude
\[
- \left(\int_{-\infty}^a + \int_b^\infty\right) \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\leq
\int_a^b \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\, .
\]
In particular, since $\frac{A}{|\Xi-z_j|^2}$ is bounded from above by a constant $C$ independent of $j$ on $[a,b]$ and $-\frac{A}{|\Xi-z_j|^2}$ is bounded from below by a constant $c>0$ independent of $j$ on $[b+1, b+2]$, we conclude
\[
\int_{b+1}^{b+2} |\psi_j|^2 \leq \frac{C}{c} \int_a^b |\psi_j|^2\, .
\]
We next choose $N$ larger than $b+2$ and use the estimate $|z_j (t)|\leq \frac{C(N)}{m_j}$ to argue that, for $j$ large enough, we have $\frac{1}{2} e^{m_j t} \leq |\psi_j (t)| \leq 2 e^{m_j t}$ on $]-\infty, N]$. In particular, we infer
\[
\int_{b+1}^{b+2} e^{2m_j t} \leq C \int_a^b e^{2m_j t}
\]
provided the constant $C$ is chosen large enough (but independent of $j$) and $j$ is large enough. The latter inequality is certainly impossible for $m_j$ large enough, leading to a contradiction.
The argument to exclude $z_0 = \Xi (-\infty)$ is entirely analogous, this time normalizing for $t\to \infty$ and reaching an inequality of type
\[
\int_{a-2}^{a-1} e^{-2m_j t} \leq C \int_a^b e^{-2m_j t}
\]
for a constant $C$ independent of $j$ and any $j$ large enough.
\medskip
{\bf Step 3.} We next examine the last case, that is $z_0 = \Xi (c)$. This time we fix a $\sigma \in \, ]0,1[$ and normalize $\psi_j$ so that $\|\psi_j\|_{C^\sigma}=1$. We observe that
\[
\psi_j = - \mathcal{K}_{m_j} \left(\frac{A}{\Xi-z_j} \psi_j\right)\, ,
\]
and also recall that $\mathcal{K}_{m_j} (\varphi) = \frac{1}{2m_j} e^{-m_j |\cdot|} * \varphi$.
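For the reader's convenience we note that this formula for $\mathcal{K}_{m_j}$ can be checked on the Fourier side: with the convention $\hat{\varphi} (\xi) = \int \varphi (t) e^{-i\xi t}\, dt$ (fixed only for this remark), a direct computation gives
\[
\int \frac{1}{2m_j} e^{-m_j |t|} e^{-i \xi t}\, dt = \frac{1}{2m_j} \left(\frac{1}{m_j + i\xi} + \frac{1}{m_j - i \xi}\right) = \frac{1}{\xi^2 + m_j^2}\, ,
\]
which is the symbol of $\left(-\frac{d^2}{dt^2} + m_j^2\right)^{-1}$.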
We set $m_0=m_a$ and write further
\[
\psi_j = - \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} +m_0^2\right) \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\, \psi_j\right)\right)\, .
\]
Recalling Lemma \ref{l:operator-B_z}, we can fix a $\tau \in ]\sigma,1[$ to achieve
\[
\left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\, \psi_j\right)\right\|_{C^\tau}\leq C
\]
for some constant $C$ independent of $j$.
We will show in the final step that
\begin{itemize}
\item[(Cl)] $\|\mathcal{K}_{m_j} \circ (-\frac{d^2}{dt^2} + m_0^2)\|_{\mathcal{L} (C^\tau, C^\tau)} \leq C$ for some constant $C$ independent of $j$.
\end{itemize}
In particular, we achieve
\begin{equation}\label{e:estimate-C-tau}
\|\psi_j\|_{C^\tau} \leq C\, .
\end{equation}
We now wish to show that indeed $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$ for $j$ large enough, which obviously would be a contradiction. In order to achieve the latter estimate we use a Littlewood-Paley decomposition. We fix a cut-off function $\chi$ which is supported in $]\frac{1}{2}, 2[$, define $\chi_\ell (t) :=\chi (2^{-\ell} t)$ for $\ell\in \mathbb N$ and assume that $\chi$ has been chosen so that
\[
\sum_{\ell \in \mathbb N} \chi_\ell \equiv 1 \qquad \mbox{on $[1, \infty[$}.
\]
We then define
\[
\chi_{-1} := 1 - \sum_{\ell \in \mathbb N} \chi_\ell\,
\]
and introduce the Littlewood-Paley operator $\Delta_\ell$ as $\Delta_\ell (\varphi) = \mathscr{F}^{-1} (\chi_\ell \mathscr{F} (\varphi))$, where $\mathscr{F}$ is the Fourier transform.
We finally recall that (see \cite[Section 1.4.2]{Grafakos}), if we define
\[
\|\varphi\|_{X^\sigma} := \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}\, ,
\]
then
\[
C (\sigma)^{-1} \|\varphi\|_{X^\sigma} \leq \|\varphi\|_{C^\sigma} \leq C (\sigma) \|\varphi\|_{X^\sigma}\, .
\]
We are now ready to perform our final estimate. We fix a large $N$, which will be chosen later, and for $\ell\geq N$ we write
\begin{align*}
\sum_{\ell \geq N} 2^{\sigma \ell} \|\Delta_\ell \psi_j\|_\infty
&\leq 2^{-N (\tau-\sigma)} \sum_{\ell\geq N} 2^{\tau \ell} \| \Delta_\ell \psi_j\|_\infty
\leq 2^{-N (\tau-\sigma)} C (\tau) \|\psi_j\|_{C^\tau} \leq C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constant $C$ is independent of both $N$ and $j$. Next, for any $\ell$ we observe that
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \underbrace{\left(\Delta_\ell \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right)\right)}_{=: \Gamma_{\ell,j}}\, .
\]
Now
\[
\|\Gamma_{\ell,j}\|_{L^\infty} \leq C 2^{-\ell\sigma} \left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right\|_{C^\sigma} \leq C 2^{-\ell \sigma}\, .
\]
On the other hand, because of the frequency localization, we have
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1}) (\Gamma_{\ell,j})
\]
and the estimate
\[
\left\|\mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ \Delta_\ell\right\|_{\mathcal{L} (L^\infty, L^\infty)} \leq \frac{C}{m_j^2} \left(2^{2\ell} + m_0^2\right)\, .
\]
We can therefore write the estimate
\begin{align*}
\|\psi_j\|_{C^\sigma} & \leq \frac{C}{m_j^2} \sum_{\ell=-1}^{N} (2^{(2+2\sigma) \ell} + m_0^2)
+ C 2^{-N (\tau-\sigma)}\\
&\leq \frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) + C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constants $C$ are independent of $N$ and $j$. In particular, we fix first $N$ large enough to get $C 2^{-N (\tau-\sigma)} \leq \frac{1}{4}$ and we then choose $m_j$ large enough so that
\[
\frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) \leq \frac{1}{4}\, .
\]
These two estimates imply $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$, contradicting the normalization $\|\psi_j\|_{C^\sigma} = 1$.
\medskip
{\bf Step 4.} To complete the proof of the Lemma we need to show (Cl). We first write
\[
T_{m, \ell} := \Delta_\ell \circ \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right)\, .
\]
The operator $T_{m, \ell}$ is the convolution with a kernel $K_{m, \ell}$ whose Fourier symbol is given by
$\chi_\ell (\xi) \frac{|\xi|^2 + m_0^2}{|\xi|^2 +m^2}$. Hence, for $\ell \geq 0$ we have
\[
K_{m, \ell} (t) = \frac{1}{2\pi} \int \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2} e^{i \xi t}\, d\xi
\]
and
\[
(-it)^k K_{m, \ell} (t) = \frac{1}{2\pi} \int \frac{d^k}{d\xi^k} \left( \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2}\right) e^{it\xi}\, d\xi\, .
\]
In particular, we easily conclude
\[
\| |t|^k K_{m, \ell}\|_{L^\infty} \leq C (k) 2^{\ell (1-k)}\, ,
\]
for a constant $C (k)$ independent of both $m\geq 1$ and $\ell$, but which depends on $k$. From the latter we can estimate
\begin{align*}
\|K_{m, \ell}\|_{L^1} &\leq \int_{|s|\leq 2^{-\ell}} |K_{m, \ell} (s)|\, ds +
\int_{|s|\geq 2^{-\ell}} \frac{|s^2 K_{m, \ell} (s)|}{|s|^2}\, ds\\
&\leq C + C 2^{-\ell} \int_{2^{-\ell}}^\infty \frac{1}{s^2}\, ds \leq C\, .
\end{align*}
For $\ell = -1$ we just conclude, likewise
\[
\||t|^k K_{m, -1}\|_{L^\infty} \leq C (k)
\]
for a constant $C(k)$ independent of $m$, but depending on $k$. Once again using the cases $k=0$ and $k=2$ of the latter inequality we achieve
\[
\|K_{m, -1}\|_{L^1} \leq C \, .
\]
We have thus bounded all $\|K_{m, \ell}\|_{L^1}$ with a universal constant $C$ independent of both $m\geq 1$ and $\ell\in \mathbb N \cup \{-1\}$. In particular, since $\|T_{m, \ell}\|_{\mathcal{L} (L^\infty, L^\infty)} = \|K_{m, \ell}\|_{L^1}$ and
\[
\mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) = \sum_{\ell \geq -1} T_{m, \ell}
= \sum_{\ell \ge -1} T_{m, \ell} \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1})\, ,
\]
we can estimate
\begin{align*}
\left\| \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) (\varphi)\right\|_{C^\sigma}
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\varphi)\|_{L^\infty}
= C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\Delta_\ell \varphi)\|_{L^\infty}\\
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}
\leq C (\sigma) \|\varphi\|_{C^\sigma}\, .
\end{align*}
This completes the proof of (Cl) and hence of the entire Lemma.
\end{proof}
{\color{red}
\section{Proof of Theorem \ref{thm:spectral-stronger-2}}
In \cite{Vishik2}, Vishik claims the following improved version of Theorem \ref{thm:spectral5}, which would immediately imply Theorem \ref{thm:spectral-stronger-2}.
\begin{theorem}\label{thm:Vishikversion}
There are a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m} = \emptyset$ for any integer $m>m_0$ and $\mathscr{U}_{m_0}$ consists of a single element $z$. Moreover, the algebraic multiplicity of $m_0 z$ as an eigenvalue of $\mathcal{L}_{m_0}$ is $1$.
\end{theorem}
Vishik's suggested proof of Theorem \ref{thm:Vishikversion} builds upon Proposition \ref{p:3+4} and the following improved versions of Proposition \ref{p:5-7} and Proposition \ref{p:almost-final}.
\begin{proposition}\label{prop:Vishicimproved1}\label{PROP:VISHICIMPROVED1}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Then there
exist $\varepsilon >0$ and $\delta>0$ with the following property.
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $(m_a-h) z_{m_a-h}$ is an eigenvalue of $\mathcal{L}_{m_a-h}$ with algebraic multiplicity $1$.
\end{proposition}
In \cite{Vishik2} Vishik only gives the argument that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ contains a single element $z_{m_a-h}$ and the corresponding eigenspace of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ has dimension $1$ (i.e. its {\em geometric} multiplicity is $1$, cf. Remark \ref{r:b-also-2}). However it is essential to have the {\em algebraic} multiplicity equal to $1$ in order to complete his suggested argument. After we pointed out to him the gap in his paper, he suggested in \cite{Vishik3} the proof of Proposition \ref{prop:Vishicimproved1} reported below. Before coming to it, we point out that
a spectral perturbation argument as in the proof of Lemma \ref{l:almost-final-2} (which we outline anyway below)
easily implies the following.
\begin{proposition}\label{prop:Vishicimproved2}
Assume $- \lambda_a<-1$ and let $m_a = \sqrt{\lambda_a}$ and $m_b:= \max \{1, \sqrt{\lambda_b}\}$. Then $\mathscr{U}_m$ consists of a single element $z_m$ for every $m\in ]m_b, m_a[$ and moreover the algebraic multiplicity of $z_m$ as an eigenvalue of $m^{-1} \mathcal{L}_m$ is $1$.
\end{proposition}
Taking the previous proposition for granted, we just need a choice of $\Xi$ for which $\lambda_a >1$ and $]m_b, m_a[$ contains an integer, which is guaranteed by Lemma \ref{L:BOTTOM}, and we conclude Theorem \ref{thm:Vishikversion}.
\medskip
We now explain how to prove Proposition \ref{prop:Vishicimproved2}.
From Proposition \ref{p:almost-final} and Lemma \ref{l:almost-final-1} we know that $\mathscr{U}_m \neq \emptyset$, for every $m\in ]m_b, m_a[$, and $\mathscr{U}_m = \emptyset$ for $m \ge m_a$. Moreover, Remark \ref{rmk:algebraic dim const} implies that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant for $m\in ]m_b, m_a[$. Hence, to conclude we just need to prove that the latter is $1$ for some $m\in ]m_b, m_a[$.
To that aim we show that for any $\varepsilon>0$ there exists $\delta>0$ such that $\mathscr{U}_{m_a-h} = \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))$ for any $h\in ]0,\delta[$. This is enough for our purposes since, together with Proposition \ref{prop:Vishicimproved1}, it gives $\mathscr{U}_{m_a-h}= \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))=\{ z_{m_a-h}\}$ where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a - h}$ with algebraic multiplicity $1$.
Assume for contradiction the existence of a sequence $(m_j)_{j\in\mathbb N}$ in $]m_b,m_a[$ converging to $m_a$ such that there are $z_j\in \mathscr{U}_{m_j}$ with $|z_j - \Xi(a)|>\varepsilon$ for some $\varepsilon>0$. Up to extracting a subsequence, we may assume $z_j \to z$ for some $z\in \mathbb C$ with $|z-\Xi(a)|\ge \varepsilon$. Proposition \ref{p:3+4} implies that the imaginary part of $z$ is positive. Arguing as in the first step of the proof of Proposition \ref{p:3+4} we can prove that $z\in \mathscr{U}_{m_a}$ and reach a contradiction.
\section{Proof of Proposition \ref{prop:Vishicimproved1}}
The proof of Proposition \ref{prop:Vishicimproved1} can be reduced to the following weaker version using Remark \ref{rmk:algebraic dim const} and the argument outlined in the previous paragraph.
\begin{proposition}\label{prop:Vishikimproved-weaker}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Let $h$ and $\varepsilon$ be sufficiently small so that Proposition \ref{p:3+4} and Remark \ref{r:b-also-2} apply, namely
$\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ with {\em geometric} multiplicity $1$. Then, if $h$ is chosen possibly smaller, the algebraic multiplicity of $z_{m_a-h}$ is also $1$.
\end{proposition}
We now come to the proof of the latter, which is the heart of the matter. First of all, we introduce a suitable transformation of the space $\mathcal{H}$ (which we recall is the domain of the operator $\mathcal{L}_m$, defined in \eqref{e:def-H}). We introduce the Hilbert space
\[
\mathcal{H}^e :=\left\{ f: \mathbb R \to \mathbb C\, :\, \int |f (t)|^2 e^{-2t}\, dt < \infty\right\}
\]
and the isometry $T: \mathcal{H} \to \mathcal{H}^e$ given by
\[
\gamma (r) \mapsto e^{2t} \gamma (e^t)\, .
\]
Rather than considering the operator $\mathcal{L}_m$ on $\mathcal{H}$, it turns out to be more convenient to consider the operator $T \circ \mathcal{L}_m \circ T^{-1}$ on $\mathcal{H}^e$. Since the spectra of the two operators coincide, with a slight abuse of notation we will keep writing $\mathcal{L}_m$ in place of $T \circ \mathcal{L}_m \circ T^{-1}$, and we will keep using $\mathscr{U}_m$ to denote the point spectrum of $m^{-1} T \circ \mathcal{L}_m \circ T^{-1}$ in the upper half plane.
Simple computations show that the operator $\mathcal{L}_m$ is given, on $\mathcal{H}^e$ by
\[
\mathcal{L}_m (\alpha) = m \Xi \alpha - m A \varphi
\]
where $\varphi$ is the unique $L^2$ solution of
\[
\varphi'' - m^2 \varphi = \alpha\,
\]
(note that we are claiming $\varphi\in L^2$ rather than $\varphi \in \mathcal{H}^e$, cf. Section \ref{s:eigenvalue-equation}).
We can now come to the main idea behind the simplicity of $z_{m_a-h}$, which is borrowed from \cite{Vishik3}. A prominent role is played by the adjoint of $\mathcal{L}_m$ (considered as a bounded linear operator from $\mathcal{H}^e$ into itself): for the latter we will use the notation $\mathcal{L}_m^\star$.
\begin{lemma}\label{l:aggiunto}
Assume that $h$ and $\varepsilon$ are small enough so that $\{z_{m_a-h}\} =\mathscr{U}_{m_a-h}\cap B_\varepsilon (\Xi (a))$ and $z_{m_a-h}$ has geometric multiplicity $1$ in ${\rm spec}\, ((m_a-h)^{-1} \mathcal{L}_{m_a-h}, \mathcal{H}^e)$. Let $\alpha_h \in \mathcal{H}^e\setminus \{0\}$ be such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}(\alpha_h) - z_{m_a - h}\alpha_h=0$. If $h$ is small enough, then there is $\beta_h\in \mathcal{H}^e$ such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}^\star(\beta_h) - \bar z_{m_a-h}\beta_h =0$ and
\begin{equation}\label{e:dual-pairing}
\langle \alpha_h, \beta_h \rangle_{\mathcal{H}^e} = \int \alpha_h (t) \bar \beta_h (t)\, e^{-2t}dt \neq 0\, .
\end{equation}
\end{lemma}
Let us show how the latter implies Proposition \ref{prop:Vishikimproved-weaker}. Assume $z_{m_a-h}$ were an element of ${\rm spec}\, ((m_a-h)^{-1}\mathcal{L}_{m_a-h}, \mathcal{H}^e)\cap B_\varepsilon (\Xi (a))$ with geometric multiplicity $1$ and algebraic multiplicity larger than $1$: our goal is to show that $h$ cannot be too small.
The properties just listed mean that the following bounded operator on $\mathcal{H}^e$,
\[
L_h := (m_a-h)^{-1}\mathcal{L}_{m_a-h} - z_{m_a-h} \, ,
\]
has a 1-dimensional kernel, $0$ is in its point spectrum, and $0$ has algebraic multiplicity strictly larger than $1$. These properties imply that any element $\alpha_h$ in the kernel of $L_h$ (i.e. any eigenfunction of $(m_a-h)^{-1}\mathcal{L}_{m_a-h}$ with eigenvalue $z_{m_a-h}$) is in the image of $L_h$. Fix one such element $\alpha_h$ and let $\eta_h$ be such that $L_h (\eta_h) = \alpha_h$. If $h$ is small enough, we can fix $\beta_h$ as in Lemma \ref{l:aggiunto}, and observe that it is in the kernel of the adjoint operator $L^\star_h$. We then must have
\[
0 \neq \int \alpha_h \bar\beta_h \, e^{-2t}dt= \int L_h (\eta_h) \bar\beta_h \, e^{-2t}dt
= \int \eta_h \overline{L_h^\star (\beta_h)} \, e^{-2t}dt = 0\, ,
\]
which is not possible.}
\begin{proof}[Proof of Lemma \ref{l:aggiunto}]
{\color{red}
We begin by proving the following claim:
\begin{itemize}
\item[(Cl)] For any $z\in \mathscr{U}_m$, with $m>1$, such that
$m^{-1}\mathcal{L}_m(\alpha_z) - z \alpha_z = 0$,
there exists $\beta_z\in \mathcal{H}^e$ such that
\begin{equation}
m^{-1}\mathcal{L}_m^\star(\beta_z) - \bar z \beta_z = 0 \, ,
\end{equation}
and
\begin{equation}\label{eq:keyfunction}
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\varphi_z(t)^2\, d t\, ,
\end{equation}
where $\varphi_z$ is the unique solution in $L^2(\mathbb{R})$ of $\varphi_z'' - m^2\varphi_z = \alpha_z$.
\end{itemize}
To that aim we first observe that the adjoint of $\mathcal{L}_m$ in $\mathcal{H}^e$ is given by
\begin{equation}
\mathcal{L}_m^\star (\alpha) = m( \Xi \alpha - e^{2t} \mathcal{K}_m(A\alpha e^{-2t})) \, ,
\end{equation}
where $\mathcal{K}_m$ is the inverse of $-\frac{d^2}{dt^2} + m^2$ as a closed unbounded self-adjoint operator in $L^2(\mathbb{R})$.
Notice that $\mathcal{L}_m^\star$ is well defined because $e^{-t}\alpha \in L^2(\mathbb{R})$ and $A\sim e^{2t}$ as $t\to -\infty$.
We now observe that, if $z\in \mathscr{U}_m$, $m^{-1}\mathcal{L}_m (\alpha_z) = z \alpha_z$, and $\beta_z$ is defined by
\[
\beta_z:= e^{2t}\frac{\bar \varphi_z}{\Xi - \bar z}
= e^{2t}\frac{\mathcal{K}_m(\bar \alpha_z)}{\Xi - \bar z}\, ,
\]
then
\begin{equation}\label{eq:adj}
m^{-1}\mathcal{L}_m^\star (\beta_z) = \bar z \beta_z \, .
\end{equation}
Notice that $\beta_z\in W^{2,2}_{\rm loc}\cap \mathcal{H}^e$ decays exponentially fast at $\infty$ thanks to the bound $| \varphi_z(t)|\le C e^{-m|t|}$, for every $t\in \mathbb{R}$, proven in Lemma \ref{c:decay}.
Let us now verify \eqref{eq:adj}:
We first observe that
\begin{equation}
\alpha_z = \frac{A\varphi_z}{\Xi - z} \, ,
\end{equation}
hence
\begin{align*}
m^{-1}\mathcal{L}_m^\star (\beta_z) & = \Xi \beta_z - e^{2t}\mathcal{K}_m \left(\frac{A \bar \varphi_z}{\Xi - \bar z}\right) =
\Xi \beta_z - e^{2t}\mathcal{K}_m(\bar \alpha_z)
\\& = \Xi \beta_z -(\Xi - \bar z) \beta_z = \bar z \beta_z \, .
\end{align*}
It is now immediate to conclude \eqref{eq:keyfunction}.
}
\medskip
{\color{red}
In order to simplify our notation we use $\mathcal{L}_h$, $z_h$ and $m_h$ in place of $\mathcal{L}_{m_a-h}$, $z_{m_a-h}$ and $m_a-h$.
Given $\alpha_h\in \mathcal{H}^e$ as in the statement of the Lemma we denote by $\varphi_h$ the unique $L^2$ solution of $\varphi_h'' - m_h^2 \varphi_h = \alpha_h$. We now can apply (Cl) above to find $\beta_h\in \mathcal{H}^e$ which solves $m_h^{-1}\mathcal{L}_h^\star (\beta_h) = \bar z_h \beta_h$ and such that
\begin{equation}\label{eq:keyfunction1}
\left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t\, .
\end{equation}
To conclude the proof it suffices to show that, after appropriately normalizing the functions $\alpha_h$ (i.e. after multiplying them by an appropriate constant factor, which might depend on $h$) we have
\begin{equation}\label{e:limit-nonzero}
\lim_{h\to 0} \left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
= c \neq 0 \, .
\end{equation}
Note that for the latter conclusion, which we will prove in the next two steps, we will use the assumption that $\alpha_h\neq 0$.
\bigskip
{\bf Step 1:} We show that, up to multiplication of $\alpha_h$ by a suitable constant factor (which might vary with $h$), $\varphi_h \to \varphi$ in $W^{1,2}$ and in $C^{1,\alpha}$, as $h\to 0$, where $\varphi \in W^{2,\infty}$ is a nontrivial solution to
\begin{equation}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
By Remark \ref{r:phi(a)-nonzero}, the nontriviality of $\varphi$ implies $\varphi (a) \neq 0$, and hence, up to multiplication by another constant factor, we will assume, without loss of generality, that $\varphi (a)=1$.
Recall that $\varphi_h$ solves the equation
\begin{equation}\label{e:ODE-again-100}
- \frac{d^2 \varphi_h}{dt^2} + m_h^2 \varphi_h + \frac{A}{\Xi - z_h} \varphi_h
= 0 \, .
\end{equation}
For the moment, let us normalize the functions so that
\begin{equation}\label{e:normalization-2}
\int (|\varphi_h'|^2 + m_h^2 |\varphi_h|^2) = 1\, ,
\end{equation}
as in \eqref{e:L2-normalization}. We then can argue as for the bounds \eqref{e:exp-bound-1} and \eqref{e:exp-bound-2} to derive the existence of constants $C$ and $\beta>2$ (independent of $h$) such that
\begin{equation}\label{e:exp-bound-3}
\left|\varphi_h (t)\right| \leq C e^{- \beta |t|} \quad \forall t\, .
\end{equation}
Recalling Section \ref{s:5-7-part-II}, we know that $z_h = \Xi (a) + c(a) h + o (h)$, where $c(a)$ is a complex number with positive imaginary part, which we denote by $d(a)$. Using the monotonicity of $\Xi$ we can write $z_h = \Xi (t (h)) + i (d(a) h + o (h))$ for some $t(h)$ which satisfies the bound $|t (h) -a|\leq C h$ for some positive constant $C$. In particular, using the mean value theorem and the fact that the derivative of $\Xi$ does not vanish on $[a-1, a+1]$, we get
\[
|\Xi (t) - z_h|^2\geq C^{-1} (|t-t(h)|^2 + |h|^2)\qquad \forall t\in [a-1,a+1]\, ,
\]
where $C$ is some positive constant. Next, using that $|t(h)-a|\leq C h$, we conclude the estimate
\[
|\Xi (t) - z_h|\geq C^{-1} |t-a| \qquad \forall t\in [a-1, a+1]\, ,
\]
with a constant $C$ independent of $h$. Since $a$ is a zero of $A$, we finally conclude that the functions
\[
\frac{A(t)}{\Xi (t) - z_h}
\]
are in fact uniformly bounded, independently of $h$. Using the latter estimate and \eqref{e:exp-bound-3} we thus infer that
\begin{equation}\label{e:exp-bound-4}
|\varphi_h'' (t)|\leq C e^{-\beta |t|}\, .
\end{equation}
In particular, upon extraction of a subsequence, we can assume that the $\varphi_h$ converge to a function $\varphi$ strongly in $W^{1,2}$, weakly in $W^{2,\infty}$, and hence strongly in $C^{1, \alpha}$ for every $\alpha<1$. In particular, because of the normalization \eqref{e:normalization-2}, $\varphi$ is a nontrivial $W^{2, \infty}$ function, satisfying the same exponential decay as in \eqref{e:exp-bound-3} and \eqref{e:exp-bound-4}. Moreover, given the bound on the functions $\frac{A(t)}{\Xi (t) -z_h}$, $\varphi$ is in fact a solution of
\begin{equation}\label{e:ODE-again-101}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
Recalling Remark \ref{r:phi(a)-nonzero}, $\varphi (a)\neq 0$ and $\varphi$ is unique up to a constant factor. In particular, we must have
\begin{equation}\label{e:comoda}
\liminf_{h\downarrow 0} |\varphi_h (a)|>0,
\end{equation}
otherwise for a suitable subsequence we would have convergence to a nontrivial solution $\varphi$ for which $\varphi (a)=0$. Because of \eqref{e:comoda} we can use the different normalization $\varphi_h (a) = 1$, which in turn implies that $\varphi_h$ converges (without extracting subsequences) to the unique $W^{2,2}$ solution $\varphi$ of \eqref{e:ODE-again-101} which satisfies $\varphi (a)=1$.
\medskip
{\bf Step 2:} We prove that
\begin{equation}
\lim_{h\to 0} \, {\rm Im} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t
=
\frac{2A'(a)\varphi(a)^2}{d(a)\Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Recalling that $z_h = \Xi(t(h)) + i (d(a)h + o(h))$, we write
\begin{align*}
{\rm Im}\, \left[\frac{A}{(\Xi-z_h)^2} \varphi_h^2\right]
& = {\rm Im}\, \left[\frac{A((\Xi-\Xi(t(h)))+ i(d(a)h + o(h)))^2}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h + i {\rm Im}\, \varphi_h)^2\right]
\\&
= \frac{2(d(a)h + o(h)) A(\Xi - \Xi(t(h)))}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2)
\\ & \qquad +
\frac{2A }{(\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & \qquad -\frac{ 4(d(a)h + o(h))^2 A}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & =: I_h + II_h + III_h \, .
\end{align*}
To ease notation we set
\begin{equation}
f_h := {\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2 \, ,
\quad
g_h := {\rm Re}\, \varphi_h\, {\rm Im}\varphi_h \, ,
\end{equation}
and observe that $f_h \to \varphi^2$, $g_h \to 0$ as $h\to 0$, where the convergence is, in both cases, in the strong topologies of $L^2$ and of $C^\alpha$, for every $\alpha <1$.
We will show below that:
\begin{align}
\lim_{h\to 0} \int I_h &= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds=: L (a)\label{e:limit-I}\, ,\\
\lim_{h\to 0} \int II_h &= 0\label{e:limit-II}\, ,\\
\lim_{h\to 0} \int III_h &=0\label{e:limit-III}\, .
\end{align}
Considering that none of the numbers $\Xi' (a)$, $A'(a)$, $\varphi (a)$, and $d(a)$ vanish, $L(a)\neq 0$. This implies \eqref{e:limit-nonzero} and concludes the proof. We next study separately the three limits above.
\medskip
{\bf Proof of \eqref{e:limit-I}.}
There exist $\delta>0$ and $r>0$ such that for any $h$ sufficiently small one has $|\Xi(t) - \Xi(t(h))|>\delta$ for all $t\in \mathbb{R}\setminus (a-r/2,a+r/2)$. This implies that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}\setminus (t(h) - r, t(h) + r)} I_h = 0 \, ,
\end{equation}
hence, we are left with
\begin{equation}
\lim_{h \to 0} 2h d(a)\int_{t(h) - r}^{t(h) + r} \frac{A(t)(\Xi(t)- \Xi(t(h)))}{((\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2)^2} f_h(t)
\, d t \, .
\end{equation}
We change variables according to $t= t(h) + sh$:
\begin{align*}
2\int_{-\frac{r}{h}}^{\frac{r}{h}} s & \left(\frac{A(t(h) + sh)}{h} \frac{\Xi(t(h) + sh )- \Xi(t(h))}{sh}\right) \times
\\&
\left( s^2\left(\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh}\right)^2 + (d(a) + o(h)/h)^2 \right)^{-2}
f_h(t(h) + sh)\, d s
=: C(h) \, .
\end{align*}
Notice that, for any $s\in \mathbb{R}$, we have
\begin{equation}
\lim_{h \to 0}\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} = \Xi'(a) \, .
\end{equation}
Moreover the monotonicity of $\Xi$ implies that
\begin{equation}
1/C \le \left| \frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} \right| \le C
\quad \text{for any $s\in (-r/h, r/h)$} \, .
\end{equation}
Notice that
\begin{equation}
\frac{A(t(h) + sh)}{h}
=
\frac{A(t(h) + sh)-A(t(h))}{h} + \frac{A(t(h)) - A(a)}{h} \, ,
\end{equation}
hence, up to extracting a subsequence $h_i \to 0$, we have
\begin{equation}
\frac{A(t(h_i) + sh_i)}{h_i} \to A'(a)s + x
\end{equation}
for some $x\in \mathbb{R}$ (recall that $|t(h)-a|\leq C h$ and note that $x$ might depend on the subsequence).
Collecting all the estimates above, and using the dominated convergence theorem we deduce that, along the subsequence $h_i$,
\begin{equation}
\lim_{i\to\infty} C(h_i)= 2\int_{\mathbb{R}} \frac{s(A'(a)s + x)\Xi'(a)}{(s^2 \Xi'(a)^2 + d(a)^2)^2} \varphi(a)^2\, ds
=
\frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Observe that the limit does not depend on $x$ and hence does not depend on the chosen subsequence.
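For the reader's convenience we record two elementary facts used in the last identity: the contribution of the term proportional to $x$ vanishes because the corresponding integrand is odd in $s$, and the model integral has the explicit value
\[
\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2}\, ds = \int_{-\pi/2}^{\pi/2} \sin^2 \theta\, d\theta = \frac{\pi}{2}
\]
(substitute $s = \tan \theta$), so that in particular $L(a) = \pi A'(a) \varphi(a)^2 / (d(a) \Xi'(a)^2)$.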
\medskip
{\bf Proof of \eqref{e:limit-III}.} Arguing as we did for $I_h$ and using that $g_h\to 0$ in $C^{1/2}$, as $h\to 0$, we easily deduce that
\begin{equation}
\lim_{h\to 0} \int III_h = 0 \, .
\end{equation}
\medskip
{\bf Proof of \eqref{e:limit-II}.}
We need to show that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} g_h(t)\, dt = 0 \, .
\end{equation}
Observe that $G_h := g_h/|\varphi_h|^2 = |\varphi_h|^{-2} {\rm Re}\, \varphi_h {\rm Im}\, \varphi_h $, and in particular, $|G_h|\leq \frac{1}{2}$. Moreover there exists $r>0$ such that $G_h \to 0$ in $(a-r, a+r)$ in the $C^\alpha$ topology for every $\alpha<1$ (here, we are using that $|\varphi_h(a)|^2 \to |\varphi(a)|^2\neq 0$ as $h\to 0$).
We write
\begin{align}
\int_\mathbb{R} & \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} G_h(t)\, dt
\\& =
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt \, ,
\end{align}
where we took advantage of the identity
\begin{equation}
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2}\, d t = 0 \, ,
\end{equation}
proven in \eqref{e:imaginary-trick}.
Arguing as we did for $I_h$ we can reduce the problem to show
\begin{align}
\lim_{h \to 0} & \int_{t(h) - r}^{t(h) + r} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt = 0\, .
\end{align}
We split the integral to the sum of
\begin{align*}
J_1 (h) &:= \int_{t(h)-r}^{t(h)+r} \frac{(A(t)- A(t (h))|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\\
J_2 (h) &:= A(t (h)) \int_{t(h)-r}^{t(h)+r} \frac{|\varphi_h(t)|^2}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\, .
\end{align*}
Next observe that, in the interval that interests us, the following inequalities hold provided $r$ and $h$ are sufficiently small:
\begin{align*}
|A (t) - A(t(h))|&\leq C |t- t(h)|\\
|A(t(h))| &= |A (t (h))-A(a)|\leq C|t(h)-a|\leq C h\\
|G_h (t) - G_h (t (h))|&\leq \|G_h\|_{C^{1/2} (a-r, a+r)} |t-t(h)|^{1/2}\\
|\Xi (t) - \Xi (t(h))| &\geq C^{-1} |t-t(h)|\\
(d(a)h + o (h))^2 &\geq C^{-1} h^2\, .
\end{align*}
Since $\|\varphi_h\|_{L^\infty} \leq C$, we can change variable in the integrals to $\sigma =t-t(h)$ and estimate them as follows:
\begin{align*}
|J_1 (h)|&\leq C \|G_h\|_{C^{1/2} (a-r,a+r)} \int_{-r/2}^{r/2} |\sigma|^{-1/2}\, d\sigma \leq C\|G_h\|_{C^{1/2}} r^{1/2}\, ,\\
|J_2 (h)| &\leq C h \int_{-\infty}^\infty \frac{|\sigma|^{1/2}}{\sigma^2 + C^{-1} h^2}\, d\sigma
= C h^{1/2} \int \frac{|\tau|^{1/2}}{\tau^2 + C^{-1}} d\tau \leq C h^{1/2}\, .
\end{align*}
Clearly $J_2 (h)\to 0$, while $J_1 (h) \to 0$ because $\|G_h\|_{C^{1/2} (a-r,a+r)} \to 0$.}
\end{proof}
\chapter{Proofs of technical statements}
\section{Proof of Remark \ref{r:bounded}}
More generally, we will show here that, for any $q_0\in[1,2[$ and $q_1\in]2,\infty]$, it is true that \begin{equation*}\lVert K_2*\omega\rVert_{L^\infty(\ensuremath{\mathbb R}^2;\ensuremath{\mathbb R}^2)}\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)})\end{equation*} for all $\omega\in L^{q_0}\cap L^{q_1}$.
Indeed, passing to polar coordinates, one sees that $K_2\vert_{B_1}\in L^{q_1^*}(B_1; \ensuremath{\mathbb R}^2)$ and $K_2\vert_{\ensuremath{\mathbb R}^2\setminus B_1}\in L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1;\ensuremath{\mathbb R}^2)$, where $q_0^*, q_1^*$ are given by $\frac1{q_i}+\frac1{q_i^*}=1$ for $i\in\{0,1\}$. Hölder's inequality implies that for any $x\in\ensuremath{\mathbb R}^2$,
\begin{equation*}
\begin{split}
\abs{(K_2*\omega)(x)} &= \abs{((K_2 \mathbf 1_{B_1})*\omega)(x)+((K_2 (1-\mathbf 1_{B_1}))*\omega)(x)} \\
&\le\norm{K_2}_{L^{q_1^*}(B_1)}\norm{\omega}_{L^{q_1}(\ensuremath{\mathbb R}^2)} + \norm{K_2}_{L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1)}\norm{\omega}_{L^{q_0}(\ensuremath{\mathbb R}^2)} \\
&\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)}).
\end{split}
\end{equation*}
Since $x$ is arbitrary, this proves the claim above.
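For completeness, the two memberships used above can be checked explicitly if, as is standard for the $2$d Euler equations, $K_2 (x) = \frac{1}{2\pi} \frac{x^\perp}{|x|^2}$, so that $|K_2 (x)| = \frac{1}{2\pi |x|}$: in polar coordinates, for finite exponents,
\[
\int_{B_1} |K_2|^{q_1^*} = \frac{1}{(2\pi)^{q_1^*-1}} \int_0^1 r^{1-q_1^*}\, dr < \infty \quad \text{since } q_1^* < 2\, ,
\qquad
\int_{\ensuremath{\mathbb R}^2\setminus B_1} |K_2|^{q_0^*} = \frac{1}{(2\pi)^{q_0^*-1}} \int_1^\infty r^{1-q_0^*}\, dr < \infty \quad \text{since } q_0^* > 2\, ,
\]
with the obvious modification (an $L^\infty$ bound) in the endpoint case $q_0 = 1$, i.e. $q_0^* = \infty$.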
\section{Proof of Theorem \ref{thm:Yudo}}
\textbf{Existence.} The existence argument is a classical density argument. Take any sequence $(\omega_0^{(n)})_{n\in\mathbb N}$ of functions in $L^1\cap C^\infty_c$ that converges strongly in $L^1$ to $\omega_0$. Analogously, pick a sequence of smooth functions $(f_n)_{n\in\mathbb N}$ in $C^\infty_c (\ensuremath{\mathbb R}^2\times[0, T])$ converging in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to $f$ and satisfying the bound $\|f_n (\cdot, t)\|_{L^\infty} \leq \|f (\cdot, t)\|_{L^\infty}$ for a.e. $t$. Then, let $\omega^{(n)}$ denote the solution of the corresponding Cauchy problem of the Euler equations in vorticity form. The existence of such solutions is a well-known classical fact, see for instance \cite[Theorem A]{McGrath}. Following Remark \ref{r:A-priori-estimates}, these solutions satisfy all the a priori estimates needed in Proposition \ref{p:convergence}. Therefore, following the proof of Proposition \ref{p:convergence}, one obtains, in the limit $n\to\infty$, a solution $\omega\in L^\infty([0,T]; L^1\cap L^\infty)$ of the given Cauchy problem. Furthermore, since the a priori estimates of Remark \ref{r:A-priori-estimates} are uniform in $n$, one gets $K_2*\omega\in L^\infty([0,T]; L^2)$.
\begin{remark}
The proof of Proposition \ref{p:convergence} is stated for a fixed force $f$, but a straightforward adaptation of the arguments handles the case above, namely that of a sequence of forces $(f_n)_{n\in\mathbb N}$ converging in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to a given $f$. More precisely, the only difference occurs in the term $I_4 (k)$ of \eqref{e:term-I4}, which nonetheless converges to the same limit.
\end{remark}
\textbf{Uniqueness.} The uniqueness proof needs two important facts. The first is a well-known ODE inequality, whose short proof is given, for the reader's convenience, at the end of the section.
\begin{lemma}\label{l:ODE lemma}
Let $T>0$ and let $E:[0,T]\to[0,\infty[$ be a differentiable function satisfying
\begin{equation}\label{e:ODE inequality for E}
\dot E(t)\le p M E(t)^{1-1/p} \quad \text{ and }\quad E(0)=0
\end{equation}
for some fixed $M>0$. Then $E(t)\le (Mt)^p$ for all $t\in[0,T]$.
\end{lemma}
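Note that the bound of Lemma \ref{l:ODE lemma} is sharp: the function $E(t) = (Mt)^p$ satisfies \eqref{e:ODE inequality for E} with equality, since
\[
\frac{d}{dt} (Mt)^p = pM (Mt)^{p-1} = pM \big((Mt)^p\big)^{1-1/p}\, ,
\]
so no better conclusion than $E(t) \le (Mt)^p$ can be expected.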
The second is the classical Calderón-Zygmund $L^p$ estimate, where we need the sharp $p$-dependence of the corresponding constant. This fact is also well known, cf. for instance \cite[Formula (8.45), page 322]{MajdaBertozzi}.
\begin{lemma}\label{l:Estimate on Lp norm of gradient of velocity}
For every $p_0>1$ there is a constant $c (p_0)$ with the following property.
If $v=K_2*\omega$ for some $\omega\in L^1 \cap L^p(\ensuremath{\mathbb R}^2)$ with $p\in [p_0, \infty[$, then $\norm{D v}_{L^p}\le p c \norm{\omega}_{L^p}$.
\end{lemma}
Now, let $v_1=K_2*\omega_1, v_2=K_2*\omega_2$ be two solutions of \eqref{e:Euler} satisfying the assumptions of Theorem \ref{thm:Yudo} and note that $w:=v_1-v_2$ solves
\begin{equation}\label{e:Gleichung fuer die Differenz}
\partial_t w +(v_1\cdot\nabla)w +(w\cdot\nabla)v_2=-\nabla(p_1-p_2)
\end{equation}
(where $p_1, p_2$ are the pressures corresponding to $v_1$ and $v_2$). Clearly
\[
E(t):=\int_{\ensuremath{\mathbb R}^2} |w(x,t)|^2\,\mathrm dx \leq 2 \int_{\ensuremath{\mathbb R}^2} |v_1 (x,t)|^2\,\mathrm dx + 2 \int_{\ensuremath{\mathbb R}^2} |v_2 (x,t)|^2\,\mathrm dx <\infty
\]
is a bounded function on $[0,T]$.
We scalar multiply \eqref{e:Gleichung fuer die Differenz} with $w$, integrate by parts, and use that $v_1$, $v_2$, and $w$ are divergence free to conclude
\begin{align*}
\dot E(t) = & - 2\int_{\ensuremath{\mathbb R}^2}((w\cdot\nabla)v_2)w\,\mathrm dx
\le 2\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^2 \abs{D v_2(x,t)}\,\mathrm dx\\
\le & 2\norm{\nabla v_2(\cdot, t)}_{L^p}\norm{w(\cdot, t)}_{L^\infty}^{2/p}\norm{w(\cdot,t)}_{L^2}^{2-2/p}\,.
\end{align*}
Using Remark \ref{r:bounded}, we also have
\begin{equation*}
\begin{split}
\sup_{t\in[0,T]} \norm{w(\cdot, t)}_{L^\infty} &\le \sup_{t\in[0,T]}(\norm{v_1(\cdot, t)}_{L^\infty}+\norm{v_2(\cdot, t)}_{L^\infty}) \\
&\le C \sup_{t\in[0, T]}(\norm{\omega_1(\cdot, t)}_{L^1}+\norm{\omega_1(\cdot, t)}_{L^\infty}+\norm{\omega_2(\cdot, t)}_{L^1}+\norm{\omega_2(\cdot, t)}_{L^\infty}) <\infty\, .
\end{split}
\end{equation*}
Next fix any $p\geq 2$. From Lemma \ref{l:Estimate on Lp norm of gradient of velocity} and the classical $L^p$ interpolation we conclude
\begin{equation*}
\norm{D v_2(\cdot, t)}_{L^p}\le p c \norm{\omega_2 (\cdot, t)}_{L^p}\le p c \norm{\omega_2 }_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2}_{L^\infty([0,T]; L^\infty)}^{1-1/p}.
\end{equation*}
Therefore, $\dot E(t)\le p M_p E(t)^{1-1/p}$
with
\begin{align*}
M_p &= 2\norm{w}_{L^\infty([0,T];L^\infty)}^{2/p} c \norm{\omega_2}_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2 }_{L^\infty([0,T]; L^\infty)}^{1-1/p}\\
&\le 2c \left({\textstyle{\frac{1}{p}}}\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\left(1-{\textstyle{\frac{1}{p}}}\right)\norm{\omega_2}_{L^\infty([0,T];L^\infty)}\right) \\
&\le 2c (\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\norm{\omega_2}_{L^\infty([0,T];L^\infty)})=: M<\infty\, .
\end{align*}
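In the first inequality above we used Young's inequality $x^{1/p} y^{1-1/p}\le \frac{x}{p} + \left(1-\frac{1}{p}\right) y$ for $x, y\ge 0$, applied with $x = \norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}$ and $y = \norm{\omega_2}_{L^\infty([0,T];L^\infty)}$.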
We can thus apply
Lemma \ref{l:ODE lemma} to obtain that $E(t)\le (M_p t)^p \leq (Mt)^p$.
In particular, for any $t\leq \frac{1}{2M}$ we have $E(t)\le \frac 1{2^p}$ and we can let $p$ tend to $\infty$ to infer $E(t)=0$ for every $t\leq \frac{1}{2M}$. Since the same estimates apply to any translation $\tilde{E} (t):= E (t+t_0)$ of the function $E$, we immediately conclude that $E$ vanishes identically, namely that $v_1=v_2$ on $\mathbb R^2\times [0,T]$.
\begin{proof}[Proof of Lemma \ref{l:ODE lemma}] Fix an arbitrary $t_0\leq T$ and note that if $E(t_0)=0$ there is nothing to show. Hence assume $E(t_0)> 0$ and set $a:=\sup\{t:E(t)=0\text{ and }t\le t_0\}$ (note that the set is nonempty because $E (0)=0$). $E(a)=0$ by continuity of $E$ and clearly $E(t)>0$ for all $t\in]a,t_0]$. Therefore, we can divide \eqref{e:ODE inequality for E} by $E(t)^{1-1/p}$ to obtain that $\dot E(t) E^{1/p-1}(t)\le p M$
for all $t\in]a, t_0]$. Integrating both sides gives
\begin{equation}\label{e:Integral bound on E}
\int_{a}^{t_0} \dot E(t) E^{1/p-1}(t)\,\mathrm dt\le p M (t_0-a)\, .
\end{equation}
But the left hand side equals $p E^{1/p}(t_0)-pE^{1/p}(a)=p E^{1/p}(t_0)$, from which we infer
$E^{1/p}(t_0)\le M (t_0-a) \le M t_0$.
\end{proof}
\section{Proof of Proposition \ref{p:convergence}}
Recall first the following classical metrizability result of weak${}^*$ topologies of separable Banach spaces.
\begin{lemma}[Metrizability Lemma]\label{l:Metrizability}
Let $X$ be a separable Banach space and let $K\subset X^*$ be weakly${}^*$-compact. Then $K$ is metrizable in the weak${}^*$ topology inherited from $X^*$ and a metric that induces this topology is given by
\begin{equation}\label{e:Metrization-of-weak-star-topology}
d(l, \tilde l)=\sum_{n=1}^\infty 2^{-n}\min\{1, \vert l(x_n)-\tilde l(x_n)\vert\},
\end{equation}
where $(x_n)_{n\in \mathbb N}$ is any sequence in $X$ such that $\{x_n:n\in\mathbb N\}$ is dense in $X$.
\end{lemma}
Now on to the proof of Proposition \ref{p:convergence}. We will prove convergence of the $\omega_{\varepsilon, k}$ to some $\omega_\varepsilon$, for fixed $\varepsilon$ and $k\to\infty$, in the space $C([0, T]; K)$, where $K:=\{u\in L^q(\ensuremath{\mathbb R}^2):\lVert u\rVert_{L^q}\le R\}$ is equipped with the weak${}^*$ topology inherited from $L^q$ (the choice of $q$ is discussed below). Here, $R$ is the uniform bound obtained in \eqref{e:uniform_bound} of Corollary \ref{c:omega_k_epsilon}. Note that since every $L^q$ space with $q\in]1,\infty[$ is reflexive, one can work just as well with the weak topology on $K$. Let $(\phi_n)_{n\in\mathbb N}$ be a sequence of smooth functions such that $\{\phi_n:n\in\mathbb N\}$ is dense in every $L^q$. The metric given by \eqref{e:Metrization-of-weak-star-topology} then induces the topology of $K$, and it does not depend on $q$. Therefore, using the uniform bound \eqref{e:uniform_bound}, we conclude that \emph{the choice of $q$ does not matter}: it suffices to prove the statement of Proposition \ref{p:convergence} for one fixed $q\in]1, p]$ in order to prove it for all $q\in]1, p]$.
\begin{claim}
The $\omega_{\varepsilon, k}$, seen as functions from $[0, T]$ to $K$, are equicontinuous (for simplicity we define each $\omega_{\varepsilon, k} (\cdot, t)$ on the interval $[0,t_k]$ as constantly equal to $\omega_{\varepsilon, k} (\cdot, t_k)$).
\end{claim}
\begin{proof}
For $\tilde\omega, \hat\omega \in L^q(\ensuremath{\mathbb R}^2)$, let
\begin{equation*}
d_i(\tilde\omega, \hat \omega) \overset{\text{Def.}}=\left\lvert\int_{\ensuremath{\mathbb R}^2} (\tilde\omega-\hat\omega) \phi_i\right\rvert.
\end{equation*}
Since each $\omega_{\varepsilon, k}$ solves the Euler equations in vorticity form, we can estimate
\begin{equation}\label{e:bound-on-ith-distance}
\begin{split}
d_i(\omega_{\varepsilon, k}(t, \cdot), \omega_{\varepsilon, k}(s, \cdot)) &= \left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t\partial_\tau\omega_{\varepsilon, k}(\sigma, x)\phi_i(x)\,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&=\left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t \left(-((K_2*\omega_{\varepsilon, k})\cdot\nabla)\omega_{\varepsilon, k}(x, \sigma) + f (x, \sigma)\right)\phi_i(x) \,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&\le\lVert\nabla\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)}\int_{\ensuremath{\mathbb R}^2}\int_s^t\lvert K_2*\omega_{\varepsilon, k}\rvert\lvert\omega_{\varepsilon, k}\rvert\,\mathrm d\sigma\,\mathrm dx\\
& \qquad + \lVert\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)} \int_{\ensuremath{\mathbb R}^2}\int_s^t |f(x, \sigma)|\, dx\, d\sigma\\
&\le C(\lVert\nabla\phi_i\rVert_\infty + \|\phi_i\|_\infty) \lvert s-t\rvert
\end{split}
\end{equation}
whenever $t\geq s \geq t_k$.
Let $\tilde\varepsilon>0$. We can find an $N\in\mathbb N$ (depending on $\tilde\varepsilon$) such that $\sum_{n=N+1}^\infty 2^{-n}\le\frac{\tilde\varepsilon}2$. If \begin{equation*}\lvert t-s\rvert\le \frac{\tilde\varepsilon}{2NC\max_{i\in\{1,\dots,N\}}(\lVert\nabla\phi_i\rVert_{\infty} + \|\phi_i\|_\infty)},\end{equation*} where $C$ is the constant from \eqref{e:bound-on-ith-distance}, then, by the bound in \eqref{e:bound-on-ith-distance}, we get
\begin{equation*}
d(\omega_{\varepsilon, k}(t, \cdot),\omega_{\varepsilon, k}(s, \cdot))\le\frac{\tilde\varepsilon}2+\sum_{i=1}^N \frac{\tilde\varepsilon}{2N}=\tilde\varepsilon.\qedhere
\end{equation*}
\end{proof}
By the Banach-Alaoglu theorem, norm-bounded subsets of $L^q(\ensuremath{\mathbb R}^2)$ are relatively compact in the weak${}^*$ topology. Therefore, using reflexivity, for every $t\in[0,T]$ the bounded set $\{\omega_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is also relatively compact in $L^q_{\text w}$.
Therefore, using Arzelà-Ascoli, we can conclude that there exists a subsequence of $(\omega_{\varepsilon, k})_{k\in\mathbb N}$, not relabeled, that converges in $C([0, T]; L_{\text w}^q)$, for every $q$, to the same $\omega_\varepsilon\in C([0, T]; L_{\text w}^q)$.
\begin{claim}
The function $\omega_\varepsilon$ is a solution of the Euler equations in vorticity formulation.
\end{claim}
\begin{proof}
We have, for every $k\in\mathbb N$ and $\phi\in C_{\text c}^\infty(\ensuremath{\mathbb R}^2\times[0, T])$ with $\phi(\cdot, T)=0$, (cf. \eqref{e:distrib})
\begin{align}
&\underbrace{\int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x, t_k)\phi(x, t_k)\,\mathrm dx}_{=:I_1 (k)} + \underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2}\ \omega_{\varepsilon, k}(x,t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_2 (k)}\nonumber\\
+ &\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x,t)((K_2*_x\omega_{\varepsilon, k})(x,t)\cdot\nabla)\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_3 (k)} +\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} f(x, t)\phi(x, t)\,\mathrm dx\,\mathrm dt}_{=:I_4 (k)} = 0\label{e:term-I4}\, .
\end{align}
The term $I_4(k)$ converges to
\begin{equation*}
\int_{\ensuremath{\mathbb R}^2\times[0, T]} f(x, t)\phi(x,t)\,\mathrm dx\,\mathrm dt\, .
\end{equation*}
By the convergence of the $\omega_{\varepsilon, k}$, \begin{equation*}\lim_{k\to\infty} I_2(k)=\int_{\ensuremath{\mathbb R}^2\times[0, T]} \omega_\varepsilon(x, t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt.\end{equation*} By the definition of the initial condition of $\omega_{\varepsilon, k}$ (cf. \eqref{e:Euler-later-times}), $\omega_{\varepsilon, k}(\cdot, t_k)$ converges strongly in $L^1(\ensuremath{\mathbb R}^2)$ to $\tilde\omega(\cdot, 0)=\omega_0=\omega_\varepsilon(\cdot, 0)$. Therefore, \begin{equation*}\lim_{k\to\infty} I_1(k)=\int_{\ensuremath{\mathbb R}^2}\omega_\varepsilon(x, 0)\phi(x, 0)\,\mathrm dx.\end{equation*}
It therefore only remains to prove the convergence of $I_3$, for which we will require yet another claim.
\begin{claim}
For every $r\in[2, \infty[$ and every $t\in[0,T]$, the set $\{v_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is relatively compact in $L^r (B_R)$ for every $R>0$.
\end{claim}
\begin{proof}
From \eqref{e:uniform_bound}, we know that $\|v_{\varepsilon, k}(\cdot, t)\|_{L^2(\ensuremath{\mathbb R}^2)}\le C$ for some constant $C$ that is independent of $t$. Recall that $v_{\varepsilon, k}=\nabla^\bot\psi_{\varepsilon, k}$, where $\psi_{\varepsilon, k}$ solves $\Delta\psi_{\varepsilon, k} = \omega_{\varepsilon, k}$. Therefore, using the Calder\'{o}n-Zygmund inequality, one gets
\begin{equation*}
\norm{\nabla v_{\varepsilon, k}(\cdot, t)}_{L^2}\le C\norm{\omega_{\varepsilon, k}(\cdot, t)}_{L^2}.
\end{equation*}
Since the $L^2$ norms of the $\omega_{\varepsilon, k}(\cdot, t)$ are uniformly bounded, we can conclude that
\begin{equation*}
\sup_{k\in\mathbb N} \norm{v_{\varepsilon, k}(\cdot, t)}_{W^{1,2}(\ensuremath{\mathbb R}^2)}<\infty.
\end{equation*}
Hence we conclude the compactness in $L^r (B_R)$ from Rellich's Theorem.
\end{proof}
Therefore, the $v_{\varepsilon, k}(\cdot, t)$ converge to $v_\varepsilon(\cdot, t)$ strongly in every $L^r (B_R)$ with $r\in[2,\infty[$. Moreover, thanks to \eqref{e:uniform_bound}, we can apply Lebesgue's dominated convergence theorem to conclude that $v_{\varepsilon, k}\to v_\varepsilon$ as $k\to\infty$ in the space $L^1([0, T]; L^r (B_R))$ for every $r\in[2,\infty[$.
By definition,
\begin{equation*}
\omega_{\varepsilon, k} (v_{\varepsilon, k}\cdot\nabla)\phi-\omega_{\varepsilon} (v_{\varepsilon}\cdot\nabla)\phi = \omega_{\varepsilon, k} (v_{\varepsilon, k}-v_{\varepsilon})\cdot \nabla \phi +
(\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, .
\end{equation*}
Choosing $R$ so that ${\rm supp}\, \phi \subset B_R\times [0,T]$, we thus rewrite
\begin{align}
I_3 (k) &= \int_0^T \int_{B_R} \omega_{\varepsilon, k} (v_{\varepsilon,k}-v_\varepsilon)\cdot \nabla \phi\, dx\, dt
+ \int_0^T \underbrace{\int_{B_R} (\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, dx}_{=:J_k(t)}\, dt\, .\label{e:I3-converges-to-0}
\end{align}
Observe first that, for each fixed $t$,
\[
\lim_{k\to\infty} J_k (t) = 0\, ,
\]
since $\omega_{\varepsilon, k} (\cdot, t) - \omega_\varepsilon (\cdot, t)$ converges weakly to $0$ in $L^2$, while $v_\varepsilon (\cdot, t)\cdot \nabla \phi (\cdot, t)$ is a fixed $L^2$ function. On the other hand
\[
|J_k (t)|\leq \|\nabla \phi (\cdot, t)\|_\infty (\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^2} + \|\omega_\varepsilon (\cdot, t)\|_{L^2}) \|v_\varepsilon (\cdot, t)\|_{L^2}\, .
\]
Therefore, by the dominated convergence theorem, the second integral in \eqref{e:I3-converges-to-0} converges to $0$. The first integral can be bounded by
\[
\|\nabla \phi\|_{L^\infty} \|v_{\varepsilon, k} - v_\varepsilon\|_{L^1 ([0,T], L^2 (B_R))} \|\omega_{\varepsilon, k}\|_{L^\infty ([0,T], L^2 (B_R))}
\]
and converges to $0$ as well.
\end{proof}
\section{Proof of Lemma \ref{l:extension}}
Consider $\vartheta \in L^2_m\cap \mathscr{S}$ for $m\geq 2$ and let $v:= K_2*\vartheta$. We first claim that
\begin{equation}\label{e:average}
\int_{B_R} v = 0 \qquad \qquad \mbox{for every $R>0$.}
\end{equation}
With \eqref{e:average} at our disposal, since $\|Dv\|_{L^2 (\mathbb R^2)} = \|\vartheta\|_{L^2 (\mathbb R^2)}$, we use the Poincar\'e inequality to conclude
\begin{equation}
R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C \|\vartheta\|_{L^2 (\mathbb R^2)}
\end{equation}
for a geometric constant $C$. This is then enough to infer the remaining conclusions of the lemma.
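For the reader's convenience we recall why the constant above can be taken independent of $R$: if $u\in W^{1,2}(B_R)$ has zero average on $B_R$, then $u_R(x):= u(Rx)$ has zero average on $B_1$, and the Poincar\'e inequality on the unit disk together with the scaling of the two norms gives
\[
\|u\|_{L^2 (B_R)} = R\, \|u_R\|_{L^2 (B_1)} \le C R\, \|D u_R\|_{L^2 (B_1)} = C R\, \|D u\|_{L^2 (B_R)}\, .
\]
This is applied to $u = v$, whose average on $B_R$ vanishes by \eqref{e:average}.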
In order to achieve \eqref{e:average} observe first that $v = \nabla^\perp h$, where $h$ is the unique potential-theoretic solution of $\Delta h = \vartheta$, given by $h = K * \vartheta$ with $K (x) = \frac{1}{2\pi} \log |x|$. Since $K(R_\theta x) = K (x)$ and $\vartheta (x) = \vartheta (R_{2\pi/m} x)$, it follows that $h (R_{2\pi/m} x) = h (x)$, i.e. $h$ is $m$-fold symmetric. Therefore, by the chain rule, $\nabla h (R_{2\pi/m} x) = R_{2\pi/m} \nabla h (x)$. In particular, integrating in $x$ and using that the rotation is a measure-preserving transformation of the disk, we conclude
\[
\int_{B_R} \nabla h = R_{2\pi/m} \int_{B_R} \nabla h \, ,
\]
and thus,
\[
\int_{B_R} \nabla h = \frac{1}{m} \sum_{k=0}^{m-1} R_{2k\pi/m} \int_{B_R} \nabla h
\]
Since $m\ge 2$, we have $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$, showing that $\int_{B_R} \nabla h = 0$ and hence that $\int_{B_R} v = \int_{B_R} \nabla^\perp h = 0$, which is precisely \eqref{e:average}.
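For completeness: the vanishing of $\sum_{k=0}^{m-1} R_{2k\pi/m}$ can be seen by identifying $\mathbb R^2$ with $\mathbb C$, under which $R_{2k\pi/m}$ acts as multiplication by $e^{2\pi i k/m}$, and
\[
\sum_{k=0}^{m-1} e^{2\pi i k/m} = \frac{1-e^{2\pi i}}{1-e^{2\pi i/m}} = 0\, ,
\]
where $m\ge 2$ guarantees that the denominator does not vanish.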
\begin{remark}\label{r:Camillo_dumb}
We next show that it is not possible to find a continuous extension of the operator $L^2\cap \mathscr{S} \ni \vartheta \mapsto K_2* \vartheta \in \mathscr{S}'$ to the whole $L^2$. First of all we observe that, if such an extension exists, it then needs to coincide with $K_2* \vartheta$ when $\vartheta \in L^1 \cap L^2$. We next exhibit a sequence of divergence free vector fields $\{v_k\}\subset W^{1,1}\cap W^{1,2}$ with the property that $\omega_k = \curl v_k$ converge to $0$ strongly in $L^2$ but $v_k$ converge locally to a constant vector field $v_0\neq 0$. In order to do this, we first define the following functions $\phi_k$ on the positive real axis:
\[
\phi_k (r) :=
\left\{
\begin{array}{ll}
1 + \frac{3}{4\ln k} \qquad &\mbox{for $r\leq \frac{k}{2}$}\\ \\
1 + \frac{3}{4\ln k} - \frac{1}{k^2 \ln k} \left(r-\frac{k}{2}\right)^2\qquad &\mbox{for $\frac{k}{2} \leq r \leq k$}\\ \\
1 + \frac{1}{2 \ln k}- \frac{1}{\ln k} \ln \frac{r}{k} \qquad & \mbox{for $k\leq r \leq k^2$}\\ \\
\frac{1}{2 k^4 \ln k} (r-2k^2)^2\qquad & \mbox{for $k^2 \leq r\leq 2k^2$}\\ \\
0 \qquad &\mbox{for $r\geq 2 k^2$}\, .
\end{array}
\right.
\]
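For instance, at the transition radius $r=k$ the second expression and its derivative take the values
\[
1 + \frac{3}{4\ln k} - \frac{1}{k^2 \ln k} \Big(k-\frac{k}{2}\Big)^2 = 1 + \frac{1}{2\ln k}
\qquad\text{and}\qquad
- \frac{2}{k^2 \ln k} \Big(k-\frac{k}{2}\Big) = -\frac{1}{k \ln k}\, ,
\]
which coincide with the value and the derivative of the third expression at $r=k$; the matchings at $r=\frac{k}{2}$, $r=k^2$ and $r=2k^2$ are checked in the same way. This is the computation behind the regularity claim made next.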
Observe that $\phi_k$ is $C^1$ and its derivative is Lipschitz. Next we define the stream functions
\[
\psi_k (x) = - \phi_k (|x|) v_0^\perp \cdot x
\]
and the vector field $v_k (x) = \nabla^\perp \psi_k (x)$. By construction $v_k$ is divergence free, compactly supported, and Lipschitz. In particular, it belongs to $W^{1,p}$ for every $p$. Moreover, $v_k$ equals $(1+\frac{3}{4\ln k}) v_0$ on $B_{k/2}$ and it thus follows that, as $k\to \infty$, $v_k$ converges locally to the constant vector field $v_0$. It remains to check that $\curl v_k = \Delta \psi_k$ converges to $0$ strongly in $L^2$. We compute
\[
\Delta \psi_k = - \underbrace{v_0^\perp \cdot x\, \Delta (\phi_k (|x|))}_{=:f_k} - \underbrace{\nabla (\phi_k (|x|))\cdot v_0^\perp}_{=: g_k}\,
\]
and we seek to bound $f_k$ and $g_k$ pointwise. For what concerns $f_k$ observe that $\Delta\phi_k$ vanishes on $|x|\leq \frac{k}{2}$, $k\leq |x| \leq k^2$, and $2k^2 \leq |x|$. On the remaining regions, using the formula for the Laplacian in polar coordinates, we can estimate
\[
|f_k (x)|\leq |v_0| |x| (|\phi_k'' (|x|)| + |x|^{-1} |\phi_k' (|x|)|)\, .
\]
In particular, we conclude
\[
|f_k (x)| \leq \frac{C}{|x| \ln k}\, ,
\]
for a constant $C$ independent of $k$.
As for $g_k$, it vanishes for $|x|\leq \frac{k}{2}$ and $|x|\geq 2k^2$, and where it does not vanish we have the estimate
\[
|g_k (x)|\leq |v_0| |\phi_k' (|x|)| \leq \frac{C}{|x|\ln k}\, ,
\]
again for a constant $C$ independent of $k$. Passing to polar coordinates, we can thus estimate
\begin{align*}
\|\Delta \psi_k\|^2_{L^2 (\mathbb R^2)} \leq & \frac{C}{(\ln k)^2} \int_{k/2}^{2k^2} \frac{1}{r} \, dr =
\frac{C}{(\ln k)^2} \left(\ln (2k^2) - \ln {\textstyle{\frac{k}{2}}}\right)
= \frac{C \ln (4k)}{(\ln k)^2}\, .
\end{align*}
Since the right-hand side converges to $0$ as $k\to\infty$, $\curl v_k \to 0$ strongly in $L^2$, as claimed.
\end{remark}
\chapter{A more detailed spectral analysis}\label{a:better}
\section{From Remark \ref{r:better2}(i) to Remark \ref{r:better}(c)}
Let us assume the validity of Remark \ref{r:better2}(i) and prove Remark \ref{r:better}(c). Let $m_0\ge 2$ be the integer such that
\begin{align}\label{z1}
&{\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\} \neq \emptyset \, ,
\\
&{\rm spec}\, (L_{\text{st}}, U_{m}) \cap \{{\rm Re}\, z > 0\} = \emptyset \,
\quad
\text{for any $m>m_0$} \, .
\end{align}
We show that Remark \ref{r:better}(c) holds with $m = m_0$.
For any $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ we denote by $V_z:= P_z(L^2_{m_0})$ the image of the Riesz projector
\begin{equation*}
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1} dw \, ,
\end{equation*}
where $\gamma$ parameterizes the boundary of a ball containing $z$ and no other eigenvalues of $L_{\text{st}}$.
It is enough to show that $P_z(U_{km_0}) = \{0\}$ for any $k\in \mathbb{Z}\setminus\{-1, 1\}$, $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ since it gives
$$V_z = P_z(U_{m_0}\oplus U_{-m_0}) \subset U_{m_0} \oplus U_{-m_0}\, ,$$
where the second inclusion follows from the fact that $U_m$ is always an invariant space of $L_{\text st}$.
If $k>1$, from \eqref{z1} we know that $z\notin {\rm spec}\, (L_{\text{st}}, U_{km_0})$, hence $P_z(U_{km_0})$ is trivial. If $k<-1$,
we reduce to the previous situation by observing that $P_z(U_{km_0}) = \overline{ P_{\bar z}(U_{-km_0})}$.
\section{Proof of Remark \ref{r:better2}(i)} In order to show this point, given Lemma \ref{l:almost-final-2}, we just need to prove the following statement.
\begin{lemma}\label{l:almost-final-1}
For every fixed $\Xi\in \mathscr{C}$ there is $M_0>0$ such that $\mathscr{U}_m$ is empty for every $m\geq M_0$.
\end{lemma}
Indeed, given the conclusion above we infer that $\mathscr{U}_m$ is empty for every $m\geq m_a$ and it thus suffices to select $m_0$ as the largest integer strictly smaller than $m_a$.
Before coming to the proof of Lemma \ref{l:almost-final-1} we state an auxiliary fact which will be used in the argument and which can be readily inferred from the computations in Step 1 of the proof of Lemma \ref{l:will-apply-Rouche}.
\begin{lemma}\label{l:operator-B_z}
For every $0 < \sigma < \tau < 1$ there is a constant $C$ (depending only upon $\sigma$ and $\tau$) such that $B_z := \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ is a bounded operator from $C^\sigma$ to $C^\tau$ for every $z$ with ${\rm Im}\, z>0$ and
\[
\|B_z\|_{\mathcal{L} (C^\sigma, C^\tau)} \leq C\, .
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:almost-final-1}] The proof will be by contradiction and thus, assuming that the statement is false, we can select:
\begin{itemize}
\item[(i)] a sequence $\{m_j\}\subset [1, \infty[$ with $m_j\to \infty$;
\item[(ii)] a sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$;
\item[(iii)] and a sequence $\{\psi_j\}\subset L^2 (\mathbb R)\setminus\{0\}$ solving the equation
\begin{equation}\label{e:eigenvalue-equation-20}
-\frac{d^2 \psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $\{z_j\}$ is bounded and that every cluster point must be an element of $[0, \Xi (-\infty)]$. Otherwise, for a subsequence (not relabeled), we would get the estimate
\[
\sup \left\|\frac{A}{\Xi-z_j}\right\|_{L^\infty} =: C_0 < \infty\, .
\]
By scalar multiplying \eqref{e:eigenvalue-equation-20} by $\psi_j$ and taking the real part of the resulting equation we then conclude
\[
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) \leq C_0 \int |\psi_j|^2\, ,
\]
which is clearly not possible because $C_0 < m_j^2$ for $j$ sufficiently large (and $\psi_j$ is nontrivial).
Up to subsequences we can thus assume that $z_j$ converges to some $z_0 \in [0, \Xi (-\infty)]$.
\medskip
{\bf Step 2.} We next analyze the cases $z_0 =0$ and $z_0 = \Xi (-\infty)$. The argument is similar to that used in Section \ref{s:3+4} in case (C). Let us argue first for $z_0=0$. We observe that $\Xi^{-1} |A|$ belongs to $L^1 (]-\infty, N])$ for any fixed $N$ and that, likewise, the functions $|\Xi-z_j|^{-1} |A|$ have a uniform $L^1$ bound on $]-\infty, N]$. We can then
use Lemma \ref{l:ODE2} to normalize $\psi_j$ so that it is asymptotic to $e^{m_j t}$ and also to write
\[
\psi_j (t) = e^{m_j t} (1+\theta_j (t))
\]
with
\[
|\theta_j (t)| \leq \exp \left(\frac{1}{m_j} \int_{-\infty}^N \frac{|A|}{|\Xi-z_j|}\right) -1 \qquad
\mbox{for all $t\leq N$.}
\]
In particular, we have $|\theta_j (t)|\leq \frac{C(N)}{m_j}$ on $]-\infty, N]$. We next scalar multiply \eqref{e:eigenvalue-equation-20} by $\psi_j$ and take the imaginary part to conclude
\[
- \left(\int_{-\infty}^a + \int_b^\infty\right) \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\leq
\int_a^b \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\, .
\]
In particular, since $\frac{A}{|\Xi-z_j|^2}$ is bounded from above by a constant $C$ independent of $j$ on $[a,b]$ and $-\frac{A}{|\Xi-z_j|^2}$ is bounded from below by a constant $c>0$ independent of $j$ on $[b+1, b+2]$, we conclude
\[
\int_{b+1}^{b+2} |\psi_j|^2 \leq \frac{C}{c} \int_a^b |\psi_j|^2\, .
\]
We next choose $N$ larger than $b+2$ and use the estimate $|\theta_j (t)|\leq \frac{C(N)}{m_j}$ to argue that, for $j$ large enough, we have $\frac{1}{2} e^{m_j t} \leq |\psi_j (t)| \leq 2 e^{m_j t}$ on $]-\infty, N]$. In particular, we infer
\[
\int_{b+1}^{b+2} e^{2m_j t} \leq C \int_a^b e^{2m_j t}
\]
provided the constant $C$ is chosen large enough (but independent of $j$) and $j$ is large enough. The latter inequality is certainly impossible for $m_j$ large enough, leading to a contradiction.
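To spell out why the last inequality fails: its left-hand side is at least $e^{2m_j (b+1)}$, while its right-hand side is at most $C (b-a)\, e^{2 m_j b}$, so it would force $e^{2 m_j}\le C (b-a)$, which is false once $m_j$ is large enough.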
The argument to exclude $z_0 = \Xi (-\infty)$ is entirely analogous, this time normalizing for $t\to \infty$ and reaching an inequality of type
\[
\int_{a-2}^{a-1} e^{-2m_j t} \leq C \int_a^b e^{-2m_j t}
\]
for a constant $C$ independent of $j$ and any $j$ large enough.
\medskip
{\bf Step 3.} We next examine the last case, that is, $z_0 = \Xi (c)$ for some $c\in \mathbb R$. This time we fix a $\sigma \in \, ]0,1[$ and normalize $\psi_j$ so that $\|\psi_j\|_{C^\sigma}=1$. We observe that
\[
\psi_j = - \mathcal{K}_{m_j} \left(\frac{A}{\Xi-z_j} \psi_j\right)\, ,
\]
and also recall that $\mathcal{K}_{m_j} (\varphi) = \frac{1}{2m_j} e^{-m_j |\cdot|} * \varphi$.
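This classical formula for the kernel of $\mathcal{K}_{m}$ can be checked on the Fourier side: for $m>0$,
\[
\int_{\mathbb R} e^{-m|t|}\, e^{-i\xi t}\, \mathrm dt = 2\int_0^\infty e^{-mt}\cos (\xi t)\, \mathrm dt = \frac{2m}{m^2+\xi^2}\, ,
\]
so the Fourier multiplier of $\varphi\mapsto \frac{1}{2m} e^{-m|\cdot|}*\varphi$ is $(\xi^2+m^2)^{-1}$, i.e. the inverse of the symbol of $-\frac{d^2}{dt^2}+m^2$.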
We set $m_0:=m_a$ and write further
\[
\psi_j = - \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} +m_0^2\right) \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j\right)\right)\, .
\]
Recalling Lemma \ref{l:operator-B_z}, we can fix a $\tau \in ]\sigma,1[$ to achieve
\[
\left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j\right)\right\|_{C^\tau}\leq C
\]
for some constant $C$ independent of $j$.
We will show in the final step that
\begin{itemize}
\item[(Cl)] $\|\mathcal{K}_{m_j} \circ (-\frac{d^2}{dt^2} + m_0^2)\|_{\mathcal{L} (C^\tau, C^\tau)} \leq C$ for some constant $C$ independent of $j$.
\end{itemize}
In particular, we achieve
\begin{equation}\label{e:estimate-C-tau}
\|\psi_j\|_{C^\tau} \leq C\, .
\end{equation}
We now wish to show that indeed $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$ for $j$ large enough, which obviously would be a contradiction. In order to achieve the latter estimate we use a Littlewood-Paley decomposition. We fix a cut-off function $\chi$ which is supported in $]\frac{1}{2}, 2[$, define $\chi_\ell (t) :=\chi (2^{-\ell} t)$ for $\ell\in \mathbb N$ and assume that $\chi$ has been chosen so that
\[
\sum_{\ell \in \mathbb N} \chi_\ell \equiv 1 \qquad \mbox{on $[1, \infty[$}.
\]
We then define
\[
\chi_{-1} := 1 - \sum_{\ell \in \mathbb N} \chi_\ell\,
\]
and introduce the Littlewood-Paley operator $\Delta_\ell$ as $\Delta_\ell (\varphi) = \mathscr{F}^{-1} (\chi_\ell \mathscr{F} (\varphi))$, where $\mathscr{F}$ is the Fourier transform.
We finally recall that (see \cite[Section 1.4.2]{Grafakos}), if we define
\[
\|\varphi\|_{X^\sigma} := \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}\, ,
\]
then
\[
C (\sigma)^{-1} \|\varphi\|_{X^\sigma} \leq \|\varphi\|_{C^\sigma} \leq C (\sigma) \|\varphi\|_{X^\sigma}\, .
\]
We are now ready to perform our final estimate. We fix a large $N$, which will be chosen later, and for $\ell\geq N$ we write
\begin{align*}
\sum_{\ell \geq N} 2^{\sigma \ell} \|\Delta_\ell \psi_j\|_\infty
&\leq 2^{-N (\tau-\sigma)} \sum_{\ell\geq N} 2^{\tau \ell} \| \Delta_\ell \psi_j\|_\infty
\leq 2^{-N (\tau-\sigma)} C (\tau) \|\psi_j\|_{C^\tau} \leq C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constant $C$ is independent of both $N$ and $j$. Next, for any $\ell$ we observe that
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \underbrace{\left(\Delta_\ell \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right)\right)}_{=: \Gamma_{\ell,j}}\, .
\]
Now
\[
\|\Gamma_{\ell,j}\|_{L^\infty} \leq C 2^{-\ell\sigma} \left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right\|_{C^\sigma} \leq C 2^{-\ell \sigma}\, .
\]
On the other hand, because of the frequency localization, we have
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1}) (\Gamma_{\ell,j})
\]
and the estimate
\[
\left\|\mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ \Delta_\ell\right\|_{\mathcal{L} (L^\infty, L^\infty)} \leq \frac{C}{m_j^2} \left(2^{2\ell} + m_0^2\right)\, .
\]
We can therefore write the estimate
\begin{align*}
\|\psi_j\|_{C^\sigma} & \leq \frac{C}{m_j^2} \sum_{\ell=-1}^{N} (2^{(2+2\sigma) \ell} + m_0^2)
+ C 2^{-N (\tau-\sigma)} \leq \frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) + C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constants $C$ are independent of $N$ and $j$. In particular, we fix first $N$ large enough to get $C 2^{-N (\tau-\sigma)} \leq \frac{1}{4}$; then, since $m_j\to\infty$, for $j$ large enough we have
\[
\frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) \leq \frac{1}{4}\, .
\]
These two estimates imply $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$, contradicting the normalization $\|\psi_j\|_{C^\sigma} = 1$.
\medskip
{\bf Step 4.} To complete the proof of the Lemma we need to show (Cl). We first write
\[
T_{m, \ell} := \Delta_\ell \circ \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right)\, .
\]
The operator $T_{m, \ell}$ is the convolution with a kernel $K_{m, \ell}$ whose Fourier symbol is given by
$\chi_\ell (\xi) \frac{|\xi|^2 + m_0^2}{|\xi|^2 +m^2}$. Hence, for $\ell \geq 0$ we have
\[
K_{m, \ell} (t) = \frac{1}{2\pi} \int \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2} e^{i \xi t}\, d\xi
\]
and
\[
(-it)^k K_{m, \ell} (t) = \frac{1}{2\pi} \int \frac{d^k}{d\xi^k} \left( \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2}\right) e^{it\xi}\, d\xi\, .
\]
In particular, we easily conclude
\[
\| |t|^k K_{m, \ell}\|_{L^\infty} \leq C (k) 2^{\ell (1-k)}\, ,
\]
for a constant $C (k)$ independent of both $m\geq 1$ and $\ell$, but which depends on $k$. From the latter we can estimate
\begin{align*}
\|K_{m, \ell}\|_{L^1} &\leq \int_{|s|\leq 2^{-\ell}} |K_{m, \ell} (s)|\, ds +
\int_{|s|\geq 2^{-\ell}} \frac{|s^2 K_{m, \ell} (s)|}{|s|^2}\, ds\\
&\leq C + C 2^{-\ell} \int_{2^{-\ell}}^\infty \frac{1}{s^2}\, ds \leq C\, .
\end{align*}
For $\ell = -1$ we likewise conclude
\[
\||t|^k K_{m, -1}\|_{L^\infty} \leq C (k)
\]
for a constant $C(k)$ independent of $m$, but depending on $k$. Once again using the cases $k=0$ and $k=2$ of the latter inequality we obtain
\[
\|K_{m, -1}\|_{L^1} \leq C \, .
\]
We have thus bounded all $\|K_{m, \ell}\|_{L^1}$ with a universal constant $C$ independent of both $m\geq 1$ and $\ell\in \mathbb N \cup \{-1\}$. In particular, since $\|T_{m, \ell}\|_{\mathcal{L} (L^\infty, L^\infty)} = \|K_{m, \ell}\|_{L^1}$ and
\[
\mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) = \sum_{\ell \geq -1} T_{m, \ell}
= \sum_{\ell \ge -1} T_{m, \ell} \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1})\, ,
\]
we can estimate
\begin{align*}
\left\| \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) (\varphi)\right\|_{C^\sigma}
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\varphi)\|_{L^\infty}
= C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\Delta_\ell \varphi)\|_{L^\infty}\\
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}
\leq C (\sigma) \|\varphi\|_{C^\sigma}\, .
\end{align*}
This completes the proof of (Cl) and hence of the entire Lemma.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral-stronger-2}}
In \cite{Vishik2}, Vishik claims the following improved version of Theorem \ref{thm:spectral5}, which would immediately imply Theorem \ref{thm:spectral-stronger-2}.
\begin{theorem}\label{thm:Vishikversion}
There are a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m} = \emptyset$ for any integer $m>m_0$ and $\mathscr{U}_{m_0}$ consists of a single $z$. Moreover, the algebraic multiplicity of $m_0 z$ as an eigenvalue of $\mathcal{L}_{m_0}$ is $1$.
\end{theorem}
Vishik's suggested proof of Theorem \ref{thm:Vishikversion} builds upon Proposition \ref{p:3+4} and the following improved versions of Proposition \ref{p:5-7} and Proposition \ref{p:almost-final}.
\begin{proposition}\label{prop:Vishicimproved1}\label{PROP:VISHICIMPROVED1}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Then there
exist $\varepsilon >0$ and $\delta>0$ with the following property.
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $(m_a-h) z_{m_a-h}$ is an eigenvalue of $\mathcal{L}_{m_a-h}$ with algebraic multiplicity $1$.
\end{proposition}
In \cite{Vishik2} Vishik only gives the argument that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ contains a single element $z_{m_a-h}$ and the corresponding eigenspace of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ has dimension $1$ (i.e. its {\em geometric} multiplicity is $1$, cf. Remark \ref{r:b-also-2}). However it is essential to have the {\em algebraic} multiplicity equal to $1$ in order to complete his suggested argument. After we pointed out to him the gap in his paper, he suggested in \cite{Vishik3} the proof of Proposition \ref{prop:Vishicimproved1} reported below. Before coming to it, we point out that
a spectral perturbation argument as in the proof of Lemma \ref{l:almost-final-2} (which we outline anyway below)
easily implies the following.
\begin{proposition}\label{prop:Vishicimproved2}
Assume $- \lambda_a<-1$ and let $m_a = \sqrt{\lambda_a}$ and $m_b:= \max \{1, \sqrt{\lambda_b}\}$. Then $\mathscr{U}_m$ consists of a single element $z_m$ for every $m\in ]m_b, m_a[$ and moreover the algebraic multiplicity of $z_m$ as an eigenvalue of $m^{-1} \mathcal{L}_m$ is $1$.
\end{proposition}
Taking the previous proposition for granted, we just need a choice of $\Xi$ for which $\lambda_a >1$ and $]m_b, m_a[$ contains an integer, which is guaranteed by Lemma \ref{L:BOTTOM}, and we conclude Theorem \ref{thm:Vishikversion}.
\medskip
We now explain how to prove Proposition \ref{prop:Vishicimproved2}.
From Proposition \ref{p:almost-final} and Lemma \ref{l:almost-final-1} we know that $\mathscr{U}_m \neq \emptyset$, for every $m\in ]m_b, m_a[$, and $\mathscr{U}_m = \emptyset$ for $m \ge m_a$. Moreover, Remark \ref{rmk:algebraic dim const} implies that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant for $m\in ]m_b, m_a[$. Hence, to conclude we just need to prove that the latter is $1$ for some $m\in ]m_b, m_a[$.
To that aim we show that for any $\varepsilon>0$ there exists $\delta>0$ such that $\mathscr{U}_{m_a-h} = \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))$ for any $h\in ]0,\delta[$. This is enough for our purposes since, together with Proposition \ref{prop:Vishicimproved1}, it gives $\mathscr{U}_{m_a-h}= \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))=\{ z_{m_a-h}\}$ where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a - h}$ with algebraic multiplicity $1$.
Assume for contradiction the existence of a sequence $(m_j)_{j\in\mathbb N}$ in $]m_b,m_a[$ converging to $m_a$ such that there are $z_j\in \mathscr{U}_{m_j}$ with $|z_j - \Xi(a)|>\varepsilon$ for some $\varepsilon>0$. Up to extracting a subsequence, we may assume $z_j \to z$ for some $z\in \mathbb C$ with $|z-\Xi(a)|\ge \varepsilon$. Proposition \ref{p:3+4} implies that the imaginary part of $z$ is positive. Arguing as in the first step of the proof of Proposition \ref{p:3+4} we can prove that $z\in \mathscr{U}_{m_a}$ and reach a contradiction.
\section{Proof of Proposition \ref{prop:Vishicimproved1}}
The proof of Proposition \ref{prop:Vishicimproved1} can be reduced to the following weaker version using Remark \ref{rmk:algebraic dim const} and the argument outlined in the previous paragraph.
\begin{proposition}\label{prop:Vishikimproved-weaker}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Let $h$ and $\varepsilon$ be sufficiently small so that Proposition \ref{p:3+4} and Remark \ref{r:b-also-2} apply, namely
$\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ with {\em geometric} multiplicity $1$. Then, if $h$ is chosen possibly smaller, the algebraic multiplicity of $z_{m_a-h}$ is also $1$.
\end{proposition}
We now come to the proof of the latter, which is the heart of the matter. First of all, we introduce a suitable transformation of the space $\mathcal{H}$ (which we recall is the domain of the operator $\mathcal{L}_m$, defined in \eqref{e:def-H}). We introduce the Hilbert space
\[
\mathcal{H}^e :=\left\{ f: \mathbb R \to \mathbb C\, :\, \int |f (t)|^2 e^{-2t}\, dt < \infty\right\}
\]
and the isometry $T: \mathcal{H} \to \mathcal{H}^e$ given by
\[
\gamma (r) \mapsto e^{2t} \gamma (e^t)\, .
\]
Rather than considering the operator $\mathcal{L}_m$ on $\mathcal{H}$, it turns out to be more convenient to consider the operator $T \circ \mathcal{L}_m \circ T^{-1}$ on $\mathcal{H}^e$. Since the spectra of the two operators coincide, with a slight abuse of notation we will keep writing $\mathcal{L}_m$ in place of $T \circ \mathcal{L}_m \circ T^{-1}$, and we will keep $\mathscr{U}_m$ to denote the point spectrum of ${m^{-1}}T \circ \mathcal{L}_m \circ T^{-1}$ in the upper half plane.
Simple computations show that the operator $\mathcal{L}_m$ is given, on $\mathcal{H}^e$, by
\[
\mathcal{L}_m (\alpha) = m \Xi \alpha - m A \varphi
\]
where $\varphi$ is the unique $L^2$ solution of
\[
\varphi'' - m^2 \varphi = \alpha\,
\]
(note that we are claiming $\varphi\in L^2$ rather than $\varphi \in \mathcal{H}^e$, cf. Section \ref{s:eigenvalue-equation}).
We can now come to the main idea behind the simplicity of $z_{m_a-h}$, which is borrowed from \cite{Vishik3}. A prominent role is played by the adjoint of $\mathcal{L}_m$ (considered as a bounded linear operator from $\mathcal{H}^e$ into itself): for the latter we will use the notation $\mathcal{L}_m^\star$.
\begin{lemma}\label{l:aggiunto}
Assume that $h$ and $\varepsilon$ are small enough so that $\{z_{m_a-h}\} =\mathscr{U}_{m_a-h}\cap B_\varepsilon (\Xi (a))$ and $z_{m_a-h}$ has geometric multiplicity $1$ in ${\rm spec}\, ((m_a-h)^{-1} \mathcal{L}_{m_a-h}, \mathcal{H}^e)$. Let $\alpha_h \in \mathcal{H}^e\setminus \{0\}$ be such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}(\alpha_h) - z_{m_a - h}\alpha_h=0$. If $h$ is small enough, then there is $\beta_h\in \mathcal{H}^e$ such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}^\star(\beta_h) - \bar z_{m_a-h}\beta_h =0$ and
\begin{equation}\label{e:dual-pairing}
\langle \alpha_h, \beta_h \rangle_{\mathcal{H}^e} = \int \alpha_h (t) \bar \beta_h (t)\, e^{-2t}dt \neq 0\, .
\end{equation}
\end{lemma}
Let us show how the latter implies Proposition \ref{prop:Vishikimproved-weaker}. Assume $z_{m_a-h}$ were an element of ${\rm spec}\, ((m_a-h)^{-1}\mathcal{L}_{m_a-h}, \mathcal{H}^e)\cap B_\varepsilon (\Xi (a))$ with geometric multiplicity $1$ and algebraic multiplicity larger than $1$: our goal is to show that $h$ cannot be too small.
The properties just listed mean that the following bounded operator on $\mathcal{H}^e$,
\[
L_h := (m_a-h)^{-1}\mathcal{L}_{m_a-h} - z_{m_a-h} \, ,
\]
has a 1-dimensional kernel, $0$ is in its point spectrum, and $0$ has algebraic multiplicity strictly larger than $1$. These properties imply that any element $\alpha_h$ in the kernel of $L_h$ (i.e. any eigenfunction of $(m_a-h)^{-1}\mathcal{L}_{m_a-h}$ with eigenvalue $z_{m_a-h}$) is in the image of $L_h$. Fix one such element $\alpha_h$ and let $\eta_h$ be such that $L_h (\eta_h) = \alpha_h$. If $h$ is small enough, we can fix $\beta_h$ as in Lemma \ref{l:aggiunto}, and observe that it is in the kernel of the adjoint operator $L^\star_h$. We then must have
\[
0 \neq \int \alpha_h \bar\beta_h \, e^{-2t}dt= \int L_h (\eta_h) \bar\beta_h \, e^{-2t}dt
= \int \eta_h \overline{L_h^\star (\beta_h)} \, e^{-2t}dt = 0\, ,
\]
which is not possible.
\begin{proof}[Proof of Lemma \ref{l:aggiunto}]
We begin by proving the following claim:
\begin{itemize}
\item[(Cl)] For any $z\in \mathscr{U}_m$, with $m>1$, such that
$m^{-1}\mathcal{L}_m(\alpha_z) - z \alpha_z = 0$,
there exists $\beta_z\in \mathcal{H}^e$ such that
\begin{equation}
m^{-1}\mathcal{L}_m^\star(\beta_z) - \bar z \beta_z = 0 \, ,
\end{equation}
and
\begin{equation}\label{eq:keyfunction}
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\varphi_z(t)^2\, d t\, ,
\end{equation}
where $\varphi_z$ is the unique solution in $L^2(\mathbb{R})$ of $\varphi_z'' - m^2\varphi_z = \alpha_z$.
\end{itemize}
To that aim we first observe that the adjoint of $\mathcal{L}_m$ in $\mathcal{H}^e$ is given by
\begin{equation}
\mathcal{L}_m^\star (\alpha) = m( \Xi \alpha - e^{2t} \mathcal{K}_m(A\alpha e^{-2t})) \, ,
\end{equation}
where $\mathcal{K}_m$ is the inverse of $-\frac{d^2}{dt^2} + m^2$ as a closed unbounded self-adjoint operator in $L^2(\mathbb{R})$.
Notice that $\mathcal{L}_m^\star$ is well defined because $e^{-t}\alpha \in L^2(\mathbb{R})$ and $A\sim e^{2t}$ as $t\to -\infty$.
We now observe that, if $z\in \mathscr{U}_m$, $m^{-1}\mathcal{L}_m (\alpha_z) = z \alpha_z$, and $\beta_z$ is defined by
\[
\beta_z:= e^{2t}\frac{\bar \varphi_z}{\Xi - \bar z}
= e^{2t}\frac{\mathcal{K}_m(\bar \alpha_z)}{\Xi - \bar z}\, ,
\]
then
\begin{equation}\label{eq:adj}
m^{-1}\mathcal{L}_m^\star (\beta_z) = \bar z \beta_z \, .
\end{equation}
Notice that $\beta_z\in W^{2,2}_{\rm loc}\cap \mathcal{H}^e$ decays exponentially fast at $\infty$ thanks to the bound $| \varphi_z(t)|\le C e^{-m|t|}$, for every $t\in \mathbb{R}$, proven in Lemma \ref{c:decay}.
Let us now verify \eqref{eq:adj}:
We first observe that
\begin{equation}
\alpha_z = \frac{A\varphi_z}{\Xi - z} \, ,
\end{equation}
hence
\begin{align*}
m^{-1}\mathcal{L}_m^\star (\beta_z) & = \Xi \beta_z - e^{2t}\mathcal{K}_m \left(\frac{A \bar \varphi_z}{\Xi - \bar z}\right) =
\Xi \beta_z - e^{2t}\mathcal{K}_m(\bar \alpha_z)
\\& = \Xi \beta_z -(\Xi - \bar z) \beta_z = \bar z \beta_z \, .
\end{align*}
It is now immediate to conclude \eqref{eq:keyfunction}.
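Indeed, unwinding the definitions of $\alpha_z$ and $\beta_z$ (and using that $\Xi$ is real-valued),
\[
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
= \int_{\mathbb R} \alpha_z\, \bar\beta_z\, e^{-2t}\, dt
= \int_{\mathbb R} \frac{A\varphi_z}{\Xi - z}\cdot \frac{\varphi_z}{\Xi - z}\, dt
= \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\,\varphi_z(t)^2\, d t\, .
\]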
\medskip
In order to simplify our notation we use $\mathcal{L}_h$, $z_h$ and $m_h$ in place of $\mathcal{L}_{m_a-h}$, $z_{m_a-h}$ and $m_a-h$.
Given $\alpha_h\in \mathcal{H}^e$ as in the statement of the Lemma we denote by $\varphi_h$ the unique $L^2$ solution of $\varphi_h'' - m_h^2 \varphi_h = \alpha_h$. We now can apply (Cl) above to find $\beta_h\in \mathcal{H}^e$ which solves $m_h^{-1}\mathcal{L}_h^\star (\beta_h) = \bar z_h \beta_h$ and such that
\begin{equation}\label{eq:keyfunction1}
\left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t\, .
\end{equation}
To conclude the proof it suffices to show that, after appropriately normalizing the functions $\alpha_h$ (i.e. after multiplying them by an appropriate constant factor, which might depend on $h$) we have
\begin{equation}\label{e:limit-nonzero}
\lim_{h\to 0} \left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
= c \neq 0 \, .
\end{equation}
Note that for the latter conclusion, which we will prove in the next two steps, we will use the assumption that $\alpha_h\neq 0$.
\bigskip
{\bf Step 1:} We show that, up to multiplication of $\alpha_h$ by a suitable constant factor (which might vary with $h$), $\varphi_h \to \varphi$ in $W^{1,2}$ and in $C^{1,\alpha}$, as $h\to 0$, where $\varphi \in W^{2,\infty}$ is a nontrivial solution to
\begin{equation}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
By Remark \ref{r:phi(a)-nonzero}, the nontriviality of $\varphi$ implies $\varphi (a) \neq 0$, and hence, up to multiplication by another constant factor, we will assume, without loss of generality, that $\varphi (a)=1$.
Recall that $\varphi_h$ solves the equation
\begin{equation}\label{e:ODE-again-100}
- \frac{d^2 \varphi_h}{dt^2} + m_h^2 \varphi_h + \frac{A}{\Xi - z_h} \varphi_h
= 0 \, .
\end{equation}
For the moment, let us normalize the functions so that
\begin{equation}\label{e:normalization-2}
\int (|\varphi_h'|^2 + m_h^2 |\varphi_h|^2) = 1\, ,
\end{equation}
as in \eqref{e:L2-normalization}. We then can argue as for the bounds \eqref{e:exp-bound-1} and \eqref{e:exp-bound-2} to derive the existence of constants $C$ and $\beta>2$ (independent of $h$) such that
\begin{equation}\label{e:exp-bound-3}
\left|\varphi_h (t)\right| \leq C e^{- \beta |t|} \quad \forall t\, .
\end{equation}
Recalling Section \ref{s:5-7-part-II}, we know that $z_h = \Xi (a) + c(a) h + o (h)$, where $c(a)$ is a complex number with positive imaginary part, which we denote by $d(a)$. Using the monotonicity of $\Xi$ we can write $z_h = \Xi (t (h)) + i (d(a) h + o (h))$ for some $t(h)$ which satisfies the bound $|t (h) -a|\leq C h$ for some positive constant $C$. In particular, using the mean value theorem and the fact that the derivative of $\Xi$ does not vanish on $[a-1, a+1]$, we get
\[
|\Xi (t) - z_h|^2\geq C^{-1} (|t-t(h)|^2 + |h|^2)\qquad \forall t\in [a-1,a+1]\, ,
\]
where $C$ is some positive constant. Next, using that $|t(h)-a|\leq C h$, we conclude the estimate
\[
|\Xi (t) - z_h|\geq C^{-1} |t-a| \qquad \forall t\in [a-1, a+1]\, ,
\]
with a constant $C$ independent of $h$. Since $a$ is a zero of $A$, we finally conclude that the functions
\[
\frac{A(t)}{\Xi (t) - z_h}
\]
are in fact uniformly bounded, independently of $h$. Using the latter estimate and \eqref{e:exp-bound-3} we thus infer that
\begin{equation}\label{e:exp-bound-4}
|\varphi_h'' (t)|\leq C e^{-\beta |t|}\, .
\end{equation}
In particular, upon extraction of a subsequence, we can assume that the $\varphi_h$ converge to a function $\varphi$ strongly in $W^{1,2}$, weakly in $W^{2,\infty}$, and hence strongly in $C^{1, \alpha}$ for every $\alpha<1$. In particular, because of the normalization \eqref{e:normalization-2}, $\varphi$ is a nontrivial $W^{2, \infty}$ function, satisfying the same exponential decay as in \eqref{e:exp-bound-3} and \eqref{e:exp-bound-4}. Moreover, given the bound on the functions $\frac{A(t)}{\Xi (t) -z_h}$, $\varphi$ is in fact a solution of
\begin{equation}\label{e:ODE-again-101}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
Recalling Remark \ref{r:phi(a)-nonzero}, $\varphi (a)\neq 0$ and $\varphi$ is unique up to a constant factor. In particular, we must have
\begin{equation}\label{e:comoda}
\liminf_{h\downarrow 0} |\varphi_h (a)|>0,
\end{equation}
otherwise for a suitable subsequence we would have convergence to a nontrivial solution $\varphi$ for which $\varphi (a)=0$. Because of \eqref{e:comoda} we can use the different normalization $\varphi_h (a) = 1$, which in turn implies that $\varphi_h$ converges (without extracting subsequences) to the unique $W^{2,2}$ solution $\varphi$ of \eqref{e:ODE-again-101} which satisfies $\varphi (a)=1$.
\medskip
{\bf Step 2:} We prove that
\begin{equation}
\lim_{h\to 0} \, {\rm Im} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t
=
\frac{2A'(a)\varphi(a)^2}{d(a)\Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Recalling that $z_h = \Xi(t(h)) + i (d(a)h + o(h))$, we write
\begin{align*}
{\rm Im}\, \left[\frac{A}{(\Xi-z_h)^2} \varphi_h^2\right]
& = {\rm Im}\, \left[\frac{A((\Xi-\Xi(t(h)))+ i(d(a)h + o(h)))^2}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h + i {\rm Im}\, \varphi_h)^2\right]
\\&
= \frac{2(d(a)h + o(h)) A(\Xi - \Xi(t(h)))}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2)
\\ & \qquad +
\frac{2A }{(\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & \qquad -\frac{ 4(d(a)h + o(h))^2 A}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & =: I_h + II_h + III_h \, .
\end{align*}
To ease notation we set
\begin{equation}
f_h := {\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2 \, ,
\quad
g_h := {\rm Re}\, \varphi_h\, {\rm Im}\varphi_h \, ,
\end{equation}
and observe that $f_h \to \varphi^2$, $g_h \to 0$ as $h\to 0$, where the convergence is, in both cases, in the strong topologies of $L^2$ and of $C^\alpha$, for every $\alpha <1$.
We will show below that:
\begin{align}
\lim_{h\to 0} \int I_h &= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds=: L (a)\label{e:limit-I}\, ,\\
\lim_{h\to 0} \int II_h &= 0\label{e:limit-II}\, ,\\
\lim_{h\to 0} \int III_h &=0\label{e:limit-III}\, .
\end{align}
Considering that none of the numbers $\Xi' (a)$, $A'(a)$, $\varphi (a)$, and $d(a)$ vanish, $L(a)\neq 0$. This implies \eqref{e:limit-nonzero} and concludes the proof. We next study separately the three limits above.
\medskip
{\bf Proof of \eqref{e:limit-I}.}
There exist $\delta>0$ and $r>0$ such that, for any $h$ sufficiently small, one has $|\Xi(t) - \Xi(t(h))|>\delta$ for all $t\in \mathbb{R}\setminus (a-r/2,a+r/2)$. This implies that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}\setminus (t(h) - r, t(h) + r)} I_h = 0 \, ,
\end{equation}
hence, we are left with
\begin{equation}
\lim_{h \to 0} 2h d(a)\int_{t(h) - r}^{t(h) + r} \frac{A(t)(\Xi(t)- \Xi(t(h)))}{((\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2)^2} f_h(t)
\, d t \, .
\end{equation}
We change variables according to $t= t(h) + sh$:
\begin{align*}
2\int_{-\frac{r}{h}}^{\frac{r}{h}} s & \left(\frac{A(t(h) + sh)}{h} \frac{\Xi(t(h) + sh )- \Xi(t(h))}{sh}\right) \times
\\&
\left( s^2\left(\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh}\right)^2 + (d(a) + o(h)/h)^2 \right)^{-2}
f_h(t(h) + sh)\, d s
=: C(h) \, .
\end{align*}
Notice that, for any $s\in \mathbb{R}$, we have
\begin{equation}
\lim_{h \to 0}\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} = \Xi'(a) \, .
\end{equation}
Moreover the monotonicity of $\Xi$ implies that
\begin{equation}
1/C \le \left| \frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} \right| \le C
\quad \text{for any $s\in (-r/h, r/h)$} \, .
\end{equation}
Notice that
\begin{equation}
\frac{A(t(h) + sh)}{h}
=
\frac{A(t(h) + sh)-A(t(h))}{h} + \frac{A(t(h)) - A(a)}{h} \, ,
\end{equation}
hence, up to extracting a subsequence $h_i \to 0$, we have
\begin{equation}
\frac{A(t(h_i) + sh_i)}{h_i} \to A'(a)s + x
\end{equation}
for some $x\in \mathbb{R}$ (recall that $|t(h)-a|\leq C h$ and note that $x$ might depend on the subsequence).
Collecting all the estimates above, and using the dominated convergence theorem we deduce that, along the subsequence $h_i$,
\begin{equation}
\lim_{i\to\infty} C(h_i)= 2\int_{\mathbb{R}} \frac{s(A'(a)s + x)\Xi'(a)}{(s^2 \Xi'(a)^2 + d(a)^2)^2} \varphi(a)^2\, ds
=
\frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Observe that the limit does not depend on $x$ and hence does not depend on the chosen subsequence.
\medskip
{\bf Proof of \eqref{e:limit-III}.} Arguing as we did for $I_h$ and using that $g_h\to 0$ in $C^{1/2}$, as $h\to 0$, we easily deduce that
\begin{equation}
\lim_{h\to 0} \int III_h = 0 \, .
\end{equation}
\medskip
{\bf Proof of \eqref{e:limit-II}.}
We need to show that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} g_h(t)\, dt = 0 \, .
\end{equation}
Observe that $G_h := g_h/|\varphi_h|^2 = |\varphi_h|^{-2} {\rm Re}\, \varphi_h {\rm Im}\, \varphi_h $, and in particular, $|G_h|\leq \frac{1}{2}$. Moreover there exists $r>0$ such that $G_h \to 0$ in $(a-r, a+r)$ in the $C^\alpha$ topology for every $\alpha<1$ (here, we are using that $|\varphi_h(a)|^2 \to |\varphi(a)|^2\neq 0$ as $h\to 0$).
We write
\begin{align}
\int_\mathbb{R} & \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} G_h(t)\, dt
\\& =
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h))) dt \, ,
\end{align}
where we took advantage of the identity
\begin{equation}
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2}\, d t = 0 \, ,
\end{equation}
proven in \eqref{e:imaginary-trick}.
Arguing as we did for $I_h$ we can reduce the problem to show
\begin{align}
\lim_{h \to 0} & \int_{t(h) - r}^{t(h) + r} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt = 0\, .
\end{align}
We split the integral to the sum of
\begin{align*}
J_1 (h) &:= \int_{t(h)-r}^{t(h)+r} \frac{(A(t)- A(t (h)))|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\\
J_2 (h) &:= A(t (h)) \int_{t(h)-r}^{t(h)+r} \frac{|\varphi_h(t)|^2}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\, .
\end{align*}
Next observe that, in the interval that interests us, the following inequalities hold provided $r$ and $h$ are sufficiently small:
\begin{align*}
|A (t) - A(t(h))|&\leq C |t- t(h)|\\
|A(t(h))| &= |A (t (h))-A(a)|\leq C|t(h)-a|\leq C h\\
|G_h (t) - G_h (t (h))|&\leq \|G_h\|_{C^{1/2} (a-r, a+r)} |t-t(h)|^{1/2}\\
|\Xi (t) - \Xi (t(h))| &\geq C^{-1} |t-t(h)|\\
(d(a)h + o (h))^2 &\geq C^{-1} h^2\, .
\end{align*}
Since $\|\varphi_h\|_{L^\infty} \leq C$, we can change variable in the integrals to $\sigma =t-t(h)$ and estimate them as follows:
\begin{align*}
|J_1 (h)|&\leq C \|G_h\|_{C^{1/2} (a-r,a+r)} \int_{-r}^{r} |\sigma|^{-1/2}\, d\sigma \leq C\|G_h\|_{C^{1/2}} r^{1/2}\, ,\\
|J_2 (h)| &\leq C h \int_{-\infty}^\infty \frac{|\sigma|^{1/2}}{\sigma^2 + C^{-1} h^2}\, d\sigma
= C h^{1/2} \int \frac{|\tau|^{1/2}}{\tau^2 + C^{-1}} d\tau \leq C h^{1/2}\, .
\end{align*}
Clearly $J_2 (h)\to 0$, while $J_1 (h) \to 0$ because $\|G_h\|_{C^{1/2} (a-r,a+r)} \to 0$.
\end{proof}
\chapter{Proofs of technical statements}
\section{Proof of Remark \ref{r:bounded}}
More generally, we will show here that, for any $q_0\in[1,2[$ and $q_1\in]2,\infty]$, it is true that \begin{equation*}\lVert K_2*\omega\rVert_{L^\infty(\ensuremath{\mathbb R}^2;\ensuremath{\mathbb R}^2)}\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)})\end{equation*} for all $\omega\in L^{q_0}\cap L^{q_1}$.
Indeed, passing to polar coordinates, one sees that $K_2\vert_{B_1}\in L^{q_1^*}(B_1; \ensuremath{\mathbb R}^2)$ and $K_2\vert_{\ensuremath{\mathbb R}^2\setminus B_1}\in L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1;\ensuremath{\mathbb R}^2)$, where $q_1^*, q_2^*$ are given by $\frac1{q_i}+\frac1{q_i^*}=1$ for $i\in\{1,2\}$. Hölder's inequality implies that for any $x\in\ensuremath{\mathbb R}^2$,
\begin{equation*}
\begin{split}
\abs{(K_2*\omega)(x)} &= \abs{((K_2 \mathbf 1_{B_1})*\omega)(x)+((K_2 (1-\mathbf 1_{B_1}))*\omega)(x)} \\
&\le\norm{K_2}_{L^{q_1^*}(B_1)}\norm{\omega}_{L^{q_1}(\ensuremath{\mathbb R}^2)} + \norm{K_2}_{L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1)}\norm{\omega}_{L^{q_0}(\ensuremath{\mathbb R}^2)} \\
&\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)}).
\end{split}
\end{equation*}
Since $x$ is arbitrary, this achieves a proof of the claim above.
\section{Proof of Theorem \ref{thm:Yudo}}
\textbf{Existence.} The existence argument is a classical density argument. Take any sequence $(\omega_0^{(n)})_{n\in\mathbb N}$ of functions in $L^1\cap C^\infty_c$ that converges strongly in $L^1$ to $\omega_0$. Analogously, pick a sequence of smooth functions $(f_n)_{n\in\mathbb N}$ in $C^\infty_c (\ensuremath{\mathbb R}^2\times[0, T])$ converging in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to $f$ and satisfying the bound $\|f_n (\cdot, t)\|_{L^\infty} \leq \|f (\cdot, t)\|_{L^\infty}$ for a.e. $t$. Then, let $\omega^{(n)}$ denote the solution of the corresponding Cauchy problem of the Euler equations in vorticity form. The existence of such solutions is a classical well-known fact, see for instance \cite[Theorem A]{McGrath}. Following Remark \ref{r:A-priori-estimates}, these solutions satisfy all the \`a priori estimates needed in Proposition \ref{p:convergence}. Therefore, following the proof of Proposition \ref{p:convergence}, one obtains, in the limit $n\to\infty$, a solution $\omega\in L^\infty([0,T]; L^1\cap L^\infty)$ of the given Cauchy problem. Furthermore, since the à priori estimates of Remark \ref{r:A-priori-estimates} are uniform in $n$, one gets $K_2*\omega\in L^\infty([0,T]; L^2)$.
\begin{remark}
The proof of Proposition \ref{p:convergence} has a fixed force $f$ but a straightforward adaptation of the arguments handles the case above, namely with a sequence of forces $(f_n)_{n\in\mathbb N}$ that converges in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to a given $f$. More precisely the only difference occurs in the term $I_4 (k)$ of \eqref{e:term-I4}, which anyway enjoys convergence to the same limit.
\end{remark}
\textbf{Uniqueness.} The uniqueness proof needs two important facts. The first is a well-known ODE inequality, whose short proof is given, for the reader's convenience, at the end of the section.
\begin{lemma}\label{l:ODE lemma}
Let $T>0$ and let $E:[0,T]\to[0,\infty[$ be a differentiable function satisfying
\begin{equation}\label{e:ODE inequality for E}
\dot E(t)\le p M E(t)^{1-1/p} \quad \text{ and }\quad E(0)=0
\end{equation}
for some fixed $M>0$. Then $E(t)\le (Mt)^p$ for all $t\in[0,T]$.
\end{lemma}
The second is the classical Calderón-Zygmund $L^p$ estimate, where we need the sharp $p$-dependence of the corresponding constant. This fact is also well known, cf. for instance \cite[Formula (8.45), page 322]{MajdaBertozzi}).
\begin{lemma}\label{l:Estimate on Lp norm of gradient of velocity}
For every $p_0>1$ there is a constant $c (p_0)$ with the following property.
If $v=K_2*\omega$ for some $\omega\in L^1 \cap L^p(\ensuremath{\mathbb R}^2)$ with $p\in [p_0, \infty[$, then $\norm{D v}_{L^p}\le p c \norm{\omega}_{L^p}$.
\end{lemma}
Now, let $v_1=K_2*\omega_1, v_2=K_2*\omega_2$ be two solutions of \eqref{e:Euler} satisfying the assumptions of Theorem \ref{thm:Yudo} and note that $w:=v_1-v_2$ solves
\begin{equation}\label{e:Gleichung fuer die Differenz}
\partial_t w +(v_1\cdot\nabla)w +(w\cdot\nabla)v_2=-\nabla(p_1-p_2)
\end{equation}
(where $p_1, p_2$ are the pressures corresponding to $v_1$ and $v_2$). Clearly
\[
E(t):=\int_{\ensuremath{\mathbb R}^2} |w(x,t)|^2\,\mathrm dx \leq 2 \int_{\ensuremath{\mathbb R}^2} |v_1 (x,t))^2|\,\mathrm dx + 2 \int_{\ensuremath{\mathbb R}^2} |v_2 (x,t)^2|\,\mathrm dx <\infty
\]
is a bounded function on $[0,T]$.
We scalar multiply \eqref{e:Gleichung fuer die Differenz} with $w$, integrate by parts and use the divergence free conditions of $v_1, v_2$, and $w$ to conclude and
\begin{align*}
\dot E(t) = & - 2\int_{\ensuremath{\mathbb R}^2}((w\cdot\nabla)v_2)w\,\mathrm dx
\le 2\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^2 \abs{D v_2(x,t)}\,\mathrm dx\\
\le & 2\norm{\nabla v_2(\cdot, t)}_{L^p}\norm{w(\cdot, t)}_{L^\infty}^{2/p}\norm{w(\cdot,t)}_{L^2}^{2-2/p}\,.
\end{align*}
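The last inequality is an application of H\"older's inequality with exponents $p$ and $p'=\frac{p}{p-1}$; for the reader's convenience we record the elementary computation behind it:
\begin{align*}
\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^2 \abs{D v_2(x,t)}\,\mathrm dx
&\le \norm{\nabla v_2(\cdot, t)}_{L^p}\left(\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^{2p'}\,\mathrm dx\right)^{1/p'}\\
&\le \norm{\nabla v_2(\cdot, t)}_{L^p}\norm{w(\cdot, t)}_{L^\infty}^{2-2/p'}\norm{w(\cdot,t)}_{L^2}^{2/p'}\, ,
\end{align*}
where $2-2/p' = 2/p$ and $2/p' = 2-2/p$.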
Using Remark \ref{r:bounded}, we also have
\begin{equation*}
\begin{split}
\sup_{t\in[0,T]} \norm{w(\cdot, t)}_{L^\infty} &\le \sup_{t\in[0,T]}(\norm{v_1(\cdot, t)}_{L^\infty}+\norm{v_2(\cdot, t)}_{L^\infty}) \\
&\le C \sup_{t\in[0, T]}(\norm{\omega_1(\cdot, t)}_{L^1}+\norm{\omega_1(\cdot, t)}_{L^\infty}+\norm{\omega_2(\cdot, t)}_{L^1}+\norm{\omega_2(\cdot, t)}_{L^\infty}) <\infty\, .
\end{split}
\end{equation*}
Next fix any $p\geq 2$. From Lemma \ref{l:Estimate on Lp norm of gradient of velocity} and the classical $L^p$ interpolation we conclude
\begin{equation*}
\norm{D v_2(\cdot, t)}_{L^p}\le p c \norm{\omega_2 (\cdot, t)}_{L^p}\le p c \norm{\omega_2 }_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2}_{L^\infty([0,T]; L^\infty)}^{1-1/p}.
\end{equation*}
Therefore, $\dot E(t)\le p M_p E(t)^{1-1/p}$
with
\begin{align*}
M_p &= 2\norm{w}_{L^\infty([0,T];L^\infty)}^{2/p} c \norm{\omega_2}_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2 }_{L^\infty([0,T]; L^\infty)}^{1-1/p}\\
&\le 2c \left({\textstyle{\frac{1}{p}}}\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\left(1-{\textstyle{\frac{1}{p}}}\right)\norm{\omega_2}_{L^\infty([0,T];L^\infty)}\right) \\
&\le 2c (\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\norm{\omega_2}_{L^\infty([0,T];L^\infty)})=: M<\infty\, .
\end{align*}
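The first inequality in the estimate of $M_p$ above is the weighted arithmetic--geometric mean inequality
\[
x^{\theta} y^{1-\theta} \leq \theta\, x + (1-\theta)\, y \qquad \mbox{for all $x, y\geq 0$ and $\theta\in[0,1]$,}
\]
applied with $\theta = \frac{1}{p}$, $x = \norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}$, and $y = \norm{\omega_2}_{L^\infty([0,T];L^\infty)}$.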
We can thus apply
Lemma \ref{l:ODE lemma} to obtain that $E(t)\le (M_p t)^p \leq (Mt)^p$.
In particular, for any $t\leq \frac{1}{2M}$ we have $E(t)\le \frac 1{2^p}$ and we can let $p$ tend to $\infty$ to infer $E(t)=0$ on $[0, \frac{1}{2M}]$. Since the same estimates apply to any translation $\tilde{E} (t):= E (t+t_0)$ of the function $E$, we immediately conclude that $E$ vanishes identically, namely that $v_1=v_2$ on $\mathbb R^2\times [0,T]$.
\begin{proof}[Proof of Lemma \ref{l:ODE lemma}] Fix an arbitrary $t_0\leq T$ and note that if $E(t_0)=0$ there is nothing to show. Hence assume $E(t_0)> 0$ and set $a:=\sup\{t:E(t)=0\text{ and }t\le t_0\}$ (note that the set is nonempty because $E (0)=0$). $E(a)=0$ by continuity of $E$ and clearly $E(t)>0$ for all $t\in]a,t_0]$. Therefore, we can divide \eqref{e:ODE inequality for E} by $E(t)^{1-1/p}$ to obtain that $\dot E(t) E^{1/p-1}(t)\le p M$
for all $t\in]a, t_0]$. Integrating both sides gives
\begin{equation}\label{e:Integral bound on E}
\int_{a}^{t_0} \dot E(t) E^{1/p-1}(t)\,\mathrm dt\le p M (t_0-a)\, .
\end{equation}
But the left hand side equals $p E^{1/p}(t_0)-pE^{1/p}(a)=p E^{1/p}(t_0)$, from which we infer
$E^{1/p}(t_0)\le M (t_0-a) \le M t_0$.
\end{proof}
\section{Proof of Proposition \ref{p:convergence}}
Recall first the following classical metrizability result of weak${}^*$ topologies of separable Banach spaces.
\begin{lemma}[Metrizability Lemma]\label{l:Metrizability}
Let $X$ be a separable Banach space and let $K\subset X^*$ be weakly${}^*$-compact. Then $K$ is metrizable in the weak${}^*$ topology inherited from $X^*$ and a metric that induces this topology is given by
\begin{equation}\label{e:Metrization-of-weak-star-topology}
d(l, \tilde l)=\sum_{n=1}^\infty 2^{-n}\min\{1, \vert l(x_n)-\tilde l(x_n)\vert\},
\end{equation}
where $(x_n)_{n\in \mathbb N}$ is any sequence in $X$ such that $\{x_n:n\in\mathbb N\}$ is dense in $X$.
\end{lemma}
Now on to the proof of Proposition \ref{p:convergence}. We will prove convergence of the $\omega_{\varepsilon, k}$, for fixed $\varepsilon$ and $k\to\infty$, to some $\omega_\varepsilon$ in the space $C([0, T]; K)$, where $K:=\{u\in L^q(\ensuremath{\mathbb R}^2):\lVert u\rVert_{L^q}\le R\}$ is equipped with the weak${}^*$ topology inherited from $L^q_{\text w}$ (the choice of $q$ is discussed below). Here, $R$ is the uniform bound obtained in \eqref{e:uniform_bound} of Corollary \ref{c:omega_k_epsilon}. Note that, since every $L^q$ space with $1<q<\infty$ is reflexive, one can work just as well with the weak topology on $K$. Let $(\phi_n)_{n\in\mathbb N}$ be a sequence of smooth compactly supported functions such that $\{\phi_n:n\in\mathbb N\}$ is dense in every $L^q$. The metric given by \eqref{e:Metrization-of-weak-star-topology} then induces the topology of $K$, and it does not depend on $q$. Therefore, using the uniform bound \eqref{e:uniform_bound}, we conclude that \emph{the choice of $q$ does not matter}: it is sufficient to prove the statement of Proposition \ref{p:convergence} for one fixed $q\in]1, p]$ in order to obtain it for all $q\in]1, p]$.
\begin{claim}
The $\omega_{\varepsilon, k}$, seen as functions from $[0, T]$ to $K$, are equicontinuous (for simplicity we define each $\omega_{\varepsilon, k} (\cdot, t)$ on the interval $[0,t_k]$ as constantly equal to $\omega_{\varepsilon, k} (\cdot, t_k)$).
\end{claim}
\begin{proof}
For $\tilde\omega, \hat\omega \in L^q(\ensuremath{\mathbb R}^2)$, let
\begin{equation*}
d_i(\tilde\omega, \hat \omega) \overset{\text{Def.}}=\left\lvert\int_{\ensuremath{\mathbb R}^2} (\tilde\omega-\hat\omega) \phi_i\right\rvert.
\end{equation*}
Since each $\omega_{\varepsilon, k}$ solves the Euler equations in vorticity form, we can estimate
\begin{equation}\label{e:bound-on-ith-distance}
\begin{split}
d_i(\omega_{\varepsilon, k}(t, \cdot), \omega_{\varepsilon, k}(s, \cdot)) &= \left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t\partial_\tau\omega_{\varepsilon, k}(\sigma, x)\phi_i(x)\,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&=\left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t \left(-((K_2*\omega_{\varepsilon, k})\cdot\nabla)\omega_{\varepsilon, k}(x, \sigma) + f (x, \sigma)\right)\phi_i(x) \,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&\le\lVert\nabla\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)}\int_{\ensuremath{\mathbb R}^2}\int_s^t\lvert K_2*\omega_{\varepsilon, k}\rvert\lvert\omega_{\varepsilon, k}\rvert\,\mathrm d\sigma\,\mathrm dx\\
& \qquad + \lVert\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)} \int_{\ensuremath{\mathbb R}^2}\int_s^t |f(x, \sigma)|\, dx\, d\sigma\\
&\le C(\lVert\nabla\phi_i\rVert_\infty + \|\phi_i\|_\infty) \lvert s-t\rvert
\end{split}
\end{equation}
whenever $t\geq s \geq t_k$.
Let $\tilde\varepsilon>0$. We can find an $N\in\mathbb N$ (depending on $\tilde\varepsilon$) such that $\sum_{n=N+1}^\infty 2^{-n}\le\frac{\tilde\varepsilon}2$. If \begin{equation*}\lvert t-s\rvert\le \frac{\tilde\varepsilon}{2NC\max_{i\in\{1,\dots,N\}}(\lVert\nabla\phi_i\rVert_{\infty} + \|\phi_i\|_\infty)},\end{equation*} where $C$ is the constant from \eqref{e:bound-on-ith-distance}, then, by the bound in \eqref{e:bound-on-ith-distance}, we get
\begin{equation*}
d(\omega_{\varepsilon, k}(t, \cdot),\omega_{\varepsilon, k}(s, \cdot))\le\frac{\tilde\varepsilon}2+\sum_{i=1}^N \frac{\tilde\varepsilon}{2N}=\tilde\varepsilon.\qedhere
\end{equation*}
\end{proof}
By the Banach-Alaoglu theorem, bounded subsets of the dual of a Banach space are relatively compact in the weak${}^*$ topology. Therefore, using the reflexivity of $L^q$, for every $t\in[0,T]$ the bounded set $\{\omega_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is relatively compact in $L^q_{\text w}$.
Therefore, using Arzelà-Ascoli, we can conclude that there exists a subsequence of $(\omega_{\varepsilon, k})_{k\in\mathbb N}$, not relabeled, that converges in $C([0, T]; L_{\text w}^q)$, for every $q$, to the same $\omega_\varepsilon\in C([0, T]; L_{\text w}^q)$.
\begin{claim}
The function $\omega_\varepsilon$ is a solution of the Euler equations in vorticity formulation.
\end{claim}
\begin{proof}
We have, for every $k\in\mathbb N$ and $\phi\in C_{\text c}^\infty(\ensuremath{\mathbb R}^2\times[0, T])$ with $\phi(\cdot, T)=0$ (cf. \eqref{e:distrib}),
\begin{align}
&\underbrace{\int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x, t_k)\phi(x, t_k)\,\mathrm dx}_{=:I_1 (k)} + \underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2}\ \omega_{\varepsilon, k}(x,t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_2 (k)}\nonumber\\
+ &\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x,t)((K_2*_x\omega_{\varepsilon, k})(x,t)\cdot\nabla)\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_3 (k)} +\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} f(x, t)\phi(x, t)\,\mathrm dx\,\mathrm dt}_{=:I_4 (k)} = 0\label{e:term-I4}\, .
\end{align}
The term $I_4(k)$ converges to
\begin{equation*}
\int_{\ensuremath{\mathbb R}^2\times[0, T]} f(x, t)\phi(x,t)\,\mathrm dx\,\mathrm dt\, .
\end{equation*}
By the convergence of the $\omega_{\varepsilon, k}$, \begin{equation*}\lim_{k\to\infty} I_2(k)=\int_{\ensuremath{\mathbb R}^2\times[0, T]} \omega_\varepsilon(x, t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt.\end{equation*} By the definition of the initial condition of $\omega_{\varepsilon, k}$ (cf. \eqref{e:Euler-later-times}), $\omega_{\varepsilon, k}(\cdot, t_k)$ converges strongly in $L^1(\ensuremath{\mathbb R}^2)$ to $\tilde\omega(\cdot, 0)=\omega_0=\omega_\varepsilon(\cdot, 0)$. Therefore, \begin{equation*}\lim_{k\to\infty} I_1(k)=\int_{\ensuremath{\mathbb R}^2}\omega_\varepsilon(x, 0)\phi(x, 0)\,\mathrm dx.\end{equation*}
It therefore only remains to prove the convergence of $I_3$, for which we will require yet another claim.
\begin{claim}
For every $r\in[2, \infty[$ and every $t\in[0,T]$, the set $\{v_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is precompact in $L^r (B_R)$ for every $R>0$.
\end{claim}
\begin{proof}
From \eqref{e:uniform_bound}, we know that $\|v_{\varepsilon, k}(\cdot, t)\|_{L^2(\ensuremath{\mathbb R}^2)}\le C$ for some constant $C$ that is independent of $t$. Recall that $v_{\varepsilon, k}=\nabla^\bot\psi_{\varepsilon, k}$, where $\psi_{\varepsilon, k}$ solves $\Delta\psi_{\varepsilon, k} = \omega_{\varepsilon, k}$. Therefore, using the Calder\'{o}n-Zygmund inequality, one gets
\begin{equation*}
\norm{\nabla v_{\varepsilon, k}(\cdot, t)}_{L^2}\le C\norm{\omega_{\varepsilon, k}(\cdot, t)}_{L^2}.
\end{equation*}
Since the $L^2$ norms of the $\omega_{\varepsilon, k}(\cdot, t)$ are uniformly bounded, we can conclude that
\begin{equation*}
\sup_{k\in\mathbb N} \norm{v_{\varepsilon, k}(\cdot, t)}_{W^{1,2}(\ensuremath{\mathbb R}^2)}<\infty.
\end{equation*}
Hence the claimed precompactness in $L^r (B_R)$ follows from Rellich's Theorem.
\end{proof}
Therefore, the $v_{\varepsilon, k}(\cdot, t)$ converge to $v_\varepsilon(\cdot, t)$ strongly in every $L^r (B_R)$ with $r\in[2,\infty[$. Moreover, thanks to \eqref{e:uniform_bound}, we can apply Lebesgue's dominated convergence theorem to conclude that $v_{\varepsilon, k}\to v_\varepsilon$ as $k\to\infty$ in the space $L^1([0, T]; L^r (B_R))$ for every $r\in[2,\infty[$.
By definition,
\begin{equation*}
\omega_{\varepsilon, k} (v_{\varepsilon, k}\cdot\nabla)\phi-\omega_{\varepsilon} (v_{\varepsilon}\cdot\nabla)\phi = \omega_{\varepsilon, k} (v_{\varepsilon, k}-v_{\varepsilon})\cdot \nabla \phi +
(\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, .
\end{equation*}
We may assume without loss of generality that ${\rm spt}\, \phi \subset B_R\times [0,T]$ for some $R>0$. We thus rewrite
\begin{align}
I_3 (k) - \int_{t_k}^T \int_{B_R} \omega_\varepsilon (v_\varepsilon\cdot \nabla) \phi\, dx\, dt &= \int_{t_k}^T \int_{B_R} \omega_{\varepsilon, k} (v_{\varepsilon,k}-v_\varepsilon)\cdot \nabla \phi\, dx\, dt \nonumber\\
&\quad + \int_{t_k}^T \underbrace{\int_{B_R} (\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, dx}_{=:J_k(t)}\, dt\, ,\label{e:I3-converges-to-0}
\end{align}
and we observe that the integral subtracted on the left hand side converges, as $k\to\infty$, to $\int_0^T \int_{B_R} \omega_\varepsilon (v_\varepsilon\cdot \nabla) \phi\, dx\, dt$.
Observe first that, for each fixed $t$,
\[
\lim_{k\to\infty} J_k (t) = 0\, ,
\]
since $\omega_{\varepsilon, k} (\cdot, t) - \omega_\varepsilon (\cdot, t)$ converges weakly to $0$ in $L^2$, while $v_\varepsilon (\cdot, t)\cdot \nabla \phi (\cdot, t)$ is a fixed $L^2$ function. On the other hand
\[
|J_k (t)|\leq \|\nabla \phi (\cdot, t)\|_\infty (\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^2} + \|\omega_\varepsilon (\cdot, t)\|_{L^2}) \|v_\varepsilon (\cdot, t)\|_{L^2}\, .
\]
Therefore the second integral on the right hand side of \eqref{e:I3-converges-to-0} converges to $0$. The first one can be bounded by
\[
\|\nabla \phi\|_{L^\infty} \|v_{\varepsilon, k} - v_\varepsilon\|_{L^1 ([0,T], L^2 (B_R))} \|\omega_{\varepsilon, k}\|_{L^\infty ([0,T], L^2 (B_R))}
\]
and converges to $0$ as well. Hence $I_3 (k)$ converges to $\int_0^T \int_{B_R} \omega_\varepsilon (v_\varepsilon\cdot \nabla) \phi\, dx\, dt$, which completes the proof of the claim.
\end{proof}
\section{Proof of Lemma \ref{l:extension}}
Consider $\vartheta \in L^2_m\cap \mathscr{S}$ for $m\geq 2$ and let $v:= K_2*\vartheta$. We first claim that
\begin{equation}\label{e:average}
\int_{B_R} v = 0 \qquad \qquad \mbox{for every $R>0$.}
\end{equation}
With \eqref{e:average} at our disposal, since $\|Dv\|_{L^2 (\mathbb R^2)} = \|\vartheta\|_{L^2 (\mathbb R^2)}$, we use the Poincar\'e inequality to conclude
\begin{equation}
R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C \|\vartheta\|_{L^2 (\mathbb R^2)}
\end{equation}
for a geometric constant $C$. This is then enough to infer the remaining conclusions of the lemma.
In order to achieve \eqref{e:average} observe first that $v = \nabla^\perp h$, where $h$ is the unique potential-theoretic solution of $\Delta h = \vartheta$, given by $h = K * \vartheta$ with $K (x) = \frac{1}{2\pi} \log |x|$. Since $K(R_\theta x) = K (x)$ and $\vartheta (x) = \vartheta (R_{2\pi/m} x)$, it follows that $h (R_{2\pi/m} x) = h (x)$, i.e. $h$ is $m$-fold symmetric. Therefore $R_{2\pi/m} \nabla h (R_{2\pi/m} x) = \nabla h (x)$. In particular, integrating in $x$ and using that the rotation is a measure-preserving transformation of the disk, we conclude
\[
\int_{B_R} \nabla h = R_{2\pi/m} \int_{B_R} \nabla h \, ,
\]
and thus,
\[
\int_{B_R} \nabla h = \frac{1}{m} \sum_{k=0}^{m-1} R_{2k\pi/m} \int_{B_R} \nabla h\, .
\]
However, since $m\ge 2$, we have $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$, showing that $\int_{B_R} \nabla h = 0$ and hence also $\int_{B_R} v = \int_{B_R} \nabla^\perp h = 0$, which is \eqref{e:average}.
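The identity $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$ used in the last step can be checked, for instance, by identifying $\mathbb R^2$ with $\mathbb C$, so that $R_{2k\pi/m}$ acts as multiplication by $e^{2\pi i k/m}$: summing the geometric progression,
\[
\sum_{k=0}^{m-1} e^{2\pi i k/m} = \frac{e^{2\pi i}-1}{e^{2\pi i/m}-1} = 0\, ,
\]
where the denominator does not vanish precisely because $m\geq 2$.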
\begin{remark}\label{r:Camillo_dumb}
We next show that it is not possible to find a continuous extension of the operator $L^2\cap \mathscr{S} \ni \vartheta \mapsto K_2* \vartheta \in \mathscr{S}'$ to the whole $L^2$. First of all we observe that, if such an extension exists, it then needs to coincide with $K_2* \vartheta$ when $\vartheta \in L^1 \cap L^2$. We next exhibit a sequence of divergence free vector fields $\{v_k\}\subset W^{1,1}\cap W^{1,2}$ with the property that $\omega_k = \curl v_k$ converge to $0$ strongly in $L^2$ but $v_k$ converge locally to a constant vector field $v_0\neq 0$. In order to do this, we first define the following functions $\phi_k$ on the positive real axis:
\[
\phi_k (r) :=
\left\{
\begin{array}{ll}
1 + \frac{3}{4\ln k} \qquad &\mbox{for $r\leq \frac{k}{2}$}\\ \\
1 + \frac{3}{4\ln k} - \frac{1}{k^2 \ln k} \left(r-\frac{k}{2}\right)^2\qquad &\mbox{for $\frac{k}{2} \leq r \leq k$}\\ \\
1 + \frac{1}{2 \ln k}- \frac{1}{\ln k} \ln \frac{r}{k} \qquad & \mbox{for $k\leq r \leq k^2$}\\ \\
\frac{1}{2 k^4 \ln k} (r-2k^2)^2\qquad & \mbox{for $k^2 \leq r\leq 2k^2$}\\ \\
0 \qquad &\mbox{for $r\geq 2 k^2$}\, .
\end{array}
\right.
\]
Observe that $\phi_k$ is $C^1$ and its derivative is Lipschitz. Next we define the stream functions
\[
\psi_k (x) = - \phi_k (|x|) v_0^\perp \cdot x
\]
and the vector field $v_k (x) = \nabla^\perp \psi_k (x)$. By construction $v_k$ is divergence free, compactly supported, and Lipschitz. In particular, it belongs to $W^{1,p}$ for every $p$. Moreover, $v_k$ equals $(1+\frac{3}{4\ln k}) v_0$ on $B_{k/2}$ and it thus follows that, as $k\to \infty$, $v_k$ converges locally to the constant vector field $v_0$. It remains to check that $\curl v_k = \Delta \psi_k$ converges to $0$ strongly in $L^2$. We compute
\[
\Delta \psi_k = - \underbrace{v_0^\perp \cdot x\, \Delta (\phi_k (|x|))}_{=:f_k} - \underbrace{\nabla (\phi_k (|x|))\cdot v_0^\perp}_{=: g_k}\,
\]
and we seek to bound $f_k$ and $g_k$ pointwise. For what concerns $f_k$ observe that $\Delta\phi_k$ vanishes on $|x|\leq \frac{k}{2}$, $k\leq |x| \leq k^2$, and $2k^2 \leq |x|$. On the remaining regions, using the formula for the Laplacian in polar coordinates, we can estimate
\[
|f_k (x)|\leq |v_0| |x| \left(|\phi_k'' (|x|)| + |x|^{-1} |\phi_k' (|x|)|\right)\, .
\]
In particular, we conclude
\[
|f_k (x)| \leq \frac{C}{|x| \ln k}\, ,
\]
for a constant $C$ independent of $k$.
As for $g_k$, it vanishes for $|x|\leq \frac{k}{2}$ and $|x|\geq k^2$, and where it does not vanish we have the estimate
\[
|g_k (x)|\leq |v_0| |\phi_k' (|x|)| \leq \frac{C}{|x|\ln k}\, ,
\]
again for a constant $C$ independent of $k$. Passing to polar coordinates, we can thus estimate
\begin{align*}
\|\Delta \psi_k\|^2_{L^2 (\mathbb R^2)} \leq & \frac{C}{(\ln k)^2} \int_{k/2}^{2k^2} \frac{1}{r} \, dr =
\frac{C}{(\ln k)^2} \left(\ln (2k^2) - \ln {\textstyle{\frac{k}{2}}}\right)
= \frac{C \ln (4k)}{(\ln k)^2}\, ,
\end{align*}
which converges to $0$ as $k\to\infty$, proving the claim.
\end{remark}
\chapter{General strategy: background field and self-similar coordinates}
\label{chapter:general}
\section{The initial velocity and the force}
First of all, the initial velocity $v_0$ of Theorem \ref{thm:main} will have the following structure\index{aaga@$\alpha$}\index{aagb@$\beta$}
\begin{equation}\label{e:v_0}
v_0 (x) =
\begin{cases}
\beta (2-\alpha)^{-1} |x|^{-\alpha} \chi (\lvert x\rvert) x^\perp\;\; &\mbox{if }{\bar\alpha} =\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
where $0<\alpha\leq {\bar\alpha}<1$, $\chi$ is a smooth cut-off function, compactly supported in $\mathbb R$ and identically $1$ on the interval $[-1,1]$, and $\beta$ is a sufficiently large constant (whose choice will depend on $\alpha$). For simplicity we will assume that $\chi$ takes values in $[0,1]$ and it is monotone non-increasing on $[0, \infty[$, even though none of these conditions play a significant role.
A direct computation gives ${\rm div}\, v_0 = 0$.
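Indeed, for any vector field of the form $v (x) = \varrho (|x|)\, x^\perp$ with a radial profile $\varrho$ which is smooth away from the origin (here $\varrho$ is just a placeholder; for $v_0$ one takes $\varrho (r) = \beta (2-\alpha)^{-1} r^{-\alpha} \chi (r)$) we have
\[
{\rm div}\, v (x) = \nabla \left(\varrho (|x|)\right)\cdot x^\perp + \varrho (|x|)\, {\rm div}\, x^\perp = \varrho' (|x|)\, \frac{x\cdot x^\perp}{|x|} = 0\, .
\]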
The corresponding $\omega_0$ is then given by\index{aagz_0@$\omega_0$}\index{aalv_0@$v_0$}\index{aagx@$\chi$}
\begin{equation}\label{e:omega_0}
\omega_0 (x) =
\curl v_0 (x) =
\begin{cases}
\beta \left[ |x|^{-\alpha} \chi (|x|) + (2-\alpha)^{-1} \chi' (|x|) |x|^{1-\alpha}\right] \;\;&\mbox{if }{\bar\alpha}=\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
and the relation $v_0 = K_2*\omega_0$ comes from standard Calder{\'o}n-Zygmund theory (since ${\rm div}\, v_0 =0$, $\curl v_0=\omega_0$ and $v_0$ is compactly supported).
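For later reference we also record the elementary computation behind \eqref{e:omega_0}: with the convention $x^\perp = (-x_2, x_1)$ (the one consistent with \eqref{e:v_0} and \eqref{e:omega_0}), for $v (x) = \varrho (|x|)\, x^\perp$ as above one has
\[
\curl v (x) = \partial_1 \left(\varrho (|x|)\, x_1\right) + \partial_2 \left(\varrho (|x|)\, x_2\right) = 2\, \varrho (|x|) + |x|\, \varrho' (|x|)\, ,
\]
and applying this identity with $\varrho (r) = \beta (2-\alpha)^{-1} r^{-\alpha} \chi (r)$ gives precisely \eqref{e:omega_0}.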
The exponent ${\bar\alpha}\in ]0,1[$ is chosen, depending on the $p$ in Theorem \ref{thm:main}, so that ${\bar\alpha} p < 2$: in the rest of the notes we assume that $p$, ${\bar\alpha}$, and $\alpha$ are fixed. In particular it follows from the definition that $\omega_0\in L^1\cap L^p$ and that $v_0 \in L^1 \cap L^\infty$.
Next, the function $|x|^{-{\bar\alpha}}$ will be appropriately smoothed to a (radial) function
\begin{equation}\label{e:def-bar-Omega}
\bar \Omega (x) = g (|x|)
\end{equation}
\index{aalg@$g$}\index{aagZbar@$\bar \Omega$}
such that:
\begin{align}
&g \in C^\infty ([0,R]) \qquad \qquad &\forall R>0\, ,\\
&g (r) = r^{-{\bar\alpha}} \qquad\qquad &\mbox{for $r\geq 2$,}\label{e:decay-at-infinity}\\
&g(r) = g(0) + \frac{g''(0)}{2} r^2\qquad \qquad &\mbox{for $r$ in a neighborhood of $0$.}\label{e:g-constant-around-0}
\end{align}
This smoothing will be carefully chosen so as to achieve some particular properties, whose proof will take a good portion of the notes (we remark however that while a sufficient degree of smoothness and the decay \eqref{e:decay-at-infinity} play an important role, the condition \eqref{e:g-constant-around-0} is just technical and its role is to simplify some arguments). We next define the function $\bar V (x)$\index{aagf@$\zeta$} \index{aalVbar@$\bar V$} as
\begin{equation}\label{e:def-barV}
\bar V (x) = \zeta (|x|) x^\perp\, ,
\end{equation}
where $\zeta$ is
\begin{equation}\label{e:def-zeta}
\zeta(r) = \frac{1}{r^2}\int_0^r \rho g(\rho)\,\mathrm d\rho\, .
\end{equation}
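With this choice of $\zeta$ one can check directly that ${\rm div}\, \bar V = 0$ and ${\rm curl}\, \bar V = \bar \Omega$, two identities which play a role in Remark \ref{r:well-defined-2} below. Indeed, differentiating $r^2 \zeta (r) = \int_0^r \rho\, g(\rho)\,\mathrm d\rho$ gives $2 r \zeta (r) + r^2 \zeta' (r) = r\, g(r)$, and hence, by the same computation as for \eqref{e:omega_0},
\[
\curl \bar V (x) = 2\, \zeta (|x|) + |x|\, \zeta' (|x|) = g (|x|) = \bar \Omega (x)\, .
\]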
\begin{remark}\label{r:well-defined-2} Observe that under our assumptions $\bar \Omega\in L^q(\ensuremath{\mathbb R}^2)$ for every $q>\frac{2}{{\bar\alpha}}$, but it does not belong to any $L^q(\ensuremath{\mathbb R}^2)$ with $q\leq \frac{2}{{\bar\alpha}}$. Since when $p\geq 2$ the condition ${\bar\alpha} p <2$ implies ${\bar\alpha} < 1$, we cannot appeal to Young's Theorem as in Remark \ref{r:well-defined} to define $K_2* \bar\Omega$.
Nonetheless, $\bar V$ can be understood as a natural definition of $K_2* \bar\Omega$ for radial distributions of vorticity which are in $L^1_{\rm loc}$. Indeed observe first that ${\rm div}\, \bar V=0$ and ${\rm curl}\, \bar V = \bar\Omega$, and notice also that $\bar V$ would decay at infinity like $|x|^{-1}$ if $\bar\Omega$ were compactly supported. This shows that $\bar V$ would indeed coincide with $K_2*\bar \Omega$ for compactly supported radial vorticities. Since we can approximate $\bar \Omega$ with $\bar\Omega_N := \bar\Omega \mathbf{1}_{B_N}$, passing to the limit in the corresponding formulas for $K_2* \bar\Omega_N$ we would recover $\bar V$.
Note also that in the remaining computations what really matters are the identities ${\rm div}\, \bar V = 0$ and ${\rm curl}\, \bar V = \bar \Omega$ and so regarding $\bar V$ as $K_2* \bar\Omega$ only simplifies our terminology and notation.
\end{remark}
The force $f$ will then be defined in such a way that $\tilde \omega$, the curl of the velocity \index{aalf@$f$}\index{aalvtilde@$\tilde{v}$}\index{aagztilde@$\tilde{\omega}$}
\begin{equation}\label{e:tilde-v}
\tilde v (x, t) = \beta t^{1/\alpha-1} \bar V \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)\,
\end{equation}
is a solution of \eqref{e:Euler}. In particular, since $(\tilde v\cdot\nabla)\tilde \omega=0$, the force $f$ is given by the explicit formula
\begin{equation}\label{e:def-f}
f (x,t) = \partial_t \tilde{\omega} (x,t)\, .
\end{equation}
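The cancellation $(\tilde v\cdot \nabla)\tilde\omega = 0$ invoked above follows from the fact that $\tilde\omega (\cdot, t)$ is radial while $\tilde v (\cdot, t)$ is everywhere tangent to circles centered at the origin. More precisely, writing $\tilde\omega (x,t) = W (|x|, t)$ for a suitable profile $W$ (cf. \eqref{e:curl-tilde-v} below) and recalling that $\bar V (\xi) = \zeta (|\xi|)\, \xi^\perp$,
\[
(\tilde v \cdot \nabla) \tilde\omega\, (x,t) = \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|)\, \left(x^\perp \cdot \frac{x}{|x|}\right) \partial_r W (|x|, t) = 0\, ,
\]
since $x^\perp\cdot x = 0$.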
With this choice a simple computation, left to the reader, shows that $\tilde{\omega}$ solves \eqref{e:Euler} with initial data $\omega_0$. Note in passing that, although as pointed out in Remark \ref{r:well-defined-2} there is not enough summability to make sense of the identity $K_2* \bar \Omega = \bar V$ by using standard Lebesgue integration, the relation $K_2* \tilde\omega = \tilde{v}$ is made obvious by ${\rm div}\, \tilde{v} =0$, ${\rm curl}\, \tilde{v} = \tilde{\omega}$, and the boundedness of the supports of both $\tilde{\omega}$ and $\tilde{v}$.
The pair $(\tilde{\omega}, \tilde{v})$ is one of the solutions claimed to exist in Theorem \ref{thm:main}. The remaining ones will be described as a one-parameter family $(\omega_\varepsilon, v_\varepsilon)$ for a nonzero choice of the parameter $\varepsilon$, while $(\tilde{\omega}, \tilde{v})$ will correspond to the choice $\varepsilon =0$. We will however stick to the notation $(\tilde\omega, \tilde v)$ to avoid confusions with the initial data.
It remains to check that $f$ belongs to the functional spaces claimed in Theorem \ref{thm:main}.
\begin{lemma}\label{lem:Curl of tilde v}
$\tilde\omega$ is a smooth function on $\{t>0\}$ which satisfies, for all $t>0$ and $x\in\ensuremath{\mathbb R}^2$,
\begin{equation}\label{e:curl-tilde-v}
\tilde \omega (x, t) = \beta t^{-1} \bar \Omega \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|) + \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, ,
\end{equation}
while the external force $f$ and $\partial_t \tilde{v} = K_2*f$ belong, respectively, to the spaces $L^1([0,T]; L^1 \cap L^p)$ and $L^1 ([0,T], L^2)$ for every positive $T$. Likewise $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $\tilde{v} \in L^\infty ([0,T], L^2)$.
\end{lemma}
We end the section with a proof of the lemma, while we resume our explanation of the overall approach to Theorem \ref{thm:main} in the next section.
\begin{proof} The formula \eqref{e:curl-tilde-v} is a simple computation. From it we also conclude that $\tilde\omega = \curl\tilde v$ is a smooth function on $\{t>0\}$ and hence differentiable in all variables.
Observe next that $|\bar V (x)|\leq C |x|^{1-{\bar\alpha}}$ and we can thus estimate $|\tilde{v} (x,t)|\leq C {t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}$. Since its spatial support is contained in ${\rm spt}\, (\chi)$, we conclude that $\tilde v$ is bounded and belongs to $L^\infty ([0,{T}], L^2)$ {for any $T>0$.}
Using that $\bar \Omega (x) = |x|^{-{\bar\alpha}}=g(\abs x)$ for $|x|\geq 2$, we write
\begin{align*}
\tilde{\omega} (x,t) = & \beta t^{-1} g \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}
+ \beta { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \chi (|x|) \mathbf{1}_{\{|x|> 2t^{1/\alpha}\}} \\
&+ \beta t^{-1}\zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, .
\end{align*}
In particular, recalling that $|\bar \Omega (x)|\leq C|x|^{-{\bar\alpha}}$ and $\zeta (|x|) |x| \leq C |x|^{1-{\bar\alpha}}$ we easily see that
\begin{align}
\|\tilde\omega (\cdot, t)\|_{L^1} &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}\, dx\, ,\\
\|\tilde{\omega} (\cdot, t)\|_{L^p}^p &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}} |x|^{-p {\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}}|x|^{p-p{\bar\alpha}}\, dx\, .
\end{align}
This implies immediately that $\tilde\omega \in L^\infty ([0,{ T]}, L^1\cap L^p)$ {for any $T>0$ }, given that ${\bar\alpha} p <2$ (and hence $|x|^{-{\bar\alpha} p}$ is locally integrable).
We now differentiate in time in the open regions $\{|x|< 2t^{1/\alpha}\}$ and $\{|x| > 2t^{1/\alpha}\}$ separately to achieve\footnote{Since we will only estimate integral norms of $f$, its values on $\{|x|= 2t^{1/\alpha}\}$ are of no importance. However, given that $f$ is in fact smooth over the whole domain $\{t>0\}$, we can infer the validity of the formula \eqref{e:f1-f2} for every point $x\in \{|x|= 2t^{1/\alpha}\}$ by approximating it with a sequence of points in $\{|x|< 2t^{1/\alpha}\}$ and passing to the limit in the corresponding expressions.}
\begin{align}
f (x,t) = & - \beta \left(t^{-2} g \left(\frac{|x|}{t^{1/\alpha}}\right)
+ \frac{1}{\alpha}{t^{-2-1/\alpha}} |x| g' \left(\frac{|x|}{t^{1/\alpha}}\right)\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}\nonumber\\
& {+ \beta \left(\frac{{\bar\alpha}}{\alpha}-1\right)t^{\frac{{\bar\alpha}}{\alpha}-2} |x|^{-{\bar\alpha}}\chi(|x|)\mathbf{1}_{\{|x|>2t^{1/\alpha}\}}
}\nonumber\\
& - \beta \left(t^{-2} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} t^{-2-1/\alpha} \zeta' \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\right) |x|\chi' (|x|)\nonumber\\
=:& f_1 (x,t) + f_2 (x,t){ + f_3(x,t)}\, .\label{e:f1-f2}
\end{align}
We wish to prove that $f\in L^1 ([0,T], L^1\cap L^p)$. On the other hand, since for any $T_0>0$ both $f_1+f_2$ and $f_3$ are smooth and have compact support on $\mathbb R^2\times [T_0, T]$, it suffices to show that $f\in L^1 ([0,T_0], L^1\cap L^p)$ for a sufficiently small $T_0$. Recalling that $|g (|x|)| + |g' (|x|)||x| \leq C |x|^{-{\bar\alpha}}$, we can then bound
\begin{equation}\label{e:bound-f1}
|f_1 (x,t)|\leq C {t^{-2+\frac{{\bar\alpha}}{\alpha}}} |x|^{-{\bar\alpha}} \mathbf{1}_{|x|\leq 2 t^{1/\alpha}} \qquad
\mbox{for all $0<t<T_0$ and all $x$}.
\end{equation}
Thus
\begin{align}
\|f_1\|_{L^1 (\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{2/\alpha -2}\, dt < \infty\, ,\\
\|f_1\|_{L^1 ([0,T_0];L^p( \mathbb R^2))} &\leq C \int_0^{T_0} t^{2/(\alpha p) -2}\, dt < \infty\, ,
\end{align}
where the condition $2> {\bar\alpha} p$ entered precisely in the finiteness of the latter integral.
Coming to the second term, we observe that it vanishes when $\bar\alpha = \alpha$. When $\alpha < \bar\alpha$, since $\chi$ is compactly supported in $\mathbb R$, we get
\begin{align*}
\|f_2\|_{L^1(\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac 1{\alpha}(2-{\bar\alpha})}) dt<+\infty\\
\|f_2\|_{L^1([0,T_0];L^p(\mathbb R^2))}
&\leq \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac p{\alpha}(2-{\bar\alpha})})^{1/p} dt <+\infty
\, .
\end{align*}
To handle the last term $f_3$, we compute $\zeta$ explicitly:
\begin{align*}
\zeta (r) &= \frac{1}{r^2} \left(C + \int_2^r \rho^{1-{\bar\alpha}}\,\mathrm d\rho\right) = a r^{-2} + b r^{-{\bar\alpha}} & \text{for all } r \geq 2\, ,
\shortintertext{where $a$ and $b$ are two fixed constants. Likewise}
\zeta'(r)&= -2 a r^{-3} -{\bar\alpha} b r^{-{\bar\alpha}-1}\qquad &\text{for all } r \geq 2\, .
\end{align*}
Recall that $\chi' (|x|)=0$ for $|x|\leq 1$.
Therefore, for $t\leq T_0$ sufficiently small, the functions $\zeta$ and $\zeta'$ are evaluated at $|x| t^{-1/\alpha} \geq 2$ in the formula for $f_3$ (cf. \eqref{e:f1-f2}). Thus
\[
f_3 (x,t)
=-\beta t^{-2}\left(
a\left(1-\frac 2{\alpha}\right) t^{\frac 2{\alpha}}|x|^{-1} + b\left(1-\frac{{\bar\alpha}}{\alpha}\right)t^{\frac{{\bar\alpha}}{\alpha}} |x|^{1-{\bar\alpha}}
\right)\chi'(|x|)\, .
\]
In particular $f_3$ has compact support. Since $\alpha <1$, the function
\[
-\beta t^{-2}
a\left(1-\frac 2{\alpha}\right) t^{\frac 2{\alpha}}|x|^{-1} \chi'(|x|)
\]
is bounded, and thus belongs to
$L^1 ([0,T_0], L^1\cap L^p)$. As for the second summand, it vanishes if $\alpha = \bar \alpha$, while its $L^p$ norm at time $t$ can be bounded by $C t^{-2+\frac{\bar \alpha}{\alpha}}$ if $\bar\alpha > \alpha$. The latter function however belongs to $L^1 ([0,T_0])$.
Observe next that, since for every positive $t$ the function $f (\cdot, t)$ is smooth and compactly supported, $K_2* f (\cdot, t)$ is the unique divergence-free vector field which belongs to $L^1$ and such that its curl gives $f (\cdot, t)$. Hence, since $f (\cdot, t) = \curl \partial_t \tilde{v} (\cdot, t)$ and $\partial_t \tilde{v} (\cdot, t)$ is smooth and compactly supported, we necessarily have $K_2 * f (\cdot, t) = \partial_t \tilde{v} (\cdot, t)$. It remains to show that $\partial_t \tilde{v} \in L^1 ([0,T]; L^2)$ for every positive $T$. To that end we compute
\[
\tilde{v} (x,t) = \beta t^{1/\alpha-1} \bar{V} \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)
= \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) x^\perp \chi (|x|)
\]
\[
\partial_t \tilde{v} (x,t ) = - \beta t^{-2} \chi (|x|) x^\perp \left(\zeta \left( \frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} \frac{|x|}{t^{1/\alpha}} \zeta' \left(\frac{|x|}{t^{1/\alpha}} \right)\right)\, .
\]
In order to compute the $L^2$ norm of $\partial_t \tilde{v} (\cdot, t)$ we break the space into two regions as in the computations above. In the region $\{|x|\leq 2 t^{1/\alpha}\}$ we use that $|\zeta| + |g|+ |\zeta'|$ are bounded to compute
\[
\int_{|x|\leq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4} \int_{|x|\leq 2t^{1/\alpha}} |x|^2\,dx \leq C t^{4/\alpha -4}\, ,
\]
which is a bounded function on $[0,1]$. On $\{|x|\geq 2t^{1/\alpha}\}$ we observe that the function can be explicitly computed as
\[
-\beta t^{-2} \chi (|x|) x^\perp \left(a\left(1-\frac{2}{\alpha}\right) t^{2/\alpha} |x|^{-2} + b \left(1-\frac{\bar \alpha}{\alpha}\right) t^{\frac{\bar\alpha}{\alpha}}|x|^{-\bar \alpha}\right)\, .
\]
If we let $\bar R>0$ be such that the support of $\chi$ is contained in $B_{\bar R}$, we use polar coordinates to estimate
\[
\int_{|x|\geq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4+4/\alpha} \int_{2t^{1/\alpha}}^{\bar R} \frac{d\rho}{\rho} + C |\alpha - \bar \alpha| t^{2\frac{\bar \alpha}{\alpha}-4}\, .
\]
We can therefore estimate the $L^2$ norm of $\partial_t \tilde{v}$ at time $t$ by
\[
\|\partial_t \tilde{v} (\cdot, t)\|_{L^2} \leq C + C |\alpha - \bar \alpha| t^{\frac{\bar \alpha}{\alpha} -2}\, .
\]
When $\alpha = \bar \alpha$ we conclude that the $L^2$ norm of $\partial_t \tilde{v}$ is bounded, while for $\bar\alpha > \alpha$ the function $t\mapsto t^{\frac{\bar \alpha}{\alpha} -2}$ belongs to $L^1 ([0,T])$.
\end{proof}
\section{The infinitely many solutions}
We next give a more precise statement leading to Theorem \ref{thm:main} as a corollary.
\begin{theorem}\label{thm:main2}
Let $p\in ]2, \infty[$ be given and let $\alpha$ and ${\bar\alpha}$ be any positive numbers such that $\alpha \leq {\bar\alpha}$ and ${\bar\alpha} p <2$. For an appropriate choice of the smooth function $\bar \Omega$ and of a positive constant $\beta$ as in the previous section, we can find, additionally:
\begin{itemize}
\item[(a)] a suitable nonzero function $\eta\in (L^1\cap H^2) (\mathbb R^2; \mathbb C)$ with $K_2 * \eta\in L^2(\ensuremath{\mathbb R}^2; \mathbb C^2)$, \index{aagh@$\eta$}
\item[(b)] a real number $b_0$ and a positive number $a_0>0$, \index{aalazero@$a_0$}\index{aalbzero@$b_0$}
\end{itemize}
with the following property.
Consider $\omega_0$, $v_0$, $\tilde{v}$, $\tilde\omega = \curl \tilde{v}$, and $f$ as defined in \eqref{e:v_0},\eqref{e:omega_0}, \eqref{e:tilde-v}, and \eqref{e:def-f}. Then for every $\varepsilon\in \ensuremath{\mathbb R}$ there is a solution $\omega_\varepsilon$ of \eqref{e:Euler} with initial data $\omega_0$ such that \index{aage@$\varepsilon$}\index{aagzepsilon@$\omega_\varepsilon$}
\begin{enumerate}[(i)]
\item\label{item:1-omega in L infinity L1 Lp} $\omega_\varepsilon \in L^\infty ([0,T], L^1\cap L^p )$ for every $T>0$;
\item\label{item:2-v in L infinity L2} $v_\varepsilon := K_2 * \omega_\varepsilon \in L^\infty ([0,T], L^2)$ for every $T>0$;
\item\label{item:3-eigenvalue bound} as $t\to0$,
\begin{equation}\label{e:asymptotic-in-t}
\|\omega_\varepsilon (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} \operatorname{Re} (t^{i b_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2(\ensuremath{\mathbb R}^2)} = o (t^{a_0 +1/\alpha -1})\, ;
\end{equation}
\item\label{e:Camillo-is-silly} if $b_0=0$, then $\eta$ is real-valued.
\end{enumerate}
\end{theorem}
Observe that, by a simple computation,
\begin{align*}
\| t^{a_0-1} \operatorname{Re} (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} = t^{a_0 +1/\alpha -1} \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2}\, ,
\end{align*}
and thus it follows from \eqref{e:asymptotic-in-t} that
\begin{equation}\label{eq:difference-of-the-omega}
\limsup_{t\downarrow 0} t^{1-1/\alpha - a_0} \|\omega_{\varepsilon} (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2} \geq |\varepsilon - \bar\varepsilon| \max_{\theta\in[0,2\pi]}\| \operatorname{Re} (e^{i\theta} \eta)\|_{L^2}\,
\end{equation}
(note that in the last conclusion we need (iv) if $b_0=0$).
Since $\|\eta\|_{L^2} >0$, we conclude that the solutions $\omega_\varepsilon$ described in Theorem \ref{thm:main2} must be all distinct.
For each fixed $\varepsilon$, the solution $\omega_\varepsilon$ will be achieved as a limit of a suitable sequence of approximations $\omega_{\varepsilon, k}$\index{aagzepsilonk@$\omega_{\varepsilon, k}$} in the following way. After fixing a sequence of positive times $t_k$\index{aaltk@$t_k$} converging to $0$, which for convenience are chosen to be $t_k := e^{-k}$, we solve the following Cauchy problem for the Euler equations in vorticity formulation
\begin{equation}\label{e:Euler-later-times}
\left\{
\begin{array}{ll}
& \partial_t \omega_{\varepsilon,k} + ((K_2* \omega_{\varepsilon,k})\cdot \nabla) \omega_{\varepsilon,k} = f \\ \\
& \omega_{\varepsilon, k} (\cdot, t_k) = \tilde\omega (\cdot, t_k) + \varepsilon t_k^{a_0-1} \operatorname{Re} (t_k^{ib_0} \eta (t_k^{-1/\alpha}\cdot))\, .
\end{array}\right.
\end{equation}
Observe that, since $t_k$ is positive, the initial data $\omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^1\cap L^\infty$, while the initial velocity defining $v_{k, \varepsilon}:= K_2 * \omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^2$. Since $K_2 * f \in L^1 ([0,T], L^2)$ for every $T$, we can apply the classical theorem of Yudovich (namely, Theorem \ref{thm:Yudo} and Remark \ref{r:A-priori-estimates}) to conclude that
\begin{corollary}\label{c:omega_k_epsilon}
For every $k$, $\varepsilon$, and every $T$ there exists a unique solution $\omega_{\varepsilon, k}$ of \eqref{e:Euler-later-times} with the property that $\omega_{\varepsilon , k} \in L^\infty ([t_k, T], L^1\cap L^\infty)$ and $v_{\varepsilon, k}\in L^\infty ([t_k, T], L^2)$ for every positive $T$. Moreover, we have the following bounds for every $t$
\begin{align}
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^1}\,\mathrm ds\\
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^p}\,\mathrm ds \label{e:omega_Lp_estimate}\\
\|v_{\varepsilon, k} (\cdot, t)\|_{L^2}\leq &\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2} +
\int_{t_k}^t \|K_2* f (\cdot, s)\|_{L^2}\,\mathrm ds\, .
\end{align}
\end{corollary}
Next, since we can easily bound $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1}$, $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p}$, and $\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2}$ independently of $k$, for each fixed $\varepsilon$ we conclude
\begin{equation}\label{e:uniform_bound}
\sup_{k\in \mathbb N} \sup_{t\in [t_k, T]}
\left(\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} + \|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} + \|v_{\varepsilon, k} (\cdot, t)\|_{L^2}
\right) < \infty\, .
\end{equation}
In turn we can use \eqref{e:uniform_bound} to conclude that, for each fixed $\varepsilon$, a subsequence of $\omega_{\varepsilon, k}$ converges to a solution $\omega_\varepsilon$ of \eqref{e:Euler} which satisfies the conclusions \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\begin{proposition}\label{p:convergence}\label{P:CONVERGENCE}
Assume $p, \alpha, {\bar\alpha}, \omega_0, v_0, \tilde\omega, \tilde{v}, f, a_0, b_0$, and $\eta$ are as in Theorem \ref{thm:main2} and let $\omega_{\varepsilon, k}$ be as in Corollary \ref{c:omega_k_epsilon}. Then, for every fixed $\varepsilon$, there is a subsequence, not relabeled, with the property that $\omega_{\varepsilon, k}$ converges (uniformly in $C ([0,T], L^q_w)$ for every positive $T$ and every $1< q\leq p$, where $L^q_w$ denotes the space $L^q$ endowed with the weak topology) to a solution $\omega_\varepsilon$ of \eqref{e:Euler} on $[0, \infty[$ with initial data $\omega_0$ and satisfying the bounds \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\end{proposition}
The proof uses classical convergence theorems and we give it in the appendix for the reader's convenience. The real difficulty in the proof of Theorem \ref{thm:main2} is to ensure that the bound (iii) holds. This is reduced to the derivation of suitable estimates on $\omega_{\varepsilon, k}$, which we detail in the following statement.
\begin{theorem}\label{thm:main3}
Assume $p, \alpha, {\bar\alpha}$ are as in Theorem \ref{thm:main2} and fix $\varepsilon >0$. For an appropriate choice of $\bar \Omega$ and $\beta$ there is a triple $\eta$, $a_0$, and $b_0$ as in Theorem \ref{thm:main2} and three positive constants $T_0, \delta_0$, and $C$ with the property that
\begin{equation}\label{e:asymptotic-in-t-2}
\|\omega_{\varepsilon,k} (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} {\rm Re}\, (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} \leq C t^{a_0+1/\alpha-1+\delta_0} \qquad \forall t\in [t_k, T_0]\, .
\end{equation}
\end{theorem}
It is then obvious that the final conclusion \ref{item:3-eigenvalue bound} of Theorem \ref{thm:main2} is a consequence of the more precise estimate \eqref{e:asymptotic-in-t-2} on the approximations $\omega_{\varepsilon,k}$. The rest of these lecture notes are thus devoted to the proof of Theorem \ref{thm:main3} and we will start in the next section by breaking it into two main parts.
\section{Logarithmic time scale and main Ansatz}
\index{similarity-variables@Similarity variables}\index{aagZ@$\Omega$}\index{aagt@$\tau$}\index{aagx@$\xi$}
First of all, we will change variables and unknowns of the Euler equations (in vorticity formulation) in a way which will be convenient for many computations. Given a solution $\omega$ of \eqref{e:Euler} on $\ensuremath{\mathbb R}^2\times [T_0, T_1]$ with $0\leq T_0 \leq T_1$, we introduce a new function $\Omega$ on $\mathbb R^2 \times [\ln T_0, \ln T_1]$ with the following transformation. We set $\tau=\ln t$, $\xi=x t^{-1/\alpha}$ and
\begin{equation}\label{e:omega->Omega}
\Omega (\xi, \tau) := e^{\tau} \omega (e^{\tau/\alpha} \xi, e^\tau)\, ,
\end{equation}
which in turn results in
\begin{equation}\label{e:Omega->omega}
\omega (x, t) = t^{-1} \Omega (t^{-1/\alpha} x, \ln t)\, .
\end{equation}
Observe that, if $v (\cdot, t) = K_2 * \omega (\cdot, t)$ and $V( \cdot, \tau) = K_2 * \Omega (\cdot, \tau)$, we can derive similar transformation rules for the velocities as \index{aalV@$V$}
\begin{align}
V (\xi, \tau) &= e^{\tau (1-1/\alpha)} v(e^{\tau/\alpha} \xi, e^\tau)\label{e:v->V}\, ,\\
v (x,t) &= t^{-1+1/\alpha} V (t^{-1/\alpha} x, \ln t)\label{e:V-t>v}\, .
\end{align}
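The rule \eqref{e:v->V} can be verified, at least formally, using only the $(-1)$-homogeneity of the Biot--Savart kernel, i.e. $K_2 (\lambda x) = \lambda^{-1} K_2 (x)$ for $\lambda >0$: with the change of variables $y = e^{-\tau/\alpha} z$ we compute
\begin{align*}
V (\xi, \tau) = \int_{\ensuremath{\mathbb R}^2} K_2 (\xi - y)\, e^{\tau} \omega (e^{\tau/\alpha} y, e^\tau)\,\mathrm dy
&= e^{\tau (1-2/\alpha)} \int_{\ensuremath{\mathbb R}^2} K_2 (\xi - e^{-\tau/\alpha} z)\, \omega (z, e^\tau)\,\mathrm dz\\
&= e^{\tau (1-1/\alpha)} \int_{\ensuremath{\mathbb R}^2} K_2 (e^{\tau/\alpha}\xi - z)\, \omega (z, e^\tau)\,\mathrm dz
= e^{\tau (1-1/\alpha)}\, v (e^{\tau/\alpha}\xi, e^\tau)\, .
\end{align*}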
Likewise, we have an analogous transformation rule for the force $f$, which results in \index{aalF@$F$}
\begin{align}
F (\xi, \tau) &= e^{2\tau} f (e^{\tau/\alpha} \xi, e^\tau)\, ,\label{e:f->F}\\
f (x,t) &= t^{-2} F (t^{-1/\alpha} x, \ln t)\label{e:F->f}\, .
\end{align}
In order to improve the readability of our arguments, throughout the rest of the notes we will use the overall convention that, given some object related to the Euler equations in the ``original system of coordinates'', the corresponding object after applying the transformations above will be denoted with the same letter in capital case.
\begin{remark}
Note that the naming of $\bar V$ and $\bar\Omega$ is somewhat of an exception to this convention, since $(\bar\Omega, \bar V)$ is a solution of \eqref{e:Euler} in Eulerian variables. However, if one regards them as functions of the self-similar variable $\xi$, which is how they will be used in the nonlinear part, then they solve the Euler equations in self-similar variables with forcing (see \eqref{e:Euler-transformed}).
\end{remark}
Straightforward computations allow then to pass from \eqref{e:Euler} to an equation for the new unknown $\Omega$ in the new coordinates. More precisely, we have the following
\begin{lemma}\label{l:coordinates-change}
Let $p>2$ and $\infty \geq T_1 > T_0\geq 0$. Then $\omega\in L^\infty_{\text{loc}} (]T_0, T_1[; L^1\cap L^p)$ and $v (\cdot, t) = K_2* \omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-again}
\partial_t \omega + (v \cdot \nabla) \omega = f\, ,
\end{equation}
if and only if $\Omega$ and $V (\cdot, t) = K_2 * \Omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-transformed}
\partial_\tau \Omega - \left(1 + \frac{\xi}{\alpha}\cdot \nabla\right) \Omega + (V\cdot \nabla) \Omega = F\, .
\end{equation}
\end{lemma}
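The proof of Lemma \ref{l:coordinates-change} is a direct computation; for the reader's convenience we record the chain rule behind it at a formal level. Writing $\xi = t^{-1/\alpha} x$ and $\tau = \ln t$ and differentiating \eqref{e:Omega->omega},
\begin{align*}
\partial_t \omega (x,t) &= t^{-2}\left(\partial_\tau \Omega - \Omega - \frac{\xi}{\alpha}\cdot \nabla \Omega\right) (\xi, \tau)\, ,\\
(v\cdot \nabla)\omega\, (x,t) &= t^{-1+1/\alpha}\, V (\xi, \tau)\cdot t^{-1-1/\alpha}\, \nabla \Omega (\xi, \tau) = t^{-2}\, (V\cdot \nabla)\Omega\, (\xi, \tau)\, ,
\end{align*}
so that, multiplying \eqref{e:Euler-again} by $t^2$ and using \eqref{e:F->f}, we obtain \eqref{e:Euler-transformed}.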
We next observe that, due to the structural assumptions on $\tilde \omega$ and $\tilde v$, the corresponding fields $\tilde \Omega$ and $\tilde V$ can be expressed in the following way: \index{aalVtilde@$\tilde V$}\index{aagZtilde@$\tilde\Omega$}
\begin{align}
\tilde{V} (\xi, \tau) &= \beta \bar V (\xi) \chi (e^{\tau/\alpha} |\xi|)\, ,\label{e:tildeV}\\
\tilde{\Omega} (\xi, \tau) &= \beta \bar \Omega (\xi) \chi (e^{\tau/\alpha} |\xi|) + \beta \zeta (|\xi|)
\chi' (e^{\tau/\alpha} |\xi|) e^{\tau/\alpha} |\xi|\, \label{e:tildeOmega}.
\end{align}
Observe that, for every fixed compact set $K$ there is a sufficiently negative $-T (K)$ with the property that
\begin{itemize}
\item $\chi (e^{\tau/\alpha} |\cdot|)= 1$ and $\chi' (e^{\tau/\alpha} \cdot) = 0$ on $K$ whenever $\tau \leq - T (K)$.
\end{itemize}
Since in order to prove Theorem \ref{thm:main} we are in fact interested in very small times $t$, which in turn correspond to very negative $\tau$, it is natural to consider $\tilde\Omega$ and $\tilde{V}$ as perturbations of $\beta \bar \Omega$ and $\beta \bar V$. We will therefore introduce the notation
\begin{align}
\tilde \Omega &= \beta \bar \Omega + \Omega_r\, ,\\
\tilde V & = \beta \bar V + V_r := \beta \bar V + K_2* \Omega_r\, .
\end{align}
We are thus led to the following Ansatz for $\Omega_{\varepsilon,k} (\xi, \tau) = e^{\tau} \omega_{\varepsilon ,k} (e^{\tau/\alpha} \xi, e^\tau)$:
\begin{equation}\label{e:Ansatz-1}
\Omega_{\varepsilon, k} (\xi, \tau) = \beta \bar \Omega (\xi) + \Omega_r (\xi, \tau) + \varepsilon e^{\tau a_0} {\rm Re}\, (e^{i\tau b_0} \eta (\xi)) + \Omega_{\text{per}, k} (\xi, \tau)\, .
\end{equation}
The careful reader will notice that indeed the function $\Omega_{\text{per},k}$ depends upon the parameter $\varepsilon$ as well, but since such dependence will not really play a significant role in our discussion, in order to keep our notation simple, we will always omit it. \index{aagZr@$\Omega_r$}\index{aagZepsilonk@$\Omega_{\varepsilon, k}$}\index{aagZperk@$\Omega_{\text{per}, k}$}\index{aalVr@$V_r$}
We are next ready to complete our Ansatz by prescribing one fundamental property of the function $\eta$.
We first introduce the integro-differential operator \index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:Lss}
L_{\text{ss}} (\Omega) := \left(1+\frac{\xi}{\alpha} \cdot \nabla\right) \Omega - \beta (\bar V \cdot \nabla) \Omega - \beta ((K_2* \Omega)\cdot \nabla) \bar \Omega\, .
\end{equation}
We will then prescribe that $\eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $z_0 = a_0 + ib_0$, namely, \index{aalz0@$z_0$}
\begin{equation}\label{e:Ansatz-2}
L_{\text{ss}} (\eta) = z_0 \eta\, .
\end{equation}
Observe in particular that, since $L_{\text{ss}}$ is a real operator (i.e. $L_{\text{ss}} (\eta)$ is real-valued when $\eta$ is real-valued, cf. Section \ref{s:abstract-operators}), the complex conjugate $\bar \eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $\bar z_0$, so that, in particular, the function
\begin{equation}\label{e:Omega_lin}
\Omega_{\text{lin}} (\xi, \tau) := \varepsilon e^{a_0 \tau} {\rm Re}\, (e^{i b_0 \tau} \eta (\xi))
= \frac{\varepsilon}{2} (e^{z_0 \tau} \eta (\xi) + e^{\bar z_0 \tau} \bar \eta (\xi))
\end{equation}
satisfies the linear evolution equation
\begin{equation}\label{e:evolution_of_Omega_lin}
\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})=0\, .
\end{equation}
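Indeed, \eqref{e:evolution_of_Omega_lin} is an immediate consequence of \eqref{e:Ansatz-2} and of $L_{\text{ss}} (\bar\eta) = \bar z_0 \bar\eta$: using the second expression in \eqref{e:Omega_lin} and the linearity of $L_{\text{ss}}$,
\[
\partial_\tau \Omega_{\text{lin}} = \frac{\varepsilon}{2}\left(z_0\, e^{z_0\tau} \eta + \bar z_0\, e^{\bar z_0\tau} \bar\eta\right)
= \frac{\varepsilon}{2}\left(e^{z_0\tau} L_{\text{ss}} (\eta) + e^{\bar z_0\tau} L_{\text{ss}} (\bar\eta)\right) = L_{\text{ss}} (\Omega_{\text{lin}})\, .
\]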
The relevance of our choice will become clear from the discussion of Section \ref{s:nonlinear}. The point is that \eqref{e:evolution_of_Omega_lin} is close to the linearization of Euler (in the new system of coordinates) around $\tilde{\Omega}$. The ``true linearization'' would be given by \eqref{e:evolution_of_Omega_lin} if we were to substitute $\bar \Omega$ and $\bar V$ in \eqref{e:Lss} with $\tilde{\Omega}$ and $\tilde{V}$. Since however the pair $(\tilde \Omega, \tilde{V})$ is well approximated by $(\bar \Omega, \bar V)$ for very negative times, we will show that \eqref{e:evolution_of_Omega_lin} indeed drives the evolution of $\Omega_{\varepsilon,k}-\tilde{\Omega}$ up to an error term (i.e. $\Omega_{\text{per},k}$) which is smaller than $\Omega_{\text{lin}}$.
\section{Linear theory}
We will look for the eigenfunction $\eta$ in a particular subspace of $L^2$. More precisely for every
$m\in \mathbb N\setminus \{0\}$ we denote by $L^2_m$ the set of those elements $\vartheta \in L^2 (\mathbb R^2, \mathbb C)$ which are $m$-fold symmetric, i.e., denoting by $R_\theta: \mathbb R^2\to \mathbb R^2$ the counterclockwise rotation of angle $\theta$ around the origin, \index{rotational-symmetry@Rotationally symmetric function space}\index{aalL2m@$L^2_m$}
they satisfy the condition
\begin{align*}
\vartheta &= \vartheta \circ R_{2\pi/m}\, .
\end{align*}
In particular, $L^2_m$ is a closed subspace of $L^2 (\mathbb R^2, \mathbb C)$. Note however that the term ``$m$-fold symmetric'' is somewhat misleading when $m=1$: in that case the transformation $R_{2\pi/m} = R_{2\pi}$ is the identity and in particular $L^2_1 = L^2 (\mathbb R^2, \mathbb C)$. Indeed we will look for $\eta$ in $L^2_m$ for a sufficiently large $m\geq 2$.
An important technical detail is that, while the operator $L^2 \cap \mathscr{S} \ni \omega \mapsto K_2* \omega \in \mathscr{S}'$ {\em cannot} be extended continuously to the whole $L^2$ (cf. Remark \ref{r:Camillo_dumb}), for $m\geq 2$ it {\em can} be extended to a continuous operator from $L^2_m$ into $\mathscr{S}'$: this is the content of the following lemma.
\begin{lemma}\label{l:extension}\label{L:EXTENSION}
For every $m\geq 2$ there is a unique continuous operator $T: L^2_m \to \mathscr{S}'$ with the following properties:
\begin{itemize}
\item[(a)] If $\vartheta\in \mathscr{S}$, then $T (\vartheta) = K_2*\vartheta$ (in the sense of distributions);
\item[(b)] There is $C>0$ such that for every $\vartheta \in L^2_m$, there is $v=v(\vartheta)\in W^{1,2}_{\text{loc}}$ with
\begin{itemize}
\item[(b1)] $R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C\|\vartheta\|_{L^2 (\mathbb R^2)}$ for all $R>0$;
\item[(b2)] ${\rm div}\, v =0$ and $\langle T(\vartheta), \varphi\rangle = \int v\cdot \varphi$ for every test function $\varphi \in \mathscr{S}$.
\end{itemize}
\end{itemize}
\end{lemma}
From now on the operator $T$ will still be denoted by $K_2*$ and the function $v$ will be denoted by $K_2*\omega$. Observe also that, if $\hat\Omega$ is an $L^2_{\text{loc}}$ function such that $\|\hat\Omega\|_{L^2 (B_R)}$ grows polynomially in $R$, the integration of a Schwartz function times $v \hat\Omega$ is a well defined tempered distribution. In the rest of the notes, any time that we write a product $\hat\Omega K_2 * \vartheta $ for an element $\vartheta\in L^2_m$ and an $L^2_{\text{loc}}$ function $\hat\Omega$ we will always implicitly assume that
$\|\hat\Omega\|_{L^2 (B_R)}$ grows at most polynomially in $R$ and that the product is understoood as a well-defined element of $\mathscr{S}'$.
The relevance of this discussion is that, for $m\geq 2$, we can now consider the operator $L_{\text{ss}}$ as a closed, densely defined unbounded operator on $L^2_m$. We let
\index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:def-Lss-formal}
L_{\text{ss}} (\Omega) = \left(1- {\textstyle{\frac{2}{\alpha}}}\right) \Omega - {\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right) - \beta ( K_2*\Omega \cdot \nabla) \bar\Omega\,
\end{equation}
and its domain is
\begin{equation}\label{e:D(Lss)-formal}
D_m (L_{\text{ss}}) =\{\Omega\in L^2_m : L_{\text{ss}} (\Omega)\in L^2_m\}\, .
\end{equation}
When $\Omega\in \mathscr{S}$ it can be readily checked that $L_{\text{ss}}$ as defined in \eqref{e:def-Lss-formal} coincides with \eqref{e:Lss}.
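Indeed, the only point is to expand the divergence in \eqref{e:def-Lss-formal}: since ${\rm div}\, \bar V = 0$ and ${\rm div}\, \frac{\xi}{\alpha} = \frac{2}{\alpha}$, for $\Omega\in \mathscr{S}$ we have
\[
{\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right)
= -\frac{2}{\alpha}\, \Omega - \frac{\xi}{\alpha}\cdot \nabla \Omega + \beta\, (\bar V\cdot \nabla)\Omega\, ,
\]
and inserting this identity into \eqref{e:def-Lss-formal} gives back \eqref{e:Lss}.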
The definition makes obvious that $L_{\text{ss}}$ is a closed and densely defined unbounded operator over $L^2_m$. We will later show that $\Omega \mapsto (K_2*\Omega \cdot \nabla) \bar \Omega$ is in fact a compact operator from $L^2_m$ into $L^2_m$ and therefore we have
\begin{equation}\label{e:D(Lss)-formal-2}
D_m (L_{\text{ss}}) := \left\{\Omega\in L^2_m : {\rm div} \left(\beta \bar V\Omega- {\textstyle{\frac{\xi}{\alpha}}}\Omega\right)\, \in L^2_m\right\}\, .
\end{equation}
From now on, having fixed $m\geq 2$ and regarding $L_{\text{ss}}$ as an unbounded, closed, and densely defined operator in the sense given above, the spectrum ${\rm spec}_m\, (L_{\text{ss}})$ on $L^2_m$ is defined as the (closed) set which is the complement of the {\em resolvent} of $L_{\text{ss}}$, the latter being the (open) set of $z_0 \in \mathbb C$ such that $L_{\text{ss}}-z_0$ has a bounded inverse $(L_{\text{ss}}-z_0)^{-1} : L^2_m \to L^2_m$.\footnote{The textbook definition would require the inverse to take values in $D_m (L_{\text{ss}})$. Note however that this is a consequence of our very definition of $D_m (L_{\text{ss}})$.}
The choice of $\eta$ will then be defined by the following theorem which summarizes a quite delicate spectral analysis.
\begin{theorem}\label{thm:spectral}\label{THM:SPECTRAL}
For an appropriate choice of $\bar \Omega$ there is an integer $m\geq 2$ with the following property. For every positive $\bar a>0$, if $\beta$ is chosen appropriately large, then there is $\eta\in L^2_m\setminus \{0\}$ and $z_0=a_0+ib_0$ such that:
\begin{itemize}
\item[(i)] $a_0 \geq \bar a$ and $L_{\text{ss}} (\eta) = z_0 \eta$;
\item[(ii)] For any $z \in {\rm spec}_m\, (L_{\text{ss}})$ we have ${\rm Re}\, z\leq a_0$;
\item[(iii)] If $b_0=0$, then $\eta$ is real valued;
\item[(iv)] There is $k\geq 1$ integer and $e:\mathbb R^+\to \mathbb C$ such that $\eta (x) = e (r) e^{ikm \theta}$ if $b_0\neq 0$ and $\eta (x) = {\rm Re}\, (e(r) e^{ikm\theta})$ if $b_0= 0$.
\end{itemize}
\end{theorem}
In fact we will prove some more properties of $\eta$, namely, suitable regularity and decay at infinity, but these are effects of the eigenvalue equation and will be addressed later.
The proof of Theorem \ref{thm:spectral} will be split in two chapters. In the first one we regard $L_{\text{ss}}$ as perturbation of a simpler operator $L_{\text{st}}$, which is obtained from $L_{\text{ss}}$ by ignoring the $(1+\frac{\xi}{\alpha}\cdot \nabla)$ part: the intuition behind neglecting this term is that the remaining part of the operator $L_{\text{ss}}$ is multiplied by the constant $\beta$, which will be chosen appropriately large. The second chapter will be dedicated to proving a theorem analogous to Theorem \ref{thm:spectral} for the operator $L_{\text{st}}$. The analysis will take heavily advantage of an appropriate splitting of $L^2_m$ as a direct sum of invariant subspaces of $L_{\text{st}}$. The latter are obtained by expanding in Fourier series the trace of any element of $L^2_m$ on the unit circle. In each of these invariant subspaces the spectrum of $L_{\text{st}}$ can be related to the spectrum of a suitable second order differential operator in a single real variable.
\section{Nonlinear theory}\label{s:nonlinear}
The linear theory will then be used to show Theorem \ref{thm:main3}. In fact, given the decomposition introduced in \eqref{e:Ansatz-1}, we can now formulate a yet more precise statement from which we conclude Theorem \ref{thm:main3} as a corollary.
\begin{theorem}\label{thm:main4}
Let $p$, $\alpha$, and ${\bar\alpha}$ be as in Theorem \ref{thm:main2} and assume $\bar a$ is sufficiently large. Let $\bar \Omega$, $\eta$, $a_0$, and $b_0$ be as in Theorem \ref{thm:spectral} and for every $\varepsilon \in \mathbb R$, $k\in \mathbb N$ consider the solutions $\omega_{\varepsilon,k}$ of \eqref{e:Euler-later-times} and $\Omega_{\varepsilon,k} (\xi, \tau) = e^\tau \omega_{\varepsilon,k} (e^{\tau/\alpha} \xi, e^\tau)$. If we define $\Omega_{\text{per},k}$ through \eqref{e:Ansatz-1}, then there are $\tau_0 = \tau_0 (\varepsilon)$ and $\delta_0>0$, independent of $k$, such that
\begin{equation}\label{e:H2-estimate}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{L^2} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\, .
\end{equation}
\end{theorem}
\eqref{e:asymptotic-in-t-2} is a simple consequence of \eqref{e:H2-estimate} after translating it back to the original coordinates. In order to give a feeling for why \eqref{e:H2-estimate} holds we will detail the equation that $\Omega_{\text{per}, k}$ satisfies.
First of all subtracting the equation satisfied by $\tilde{\Omega}$ from the one satisfied by $\Omega_{\varepsilon, k}$ we achieve
\begin{align*}
&\partial_\tau \Omega_{\text{lin}} + \partial_\tau \Omega_{\text{per},k} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{lin}}
-\left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{per}, k} \\
+ & (\tilde{V} \cdot \nabla) \Omega_{\text{lin}} + V_{\text{lin}}\cdot \nabla \tilde{\Omega} + (\tilde{V}\cdot \nabla ) \Omega_{\text{per},k} + (V_{\text{per},k}\cdot \nabla) \tilde{\Omega} + (V_{\text{lin}}\cdot \nabla) \Omega_{\text{per}, k}\\
+ & (V_{\text{per}, k} \cdot \nabla) \Omega_{\text{lin}}
+ (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per}, k} = 0\, ,
\end{align*}
where we have used the convention $\tilde{V}=K_2*\tilde\Omega$, $V_{\text{per},k} = K_2* \Omega_{\text{per},k}$, and $V_{\text{lin}}= K_2* \Omega_{\text{lin}}$. Next recall that $\tilde{\Omega}=\beta\bar\Omega + \Omega_r$ and recall also the definition of $L_{\text{ss}}$ in \eqref{e:Lss} and the fact that $\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})= 0$. In particular formally we reach
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + ((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r) + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}\nonumber\\
= & -(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r\, ,\label{e:master}
\end{align}
which must be supplemented with the initial condition
\[
\Omega_{\text{per},k} (\cdot, -k)= 0\, .
\]
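The initial condition is a direct consequence of the choice of the data in \eqref{e:Euler-later-times}: since $t_k = e^{-k}$, applying the transformation \eqref{e:omega->Omega} at $\tau = -k$ gives
\[
\Omega_{\varepsilon, k} (\xi, -k) = t_k\, \omega_{\varepsilon, k} (t_k^{1/\alpha}\xi, t_k) = \tilde\Omega (\xi, -k) + \varepsilon\, e^{-a_0 k}\, {\rm Re}\, \big(e^{-i b_0 k} \eta (\xi)\big)\, ,
\]
which is precisely the Ansatz \eqref{e:Ansatz-1} evaluated at $\tau = -k$ with $\Omega_{\text{per}, k} (\cdot, -k) = 0$.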
In fact, in order to justify \eqref{e:master} we need to show that $\Omega_{\text{per},k} (\cdot, \tau)\in L^2_m$ for every $\tau$, which is the content of the following elementary lemma.
\begin{lemma}\label{l:evolution-in-L2m}
The function $\Omega_{\text{per},k} (\cdot, \tau)$ belongs to $L^2_m$ for every $\tau$.
\end{lemma}
\begin{proof}
It suffices to prove that $\omega_{\varepsilon, k} (\cdot, t)$ is $m$-fold symmetric, since the transformation rule then implies that $\Omega_{\varepsilon,k} (\cdot, \tau)$ is $m$-fold symmetric and $\Omega_{\text{per}, k} (\cdot, \tau)$ is obtained from the latter by subtracting $e^{a_0\tau} {\rm Re} (e^{ib_0\tau} \eta) + \tilde{\Omega} (\cdot, \tau)$, which is also $m$-fold symmetric. In order to show that $\omega_{\varepsilon, k}$ is $m$-fold symmetric just consider that $\omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), \tau)$ solves \eqref{e:Euler-later-times} because both the forcing term and the initial data are invariant under a rotation of $\frac{2\pi}{m}$ (and the Euler equations are rotationally invariant). Then the uniqueness part of Yudovich's statement implies $\omega_{\varepsilon, k} (\cdot, t) = \omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$.
\end{proof}
We proceed with our discussion and observe that $V_{\text{lin}} + V_r$ and $\Omega_{\text{lin}}+\Omega_r$ are both ``small'' in appropriate sense for sufficiently negative times, while, because of the initial condition being $0$ at $-k$, for some time after $-k$ we expect that the quadratic nonlinearity $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ will not contribute much to the growth of $\Omega_{\text{per}, k} (\cdot, \tau)$. Schematically, we can break \eqref{e:master} as
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + \underbrace{((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r)}_{\mbox{small linear terms}} + \underbrace{(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}}_{\mbox{quadratic term}}\nonumber\\
= & \underbrace{-(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r}_{\mbox{forcing term } \mathscr{F}}\, ,\label{e:master-schematics}
\end{align}
In particular we can hope that the growth of $\Omega_{\text{per},k} (\cdot, \tau)$ is comparable to that of the solution of the following ``forced'' linear problem
\begin{equation}\label{e:master-linear}
(\partial_{\tau} - L_{\text{ss}}) \Omega = \mathscr{F}\, .
\end{equation}
Observe that we know that $\Omega_{\text{lin}} (\cdot, \tau)$ and $V_{\text{lin}} (\cdot, \tau)$ decay like $e^{a_0 \tau}$. We can then expect to gain a slightly faster exponential decay for $\mathscr{F} (\cdot, \tau)$ because of the smallness of $V_r$ and $\Omega_r$. On the other hand from Theorem \ref{thm:spectral} we expect that the semigroup generated by $L_{\text{ss}}$ enjoys growth estimates of type $e^{a_0\tau}$ on $L^2_m$ (this will be rigorously justified using classical results in the theory of strongly continuous semigroups). We then wish to show, using the Duhamel's formula for the semigroup $e^{\tau L_{\text{ss}}}$, that the growth of $\Omega_{\text{per},k}$ is bounded by $e^{a_0\tau} (e^{\delta_0 \tau} - e^{-\delta_0 k})$ for some positive $\delta_0$ for some time $\tau$ after the initial $-k$: the crucial point will be to show that the latter bound is valid for $\tau$ up until a ``universal'' time $\tau_0$, independent of $k$.
Even though intuitively sound, this approach will require several delicate arguments, explained in the final chapter of the notes. In particular:
\begin{itemize}
\item we will need to show that the quadratic term $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ is small up to some time $\tau_0$ independent of $k$, in spite of the fact that there is a ``loss of derivative'' in it (and thus we cannot directly close an implicit Gronwall argument using the Duhamel formula and the semigroup estimate for $L_{\text{ss}}$);
\item The terms $\Omega_r$ and $V_r$ are not really negligible in absolute terms, but rather, for very negative times, they are supported in a region in space which goes towards spatial $\infty$.
\end{itemize}
The first issue will be solved by closing the estimates in a space of more regular functions, which contains $L^2$ and embeds in $L^\infty$ (in fact $L^2\cap W^{1,4}$): the bound on the growth of the $L^2$ norm will be achieved through the semigroup estimate for $L_{\text{ss}}$ via Duhamel, while the bound of the first derivative will be achieved through an energy estimate, which will profit from the $L^2$ one. The second point by restricting further the functional space in which we will close to estimates for $\Omega_{per, k}$. We will require an appropriate decay of the derivative of the solutions, more precisely we will require that the latter belong to $L^2 (|x|^2\,\mathrm dx)$. Of course in order to use this strategy we will need to show that the initial perturbation $\eta$ belongs to the correct space of functions.
\chapter{General strategy: background field and self-similar coordinates}
\label{chapter:general}
\section{The initial velocity and the force}
First of all, the initial velocity $v_0$ of Theorem \ref{thm:main} will have the following structure\index{aaga@$\alpha$}\index{aagb@$\beta$}
\begin{equation}\label{e:v_0}
v_0 (x) =
\begin{cases}
\beta (2-\alpha)^{-1} |x|^{-\alpha} \chi (\lvert x\rvert) x^\perp\;\; &\mbox{if }{\bar\alpha} =\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
where $0<\alpha\leq {\bar\alpha}<1$, $\chi$ is a smooth cut-off function, compactly supported in $\mathbb R$ and identically $1$ on the interval $[-1,1]$, and $\beta$ is a sufficiently large constant (whose choice will depend on $\alpha$). For simplicity we will assume that $\chi$ takes values in $[0,1]$ and is monotone non-increasing on $[0, \infty[$, even though neither of these conditions plays a significant role.
A direct computation gives $\div v_0 = 0$.
The corresponding $\omega_0$ is then given by\index{aagz_0@$\omega_0$}\index{aalv_0@$v_0$}\index{aagx@$\chi$}
\begin{equation}\label{e:omega_0}
\omega_0 (x) =
\curl v_0 (x) =
\begin{cases}
\beta \left[ |x|^{-\alpha} \chi (|x|) + (2-\alpha)^{-1} \chi' (|x|) |x|^{1-\alpha}\right] \;\;&\mbox{if }{\bar\alpha}=\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
and the relation $v_0 = K_2*\omega_0$ comes from standard Calder{\'o}n-Zygmund theory (since ${\rm div}\, v_0 =0$, $\curl v_0=\omega_0$ and $v_0$ is compactly supported).
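For completeness, here is a sketch of the computation behind ${\rm div}\, v_0=0$ and \eqref{e:omega_0} (an elementary check, with $r=|x|$ and $h$ a generic smooth radial profile):
\begin{align*}
{\rm div}\, \big(h(r)\, x^\perp\big) &= -\partial_{x_1}\big(h(r)\, x_2\big) + \partial_{x_2}\big(h(r)\, x_1\big) = -h'(r)\,\frac{x_1 x_2}{r} + h'(r)\,\frac{x_1 x_2}{r} = 0\, ,\\
{\rm curl}\, \big(h(r)\, x^\perp\big) &= \partial_{x_1}\big(h(r)\, x_1\big) + \partial_{x_2}\big(h(r)\, x_2\big) = 2 h(r) + r\, h'(r)\, .
\end{align*}
Applying the second identity with $h(r) = \beta (2-\alpha)^{-1} r^{-\alpha}\chi(r)$ gives, in the case ${\bar\alpha}=\alpha$,
\[
\curl v_0 (x) = \beta (2-\alpha)^{-1}\left[(2-\alpha)\, r^{-\alpha}\chi(r) + r^{1-\alpha}\chi'(r)\right] = \beta\left[|x|^{-\alpha}\chi(|x|) + (2-\alpha)^{-1}|x|^{1-\alpha}\chi'(|x|)\right]\, ,
\]
which is exactly \eqref{e:omega_0}.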
${\bar\alpha}\in ]0,1[$ is chosen depending on $p$ in Theorem \ref{thm:main}, so that ${\bar\alpha} p < 2$: in the rest of the notes we assume that $p$, ${\bar\alpha}$, and $\alpha$ are fixed. In particular it follows from the definition that $\omega_0\in L^1\cap L^p$ and that $v_0 \in L^1 \cap L^\infty$.
Next, the function $|x|^{-{\bar\alpha}}$ will be appropriately smoothed to a (radial) function
\begin{equation}\label{e:def-bar-Omega}
\bar \Omega (x) = g (|x|)
\end{equation}
\index{aalg@$g$}\index{aagZbar@$\bar \Omega$}
such that:
\begin{align}
&g \in C^\infty ([0,R]) \qquad \qquad &\forall R>0\, ,\\
&g (r) = r^{-{\bar\alpha}} \qquad\qquad &\mbox{for $r\geq 2$,}\label{e:decay-at-infinity}\\
&g(r) = g(0) + \frac{g''(0)}{2} r^2\qquad \qquad &\mbox{for $r$ in a neighborhood of $0$.}\label{e:g-constant-around-0}
\end{align}
This smoothing will be carefully chosen so as to achieve some particular properties, whose proof will take a good portion of the notes (we remark however that, while a sufficient degree of smoothness and the decay \eqref{e:decay-at-infinity} play an important role, the condition \eqref{e:g-constant-around-0} is merely technical and its role is to simplify some arguments). We next define the function $\bar V (x)$\index{aagf@$\zeta$} \index{aalVbar@$\bar V$} as
\begin{equation}\label{e:def-barV}
\bar V (x) = \zeta (|x|) x^\perp\, ,
\end{equation}
where $\zeta$ is
\begin{equation}\label{e:def-zeta}
\zeta(r) = \frac{1}{r^2}\int_0^r \rho g(\rho)\,\mathrm d\rho\, .
\end{equation}
\begin{remark}\label{r:well-defined-2} Observe that under our assumptions $\bar \Omega\in L^q(\ensuremath{\mathbb R}^2)$ for every $q>\frac{2}{{\bar\alpha}}$, but it does not belong to any $L^q(\ensuremath{\mathbb R}^2)$ with $q\leq \frac{2}{{\bar\alpha}}$. Since $p>2$ and ${\bar\alpha} p <2$ force ${\bar\alpha} < 1$, we have $\frac{2}{{\bar\alpha}}>2$ and hence we cannot appeal to Young's Theorem as in Remark \ref{r:well-defined} to define $K_2* \bar\Omega$.
Nonetheless, $\bar V$ can be understood as a natural definition of $K_2* \bar\Omega$ for radial distributions of vorticity which are in $L^1_{\rm loc}$. Indeed observe first that ${\rm div}\, \bar V=0$ and ${\rm curl}\, \bar V = \bar\Omega$, and notice also that $\bar V$ would decay at infinity like $|x|^{-1}$ if $\bar\Omega$ were compactly supported. This shows that $\bar V$ would indeed coincide with $K_2*\bar \Omega$ for compactly supported radial vorticities. Since we can approximate $\bar \Omega$ with $\bar\Omega_N := \bar\Omega \mathbf{1}_{B_N}$, passing into the limit in the corresponding formulas for $K_2* \bar\Omega_N$ we would achieve $\bar V$.
Note also that in the remaining computations what really matters are the identities ${\rm div}\, \bar V = 0$ and ${\rm curl}\, \bar V = \bar \Omega$ and so regarding $\bar V$ as $K_2* \bar\Omega$ only simplifies our terminology and notation.
\end{remark}
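A quick way to check the identities ${\rm div}\,\bar V = 0$ and ${\rm curl}\, \bar V = \bar\Omega$ invoked in the remark (we sketch it here since it is used repeatedly) is to combine the identity ${\rm curl}\,\big(h(r)x^\perp\big) = 2h(r) + r h'(r)$, recalled above, with the definition \eqref{e:def-zeta}:
\[
2\zeta(r) + r\,\zeta'(r) = 2\zeta(r) + r\left(-\frac{2}{r^3}\int_0^r \rho\, g(\rho)\,\mathrm d\rho + \frac{1}{r^2}\, r\, g(r)\right) = 2\zeta(r) - 2\zeta(r) + g(r) = g(r)\, ,
\]
so that ${\rm curl}\,\bar V (x) = g(|x|) = \bar\Omega (x)$, while ${\rm div}\,\bar V = 0$ because $\bar V$ is again of the form $h(|x|)\, x^\perp$.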
The force $f$ will then be defined in such a way that $\tilde \omega$, the curl of the velocity \index{aalf@$f$}\index{aalvtilde@$\tilde{v}$}\index{aagztilde@$\tilde{\omega}$}
\begin{equation}\label{e:tilde-v}
\tilde v (x, t) = \beta t^{1/\alpha-1} \bar V \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)\,
\end{equation}
is a solution of \eqref{e:Euler}. In particular, since $(\tilde v\cdot\nabla)\tilde \omega=0$, the force $f$ is given by the explicit formula
\begin{equation}\label{e:def-f}
f (x,t) = \partial_t \tilde{\omega} (x,t)\, .
\end{equation}
With this choice a simple computation, left to the reader, shows that $\tilde{\omega}$ solves \eqref{e:Euler} with initial data $\omega_0$. Note in passing that, although as pointed out in Remark \ref{r:well-defined-2} there is not enough summability to make sense of the identity $K_2* \bar \Omega = \bar V$ by using standard Lebesgue integration, the relation $K_2* \tilde\omega = \tilde{v}$ is made obvious by ${\rm div}\, \tilde{v} =0$, ${\rm curl}\, \tilde{v} = \tilde{\omega}$, and the boundedness of the supports of both $\tilde{\omega}$ and $\tilde{v}$.
The pair $(\tilde{\omega}, \tilde{v})$ is one of the solutions claimed to exist in Theorem \ref{thm:main}. The remaining ones will be described as a one-parameter family $(\omega_\varepsilon, v_\varepsilon)$ for a nonzero choice of the parameter $\varepsilon$, while $(\tilde{\omega}, \tilde{v})$ will correspond to the choice $\varepsilon =0$. We will however stick to the notation $(\tilde\omega, \tilde v)$ to avoid confusions with the initial data.
It remains to check that $f$ belongs to the functional spaces claimed in Theorem \ref{thm:main}.
\begin{lemma}\label{lem:Curl of tilde v}
$\tilde\omega$ is a smooth function on $\{t>0\}$ which satisfies, for all $t>0$ and $x\in\ensuremath{\mathbb R}^2$,
\begin{equation}\label{e:curl-tilde-v}
\tilde \omega (x, t) = \beta t^{-1} \bar \Omega \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|) + \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, ,
\end{equation}
while the external force $f$ and $\partial_t \tilde{v} = K_2*f$ belong, respectively, to the spaces $L^1([0,T]; L^1 \cap L^p)$ and $L^1 ([0,T], L^2)$ for every positive $T$. Likewise $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $\tilde{v} \in L^\infty ([0,T], L^2)$.
\end{lemma}
We end the section with a proof of the lemma, while we resume our explanation of the overall approach to Theorem \ref{thm:main} in the next section.
\begin{proof} The formula \eqref{e:curl-tilde-v} is a simple computation. From it we also conclude that $\tilde\omega = \curl\tilde v$ is a smooth function on $\{t>0\}$ and hence differentiable in all variables.
Observe next that $|\bar V (x)|\leq C |x|^{1-{\bar\alpha}}$ and we can thus estimate $|\tilde{v} (x,t)|\leq C {t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}$. Since its spatial support is contained in ${\rm spt}\, (\chi)$, we conclude that $\tilde v$ is bounded and belongs to $L^\infty ([0,{T}], L^2)$ {for any $T>0$.}
Using that $\bar \Omega (x) = |x|^{-{\bar\alpha}}=g(\abs x)$ for $|x|\geq 2$, we write
\begin{align*}
\tilde{\omega} (x,t) = & \beta t^{-1} g \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}
+ \beta { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \chi (|x|) \mathbf{1}_{\{|x|> 2t^{1/\alpha}\}} \\
&+ \beta t^{-1}\zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, .
\end{align*}
In particular, recalling that $|\bar \Omega (x)|\leq C|x|^{-{\bar\alpha}}$ and $\zeta (|x|) |x| \leq C |x|^{1-{\bar\alpha}}$ we easily see that
\begin{align}
\|\tilde\omega (\cdot, t)\|_{L^1} &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}\, dx\, ,\\
\|\tilde{\omega} (\cdot, t)\|_{L^p}^p &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}} |x|^{-p {\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}}|x|^{p-p{\bar\alpha}}\, dx\, .
\end{align}
This implies immediately that $\tilde\omega \in L^\infty ([0,{ T]}, L^1\cap L^p)$ {for any $T>0$ }, given that ${\bar\alpha} p <2$ (and hence $|x|^{-{\bar\alpha} p}$ is locally integrable).
We now differentiate in time in the open regions $\{|x|< 2t^{1/\alpha}\}$ and $\{|x| > 2t^{1/\alpha}\}$ separately to achieve\footnote{Since we will only estimate integral norms of $f$, its values on $\{|x|= 2t^{1/\alpha}\}$ are of no importance. However, given that $f$ is in fact smooth over the whole domain $\{t>0\}$, we can infer the validity of the formula \eqref{e:f1-f2} for every point $x\in \{|x|= 2t^{1/\alpha}\}$ by approximating it with a sequence of points in $\{|x|< 2t^{1/\alpha}\}$ and passing to the limit in the corresponding expressions.}
\begin{align}
f (x,t) = & - \beta \left(t^{-2} g \left(\frac{|x|}{t^{1/\alpha}}\right)
+ \frac{1}{\alpha}{t^{-2-1/\alpha}} |x| g' \left(\frac{|x|}{t^{1/\alpha}}\right)\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}\nonumber\\
& {+ \beta \left(\frac{{\bar\alpha}}{\alpha}-1\right)t^{\frac{{\bar\alpha}}{\alpha}-2} |x|^{-{\bar\alpha}}\chi(|x|)\mathbf{1}_{\{|x|>2t^{1/\alpha}\}}
}\nonumber\\
& - \beta \left(t^{-2} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} t^{-2-1/\alpha} \zeta' \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\right) |x|\chi' (|x|)\nonumber\\
=:& f_1 (x,t) + f_2 (x,t){ + f_3(x,t)}\, .\label{e:f1-f2}
\end{align}
We wish to prove that $f\in L^1 ([0,T], L^1\cap L^p)$. On the other hand, since for any $T_0>0$ both $f_1+f_2$ and $f_3$ are smooth and have compact support on $\mathbb R^2\times [T_0, T]$, it suffices to show that $f\in L^1 ([0,T_0], L^1\cap L^p)$ for a sufficiently small $T_0$. Recalling that $|g (|x|)| + |g' (|x|)||x| \leq C |x|^{-{\bar\alpha}}$, we can then bound
\begin{equation}\label{e:bound-f1}
|f_1 (x,t)|\leq C {t^{-2+\frac{{\bar\alpha}}{\alpha}}} |x|^{-{\bar\alpha}} \mathbf{1}_{|x|\leq 2 t^{1/\alpha}} \qquad
\mbox{for all $0<t<T_0$ and all $x$}.
\end{equation}
Thus
\begin{align}
\|f_1\|_{L^1 (\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{2/\alpha -2}\, dt < \infty\, ,\\
\|f_1\|_{L^1 ([0,T_0];L^p( \mathbb R^2))} &\leq C \int_0^{T_0} t^{2/(\alpha p) -2}\, dt < \infty\, ,
\end{align}
where the condition $2> {\bar\alpha} p$ entered precisely in the finiteness of the latter integral.
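The exponents in the last two bounds are obtained from \eqref{e:bound-f1} by a radial integration, which we sketch here (this is our reconstruction of the computation):
\begin{align*}
\|f_1 (\cdot, t)\|_{L^1} &\leq C t^{-2+\frac{{\bar\alpha}}{\alpha}} \int_0^{2 t^{1/\alpha}} \rho^{1-{\bar\alpha}}\,\mathrm d\rho \leq C t^{-2+\frac{{\bar\alpha}}{\alpha}}\, t^{\frac{2-{\bar\alpha}}{\alpha}} = C t^{\frac{2}{\alpha}-2}\, ,\\
\|f_1 (\cdot, t)\|_{L^p} &\leq C t^{-2+\frac{{\bar\alpha}}{\alpha}} \left(\int_0^{2 t^{1/\alpha}} \rho^{1-{\bar\alpha}p}\,\mathrm d\rho\right)^{1/p} \leq C t^{-2+\frac{{\bar\alpha}}{\alpha}}\, t^{\frac{2-{\bar\alpha}p}{\alpha p}} = C t^{\frac{2}{\alpha p}-2}\, ,
\end{align*}
where the spatial integrals are finite because ${\bar\alpha}<1$ and ${\bar\alpha}p<2$, and the resulting powers of $t$ are integrable on $[0,T_0]$ because $\alpha<1$ and $\alpha p\leq {\bar\alpha} p<2$.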
Coming to the second term, we observe that it vanishes when $\bar\alpha = \alpha$. When $\alpha < \bar\alpha$, since $\chi$ is compactly supported in $\mathbb R$, we get
\begin{align*}
\|f_2\|_{L^1(\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac 1{\alpha}(2-{\bar\alpha})}) dt<+\infty\\
\|f_2\|_{L^1([0,T_0];L^p(\mathbb R^2))}
&\leq \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac p{\alpha}(2-{\bar\alpha})})^{1/p} dt <+\infty
\, .
\end{align*}
The last term can be computed explicitly as
\begin{align*}
\zeta (r) &= \frac{1}{r^2} \left(C + \int_2^r \rho^{1-{\bar\alpha}}\,\mathrm d\rho\right) = a r^{-2} + b r^{-{\bar\alpha}} & \text{for all } r \geq 2\, ,
\shortintertext{where $a$ and $b$ are two fixed constants. Likewise}
\zeta'(r)&= -2 a r^{-3} -{\bar\alpha} b r^{-{\bar\alpha}-1}\qquad &\text{for all } r \geq 2\, .
\end{align*}
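For concreteness, the constants $a$ and $b$ can be read off from \eqref{e:def-zeta} and \eqref{e:decay-at-infinity} by splitting the integral at $\rho = 2$ (a short computation, added here for the reader's convenience):
\[
\zeta(r) = \frac{1}{r^2}\int_0^2 \rho\, g(\rho)\,\mathrm d\rho + \frac{1}{r^2}\int_2^r \rho^{1-{\bar\alpha}}\,\mathrm d\rho
= \underbrace{\left(\int_0^2 \rho\, g(\rho)\,\mathrm d\rho - \frac{2^{2-{\bar\alpha}}}{2-{\bar\alpha}}\right)}_{=\, a}\, r^{-2} + \underbrace{\frac{1}{2-{\bar\alpha}}}_{=\, b}\; r^{-{\bar\alpha}} \qquad \text{for } r\geq 2\, .
\]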
Recall that $\chi' (|x|)=0$ for $|x|\leq 1$.
Therefore, for $t\leq T_0$ sufficiently small, the functions $\zeta$ and $\zeta'$ are evaluated at arguments $|x| t^{-1/\alpha} \geq 2$ in the formula for $f_3$ (cf. \eqref{e:f1-f2}). Thus,
\[
f_3 (x,t)
=-\beta t^{-2}\left(
\left(1-\frac{2}{\alpha}\right)a\, t^{\frac{2}{\alpha}}|x|^{-1} + b\left(1-\frac{{\bar\alpha}}{\alpha}\right)t^{\frac{{\bar\alpha}}{\alpha}} |x|^{1-{\bar\alpha}}
\right)\chi'(|x|)\, .
\]
In particular $f_3$ has compact support. Since $\alpha <1$, the function
\[
-\beta t^{-2}
\left(1-\frac{2}{\alpha}\right)a\, t^{\frac{2}{\alpha}}|x|^{-1} \chi'(|x|)
\]
is bounded, and thus belongs to
$L^1 ([0,T_0], L^1\cap L^p)$. As for the second summand, it vanishes if $\alpha = \bar \alpha$, while its $L^p$ norm at time $t$ can be bounded by $C t^{-2+\frac{\bar \alpha}{\alpha}}$ if $\bar\alpha > \alpha$. The latter function however belongs to $L^1 ([0,T_0])$.
Observe next that, since for every positive $t$ the function $f (\cdot, t)$ is smooth and compactly supported, $K_2* f (\cdot, t)$ is the unique divergence-free vector field which belongs to $L^1$ and such that its curl gives $f (\cdot, t)$. Hence, since $f (\cdot, t) = \curl \partial_t \tilde{v} (\cdot, t)$ and $\partial_t \tilde{v} (\cdot, t)$ is smooth and compactly supported, we necessarily have $K_2 * f (\cdot, t) = \partial_t \tilde{v} (\cdot, t)$. It remains to show that $\partial_t \tilde{v} \in L^1 ([0,T]; L^2)$ for every positive $T$. To that end we compute
\[
\tilde{v} (x,t) = \beta t^{1/\alpha-1} \bar{V} \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)
= \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) x^\perp \chi (|x|)
\]
\[
\partial_t \tilde{v} (x,t ) = - \beta t^{-2} \chi (|x|) x^\perp \left(\zeta \left( \frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} \frac{|x|}{t^{1/\alpha}} \zeta' \left(\frac{|x|}{t^{1/\alpha}} \right)\right)\, .
\]
In order to compute the $L^2$ norm of $\partial_t \tilde{v} (\cdot, t)$ we break the space into two regions as in the computations above. In the region $\{|x|\leq 2 t^{1/\alpha}\}$ we use that $|\zeta| + |g|+ |\zeta'|$ are bounded to compute
\[
\int_{|x|\leq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4} \int_{|x|\leq 2t^{1/\alpha}} |x|^2\,dx \leq C t^{4/\alpha -4}\, ,
\]
which is a bounded function on $[0,1]$. On $\{|x|\geq 2t^{1/\alpha}\}$ we observe that the function can be computed explicitly as
\[
-\beta t^{-2} \chi (|x|) x^\perp \left(a\left(1-\frac{2}{\alpha}\right) t^{2/\alpha} |x|^{-2} + b \left(1-\frac{\bar \alpha}{\alpha}\right) t^{\frac{\bar\alpha}{\alpha}}|x|^{-\bar \alpha}\right)\, .
\]
If we let $\bar R>0$ be such that the support of $\chi$ is contained in $B_{\bar R}$, we use polar coordinates to estimate
\[
\int_{|x|\geq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4+4/\alpha} \int_{2t^{1/\alpha}}^{\bar R} \frac{d\rho}{\rho} + C |\alpha - \bar \alpha| t^{2\frac{\bar \alpha}{\alpha}-4}\, .
\]
We can therefore estimate the $L^2$ norm of $\partial_t \tilde{v}$ at time $t$ by
\[
\|\partial_t \tilde{v} (\cdot, t)\|_{L^2} \leq C + C |\alpha - \bar \alpha| t^{\frac{\bar \alpha}{\alpha} -2}\, .
\]
When $\alpha = \bar \alpha$ we conclude that the $L^2$ norm of $\partial_t \tilde{v}$ is bounded, while for $\bar\alpha > \alpha$ the function $t\mapsto t^{\frac{\bar \alpha}{\alpha} -2}$ belongs to $L^1 ([0,T])$.
\end{proof}
\section{The infinitely many solutions}
We next give a more precise statement leading to Theorem \ref{thm:main} as a corollary.
\begin{theorem}\label{thm:main2}
Let $p\in ]2, \infty[$ be given and let $\alpha$ and ${\bar\alpha}$ be positive numbers such that $\alpha \leq {\bar\alpha}$ and ${\bar\alpha} p <2$. For an appropriate choice of the smooth function $\bar \Omega$ and of a positive constant $\beta$ as in the previous section, we can find, additionally:
\begin{itemize}
\item[(a)] a suitable nonzero function $\eta\in (L^1\cap H^2) (\mathbb R^2; \mathbb C)$ with $K_2 * \eta\in L^2(\ensuremath{\mathbb R}^2; \mathbb C^2)$, \index{aagh@$\eta$}
\item[(b)] a real number $b_0$ and a positive number $a_0>0$, \index{aalazero@$a_0$}\index{aalbzero@$b_0$}
\end{itemize}
with the following property.
Consider $\omega_0$, $v_0$, $\tilde{v}$, $\tilde\omega = \curl \tilde{v}$, and $f$ as defined in \eqref{e:v_0},\eqref{e:omega_0}, \eqref{e:tilde-v}, and \eqref{e:def-f}. Then for every $\varepsilon\in \ensuremath{\mathbb R}$ there is a solution $\omega_\varepsilon$ of \eqref{e:Euler} with initial data $\omega_0$ such that \index{aage@$\varepsilon$}\index{aagzepsilon@$\omega_\varepsilon$}
\begin{enumerate}[(i)]
\item\label{item:1-omega in L infinity L1 Lp} $\omega_\varepsilon \in L^\infty ([0,T], L^1\cap L^p )$ for every $T>0$;
\item\label{item:2-v in L infinity L2} $v_\varepsilon := K_2 * \omega_\varepsilon \in L^\infty ([0,T], L^2)$ for every $T>0$;
\item\label{item:3-eigenvalue bound} as $t\to0$,
\begin{equation}\label{e:asymptotic-in-t}
\|\omega_\varepsilon (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} \operatorname{Re} (t^{i b_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2(\ensuremath{\mathbb R}^2)} = o (t^{a_0 +1/\alpha -1})\, ;
\end{equation}
\item\label{e:Camillo-is-silly} if $b_0=0$, then $\eta$ is real-valued.
\end{enumerate}
\end{theorem}
Observe that, by a simple computation,
\begin{align*}
\| t^{a_0-1} \operatorname{Re} (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} = t^{a_0 +1/\alpha -1} \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2}\, ,
\end{align*}
and thus it follows from \eqref{e:asymptotic-in-t} that
\begin{equation}\label{eq:difference-of-the-omega}
\limsup_{t\downarrow 0} t^{1-1/\alpha - a_0} \|\omega_{\varepsilon} (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2} \geq |\varepsilon - \bar\varepsilon| \max_{\theta\in[0,2\pi]}\| \operatorname{Re} (e^{i\theta} \eta)\|_{L^2}\,
\end{equation}
(note that in the last conclusion we need (iv) if $b_0=0$).
Since $\|\eta\|_{L^2} >0$, we conclude that the solutions $\omega_\varepsilon$ described in Theorem \ref{thm:main2} must be all distinct.
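The ``simple computation'' above is just the substitution $x = t^{1/\alpha}\xi$ in the $L^2$ norm, namely
\[
\int_{\ensuremath{\mathbb R}^2} \left| t^{a_0-1}\operatorname{Re}\big(t^{ib_0}\eta (t^{-1/\alpha} x)\big)\right|^2 dx
= t^{2(a_0-1)}\, t^{2/\alpha} \int_{\ensuremath{\mathbb R}^2} \left|\operatorname{Re}\big(t^{ib_0}\eta (\xi)\big)\right|^2 d\xi\, ,
\]
and \eqref{eq:difference-of-the-omega} then follows from \eqref{e:asymptotic-in-t} by letting $t\downarrow 0$ along a sequence for which $b_0\ln t$ converges, modulo $2\pi$, to a maximizing angle $\theta$.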
For each fixed $\varepsilon$, the solution $\omega_\varepsilon$ will be achieved as a limit of a suitable sequence of approximations $\omega_{\varepsilon, k}$\index{aagzepsilonk@$\omega_{\varepsilon, k}$} in the following way. After fixing a sequence of positive times $t_k$\index{aaltk@$t_k$} converging to $0$, which for convenience are chosen to be $t_k := e^{-k}$, we solve the following Cauchy problem for the Euler equations in vorticity formulation
\begin{equation}\label{e:Euler-later-times}
\left\{
\begin{array}{ll}
& \partial_t \omega_{\varepsilon,k} + ((K_2* \omega_{\varepsilon,k})\cdot \nabla) \omega_{\varepsilon,k} = f \\ \\
& \omega_{\varepsilon, k} (\cdot, t_k) = \tilde\omega (\cdot, t_k) + \varepsilon t_k^{a_0-1} \operatorname{Re} (t_k^{ib_0} \eta (t_k^{-1/\alpha}\cdot))\, .
\end{array}\right.
\end{equation}
Observe that, since $t_k$ is positive, the initial data $\omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^1\cap L^\infty$, while the corresponding initial velocity $v_{\varepsilon, k} (\cdot, t_k):= K_2 * \omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^2$. Since $K_2 * f \in L^1 ([0,T], L^2)$ for every $T$, we can apply the classical theorem of Yudovich (namely, Theorem \ref{thm:Yudo} and Remark \ref{r:A-priori-estimates}) to conclude that
\begin{corollary}\label{c:omega_k_epsilon}
For every $k$, $\varepsilon$, and every $T$ there exists a unique solution $\omega_{\varepsilon, k}$ of \eqref{e:Euler-later-times} with the property that $\omega_{\varepsilon , k} \in L^\infty ([t_k, T], L^1\cap L^\infty)$ and $v_{\varepsilon, k}\in L^\infty ([t_k, T], L^2)$ for every positive $T$. Moreover, we have the following bounds for every $t$
\begin{align}
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^1}\,\mathrm ds\\
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^p}\,\mathrm ds \label{e:omega_Lp_estimate}\\
\|v_{\varepsilon, k} (\cdot, t)\|_{L^2}\leq &\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2} +
\int_{t_k}^t \|K_2* f (\cdot, s)\|_{L^2}\,\mathrm ds\, .
\end{align}
\end{corollary}
Next, since we can easily bound $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1}$, $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p}$, and $\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2}$ independently of $k$, for each fixed $\varepsilon$ we conclude
\begin{equation}\label{e:uniform_bound}
\sup_{k\in \mathbb N} \sup_{t\in [t_k, T]}
\left(\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} + \|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} + \|v_{\varepsilon, k} (\cdot, t)\|_{L^2}
\right) < \infty\, .
\end{equation}
In turn we can use \eqref{e:uniform_bound} to conclude that, for each fixed $\varepsilon$, a subsequence of $\omega_{\varepsilon, k}$ converges to a solution $\omega_\varepsilon$ of \eqref{e:Euler} which satisfies the conclusions \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\begin{proposition}\label{p:convergence}\label{P:CONVERGENCE}
Assume $p, \alpha, {\bar\alpha}, \omega_0, v_0, \tilde\omega, \tilde{v}, f, a_0, b_0$, and $\eta$ are as in Theorem \ref{thm:main2} and let $\omega_{\varepsilon, k}$ be as in Corollary \ref{c:omega_k_epsilon}. Then, for every fixed $\varepsilon$, there is a subsequence, not relabeled, with the property that $\omega_{\varepsilon, k}$ converges (uniformly in $C ([0,T], L^q_w)$ for every positive $T$ and every $1< q\leq p$, where $L^q_w$ denotes the space $L^q$ endowed with the weak topology) to a solution $\omega_\varepsilon$ of \eqref{e:Euler} on $[0, \infty[$ with initial data $\omega_0$ and satisfying the bounds \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\end{proposition}
The proof uses classical convergence theorems and we give it in the appendix for the reader's convenience. The real difficulty in the proof of Theorem \ref{thm:main2} is to ensure that the bound (iii) holds. This is reduced to the derivation of suitable estimates on $\omega_{\varepsilon, k}$, which we detail in the following statement.
\begin{theorem}\label{thm:main3}
Assume $p, \alpha, {\bar\alpha}$ are as in Theorem \ref{thm:main2} and fix $\varepsilon >0$. For an appropriate choice of $\bar \Omega$ and $\beta$ there is a triple $\eta$, $a_0$, and $b_0$ as in Theorem \ref{thm:main2} and three positive constants $T_0, \delta_0$, and $C$ with the property that
\begin{equation}\label{e:asymptotic-in-t-2}
\|\omega_{\varepsilon,k} (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} {\rm Re}\, (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} \leq C t^{a_0+1/\alpha-1+\delta_0} \qquad \forall t\in [t_k, T_0]\, .
\end{equation}
\end{theorem}
It is then obvious that the final conclusion \ref{item:3-eigenvalue bound} of Theorem \ref{thm:main2} is a consequence of the more precise estimate \eqref{e:asymptotic-in-t-2} on the approximations $\omega_{\varepsilon,k}$. The rest of these lecture notes is thus devoted to the proof of Theorem \ref{thm:main3}, and we will start in the next section by breaking it into two main parts.
\section{Logarithmic time scale and main Ansatz}
\index{similarity-variables@Similarity variables}\index{aagZ@$\Omega$}\index{aagt@$\tau$}\index{aagx@$\xi$}
First of all, we will change variables and unknowns of the Euler equations (in vorticity formulation) in a way which will be convenient for many computations. Given a solution $\omega$ of \eqref{e:Euler} on $\ensuremath{\mathbb R}^2\times [T_0, T_1]$ with $0\leq T_0 \leq T_1$, we introduce a new function $\Omega$ on $\mathbb R^2 \times [\ln T_0, \ln T_1]$ with the following transformation. We set $\tau=\ln t$, $\xi=x t^{-1/\alpha}$ and
\begin{equation}\label{e:omega->Omega}
\Omega (\xi, \tau) := e^{\tau} \omega (e^{\tau/\alpha} \xi, e^\tau)\, ,
\end{equation}
which in turn results in
\begin{equation}\label{e:Omega->omega}
\omega (x, t) = t^{-1} \Omega (t^{-1/\alpha} x, \ln t)\, .
\end{equation}
Observe that, if $v (\cdot, t) = K_2 * \omega (\cdot, t)$ and $V( \cdot, \tau) = K_2 * \Omega (\cdot, \tau)$, we can derive similar transformation rules for the velocities as \index{aalV@$V$}
\begin{align}
V (\xi, \tau) &= e^{\tau (1-1/\alpha)} v(e^{\tau/\alpha} \xi, e^\tau)\label{e:v->V}\, ,\\
v (x,t) &= t^{-1+1/\alpha} V (t^{-1/\alpha} x, \ln t)\label{e:V-t>v}\, .
\end{align}
Likewise, we have an analogous transformation rule for the force $f$, which results in \index{aalF@$F$}
\begin{align}
F (\xi, \tau) &= e^{2\tau} f (e^{\tau/\alpha} \xi, e^\tau)\, ,\label{e:f->F}\\
f (x,t) &= t^{-2} F (t^{-1/\alpha} x, \ln t)\label{e:F->f}\, .
\end{align}
In order to improve the readability of our arguments, throughout the rest of the notes we will use the overall convention that, given some object related to the Euler equations in the ``original system of coordinates'', the corresponding object after applying the transformations above will be denoted with the same letter in capital case.
\begin{remark}
Note that the naming of $\bar V$ and $\bar\Omega$ is somewhat of an exception to this convention, since $(\bar\Omega, \bar V)$ is a solution of \eqref{e:Euler} in Eulerian variables. However, if you ``force them to be functions of $\xi$,'' which is how they will be used in the non-linear part, then they solve the Euler equations in self-similar variables with forcing (see \eqref{e:Euler-transformed}).
\end{remark}
Straightforward computations allow then to pass from \eqref{e:Euler} to an equation for the new unknown $\Omega$ in the new coordinates. More precisely, we have the following
\begin{lemma}\label{l:coordinates-change}
Let $p>2$ and $\infty \geq T_1 > T_0\geq 0$. Then $\omega\in L^\infty_{\text{loc}} (]T_0, T_1[; L^1\cap L^p)$ and $v (\cdot, t) = K_2* \omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-again}
\partial_t \omega + (v \cdot \nabla) \omega = f\, ,
\end{equation}
if and only if $\Omega$ and $V (\cdot, t) = K_2 * \Omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-transformed}
\partial_\tau \Omega - \left(1 + \frac{\xi}{\alpha}\cdot \nabla\right) \Omega + (V\cdot \nabla) \Omega = F\, .
\end{equation}
\end{lemma}
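The computation behind the lemma, which we sketch here for completeness, is the chain rule applied to \eqref{e:Omega->omega} and \eqref{e:V-t>v}: with $\xi = t^{-1/\alpha}x$ and $\tau = \ln t$,
\begin{align*}
\partial_t \omega (x,t) &= t^{-2}\left(\partial_\tau \Omega - \Omega - \frac{\xi}{\alpha}\cdot \nabla \Omega\right) (t^{-1/\alpha}x, \ln t)\, ,\\
\big( (v\cdot \nabla)\, \omega \big) (x,t) &= t^{-1+1/\alpha}\, V \cdot \big(t^{-1}\, t^{-1/\alpha}\, \nabla_\xi \Omega\big) = t^{-2}\, \big((V\cdot \nabla)\Omega\big) (t^{-1/\alpha}x, \ln t)\, ,
\end{align*}
so that multiplying \eqref{e:Euler-again} by $t^2$ and using $f = t^{-2} F$ from \eqref{e:F->f} yields exactly \eqref{e:Euler-transformed}.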
We next observe that, due to the structural assumptions on $\tilde \omega$ and $\tilde v$, the corresponding fields $\tilde \Omega$ and $\tilde V$ can be expressed in the following way: \index{aalVtilde@$\tilde V$}\index{aagZtilde@$\tilde\Omega$}
\begin{align}
\tilde{V} (\xi, \tau) &= \beta \bar V (\xi) \chi (e^{\tau/\alpha} |\xi|)\, ,\label{e:tildeV}\\
\tilde{\Omega} (\xi, \tau) &= \beta \bar \Omega (\xi) \chi (e^{\tau/\alpha} |\xi|) + \beta \zeta (|\xi|)
\chi' (e^{\tau/\alpha} |\xi|) e^{\tau/\alpha} |\xi|\, \label{e:tildeOmega}.
\end{align}
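Both formulas follow by plugging \eqref{e:tilde-v} and \eqref{e:curl-tilde-v} into the transformation rules \eqref{e:omega->Omega} and \eqref{e:v->V}; for instance, for the vorticity,
\[
\tilde\Omega (\xi,\tau) = e^\tau\, \tilde\omega (e^{\tau/\alpha}\xi, e^\tau)
= \beta\, \bar\Omega (\xi)\, \chi (e^{\tau/\alpha}|\xi|) + \beta\, \zeta (|\xi|)\, e^{\tau/\alpha}|\xi|\, \chi' (e^{\tau/\alpha}|\xi|)\, ,
\]
where we used that $\bar\Omega$ and $\zeta$ are evaluated at $e^{\tau/\alpha}\xi / (e^{\tau})^{1/\alpha} = \xi$.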
Observe that for every fixed compact set $K$ there is $T (K)>0$ with the property that
\begin{itemize}
\item $\chi (e^{\tau/\alpha} |\cdot|)= 1$ and $\chi' (e^{\tau/\alpha} |\cdot|) = 0$ on $K$ whenever $\tau \leq - T (K)$.
\end{itemize}
Since in order to prove Theorem \ref{thm:main} we are in fact interested in very small times $t$, which in turn correspond to very negative $\tau$, it is natural to consider $\tilde\Omega$ and $\tilde{V}$ as perturbations of $\beta \bar \Omega$ and $\beta \bar V$. We will therefore introduce the notation
\begin{align}
\tilde \Omega &= \beta \bar \Omega + \Omega_r\, ,\\
\tilde V & = \beta \bar V + V_r := \beta \bar V + K_2* \Omega_r\, .
\end{align}
We are thus led to the following Ansatz for $\Omega_{\varepsilon,k} (\xi, \tau) = e^{\tau} \omega_{\varepsilon ,k} (e^{\tau/\alpha} \xi, e^\tau)$:
\begin{equation}\label{e:Ansatz-1}
\Omega_{\varepsilon, k} (\xi, \tau) = \beta \bar \Omega (\xi) + \Omega_r (\xi, \tau) + \varepsilon e^{\tau a_0} {\rm Re}\, (e^{i\tau b_0} \eta (\xi)) + \Omega_{\text{per}, k} (\xi, \tau)\, .
\end{equation}
The careful reader will notice that indeed the function $\Omega_{\text{per},k}$ depends upon the parameter $\varepsilon$ as well, but since such dependence will not really play a significant role in our discussion, in order to keep our notation simple, we will always omit it. \index{aagZr@$\Omega_r$}\index{aagZepsilonk@$\Omega_{\varepsilon, k}$}\index{aagZperk@$\Omega_{\text{per}, k}$}\index{aalVr@$V_r$}
We are next ready to complete our Ansatz by prescribing one fundamental property of the function $\eta$.
We first introduce the integro-differential operator \index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:Lss}
L_{\text{ss}} (\Omega) := \left(1+\frac{\xi}{\alpha} \cdot \nabla\right) \Omega - \beta (\bar V \cdot \nabla) \Omega - \beta ((K_2* \Omega)\cdot \nabla) \bar \Omega\, .
\end{equation}
We will then prescribe that $\eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $z_0 = a_0 + ib_0$, namely, \index{aalz0@$z_0$}
\begin{equation}\label{e:Ansatz-2}
L_{\text{ss}} (\eta) = z_0 \eta\, .
\end{equation}
Observe in particular that, since $L_{\text{ss}}$ is a real operator (i.e. $L_{\text{ss}} (\eta)$ is real-valued when $\eta$ is real-valued, cf. Section \ref{s:abstract-operators}), the complex conjugate $\bar \eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $\bar z_0$, so that, in particular, the function
\begin{equation}\label{e:Omega_lin}
\Omega_{\text{lin}} (\xi, \tau) := \varepsilon e^{a_0 \tau} {\rm Re}\, (e^{i b_0 \tau} \eta (\xi))
= \frac{\varepsilon}{2} (e^{z_0 \tau} \eta (\xi) + e^{\bar z_0 \tau} \bar \eta (\xi))
\end{equation}
satisfies the linear evolution equation
\begin{equation}\label{e:evolution_of_Omega_lin}
\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})=0\, .
\end{equation}
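Indeed, \eqref{e:evolution_of_Omega_lin} is an immediate consequence of \eqref{e:Ansatz-2}, of its complex conjugate $L_{\text{ss}} (\bar\eta) = \bar z_0 \bar\eta$, and of the linearity of $L_{\text{ss}}$:
\[
\partial_\tau \Omega_{\text{lin}} = \frac{\varepsilon}{2}\left(z_0\, e^{z_0\tau}\, \eta + \bar z_0\, e^{\bar z_0 \tau}\, \bar\eta\right)
= \frac{\varepsilon}{2}\left(e^{z_0\tau}\, L_{\text{ss}}(\eta) + e^{\bar z_0\tau}\, L_{\text{ss}}(\bar\eta)\right) = L_{\text{ss}} (\Omega_{\text{lin}})\, .
\]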
The relevance of our choice will become clear from the discussion of Section \ref{s:nonlinear}. The point is that \eqref{e:evolution_of_Omega_lin} is close to the linearization of Euler (in the new system of coordinates) around $\tilde{\Omega}$. The ``true linearization'' would be given by \eqref{e:evolution_of_Omega_lin} if we were to substitute $\bar \Omega$ and $\bar V$ in \eqref{e:Lss} with $\tilde{\Omega}$ and $\tilde{V}$. Since however the pair $(\tilde \Omega, \tilde{V})$ is well approximated by $(\bar \Omega, \bar V)$ for very negative times, we will show that \eqref{e:evolution_of_Omega_lin} indeed drives the evolution of $\Omega_{\varepsilon,k}-\tilde{\Omega}$ up to an error term (i.e. $\Omega_{\text{per},k}$) which is smaller than $\Omega_{\text{lin}}$.
\section{Linear theory}
We will look for the eigenfunction $\eta$ in a particular subspace of $L^2$. More precisely for every
$m\in \mathbb N\setminus \{0\}$ we denote by $L^2_m$ the set of those elements $\vartheta \in L^2 (\mathbb R^2, \mathbb C)$ which are $m$-fold symmetric, i.e., denoting by $R_\theta: \mathbb R^2\to \mathbb R^2$ the counterclockwise rotation of angle $\theta$ around the origin, \index{rotational-symmetry@Rotationally symmetric function space}\index{aalL2m@$L^2_m$}
they satisfy the condition
\begin{align*}
\vartheta &= \vartheta \circ R_{2\pi/m}\, .
\end{align*}
In particular, $L^2_m$ is a closed subspace of $L^2 (\mathbb R^2, \mathbb C)$. Note however that the term ``$m$-fold symmetric'' is somewhat misleading when $m=1$: in that case the transformation $R_{2\pi/m} = R_{2\pi}$ is the identity and in particular $L^2_1 = L^2 (\mathbb R^2, \mathbb C)$. Indeed we will look for $\eta$ in $L^2_m$ for a sufficiently large $m\geq 2$.
An important technical detail is that, while the operator $L^2 \cap \mathscr{S} \ni \omega \mapsto K_2* \omega \in \mathscr{S}'$ {\em cannot} be extended continuously to the whole $L^2$ (cf. Remark \ref{r:Camillo_dumb}), for $m\geq 2$ it {\em can} be extended to a continuous operator from $L^2_m$ into $\mathscr{S}'$: this is the content of the following lemma.
\begin{lemma}\label{l:extension}\label{L:EXTENSION}
For every $m\geq 2$ there is a unique continuous operator $T: L^2_m \to \mathscr{S}'$ with the following properties:
\begin{itemize}
\item[(a)] If $\vartheta\in \mathscr{S}$, then $T (\vartheta) = K_2*\vartheta$ (in the sense of distributions);
\item[(b)] There is $C>0$ such that for every $\vartheta \in L^2_m$, there is $v=v(\vartheta)\in W^{1,2}_{\text{loc}}$ with
\begin{itemize}
\item[(b1)] $R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C\|\vartheta\|_{L^2 (\mathbb R^2)}$ for all $R>0$;
\item[(b2)] ${\rm div}\, v =0$ and $\langle T(\vartheta), \varphi\rangle = \int v\cdot \varphi$ for every test function $\varphi \in \mathscr{S}$.
\end{itemize}
\end{itemize}
\end{lemma}
From now on the operator $T$ will still be denoted by $K_2*$ and the function $v=v(\vartheta)$ will be denoted by $K_2*\vartheta$. Observe also that, if $\hat\Omega$ is an $L^2_{\text{loc}}$ function such that $\|\hat\Omega\|_{L^2 (B_R)}$ grows polynomially in $R$, then the product $v \hat\Omega$, integrated against any Schwartz function, is well defined, i.e. $v\hat\Omega$ is a tempered distribution. In the rest of the notes, any time that we write a product $\hat\Omega K_2 * \vartheta $ for an element $\vartheta\in L^2_m$ and an $L^2_{\text{loc}}$ function $\hat\Omega$ we will always implicitly assume that
$\|\hat\Omega\|_{L^2 (B_R)}$ grows at most polynomially in $R$ and that the product is understood as a well-defined element of $\mathscr{S}'$.
The relevance of this discussion is that, for $m\geq 2$, we can now consider the operator $L_{\text{ss}}$ as a closed, densely defined unbounded operator on $L^2_m$. We let
\index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:def-Lss-formal}
L_{\text{ss}} (\Omega) = \left(1- {\textstyle{\frac{2}{\alpha}}}\right) \Omega - {\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right) - \beta ( K_2*\Omega \cdot \nabla) \bar\Omega\,
\end{equation}
and its domain is
\begin{equation}\label{e:D(Lss)-formal}
D_m (L_{\text{ss}}) =\{\Omega\in L^2_m : L_{\text{ss}} (\Omega)\in L^2_m\}\, .
\end{equation}
When $\Omega\in \mathscr{S}$ it can be readily checked that $L_{\text{ss}}$ as defined in \eqref{e:def-Lss-formal} coincides with \eqref{e:Lss}.
The definition makes it obvious that $L_{\text{ss}}$ is a closed and densely defined unbounded operator over $L^2_m$. We will later show that $\Omega \mapsto (K_2*\Omega \cdot \nabla) \bar \Omega$ is in fact a compact operator from $L^2_m$ into $L^2_m$ and therefore we have
\begin{equation}\label{e:D(Lss)-formal-2}
D_m (L_{\text{ss}}) := \left\{\Omega\in L^2_m : {\rm div} \left(\beta \bar V\Omega- {\textstyle{\frac{\xi}{\alpha}}}\Omega\right)\, \in L^2_m\right\}\, .
\end{equation}
From now on, having fixed $m\geq 2$ and regarding $L_{\text{ss}}$ as an unbounded, closed, and densely defined operator in the sense given above, the spectrum ${\rm spec}_m\, (L_{\text{ss}})$ on $L^2_m$ is defined as the (closed) set which is the complement of the {\em resolvent} of $L_{\text{ss}}$, the latter being the (open) set of $z_0 \in \mathbb C$ such that $L_{\text{ss}}-z_0$ has a bounded inverse $(L_{\text{ss}}-z_0)^{-1} : L^2_m \to L^2_m$.\footnote{The textbook definition would require the inverse to take values in $D_m (L_{\text{ss}})$. Note however that this is a consequence of our very definition of $D_m (L_{\text{ss}})$.}
The choice of $\eta$ will then be defined by the following theorem which summarizes a quite delicate spectral analysis.
\begin{theorem}\label{thm:spectral}\label{THM:SPECTRAL}
For an appropriate choice of $\bar \Omega$ there is an integer $m\geq 2$ with the following property. For every positive $\bar a>0$, if $\beta$ is chosen appropriately large, then there is $\eta\in L^2_m\setminus \{0\}$ and $z_0=a_0+ib_0$ such that:
\begin{itemize}
\item[(i)] $a_0 \geq \bar a$ and $L_{\text{ss}} (\eta) = z_0 \eta$;
\item[(ii)] For any $z \in {\rm spec}_m\, (L_{\text{ss}})$ we have ${\rm Re}\, z\leq a_0$;
\item[(iii)] If $b_0=0$, then $\eta$ is real valued;
\item[(iv)] There is an integer $k\geq 1$ and a function $e:\mathbb R^+\to \mathbb C$ such that, in polar coordinates $x = (r\cos\theta, r\sin\theta)$, $\eta (x) = e (r) e^{ikm \theta}$ if $b_0\neq 0$ and $\eta (x) = {\rm Re}\, (e(r) e^{ikm\theta})$ if $b_0= 0$.
\end{itemize}
\end{theorem}
In fact we will prove some more properties of $\eta$, namely, suitable regularity and decay at infinity, but these are effects of the eigenvalue equation and will be addressed later.
The proof of Theorem \ref{thm:spectral} will be split into two chapters. In the first one we regard $L_{\text{ss}}$ as a perturbation of a simpler operator $L_{\text{st}}$, which is obtained from $L_{\text{ss}}$ by ignoring the $(1+\frac{\xi}{\alpha}\cdot \nabla)$ part: the intuition behind neglecting this term is that the remaining part of the operator $L_{\text{ss}}$ is multiplied by the constant $\beta$, which will be chosen appropriately large. The second chapter will be dedicated to proving a theorem analogous to Theorem \ref{thm:spectral} for the operator $L_{\text{st}}$. The analysis will rely heavily on an appropriate splitting of $L^2_m$ as a direct sum of invariant subspaces of $L_{\text{st}}$. The latter are obtained by expanding in Fourier series the trace of any element of $L^2_m$ on the unit circle. In each of these invariant subspaces the spectrum of $L_{\text{st}}$ can be related to the spectrum of a suitable second order differential operator in a single real variable.
\section{Nonlinear theory}\label{s:nonlinear}
The linear theory will then be used to show Theorem \ref{thm:main3}. In fact, given the decomposition introduced in \eqref{e:Ansatz-1}, we can now formulate a yet more precise statement from which we conclude Theorem \ref{thm:main3} as a corollary.
\begin{theorem}\label{thm:main4}
Let $p$, $\alpha$, and ${\bar\alpha}$ be as in Theorem \ref{thm:main2} and assume $\bar a$ is sufficiently large. Let $\bar \Omega$, $\eta$, $a_0$, and $b_0$ be as in Theorem \ref{thm:spectral} and for every $\varepsilon \in \mathbb R$, $k\in \mathbb N$ consider the solutions $\omega_{\varepsilon,k}$ of \eqref{e:Euler-later-times} and $\Omega_{\varepsilon,k} (\xi, \tau) = e^\tau \omega_{\varepsilon,k} (e^{\tau/\alpha} \xi, e^\tau)$. If we define $\Omega_{\text{per},k}$ through \eqref{e:Ansatz-1}, then there are $\tau_0 = \tau_0 (\varepsilon)$ and $\delta_0>0$, independent of $k$, such that
\begin{equation}\label{e:H2-estimate}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{L^2} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\, .
\end{equation}
\end{theorem}
Estimate \eqref{e:asymptotic-in-t-2} is a simple consequence of \eqref{e:H2-estimate} after translating it back to the original coordinates. In order to give a feeling for why \eqref{e:H2-estimate} holds, we will detail the equation that $\Omega_{\text{per}, k}$ satisfies.
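Before doing so, let us sketch the translation: by \eqref{e:Ansatz-1}, $\Omega_{\text{per},k} (\cdot, \ln t)$ is exactly the self-similar version of the difference appearing in \eqref{e:asymptotic-in-t-2}, and the change of variables $x = t^{1/\alpha}\xi$ gives
\[
\big\|\omega_{\varepsilon,k} (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} {\rm Re}\, (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\big\|_{L^2}
= t^{\frac{1}{\alpha}-1}\, \big\|\Omega_{\text{per},k} (\cdot, \ln t)\big\|_{L^2}
\leq t^{a_0 + \frac{1}{\alpha}-1+\delta_0}
\]
for every $t\in [t_k, e^{\tau_0}]$, which gives \eqref{e:asymptotic-in-t-2}.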
First of all, subtracting the equation satisfied by $\tilde{\Omega}$ from the one satisfied by $\Omega_{\varepsilon, k}$, we obtain
\begin{align*}
&\partial_\tau \Omega_{\text{lin}} + \partial_\tau \Omega_{\text{per},k} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{lin}}
-\left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{per}, k} \\
+ & (\tilde{V} \cdot \nabla) \Omega_{\text{lin}} + V_{\text{lin}}\cdot \nabla \tilde{\Omega} + (\tilde{V}\cdot \nabla ) \Omega_{\text{per},k} + (V_{\text{per},k}\cdot \nabla) \tilde{\Omega} + (V_{\text{lin}}\cdot \nabla) \Omega_{\text{per}, k}\\
+ & (V_{\text{per}, k} \cdot \nabla) \Omega_{\text{lin}}
+ (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per}, k} = 0\, ,
\end{align*}
where we have used the convention $\tilde{V}=K_2*\tilde\Omega$, $V_{\text{per},k} = K_2* \Omega_{\text{per},k}$, and $V_{\text{lin}}= K_2* \Omega_{\text{lin}}$. Next recall that $\tilde{\Omega}=\beta\bar\Omega + \Omega_r$ and recall also the definition of $L_{\text{ss}}$ in \eqref{e:Lss} and the fact that $\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})= 0$. In particular formally we reach
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + ((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r) + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}\nonumber\\
= & -(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r\, ,\label{e:master}
\end{align}
which must be supplemented with the initial condition
\[
\Omega_{\text{per},k} (\cdot, -k)= 0\, .
\]
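To see how \eqref{e:master} arises from the previous identity, write $\tilde\Omega = \beta\bar\Omega + \Omega_r$ and $\tilde V = \beta\bar V + V_r$ and group separately the terms involving $\Omega_{\text{lin}}$ and those involving $\Omega_{\text{per},k}$:
\begin{align*}
&\partial_\tau \Omega_{\text{lin}} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right)\Omega_{\text{lin}} + \beta (\bar V\cdot \nabla)\Omega_{\text{lin}} + \beta (V_{\text{lin}}\cdot\nabla)\bar\Omega = \partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}}) = 0\, ,\\
&\partial_\tau \Omega_{\text{per},k} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right)\Omega_{\text{per},k} + \beta (\bar V\cdot \nabla)\Omega_{\text{per},k} + \beta (V_{\text{per},k}\cdot\nabla)\bar\Omega = (\partial_\tau - L_{\text{ss}})\, \Omega_{\text{per},k}\, .
\end{align*}
The leftover terms are $((V_{\text{lin}}+V_r)\cdot\nabla)\Omega_{\text{per},k}$, $(V_{\text{per},k}\cdot\nabla)(\Omega_{\text{lin}}+\Omega_r)$ and $(V_{\text{per},k}\cdot\nabla)\Omega_{\text{per},k}$, which stay on the left-hand side, together with $(V_{\text{lin}}\cdot\nabla)\Omega_{\text{lin}}$, $(V_r\cdot\nabla)\Omega_{\text{lin}}$ and $(V_{\text{lin}}\cdot\nabla)\Omega_r$, which are moved to the right-hand side.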
In fact, in order to justify \eqref{e:master} we need to show that $\Omega_{\text{per},k} (\cdot, \tau)\in L^2_m$ for every $\tau$, which is the content of the following elementary lemma.
\begin{lemma}\label{l:evolution-in-L2m}
The function $\Omega_{\text{per},k} (\cdot, \tau)$ belongs to $L^2_m$ for every $\tau$.
\end{lemma}
\begin{proof}
It suffices to prove that $\omega_{\varepsilon, k} (\cdot, t)$ is $m$-fold symmetric, since the transformation rule then implies that $\Omega_{\varepsilon,k} (\cdot, \tau)$ is $m$-fold symmetric and $\Omega_{\text{per}, k} (\cdot, \tau)$ is obtained from the latter by subtracting $\varepsilon e^{a_0\tau} {\rm Re} (e^{ib_0\tau} \eta) + \tilde{\Omega} (\cdot, \tau)$, which is also $m$-fold symmetric. In order to show that $\omega_{\varepsilon, k}$ is $m$-fold symmetric, just consider that $\omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$ solves \eqref{e:Euler-later-times} because both the forcing term and the initial data are invariant under a rotation of $\frac{2\pi}{m}$ (and the Euler equations are rotationally invariant). Then the uniqueness part of Yudovich's theorem (Theorem \ref{thm:Yudo}) implies $\omega_{\varepsilon, k} (\cdot, t) = \omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$.
\end{proof}
We proceed with our discussion and observe that $V_{\text{lin}} + V_r$ and $\Omega_{\text{lin}}+\Omega_r$ are both ``small'' in an appropriate sense for sufficiently negative times, while, because the initial condition is $0$ at $\tau=-k$, for some time after $-k$ we expect that the quadratic nonlinearity $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ will not contribute much to the growth of $\Omega_{\text{per}, k} (\cdot, \tau)$. Schematically, we can break \eqref{e:master} as
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + \underbrace{((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r)}_{\mbox{small linear terms}} + \underbrace{(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}}_{\mbox{quadratic term}}\nonumber\\
= & \underbrace{-(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r}_{\mbox{forcing term } \mathscr{F}}\, ,\label{e:master-schematics}
\end{align}
In particular we can hope that the growth of $\Omega_{\text{per},k} (\cdot, \tau)$ is comparable to that of the solution of the following ``forced'' linear problem
\begin{equation}\label{e:master-linear}
(\partial_{\tau} - L_{\text{ss}}) \Omega = \mathscr{F}\, .
\end{equation}
Observe that we know that $\Omega_{\text{lin}} (\cdot, \tau)$ and $V_{\text{lin}} (\cdot, \tau)$ decay like $e^{a_0 \tau}$. We can then expect to gain a slightly faster exponential decay for $\mathscr{F} (\cdot, \tau)$ because of the smallness of $V_r$ and $\Omega_r$. On the other hand, from Theorem \ref{thm:spectral} we expect that the semigroup generated by $L_{\text{ss}}$ enjoys growth estimates of type $e^{a_0\tau}$ on $L^2_m$ (this will be rigorously justified using classical results in the theory of strongly continuous semigroups). We then wish to show, using Duhamel's formula for the semigroup $e^{\tau L_{\text{ss}}}$, that the growth of $\Omega_{\text{per},k}$ is bounded by $e^{a_0\tau} (e^{\delta_0 \tau} - e^{-\delta_0 k})$ for some positive $\delta_0$ and for $\tau$ in some interval after the initial time $-k$: the crucial point will be to show that the latter bound is valid for $\tau$ up until a ``universal'' time $\tau_0$, independent of $k$.
Even though intuitively sound, this approach will require several delicate arguments, explained in the final chapter of the notes. In particular:
\begin{itemize}
\item we will need to show that the quadratic term $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ is small up to some time $\tau_0$ independent of $k$, in spite of the fact that there is a ``loss of derivative'' in it (and thus we cannot directly close an implicit Gronwall argument using the Duhamel formula and the semigroup estimate for $L_{\text{ss}}$);
\item the terms $\Omega_r$ and $V_r$ are not really negligible in absolute terms; rather, for very negative times, they are supported in a region of space which moves towards spatial infinity.
\end{itemize}
The first issue will be solved by closing the estimates in a space of more regular functions, which is contained in $L^2$ and embeds in $L^\infty$ (in fact $L^2\cap W^{1,4}$): the bound on the growth of the $L^2$ norm will be achieved through the semigroup estimate for $L_{\text{ss}}$ via Duhamel's formula, while the bound on the first derivative will be achieved through an energy estimate, which will profit from the $L^2$ one. The second point will be handled by further restricting the functional space in which we close the estimates for $\Omega_{\text{per}, k}$: we will require an appropriate decay of the derivative of the solutions, more precisely that the latter belong to $L^2 (|x|^2\,\mathrm dx)$. Of course, in order to use this strategy, we will need to show that the initial perturbation $\eta$ belongs to the correct space of functions.
\chapter{Introduction}
In these notes we will consider the Euler equations in the $2$-dimensional space in vorticity formulation\index{Euler equations@Euler equations}, which are given by
\begin{equation}\label{e:Euler}
\left\{
\begin{array}{ll}
&\partial_t \omega + (v\cdot \nabla) \omega = f\\ \\
& v (\cdot, t) = K_2 * \omega (\cdot, t)
\end{array}
\right.
\end{equation}
where $K_2$ is the usual $2$-dimensional Biot-Savart kernel and $f$ is a given external force. \index{Biot-Savart kernel@Biot-Savart kernel}\index{aalf@$f$}\index{aalK_2@$K_2$}\index{external force@external force}\index{force, external@force, external}
$v$ is the velocity field, and it is a function defined on a space-time domain of type $\mathbb R^2 \times [0, T]$. By the Biot-Savart law we have $\omega = \curl v = \partial_{x_1} v_2 - \partial_{x_2} v_1 = \nabla\times v$\index{vorticity@vorticity}\index{velocity@velocity}\index{aagz@$\omega$}\index{aalv@$v$}.
We will study the Cauchy problem for \eqref{e:Euler} with initial data
\begin{equation}\label{e:Cauchy}
\omega (\cdot, 0) = \omega_0\,
\end{equation}
on the domain $\mathbb R^2 \times [0,\infty[$
under the assumptions that
\begin{itemize}
\item[(i)] $\omega_0\in L^1\cap L^p$ for some $p>2$ and $v_0=K_2* \omega_0 \in L^2$;
\item[(ii)] $f\in L^1 ([0,T], L^1\cap L^p)$ and $K_2*f\in L^1 ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
In particular we understand solutions $\omega$ in the usual sense of distributions, namely,
\begin{equation}\label{e:distrib}
\int_0^T \int_{\ensuremath{\mathbb R}^2} [\omega (\partial_t \phi + K_2* \omega \cdot \nabla \phi) + f \phi]\, dx\, dt
= - \int_{\ensuremath{\mathbb R}^2} \phi (x,0)\, \omega_0 (x)\, dx
\end{equation}
for every smooth test function $\phi\in C^\infty_c (\mathbb R^2 \times [0,T[)$.\index{Solution, weak} In view of (i)-(ii) and standard energy estimates we will restrict our attention to weak solutions which satisfy the following bounds:
\begin{itemize}
\item[(a)] $\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $v\in L^\infty ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
The purpose of these notes is to give a proof of the following:
\begin{theorem}\label{thm:main}
For every $p\in ]2, \infty[$ there is a triple $\omega_0, v_0$, and $f$ satisfying (i)-(ii)
with the property that there are uncountably many solutions $(\omega, v)$ of \eqref{e:Euler} and \eqref{e:Cauchy} on $\mathbb R^2\times [0, \infty [$ which satisfy the bound (a). Moreover, $\omega_0$ can be chosen to vanish identically.
\end{theorem}
In fact the $f$ given by the proof is smooth and compactly supported on any closed interval of time $[\varepsilon, T]\subset ]0, \infty[$. Moreover, a closer inspection of the argument reveals that each of the solutions $(\omega, v)$ enjoys bounds on the $W^{1,4}_{\rm loc}$ norm of $\omega (t, \cdot)$, and good decay properties at infinity, whenever $t$ is positive (and obviously such estimates degenerate as $t\downarrow 0$). In particular $v$ belongs to $C^1_{\rm loc} (\mathbb R^2\times ]0, \infty[)$. It is not difficult to modify the arguments detailed in these notes to produce examples which have even more regularity and better decay for positive times, but we do not pursue the issue here.
\begin{remark}\label{r:bounded}\label{R:BOUNDED}
Recall that
\begin{equation}\label{e:bound-on-Biot-Savart}
\|K_2* \omega (\cdot, t)\|_{L^\infty}\leq C (p) (\|\omega (\cdot, t)\|_{L^1} + \|\omega (\cdot, t)\|_{L^p})
\end{equation}
whenever $p>2$ (cf. the Appendix for the proof). Therefore we conclude that each solution $v$ in Theorem \ref{thm:main} is bounded on $\mathbb R^2\times [0,T]$ for every positive $T$.
\end{remark}
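A minimal sketch of \eqref{e:bound-on-Biot-Savart}, under the sole assumption $p>2$ (the detailed argument is in the Appendix): since $|K_2 (y)| = (2\pi|y|)^{-1}$, splitting the convolution at $|y|=1$ and using H\"older's inequality,
\[
|(K_2*\omega) (x,t)| \leq \int_{|y|\leq 1} \frac{|\omega (x-y,t)|}{2\pi |y|}\, dy + \int_{|y|> 1} \frac{|\omega (x-y,t)|}{2\pi |y|}\, dy
\leq \frac{1}{2\pi}\,\big\| |y|^{-1}\big\|_{L^{p'} (B_1)}\, \|\omega (\cdot,t)\|_{L^p} + \frac{1}{2\pi}\, \|\omega (\cdot,t)\|_{L^1}\, ,
\]
where $p' = \frac{p}{p-1}<2$, so that $\||y|^{-1}\|_{L^{p'}(B_1)}$ is finite precisely because $p>2$.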
\chapter{Introduction}
In these notes we will consider the $2$-dimensional Euler equations in vorticity formulation\index{Euler equations@Euler equations}, which are given by
\begin{equation}\label{e:Euler}
\left\{
\begin{array}{ll}
&\partial_t \omega + (v\cdot \nabla) \omega = f\\ \\
& v (\cdot, t) = K_2 * \omega (\cdot, t)
\end{array}
\right.
\end{equation}
where $K_2$ is the usual $2$-dimensional Biot-Savart kernel and $f$ is a given external force. \index{Biot-Savart kernel@Biot-Savart kernel}\index{aalf@$f$}\index{aalK_2@$K_2$}\index{external force@external force}\index{force, external@force, external}
Here $v$ is the velocity field, a function defined on a space-time domain of the form $\mathbb R^2 \times [0, T]$; it is related to the vorticity by $\omega = \curl v = \partial_{x_1} v_2 - \partial_{x_2} v_1 = \nabla\times v$\index{vorticity@vorticity}\index{velocity@velocity}\index{aagz@$\omega$}\index{aalv@$v$}, a relation which the Biot-Savart law $v = K_2* \omega$ inverts.
We will study the Cauchy problem for \eqref{e:Euler} with initial data
\begin{equation}\label{e:Cauchy}
\omega (\cdot, 0) = \omega_0\,
\end{equation}
on the domain $\mathbb R^2 \times [0,\infty[$
under the assumptions that
\begin{itemize}
\item[(i)] $\omega_0\in L^1\cap L^p$ for some $p>2$ and $v_0=K_2* \omega_0 \in L^2$;
\item[(ii)] $f\in L^1 ([0,T], L^1\cap L^p)$ and $K_2*f\in L^1 ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
In particular we understand solutions $\omega$ in the usual sense of distributions, namely,
\begin{equation}\label{e:distrib}
\int_0^T \int_{\ensuremath{\mathbb R}^2} [\omega (\partial_t \phi + K_2* \omega \cdot \nabla \phi) + f \phi]\, dx\, dt
= - \int_{\ensuremath{\mathbb R}^2} \phi (x,0)\, \omega_0 (x)\, dx
\end{equation}
for every smooth test function $\phi\in C^\infty_c (\mathbb R^2 \times [0,T[)$.\index{Solution, weak} In view of (i)-(ii) and standard energy estimates we will restrict our attention to weak solutions which satisfy the following bounds:
\begin{itemize}
\item[(a)] $\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $v\in L^\infty ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
The purpose of these notes is to give a proof of the following:
\begin{theorem}\label{thm:main}
For every $p\in ]2, \infty[$ there is a triple $\omega_0, v_0$, and $f$ satisfying (i)-(ii)
with the property that there are uncountably many solutions $(\omega, v)$ of \eqref{e:Euler} and \eqref{e:Cauchy} on $\mathbb R^2\times [0, \infty [$ which satisfy the bound (a). Moreover, $\omega_0$ can be chosen to vanish identically.
\end{theorem}
In fact the $f$ given by the proof is smooth and compactly supported on any closed interval of time $[\varepsilon, T]\subset ]0, \infty[$. Moreover, a closer inspection of the argument reveals that each of the solutions $(\omega, v)$ enjoys bounds on the $W^{1,4}_{\rm loc}$ norm of $\omega (\cdot, t)$ and good decay properties at infinity whenever $t$ is positive (such estimates obviously degenerate as $t\downarrow 0$). In particular $v$ belongs to $C^1_{\rm loc} (\mathbb R^2\times ]0, \infty[)$. It is not difficult to modify the arguments detailed in these notes to produce examples which have even more regularity and better decay for positive times, but we do not pursue the issue here.
\begin{remark}\label{r:bounded}\label{R:BOUNDED}
Recall that
\begin{equation}\label{e:bound-on-Biot-Savart}
\|K_2* \omega (\cdot, t)\|_{L^\infty}\leq C (p) (\|\omega (\cdot, t)\|_{L^1} + \|\omega (\cdot, t)\|_{L^p})
\end{equation}
whenever $p>2$ (cf. the Appendix for the proof). Therefore we conclude that each solution $v$ in Theorem \ref{thm:main} is bounded on $\mathbb R^2\times [0,T]$ for every positive $T$.
\end{remark}
The above groundbreaking result was proved by Vishik in the two papers \cite{Vishik1} and \cite{Vishik2} (upon which these notes are heavily based) and answers a long-standing open question in the PDE theory of the incompressible Euler equations, as it shows that it is impossible to extend to the $L^p$ scale the following classical uniqueness result of Yudovich.
\begin{theorem}\label{thm:Yudo}\label{THM:YUDO}
Consider a strictly positive $T$, an initial vorticity $\omega_0 \in L^1\cap L^\infty$ with $v_0=K_2*\omega_0 \in L^2$ and an external force $f\in L^1 ([0,T]; L^1\cap L^\infty)$ with $K_2*f\in L^1 ([0,T]; L^2)$. Then there is a unique solution $\omega$ of \eqref{e:Euler} and \eqref{e:Cauchy} on $\mathbb R^2\times [0, T]$ satisfying the estimates $\omega \in L^\infty ([0,T], L^1\cap L^\infty)$ and $v = K_2* \omega \in L^\infty ([0,T], L^2)$.
\end{theorem}
The above theorem in a bounded domain was originally proven by Yudovich in 1963 \cite{Yudovich1963}, who also proved a somewhat more technical statement on unbounded domains. We have not been able to find an exact reference for the statement above (cf. for instance \cite[Theorem 8.2]{MajdaBertozzi} and the paragraph right afterwards, where the authors point out the validity of the Theorem in the case of $f=0$). We therefore give a detailed proof in the appendix for the reader's convenience.
\begin{remark}\label{r:A-priori-estimates}
We recall that the solution of Theorem \ref{thm:Yudo} satisfies a set of important a priori estimates, which can be justified using the uniqueness part and a simple approximation procedure. Indeed if $(\omega, v)$ is a smooth solution of \eqref{e:Euler}, then the method of characteristics shows that, for every $t$, there exists a family of volume-preserving diffeomorphisms $T_s:\ensuremath{\mathbb R}^2\to\ensuremath{\mathbb R}^2, s\in[0, t]$, such that \begin{equation*}
\omega(x, t)=\omega_0(T_0 x) + \int_0^t f(T_s x, s)\,\mathrm ds.
\end{equation*}
Therefore, since volume-preserving diffeomorphisms preserve all $L^q$ norms, we get, for all $q\in[1,\infty]$,
\begin{equation*}
\norm{\omega(\cdot, t)}_{L^q}\le\norm{\omega_0}_{L^q}+\int_0^t \norm{f(\cdot, s)}_{L^q}\,\mathrm ds.
\end{equation*}
Furthermore, a usual integration by parts argument, as seen in \cite[Lemma 1.1]{Yudovich1963}, shows that $v$ satisfies the estimate
\begin{equation*}
\norm{v(\cdot, t)}_{L^2}\le\norm{v_0}_{L^2}+\int_0^t \norm{K_2* f(\cdot,s)}_{L^2}\,\mathrm ds.
\end{equation*}
\end{remark}
\begin{remark}\label{r:well-defined}
Recall that the Biot-Savart kernel is given by the formula
\begin{equation}\label{e:Biot-Savart}
K_2 (x_1, x_2) = \frac{x^\perp}{2\pi |x|^2} = \frac{1}{2\pi |x|^2} (-x_2, x_1)\, .
\end{equation}
In particular, while $K_2\not \in L^p$ for any $p$, it can be easily broken into
\begin{equation}\label{e:decomposition-Biot-Savart-Kernel}
K_2 = K_2 \mathbf{1}_{B_1} + K_2 \mathbf{1}_{B_1^c}\, ,
\end{equation}
where $B_1$ denotes the unit ball around $0$.
Observe that $K_2 \mathbf{1}_{B_1} \in L^q$ for every $q\in [1,2[$ and $K_2 \mathbf{1}_{B_1^c}\in L^r$ for every $r\in ]2, \infty]$. Under the assumption that $\omega \in L^{2-\delta}$ for some positive $\delta >0$, this decomposition allows us to define the convolution $K_2* \omega$ as $(K_2 \mathbf{1}_{B_1}) * \omega
+ (K_2 \mathbf{1}_{B_1^c}) * \omega$, where each separate summand makes sense as Lebesgue integrals thanks to Young's convolution inequality.\footnote{Young's convolution inequality states that, if $g_1\in L^{p_1}$ and $g_2\in L^{p_2}$ with $1\leq \frac{1}{p_1} + \frac{1}{p_2} \leq 2$, then $g_1 (y-\cdot) g_2 (\cdot)$ belongs to $L^1$ for a.e. $y$ and $g_1* g_2\in L^r$ for $\frac{1}{r}=\frac{1}{p_1} + \frac{1}{p_2} -1$.}
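To make the integrability claims explicit (an elementary computation which we record only for the reader's convenience): since $|K_2 (x)| = (2\pi |x|)^{-1}$, passing to polar coordinates gives
\[
\int_{B_1} |K_2|^q\, \mathrm dx = (2\pi)^{1-q} \int_0^1 \rho^{1-q}\, \mathrm d\rho < \infty \iff q < 2\, ,
\qquad
\int_{B_1^c} |K_2|^r\, \mathrm dx = (2\pi)^{1-r} \int_1^\infty \rho^{1-r}\, \mathrm d\rho < \infty \iff r > 2
\]
(the case $r=\infty$ being obvious, since $|K_2|\leq (2\pi)^{-1}$ on $B_1^c$). One admissible choice of exponents in Young's inequality is then $K_2 \mathbf{1}_{B_1}\in L^1$ paired with $\omega\in L^{2-\delta}$ for the first summand, and $K_2 \mathbf{1}_{B_1^c}\in L^\infty$ paired with $\omega\in L^1$ for the second.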
On the other hand we caution the reader that, for general $\omega\in L^2$, $K_2*\omega$ may not be well-defined. More precisely, if we denote by $\mathscr{S}$\index{aalSscript@$\mathscr{S}$} the \index{Schwartz space@Schwartz space $\mathscr{S}$} Schwartz space of rapidly decaying smooth functions and by $\mathscr{S}'$\index{aalSscript'@$\mathscr{S}'$} the space of tempered distributions \index{tempered distribution@tempered distribution}\index{space of tempered distributions@space $\mathscr{S}'$ of tempered distributions} (endowed, respectively, with their classical Fr\'echet and weak topologies), it can be shown that there is no continuous extension of the operator $\mathscr{S} \ni \omega \mapsto K_2 *\omega\in \mathscr{S}'$ to a continuous operator from $L^2$ to $\mathscr{S}'$, cf. Remark \ref{r:Camillo_dumb}.
This fact creates some technical issues in many arguments where we will indeed need to consider a suitable continuous extension of the operator $\omega \mapsto K_2*\omega$ to {\em some} closed linear subspace of $L^2$, namely, $m$-fold rotationally symmetric functions in $L^2$ (for some integer $m\geq 2$). Such an extension will be shown to exist thanks to some special structural properties of the subspace.
\end{remark}
\section{Idea of the proof}
We now describe, briefly, the rough idea of and motivation for the proof. An extensive description of the proof with precise statements can be found in Chapter~\ref{chapter:general}, which breaks down the whole argument into three separate (and independent) parts. The subsequent three chapters are then dedicated to the detailed proofs.
First, we recall two essential features of the two-dimensional Euler equations:
\begin{enumerate}
\item \emph{Steady states}. The two-dimensional Euler equations possess a large class of explicit, radially symmetric steady states called \emph{vortices}:\footnote{They are sometimes also called rotating or circular flows.}
\begin{equation}
\label{eq:vorticesdef}
\bar{\omega}(x) = g(|x|), \quad \bar{v}(x) = \zeta(|x|) x^\perp.
\end{equation}
\item \emph{Scaling symmetry}. The Euler equations possess a two-parameter scaling symmetry: If $(\omega,v)$ is a solution of~\eqref{e:Euler} with vorticity forcing $f$, and $\lambda, \mu > 0$, then
\begin{equation}
\omega_{\lambda,\mu}(x,t) = \mu \omega(\lambda x, \mu t), \quad v_{\lambda,\mu}(x,t) = \frac{\mu}{\lambda} v(\lambda x, \mu t),
\end{equation}
define a solution with vorticity forcing
\begin{equation}
f_{\lambda,\mu}(x,t) = \mu^2 f(\lambda x, \mu t).
\end{equation}
The scaling symmetry, whose elementary verification is sketched right after this list, corresponds to the physical dimensions
\begin{equation}
[x] = L,\quad [t] = T, \quad [v] = \frac{L}{T}, \quad [\omega] = \frac{1}{T}, \quad \text{ and } \quad [f] = \frac{1}{T^2}.
\end{equation}
\end{enumerate}
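The verification of the scaling symmetry is a straightforward chain-rule computation:
\[
\partial_t \omega_{\lambda,\mu} + v_{\lambda,\mu}\cdot \nabla_x \omega_{\lambda,\mu}
= \mu^2 (\partial_t \omega)(\lambda x, \mu t) + \frac{\mu}{\lambda}\, v(\lambda x, \mu t)\cdot \lambda\mu\, (\nabla\omega)(\lambda x, \mu t)
= \mu^2 f(\lambda x, \mu t) = f_{\lambda,\mu}(x,t)\, ,
\]
while the relation $v_{\lambda,\mu} = K_2 * \omega_{\lambda,\mu}$ follows from the $(-1)$-homogeneity of $K_2$ after the change of variables $z = \lambda y$ in the convolution.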
We now elaborate on the above two features:
\smallskip
\emph{1. Unstable vortices}. The stability analysis of shear flows $u = (b(y),0)$ and vortices~\eqref{eq:vorticesdef} is classical, with seminal contributions due to Rayleigh~\cite{Rayleigh1879}, Kelvin~\cite{Thomson1880}, Orr~\cite{Orr}, and many others. The linearized Euler equations around the background vortex $\bar{\omega}$ are
\begin{equation}
\label{eq:linearizedeulerintro}
\partial_t \omega - L_{\rm st} \omega :=
\partial_t \omega + \zeta(r) \partial_\theta \omega + (v \cdot e_r) g'(r) = 0, \quad v = K_2 \ast \omega.
\end{equation}
Consider the eigenvalue problem associated to the linearized operator $L_{\rm st}$. It suffices to consider $\psi = e^{ik\theta} \psi_k(|x|)$, $k \geq 0$, the stream function associated to a vorticity perturbation $\omega$ (that is, $\Delta \psi = \omega$). It is convenient to pass to an exponential variable $s = \log r$ and define $\phi(s) = \psi_k(e^s)$; $A(s) = e^s g'(e^s)$ ($r \; \times$ the radial derivative of the background vorticity); and $\Xi(s) = \zeta(e^s)$ (the differential rotation). The eigenvalue problem for $L_{\rm st}$, with eigenvalue $\lambda = -ikz$, can be rewritten as
\begin{equation}
\label{eq:theeigenvaluequation}
\left( \Xi(s) - z \right) \left( \frac{d^2}{ds^2}- k^2 \right) \phi - A(s) \phi = 0.
\end{equation}
This is \emph{Rayleigh's stability equation}. The eigenvalue $\lambda$ is unstable when ${\rm Im}(z) > 0$, in which case we can divide by $\Xi - z$ and analyze a steady Schr{\"o}dinger equation. It is possible to understand~\eqref{eq:theeigenvaluequation} well enough to design vortices for which the corresponding linear operator has an unstable eigenfunction. For shear flows, this analysis goes back to Tollmien~\cite{Tollmien}. The problem was treated rigorously by Z.~Lin~\cite{LinSIMA2003} for bounded and unbounded shear flows and rotating flows in an annulus.\footnote{For those interested in hydrodynamic stability more generally, see the classic monograph~\cite{DrazinReid}. Chapter~4 therein concerns the stability of shear flows, including Rayleigh's criterion and a sketch of Tollmien's idea.}
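For the reader's orientation we sketch how \eqref{eq:theeigenvaluequation} arises. With the convention $v = \nabla^\perp \psi = (-\partial_{x_2}\psi, \partial_{x_1}\psi)$, which is the one consistent with \eqref{e:Biot-Savart}, we have $v\cdot e_r = -\frac{1}{r}\partial_\theta \psi$. Hence, writing $\omega = e^{ik\theta}\omega_k(r)$ with $k\geq 1$, the eigenvalue equation $L_{\rm st}\,\omega = \lambda \omega$ with $\lambda = -ikz$ becomes, after dividing by $-ik$,
\[
\left(\zeta(r) - z\right) \omega_k (r) - \frac{g'(r)}{r}\, \psi_k (r) = 0\, , \qquad \text{where } \omega_k = \psi_k'' + \frac{\psi_k'}{r} - \frac{k^2}{r^2}\psi_k\, .
\]
Multiplying by $r^2$ and observing that $r^2 \big(\psi_k'' + \frac{\psi_k'}{r} - \frac{k^2}{r^2}\psi_k\big) = \phi''(s) - k^2 \phi(s)$ in the variable $s = \log r$, one obtains precisely \eqref{eq:theeigenvaluequation}.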
The case of unbounded vortices, which is the crucial one for the purposes of these notes, was treated by Vishik in~\cite{Vishik2}, see Chapter~\ref{chapter:linearpartii} below. In the cases relevant to these notes, $L_{\rm st}$ has at least one unstable eigenvalue $\lambda$. While the latter could well be real, for the sake of our argument let us assume that it is a complex number $\lambda= a_0 + b_0 i$ ($a_0, b_0 > 0$) and let $\bar\lambda = a_0- b_0 i$ be its complex conjugate. If we denote by $\eta$ and $\bar{\eta}$ two corresponding (nontrivial) eigenfunctions, it can be checked that they are {\em not} radially symmetric.
With the unstable modes in hand, one may seek a trajectory on the \emph{unstable manifold} associated to $\lambda$ and $\bar{\lambda}$. For example, one such trajectory may look like
\begin{equation}
\omega = \bar{\omega} + \omega^{\rm lin} + o(e^{a_0 t}),
\end{equation}
where $\omega^{\rm lin} = {\rm Re}(e^{\lambda t} \eta)$ is a solution of the linearized Euler equations~\eqref{eq:linearizedeulerintro}. These solutions converge to $\bar{\omega}$ exponentially in backward time.
Hence, we expect that certain unstable vortices exhibit a kind of \emph{non-uniqueness at time $t = -\infty$} and moreover break the radial symmetry. The existence of unstable manifolds associated to a general class of Euler flows in dimension $n \geq 2$ was demonstrated by Lin and Zeng~\cite{LinZengCPAM2013,LinZengCorrigendum2014}.\footnote{There is a substantial mathematical literature on the nonlinear instability of Euler flows, see~\cite{friedlanderstraussvishikearly,FriedlanderHoward,friedlanderstraussvishik,bardosguostrauss,friedlandervishik,linnonlinear}.}
\smallskip
\emph{2. Self-similar solutions}. It is natural to consider solutions invariant under the scaling symmetry and, in particular, it is natural to consider those self-similar solutions which live exactly at the desired integrability. If we fix a relationship $L^\alpha \sim T$ in the scaling symmetries, the similarity variables are\footnote{We may regard the logarithmic time as $\tau = \log (t/t_0)$, so that $t$ is non-dimensionalized according to a fixed reference time $t_0 = 1$.}
\begin{equation}\label{e:self-similar-scaling}
\xi = \frac{x}{t^{\frac{1}{\alpha}}}, \quad \tau = \log t
\end{equation}
\begin{equation}
v(x,t) = \frac{1}{t^{1-\frac{1}{\alpha}}} V(\xi, \tau), \quad \omega(x,t) = \frac{1}{t} \Omega(\xi, \tau).
\end{equation}
Notice that physical time $t=0$ corresponds to logarithmic time $\tau = -\infty$. The function $\Omega$ is known as the \emph{profile}.
The Euler equations, without force, in similarity variables are
\begin{equation}
\label{eq:similarityvareulereqns}
\left\{\begin{array}{ll}
&\partial_\tau \Omega - \left( 1 + \frac{\xi}{\alpha} \cdot \nabla_\xi \right) \Omega + V \cdot \nabla_\xi \Omega = 0\\ \\
& V= K_2* \Omega \, .
\end{array}\right.
\end{equation}
Profiles $\Omega$ satisfying $\| \Omega(\cdot,\tau) \|_{L^p} = O(1)$ as $\tau \to -\infty$ satisfy $\| \omega(\cdot,t) \|_{L^p} = O(t^{-1+\frac{2}{\alpha p}})$ as $t \to 0^+$, and similarly in the weak $L^p$ norms. Hence, the Lebesgue and weak Lebesgue norms with $p = 2/\alpha$ would be $O(1)$ in either set of variables. To show sharpness of the Yudovich class, we consider $0 < \alpha \ll 1$.
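Indeed, changing variables $x = t^{1/\alpha}\xi$ in the integral,
\[
\|\omega (\cdot, t)\|_{L^p}^p = \frac{1}{t^p} \int_{\mathbb R^2} \left|\Omega\left(\frac{x}{t^{1/\alpha}}, \tau\right)\right|^p \mathrm dx
= t^{\frac{2}{\alpha} - p}\, \|\Omega (\cdot, \tau)\|_{L^p}^p\, ,
\]
so that $\|\omega (\cdot, t)\|_{L^p} = t^{-1 + \frac{2}{\alpha p}} \|\Omega (\cdot, \tau)\|_{L^p}$, and the exponent vanishes exactly when $p = 2/\alpha$.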
\bigskip
The route to non-uniqueness through unstable vortices and self-similar solutions is as follows: Suppose that $\bar{\Omega}$ is an unstable steady state of the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} (in particular, $\bar{\omega}(x,t) = t^{-1} \bar{\Omega}(\xi)$ is a self-similar solution of the usual Euler equations). Find a trajectory $\Omega$ on the unstable manifold associated to $\bar{\Omega}$. In similarity variables, the steady state $\bar{\Omega}$ will be ``non-unique at minus infinity", which corresponds to non-uniqueness at time $t=0$ in the physical variables.
One natural class of background profiles $\bar{\Omega}$ consists of \emph{power-law vortices} $\bar{\omega} = \beta |x|^{-\alpha}$, $\beta \in \ensuremath{\mathbb R}$, which are simultaneously steady solutions and self-similar solutions without force. At present, we do not know whether the above strategy can be implemented with power-law vortices.
Instead, we choose a smooth vortex profile $g(|x|)$, with power-law decay as $|x| \to +\infty$, which is unstable for the Euler dynamics. Our background will be the self-similar solution with profile $\bar{\Omega} = g(|\xi|)$, which solves the Euler equations \emph{with a self-similar force}. This profile may be considered a well-designed smoothing of a power-law vortex. When the background is large, it is reasonable to expect that the additional term in the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} can be treated perturbatively, so that $g(|\xi|)$ will also be unstable for the similarity variable Euler dynamics. This heuristic is justified in Chapter~\ref{chapter:linearparti}.
In order to ensure that the solutions have finite energy, we also truncate the background velocity at distance $O(1)$ in physical space. This generates a different force. The truncation's contribution to the force is smooth and heuristically does not destroy the non-uniqueness, which can be thought of as ``emerging" from the singularity at the space-time origin. Our precise Ansatz is~\eqref{e:Ansatz-1}, which is the heart of the nonlinear part of these notes.
\section{Differences with Vishik's work}
While we follow the strategy of Vishik in~\cite{Vishik1,Vishik2}, we deviate from his proof in some ways. We start by listing two changes which, although rather minor, affect the presentation substantially.
\begin{enumerate}
\item We decouple the parameter $\alpha$ in~\eqref{e:self-similar-scaling} governing the self-similar scaling from the decay rate $\bar{\alpha}$ of the smooth profile $g$ at infinity. In \cite{Vishik1} these two parameters are equal; however, it is rather obvious that the argument goes through as long as $\alpha \leq \bar \alpha$. If we then choose $\alpha < \bar \alpha$ the resulting solution has zero initial data. This is a very minor remark, but it showcases the primary role played by the forcing $f$ in the equation.
\item Strictly speaking Vishik's Ansatz for the ``background solution'' is in fact different from our Ansatz (even taking into account the truncation at infinity). The interested reader might compare \eqref{e:tilde-v} and \eqref{e:curl-tilde-v} with \cite[(6.3)]{Vishik1}. Note in particular that the coordinates used in \cite{Vishik1} are not really \eqref{e:self-similar-scaling} but rather a more complicated variant. Moreover, Vishik's Ansatz contains a parameter $\varepsilon$, whose precise role is perhaps not initially transparent, and which is ultimately scaled away in~\cite[Chapter 9]{Vishik1}. This obscures that the whole approach hinges on finding a solution $\Omega$ of a truncated version of \eqref{eq:similarityvareulereqns}
asymptotic to the unstable manifold of the steady state $\bar \Omega$ at $-\infty$.
In our case, $\Omega$ is constructed by solving appropriate initial value problems for the truncated version of \eqref{eq:similarityvareulereqns} at negative times $-k$ and then taking their limit; this plays the role of Vishik's parameter~$\varepsilon$.
\end{enumerate}
We next list two more ways in which our notes deviate from \cite{Vishik1,Vishik2}. These differences are much more substantial.
\begin{enumerate}
\item[(3)] The crucial nonlinear estimate in the proof of Theorem \ref{thm:main} (cf. \eqref{e:asymptotic-in-t} and the more refined version \eqref{e:asymptotic-in-t-2}), which shows that the solution $\Omega$ is asymptotic, at minus infinity, to an unstable solution of the linearized equation, is proved in a rather different way. In particular our argument is completely Eulerian and based on energy estimates, while a portion of Vishik's proof relies in a crucial way on the Lagrangian formulation of the equation. The approach introduced here will be exploited by the first and third author in their forthcoming work~\cite{AC} and we believe it might be useful in other contexts.
\item[(4)] Another technical, but crucial, difference, concerns the simplicity of the unstable eigenvalue $\eta$. While Vishik claims such simplicity in \cite{Vishik2}, {the argument given in the latter reference is actually incomplete. After we pointed out the gap to him, he provided a clever way to fill it in \cite{Vishik3}}. These notes point out that such simplicity is not really needed in the nonlinear part of the analysis: in fact a much weaker linear analysis than the complete one carried in \cite{Vishik2} {is already enough to close the argument for Theorem \ref{thm:main}. However, for completeness and for the interested readers, we include in Appendix~\ref{a:better} the necessary additional arguments needed to conclude the more precise description of \cite{Vishik2}.}
\end{enumerate}
\section{Further remarks}\label{s:final-remarks}
Recently, Bressan, Murray, and Shen investigated in \cite{BressanAposteriori,BressanSelfSimilar} a different non-uniqueness scenario for~\eqref{e:Euler} which would demonstrate sharpness of the Yudovich class without a force. The scenario therein, partially inspired by the works of Elling~\cite{EllingAlgebraicSpiral,EllingSelfSimilar}, is also based on self-similarity and symmetry breaking but follows a different route.
Self-similarity and symmetry breaking moreover play a central role in the work of Jia, {\v S}ver{\'a}k, and Guillod~\cite{JiaSverakInventiones,JiaSverakIllposed,guillod2017numerical} on the conjectural non-uniqueness of weak Leray-Hopf solutions of the Navier-Stokes equations. One crucial difficulty in~\cite{JiaSverakIllposed}, compared to Vishik's approach, is that the self-similar solutions in~\cite{JiaSverakIllposed} are far from explicit. Therefore, the spectral condition therein seems difficult to verify analytically, although it has been checked with non-rigorous numerics in~\cite{guillod2017numerical}. The work~\cite{JiaSverakIllposed} already contains a version of the unstable manifold approach, see p. 3759--3760, and a truncation to finite energy.
At present, the above two programs, while very intriguing and highly suggestive, require a significant numerical component not present in Vishik's approach. On the other hand, at present, Vishik's approach includes a forcing term absent from the above two programs, whose primary role is showcased by the fact that the initial data can be taken to be zero.
\bigskip
Much of the recent progress on non-uniqueness of the Euler equations has been driven by Onsager's conjecture, which was solved in~\cite{IsettOnsager}. With Theorem \ref{thm:main} in hand, we can now summarize the situation for the Euler equations {in dimension three as follows:}
\begin{itemize}[leftmargin=*]
\item $\alpha \in (1,2)$: (\emph{Local well-posedness and energy conservation}) For each divergence-free $u_0 \in C^{\alpha}{(\mathbb{T}^3)}$ and force $f \in L^1(]0,T[;C^\alpha{(\mathbb{T}^3)})$, there exists $T' \in ]0,T[$ and a unique local-in-time solution $u \in L^\infty(]0,T'[;C^\alpha{(\mathbb{T}^3)})$. The solution $u$ depends continuously\footnote{The continuous dependence is more subtle for quasilinear equations than semilinear equations, and uniform continuity is not guaranteed in the regularity class in which the solutions are found, see the discussion in~\cite{taonotes}. One can see this at the level of the equation for the difference of two solutions $u^{(1)}$ and $u^{(2)}$: One of the solutions becomes the ``background" and, hence, loses a derivative. One way to recover the continuous dependence stated above is to compare the above two solutions with initial data $u_0^{(1)}$, $u_0^{(2)}$ and forcing terms $f^{(1)}$, $f^{(2)}$ to approximate solutions $u^{(1),\varepsilon}$, $u^{(2),\varepsilon}$ with mollified initial data $u_0^{(1),\varepsilon}$, $u_0^{(2),\varepsilon}$ and mollified forcing terms $f^{(1),\varepsilon}$, $f^{(2),\varepsilon}$. One then estimates $\| u^{(1)} - u^{(2)} \| \leq \| u^{(1)} - u^{(1),\varepsilon} \| + \| u^{(1),\varepsilon} - u^{(2),\varepsilon} \| + \| u^{(2),\varepsilon} - u^{(2)} \|$. The approximate solutions, which are more regular, are allowed to lose derivatives in a controlled way.} in the above class on its initial data and forcing term. Moreover, the solution $u$ conserves energy.
\item $1/3 < \alpha < 1$: (\emph{Non-uniqueness and energy conservation}) There exist $T > 0$, a force $f \in L^1(]0,T[;{L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T})})$, and two distinct weak solutions {$u_1,u_2 \in L^\infty(]0,T[;L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T}))$} to the Euler equations with zero initial data and force $f$. For any $T>0$, weak solutions $u \in L^\infty(]0,T[;{L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T})})$ with forcing in the above class conserve energy~\cite{constantinetiti}.
\item $0 < \alpha < 1/3$: (\emph{Non-uniqueness and anomalous dissipation}) There exist $T>0$ and two distinct admissible weak solutions (see~\cite{OnsagerAdmissible}) $u_1,u_2 \in L^\infty(]0,T[;C^\alpha{(\mathbb{T}^3)})$ to the Euler equations with the same initial data and zero force and which moreover dissipate energy.
\end{itemize}
While we are not aware of the first two statements with force in the literature, the proofs are easy adaptations of those with zero force. In order to obtain the non-uniqueness statement in the region $1/3 < \alpha < 1$, one can {extend the non-unique solutions on $\ensuremath{\mathbb R}^2$ to be constant in the $x_3$ direction.}
The borderline cases may be sensitive to the function spaces in question. For example, the three-dimensional Euler equations are ill-posed in $C^k$, $k \geq 1$~\cite{bougainillposed}. Furthermore, of the above statements, only the negative direction of Onsager's conjecture is open in $n=2$.
We finally point out that an expanded version of these notes is contained in the master's thesis of the sixth author, cf. \cite{Maximilian-thesis}.
\chapter{Linear theory: Part I}
\label{chapter:linearparti}
In this chapter, we will reduce Theorem \ref{thm:spectral} to an analogous spectral result for another differential operator, and we will also show an important corollary of Theorem \ref{thm:spectral} concerning the semigroup that it generates. We start by giving the two relevant statements, but first we need to introduce some notation and terminology.
First of all, in the rest of the chapter we will always assume that the positive integer $m$ is at least $2$. We then introduce a new (closed and densely defined) operator on $L^2_m$, which we will denote by $L_{\text{st}}$. The operator is defined by\index{aalLst@$L_{\text{st}}$}
\begin{equation}
L_{\text{st}} (\Omega) = - {\rm div}\, (\bar V \Omega) - (K_2*\Omega \cdot \nabla) \bar \Omega\,
\end{equation}
and (recalling that the operator $\Omega\mapsto (K_2* \Omega\cdot \nabla) \bar \Omega$ is bounded and compact, as will be shown below) its domain in $L^2_m$ is given by
\begin{equation}
D_m (L_{\text{st}}) = \{\Omega\in L^2_m : {\rm div}\, (\bar V \Omega)\in L^2\}\, .
\end{equation}
The key underlying idea behind the introduction of $L_{\text{st}}$ is that we can write $L_{\text{ss}}$ as
\[
L_{\text{ss}} = \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) + \beta L_{\text{st}} \,
\]
and since $\beta$ will be chosen very large, we will basically study the spectrum of $L_{\text{ss}}$ as a perturbation of the spectrum of $\beta L_{\text{st}}$. In particular Theorem \ref{thm:spectral} will be derived from a more precise spectral analysis of $L_{\text{st}}$. Before coming to it, we split the space $L^2_m$ into an appropriate infinite sum of closed orthogonal subspaces.
First of all, if we fix an element $\vartheta\in L^2 (\mathbb R^2)$ and we introduce the polar coordinates $(\theta, r)$ through $x= r (\cos \theta , \sin \theta)$, we can then use the Fourier expansion to write
\begin{equation}\label{e:Fourier}
\vartheta (x) =\sum_{k\in \mathbb Z} a_k (r) e^{ik\theta}\,
\end{equation}
where
\[
a_k (r) := \frac{1}{2\pi} \int_0^{2\pi} \vartheta(r \cos(\theta),r\sin(\theta)) e^{-ik\theta}\,\mathrm d\theta .
\]
By Plancherel's formula,
\[
\|\vartheta\|_{L^2 (\ensuremath{\mathbb R}^2)}^2 = 2\pi \sum_{k\in \mathbb Z} \|a_k\|^2_{L^2 (\ensuremath{\mathbb R}^+, r\,\mathrm dr)}\, .
\]
In particular it will be convenient to introduce the subspaces\index{aalUk@$U_k$}
\begin{equation}
U_k :=\{f(r) e^{ik\theta} : f \in L^2 (\mathbb R^+, r\,\mathrm dr)\}\, .
\end{equation}
Each $U_k$ is a closed subspace of $L^2$, distinct $U_k$'s are orthogonal to each other and moreover
\begin{equation}\label{e:Fourier-2}
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, .
\end{equation}
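The identity \eqref{e:Fourier-2} is just the Fourier-side description of $m$-fold symmetry: if $R_{2\pi/m}$ denotes the counterclockwise rotation by the angle $2\pi/m$, then
\[
\vartheta (R_{2\pi/m}\, x) = \sum_{k\in \mathbb Z} a_k (r)\, e^{ik (\theta + 2\pi/m)} = \sum_{k \in \mathbb Z} e^{2\pi i k/m}\, a_k (r)\, e^{ik\theta}\, ,
\]
so that $\vartheta$ is $m$-fold rotationally symmetric if and only if $a_k$ vanishes for every $k$ which is not an integer multiple of $m$.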
Each $U_{km}$ is an invariant subspace of $L_{\text{st}}$, and it can be easily checked that $U_{km}\subset D_m (L_{\text{st}})$ and that indeed the restriction of $L_{\text{st}}$ to $U_{km}$ is a bounded operator. Following the same convention as for $L_{\text{ss}}$ we will denote by ${\rm spec}_m\, (L_{\text{st}})$ the spectrum of $L_{\text{st}}$ on $L^2_m$.
\begin{theorem}\label{thm:spectral2}\label{THM:SPECTRAL2}
For every $m\geq 2$ and every $\bar\Omega$
we have
\begin{itemize}
\item[(a)] each $z_i\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re} \,z\, \neq 0\}$ belongs to the discrete spectrum and if ${\rm Im}\, (z_i)=0$, then there is a nontrivial real eigenfunction relative to $z_i$.
\end{itemize}
Moreover, for an appropriate choice of $\bar\Omega$ there is an integer $m\geq 2$ such that:
\begin{itemize}
\item[(b)] ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is nonempty.
\end{itemize}
\end{theorem}
\begin{remark}\label{r:better}\label{R:BETTER}
The theorem stated above contains the minimal amount of information that we need to complete the proof of Theorem \ref{thm:main2}. With more work we can however infer some additional conclusions; more precisely, we can show that
\begin{itemize}
\item[(c)] $m$ can be chosen so that, in addition to (b), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is finite and the image of the Riesz projector\footnote{Recall that in the case of an isolated point $z$ in the spectrum of a closed, densely defined operator $A$, the Riesz projector is defined as
\[
\frac{1}{2\pi i} \int_\gamma (w -A)^{-1}\, dw
\]
for any simple closed rectifiable contour $\gamma$ bounding a closed disk $D$ with $D \cap {\rm spec}\, (A) = \{z\}$. For an element of the discrete spectrum the Riesz projector has finite rank (the algebraic multiplicity of the eigenvalue $z$).}\index{Riesz projector}\index{aalPz@$P_z$}
$P_{z}$ of $L_{\text{st}}$ relative to each $z\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is contained in $U_m\cup U_{-m}$.
\end{itemize}
Since this property is not needed to prove Theorem~\ref{thm:main2} we defer its proof to Appendix~\ref{a:better}.
\end{remark}
{\color{red} In \cite{Vishik2} Vishik claims the following greatly improved statement.
\begin{theorem}\label{thm:spectral-stronger}\label{THM:SPECTRAL-STRONGER}
For a suitable $\bar \Omega$:
\begin{itemize}
\item[(c')] $m$ can be chosen so that, in addition to (b) and (c), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}\cap U_m$ consists of a single element, with algebraic multiplicity $1$ in $U_m$.
\end{itemize}
\end{theorem}
Since the spectrum of $L_{\text{st}}$ is invariant under complex conjugation, (b), (c), and (c') imply that ${\rm spec}_m\, (L_{\text{st}})\cap \{{\rm Re}\, z>0\}$ consists either of a single real eigenvalue or of two complex conjugate eigenvalues. In the first case, the algebraic and geometric multiplicity of the eigenvalue is $2$ and the space of eigenfunctions has a basis consisting of an element of $U_m$ and its complex conjugate in $U_{-m}$. In the second case the two eigenvalues $z$ and $\bar z$ have algebraic multiplicity $1$ and their eigenspaces are generated, respectively, by an element of $U_m$ and its complex conjugate in $U_{-m}$.
The argument given in \cite{Vishik2} for (c') is however not complete. Vishik provided later (\cite{Vishik3}) a way to close the gap. In Appendix~\ref{a:better} we will give a proof of Theorem \ref{thm:spectral-stronger} along his lines.}
\medskip
In this chapter we also derive an important consequence of Theorem \ref{thm:spectral} for the semigroup generated by $L_{\text{ss}}$.\index{Semigroup}
\begin{theorem}\label{t:group}\label{T:GROUP}
For every $m\geq 2$, $L_{\text{ss}}$ is the generator of a strongly continuous semigroup on $L^2_m$ which will be denoted by $e^{\tau L_{\text{ss}}}$\index{aalEtauLss@$e^{\tau L_{\text{ss}}}$}, and the growth bound $\omega (L_{\text{ss}})$ of $e^{\tau L_{\text{ss}}}$ equals\index{Semigroup, growth bound}
\[
a_0 := \sup \{{\rm Re}\, z_0 : z_0\in {\rm spec}_m (L_{\text{ss}})\}\, <\infty\,
\]
if $a_0 \geq 1-\frac{1}{\alpha}$.
In other words, for every $\delta>0$, there is a constant $M (\delta)$ with the property that
\begin{equation}\label{e:growth-bound}
\left\|e^{\tau L_{\text{ss}}} \Omega\right\|_{L^2} \leq M (\delta) e^{(a_0 +\delta) \tau} \|\Omega\|_{L^2}
\qquad \qquad \forall \tau\geq 0,\, \forall \Omega\in L^2_m\, .
\end{equation}
\end{theorem}
\section{Preliminaries}\label{s:abstract-operators}
In this section we start with some preliminaries which will take advantage of several structural properties of the operators $L_{\text{ss}}$ and $L_{\text{st}}$. First of all we decompose $L_{\text{st}}$ as
\begin{equation}\label{e:decompo_L_st}
L_{\text{st}} = S_1 + \mathscr{K}\, ,
\end{equation}\index{aalK@$\mathscr{K}$}\index{aalS1@$S_1$}where
\begin{align}
S_1 (\Omega)&:= - {\rm div}\, (\bar V \Omega)\label{e:S1}\\
\mathscr{K} (\Omega) &:= - (K_2*\Omega \cdot \nabla) \bar \Omega \label{e:compatto}\, .
\end{align}
Hence we introduce the operator\index{aalS2@$S_2$}
\begin{equation}
S_2 (\Omega) := {\rm div} \left(\left(\frac{\xi}{\alpha} - \beta \bar V\right) \Omega\right) - \frac{\Omega}{\alpha}\, ,
\end{equation}
so that we can decompose $L_{\text{ss}}$ as
\begin{equation}
L_{\text{ss}} = \left(1-\frac{1}{\alpha}\right) + S_2 + \beta \mathscr{K}\, .
\end{equation}
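Note that this decomposition is consistent with the identity $L_{\text{ss}} = \left(1+\frac{\xi}{\alpha}\cdot \nabla\right) + \beta L_{\text{st}}$ written above: since ${\rm div}\, \xi = 2$ on $\mathbb R^2$,
\[
\left(1-\frac{1}{\alpha}\right)\Omega + S_2 (\Omega) + \beta \mathscr{K} (\Omega)
= \left(1 - \frac{1}{\alpha} + \frac{2}{\alpha} - \frac{1}{\alpha}\right) \Omega + \frac{\xi}{\alpha}\cdot \nabla \Omega - \beta\, {\rm div}\, (\bar V \Omega) + \beta \mathscr{K} (\Omega)
= \left(1 + \frac{\xi}{\alpha}\cdot \nabla\right) \Omega + \beta L_{\text{st}} (\Omega)\, .
\]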
The domains of the various operators $A$ involved are always understood as $D_m (A):= \{\Omega\in L^2_m : A(\Omega)\in L^2\}$.
Finally, we introduce the real Hilbert spaces $L^2_m (\mathbb R)$ and $U_j (\mathbb R)$ by setting\index{aalL2mR@$L^2_m(\ensuremath{\mathbb R})$}\index{aalUkR@$U_k(\ensuremath{\mathbb R})$}
\begin{align}
L^2_m (\mathbb R) &:= \{{\rm Re}\, \Omega : \Omega \in L^2_m\}\,
\end{align}
and, for $j>0$ natural,
\begin{equation}
U_j (\mathbb R) :=\{{\rm Re}\, \Omega: \Omega \in U_j\}\, .
\end{equation}
Observe that while clearly $L^2_m (\mathbb R)$ is a real subspace of $L^2_m$, $U_j (\mathbb R)$ is a real subspace of $U_j \oplus U_{-j}$.
As is customary, $L^2_m (\mathbb R)$ and its real vector subspaces are endowed with the inner product\index{Rotationally symmetric function space, inner product}
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb R} = \int \Omega\, \Xi\, ,
\end{equation}
while $L^2_m$ and its complex vector subspaces are endowed with the Hermitian product
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb C} = \int ({\rm Re}\, \Omega\, {\rm Re}\, \Xi +
{\rm Im}\, \Omega\, {\rm Im}\, \Xi) + i \int ({\rm Im}\, \Omega\, {\rm Re}\, \Xi - {\rm Re} \, \Omega\, {\rm Im}\, \Xi)\, .
\end{equation}
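Equivalently, in compact form, $\langle \Omega, \Xi\rangle_{\mathbb C} = \int \Omega\, \bar \Xi$, where $\bar \Xi$ denotes the complex conjugate of $\Xi$.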
We will omit the subscripts from $\langle \cdot, \cdot \rangle$ when the underlying field is clear from the context. The following proposition details the important structural properties of the various operators. A closed unbounded operator $A$ on $L^2_m$ will be called \emph{real} if its restriction $A_{\mathbb R}$ to $L^2_m (\mathbb R)$ is a closed, densely defined operator with domain $D_m (A) \cap L^2_m (\mathbb R)$ such that $A(\Omega)\in L^2_m(\mathbb R)$ for all $\Omega\in D_m(A)\cap L^2_m(\mathbb R)$.
\begin{proposition}\label{p:abstract}
\begin{itemize}
\item[(i)] The operators $\mathscr{K}$, $S_1$ and $S_2$ are all real operators.
\item[(ii)] $\mathscr{K}$ is bounded and compact. More precisely there is a sequence of finite dimensional vector spaces $V_n \subset C^\infty_c (\mathbb R^2,\mathbb C)\cap L^2_m$ with the property that, if $P_n$ denotes the orthogonal projection onto $V_n$, then
\begin{equation}\label{e:explicit-approx}
\lim_{n\to\infty} \|\mathscr{K} - P_n\circ \mathscr{K}\|_O = 0\, ,
\end{equation}
where $\|\cdot\|_O$ denotes the operator norm.
\item[(iii)] $S_1$ and $S_2$ are skew-adjoint.
\item[(iv)] $D_m (L_{\text{st}}) = D_m (S_1)$ and $D_m (L_{\text{ss}}) = D_m (S_2)$.
\item[(v)] $U_{km}$ is an invariant subspace of $S_1, S_2, \mathscr{K}, L_{\text{st}}, L_{\text{ss}}$.
\item[(vi)] The restrictions of $S_1$ and $L_{\text{st}}$ to each $U_{km}$ are bounded operators.
\end{itemize}
\end{proposition}
\begin{proof} The verifications of (i), (iii), (iv), (v), and (vi) are all simple and are therefore left to the reader. We thus come to (ii) and prove the compactness of the operator $\mathscr{K}$. Recalling Lemma \ref{l:extension}, for every $\Omega \in L^2_m$ we can write the tempered distribution $\mathscr{K} (\Omega)$ as
\begin{equation}
\mathscr{K} (\Omega) = \nabla \bar \Omega \cdot V
\end{equation}
where $V=V(\Omega)$ is a $W^{1,2}_{\text{loc}}$ function with the properties that
\begin{equation}\label{e:stima-W12}
R^{-1} \|V \|_{L^2 (B_R)} + \|DV\|_{L^2 (B_R)} \leq C \|\Omega\|_{L^2} \qquad \forall R \geq 0\, . \end{equation}
Since $|\nabla \bar \Omega (\xi)| \leq C |\xi|^{-{\bar\alpha} -1}$ for $|\xi|\geq 1$, whenever $R\geq 1$ we can estimate
\begin{align*}
\|\mathscr{K} (\Omega)\|^2_{L^2 (B_R^c)} &= \sum_{j=0}^\infty \|\mathscr{K} (\Omega)\|_{L^2 (B_{2^{j+1} R}\setminus B_{2^j R})}^2 \leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2 (1+{\bar\alpha}) j} \|V\|^2_{L^2 (B_{2^{j+1} R})}\\
&\leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2(1+{\bar\alpha}) j} 2^{2j+2} R^2 \|\Omega\|_{L^2}^2
\leq C R^{-2{\bar\alpha}} \|\Omega\|_{L^2}^2\, .
\end{align*}
This shows at the same time that
\begin{itemize}
\item $\mathscr{K}$ is a bounded operator;
\item If we introduce the operators
\begin{equation}
\Omega \mapsto \mathscr{K}_N (\Omega) := \mathscr{K} (\Omega) \mathbf{1}_{B_N}\, ,
\end{equation}
then $\|\mathscr{K}_N- \mathscr{K}\|_{O}\to 0$.
\end{itemize}
Since the uniform limit of compact operators is a compact operator, it suffices to show that each $\mathscr{K}_N$ is a compact operator. This is however an obvious consequence of \eqref{e:stima-W12} and the compact embedding of $W^{1,2} (B_N)$ into $L^2 (B_N)$.
As for the remainder of the statement (ii), by the classical characterization of compact operators on a Hilbert space, for every $\varepsilon > 0$ there is a finite-rank linear map $L_N$ such that $\|\mathscr{K} - L_N\|_O \leq \frac{\varepsilon}{4}$. If we denote by $W_N$ the image of $L_N$ and by $Q_N$ the orthogonal projection onto it, given that $Q_N \circ L_N = L_N$ we can estimate
\[
\|Q_N \circ \mathscr{K} - \mathscr{K}\|_O \leq \|Q_N \circ \mathscr{K} - Q_N \circ L_N\|_O
+ \|L_N - \mathscr{K}\|_O \leq 2 \|L_N - \mathscr{K}\|_0 \leq \frac{\varepsilon}{2}\, .
\]
Fix next an orthonormal base $\{w_1, \ldots, w_N\}$ of $W_N$ and, using the density of $C^\infty_c (\mathbb R^2)$, approximate each element $w_i$ in the base with $v_i\in C^\infty_c (\mathbb R^2, \mathbb C)$. This can be done for instance convolving $w_i$ with a smooth radial kernel and multiplying by a suitable cut-off function. If the $v_i$'s are taken sufficiently close to $w_i$, the orthogonal projection $P_N$ onto $V_N = {\rm span}\, (v_1, \ldots , v_N)$ satisfies $\|Q_N-P_N\|_O \leq \frac{\varepsilon}{2\|\mathscr{K}\|_O}$ and thus
\[
\|\mathscr{K} - P_N \circ \mathscr{K}\|_O \leq \|\mathscr{K} - Q_N \circ \mathscr{K}\|_O + \|P_N-Q_N\|_O \|\mathscr{K}\|_O \leq \varepsilon\, .\qedhere
\]
\end{proof}
\section{Proof of Theorem \ref{t:group} and proof of Theorem \ref{thm:spectral2}(a)}\label{subsect:Proof-of-spectral2-and-group}
The above structural facts allow us to gain some important consequences as simple corollaries of classical results in spectral theory, which we gather in the next statement. Observe in particular that the statement (a) of Theorem \ref{thm:spectral2} follows from it.
In what follows we take the definition of essential spectrum of an operator as given in \cite{EngelNagel}. We caution the reader that other authors use different definitions; at any rate the main conclusion about the essential spectra of the operators $L_{\text{ss}}$ and $L_{\text{st}}$ in Corollary \ref{c:structural} below depends only upon the property that the essential and discrete spectra are disjoint (which is common to all the different definitions used in the literature).
\begin{corollary}\label{c:structural}
The essential spectrum of $L_{\text{st}}$ and the essential spectrum of $L_{\text{ss}} - \left(1-\frac{1}{\alpha}\right)$ are contained in the imaginary axis, while the remaining part of the spectrum is contained in the discrete spectrum. In particular, every $z\in {\rm spec}_m (L_{\text{st}})$ (resp. $z\in {\rm spec}_m (L_{\text{ss}})$) with nonzero real part (resp. real part different from $1-\frac{1}{\alpha}$) has the following properties.
\begin{itemize}
\item[(i)] $z$ is isolated in ${\rm spec}_m (L_{\text{st}})$ (resp. ${\rm spec}_m (L_{\text{ss}})$);
\item[(ii)] There is at least one nontrivial $\Omega$ such that $L_{\text{st}} (\Omega) = z \Omega$ (resp. $L_{\text{ss}} (\Omega) = z \Omega$) and if ${\rm Im}\, (z)=0$, then $\Omega$ can be chosen to be real-valued;
\item[(iii)] The Riesz projection $P_z$ has finite rank;
\item[(iv)] ${\rm Im}\, (P_z) = \bigoplus_{k\in \mathbb Z\setminus \{0\}} ({\rm Im}\, (P_z)\cap U_{km})$ and in particular the intersection ${\rm Im}\, (P_z) \cap U_{km}$ is trivial for all but a finite number of $k$'s and it is nontrivial for at least one $k$.
\end{itemize}
Moreover, Theorem \ref{t:group} holds.
\end{corollary}
\begin{proof} The points (i)-(iii) are consequence of classical theory, but we present briefly their proofs referring to \cite{Kato}. Observe that addition of a constant multiple $c$ of the identity only shifts the spectrum (and its properties) by the constant $c$. The statements for $L_{\text{ss}}$ are thus reduced to similar statements for $S_2+\beta \mathscr{K}$.
Next since the arguments for $L_{\text{st}} = S_1 + \mathscr{K}$ only use the skew-adjointness of $S_1$ and the compactness of $\mathscr{K}$, they apply verbatim to $S_2+\beta\mathscr{K}$. We thus only argue for $L_{\text{st}}$. First of all observe that, since $S_1$ is skew-adjoint, its spectrum is contained in the imaginary axis. In particular, for every $z$ with ${\rm Re}\, z \neq 0$ the operator $S_1-z$ is invertible and thus Fredholm with Fredholm index $0$. Hence by \cite[Theorem 5.26, Chapter IV]{Kato}, $L_{\text{st}}-z= S_1-z +\mathscr{K}$ is as well Fredholm and has index $0$. By \cite[Theorem 5.31, Chapter IV]{Kato} there is a discrete set $\Sigma \subset \{z: {\rm Re}\, z\neq 0\}$ with the property that the dimension of the kernel (which equals that of the cokernel) of $L_{\text{st}} -z$ is constant on the open sets $\{{\rm Re}\, z > 0\}\setminus \Sigma$ and $\{{\rm Re}\, z<0\}\setminus \Sigma$. Since, for every $z$ such that $|{\rm Re}\, z|> \|\mathscr{K}\|_O$, we know that $L_{\text{st}}-z$ has a bounded inverse from the Neumann series, the kernel (and cokernel) of $L_{\text{st}}-z$ equals $0$ on $\{{\rm Re}\, z \neq 0\}\setminus \Sigma$. From \cite[Theorem 5.28, Chapter IV]{Kato} it then follows that $\Sigma$ is a subset of the discrete spectrum of $L_{\text{st}}$. Obviously the essential spectrum must be contained in the imaginary axis.
In order to show (iv), denote by $P_k$ the orthogonal projection onto $U_{km}$ and observe that, since $L_{\text{st}} \circ P_k = P_k \circ L_{\text{st}}$,
\begin{equation}\label{e:commute}
P_z \circ P_k = \frac{1}{2\pi i} \int_\gamma \frac{1}{w-L_{\text{st}}}\circ P_k \, dw
= \frac{1}{2\pi i} \int P_k \circ \left(\frac{1}{w-L_{\text{st}}}\right)\, dw = P_k \circ P_z\, .
\end{equation}
Writing
\begin{equation}\label{e:splitting}
P_z = \sum_k P_z\circ P_k
\end{equation}
and observing that the commutation \eqref{e:commute} gives the orthogonality of the images of the $P_z\circ P_k$, since ${\rm Im}\, (P_z)$ is finite dimensional, we conclude that the sum is finite, i.e. that $P_z\circ P_k =0$ for all but finitely many $k$'s. Moreover, since $P_z^2 = P_z$ and $P_z$ equals the identity on ${\rm Im}\, (P_z)$, we see immediately that $U_{km} \cap {\rm Im}\, (P_z) = {\rm Im}\, (P_z\circ P_k)$.
We now come to the proof of Theorem \ref{t:group}.
We have already shown that, if ${\rm Re}\, \lambda$ is large enough, then $\lambda$ belongs to the resolvent of $L_{\text{ss}}$, which shows that $a_0 < \infty$. Next, observe that $L_{\text{ss}}$ generates a strongly continuous group if and only if $S_2+\beta \mathscr{K}$ does. On the other hand, using the skew-adjointness of $S_2$, we conclude that, if ${\rm Re}\, z > \beta \|\mathscr{K}\|_O$, then $z$ is in the resolvent of $S_2+\beta \mathscr{K}$ and
\[
\|(S_2+\beta \mathscr{K} - z)^{-1}\|_O \leq \frac{1}{{\rm Re}\, z - \beta \|\mathscr{K}\|_O}\, .
\]
Therefore we can apply \cite[Corollary 3.6, Chapter II]{EngelNagel} to conclude that $S_2+\beta \mathscr{K}$ generates a strongly continuous semigroup. Since the same argument applies to $-S_2-\beta \mathscr{K}$, we actually conclude that indeed the operator generates a strongly continuous group.
Next we invoke \cite[Corollary 2.11, Chapter IV]{EngelNagel} that characterizes the growth bound $\omega_0 (L_{\text{ss}})$ of the semigroup $e^{tL_{\text{ss}}}$ as
\[
\omega_0 (L_{\text{ss}}) = \max \{\omega_{\text{ess}} (L_{\text{ss}}), a_0\}\, ,
\]
where $\omega_{\text{ess}}$ is the essential growth bound of \cite[Definition 2.9, Chapter IV]{EngelNagel}. By \cite[Proposition 2.12, Chapter IV]{EngelNagel}, $\omega_{\text{ess}} (L_{\text{ss}})$ equals $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)$ and, since $e^{\tau S_2}$ is a unitary operator, the growth bound of $e^{(1-1/\alpha +S_2)\tau}$ equals $1-\frac{1}{\alpha}$, from which we conclude that $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)\leq 1-\frac{1}{\alpha}$. In particular we infer that if $a_0\geq 1-\frac{1}{\alpha}$, then $\omega_0 (L_{\text{ss}})=a_0$.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: preliminary lemmas}
In this and the next section we will derive Theorem \ref{thm:spectral} from Theorem \ref{thm:spectral2}. It is convenient to introduce the following operator:
\begin{equation}\label{e:Lbeta}
L_\beta : = \frac{1}{\beta} \left(L_{\text{ss}} - \left(1-{\textstyle{\frac{1}{\alpha}}}\right)\right)
= \frac{1}{\beta} S_2 + \mathscr{K} \, .
\end{equation}
In particular
\begin{equation}\label{e:Lbeta-2}
L_\beta (\Omega) = \frac{1}{\beta}\left[{\rm div}\, \left(\frac{\xi}{\alpha} \Omega\right)+ \frac{\Omega}{\alpha}\right] + L_{st}\, .
\end{equation}
Clearly the spectrum of $L_{\text{ss}}$ can be easily computed from the spectrum of $L_\beta$. The upshot of this section and the next section is that, as $\beta \to \infty$, the spectrum of $L_\beta$ converges to that of $L_{\text{st}}$ in a rather strong sense.
In this section we state two preliminary lemmas. We will use extensively the notation $P_V$ for the orthogonal projection onto some closed subspace $V$ of $L^2_m$.
\begin{lemma}\label{l:two}
Let $H= L^2_m, U_{km}, U_{-km}$, or any closed invariant subspace common to $L_{st}$ and all the $L_\beta$.
For every compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}} \circ P_H))$, there is $\beta_0 (K)$ such that $K\subset \mathbb C \setminus (\mathbb R \cup {\rm spec}_m (L_\beta \circ P_H))$ for $\beta \geq \beta_0 (K)$. Moreover,
\begin{equation}\label{e:op_norm_est}
\sup_{\beta \geq \beta_0 (K)} \sup_{z\in K} \|(L_\beta \circ P_H - z)^{-1}\|_O < \infty\,
\end{equation}
and
$(L_\beta \circ P_H -z)^{-1}$ converges strongly to $(L_{\text{st}}\circ P_H -z)^{-1}$ for every $z\in K$, namely,
\begin{equation}\label{e:strong_convergence}
\lim_{\beta\to \infty} \|(L_\beta \circ P_H -z)^{-1} (w) - (L_{\text{st}}\circ P_H -z)^{-1} (w)\| = 0\, \qquad \forall w\in L^2_m\, .
\end{equation}
\end{lemma}
\begin{lemma}\label{l:three}
For every $\varepsilon >0$ there is a $R=R (\varepsilon)$ such that
\begin{equation}\label{e:exclude_large_eigenvalue}
{\rm spec}_m (L_\beta) \cap \{z : |z|\geq R, |{\rm Re}\, z|\geq \varepsilon\} = \emptyset \qquad
\forall \beta \geq 1\, .
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:two}] The proof is analogous for all $H$ and we will thus show it for $H=L^2_m$. Fix first $z$ such that ${\rm Re}\, z \neq 0$ and recalling that $z- \beta^{-1} S_2$ is invertible, we write
\begin{equation}\label{e:invert_L_beta-z}
z- L_\beta = (z- \beta^{-1} S_2) (1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})\, .
\end{equation}
\medskip
{\bf Step 1} The operators $(\beta^{-1} S_2 -z)^{-1}$ enjoy the bound
\begin{equation}\label{e:uniform-bound-inverse-beta}
\|(z-\beta^{-1} S_2)^{-1}\|_O \leq |{\rm Re}\, z|^{-1}
\end{equation}
because $\beta^{-1} S_2$ are skew-adjoint. We claim that $(z-\beta^{-1} S_2)^{-1}$ converges strongly to $(z-S_1)^{-1}$ for $\beta \to \infty$. For a family of operators with a uniform bound on the operator norm, it suffices to show strong convergence of $(z-\beta^{-1} S_2)^{-1} w$ for a (strongly) dense subset.
Without loss of generality we can assume ${\rm Re}\, z >0$. Recalling that $\beta^{-1} S_2$ generates a strongly continuous unitary semigroup, we can use the formula
\begin{equation}\label{e:exponential_formula}
(z-\beta^{-1} S_2)^{-1} (w) = \int_0^\infty e^{-(z-\beta^{-1} S_2)\tau} (w)\, d\tau\, .
\end{equation}
Next observe that $\|e^{\beta^{-1} S_2\tau}\|_O=1$. Moreover if $w\in \mathscr{S}$, $e^{\beta^{-1}S_2 \tau} w$ is the solution of a transport equation with a locally bounded and smooth coefficient and initial data $w$. We can thus pass into the limit in $\beta\to \infty$ and conclude that $e^{\beta^{-1} S_2 \tau} w$ converges strongly in $L^2$ to $e^{S_1\tau} w$. We can thus use the dominated convergence theorem in \eqref{e:exponential_formula} to conclude that $(z-\beta^{-1} S_2)^{-1} (w)$ converges to $(z-S_1)^{-1} (w)$ strongly in $L^2$. Since $\mathscr{S}$ is strongly dense, this concludes our proof.
\medskip
{\bf Step 2} We next show that $(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges in the operator norm to
$(z-S_1)^{-1} \circ \mathscr{K}$. Indeed using Proposition \ref{p:abstract} we can find a sequence of finite rank projections $P_N$ such that $P_N \circ \mathscr{K}$ converges to $\mathscr{K}$ in operator norm. From Step 1 it suffices to show that $(z-\beta^{-1} S_2)^{-1} \circ P_N \circ \mathscr{K}$ converges to $(z-S_1)^{-1} \circ P_N \circ \mathscr{K}$ in operator norm for each $N$. But clearly $(z-\beta^{-1} S_2)^{-1} \circ P_N$ is a finite rank operator and for finite rank operators the norm convergence is equivalent to strong convergence. The latter has been proved in Step 1.
\medskip
{\bf Step 3} Fix $z$ which is outside the spectrum of $L_{st}$. Because of Step 2 we conclude that
\[
(1- (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}) \to (1- (z-S_1)^{-1}\circ \mathscr{K})
\]
in the operator norm. Observe that $1-(z-S_1)^{-1}\circ \mathscr{K}$ is a compact perturbation of the identity. As such it is a Fredholm operator with index $0$ and thus invertible if and only if its kernel is trivial. Its kernel is given by $w$ which satisfy
\[
z w - S_1 (w)- \mathscr{K} (w) = 0\, ,
\]
i.e. it is the kernel of $z-(S_1+\mathscr{K}) = z- L_{\text{st}}$, which we have assumed to be trivial since $z$ is in the spectrum of $L_{\text{st}}$. Thus $(1-(z-S_1)^{-1} \circ \mathscr{K})$ is invertible and hence, because of the operator convergence, so is $(1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K})$ for any sufficiently large $\beta$. Hence, by \eqref{e:invert_L_beta-z} so is $z-L_\beta$.
\medskip
{\bf Step 4} The inverse $(z- L_\beta)^{-1}$ is given explicitly by the formula
\begin{equation}\label{e:inversion_formula}
(z-L_\beta)^{-1} = (z-\beta^{-1} S_2)^{-1} (1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})^{-1}\, .
\end{equation}
Since $1-(z-S_2)^{-1} \circ \mathscr{K}$ converges to $1- (z-S_1)^{-1} \circ \mathscr{K}$ in the operator norm, their inverses converge as well in the operator norm. Since the composition of strongly convergent operators with norm convergent operators is strongly convergent, we conclude that $(z-L_\beta)^{-1}$ converges strongly to the operator
\[
(z-S_1)^{-1} (1- (z-S_1)^{-1} \circ \mathscr{K})^{-1} = (z-L_{\text{st}})^{-1}\, .
\]
\medskip
{\bf Step 5} Choose now a compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}}))$.
Recall first that
\[
K \ni z \mapsto (z-S_1)^{-1}
\]
is continuous in the operator norm. Thus $K\ni z \mapsto (1- (z-S_1)^{-1} \circ \mathscr{K})$ is continuous in the operator norm. {\color{red} We claim that $K\times [0,1]\ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1} \circ \mathscr{K})$ is also continuous in the operator norm and in order to show this we will prove the uniform continuity in $z$ once we fix $\delta$, with an estimate which is independent of $\delta$. We first write
\[
\|(1-(z - \delta S_2)^{-1}\circ \mathcal{K}) - (1- (z'- \delta S_2)^{-1} \circ \mathcal{K})\|_O
\leq \|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \|\mathcal{K}\|_O \, .
\]
Hence we compute
\[
(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1} = (z - \delta S_2)^{-1} \circ ((z' - \delta S_2)-(z-\delta S_2)) \circ (z'-\delta S_2)^{-1}
\]
and use \eqref{e:uniform-bound-inverse-beta} to estimate
\[
\|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \leq |z-z'| \|(z - \delta S_2)^{-1}\|_O \|(z'-\delta S_2)^{-1}\|_O
\leq \frac{|z-z'|}{| {\rm Re}\, z| |{\rm Re}\, z'|}\, .
\]
}
Since the space of invertible operators is open in the norm topology, this implies the existence of a $\delta_0>0$ such that $K\times [0,\delta_0] \ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1}\circ \mathscr{K})^{-1}$ is well defined and continuous. Thus, for $\beta \geq \beta_0= \delta_0^{-1}$ we conclude that $1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K}$ is invertible and the norm of its inverse is bounded by a constant $C$ independent of $\beta$ and $z\in K$. By \eqref{e:inversion_formula} and \eqref{e:uniform-bound-inverse-beta}, we infer that in the same range of $z$ and $\beta$ the norm of the operators $(z-L_\beta)^{-1}$ enjoy a uniform bound.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:three}]
We show \eqref{e:exclude_large_eigenvalue} for ${\rm Re}\, z \geq \varepsilon$ replacing $|{\rm Re}\, z|\geq \varepsilon$, as the argument for the complex lower half-plane is entirely analogous.
Using \eqref{e:invert_L_beta-z}, we wish to show that there is $R = R (\varepsilon)$ such that the operator
\[
1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}
\]
is invertible for all $\beta \geq 1$ and $z$ such that $|z|\geq R$ and ${\rm Re}\, z \geq \varepsilon$.
This will follow after showing that, for $\beta$ and $z$ in the same range
\begin{equation}\label{e:small}
\|(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}\|_O \leq \frac{1}{2}\, .
\end{equation}
By \eqref{e:uniform-bound-inverse-beta}, we can use Proposition \ref{p:abstract} to reduce \eqref{e:small} to the claim
\begin{equation}\label{e:small-2}
\|(z-\beta^{-1} S_2)^{-1} \circ P_V \circ \mathscr{K}\|_O \leq \frac{1}{4}\, ,
\end{equation}
where $P_V$ is the projection onto an appropriately chosen finite-dimensional space $V\subset C^\infty_c$. If $N$ is the dimension of the space and $w_1, \ldots , w_N$ an orthonormal base, it suffices to show that
\begin{equation}\label{e:small-3}
\|(z-\beta^{-1} S_2)^{-1} (w_i)\|_{L^2} \leq \frac{1}{4N} \qquad \forall i\, .
\end{equation}
We argue for one of them and set $w=w_i$. The goal is thus to show
\eqref{e:small-3} provided $|z|\geq R$ for some large enough $R$. We use again \eqref{e:exponential_formula} and write
\[
(z-\beta^{-1} S_2)^{-1} (w) = \underbrace{\int_0^T e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(A)} + \underbrace{\int_T^\infty e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(B)}\, .
\]
We first observe that
\[
\|(B)\| \leq \int_T^\infty e^{-\varepsilon \tau}\, d\tau \leq \frac{e^{-\varepsilon T}}{\varepsilon}\, .
\]
Thus, choosing $T$ sufficiently large we achieve $\|(B)\| \leq \frac{1}{8N}$.
Having fixed $T$ we integrate by parts in the integral defining (A) to get
\begin{align*}
(A) & =\int_0^T e^{-z\tau} e^{\beta^{-1} S_2 \tau} (w)\, d\tau
= \underbrace{\frac{w - e^{- (z-\beta^{-1} S_2) T} (w)}{z}}_{=: (A1)} + \underbrace{\frac{1}{z} \int_0^T e^{-z} \beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\, d\tau}_{=: (A2)}\, .
\end{align*}
First of all we can bound
\[
\|(A1)\| \leq \frac{1+ e^{-T\varepsilon}}{|z|}\leq \frac{2}{R}\, .
\]
As for the second term, observe that $[0,T]\ni \tau \mapsto e^{\beta^{-1} S_2 \tau} (w)$ is the solution of a transport equation with smooth coefficients and smooth and compactly supported initial data, considered over a finite interval of time. Hence the support of the solution is compact and the solution is smooth. Moreover, the operators $\beta^{-1} S_2$ are first-order differential operators with coefficients which are smooth and whose derivatives are all bounded. In particular
\[
\max_{\tau\in [0,T]} \|\beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\| \leq C
\]
for a constant $C$ depending on $w$ and $T$ but not on $\beta$, in particular we can estimate
\[
\|(A2)\|\leq \frac{C (T)}{R}
\]
Since the choice of $T$ has already been given, we can now choose $R$ large enough to conclude $\|(A)\|\leq \frac{1}{8N}$ as desired.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: conclusion}
First of all observe that $z\in {\rm spec}_m (L_\beta)$ if and only if $\beta z + 1-\frac{1}{\alpha}\in {\rm spec}_m (L_{\text{ss}})$. Thus, in order to prove Theorem \ref{thm:spectral} it suffices to find $\beta_0$ and $c_0$ positive such that:
\begin{itemize}
\item[(P)] If $\beta \geq \beta_0$, then ${\rm spec}_m (L_\beta)$ contains an element $z$ with ${\rm Re}\, z \geq c_0$ such that ${\rm Re}\, z = \max \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\}$.
\end{itemize}
Observe indeed that using the fact that the $U_{km}$ are invariant subspaces of $L_{\text{ss}}$, $\beta z + 1-\frac{1}{\alpha}$ have an eigenfunction $\vartheta$ which belongs to one of them, and we can assume that $k\geq 1$ by possibly passing to the complex conjugate $\bar z$. If $z$ is not real, we then set $\eta=\vartheta$ and the latter has the properties claimed in Theorem \ref{thm:spectral}. If $z$ is real it then follows that the real and imaginary part of $\vartheta$ are both eigenfunctions and upon multiplying by $i$ we can assume that the real part of $\vartheta$ is nonzero. We can thus set $\eta= {\rm Re}\, \vartheta$ as the eigenfunction of Theorem \ref{thm:spectral}.
We will split the proof in two parts, namely, we will show separately that
\begin{itemize}
\item[(P1)] There are $\beta_1, c_0 >0$ such that ${\rm spec}_m (L_\beta)\cap \{{\rm Re}\, z \geq c_0\}\neq \emptyset$ for all $\beta \geq \beta_1$.
\item[(P2)] If $\beta \geq \beta_0:= \max \{\beta_1, 1\}$, then $\sup \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\}$ is attained.
\end{itemize}
\medskip
{\bf Proof of (P1).} We fix $z\in {\rm spec}\, (L_{\text{st}})$ with positive real part and we set $2c_0:= {\rm Re}\, z$. We then fix a contour $\gamma \subset B_\varepsilon (z)$ which:
\begin{itemize}
\item it is a simple smooth curve;
\item it encloses $z$ and no other portion of the spectrum of $L_{\text{st}}$;
\item it does not intersect the spectrum of $L_{\text{st}}$;
\item it is contained in $\{w: {\rm Re}\, w \geq c_0\}$.
\end{itemize}
By the Riesz formula we know that
\[
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1}\, dw
\]
is a projection onto a subspace which contains all eigenfunctions of $L_{\text{st}}$ relative to the eigevanlue $z$. In particular this projection is nontrivial. By Lemma \ref{l:two} for all sufficiently large $\beta$ the curve $\gamma$ is not contained in the spectrum of $L_\beta$ and we can thus define
\[
P_{z,\beta} = \frac{1}{2\pi i} \int_\gamma (w-L_\beta)^{-1}\, dw\, .
\]
If $\gamma$ does not enclose any element of the spectrum of $L_\beta$, then $P_{z, \beta} = 0$. On the other hand, by Lemma \ref{l:two} and the dominated convergence theorem,
\[
P_{z,\beta} (u) \to P_z (u)
\]
strongly for every $u$. I.e. the operators $I_{z,\beta}$ converge strongly to the operator $P_z$. If for a sequence $\beta_k\to \infty$ the operators $P_{z,\beta_k}$ where trivial, then $P_z$ would be trivial too. Since this is excluded, we conclude that the curve $\gamma$ encloses some elements of the spectrum of $L_\beta$ for all $\beta$ large enough. Each such element has real part not smaller than $c_0$.
\medskip
{\bf Proof of (P2).} Set $\varepsilon := c_0$ and apply Lemma \ref{l:three} to find $R>0$ such that
${\rm spec}_m (L_\beta)\setminus \overline{B}_R$ is contained in $\{w: {\rm Re}\, w < c_0\}$. In particular, if $\beta \geq \max\{\beta_1, 1\}$, then the eigenvalue $z$ found in the previous step belongs to $\overline{B}_R$ and thus
\[
\sup\, \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\} =
\sup\, \{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\cap {\rm spec}\, (L_\beta) \, .
\]
However, since ${\rm spec}\, (L_\beta)\cap \{w: {\rm Re}\, w \neq 0\}$ belongs to the discrete spectrum, the set $\{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\cap {\rm spec}\, (L_\beta)$ is finite.
\chapter{Linear theory: Part I}
\label{chapter:linearparti}
In this chapter, we will reduce Theorem \ref{thm:spectral} to an analogous spectral result for another differential operator, and we will also show an important corollary of Theorem \ref{thm:spectral} concerning the semigroup that it generates. We start by giving the two relevant statements, but first we need to introduce some notation and terminology.
First of all, in the rest of the chapter we will always assume that the positive integer $m$ is at least $2$. We then introduce a new (closed and densely defined) operator on $L^2_m$, which we will denote by $L_{\text{st}}$. The operator is defined by\index{aalLst@$L_{\text{st}}$}
\begin{equation}
L_{\text{st}} (\Omega) = - {\rm div}\, (\bar V \Omega) - (K_2*\Omega \cdot \nabla) \bar \Omega\,
\end{equation}
and (recalling that the operator $\Omega\mapsto (K_2* \Omega\cdot \nabla) \bar \Omega$ is bounded and compact, as will be shown below) its domain in $L^2_m$ is given by
\begin{equation}
D_m (L_{\text{st}}) = \{\Omega\in L^2_m : {\rm div}\, (\bar V \Omega)\in L^2\}\, .
\end{equation}
The key underlying idea behind the introduction of $L_{\text{st}}$ is that we can write $L_{\text{ss}}$ as
\[
L_{\text{ss}} = \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) + \beta L_{\text{st}} \,
\]
and since $\beta$ will be chosen very large, we will essentially study the spectrum of $L_{\text{ss}}$ as a perturbation of the spectrum of $\beta L_{\text{st}}$. In particular, Theorem \ref{thm:spectral} will be derived from a more precise spectral analysis of $L_{\text{st}}$. Before coming to it, we split the space $L^2_m$ into an appropriate infinite sum of closed orthogonal subspaces.
First of all, if we fix an element $\vartheta\in L^2 (\mathbb R^2)$ and we introduce the polar coordinates $(\theta, r)$ through $x= r (\cos \theta , \sin \theta)$, we can then use the Fourier expansion to write
\begin{equation}\label{e:Fourier}
\vartheta (x) =\sum_{k\in \mathbb Z} a_k (r) e^{ik\theta}\,
\end{equation}
where
\[
a_k (r) := \frac{1}{2\pi} \int_0^{2\pi} \vartheta(r \cos(\theta),r\sin(\theta)) e^{-ik\theta}\,\mathrm d\theta .
\]
By Plancherel's formula,
\[
\|\vartheta\|_{L^2 (\ensuremath{\mathbb R}^2)}^2 = 2\pi \sum_{k\in \mathbb Z} \|a_k\|^2_{L^2 (\ensuremath{\mathbb R}^+, r\,\mathrm dr)}\, .
\]
In particular it will be convenient to introduce the subspaces\index{aalUk@$U_k$}
\begin{equation}
U_k :=\{f(r) e^{ik\theta} : f \in L^2 (\mathbb R^+, r\,\mathrm dr)\}\, .
\end{equation}
Each $U_k$ is a closed subspace of $L^2$, distinct $U_k$'s are orthogonal to each other and moreover
\begin{equation}\label{e:Fourier-2}
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, .
\end{equation}
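For the reader's convenience we sketch the simple argument behind \eqref{e:Fourier-2}, which uses only the defining property of $L^2_m$, namely the invariance of its elements under the rotation $R_{2\pi/m}$ by the angle $\frac{2\pi}{m}$. If $\vartheta\in L^2_m$ has the expansion \eqref{e:Fourier}, then
\[
\sum_{k\in \mathbb Z} a_k (r) e^{ik\theta} = \vartheta (x) = \vartheta (R_{2\pi/m}\, x) = \sum_{k\in \mathbb Z} a_k (r) e^{2\pi i k/m} e^{ik\theta}\, ,
\]
and the uniqueness of the Fourier coefficients gives $a_k \left(e^{2\pi i k/m}-1\right)=0$, i.e. $a_k \equiv 0$ unless $k$ is an integer multiple of $m$. Hence every $\vartheta\in L^2_m$ belongs to $\bigoplus_{k\in \mathbb Z} U_{km}$, while the converse inclusion is obvious.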
Each $U_{km}$ is an invariant subspace of $L_{\text{st}}$, and it can be easily checked that $U_{km}\subset D_m (L_{\text{st}})$ and that indeed the restriction of $L_{\text{st}}$ to $U_{km}$ is a bounded operator. Following the same convention as for $L_{\text{ss}}$, we will denote by ${\rm spec}_m\, (L_{\text{st}})$ the spectrum of $L_{\text{st}}$ on $L^2_m$.
\begin{theorem}\label{thm:spectral2}\label{THM:SPECTRAL2}
For every $m\geq 2$ and every $\bar\Omega$
we have
\begin{itemize}
\item[(a)] each $z_i\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z \neq 0\}$ belongs to the discrete spectrum and, if ${\rm Im}\, (z_i)=0$, then there is a nontrivial real eigenfunction relative to $z_i$.
\end{itemize}
Moreover, for an appropriate choice of $\bar\Omega$ there is an integer $m\geq 2$ such that:
\begin{itemize}
\item[(b)] ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is nonempty.
\end{itemize}
\end{theorem}
\begin{remark}\label{r:better}\label{R:BETTER}
The theorem stated above contains the minimal amount of information that we need to complete the proof of Theorem \ref{thm:main2}. We can however infer some additional conclusions with more work, more precisely we can show that
\begin{itemize}
\item[(c)] $m$ can be chosen so that, in addition to (b), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is finite and the image of the Riesz projector\footnote{Recall that in the case of an isolated point $z$ in the spectrum of a closed, densely defined operator $A$, the Riesz projector is defined as
\[
\frac{1}{2\pi i} \int_\gamma (w -A)^{-1}\, dw
\]
for any simple closed rectifiable contour $\gamma$ bounding a closed disk $D$ with $D \cap {\rm spec}\, (A) = \{z\}$. For an element of the discrete spectrum the Riesz projector has finite rank (the algebraic multiplicity of the eigenvalue $z$).}\index{Riesz projector}\index{aalPz@$P_z$}
$P_{z}$ of $L_{\text{st}}$ relative to each $z\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is contained in $U_m\oplus U_{-m}$.
\end{itemize}
Since this property is not needed to prove Theorem~\ref{thm:main2} we defer its proof to Appendix~\ref{a:better}.
\end{remark}
{\color{red} In \cite{Vishik2} Vishik claims the following greatly improved statement.
\begin{theorem}\label{thm:spectral-stronger}\label{THM:SPECTRAL-STRONGER}
For a suitable $\bar \Omega$:
\begin{itemize}
\item[(c')] $m$ can be chosen so that, in addition to (b) and (c), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}\cap U_m$ consists of a single element, with algebraic multiplicity $1$ in $U_m$.
\end{itemize}
\end{theorem}
Since the spectrum of $L_{\text{st}}$ is invariant under complex conjugation, (b), (c), and (c') imply that ${\rm spec}_m\, (L_{\text{st}})\cap \{{\rm Re}\, z>0\}$ consists either of a single real eigenvalue or of two complex conjugate eigenvalues. In the first case, the algebraic and geometric multiplicity of the eigenvalue is $2$ and the space of eigenfunctions has a basis consisting of an element of $U_m$ and its complex conjugate in $U_{-m}$. In the second case the two eigenvalues $z$ and $\bar z$ have algebraic multiplicity $1$ and their eigenspaces are generated, respectively, by an element of $U_m$ and its complex conjugate in $U_{-m}$.
The argument given in \cite{Vishik2} for (c') is however not complete. Vishik provided later (\cite{Vishik3}) a way to close the gap. In Appendix~\ref{a:better} we will give a proof of Theorem \ref{thm:spectral-stronger} along his lines.}
\medskip
In this chapter we also derive an important consequence of Theorem \ref{thm:spectral} for the semigroup generated by $L_{\text{ss}}$.\index{Semigroup}
\begin{theorem}\label{t:group}\label{T:GROUP}
For every $m\geq 2$, $L_{\text{ss}}$ is the generator of a strongly continuous semigroup on $L^2_m$ which will be denoted by $e^{\tau L_{\text{ss}}}$\index{aalEtauLss@$e^{\tau L_{\text{ss}}}$}, and the growth bound $\omega (L_{\text{ss}})$ of $e^{\tau L_{\text{ss}}}$ equals\index{Semigroup, growth bound}
\[
a_0 := \sup \{{\rm Re}\, z_0 : z_0\in {\rm spec}_m (L_{\text{ss}})\}\, <\infty\,
\]
if $a_0 \geq 1-\frac{1}{\alpha}$.
In other words, for every $\delta>0$, there is a constant $M (\delta)$ with the property that
\begin{equation}\label{e:growth-bound}
\left\|e^{\tau L_{\text{ss}}} \Omega\right\|_{L^2} \leq M (\delta) e^{(a_0 +\delta) \tau} \|\Omega\|_{L^2}
\qquad \qquad \forall \tau\geq 0,\, \forall \Omega\in L^2_m\, .
\end{equation}
\end{theorem}
\section{Preliminaries}\label{s:abstract-operators}
In this section we collect some preliminaries which take advantage of several structural properties of the operators $L_{\text{ss}}$ and $L_{\text{st}}$. First of all we decompose $L_{\text{st}}$ as
\begin{equation}\label{e:decompo_L_st}
L_{\text{st}} = S_1 + \mathscr{K}\, ,
\end{equation}\index{aalK@$\mathscr{K}$}\index{aalS1@$S_1$}where
\begin{align}
S_1 (\Omega)&:= - {\rm div}\, (\bar V \Omega)\label{e:S1}\\
\mathscr{K} (\Omega) &:= - (K_2*\Omega \cdot \nabla) \bar \Omega \label{e:compatto}\, .
\end{align}
Hence we introduce the operator\index{aalS2@$S_2$}
\begin{equation}
S_2 (\Omega) := {\rm div} \left(\left(\frac{\xi}{\alpha} - \beta \bar V\right) \Omega\right) - \frac{\Omega}{\alpha}\, ,
\end{equation}
so that we can decompose $L_{\text{ss}}$ as
\begin{equation}
L_{\text{ss}} = \left(1-\frac{1}{\alpha}\right) + S_2 + \beta \mathscr{K}\, .
\end{equation}
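The two decompositions of $L_{\text{ss}}$ written here and at the beginning of the chapter are of course the same; as a quick sanity check, using that ${\rm div}\, \xi = 2$ on $\mathbb R^2$ and recalling \eqref{e:decompo_L_st},
\begin{align*}
\left(1-\frac{1}{\alpha}\right)\Omega + S_2 (\Omega) + \beta \mathscr{K} (\Omega)
&= \left(1-\frac{1}{\alpha}\right)\Omega + {\rm div}\, \left(\frac{\xi}{\alpha}\, \Omega\right) - \beta\, {\rm div}\, (\bar V \Omega) - \frac{\Omega}{\alpha} + \beta \mathscr{K} (\Omega)\\
&= \left(1-\frac{1}{\alpha}\right)\Omega + \frac{\xi}{\alpha}\cdot \nabla \Omega + \frac{2}{\alpha}\, \Omega - \frac{\Omega}{\alpha} + \beta \left(S_1 (\Omega) + \mathscr{K} (\Omega)\right)\\
&= \left(1+\frac{\xi}{\alpha}\cdot \nabla\right) \Omega + \beta L_{\text{st}} (\Omega)\, ,
\end{align*}
which is precisely the decomposition $L_{\text{ss}} = \left(1+\frac{\xi}{\alpha}\cdot \nabla\right) + \beta L_{\text{st}}$ used to introduce $L_{\text{st}}$.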
The domains of the various operators $A$ involved are always understood as $D_m (A):= \{\Omega\in L^2_m : A(\Omega)\in L^2\}$.
Finally, we introduce the real Hilbert spaces $L^2_m (\mathbb R)$ and $U_j (\mathbb R)$ by setting\index{aalL2mR@$L^2_m(\ensuremath{\mathbb R})$}\index{aalUkR@$U_k(\ensuremath{\mathbb R})$}
\begin{align}
L^2_m (\mathbb R) &:= \{{\rm Re}\, \Omega : \Omega \in L^2_m\}\,
\end{align}
and, for $j>0$ natural,
\begin{equation}
U_j (\mathbb R) :=\{{\rm Re}\, \Omega: \Omega \in U_j\}\, .
\end{equation}
Observe that while clearly $L^2_m (\mathbb R)$ is a real subspace of $L^2_m$, $U_j (\mathbb R)$ is a real subspace of $U_j \oplus U_{-j}$.
As is customary, $L^2_m (\mathbb R)$ and its real vector subspaces are endowed with the inner product\index{Rotationally symmetric function space, inner product}
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb R} = \int \Omega\, \Xi\, ,
\end{equation}
while $L^2_m$ and its complex vector subspaces are endowed with the Hermitian product
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb C} = \int ({\rm Re}\, \Omega\, {\rm Re}\, \Xi +
{\rm Im}\, \Omega\, {\rm Im}\, \Xi) + i \int ({\rm Im}\, \Omega\, {\rm Re}\, \Xi - {\rm Re} \, \Omega\, {\rm Im}\, \Xi)\, .
\end{equation}
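Writing $\Omega = {\rm Re}\, \Omega + i\, {\rm Im}\, \Omega$ and $\Xi = {\rm Re}\, \Xi + i\, {\rm Im}\, \Xi$, the reader may check that the formula above is nothing but the usual Hermitian product
\[
\langle \Omega, \Xi\rangle_{\mathbb C} = \int \Omega\, \bar \Xi\, ,
\]
and that ${\rm Re}\, \langle \Omega, \Xi\rangle_{\mathbb C} = \langle {\rm Re}\, \Omega, {\rm Re}\, \Xi\rangle_{\mathbb R} + \langle {\rm Im}\, \Omega, {\rm Im}\, \Xi\rangle_{\mathbb R}$; in particular the two products coincide on real-valued functions.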
We will omit the subscripts from $\langle \cdot, \cdot \rangle$ when the underlying field is clear from the context. The following proposition details the important structural properties of the various operators. A closed unbounded operator $A$ on $L^2_m$ will be called \emph{real} if its restriction $A_{\mathbb R}$ to $L^2_m (\mathbb R)$ is a closed, densely defined operator with domain $D_m (A) \cap L^2_m (\mathbb R)$ such that $A(\Omega)\in L^2_m(\mathbb R)$ for all $\Omega\in D_m(A)\cap L^2_m(\mathbb R)$.
\begin{proposition}\label{p:abstract}
\begin{itemize}
\item[(i)] The operators $\mathscr{K}$, $S_1$ and $S_2$ are all real operators.
\item[(ii)] $\mathscr{K}$ is bounded and compact. More precisely there is a sequence of finite dimensional vector spaces $V_n \subset C^\infty_c (\mathbb R^2,\mathbb C)\cap L^2_m$ with the property that, if $P_n$ denotes the orthogonal projection onto $V_n$, then
\begin{equation}\label{e:explicit-approx}
\lim_{n\to\infty} \|\mathscr{K} - P_n\circ \mathscr{K}\|_O = 0\, ,
\end{equation}
where $\|\cdot\|_O$ denotes the operator norm.
\item[(iii)] $S_1$ and $S_2$ are skew-adjoint.
\item[(iv)] $D_m (L_{\text{st}}) = D_m (S_1)$ and $D_m (L_{\text{ss}}) = D_m (S_2)$.
\item[(v)] $U_{km}$ is an invariant subspace of $S_1, S_2, \mathscr{K}, L_{\text{st}}, L_{\text{ss}}$.
\item[(vi)] The restrictions of $S_1$ and $L_{\text{st}}$ to each $U_{km}$ are bounded operators.
\end{itemize}
\end{proposition}
\begin{proof} The verifications of (i), (iii), (iv), (v), and (vi) are all simple and are therefore left to the reader. We thus come to (ii) and prove the compactness of the operator $\mathscr{K}$. Recalling Lemma \ref{l:extension}, for every $\Omega \in L^2_m$ we can write the tempered distribution $\mathscr{K} (\Omega)$ as
\begin{equation}
\mathscr{K} (\Omega) = \nabla \bar \Omega \cdot V
\end{equation}
where $V=V(\Omega)$ is a $W^{1,2}_{\text{loc}}$ function with the properties that
\begin{equation}\label{e:stima-W12}
R^{-1} \|V \|_{L^2 (B_R)} + \|DV\|_{L^2 (B_R)} \leq C \|\Omega\|_{L^2} \qquad \forall R \geq 0\, . \end{equation}
Since $|\nabla \bar \Omega (\xi)| \leq C |\xi|^{-{\bar\alpha} -1}$ for $|\xi|\geq 1$, whenever $R\geq 1$ we can estimate
\begin{align*}
\|\mathscr{K} (\Omega)\|^2_{L^2 (B_R^c)} &= \sum_{j=0}^\infty \|\mathscr{K} (\Omega)\|_{L^2 (B_{2^{j+1} R}\setminus B_{2^j R})}^2 \leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2 (1+{\bar\alpha}) j} \|V\|^2_{L^2 (B_{2^{j+1} R})}\\
&\leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2(1+{\bar\alpha}) j} 2^{2j+2} R^2 \|\Omega\|_{L^2}^2
\leq C R^{-2{\bar\alpha}} \|\Omega\|_{L^2}^2\, .
\end{align*}
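For the reader's convenience: in the second inequality above we applied \eqref{e:stima-W12} with $R$ replaced by $2^{j+1} R$, namely
\[
\|V\|_{L^2 (B_{2^{j+1} R})}^2 \leq C\, (2^{j+1} R)^2\, \|\Omega\|_{L^2}^2 = C\, 2^{2j+2} R^2\, \|\Omega\|_{L^2}^2\, ,
\]
while the last inequality uses the convergence of the geometric series $\sum_{j\geq 0} 2^{-2{\bar\alpha} j}$.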
This shows at the same time that
\begin{itemize}
\item $\mathscr{K}$ is a bounded operator;
\item If we introduce the operators
\begin{equation}
\Omega \mapsto \mathscr{K}_N (\Omega) := \mathscr{K} (\Omega) \mathbf{1}_{B_N}\, ,
\end{equation}
then $\|\mathscr{K}_N- \mathscr{K}\|_{O}\to 0$.
\end{itemize}
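For the reader's convenience we spell out the second point: applying the estimate just proved with $R=N\geq 1$ we get
\[
\|\mathscr{K} (\Omega) - \mathscr{K}_N (\Omega)\|_{L^2} = \|\mathscr{K} (\Omega)\|_{L^2 (B_N^c)} \leq C N^{-{\bar\alpha}}\, \|\Omega\|_{L^2}\, ,
\]
and hence $\|\mathscr{K}_N - \mathscr{K}\|_O \leq C N^{-{\bar\alpha}} \to 0$ as $N\to\infty$.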
Since the uniform limit of compact operators is a compact operator, it suffices to show that each $\mathscr{K}_N$ is a compact operator. This is however an obvious consequence of \eqref{e:stima-W12} and the compact embedding of $W^{1,2} (B_N)$ into $L^2 (B_N)$.
As for the remainder of the statement (ii), by the classical characterization of compact operators on a Hilbert space, for every $\varepsilon > 0$ there is a finite-rank linear map $L_N$ such that $\|\mathscr{K} - L_N\|_O \leq \frac{\varepsilon}{4}$. If we denote by $W_N$ the image of $L_N$ and by $Q_N$ the orthogonal projection onto it, given that $Q_N \circ L_N = L_N$ we can estimate
\[
\|Q_N \circ \mathscr{K} - \mathscr{K}\|_O \leq \|Q_N \circ \mathscr{K} - Q_N \circ L_N\|_O
+ \|L_N - \mathscr{K}\|_O \leq 2 \|L_N - \mathscr{K}\|_O \leq \frac{\varepsilon}{2}\, .
\]
Fix next an orthonormal basis $\{w_1, \ldots, w_N\}$ of $W_N$ and, using the density of $C^\infty_c (\mathbb R^2)$, approximate each element $w_i$ of the basis with $v_i\in C^\infty_c (\mathbb R^2, \mathbb C)$. This can be done for instance by convolving $w_i$ with a smooth radial kernel and multiplying by a suitable cut-off function. If the $v_i$'s are taken sufficiently close to $w_i$, the orthogonal projection $P_N$ onto $V_N = {\rm span}\, (v_1, \ldots , v_N)$ satisfies $\|Q_N-P_N\|_O \leq \frac{\varepsilon}{2\|\mathscr{K}\|_O}$ and thus
\[
\|\mathscr{K} - P_N \circ \mathscr{K}\|_O \leq \|\mathscr{K} - Q_N \circ \mathscr{K}\|_O + \|P_N-Q_N\|_O \|\mathscr{K}\|_O \leq \varepsilon\, .\qedhere
\]
\end{proof}
\section{Proof of Theorem \ref{t:group} and proof of Theorem \ref{thm:spectral2}(a)}\label{subsect:Proof-of-spectral2-and-group}
The above structural facts allow us to gain some important consequences as simple corollaries of classical results in spectral theory, which we gather in the next statement. Observe in particular that the statement (a) of Theorem \ref{thm:spectral2} follows from it.
In what follows we take the definition of essential spectrum of an operator as given in \cite{EngelNagel}. We caution the reader that other authors use different definitions; at any rate the main conclusion about the essential spectra of the operators $L_{\text{ss}}$ and $L_{\text{st}}$ in Corollary \ref{c:structural} below depends only upon the property that the essential and discrete spectra are disjoint (which is common to all the different definitions used in the literature).
\begin{corollary}\label{c:structural}
The essential spectrum of $L_{\text{st}}$ and the essential spectrum of $L_{\text{ss}} - \left(1-\frac{1}{\alpha}\right)$ are contained in the imaginary axis, while the remaining part of the spectrum is contained in the discrete spectrum. In particular, every $z\in {\rm spec}_m (L_{\text{st}})$ (resp. $z\in {\rm spec}_m (L_{\text{ss}})$) with nonzero real part (resp. real part different from $1-\frac{1}{\alpha}$) has the following properties.
\begin{itemize}
\item[(i)] $z$ is isolated in ${\rm spec}_m (L_{\text{st}})$ (resp. ${\rm spec}_m (L_{\text{ss}})$);
\item[(ii)] There is at least one nontrivial $\Omega$ such that $L_{\text{st}} (\Omega) = z \Omega$ (resp. $L_{\text{ss}} (\Omega) = z \Omega$) and if ${\rm Im}\, (z)=0$, then $\Omega$ can be chosen to be real-valued;
\item[(iii)] The Riesz projection $P_z$ has finite rank;
\item[(iv)] ${\rm Im}\, (P_z) = \bigoplus_{k\in \mathbb Z\setminus \{0\}} ({\rm Im}\, (P_z)\cap U_{km})$ and in particular the intersection ${\rm Im}\, (P_z) \cap U_{km}$ is trivial for all but a finite number of $k$'s and it is nontrivial for at least one $k$.
\end{itemize}
Moreover, Theorem \ref{t:group} holds.
\end{corollary}
\begin{proof} The points (i)-(iii) are consequences of classical theory, but we briefly present their proofs, referring to \cite{Kato}. Observe that the addition of a constant multiple $c$ of the identity only shifts the spectrum (and its properties) by the constant $c$. The statements for $L_{\text{ss}}$ are thus reduced to similar statements for $S_2+\beta \mathscr{K}$.
Next since the arguments for $L_{\text{st}} = S_1 + \mathscr{K}$ only use the skew-adjointness of $S_1$ and the compactness of $\mathscr{K}$, they apply verbatim to $S_2+\beta\mathscr{K}$. We thus only argue for $L_{\text{st}}$. First of all observe that, since $S_1$ is skew-adjoint, its spectrum is contained in the imaginary axis. In particular, for every $z$ with ${\rm Re}\, z \neq 0$ the operator $S_1-z$ is invertible and thus Fredholm with Fredholm index $0$. Hence by \cite[Theorem 5.26, Chapter IV]{Kato}, $L_{\text{st}}-z= S_1-z +\mathscr{K}$ is as well Fredholm and has index $0$. By \cite[Theorem 5.31, Chapter IV]{Kato} there is a discrete set $\Sigma \subset \{z: {\rm Re}\, z\neq 0\}$ with the property that the dimension of the kernel (which equals that of the cokernel) of $L_{\text{st}} -z$ is constant on the open sets $\{{\rm Re}\, z > 0\}\setminus \Sigma$ and $\{{\rm Re}\, z<0\}\setminus \Sigma$. Since, for every $z$ such that $|{\rm Re}\, z|> \|\mathscr{K}\|_O$, we know that $L_{\text{st}}-z$ has a bounded inverse from the Neumann series, the kernel (and cokernel) of $L_{\text{st}}-z$ equals $0$ on $\{{\rm Re}\, z \neq 0\}\setminus \Sigma$. From \cite[Theorem 5.28, Chapter IV]{Kato} it then follows that $\Sigma$ is a subset of the discrete spectrum of $L_{\text{st}}$. Obviously the essential spectrum must be contained in the imaginary axis.
In order to show (iv), denote by $P_k$ the orthogonal projection onto $U_{km}$ and observe that, since $L_{\text{st}} \circ P_k = P_k \circ L_{\text{st}}$,
\begin{equation}\label{e:commute}
P_z \circ P_k = \frac{1}{2\pi i} \int_\gamma \frac{1}{w-L_{\text{st}}}\circ P_k \, dw
= \frac{1}{2\pi i} \int_\gamma P_k \circ \left(\frac{1}{w-L_{\text{st}}}\right)\, dw = P_k \circ P_z\, .
\end{equation}
Writing
\begin{equation}\label{e:splitting}
P_z = \sum_k P_z\circ P_k
\end{equation}
and observing that, by the commutation \eqref{e:commute}, the images of the operators $P_z\circ P_k$ are pairwise orthogonal, we conclude from the finite dimensionality of ${\rm Im}\, (P_z)$ that the sum is finite, i.e. that $P_z\circ P_k =0$ for all but finitely many $k$'s. Moreover, since $P_z^2 = P_z$ and $P_z$ equals the identity on ${\rm Im}\, (P_z)$, we see immediately that $U_{km} \cap {\rm Im}\, (P_z) = {\rm Im}\, (P_z\circ P_k)$.
We now come to the proof of Theorem \ref{t:group}.
We have already shown that, if ${\rm Re}\, \lambda$ is large enough, then $\lambda$ belongs to the resolvent of $L_{\text{ss}}$, which shows that $a_0 < \infty$. Next, observe that $L_{\text{ss}}$ generates a strongly continuous group if and only if $S_2+\beta \mathscr{K}$ does. On the other hand, using the skew-adjointness of $S_2$, we conclude that, if ${\rm Re}\, z > \beta \|\mathscr{K}\|_O$, then $z$ is in the resolvent of $S_2+\beta \mathscr{K}$ and
\[
\|(S_2+\beta \mathscr{K} - z)^{-1}\|_O \leq \frac{1}{{\rm Re}\, z - \beta \|\mathscr{K}\|_O}\, .
\]
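One elementary way to justify the last two assertions is through a Neumann series: since $\|(z-S_2)^{-1}\|_O \leq ({\rm Re}\, z)^{-1}$ by skew-adjointness, for ${\rm Re}\, z > \beta \|\mathscr{K}\|_O$ we have $\|(z-S_2)^{-1}\circ \beta \mathscr{K}\|_O <1$ and
\[
(z- S_2 - \beta \mathscr{K})^{-1} = \sum_{n\geq 0} \left((z-S_2)^{-1}\circ \beta \mathscr{K}\right)^n (z-S_2)^{-1}\, ,
\]
whose operator norm is bounded by $\sum_{n\geq 0} \left(\frac{\beta \|\mathscr{K}\|_O}{{\rm Re}\, z}\right)^n \frac{1}{{\rm Re}\, z} = \frac{1}{{\rm Re}\, z - \beta \|\mathscr{K}\|_O}$.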
Therefore we can apply \cite[Corollary 3.6, Chapter II]{EngelNagel} to conclude that $S_2+\beta \mathscr{K}$ generates a strongly continuous semigroup. Since the same argument applies to $-S_2-\beta \mathscr{K}$, we conclude that the operator in fact generates a strongly continuous group.
Next we invoke \cite[Corollary 2.11, Chapter IV]{EngelNagel} that characterizes the growth bound $\omega_0 (L_{\text{ss}})$ of the semigroup $e^{tL_{\text{ss}}}$ as
\[
\omega_0 (L_{\text{ss}}) = \max \{\omega_{\text{ess}} (L_{\text{ss}}), a_0\}\, ,
\]
where $\omega_{\text{ess}}$ is the essential growth bound of \cite[Definition 2.9, Chapter IV]{EngelNagel}. By \cite[Proposition 2.12, Chapter IV]{EngelNagel}, $\omega_{\text{ess}} (L_{\text{ss}})$ equals $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)$ and, since $e^{\tau S_2}$ is a unitary operator, the growth bound of $e^{(1-1/\alpha +S_2)\tau}$ equals $1-\frac{1}{\alpha}$, from which we conclude that $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)\leq 1-\frac{1}{\alpha}$. In particular we infer that if $a_0\geq 1-\frac{1}{\alpha}$, then $\omega_0 (L_{\text{ss}})=a_0$.
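For completeness, the claim on the growth bound of $e^{(1-1/\alpha+S_2)\tau}$ follows from the elementary identity (recall that the constant $1-\frac{1}{\alpha}$ commutes with $S_2$ and that $e^{\tau S_2}$ is unitary)
\[
\left\|e^{(1-1/\alpha+S_2)\tau}\right\|_O = e^{(1-1/\alpha)\tau}\, \left\|e^{\tau S_2}\right\|_O = e^{(1-1/\alpha)\tau} \qquad \forall \tau \geq 0\, .
\]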
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: preliminary lemmas}
In this and the next section we will derive Theorem \ref{thm:spectral} from Theorem \ref{thm:spectral2}. It is convenient to introduce the following operator:
\begin{equation}\label{e:Lbeta}
L_\beta : = \frac{1}{\beta} \left(L_{\text{ss}} - \left(1-{\textstyle{\frac{1}{\alpha}}}\right)\right)
= \frac{1}{\beta} S_2 + \mathscr{K} \, .
\end{equation}
In particular
\begin{equation}\label{e:Lbeta-2}
L_\beta (\Omega) = \frac{1}{\beta}\left[{\rm div}\, \left(\frac{\xi}{\alpha} \Omega\right) - \frac{\Omega}{\alpha}\right] + L_{\text{st}} (\Omega)\, .
\end{equation}
The spectrum of $L_{\text{ss}}$ is easily computed from that of $L_\beta$. The upshot of this and the next section is that, as $\beta \to \infty$, the spectrum of $L_\beta$ converges to that of $L_{\text{st}}$ in a rather strong sense.
In this section we state two preliminary lemmas. We will use extensively the notation $P_V$ for the orthogonal projection onto some closed subspace $V$ of $L^2_m$.
\begin{lemma}\label{l:two}
Let $H= L^2_m, U_{km}, U_{-km}$, or any closed invariant subspace common to $L_{\text{st}}$ and all the $L_\beta$.
For every compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}} \circ P_H))$, there is $\beta_0 (K)$ such that $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_\beta \circ P_H))$ for $\beta \geq \beta_0 (K)$. Moreover,
\begin{equation}\label{e:op_norm_est}
\sup_{\beta \geq \beta_0 (K)} \sup_{z\in K} \|(L_\beta \circ P_H - z)^{-1}\|_O < \infty\,
\end{equation}
and
$(L_\beta \circ P_H -z)^{-1}$ converges strongly to $(L_{\text{st}}\circ P_H -z)^{-1}$ for every $z\in K$, namely,
\begin{equation}\label{e:strong_convergence}
\lim_{\beta\to \infty} \|(L_\beta \circ P_H -z)^{-1} (w) - (L_{\text{st}}\circ P_H -z)^{-1} (w)\| = 0\, \qquad \forall w\in L^2_m\, .
\end{equation}
\end{lemma}
\begin{lemma}\label{l:three}
For every $\varepsilon >0$ there is a $R=R (\varepsilon)$ such that
\begin{equation}\label{e:exclude_large_eigenvalue}
{\rm spec}_m (L_\beta) \cap \{z : |z|\geq R, |{\rm Re}\, z|\geq \varepsilon\} = \emptyset \qquad
\forall \beta \geq 1\, .
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:two}] The proof is analogous for all $H$ and we will thus show it for $H=L^2_m$. Fix first $z$ such that ${\rm Re}\, z \neq 0$ and recalling that $z- \beta^{-1} S_2$ is invertible, we write
\begin{equation}\label{e:invert_L_beta-z}
z- L_\beta = (z- \beta^{-1} S_2) (1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})\, .
\end{equation}
\medskip
{\bf Step 1} The operators $(\beta^{-1} S_2 -z)^{-1}$ enjoy the bound
\begin{equation}\label{e:uniform-bound-inverse-beta}
\|(z-\beta^{-1} S_2)^{-1}\|_O \leq |{\rm Re}\, z|^{-1}
\end{equation}
because the operators $\beta^{-1} S_2$ are skew-adjoint. We claim that $(z-\beta^{-1} S_2)^{-1}$ converges strongly to $(z-S_1)^{-1}$ as $\beta \to \infty$. For a family of operators with a uniform bound on the operator norm, it suffices to show the strong convergence of $(z-\beta^{-1} S_2)^{-1} w$ for all $w$ in a (strongly) dense subset.
Without loss of generality we can assume ${\rm Re}\, z >0$. Recalling that $\beta^{-1} S_2$ generates a strongly continuous unitary semigroup, we can use the formula
\begin{equation}\label{e:exponential_formula}
(z-\beta^{-1} S_2)^{-1} (w) = \int_0^\infty e^{-(z-\beta^{-1} S_2)\tau} (w)\, d\tau\, .
\end{equation}
Next observe that $\|e^{\beta^{-1} S_2\tau}\|_O=1$. Moreover, if $w\in \mathscr{S}$, then $e^{\beta^{-1}S_2 \tau} w$ is the solution of a transport equation with a smooth, locally bounded coefficient and initial data $w$. We can thus pass to the limit as $\beta\to \infty$ and conclude that $e^{\beta^{-1} S_2 \tau} w$ converges strongly in $L^2$ to $e^{S_1\tau} w$. We can then use the dominated convergence theorem in \eqref{e:exponential_formula} to conclude that $(z-\beta^{-1} S_2)^{-1} (w)$ converges to $(z-S_1)^{-1} (w)$ strongly in $L^2$. Since $\mathscr{S}$ is strongly dense, this concludes our proof.
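Before moving to the next step we recall, for completeness, the standard argument behind \eqref{e:uniform-bound-inverse-beta}: since $\beta^{-1} S_2$ is skew-adjoint, $\langle \beta^{-1} S_2 u, u\rangle$ is purely imaginary, hence for every $u$ in the domain
\[
|{\rm Re}\, z|\, \|u\|_{L^2}^2 = \left|{\rm Re}\, \langle (z-\beta^{-1} S_2) u, u\rangle\right| \leq \|(z-\beta^{-1} S_2) u\|_{L^2}\, \|u\|_{L^2}\, ,
\]
which, combined with the surjectivity of $z-\beta^{-1} S_2$ for ${\rm Re}\, z\neq 0$ (again a consequence of skew-adjointness), gives \eqref{e:uniform-bound-inverse-beta}.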
\medskip
{\bf Step 2} We next show that $(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges in the operator norm to
$(z-S_1)^{-1} \circ \mathscr{K}$. Indeed, using Proposition \ref{p:abstract} we can find a sequence of finite rank projections $P_N$ such that $P_N \circ \mathscr{K}$ converges to $\mathscr{K}$ in operator norm. Thanks to the uniform bound \eqref{e:uniform-bound-inverse-beta} (which, by skew-adjointness, holds for $S_1$ as well), it then suffices to show that $(z-\beta^{-1} S_2)^{-1} \circ P_N \circ \mathscr{K}$ converges to $(z-S_1)^{-1} \circ P_N \circ \mathscr{K}$ in operator norm for each $N$. But $(z-\beta^{-1} S_2)^{-1} \circ P_N \circ \mathscr{K}$ factors through the finite-dimensional image of $P_N$, and for such operators the strong convergence proved in Step 1 upgrades to norm convergence, as spelled out below.
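Indeed, if $v_1, \ldots, v_r$ is an orthonormal basis of the (finite-dimensional) image of $P_N$, then for every $\Omega\in L^2_m$
\begin{align*}
\left\|\left[(z-\beta^{-1} S_2)^{-1} - (z-S_1)^{-1}\right] \circ P_N \circ \mathscr{K} (\Omega)\right\|_{L^2}
&\leq \sum_{i=1}^r \left|\langle \mathscr{K} (\Omega), v_i\rangle\right|\, \left\|\left[(z-\beta^{-1} S_2)^{-1} - (z-S_1)^{-1}\right] (v_i)\right\|_{L^2}\\
&\leq \|\mathscr{K}\|_O\, \|\Omega\|_{L^2} \sum_{i=1}^r \left\|\left[(z-\beta^{-1} S_2)^{-1} - (z-S_1)^{-1}\right] (v_i)\right\|_{L^2}\, ,
\end{align*}
and the last sum converges to $0$ as $\beta\to\infty$ by Step 1; taking the supremum over $\|\Omega\|_{L^2}\leq 1$ gives the desired norm convergence.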
\medskip
{\bf Step 3} Fix $z$ with ${\rm Re}\, z\neq 0$ which is outside the spectrum of $L_{\text{st}}$. Because of Step 2 we conclude that
\[
(1- (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}) \to (1- (z-S_1)^{-1}\circ \mathscr{K})
\]
in the operator norm. Observe that $1-(z-S_1)^{-1}\circ \mathscr{K}$ is a compact perturbation of the identity. As such it is a Fredholm operator with index $0$ and thus invertible if and only if its kernel is trivial. Its kernel is given by $w$ which satisfy
\[
z w - S_1 (w)- \mathscr{K} (w) = 0\, ,
\]
i.e. it is the kernel of $z-(S_1+\mathscr{K}) = z- L_{\text{st}}$, which is trivial since $z$ is not in the spectrum of $L_{\text{st}}$. Thus $(1-(z-S_1)^{-1} \circ \mathscr{K})$ is invertible and hence, because of the operator convergence, so is $(1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K})$ for any sufficiently large $\beta$. Hence, by \eqref{e:invert_L_beta-z}, so is $z-L_\beta$.
\medskip
{\bf Step 4} The inverse $(z- L_\beta)^{-1}$ is given explicitly by the formula
\begin{equation}\label{e:inversion_formula}
(z-L_\beta)^{-1} = (1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})^{-1} (z-\beta^{-1} S_2)^{-1}\, .
\end{equation}
Since $1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges to $1- (z-S_1)^{-1} \circ \mathscr{K}$ in the operator norm, their inverses converge as well in the operator norm. Since the composition of a norm convergent family of operators with a strongly convergent, uniformly bounded one is strongly convergent, we conclude that $(z-L_\beta)^{-1}$ converges strongly to the operator
\[
(1- (z-S_1)^{-1} \circ \mathscr{K})^{-1} (z-S_1)^{-1} = (z-L_{\text{st}})^{-1}\, .
\]
\medskip
{\bf Step 5} Choose now a compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}}))$.
Recall first that
\[
K \ni z \mapsto (z-S_1)^{-1}
\]
is continuous in the operator norm. Thus $K\ni z \mapsto (1- (z-S_1)^{-1} \circ \mathscr{K})$ is continuous in the operator norm. {\color{red} We claim that $K\times [0,1]\ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1} \circ \mathscr{K})$ is also continuous in the operator norm and in order to show this we will prove the uniform continuity in $z$ once we fix $\delta$, with an estimate which is independent of $\delta$. We first write
\[
\|(1-(z - \delta S_2)^{-1}\circ \mathscr{K}) - (1- (z'- \delta S_2)^{-1} \circ \mathscr{K})\|_O
\leq \|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \|\mathscr{K}\|_O \, .
\]
Hence we compute
\[
(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1} = (z - \delta S_2)^{-1} \circ ((z' - \delta S_2)-(z-\delta S_2)) \circ (z'-\delta S_2)^{-1}
\]
and use \eqref{e:uniform-bound-inverse-beta} to estimate
\[
\|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \leq |z-z'| \|(z - \delta S_2)^{-1}\|_O \|(z'-\delta S_2)^{-1}\|_O
\leq \frac{|z-z'|}{| {\rm Re}\, z| |{\rm Re}\, z'|}\, .
\]
}
Since the space of invertible operators is open in the norm topology, this implies the existence of a $\delta_0>0$ such that $K\times [0,\delta_0] \ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1}\circ \mathscr{K})^{-1}$ is well defined and continuous. Thus, for $\beta \geq \beta_0= \delta_0^{-1}$ we conclude that $1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K}$ is invertible and the norm of its inverse is bounded by a constant $C$ independent of $\beta$ and $z\in K$. By \eqref{e:inversion_formula} and \eqref{e:uniform-bound-inverse-beta}, we infer that in the same range of $z$ and $\beta$ the norms of the operators $(z-L_\beta)^{-1}$ enjoy a uniform bound.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:three}]
We show \eqref{e:exclude_large_eigenvalue} with ${\rm Re}\, z \geq \varepsilon$ in place of $|{\rm Re}\, z|\geq \varepsilon$, as the argument for ${\rm Re}\, z \leq -\varepsilon$ is entirely analogous.
Using \eqref{e:invert_L_beta-z}, we wish to show that there is $R = R (\varepsilon)$ such that the operator
\[
1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}
\]
is invertible for all $\beta \geq 1$ and $z$ such that $|z|\geq R$ and ${\rm Re}\, z \geq \varepsilon$.
This will follow after showing that, for $\beta$ and $z$ in the same range
\begin{equation}\label{e:small}
\|(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}\|_O \leq \frac{1}{2}\, .
\end{equation}
By \eqref{e:uniform-bound-inverse-beta}, we can use Proposition \ref{p:abstract} to reduce \eqref{e:small} to the claim
\begin{equation}\label{e:small-2}
\|(z-\beta^{-1} S_2)^{-1} \circ P_V \circ \mathscr{K}\|_O \leq \frac{1}{4}\, ,
\end{equation}
where $P_V$ is the projection onto an appropriately chosen finite-dimensional space $V\subset C^\infty_c$. If $N$ is the dimension of the space and $w_1, \ldots , w_N$ an orthonormal basis, it suffices to show that
\begin{equation}\label{e:small-3}
\|(z-\beta^{-1} S_2)^{-1} (w_i)\|_{L^2} \leq \frac{1}{4N} \qquad \forall i\, .
\end{equation}
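More precisely, the reduction of \eqref{e:small} to \eqref{e:small-2} works as follows: if $V$ is chosen, with the help of Proposition \ref{p:abstract}, so that $\|\mathscr{K} - P_V\circ \mathscr{K}\|_O \leq \frac{\varepsilon}{4}$, then, since ${\rm Re}\, z \geq \varepsilon$, \eqref{e:uniform-bound-inverse-beta} and \eqref{e:small-2} give
\[
\|(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}\|_O \leq \|(z-\beta^{-1} S_2)^{-1}\|_O\, \|\mathscr{K} - P_V \circ \mathscr{K}\|_O + \|(z-\beta^{-1} S_2)^{-1} \circ P_V \circ \mathscr{K}\|_O \leq \frac{1}{\varepsilon}\cdot \frac{\varepsilon}{4} + \frac{1}{4} = \frac{1}{2}\, .
\]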
We argue for one of them and set $w=w_i$. The goal is thus to show
\eqref{e:small-3} provided $|z|\geq R$ for some large enough $R$. We use again \eqref{e:exponential_formula} and write
\[
(z-\beta^{-1} S_2)^{-1} (w) = \underbrace{\int_0^T e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(A)} + \underbrace{\int_T^\infty e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(B)}\, .
\]
We first observe that, since $\|w\|_{L^2}=1$ and $e^{\beta^{-1} S_2 \tau}$ is an isometry,
\[
\|(B)\| \leq \int_T^\infty e^{-\varepsilon \tau}\, d\tau \leq \frac{e^{-\varepsilon T}}{\varepsilon}\, .
\]
Thus, choosing $T$ sufficiently large we achieve $\|(B)\| \leq \frac{1}{8N}$.
Having fixed $T$ we integrate by parts in the integral defining (A) to get
\begin{align*}
(A) & =\int_0^T e^{-z\tau} e^{\beta^{-1} S_2 \tau} (w)\, d\tau
= \underbrace{\frac{w - e^{- (z-\beta^{-1} S_2) T} (w)}{z}}_{=: (A1)} + \underbrace{\frac{1}{z} \int_0^T e^{-z\tau} \beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\, d\tau}_{=: (A2)}\, .
\end{align*}
First of all we can bound
\[
\|(A1)\| \leq \frac{1+ e^{-T\varepsilon}}{|z|}\leq \frac{2}{R}\, .
\]
As for the second term, observe that $[0,T]\ni \tau \mapsto e^{\beta^{-1} S_2 \tau} (w)$ is the solution of a transport equation with smooth coefficients and smooth and compactly supported initial data, considered over a finite interval of time. Hence the support of the solution is compact and the solution is smooth. Moreover, the operators $\beta^{-1} S_2$ are first-order differential operators with coefficients which are smooth and whose derivatives are all bounded. In particular
\[
\max_{\tau\in [0,T]} \|\beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\| \leq C
\]
for a constant $C$ depending on $w$ and $T$ but not on $\beta$. In particular we can estimate
\[
\|(A2)\|\leq \frac{C (T)}{R}\, .
\]
Since $T$ has already been fixed, we can now choose $R$ large enough to conclude $\|(A)\|\leq \frac{1}{8N}$ and hence \eqref{e:small-3}, as desired.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: conclusion}
First of all observe that $z\in {\rm spec}_m (L_\beta)$ if and only if $\beta z + 1-\frac{1}{\alpha}\in {\rm spec}_m (L_{\text{ss}})$. Thus, in order to prove Theorem \ref{thm:spectral} it suffices to find $\beta_0$ and $c_0$ positive such that:
\begin{itemize}
\item[(P)] If $\beta \geq \beta_0$, then ${\rm spec}_m (L_\beta)$ contains an element $z$ with ${\rm Re}\, z \geq c_0$ such that ${\rm Re}\, z = \max \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\}$.
\end{itemize}
Observe indeed that, since the $U_{km}$ are invariant subspaces of $L_{\text{ss}}$, the eigenvalue $\beta z + 1-\frac{1}{\alpha}$ has an eigenfunction $\vartheta$ which belongs to one of them, and we can assume that $k\geq 1$ by possibly passing to the complex conjugate $\bar z$. If $z$ is not real, we then set $\eta=\vartheta$ and the latter has the properties claimed in Theorem \ref{thm:spectral}. If $z$ is real, it then follows that the real and imaginary parts of $\vartheta$ are both eigenfunctions and, upon multiplying by $i$, we can assume that the real part of $\vartheta$ is nonzero. We can thus set $\eta= {\rm Re}\, \vartheta$ as the eigenfunction of Theorem \ref{thm:spectral}.
We will split the proof in two parts, namely, we will show separately that
\begin{itemize}
\item[(P1)] There are $\beta_1, c_0 >0$ such that ${\rm spec}_m (L_\beta)\cap \{{\rm Re}\, z \geq c_0\}\neq \emptyset$ for all $\beta \geq \beta_1$.
\item[(P2)] If $\beta \geq \beta_0:= \max \{\beta_1, 1\}$, then $\sup \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\}$ is attained.
\end{itemize}
\medskip
{\bf Proof of (P1).} We fix $z\in {\rm spec}\, (L_{\text{st}})$ with positive real part and we set $2c_0:= {\rm Re}\, z$. We then fix a contour $\gamma \subset B_\varepsilon (z)$ with the following properties:
\begin{itemize}
\item it is a simple smooth curve;
\item it encloses $z$ and no other portion of the spectrum of $L_{\text{st}}$;
\item it does not intersect the spectrum of $L_{\text{st}}$;
\item it is contained in $\{w: {\rm Re}\, w \geq c_0\}$.
\end{itemize}
By the Riesz formula we know that
\[
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1}\, dw
\]
is a projection onto a subspace which contains all eigenfunctions of $L_{\text{st}}$ relative to the eigenvalue $z$. In particular this projection is nontrivial. By Lemma \ref{l:two} for all sufficiently large $\beta$ the curve $\gamma$ is not contained in the spectrum of $L_\beta$ and we can thus define
\[
P_{z,\beta} = \frac{1}{2\pi i} \int_\gamma (w-L_\beta)^{-1}\, dw\, .
\]
If $\gamma$ does not enclose any element of the spectrum of $L_\beta$, then $P_{z, \beta} = 0$. On the other hand, by Lemma \ref{l:two} and the dominated convergence theorem,
\[
P_{z,\beta} (u) \to P_z (u)
\]
strongly for every $u$. That is, the operators $P_{z,\beta}$ converge strongly to the operator $P_z$. If for a sequence $\beta_k\to \infty$ the operators $P_{z,\beta_k}$ were trivial, then $P_z$ would be trivial too. Since this is excluded, we conclude that the curve $\gamma$ encloses some elements of the spectrum of $L_\beta$ for all $\beta$ large enough. Each such element has real part not smaller than $c_0$.
\medskip
{\bf Proof of (P2).} Set $\varepsilon := c_0$ and apply Lemma \ref{l:three} to find $R>0$ such that
${\rm spec}_m (L_\beta)\setminus \overline{B}_R$ is contained in $\{w: {\rm Re}\, w < c_0\}$. In particular, if $\beta \geq \max\{\beta_1, 1\}$, then the eigenvalue $z$ found in the previous step belongs to $\overline{B}_R$ and thus
\[
\sup\, \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\} =
\sup\, \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta),\ {\rm Re}\, w \geq c_0,\ |w|\leq R\}\, .
\]
However, since ${\rm spec}\, (L_\beta)\cap \{w: {\rm Re}\, w \neq 0\}$ belongs to the discrete spectrum, the set $\{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\cap {\rm spec}\, (L_\beta)$ is finite, and it is nonempty because it contains the eigenvalue found in (P1). Hence the supremum is attained.
\chapter{Linear theory: Part II}
\label{chapter:linearpartii}\label{sect:Proof-spectral-2}
This chapter is devoted to proving Theorem \ref{thm:spectral2}. Because of the discussions in the previous chapter, considering the decomposition
\[
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, ,
\]
the statement of Theorem \ref{thm:spectral2} can be reduced to the study of the spectra of the restrictions $L_{\text{st}}|_{U_{km}}$ of the operator $L_{\text{st}}$ to the invariant subspaces $U_{km}$. For this reason we introduce the notation ${\rm spec}\, (L_{\text{st}}, U_j)$ for the spectrum of the operator $L_{\text{st}}|_{U_j}$, understood as an operator from $U_j$ to $U_j$. The following is a very simple observation.
\begin{lemma}
The restriction of the operator $L_{\text{st}}$ to the radial functions $U_0$ is identically $0$. Moreover, $z\in {\rm spec}\, (L_{\text{st}}, U_j)$ if and only if $\bar z \in {\rm spec}\, (L_{\text{st}}, U_{-j})$.
\end{lemma}
We will then focus on proving the following statement, which is slightly stronger than what we need to infer Theorem \ref{thm:spectral2}.
\begin{theorem}\label{thm:spectral3}
For a suitable choice of $\bar \Omega$, there is $m_0\geq 2$ such that
${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ is nonempty and ${\rm spec}\, (L_{\text{st}}, U_{m_0})\cap \{{\rm Re}\, z \geq \bar a\}$ is finite for every positive $\bar a$.
\end{theorem}
\begin{remark}\label{r:better2}\label{R:BETTER2} As is the case for Theorem \ref{thm:spectral2}, we can deepen our analysis and prove the following stronger statement:
\begin{itemize}
\item[(i)] For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} we have ${\rm spec}\, (L_{\text{st}}, U_m) \subset i \mathbb R$ for every $m> m_0$.
\end{itemize}
This will be done in Appendix \ref{a:better}, where we will also show how conclusion (c) of Remark \ref{r:better} follows from it.
\end{remark}
{\color{red} Note that in \cite{Vishik2} Vishik claims the following stronger statement.
\begin{theorem}\label{thm:spectral-stronger-2}\label{THM:SPECTRAL-STRONGER-2}
For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} and to Remark \ref{r:better2}(i), we have also
\begin{itemize}
\item[(ii)] ${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ consists of a single eigenvalue with algebraic multiplicity $1$.
\end{itemize}
\end{theorem}
In Appendix \ref{a:better} we will show how to prove the latter conclusion and how Theorem \ref{thm:spectral-stronger} follows from it.}
\section{Preliminaries}
If we write an arbitrary element $\Omega\in U_m$ as $\Omega (x) = e^{im\theta} \gamma (r)$ using polar coordinates, we find an isomorphism of the Hilbert space $U_m$ with the Hilbert space
\begin{equation}\label{e:def-H}
\mathcal{H}:= \left\{\gamma : \mathbb R^+ \to \mathbb C : \int_0^\infty |\gamma (r)|^2\, r\, dr < \infty\right\}
\end{equation}
and thus the operator $L_{\text{st}}: U_m \to U_m$ can be identified with an operator $\mathcal{L}_m : \mathcal{H}\to \mathcal{H}$. In fact, since $L_{\text{st}} = S_1+\mathscr{K}$, where $S_1$ is skew-adjoint and $\mathscr{K}$ compact, $\mathcal{L}_m$ is also a compact perturbation of a skew-adjoint operator. In order to simplify our notation and terminology, we will then revert our considerations to the operator $i\mathcal{L}_m$, which will be written as the sum of a self-adjoint operator, denoted by $\mathcal{S}_m$, and a compact operator, denoted by $\mathscr{K}_m$.
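Concretely, writing the $L^2$ norm in polar coordinates one gets
\[
\|\Omega\|_{L^2 (\mathbb R^2)}^2 = \int_0^{2\pi}\!\!\int_0^\infty |e^{im\theta}\gamma (r)|^2\, r\, dr\, d\theta = 2\pi \int_0^\infty |\gamma (r)|^2\, r\, dr\, ,
\]
so that, up to the harmless factor $2\pi$ (depending on the normalization one prefers for $\mathcal{H}$), the identification $\Omega\mapsto \gamma$ is an isometry between $U_m$ and $\mathcal{H}$.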
\begin{lemma}\label{l:S-in-polar}
After the latter identification, if $\bar\Omega (x) = g (|x|)$ and $\zeta$ is given through the formula \eqref{e:def-zeta}, then $\mathcal{S}_m: \mathcal{H}\to \mathcal{H}$ is the following bounded self-adjoint operator:
\begin{equation}\label{e:explicit}
\gamma \mapsto \mathcal{S}_m (\gamma) = m \zeta \gamma\, .
\end{equation}
\end{lemma}
\begin{proof}
The formula is easy to check. The self-adjointness of \eqref{e:explicit} is obvious. Concerning the boundedness we need to show that $\zeta$ is bounded. Since $g$ is smooth (and hence locally bounded), $\zeta$ is smooth and locally bounded by \eqref{e:def-zeta}. To show that it is globally bounded recall that $g (r) = r^{-{\bar\alpha}}$ for $r\geq 2$, so that
\[
\zeta (r) = \frac{\tilde c_0}{r^2} + \frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{c_0}{r^2} + \frac{c_1}{r^{\bar\alpha}} \qquad \forall r\geq 2\, ,
\]
where $c_0$ and $c_1$ are two appropriate constants.
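In fact the constants can be computed explicitly: carrying out the integral,
\[
\frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{1}{(2-{\bar\alpha})\, r^{{\bar\alpha}}} - \frac{2^{2-{\bar\alpha}}}{(2-{\bar\alpha})\, r^2}\, ,
\]
so that $c_1 = \frac{1}{2-{\bar\alpha}}$ and $c_0 = \tilde c_0 - \frac{2^{2-{\bar\alpha}}}{2-{\bar\alpha}}$.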
\end{proof}
A suitable, more complicated, representation formula can be shown for the operator $\mathscr{K}_m$.
\begin{lemma}\label{l:K-in-polar}
Under the assumptions of Lemma \ref{l:S-in-polar}, the compact operator $\mathscr{K}_m: \mathcal{H}\to \mathcal{H}$ is given by
\begin{equation}\label{e:explicit2}
\gamma \mapsto \mathscr{K}_m (\gamma)= - \frac{m}{r} \psi g'\,
\end{equation}
where
\begin{equation}\label{e:explicit3}
\psi (r) = - \frac{1}{2m} r^{m} \int_r^\infty \gamma (s) s^{1-m}\, ds - \frac{1}{2m} r^{-m} \int_0^r \gamma (s) s^{1+m}\, ds\, .
\end{equation}
\end{lemma}
\begin{remark}\label{r:potential-theory}
When $\gamma$ is compactly supported, $\phi (\theta,r):= \psi (r) e^{im\theta}$ with $\psi$ as in \eqref{e:explicit3} gives the unique potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$, namely, $\phi$ obtained as the convolution of $\gamma e^{im\theta}$ with the Newtonian potential $\frac{1}{2\pi} \ln r$. For general $\gamma\in \mathcal{H}$ we do not have enough summability to define such convolution using Lebesgue integration, but, as already done before, we keep calling $\phi$ the potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{l:K-in-polar}]
First of all we want to show that the formula is correct when $\Omega = \gamma (r) e^{im \theta} \in C^\infty_c \cap L^2_m$. We are interested in computing $-i (K_2*\Omega\cdot \nabla) \bar \Omega$. First of all we recall that $K_2* \Omega = \nabla^\perp \phi$, where $\phi$ is the potential-theoretic solution of $\Delta \phi = \Omega$. Recall that for $\phi$ we have the explicit formula
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) \ln |y-x|\,\mathrm dy\, .
\]
$\phi$ is clearly smooth and hence locally bounded. Observe that $\Omega$ averages to $0$ and thus
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) (\ln |y-x| - \ln |x|)\,\mathrm dy\, .
\]
Fix $R$ larger than $1$ so that ${\rm spt}\, (\Omega) \subset B_R$ and choose $|x|\geq 2R$. We then have the following elementary inequality for every $y\in {\rm spt}\, (\Omega)$:
\[
|\ln |x| - \ln |x-y||\leq \ln (|x-y| + |y|) - \ln (|x-y|)\leq \frac{|y|}{|y-x|} \leq \frac{2|y|}{|x|}\, ,
\]
from which we conclude that $|\phi (x)|\leq C (1+|x|)^{-1}$. Hence $\phi$ is the only solution to $\Delta \phi = \Omega$ with the property that it converges to $0$ at infinity. This allows us to show that $\phi$ satisfies the formula
\[
\phi (x) = \psi (r) e^{im\theta}
\]
where $\psi$ is given by formula \eqref{e:explicit3}. We indeed just need to check that the Laplacian of
$\psi (r) e^{im\theta}$ equals $\gamma (r) e^{im\theta}$ and that $\lim_{r\to \infty} \psi (r) = 0$.
Using the formula $\Delta = \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2}$ the first claim is a direct verification. Next, since $\gamma (r) =0$ for $r\geq R$, we conclude $\psi (r) = C r^{-m}$ for all $r\geq R$, which shows the second claim.
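For the reader's convenience we spell out this verification. Set $I_+ (r) := \int_r^\infty \gamma (s)\, s^{1-m}\, ds$ and $I_- (r) := \int_0^r \gamma (s)\, s^{1+m}\, ds$, so that $\psi = -\frac{1}{2m}\, (r^m I_+ + r^{-m} I_-)$ by \eqref{e:explicit3}. Using $I_+' (r) = -\gamma (r)\, r^{1-m}$ and $I_-' (r) = \gamma (r)\, r^{1+m}$ we compute
\begin{align*}
\psi' &= -\frac{1}{2} \left( r^{m-1} I_+ - r^{-m-1} I_-\right)\, ,\\
\psi'' &= \gamma - \frac{1}{2} \left( (m-1)\, r^{m-2} I_+ + (m+1)\, r^{-m-2} I_-\right)\, ,
\end{align*}
and therefore
\[
\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi
= \gamma + \left(\frac{m}{2} - \frac{m-1}{2} - \frac{1}{2}\right) r^{m-2} I_+ + \left(\frac{m}{2} - \frac{m+1}{2} + \frac{1}{2}\right) r^{-m-2} I_- = \gamma\, ,
\]
which is exactly the first claim, since $\Delta (\psi (r) e^{im\theta}) = \big(\psi'' + \frac{1}{r}\psi' - \frac{m^2}{r^2}\psi\big)\, e^{im\theta}$.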
Observe next that
\[
\nabla \phi = \frac{m i}{r^2} \psi (r) e^{im\theta} \frac{\partial}{\partial \theta} - \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial r}\, ,
\]
which turns into
\[
\nabla \phi^\perp = - \frac{mi}{r} \psi (r) e^{im\theta} \frac{\partial}{\partial r} - \frac{1}{r} \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial \theta}\, .
\]
Since $\bar\Omega (x) = g(r)$, we then conclude that
\[
- (K_2*\Omega\cdot \nabla) \bar\Omega = \frac{mi}{r} \psi (r) e^{im\theta} g' (r)\, .
\]
Upon multiplication by $i$ we obtain formula \eqref{e:explicit2}. Since we know from the previous chapter that $\mathscr{K}$ is a bounded and compact operator and $\mathscr{K}_m$ is just the restriction of $i\mathscr{K}$ to a closed invariant subspace of it, the boundedness and compactness of $\mathscr{K}_m$ is obvious.
\end{proof}
Notice next that, while in all the discussion so far we have always assumed that $m$ is an integer larger than $1$, the operator $\mathcal{S}_m$ can in fact be easily defined for every {\em real} $m>1$, while, using the formulae \eqref{e:explicit2} and \eqref{e:explicit3} we can also make sense of $\mathscr{K}_m$ for every real $m>1$. In particular we can define as well the operator $\mathcal{L}_m$ for every $m>1$. The possibility of varying $m$ as a real parameter will play a crucial role in the rest of the chapter, and we start by showing that, for $m$ in the above range, the boundedness of $\mathcal{L}_m$ and $\mathcal{S}_m$ and the compactness of $\mathscr{K}_m$ continue to hold.
\begin{proposition}\label{p:all-m}
The operators $\mathcal{L}_m$, $\mathcal{S}_m$, and $\mathscr{K}_m$ are bounded operators from $\mathcal{H}$ to $\mathcal{H}$ for every real $m>1$, with a uniform bound on their norms if $m$ ranges in a compact set. Moreover, under the same assumption $\mathscr{K}_m$ is compact. In particular:
\begin{itemize}
\item[(i)] ${\rm spec}\, (\mathcal{L}_m)$ is compact;
\item[(ii)] for every $z$ with ${\rm Im}\, z \neq 0$ the operator $\mathcal{L}_m-z$ is a bounded Fredholm operator with index $0$;
\item[(iii)] every $z\in {\rm spec}\, (\mathcal{L}_m)$ with ${\rm Im}\, z \neq 0$ belongs to the discrete spectrum.
\end{itemize}
\end{proposition}
\begin{proof} The boundedness of $\mathcal{S}_m$ is obvious. Once the boundedness and compactness of $\mathscr{K}_m$ are established (which we do below), (i) follows immediately from the boundedness of $\mathcal{L}_m$, while (ii) follows immediately from the fact that $\mathcal{L}_m - z$ is a compact perturbation of the operator $\mathcal{S}_m -z$, which is invertible because $\mathcal{S}_m$ is self-adjoint, and (iii) is a standard consequence of (ii).
First of all let us prove that $\mathcal{K}_m$ is bounded (the proof is necessary because what was previously proved only yields the boundedness and compactness of the operator for {\em integer values} of $m$ larger than $1$). We first observe that $\| r^{-1}\psi\|_\infty \leq C\, \|\gamma\|_{\mathcal{H}}$, as follows from Cauchy-Schwarz:
\begin{align*}
r^{m-1} \int_r^\infty |\gamma (s)| s^{1-m}\, ds&\leq
r^{m-1} \left(\int_r^\infty |\gamma(s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_r^\infty s^{1-2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m-2}} \|\gamma\|_{\mathcal{H}}\\
r^{-m-1} \int_0^r |\gamma (s)| s^{1+m}\, ds &\leq r^{-m-1} \left(\int_0^r |\gamma (s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_0^r s^{1+2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m+2}} \|\gamma\|_{\mathcal{H}}\, .
\end{align*}
Since $|g' (r)| \leq C (1+r)^{-1-{\bar\alpha}}$, it follows immediately that
\begin{equation}\label{e:K_m-pointwise-bound}
|(\mathcal{K}_m (\gamma)) (r)| \leq \frac{C\|\gamma\|_{\mathcal{H}}}{(1+r)^{1+{\bar\alpha}}}
\end{equation}
and in particular
\[
\|\mathcal{K}_m (\gamma)\|_{\mathcal{H}} \leq
C\|\gamma\|_{\mathcal{H}} \left(\int_0^\infty \frac{s}{(1+s)^{2+2{\bar\alpha}}}\, ds\right)^{\frac{1}{2}}
\leq C \|\gamma\|_{\mathcal{H}} \, .
\]
This completes the proof of boundedness of the operator. In order to show compactness consider now a bounded sequence $\{\gamma_k\}\subset \mathcal{H}$. Observe that for every fixed $N$, \eqref{e:explicit3} gives the following obvious bound
\begin{equation}
\|\mathcal{K}_m(\gamma_k)\|_{W^{1,2} [N^{-1}, N]} \leq C (N) \|\gamma_k\|_{\mathcal{H}}\, .
\end{equation}
In particular, through a standard diagonal procedure, we can extract a subsequence of $\{\mathcal{K}_m(\gamma_k)\}$ (not relabeled) which converges strongly in $L^2 ([N^{-1}, N], rdr)$ for every $N$. It is now easy to show that $\{\mathcal{K}_m (\gamma_k)\}_k$ is a Cauchy sequence in $\mathcal{H}$. Fix indeed $\varepsilon>0$. Using \eqref{e:K_m-pointwise-bound} it is easy to show that there is a sufficiently large $N$ with the property that
\begin{equation}\label{e:Cauchy-1}
\sup_k \|\mathcal{K}_m (\gamma_k) \mathbf{1}_{[0, N^{-1}]\cup [N, \infty[}\|_{\mathcal{H}} < \frac{\varepsilon}{3}\, .
\end{equation}
Hence, given such an $N$, we can choose $k_0$ big enough so that
\begin{equation}\label{e:Cauchy-2}
\|(\mathcal{K}_m (\gamma_k) - \mathcal{K}_m (\gamma_j)) \mathbf{1}_{[N^{-1}, N]}\|_{\mathcal{H}} \leq
\frac{\varepsilon}{3} \qquad \forall k,j \geq k_0\, .
\end{equation}
Combining \eqref{e:Cauchy-1} and \eqref{e:Cauchy-2} we immediately conclude
\[
\|\mathcal{K}_m (\gamma_k) - \mathcal{K}_m (\gamma_j)\|_{\mathcal{H}} < \varepsilon
\]
for every $j,k \geq k_0$. This completes the proof that $\{\mathcal{K}_m (\gamma_j)\}$ is a Cauchy sequence and hence the proof that $\mathcal{K}_m$ is compact.
\end{proof}
\section{The eigenvalue equation and the class \texorpdfstring{$\mathscr{C}$}{scrC}}
\label{s:eigenvalue-equation}
Using the operators introduced in the previous section, we observe that Theorem \ref{thm:spectral3} is equivalent to showing that ${\rm spec}\, (\mathcal{L}_{m_0}) \cap \{{\rm Im}\, z>0\}$ is finite and nonempty.
We next notice that, thanks to Proposition \ref{p:all-m}, the latter is equivalent to showing that the equation\footnote{Recall that $\psi$ is defined through \eqref{e:explicit3}.}
\begin{equation}\label{e:eigenvalue-equation}
m \zeta \gamma - \frac{m}{r} g' \psi = z \gamma
\end{equation}
has a nontrivial solution $\gamma \in \mathcal{H}$ for some integer $m=m_0\geq 2$ and some complex number $z$ with positive imaginary part.
We thus turn \eqref{e:eigenvalue-equation} into an ODE problem by changing the unknown from $\gamma$ to the function $\psi$.
In particular, recall that the relation between the two is that $\Delta (\psi (r) e^{im\theta}) = \gamma (r) e^{im\theta}$, and $\psi e^{im\theta}$ is in fact the potential-theoretic solution. We infer that
\[
\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi = \gamma\,
\]
and hence \eqref{e:eigenvalue-equation} becomes
\begin{equation}\label{e:eigenvalue-equation-2}
- \psi'' - \frac{1}{r}\psi' + \frac{m^2}{r^2} \psi + \frac{g'}{r (\zeta -m^{-1} z)} \psi = 0 \, .
\end{equation}
Notice that, by classical estimates for ODEs, $\psi \in W^{2,2}_{\text{loc}} (\mathbb R^+)$.
Observe, moreover, that if $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-2} and $z$ has nonzero imaginary part, it follows that
\[
\gamma = \frac{mg'}{r (m \zeta -z)} \psi
\]
belongs to $L^2 (r dr)$ and solves \eqref{e:eigenvalue-equation}, because the function $\frac{mg'}{m \zeta -z}$ is bounded. Vice versa, assume that $\gamma \in L^2 (r dr)$ solves \eqref{e:eigenvalue-equation}. Then $\psi$ solves \eqref{e:eigenvalue-equation-2} and we claim that $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$. First of all notice that, by classical Calder{\'o}n-Zygmund estimates, $\phi (x) := \psi (r) e^{im\theta}$ is a $W^{2,2}_{\text{loc}}$ function of $\mathbb R^2$. As such $\phi\in C^\omega (B_1)$ for every $\omega<1$ and therefore $\psi\in C^\omega ([0,1])$ and, by symmetry considerations, $\psi (0) =0$. Thus it turns out that $|\psi (r)|\leq C r^\omega$ for every $r\in [0,1]$, which easily shows that $\psi\in L^2 ([0,1], \frac{dr}{r})$. It remains to show that
\begin{equation}\label{e:correzione-1}
\int_1^\infty \frac{|\psi (r)|^2}{r}\, dr < \infty\, .
\end{equation}
However recall that, for $r$ sufficiently large, $\zeta (r) = \frac{c_0}{r^2}+ \frac{c_1}{r^{\bar\alpha}}$ for some constants $c_0$ and $c_1$, while $g' (r) = -{\bar\alpha} r^{1+{\bar\alpha}}$. We thus infer
\[
|\psi (r)| = \left|\frac{r (\zeta (r)-\frac{z}{m})}{g' (r)}\right| |\gamma (r)| \leq \frac{C|\gamma (r)|}{r^{\bar\alpha}}\, ,
\]
which in turn easily implies \eqref{e:correzione-1} because $\int_1^\infty |\gamma (r)|^2 r\, dr < \infty$.
Hence our problem is equivalent to understanding for which $m$ and $z$ with positive imaginary part there is an $L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solution of \eqref{e:eigenvalue-equation-2}. The next step is to change variables to $t = \ln r$ and we thus set $\varphi (t) = \psi (e^t)$, namely, $\psi (r) = \varphi (\ln r)$. The condition that $\psi\in L^2 (\frac{dr}{r})$ translates then into $\varphi\in L^2 (\mathbb R)$ and $\psi\in W^{2,2}_{\text{loc}}$ translates into $\varphi\in W^{2,2}_{\text{loc}}$.
Moreover, if we substitute the complex number $z$ with $\frac{z}{m}$ we can rewrite
\begin{equation}\label{e:eigenvalue-equation-3}
- \varphi'' (t) + m^2 \varphi (t) + \frac{A(t)}{\Xi (t) - z} \varphi (t) = 0\, ,
\end{equation}
which is \emph{Rayleigh's stability equation},
where the functions $A$ and $\Xi$ are given by changing variables in the corresponding functions $g'$ and
$\zeta$:
\begin{align}
A (t) &= \frac{d}{dt} g(e^t)\\
\Xi (t) &= \int_{-\infty}^t e^{-2 (t-\tau)} g (e^\tau)\, d\tau\, .
\end{align}
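For the reader's convenience, here is the computation behind the change of variables (a sketch): with $t=\ln r$ and $\psi (r) = \varphi (t)$ one has
\[
\psi' (r) = \frac{\varphi' (t)}{r}\, , \qquad \psi'' (r) = \frac{\varphi'' (t) - \varphi' (t)}{r^2}\, ,
\]
so that, multiplying \eqref{e:eigenvalue-equation-2} by $r^2$,
\[
-\varphi'' (t) + m^2 \varphi (t) + \frac{r\, g' (r)}{\zeta (r) - m^{-1} z}\, \varphi (t) = 0\, .
\]
Since $A (t) = \frac{d}{dt} g (e^t) = e^t g' (e^t) = r\, g' (r)$ and $\Xi (t) = \zeta (e^t)$ (the latter assuming, as \eqref{e:def-zeta} and the proof of Lemma \ref{l:S-in-polar} suggest, that $\zeta (r) = \frac{1}{r^2}\int_0^r s\, g(s)\, ds$), renaming $\frac{z}{m}$ as $z$ yields \eqref{e:eigenvalue-equation-3}.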
Note in particular that we can express $A$ and $\Xi$ through the relation
\begin{equation}\label{e:A-Xi}
A = \Xi'' + 2 \Xi'\, .
\end{equation}
The function $g$ (and so our radial function $\bar\Omega$) can be expressed in terms of $\Xi$ through the formula
\begin{equation}\label{e:formula-g}
g (e^t) = e^{-2t} \frac{d}{dt} (e^{2t} \Xi (t))\, .
\end{equation}
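Both relations follow by differentiating the definition of $\Xi$: since
\[
\Xi' (t) = -2 \int_{-\infty}^t e^{-2 (t-\tau)} g (e^\tau)\, d\tau + g (e^t) = -2\, \Xi (t) + g (e^t)\, ,
\]
we get $g (e^t) = \Xi' (t) + 2 \Xi (t) = e^{-2t} \frac{d}{dt} (e^{2t} \Xi (t))$, which is \eqref{e:formula-g}, and differentiating once more, $A (t) = \frac{d}{dt} g (e^t) = \Xi'' (t) + 2 \Xi' (t)$, which is \eqref{e:A-Xi}.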
Rather than looking for $g$ we will then look for $\Xi$ in an appropriate class $\mathscr{C}$ which we next detail:
\begin{definition}\label{d:class-C}
The class $\mathscr{C}$ consists of those functions $\Xi: \mathbb R \to ]0, \infty[$ such that
\begin{itemize}
\item[(i)] $\Xi (-\infty) := \lim_{t\to - \infty} \Xi (t)$ is finite and there are constants $c_0>0$ and $M_0$ such that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for all $t\leq M_0$;
\item[(ii)] there is a constant $c_1$ such that $\Xi (t) = c_1 e^{-2t} + \frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}$ for $t\geq \ln 2$;
\item[(iii)] $A$ has exactly two zeros, denoted by $a<b$, and $A' (a)>0$ and $A' (b)<0$ (in particular $A<0$ on $]-\infty,a[ \cup ]b, \infty[$ and $A>0$ on $]a,b[$);
\item[(iv)] $\Xi ' (t) <0$ for every $t$.
\end{itemize}
\end{definition}
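As a quick check on Definition \ref{d:class-C}, conditions (i) and (ii) prescribe the behavior of $g$ through \eqref{e:formula-g}: for $t\leq M_0$ condition (i) gives
\[
g (e^t) = \Xi' (t) + 2 \Xi (t) = -2c_0 e^{2t} + 2\Xi (-\infty) - 2 c_0 e^{2t} = 2\Xi (-\infty) - 4 c_0 e^{2t}\, ,
\]
while for $t \geq \ln 2$ condition (ii) gives
\[
g (e^t) = e^{-2t} \frac{d}{dt}\left(c_1 + \frac{1}{2-{\bar\alpha}}\, e^{(2-{\bar\alpha}) t}\right) = e^{-{\bar\alpha} t}\, ,
\]
i.e.\ $g (r) = 2\Xi (-\infty) - 4 c_0 r^2$ near the origin and $g (r) = r^{-{\bar\alpha}}$ for $r\geq 2$.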
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/fig1.pdf}
\caption{A sketch of the function in the class $\mathscr{C}$ which will be finally chosen in Section \ref{s:choice-of-A} to prove Theorem \ref{thm:spectral3}, in the $t= \log r$ axis. The graph of $A(t)$ is the solid curve, $G(t):=\Xi'(t)+2\Xi(t)$ the dashed one, and $\Xi'(t)$ the dotted one. Even though $A$ is smooth, its derivative undergoes a very sharp change around the point $t = \frac{1}{2}$ and the point $t= -\frac{1}{\sqrt{B}}$, where $B$ is an appropriately large constant, cf. Section \ref{s:choice-of-A}.}
\label{fig:fig1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{Figures/fig2.pdf}
\caption{The profile of the background vorticity $\bar \Omega(x) = g(r)$ in the original coordinates (the solid curve). Compare with the exact singular profile $r^{-{\bar\alpha}}$ (the dashed curve).}
\label{fig:fig2}
\end{figure}
Fix $\Xi\in \mathscr{C}$. By \eqref{e:formula-g}, $g$ is then smooth, it equals $2\Xi (-\infty)- 4 c_0 r^2$ in a neighborhood of $0$, and it is equal to $r^{-{\bar\alpha}}$ for $r\geq 2$, thanks to the conditions (i)-(ii). In particular the corresponding function $\bar \Omega (x) = g (|x|)$ satisfies the requirements of Theorem \ref{thm:spectral3}. We are now ready to turn Theorem \ref{thm:spectral3} into a (in fact stronger) statement for the eigenvalue equation \eqref{e:eigenvalue-equation-3}. In order to simplify its formulation and several other ones in the rest of these notes, we introduce the following sets
\begin{definition}\label{d:sets-U-m}
Having fixed $\Xi\in \mathscr{C}$ and a real number $m>1$, we denote by $\mathscr{U}_m$ the set of those complex $z$ with positive imaginary part with the property that there are nontrivial solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}} (\mathbb R, \mathbb C)$ of \eqref{e:eigenvalue-equation-3}.
\end{definition}
{\begin{remark}\label{r:m-factor-eigenvalue}
Observe that $z$ belongs to $\mathscr{U}_m$ if and only if it has positive imaginary part and $m z$ is an eigenvalue of $\mathcal{L}_m$.
\end{remark}}
\begin{theorem}\label{thm:spectral5}\label{THM:SPECTRAL5}
There is a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m_0}$ is finite and nonempty.
\end{theorem}
\section{Overview of the proof of Theorem \ref{thm:spectral5}}\label{sec:overviewspectralthm}
The rest of the chapter is devoted to proving Theorem \ref{thm:spectral5}. The proof will be achieved through a careful study of Rayleigh's stability equation~\eqref{e:eigenvalue-equation-3} and, in particular, the set $\mathscr{P}$ of pairs $(m,z)$ such that $z\in \mathscr{U}_m$ and $m>1$, i.e.,
\begin{equation}\label{e:def-pairs}
\mathscr{P}:= \left\{(m,z)\in \mathbb R \times \mathbb C: z\in \mathscr{U}_m, m>1\right\}\, .
\end{equation}
Given that $\Xi$ is strictly decreasing, we have
\[
\lim_{t\to -\infty} \Xi (t) > \Xi (a) > \Xi (b) > \lim_{t\to \infty} \Xi (t) = 0
\]
and in order to simplify our notation we will use $\Xi (-\infty)$ for $\lim_{t\to -\infty} \Xi (t)$ and, occasionally, $\Xi (\infty)$ for $0$.
The first step in the proof of Theorem \ref{thm:spectral5} is understanding which pairs $(m,z)$ belong to the closure of $\mathscr{P}$ and have ${\rm Im}\, z =0$. Solutions $(m,z,\varphi)$ to~\eqref{e:eigenvalue-equation-3} with $(m,z) \in \overline{\mathscr{P}}$ are sometimes called \emph{neutral limiting modes}~\cite{LinSIMA2003}.\footnote{The interested reader may compare with the strategy for bounded shear flows in~\cite{LinSIMA2003}.} To that end, it is convenient to introduce the following two self-adjoint operators:
\begin{align}
L_a &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (a)}\label{e:def-L_a}\\
L_b &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (b)}\label{e:def-L_b}\, .
\end{align}
Thanks to the definition of the class $\mathscr{C}$, it is easy to see that both functions $\frac{A(t)}{\Xi (t) - \Xi (a)}$ and $\frac{A(t)}{\Xi (t) - \Xi (b)}$ are bounded and that $\frac{A(t)}{\Xi (t) - \Xi (a)} < \frac{A(t)}{\Xi (t) - \Xi (b)}$. Moreover, the first is negative on $]-\infty, b[$ and positive on $]b, \infty[$, while the second is negative on $]-\infty, a[$ and positive on $]a, \infty[$. Recall that the spectra of these operators are necessarily real and denote by $-\lambda_a$ and $-\lambda_b$ the smallest element in the respective ones: observe that, by the Rayleigh quotient characterization, $-\lambda_a < -\lambda_b$.
The following proposition characterizes the possible neutral limiting modes:
\begin{proposition}\label{p:3+4}\label{P:3+4}
If $(m_0,z)\in \overline{\mathscr{P}}$ and ${\rm Im}\, z =0$, then either $z= \Xi (a)$ or $z= \Xi (b)$. {\color{red} Moreover, in either case, if $m_0>1$ then necessarily $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$.} Assume in addition that $- \lambda_a < -1$. Then, for $z = \Xi (a)$, the unique $m\geq 1$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$. Moreover, any nontrivial solution has the property that $\psi_a (a) \neq 0$.
\end{proposition}
{\color{red}
\begin{remark}\label{r:b-also}
We remark that the exact same argument applies with $b$ in place of $a$ when $\lambda_b >1$, even though this fact does not play any role in the rest of the notes.
\end{remark}
}
Observe that this does not yet show that $(m_a, \Xi (a))\in \overline{\mathscr{P}}$ corresponds to a neutral limiting mode. The latter property will be achieved in a second step, in which we seek a curve of unstable modes emanating from $(m_a, \Xi(a))$:
\begin{proposition}\label{p:5-7}\label{P:5-7}
Assume $- \lambda_a<-1$ and let $m_a=\sqrt{\lambda_a}$.
There are positive constants $\varepsilon >0$ and $\delta>0$ with the following property:
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) \neq \emptyset$.
\end{proposition}
{\color{red}
\begin{remark}\label{r:b-also-2} In fact, the argument given for the proposition proves the stronger conclusion that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ consists of a single point $z$, with the property that $mz$ is an eigenvalue of $\mathcal{L}_m$ with geometric multiplicity $1$.
Moreover, the very same argument applies to $b$ in place of $a$ and $h \in ]-\delta,0[$ if $\lambda_b >1$.
\end{remark}
}
Combined with some further analysis, in which the curve of unstable modes is continued, the latter proposition will allow us to conclude the following:
\begin{proposition}\label{p:almost-final}\label{P:ALMOST-FINAL}
Assume $- \lambda_a<-1$, let $m_a = \sqrt{\lambda_a}$ and set $m_b:= \sqrt{\max \{1, \lambda_b\}}$: then
$\mathscr{U}_m\neq \emptyset$ for every $m\in ]m_b, m_a[$. \end{proposition}
Thus far, we have not selected our function $\Xi$: the above properties are valid for any element in the class $\mathscr{C}$. The choice of $\Xi$ comes in the very last step.
\begin{proposition}\label{p:final}
There is a choice of $\Xi\in \mathscr{C}$ with the property that $]m_b,m_a[$ contains an integer larger than $1$.
\end{proposition}
Clearly, the combination of Proposition \ref{p:almost-final} and Proposition \ref{p:final} gives Theorem \ref{thm:spectral5}: we first choose $\Xi$ as in Proposition \ref{p:final} and hence we select $m_0$ as the largest natural number which belongs to the interval $]m_b,m_a[$; the properties claimed in Theorem \ref{thm:spectral5} follow then from Proposition \ref{p:almost-final}. The proof of Proposition \ref{p:final} is in fact a rather straightforward application of the following.
\begin{lemma}\label{l:bottom}\label{L:BOTTOM}
Let $m_0$ be any integer. Then there exists $\Xi\in \mathscr{C}$ with $a=0$ and $b=\frac{1}{2}$ such that the smallest eigenvalue of the operator $L_a$ is smaller than $-m_0^2$.
\end{lemma}
\begin{remark}\label{rmk:veryunstablemodes}
A consequence of Lemma~\ref{l:bottom} is that the most unstable wavenumber $m_0$ can be made arbitrarily large. Only $m_0 \geq 2$ is necessary to prove non-uniqueness.
\end{remark}
The rest of the chapter will be devoted to proving the Propositions \ref{p:3+4} and \ref{p:5-7} and Lemma \ref{l:bottom}. We finish this section by giving the simple proof of Proposition \ref{p:final}.
\begin{proof} For simplicity we fix $a=0$ and $b=\frac{1}{2}$ and we look at the set of functions $\Xi$ with this particular choice of zeros for $A$. We then denote by $L_{\Xi, a}$ the operator in \eqref{e:def-L_a}. We fix an arbitrary $\Xi_0\in \mathscr{C}$ and let $-\lambda (0)$ be the smallest eigenvalue of $L_{\Xi_0,a}$. We then consider the smallest integer $m_0\geq 3$ such that $m_0^2 > \lambda (0)$. By Lemma \ref{l:bottom} there is an element $\Xi_1\in \mathscr{C}$ with the property that $a=0$, $b=\frac{1}{2}$ and, if $-\lambda (1)$ is the smallest element of the spectrum of $L_{\Xi_1, a}$, then $-\lambda (1) < -m_0^2$. For $\sigma\in [0,1]$ consider $L_{\Xi_\sigma,a}$ where
\[
\Xi_\sigma = (1-\sigma) \Xi_0 + \sigma \Xi_1\,
\]
and observe that $\Xi_\sigma \in \mathscr{C}$ for every $\sigma\in [0,1]$.
Since $\sigma \mapsto \Xi_\sigma$ is continuous in the uniform convergence, by the Rayleigh quotient characterization we see that the smallest element $-\lambda (\sigma)$ of the spectrum of $L_{\Xi_\sigma,a}$ is a continuous function of $\sigma$. There is thus one $\sigma\in [0,1[$ with $\lambda (\sigma)= m_0^2$. Let $\sigma_0$ be the largest $\sigma$ with $\lambda (\sigma)= m_0^2$. Observe now that, if we let $- \mu (\sigma_0)$ be the smallest eigenvalue of $L_{\Xi_{\sigma_0}, b}$, then $\mu (\sigma_0) < m_0^2$. In addition, $\sigma\mapsto \mu (\sigma)$ is also continuous and thus there is $h>0$ such that $\mu (\sigma) < m_0^2$ for all $\sigma\in [\sigma_0-h, \sigma_0+h]$. On the other hand $\lambda (\sigma_0+h)> m_0^2$. This shows that $m_b < m_0 < m_a$ if we choose $\Xi= \Xi_{\sigma_0+h}$, completing the proof of our claim.
\end{proof}
\section{ODE Lemmas}
An essential tool in the proofs of the Propositions \ref{p:3+4} and \ref{p:5-7} are the following two ODE lemmas.
\begin{lemma}\label{l:ODE1}
Let $m>0$. For every $f\in L^2 (\mathbb R)$ there is a unique $\psi\in L^2(\ensuremath{\mathbb R}) \cap W^{2,2}_{\text{loc}}$ s.t.
\begin{equation}\label{e:Laplacian-1d}
-\frac{d^2\psi}{dt^2} + m^2\psi = f
\end{equation}
and it is given by
\begin{equation}\label{e:potential-1d}
\psi (t) = \frac{1}{2m} \int_{\ensuremath{\mathbb R}} e^{-m|t-\tau|} f (\tau)\, d\tau\, .
\end{equation}
\end{lemma}
\begin{proof} The lemma is a classical well-known fact. At any rate the verification that
$\psi$ as in \eqref{e:potential-1d} solves \eqref{e:Laplacian-1d} is an elementary computation while, since obviously $e^{-m|t|}\in L^1$, $\psi\in L^2$ if $f\in L^2$. Moreover, any other solution $\hat\psi$ of \eqref{e:Laplacian-1d} must satisfy $\hat\psi (t) = \psi (t) + C_+ e^{mt} + C_- e^{-mt}$ for some constants $C_\pm$ and the requirement $\hat\psi\in L^2$ immediately implies $C_+=C_-=0$.
\end{proof}
The second ODE Lemma is the following:
\begin{lemma}\label{l:ODE2}
Let $v\in L^1 (\mathbb R, \mathbb C)$. Then for every constant $c_-$ there is a unique solution $y \in W^{2,1}_{\text{loc}} (\mathbb R, \mathbb C)$ of
\begin{equation}\label{e:ODE2}
- \frac{d^2y}{dt^2} + (m^2 + v) y =0
\end{equation}
with the property that
\begin{equation}\label{e:y=e^mt}
\lim_{t\to - \infty} e^{-mt} y (t) =c_-\, .
\end{equation}
Moreover we have $y(t) = e^{mt} (c_-+z(t))$ for a function $z(t)$ which satisfies the bounds
\begin{align}
|z(t)| &\leq |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z}\\
|z'(t)| &\leq 2m |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z'}
\end{align}
A symmetric statement, left to the reader, holds for solutions such that
\begin{equation}\label{e:y=e^mt-plus}
\lim_{t\to \infty} e^{mt} y (t) =c_+\, .
\end{equation}
\end{lemma}
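Although left to the reader in the statement, for later reference the symmetric version presumably reads as follows: for every constant $c_+$ there is a unique solution $y\in W^{2,1}_{\text{loc}} (\mathbb R, \mathbb C)$ of \eqref{e:ODE2} satisfying \eqref{e:y=e^mt-plus}, and it can be written as $y (t) = e^{-mt} (c_+ + z (t))$ with
\begin{align*}
|z(t)| &\leq |c_+|\left[\exp \left(\frac{1}{2m} \int_t^{\infty} |v(s)|\, ds\right) -1\right]\, ,\\
|z'(t)| &\leq 2m\, |c_+|\left[\exp \left(\frac{1}{2m} \int_t^{\infty} |v(s)|\, ds\right) -1\right]\, ,
\end{align*}
which is what one obtains from the stated version after the change of variable $t\mapsto -t$.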
Important consequences of the above Lemmas are the following:
\begin{corollary}\label{c:decay}
If $(m,z)\in \mathscr{P}$, then the space of solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ of \eqref{e:eigenvalue-equation-3} is $1$-dimensional. Moreover for any such $\varphi$ there is a constant $C$ with the property that
\begin{align}
|\varphi (t)| &\leq C e^{-m|t|}\,
\end{align}
and there are two constants $C_+$ and $C_-$ such that
\begin{align}
\lim_{t\to\infty} e^{mt} \varphi (t) &= C_+\\
\lim_{t\to -\infty} e^{-mt} \varphi (t) &= C_-\, .
\end{align}
The constants are either both nonzero or both zero, in which case $\varphi$ vanishes identically.
The same conclusions apply if $m>1$, $z\in \{\Xi (a), \Xi (b)\}$ and $\varphi$ solves \eqref{e:eigenvalue-equation-3}.
\end{corollary}
\begin{proof}
Observe that $|\Xi (t)-z|\geq |{\rm Im}\, z|$, while $A (t) = -8 c_0 e^{2t}$ for $-t$ sufficiently large and $|A(t)|\leq 2 e^{-2{\bar\alpha} t}$ for $t$ sufficiently large. In particular
\begin{equation}\label{e:estimate-A-over-Xi}
\frac{|A(t)|}{|\Xi (t)-z|} \leq C e^{-2{\bar\alpha} |t|}\, .
\end{equation}
First of all notice that, if $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-3}, by Lemma \ref{l:ODE1} (applied with $f= -\frac{A\varphi}{\Xi-z}$) we have
\begin{equation}\label{e:integral-equation}
|\varphi (t)| \leq \frac{C}{2m} \int e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)|\, d\tau\, .
\end{equation}
Using Cauchy-Schwarz and the fact that $\varphi\in L^2$ we immediately obtain that $\varphi\in L^\infty$, namely, that there is a constant $C$ such that $|\varphi|\leq C$. We now prove inductively that $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$ as long as $k{\bar\alpha} \leq m$. The case $k=0$ has already been shown. Assume thus that the inequality holds for $k-1$ and that $k{\bar\alpha} \leq m$. We then observe that
\begin{align*}
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| &\leq C_{k-1} e^{- m|t-\tau| - k{\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|}
\leq C_{k-1} e^{-k{\bar\alpha} (|t-\tau| + |\tau|)} e^{-{\bar\alpha} |\tau|}\\
&\leq C_{k-1} e^{-k{\bar\alpha} |t|} e^{-{\bar\alpha} |\tau|}\, .
\end{align*}
Inserting in \eqref{e:integral-equation} and using that $e^{-{\bar\alpha} |\tau|}\in L^1$ we then obtain $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$. Assuming now $k{\bar\alpha} \leq m < (k+1) {\bar\alpha}$ we can, likewise, bound
\[
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| \leq C_k e^{- m|t-\tau| - (k+1){\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|} \leq C_k e^{-m|t|} e^{-{\bar\alpha} |\tau|}
\]
and plugging into \eqref{e:integral-equation} one last time we conclude $|\varphi (t)|\leq C e^{-m|t|}$.
In order to show that $\varphi$ is unique up to a multiplicative constant, it suffices to show that $\lim_{t\to -\infty} e^{-mt} \varphi (t)$ exists and is finite. Lemma \ref{l:ODE2} would then imply that the solution is uniquely determined by $C_-$, and that the latter must be nonzero, since otherwise $\varphi\equiv 0$.
In order to show existence and finiteness of $C_-$ rewrite
\[
\varphi (t) = -\frac{e^{mt}}{2m} \int_t^\infty e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds
- \frac{e^{-mt}}{2m} \int_{-\infty} ^t e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\]
Since by our estimates both $e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ and $e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ are integrable, we conclude that $C_{\pm}$ exist and equal
\begin{align*}
C_\pm = -\frac{1}{2m}\int_{-\infty}^\infty e^{\pm ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\end{align*}
\medskip
As for the last sentence of the statement of the lemma, the same arguments can be used in the case $z\in \{\Xi (a), \Xi (b)\}$, since the crucial point is that, thanks to the assumption that $A (a) = A(b)=0$ and $\Xi' (a) \neq 0 \neq \Xi' (b)$, the estimate \eqref{e:estimate-A-over-Xi} remains valid.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:ODE2}] We distinguish between the cases $c_-\neq 0$ and $c_-=0$. In the case $c_- \neq 0$ we can divide by $c_-$ and reduce the statement to $c_-=1$. For the existence it suffices to look for a solution of \eqref{e:ODE2} which satisfies \eqref{e:y=e^mt} on a half-line of type $]-\infty, T]$ for some $T$. Such a solution then has a $W^{2,1}_{\text{loc}}$ continuation on $[T, \infty[$ by standard ODE theory. Likewise the uniqueness is settled once we show that it holds on $]-\infty, T]$. Observe next that, if the solution exists, then clearly $\frac{d^2 y}{dt^2}\in L^1 (]-\infty, T])$, hence implying that
\[
\lim_{t\to-\infty} y' (t)
\]
exists and is finite. On the other hand \eqref{e:y=e^mt} implies that such limit must be $0$.
Let $\tilde{y} (t) = e^{-mt} y (t)$ and observe that we are looking for a solution of
\[
(e^{2mt} \tilde{y}')' = e^{2mt} v \tilde{y}\, .
\]
Integrating the latter identity between $-N$ and $t$ and letting $N\to \infty$ we conclude
\begin{equation}\label{e:tildey'}
e^{2mt} \tilde{y}' (t) = \int_{-\infty}^t e^{2ms} v (s)\tilde{y} (s)\, ds\, .
\end{equation}
Divide by $e^{2mt}$ and integrate once more to reach
\begin{align*}
\tilde{y} (t) -1 = \int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds
\end{align*}
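The last equality is simply an exchange of the order of integration (Fubini):
\[
\int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \int_{-\infty}^t v(s)\tilde{y} (s) \int_s^t e^{2m (s-r)}\, dr\, ds
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s)\tilde{y} (s)\, ds\, .
\]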
We then define the transformation
\begin{equation}\label{e:fixed-point}
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds + 1\,
\end{equation}
which we consider as a map from $L^\infty (]-\infty, T])$ into itself.
From our discussion we conclude that $y$ solves \eqref{e:ODE2} and obeys \eqref{e:y=e^mt} if and only if $\tilde{y}$ is a fixed point of $\mathscr{F}$. Choosing $T$ negative enough so that $\|v\|_{L^1 (]-\infty, T])}\leq m$ we see immediately that $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and it thus has a unique fixed point. We have thus shown existence and uniqueness of the solution in question.
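Indeed, the contraction property is immediate from the choice of $T$: for $\tilde{y}_1, \tilde{y}_2\in L^\infty (]-\infty, T])$,
\[
\|\mathscr{F} (\tilde{y}_1) - \mathscr{F} (\tilde{y}_2)\|_{L^\infty (]-\infty, T])}
\leq \frac{1}{2m}\, \|v\|_{L^1 (]-\infty, T])}\, \|\tilde{y}_1 - \tilde{y}_2\|_{L^\infty (]-\infty, T])}
\leq \frac{1}{2}\, \|\tilde{y}_1 - \tilde{y}_2\|_{L^\infty (]-\infty, T])}\, ,
\]
where we used $|1-e^{-2m (t-s)}|\leq 1$ for $s\leq t$.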
Observe now that $z(t) = \tilde{y} (t) -1$ and set
\[
Z(t) := \exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\, .
\]
$Z$ solves the ODE $Z' = \frac{|v|}{2m} Z + \frac{|v|}{2m}$ and, since $\lim_{t\to-\infty} Z(t) =0$, it also satisfies the integral equation
\[
Z (t) = \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\, .
\]
We first want to show that $|z(t)|\leq Z(t)$ on $]-\infty, T]$. We set $\tilde{y}_0 := Z+1$ and define inductively $\tilde{y}_{i+1} = \mathscr{F} (\tilde{y}_i)$. From the above discussion we know that $\tilde{y}_i$ converges uniformly to $\tilde{y}$ and it suffices thus to show that $|\tilde{y}_i -1| \leq Z$ for all $i$.
By definition we have $|\tilde{y}_0-1| = Z$ and thus we need to show the inductive step. We estimate
\begin{align*}
|\tilde{y}_{i+1} (t) -1| &\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| |\tilde{y}_i (s)|\, ds\\
&\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds = Z(t)\, ,
\end{align*}
We have shown \eqref{e:est-z} on $]-\infty, T]$. In order to extend the inequality to the whole real axis observe first that we can assume, without loss of generality, that $\|v\|_{L^1 (\mathbb R)}>0$, otherwise we trivially have $|\tilde{y} (t)-1| = Z(t) =0$ for all $t$. In particular we can select $T$ so that all of the above holds and at the same time $\|v\|_{L^1 (]-\infty, T])}>0$. This implies $Z(T)>0$. Moreover, by \eqref{e:fixed-point} and $\mathscr{F} (\tilde{y})= \tilde{y}$, either
\begin{align*}
|\tilde{y} (T) -1| &< \frac{1}{2m} \int_{-\infty}^{T} |v(s)| |\tilde{y} (s)|\, ds
\end{align*}
or $|v||\tilde{y}|$ vanishes identically on $]-\infty, T]$. In both cases we conclude $|\tilde{y} (T)-1|< Z(T)$. Consider now $\sup \{t\geq T: |\tilde{y} (t)-1|< Z (t)\}$. Such supremum cannot be a finite number $T_0$ because in that case we would have $|\tilde{y} (T_0)-1| = Z(T_0)$ while the same argument leading to the strict inequality $|\tilde{y} (T)-1|< Z(T)$ implies $|\tilde{y} (T_0)-1|< Z (T_0)$.
Having shown \eqref{e:est-z} we now come to \eqref{e:est-z'}. Recalling \eqref{e:tildey'} we have
\begin{align*}
|z'(t)| &= \left|\int_{-\infty}^t e^{-2m (t-s)} v (s) (z(s)+1)\, ds\right|\\
&\leq \int_{-\infty}^t e^{-2m (t-s)} |v (s)| Z(s)\, ds + \int_{-\infty}^t e^{-2m (t-s)} |v (s)|\, ds
\leq 2m Z(t)\, .
\end{align*}
We now come to the case $c_- =0$. In that case we need to show that the unique solution is identically~$0$. Arguing as for the case $c_- =1$ we conclude that $\tilde{y}$ is a fixed point of the transformation
\[
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds\, .
\]
Again, for $T$ negative enough, $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and hence it has a unique fixed point. Since however $0$ is, trivially, a fixed point, we conclude that $\tilde{y}\equiv 0$ on $]-\infty, T]$. Standard ODE theory implies then that $y$ vanishes identically on the whole $\mathbb R$.
\end{proof}
\section{Proof of Proposition \ref{p:3+4}}\label{s:3+4}
We start by showing the last statement of the proposition, namely:
\begin{itemize}
\item[(A)] For $z = \Xi (a)$ and under the assumption that $\lambda_a>1$, the unique $m$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$.
\end{itemize}
{\color{red} Before coming to its proof we also observe that the same argument applies with $b$ in place of $a$.}
First of all observe that, for $z=\Xi (a)$, the equation \eqref{e:eigenvalue-equation-3}, which becomes
\begin{equation}\label{e:eigenvalue-equation-again}
-\frac{d^2\varphi}{dt^2} + m^2 \varphi + \frac{A}{\Xi-\Xi (a)} \varphi = 0,
\end{equation}
has nontrivial solutions $\varphi\in W^{2,2}_{\text{loc}}\cap L^2 (\mathbb R; \mathbb C)$ if and only if it has a nontrivial solution $\varphi \in W^{2,2}_{\text{loc}} \cap L^2 (\mathbb R;\mathbb R)$. That the equation has a nontrivial solution when $m=\sqrt{\lambda_a}$ follows from the classical theory of self-adjoint operators. We therefore only need to show that the existence of a nontrivial solution is only possible for a single $m\geq 1$. Arguing by contradiction assume there are two, $1\leq m_1< m_2$, and denote by $\psi_1$ and $\psi_2$ the respective solutions. Then there is a nontrivial linear combination
\[
\psi = C_1 \psi_1 + C_2 \psi_2
\]
which vanishes at $a$. Observe that $\psi_1$ and $\psi_2$ can be interpreted as eigenfunctions of the self-adjoint operator $-\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t)-\Xi (a)} $ relative to distinct eigenvalues and they are, therefore, $L^2$ orthogonal. Summing the equations, multiplying by $\psi$ and integrating by parts we achieve
\begin{equation}\label{e:tested}
\underbrace{\int \left((\psi')^2 +\frac{A}{\Xi-\Xi (a)} \psi^2\right)}_{=:I} = - C_1^2 m_1^2 \int \psi_1^2 - C_2^2 m_2^2 \int \psi_2^2\, .
\end{equation}
Recalling that $A = \Xi'' + 2\Xi' = (\Xi' + 2\Xi)'$, we wish to integrate by parts the second integrand in the left-hand side. Observe that, because $\psi$ vanishes at $a$ and $\Xi' (a) \neq 0$, the function $\frac{\psi^2}{\Xi - \Xi (a)}$ is in fact continuously differentiable. In particular we can write
\[
\int\frac{A}{\Xi-\Xi (a)} \psi^2 = \int \left(\frac{\Xi'+2\Xi}{(\Xi-\Xi(a))^2} \Xi' \psi^2 - 2\frac{\Xi'+2\Xi}{\Xi-\Xi (a)}\psi\psi'\right)\, .
\]
Substituting it into I, we achieve
\begin{align*}
I &= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + \int \left(\frac{2\Xi\Xi'\psi^2}{(\Xi-\Xi (a))^2} - \frac{4\Xi\psi\psi'}{\Xi-\Xi (a)} \right)\\
&= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + 2 \int \frac{\Xi'}{\Xi-\Xi (a)} \psi^2\, ,
\end{align*}
where to reach the second line we have written the first term in the second integral as
\[
- 2 \Xi \frac{d}{dt} \left(\frac{1}{\Xi-\Xi (a)}\right) \psi^2
\]
and integrated it by parts. Again thanks to the fact that $\psi$ vanishes at $a$ we can write it as $\psi= (\Xi-\Xi (a)) \eta$ and hence conclude
\begin{align*}
I &= \int ((\Xi - \Xi (a))\eta')^2 + \int 2 (\Xi-\Xi (a))\Xi' \eta^2 = \int ((\Xi - \Xi (a))\eta')^2 - 2 \int (\Xi-\Xi (a))^2 \eta\eta'\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (\Xi-\Xi(a))^2 \eta^2\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (C_1^2\psi_1^2 + C_2^2\psi_2^2)\, .
\end{align*}
Inserting the latter in \eqref{e:tested} we conclude
\[
\int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 = - C_1^2 (m_1^2-1) \int \psi_1^2 - C_2^2 (m_2^2-1) \int \psi_2^2\, .
\]
Observe that, since $m_2>1$ and $\psi_2$ is nontrivial, we conclude that $C_2=0$. This would then imply that $\psi = C_1 \psi_1$ and we can thus assume $C_1=1$ in all our computations. In particular $\eta'=\eta$, which implies $\eta (t) = C e^t$. We can now write $\psi_1 (t) = (\Xi (t)-\Xi(a)) \eta (t)$ and given the properties of $\Xi (t)$ we easily see that this would violate the decay at $+\infty$ that we know for $\psi_1$ from Corollary \ref{c:decay}.
{\color{red}
\begin{remark}\label{r:phi(a)-nonzero}
We record here a consequence of the above argument: a nontrivial solution $\varphi$ of \eqref{e:eigenvalue-equation-again} necessarily satisfies $\varphi (a) \neq 0$ (and thus it must be unique up to constant factors).
\end{remark}
}
\medskip
We next show that
\begin{itemize}
\item[(B)] If $(m_0,z)\in \overline{\mathscr{P}}$, $m_0\geq 1$ and $z\in \mathbb R$, then $z$ is in the closure of the range of $\Xi$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $z\in \mathbb R\setminus \overline{\Xi (\mathbb R)}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-5}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
By Corollary \ref{c:decay} we can normalize our functions $\psi_j$ so that $\psi_j (t) e^{-m_jt} \to 1$ as $t\to-\infty$ and $\psi_j (t) e^{m_jt} \to C_j\neq 0$ as $t\to\infty$. Observe also that there is a positive constant $c_0$ such that $|\Xi-z_j|\geq c_0$ for all $j$ sufficiently large, thanks to (ii). In particular, the functions $\frac{A}{\Xi-z_j}$ are uniformly bounded in $L^1$. By Lemma \ref{l:ODE2} there is a positive $T_0\geq b+1$, independent of $j$ such that
\begin{equation}\label{e:uniform-exp}
\left|\psi_j (t) - C_j e^{-m_j t}\right| \leq \frac{|C_j|}{2} e^{- m_j t} \qquad \forall t \geq T_0\, ,
\end{equation}
and there is a constant $C$, independent of $j$ such that
\begin{equation}\label{e:uniform-inside}
\|\psi_j\|_{L^\infty ([a,b])} \leq C\, .
\end{equation}
Next multiply \eqref{e:eigenvalue-5} by $\bar \psi_j$, integrate in $t$ and take the imaginary part of the resulting equality to conclude
\begin{equation}\label{e:imaginary-trick}
\int \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 = 0\, .
\end{equation}
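For completeness, the computation behind \eqref{e:imaginary-trick} presumably runs as follows (a sketch): multiplying \eqref{e:eigenvalue-5} by $\bar \psi_j$ and integrating by parts (the boundary terms vanish thanks to the exponential decay of $\psi_j$ and $\psi_j'$, cf.\ Corollary \ref{c:decay} and Lemma \ref{l:ODE2}),
\[
\int |\psi_j'|^2 + m_j^2 \int |\psi_j|^2 + \int \frac{A}{\Xi - z_j} |\psi_j|^2 = 0\, .
\]
The first two integrals are real, while ${\rm Im}\, \frac{1}{\Xi - z_j} = \frac{{\rm Im}\, z_j}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$; taking imaginary parts and dividing by ${\rm Im}\, z_j>0$ yields \eqref{e:imaginary-trick}.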
We now break the integral into three integrals on the regions $]-\infty, a[$, $]a,b[$, and $]b, \infty[$, where the function $A$ is, respectively, negative, positive, and negative. This gives
\[
-\int_{T_0}^{2T_0} \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq
\int_a^b \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2
\]
Now, the right-hand side of the inequality can be bounded uniformly independently of $j$ by \eqref{e:uniform-inside} and (ii). On the other hand the function $\frac{-A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$ is larger than a positive constant $c$ independent of $j$ on $[T_0, 2 T_0]$. Using \eqref{e:uniform-exp} we can achieve a uniform bound $|C_j|\leq C$ for the constants $C_j$. The latter bound, combined with the estimates of Lemma \ref{l:ODE2} and the uniform bound on $\|\frac{A}{\Xi-z_j}\|_{L^1}$, easily implies that $\psi_j$ is precompact in $L^2$. We can thus extract a subsequence, not relabeled, converging to a nontrivial $L^2$ solution $\psi$ of
\begin{equation}\label{e:eigenvalue-equation-6}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\end{equation}
Without loss of generality we assume that $\psi$ is real valued, since $z$ is real. We can thus multiply \eqref{e:eigenvalue-equation-6} by $\psi$ and integrate to achieve
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \frac{\Xi''+2\Xi'}{\Xi- z} \psi^2 = 0\, .
\]
Integrating by parts $\int \frac{\Xi''}{\Xi-z} \psi^2$ we find
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \left(\frac{(\Xi')^2}{(\Xi-z)^2} \psi^2 - 2 \frac{\Xi'}{\Xi-z} \psi' \psi\right) +
\int \frac{2\Xi'}{\Xi-z} \psi^2 = 0 \, ,
\]
which we can rewrite as
\begin{equation}\label{e:energy-trick}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-z} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-z} \psi^2 = 0\, .
\end{equation}
As already done in the previous paragraphs we set $\eta = \frac{\psi}{\Xi-z}$ and write the identity as
\[
\int \left((\Xi-z)^2 (\eta')^2 + m_0^2 (\Xi-z)^2 \eta^2 + 2 \Xi' (\Xi-z) \eta^2\right) = 0
\]
Integrating by parts the last term we find
\[
\int (\Xi-z)^2 (\eta'-\eta)^2 + \int (m_0^2-1) (\Xi-z)^2 \eta^2 = 0\, .
\]
We thus conclude that $m_0=1$ and $\eta'=\eta$, i.e. $\eta (t) = C e^t$, but again we see that this would violate $\psi\in L^2$.
\medskip
We next employ a suitable variation of the latter argument to show that
\begin{itemize}
\item[(C)] $(m_0, 0)$ and $(m_0, \Xi (-\infty))$ do not belong to $\overline{\mathscr{P}}$ if $m_0\geq 1$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $0$ or to $\Xi (- \infty)$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-7}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
We first focus on the case $z_j\to 0$. Normalize again the solutions so that $\psi_j (t)$ is asymptotic to $e^{m_j t}$ for $t$ negative, and to $C_j e^{-m_j t}$ for $t$ positive.
Observe that in this case we have $\frac{A}{\Xi}\in L^1 (]-\infty, N])$ for every $N$, while $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on any $]-\infty, N]$. We can thus apply Lemma \ref{l:ODE2} and conclude that the $\psi_j$ can be assumed to converge uniformly to a function $\psi$ on $]-\infty, N]$ for every $N$ and that likewise $\psi (t)$ is asymptotic to $e^{m_0 t}$ for $t$ negative.
As done previously we multiply the equation \eqref{e:eigenvalue-equation-7} by $\bar\psi_j$, integrate, and take the imaginary part. In particular we gain the inequality
\[
\int_b^\infty \frac{A}{(\Xi- {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq - \int_a^b \frac{A}{(\Xi- {\rm Re}\, z_j)^2+ ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Since $z_j\to 0$ and the range of $\Xi$ on $[a,b]$ is bounded away from $0$, we conclude that the right-hand side is uniformly bounded. In particular, passing to the limit we conclude that
\begin{equation}\label{e:info-L^1}
\Xi^{-2} A |\psi|^2 \in L^1 ([b, \infty[)\, .
\end{equation}
Observe however that
\[
\lim_{t\to\infty} \frac{A(t)}{\Xi (t)} = \lim_{t\to \infty} \frac{-{\bar\alpha} e^{-{\bar\alpha} t}}{c_1 e^{-2t}+\frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}} = - {\bar\alpha} (2-{\bar\alpha})\, .
\]
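For the reader's convenience we record the elementary computation behind the numerator in the last limit: by Definition \ref{d:class-C}(ii) and \eqref{e:A-Xi}, for $t\geq \ln 2$,
\[
\Xi' (t) = -2 c_1 e^{-2t} - \frac{{\bar\alpha}}{2-{\bar\alpha}}\, e^{-{\bar\alpha} t}\, , \qquad
\Xi'' (t) = 4 c_1 e^{-2t} + \frac{{\bar\alpha}^2}{2-{\bar\alpha}}\, e^{-{\bar\alpha} t}\, ,
\]
so that
\[
A (t) = \Xi'' (t) + 2 \Xi' (t) = \frac{{\bar\alpha}^2 - 2{\bar\alpha}}{2-{\bar\alpha}}\, e^{-{\bar\alpha} t} = -{\bar\alpha}\, e^{-{\bar\alpha} t}\, .
\]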
In particular we conclude that $\psi\in L^2$. Moreover, we can write
\[
\frac{A}{\Xi} = -{\bar\alpha} (2-{\bar\alpha}) + B
\]
for a function $B$ which belongs to $L^1 ([T, \infty[)$ for every $T$. We thus have that
\[
-\frac{d^2\psi}{dt^2} + (m_0^2 - {\bar\alpha} (2-{\bar\alpha})) \psi + B \psi = 0\, .
\]
Recalling that $0<{\bar\alpha} <1$ and $m_0\geq 1$, we have $m_0^2 - {\bar\alpha} (2-{\bar\alpha})>0$ and we can therefore apply Lemma \ref{l:ODE2} to conclude that, for $\bar m := \sqrt{m_0^2 - {\bar\alpha} (2-{\bar\alpha})}$
\[
\lim_{t\to \infty} e^{\bar m t} \psi (t)
\]
exists and is finite and nonzero. Observe however that \eqref{e:info-L^1} forces $e^{{\bar\alpha} t} |\psi|^2\in L^1$, which in particular implies that $\bar m > \frac{{\bar\alpha}}{2}$.
We next argue as in the derivation of \eqref{e:energy-trick} to get
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi} \psi^2 = 0\, .
\]
We again set $\psi= \Xi \eta$ and observe that, by our considerations, $\eta$ decays exponentially at $-\infty$, while it is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$. We rewrite the latter identity as
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) = 0\, .
\]
We wish to integrate by parts the latter term to find
\begin{equation}\label{e:da-giustificare}
\int (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2)=0\, .
\end{equation}
Since we have exponential decay of $\eta$ at $-\infty$, while at $+\infty$ $\eta$ might grow, the latter integration by parts needs some careful justification. First of all we notice that $\Xi \Xi' \eta^2$ decays exponentially at $+\infty$ and thus, since the other two integrands are positive, we can write
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\lim_{N\to\infty} \int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) \, .
\]
Next, we can integrate by parts the second integrand (before passing to the limit) to write
\[
\int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\int_{-\infty}^N (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2) + \Xi^2 (N) \eta^2 (N)\, .
\]
Since $\Xi (N) \eta (N)$ converges to $0$ exponentially, passing to the limit we conclude \eqref{e:da-giustificare}.
As before this would imply $m_0=1$ and $\eta (t) = C e^t$. On the other hand we have already argued that $\eta$ is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$, and ${\bar\alpha} - \bar m < 1$ because $\bar m > \frac{{\bar\alpha}}{2}$; hence necessarily $C=0$, which would force $\psi\equiv 0$, contradicting the normalization of $\psi$ at $-\infty$.
We next tackle the case $z_j \to \Xi (-\infty)$. This time we observe that $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on $[T, \infty[$ for every $T$ and we thus normalize the functions $\psi_j$ so that $\psi_j (t)$ is asymptotic to $e^{-m_j t}$ for $t\to \infty$. Arguing as above, we assume that $\psi_j$ converges uniformly on all $[T, \infty[$ to a $\psi$ which is asymptotic to $e^{-m_0 t}$ and solves
\begin{equation}\label{e:eigenvalue-equation-9}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (-\infty)} \psi=0\, .
\end{equation}
As above we can assume that $\psi$ is real valued. Moreover, this time we infer (with the same method used to prove \eqref{e:info-L^1})
\begin{equation}\label{e:info-L1-2}
(\Xi-\Xi (-\infty))^{-2} A \psi^2 \in L^1 (\mathbb R)
\end{equation}
This time observe that, for $t$ sufficiently negative, $\frac{A(t)}{\Xi (t)- \Xi (-\infty)} = 8$. In particular we can explicitly solve the equation as
\[
\psi (t) = C_1 e^{-t\sqrt{m_0^2+8}} + C_2 e^{t\sqrt{m_0^2+8}}
\]
when $t$ is sufficiently negative.
However, if $C_1$ were different from $0$, \eqref{e:info-L1-2} would not hold. In particular we infer exponential decay at $-\infty$. We can now argue as for the case $z_j\to 0$: we multiply \eqref{e:eigenvalue-equation-9} by $\psi$, integrate in time and perform an integration by parts to infer
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi - \Xi (-\infty)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (-\infty)} \psi^2 = 0\, .
\]
We then introduce $\eta$ so that $\psi = (\Xi-\Xi (-\infty)) \eta$. This time we infer exponential decay for $\eta$ at both $\infty$ and $-\infty$. Arguing as above we rewrite the last identity as
\[
\int ((\Xi- \Xi (-\infty))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (-\infty))^2 \eta^2)=0\, ,
\]
reaching again a contradiction.
\medskip
In order to complete the proof of the proposition we need to show
\begin{itemize}
\item[(D)] If $(m_0, \Xi (c)) \in \overline{\mathscr{P}}$ {\color{red} and $m_0> 1$}, then either $c=a$ or $c=b$ {\color{red} and moreover we have, respectively, $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$}.
\end{itemize}
As before we argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in ]1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $\Xi (c)$ for some $c\not\in \{a,b\}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-10}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
This time we normalize the $\psi_j$'s so that
\begin{equation}\label{e:L2-normalization}
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) =1\, .
\end{equation}
By Lemma \ref{l:ODE2} we know that $\psi_j (t)$ is asymptotic to $\rho_j^\pm e^{\mp m_j t}$ for $t\to \pm \infty$, where $\rho_j^\pm \in \mathbb C \setminus \{0\}$. Since $\Xi (c)$ has a positive distance from both $0$ and $\Xi (-\infty)$, we can apply Lemma \ref{l:ODE2} to achieve uniform times $T_\pm$ with the properties that
\begin{align}
\left|\psi_j (t) - \rho_j^+ e^{-m_j t}\right|& \leq \frac{|\rho_j^+|}{2} e^{-m_j t} \qquad\qquad \forall t\geq T_+\, ,\label{e:exp-bound-1}\\
\left|\psi_j (t) - \rho_j^- e^{m_j t}\right| &\leq \frac{|\rho_j^-|}{2} e^{m_j t} \qquad\qquad \forall t\leq T_-\, .\label{e:exp-bound-2}
\end{align}
Combining the latter inequalities with \eqref{e:L2-normalization} we conclude that $\sup_j |\rho_j^\pm| < \infty$, and in particular $\{\psi_j\}_j$ is tight in $L^2$, i.e. for every $\varepsilon >0$ there is $N = N (\varepsilon)$ such that
\[
\sup_j \int_{|t|\geq N} |\psi_j|^2 < \varepsilon\, .
\]
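For instance, the uniform bound on $\rho_j^+$ can be obtained as follows (the argument for $\rho_j^-$ is symmetric): by \eqref{e:exp-bound-1} we have $|\psi_j (t)|\geq \frac{|\rho_j^+|}{2} e^{-m_j t}$ for $t\geq T_+$, and hence \eqref{e:L2-normalization} gives
\[
1 \geq \int_{T_+}^\infty m_j^2 |\psi_j|^2 \geq \frac{m_j^2 |\rho_j^+|^2}{4} \int_{T_+}^\infty e^{-2 m_j t}\, dt = \frac{m_j |\rho_j^+|^2}{8}\, e^{-2 m_j T_+}\, ,
\]
which bounds $|\rho_j^+|$ uniformly because $m_j \to m_0 \in ]1, \infty[$. The tightness then follows by integrating the exponential tails in \eqref{e:exp-bound-1} and \eqref{e:exp-bound-2} for $N$ larger than $\max\{T_+, -T_-\}$.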
The latter bound combined with \eqref{e:L2-normalization} implies, up to extraction of a subsequence which we do not relabel, the strong $L^2$ convergence of $\psi_j$ to a function $\psi$. Thanks to Sobolev embedding, the convergence is uniform on any compact set and, moreover, $\psi\in C^{1/2}$.
Arguing as for \eqref{e:imaginary-trick} we infer
\begin{equation}\label{e:imaginary-trick-2}
\int \frac{A}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 =0
\end{equation}
The latter bound implies $\psi (c)=0$. In fact first we
observe that $\frac{A}{|\Xi-z_j|^2} |\psi_j|^2$ converges in $L^1$ on $\mathbb R \setminus ]c-\delta, c+\delta[$ for every $\delta$. Choosing $\delta>0$ so that $|A (t) - A(c)| \leq \frac{|A(c)|}{2}$ for $t\in [c-\delta, c+\delta]$ and recalling that $|A(c)|>0$, we easily infer that
\[
\sup_j \int_{c-h}^{c+h} \frac{|\psi_j|^2}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty \qquad \forall h < \delta\, .
\]
If $\psi (c)$ were different from $0$, we could select a positive $h< \delta$ and a positive $c_0$ with the property that $|\psi (t)|^2 \geq 2c_0$ for all $t\in [c-h, c+h]$. In particular, for $j$ large enough we infer $|\psi_j (t)|^2 \geq c_0$ for all $t\in [c-h, c+h]$. But then we would conclude
\[
\sup_j \int_{c-h}^{c+h} \frac{1}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty\, .
\]
Since the denominator converges to $(\Xi - \Xi (c))^2$, this is clearly not possible.
We now wish to pass in the limit in \eqref{e:eigenvalue-equation-10} to derive that
\begin{equation}\label{e:eigenvalue-equation-11}
- \psi'' + m_0^2 \psi + \frac{A}{\Xi-\Xi (c)} \psi =0\, ,
\end{equation}
where we notice that, thanks to $\psi (c)=0$ and the H\"older regularity of $\psi$, the function $\frac{A}{\Xi-\Xi (c)} \psi$ is indeed in $L^p$ for every $p<2$. We thus understand the equation distributionally.
The equation clearly passes to the limit outside the singularity $c$ of the denominator and thus we just need to pass it to the limit distributionally in some interval $]c-h,c+h[$. We write the third term as
\begin{align*}
\frac{A}{\Xi-z_j} \psi_j &= \left(\frac{d}{dt} \ln (\Xi-z_j) \right)\frac{A}{\Xi'} \psi_j\\
&= \frac{d}{dt} \left(\ln (\Xi-z_j) \frac{A}{\Xi'} \psi_j\right) - \ln (\Xi-z_j)\frac{A}{\Xi'} \psi'_j - \ln (\Xi-z_j) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi_j\, .
\end{align*}
Observe that we can define the logarithm unequivocally because $\Xi$ is real valued and ${\rm Im}\, z_j >0$.
Next, we remark that:
\begin{itemize}
\item[(i)] $\frac{A}{\Xi'}$ is smooth in $]c-h, c+h[$;
\item[(ii)] $\ln (\Xi-z_j)$ converges strongly \footnote{Since $\ln (\Xi-z_j)$ converges uniformly to $\ln (\Xi-\Xi(c))$ on any compact set which does not contain $c$, in order to reach the conclusion it suffices to prove a uniform $L^q$ bound on the functions, for every $q<\infty$. This can be easily concluded as follows. Choose an interval $[c-h, c+h]$ and recall that $\Xi$ does not change sign on it. For each $j$ large enough we then find a unique $c_j \in [c-h, c+h]$ such that $\Xi (c_j) = {\rm Re}\, z_j$. Using the mean value theorem we easily conclude that $|\Xi (t)-z_j|\geq |\Xi (t) - \Xi (c_j)|
\geq C^{-1} |t-c_j|$ for every $t\in [c-h, c+h]$, where $C^{-1} = \min \{|\Xi'(t)|: c-h\leq t \leq c+h\}$.} to $\ln (\Xi-\Xi(c))$ in $L^q (]c-h, c+h[)$ for every $q<\infty$;
\item[(iii)] $\psi_j' \to \psi'$ weakly in $L^2$, while $\psi_j\to \psi$ uniformly.
\end{itemize}
We thus conclude that $\frac{A}{\Xi-z_j} \psi_j$ converges distributionally to
\[
\frac{d}{dt} \left(\ln (\Xi-\Xi (c)) \frac{A}{\Xi'} \psi\right) - \ln (\Xi-\Xi(c)) \frac{A}{\Xi'} \psi' - \ln (\Xi-\Xi(c)) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi\, .
\]
Using now that $\psi\in W^{1,2}$ and $\psi (c)=0$ we can rewrite the latter distribution as
\[
\frac{A}{\Xi-\Xi (c)} \psi
\]
and hence conclude the validity of \eqref{e:eigenvalue-equation-11}.
Observe next that from \eqref{e:eigenvalue-equation-11} we infer $\psi''\in L^p$ for every $p< 2$, which in turn implies that $\psi$ is indeed $C^{1,\kappa}_{\text{loc}}$ for every $\kappa < \frac{1}{2}$. In turn this implies that $\frac{A}{\Xi-\Xi (c)} \psi$ is continuous at $c$, so that in particular $\psi$ is twice differentiable. We thus can argue as for the derivation of \eqref{e:energy-trick} and get
\begin{equation}\label{e:energy-trick-4}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-\Xi (c)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (c)} \psi^2 = 0\, .
\end{equation}
Once again we can set $\psi = (\Xi-\Xi (c)) \eta$ and observe that $\eta\in W^{1,2}$, to rewrite the latter identity as
\[
\int ((\Xi- \Xi (c))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (c))^2 \eta^2)=0\, ,
\]
inferring that $\eta=0$.
We thus have concluded that $\psi$ vanishes identically, but this is not yet a contradiction since the normalization \eqref{e:L2-normalization} and the strong $L^2$ convergence do not ensure that $\psi$ is nontrivial. In order to complete our argument, note first that, by the monotonicity of $\Xi$, for each $j$ large enough there is a unique $c_j$ such that $\Xi (c_j) = {\rm Re}\, z_j$. We then multiply the equation \eqref{e:eigenvalue-equation-10} by $\bar \psi_j - \overline{\psi_j (c_j)}$ to obtain
\[
\int \left(|\psi_j'|^2 + m_j^2 \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) + \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})\right) = 0\, .
\]
Note that $c_j$ must converge to $c$ and that the integrals
\[
\int \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\]
converge to $0$ because $\psi_j - \psi_j (c_j)$ converges to $0$ uniformly and, thanks to the uniform exponential decay of the $\psi_j$'s, the latter are uniformly bounded in $L^1$. For the same reason the first integral in the sum
\begin{equation}
\label{e:up}\int_{|t-c|\geq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) +\int_{|t-c|\leq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\end{equation}
converges to $0$ for every fixed $h$. On the other hand, $|\frac{A (t)}{\Xi (t)-z_j}| |\psi_j (t) - \psi_j (c_j)|\leq C |t-c_j|^{-1/2}$, and thus the second integral in \eqref{e:up} is bounded by $C h^{1/2}$ uniformly in $j$ (here we also use that $\sup_j \|\psi_j\|_\infty <\infty$, which follows from \eqref{e:L2-normalization} and the Sobolev embedding). Letting first $j\to \infty$ and then $h\downarrow 0$, we conclude that the $L^2$ norm of $\psi'_j$ converges to $0$. This however contradicts the normalization \eqref{e:L2-normalization}.
\section{Proof of Proposition \ref{p:5-7}: Part I}
We set $m_0=m_a$, $z_0 = \Xi (a)$, and
we fix a $\psi_0$ solution of
\[
-\frac{d^2\psi_0}{dt^2} + m_0^2 \psi_0 + \frac{A}{\Xi-z_0} \psi_0 = 0
\]
with $L^2$ norm equal to $1$. Since the operator is self-adjoint we will indeed assume that $\psi_0$ is real. We then define the projector $P_0: L^2 (\mathbb R; \mathbb C) \to \{\kappa \psi_0:\kappa \in \mathbb C\}$ as
\[
P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0\, .
\]
Observe that $P_0$ is self-adjoint.
Next,
in a neighborhood of $(m_0, z_0)$ we will look for solutions of \eqref{e:eigenvalue-equation-3} by solving
\begin{equation}\label{e:Lagrange}
\left\{
\begin{array}{l}
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi + P_0 (\psi) = \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
which we can rewrite as
\begin{equation}\label{e:Lagrange-2}
\left\{
\begin{array}{l}
-\psi'' + m_0^2 \psi + \frac{A}{\Xi-z_0} \psi + P_0 (\psi) = A \left(((\Xi-z_0)^{-1} - (\Xi-z)^{-1}) \psi\right) + (m_0^2-m^2)\psi + \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
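Let us record why \eqref{e:Lagrange} is relevant: if $\psi$ solves \eqref{e:Lagrange} (equivalently \eqref{e:Lagrange-2}), then the constraint $\langle \psi, \psi_0\rangle =1$ gives $P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0 = \psi_0$, so that the first equation reduces to
\[
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi = 0\, ,
\]
namely $\psi$ is a (nontrivial, since $\langle \psi, \psi_0\rangle =1$) solution of the eigenvalue equation \eqref{e:eigenvalue-equation-3}.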
Next we observe that the operator $-\frac{d^2}{dt^2} + m_0^2$, considered as a closed unbounded self-adjoint operator in $L^2$ (with domain $W^{2,2}$) has an inverse $\mathcal{K}_{m_0}:L^2 \to L^2$ which is a bounded operator. We thus rewrite \eqref{e:Lagrange-2} as
\begin{equation}\label{e:Lagrange-3}
\left\{
\begin{array}{ll}
\underbrace{\psi + \mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_0} \psi + P_0 (\psi)\right)}_{=: T (\psi)}\\
\qquad\qquad \qquad= \underbrace{\mathcal{K}_{m_0}
\left(\left(A \left((\Xi-z_0)^{-1} - (\Xi-z)^{-1}\right) + (m_0^2 -m^2)\right) \psi\right)}_{=:- \mathcal{R}_{m,z} (\psi)} +
\mathcal{K}_{m_0} (\psi_0)\\ \\
\langle \psi, \psi_0 \rangle =1\, .
\end{array}
\right.
\end{equation}
The proof of Proposition \ref{p:5-7} will then be broken into two pieces. In this section we will show the first part, which we can summarize in the following
\begin{lemma}\label{l:solve-for-psi}
For every $\mu>0$, if $(m,z)$ is sufficiently close to $(m_0, z_0)$ and ${\rm Im}\, z\geq \mu |{\rm Re}\, (z-z_0)|$, then there is a unique $\psi= \psi (m,z) \in L^2 (\mathbb R)$ solving
\begin{equation}\label{e:solve-for-psi}
T (\psi) + \mathcal{R}_{m,z} (\psi) = \mathcal{K}_{m_0} (\psi_0)\, .
\end{equation}
\end{lemma}
Before coming to its proof we single out two important ingredients.
\begin{lemma}\label{l:invert-T}
$T$ is a bounded operator with bounded inverse on the spaces $L^2$ and $C^\sigma$, for any $\sigma \in ]0,1[$.
\end{lemma}
\begin{proof} Recall that the operator $\mathcal{K}_m$ is given by the convolution with $\frac 1{2m} e^{-m|\cdot|}$. We prove that $T$ is a bounded operator with bounded inverse in the spaces $L^2 (\mathbb R)$ and $C^\sigma (\mathbb R)$.\footnote{Observe that $\mathcal{K}_m$ is well-defined on $C^\sigma$, and so are the multiplication by $\frac{A}{\Xi-z_0}$, since the latter is a smooth function with bounded derivatives, and the operator $P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0$: for the latter we just need to check that $\psi\overline{\psi_0}$ is integrable, which follows from the exponential decay of $\psi_0$, cf.\ Corollary \ref{c:decay}.}
Recall that $\frac{A}{\Xi-z_0}= \frac{A}{\Xi-\Xi (a)}$ is indeed a bounded smooth function (thanks to the structural assumptions on $\Xi$: in particular recall that $\Xi' (a)\neq 0$ and $A(a) =0$, which implies that $\frac{A}{\Xi-\Xi(a)}$ is in fact smooth at $a$). Moreover the function and its derivatives decay exponentially at $\pm \infty$. It follows therefore that $\psi \mapsto \mathcal{K}_{m_0} (\frac{A}{\Xi-z_0} \psi + P_0 (\psi))$ is a compact operator, both on $L^2$ and on $C^\sigma$. Thus $T$ is a Fredholm operator with index $0$. We thus just need to check that the kernel is $0$ in order to conclude that it is invertible with bounded inverse. In both cases we need to show that the equation
\begin{equation}\label{e:kernel-T}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi) = 0
\end{equation}
has only the trivial solution. Observe that the kernel $V$ of the operator $\psi \mapsto -\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi$ is $1$-dimensional by Lemma \ref{l:ODE2} and Corollary \ref{c:decay}. In particular $V$ is generated by $\psi_0$. Since the operator $P_0$ is the orthogonal projection onto $V$ and $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ is self-adjoint, the kernel of $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)} + P_0$ in $L^2$ must be trivial.
In order to argue that the kernel is $0$ on $C^\sigma$ we apply a variation of the same idea: first we observe that if $\psi$ is a $C^\sigma$ solution of \eqref{e:kernel-T}, then $\frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi)$ is also in $C^\sigma$ and hence $\psi''\in C^\sigma$. Observe also that the operator is self-adjoint and thus we can assume that $\psi$ is real-valued. We then multiply both sides of \eqref{e:kernel-T} by $\bar \psi_0$, integrate by parts and use the fact that $\psi_0$ is in the kernel of the self-adjoint operator $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ to conclude that $(\langle \psi, \psi_0\rangle)^2 =0$. But then
$\psi$ is a bounded solution of $-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)}\psi =0$. Given that $\frac{A}{\Xi-\Xi (a)} \psi$ is a product of an exponentially decaying function and a bounded function, we conclude that $-\frac{d^2\psi}{dt^2} + m_0^2 \psi$ is an exponentially decaying function $f$. We thus have $\psi = \mathcal{K}_{m_0} (f) + C_1 e^{-m_0t} + C_2 e^{m_0t}$ for two constants $C_1$ and $C_2$. However $\mathcal{K}_{m_0} (f)$ decays exponentially at both $\pm \infty$ and thus, given that $\psi$ is bounded, we must have $C_1=C_2=0$. In particular $\psi$ decays exponentially at both $\pm \infty$ and so it is an $L^2$ function. But we already saw that every $L^2$ solution is trivial.
\end{proof}
\begin{lemma}\label{l:Rmz-small}
For every constant $\mu>0$ we define the cone $C_\mu := \{z: {\rm Im} z \geq \mu |{\rm Re}\, (z-z_0)|\}$. Then
\begin{equation}
\lim_{z\in C_\mu, (m,z)\to (m_0, z_0)} \|\mathcal{R}_{m,z}\|_O = 0\, ,
\end{equation}
where $\|L\|_O$ is the operator norm of $L$
when considered as a bounded operator from $L^2$ to $L^2$.
\end{lemma}
\begin{proof} Clearly, it suffices to show that
\begin{equation}
\lim_{z\in C_\mu, z\to z_0} \|\mathcal{K}_{m_0} \circ (A/(\Xi-z) - A/(\Xi-z_0))\|_O = 0\, .
\end{equation}
We can rewrite the operator as
\[
\psi \mapsto \mathcal{K}_{m_0} \left(\frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi\right) \, .
\]
First of all observe that the operators
\[
\psi \mapsto L_z (\psi) = \frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi
\]
are bounded in the operator norm uniformly in $z\in C_\mu$ by a constant $M$. Moreover, the adjoint operator, which is given by $L_z^* (\psi)= \frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)} \psi$, converges strongly to $0$ in $L^2$ as $z\to z_0$: indeed the functions $\frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)}$ are uniformly bounded and they converge to $0$ on $\mathbb R \setminus \{a\}$. We now use an argument entirely similar to that used in the proof of Lemma \ref{l:three}: given any $\varepsilon >0$ we fix the orthogonal projection $P_N$ onto a finite-dimensional subspace of $L^2$ with the property that $\|\mathcal{K}_{m_0}\circ P_N - \mathcal{K}_{m_0}\|_O$ is smaller than $\frac{\varepsilon}{2M}$. We then argue that for $|z-z_0|$ sufficiently small $P_N \circ L_z$ has operator norm smaller than $\frac{\varepsilon}{2}$. Having chosen an orthonormal basis $\psi_1, \ldots, \psi_N$ of the finite-dimensional subspace onto which $P_N$ projects, we recall that
\[
P_N (\psi)= \sum_i \langle \psi_i, \psi\rangle \psi_i\, .
\]
Therefore our claim amounts to showing that
\[
|\langle \psi_i, L_z (\psi)\rangle|\leq \frac{\varepsilon}{2N}
\]
for $z$ sufficiently close to $z_0$ and every $\psi$ with $\|\psi\|_{L^2}\leq 1$. For the latter we use
\[
|\langle \psi_i, L_z (\psi)\rangle| = |\langle L_z^* (\psi_i), \psi \rangle|\leq \|L_z^* (\psi_i)\|_{L^2}\, .
\]
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:solve-for-psi}]
We rewrite the equation that we want to solve as
\[
\psi + T^{-1} \circ \mathcal{R}_{m,z} (\psi) = T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\, .
\]
Note that $P_0(\psi_0)=\psi_0$. Furthermore, since $\mathcal K_{m_0}$ is, by definition, the inverse operator of $-\frac{\mathrm d^2}{\mathrm dt^2}+m_0^2\operatorname{Id}$,
\begin{equation*}
\mathcal K_{m_0}^{-1}\left(\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right)\right) = -\psi_0''+m_0^2\psi_0+\frac{A}{\Xi-z_0}\psi_0 = 0.
\end{equation*}
Therefore,
\begin{equation*}
\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right) = 0.
\end{equation*}
In combination with the definition of $T$ in \eqref{e:Lagrange-3}, we get
\begin{equation*}
T(\psi_0) = \psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0+\psi_0\right)=\mathcal K_{m_0}(\psi_0),
\end{equation*}
in other words,
\begin{equation}\label{e:T-1K}
T^{-1} \circ \mathcal{K}_{m_0} (\psi_0) = \psi_0\, .
\end{equation}
Therefore, \eqref{e:solve-for-psi} becomes
\begin{equation}\label{e:to-Neumann}
(\operatorname{ Id} + T^{-1} \circ \mathcal{R}_{m,z}) (\psi) = \psi_0\, ,
\end{equation}
so the existence of a unique solution is guaranteed as soon as $\|T^{-1} \circ \mathcal{R}_{m,z}\|_{O} < 1$, in which case $\operatorname{Id} + T^{-1} \circ \mathcal{R}_{m,z}$ can be inverted by its Neumann series. Since $\|T^{-1}\|_O$ is finite by Lemma \ref{l:invert-T} and $\|\mathcal{R}_{m,z}\|_O$ is arbitrarily small by Lemma \ref{l:Rmz-small}, the smallness condition is indeed satisfied when $z\in C_\mu$ and $(m,z)$ is sufficiently close to $(m_0, z_0)$.
\end{proof}
\begin{remark}\label{r:Neumann-series}
In the remaining part of the proof of Proposition \ref{p:5-7} we will take advantage of the representation of $\psi$ as a function of $\psi_0$ through the Neumann series coming from \eqref{e:to-Neumann}. More precisely, our proof of Lemma \ref{l:solve-for-psi} leads to the following representation:
\begin{equation}\label{e:Neumann-series}
\psi = \psi_0 - (T^{-1} \circ \mathcal{R}_{m,z}) (\psi_0) + \sum_{k=2}^\infty (-1)^k (T^{-1}\circ \mathcal{R}_{m,z})^k (\psi_0)\, .
\end{equation}
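In particular, in the regime where $\|T^{-1}\circ \mathcal{R}_{m,z}\|_O \leq \frac{1}{2}$ (which, by Lemmas \ref{l:invert-T} and \ref{l:Rmz-small}, holds for $(m,z)$ close to $(m_0, z_0)$ with $z\in C_\mu$), the tail of the series is quadratically small:
\[
\Big\|\sum_{k=2}^\infty (-1)^k (T^{-1}\circ \mathcal{R}_{m,z})^k (\psi_0)\Big\|_{L^2} \leq \sum_{k=2}^\infty \|T^{-1}\circ \mathcal{R}_{m,z}\|_O^k \leq 2\, \|T^{-1}\circ \mathcal{R}_{m,z}\|_O^2\, ,
\]
where we used $\|\psi_0\|_{L^2}=1$.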
\end{remark}
\section{Proof of Proposition \ref{p:5-7}: Part II}\label{s:5-7-part-II}
We now complete the proof of Proposition \ref{p:5-7}. The positive parameter $\mu>0$ in Lemma \ref{l:solve-for-psi} will have to be chosen sufficiently small: its choice will be specified in a few paragraphs, while for the moment we assume it to be fixed. We set $m_0 = m_a$ and $z_0 = \Xi (a)$. Thus, for each $(m_0+h ,z)$ in a set
\[
U_{\delta, \mu} := \{|h|< \delta, |z-z_0|< \delta, {\rm Im} z > \mu |{\rm Re}\, (z-z_0)|\}
\]
we know that there is a solution $\psi = \psi (m_0+h,z)$ of \eqref{e:solve-for-psi} which moreover satisfies the expansion \eqref{e:Neumann-series}.
We then define the function
\begin{equation}
H (h,z) := \langle \psi (m_0+h,z), \psi_0\rangle\, ,
\end{equation}
and obviously we are looking for those $z$ which solve
\begin{equation}\label{e:what-we-want-to-do}
H (h,z) =1
\end{equation}
The main point of our analysis is the following
\begin{lemma}\label{l:will-apply-Rouche}
The function $H$ is holomorphic in $z$ and moreover
\begin{equation}\label{e:expansion}
H (h,z) = 1 - 2m_a h + c (a) (z-z_0) + o (|z-z_0| + |h|)
\end{equation}
where $c(a)$ is a complex number with ${\rm Im}\, c(a) > 0 $.
\end{lemma}
Given Lemma \ref{l:will-apply-Rouche}, consider now $\xi (h)$ which we obtain by solving $c(a) (\xi-z_0)= 2m_a h$, namely,
\[
\xi (h) = \frac{2m_a h}{c(a)} +z_0 = \frac{2m_a h}{|c(a)|^2} \overline{c(a)} +z_0\, .
\]
The idea behind the latter definition is that, if the term $o (|z-z_0| + |h|)$ vanished identically, $z = \xi (h)$ would be the solution of $H (h,z)=1$. Even though $o (|z-z_0| + |h|)$ does not vanish, we nonetheless expect that the solution $z$ of $H (h,z)=1$ is relatively close to $\xi (h)$.
Since ${\rm Im}\, c(a)>0$, $\xi (h)$ has positive imaginary part if $h<0$. In particular we have
\[
{\rm Im}\, \xi (h) \geq \gamma |h| \qquad \forall h < 0\, ,
\]
where $\gamma$ is a positive constant: indeed, since $\xi (h) - z_0 = \frac{2m_a h}{|c(a)|^2} \overline{c(a)}$, we have ${\rm Im}\, \xi (h) = -\frac{2 m_a\, {\rm Im}\, c(a)}{|c(a)|^2}\, h$, so that one can take $\gamma = \frac{2 m_a\, {\rm Im}\, c(a)}{|c(a)|^2}$. We then rewrite
\[
H (h, z) = 1 + c (a) (z-\xi (h)) + \underbrace{o (|\xi (h)-z_0| + |h|)}_{=: r(h)} + o (|z-\xi (h)|)\, .
\]
Consider the disk $D_h := \{|z-\xi (h)| \leq 2 \beta |h|\}$, for a suitably chosen constant $\beta>0$. We will show below that, adjusting the constants $\mu$ and $\beta$ suitably, the disk lies in the domain of the holomorphic function $H (h, \cdot)$. Leaving this aside for the moment, by Rouch\'e's theorem, if we choose $h$ sufficiently small the set $H(h, D_h)$ contains a disk of radius $|c(a)|\beta |h|$ centered at $1+ r (h)$. But then for $h$ sufficiently small we also have $|r(h)| \leq \frac{|c(a)|\beta |h|}{2}$ and so we conclude that $1\in H (h, D_h)$, namely that there is a point $z (h)$ in the disk $D_h$ which is mapped to $1$ by $H (h, \cdot)$. This would then complete the proof of Proposition \ref{p:5-7} if we were able to prove that ${\rm Im}\, z (h) >0$. We therefore need to show that $D_h$ is in the domain of $H (h, \cdot)$, namely,
\[
{\rm Im}\, z \geq \mu |{\rm Re}\, (z-z_0)|\, \qquad \forall z\in D_h\, .
\]
We first estimate
\[
{\rm Im}\, z \geq {\rm Im}\, \xi (h) - 2 \beta |h| \geq (\gamma- 2 \beta) |h|\, .
\]
Then
\begin{equation}\label{e:inequality-101}
|{\rm Re}\, (z-z_0)| \leq |\xi (h)-z_0| + |z-\xi (h)| \leq \left(\frac{2m_a}{|c(a)|} + 2 \beta\right) |h|\, .
\end{equation}
We thus conclude that
\begin{equation}\label{e:inequality-102}
{\rm Im}\, z(h) \geq \frac{\gamma-2\beta}{2m_a/|c(a)|+2\beta}\, |{\rm Re}\, (z (h)-z_0)|\, .
\end{equation}
Thus it suffices to choose $\beta = \frac{\gamma}{3}$ and $\mu = \frac{\gamma |c(a)|}{6 m_a + 2\gamma |c(a)|}$. This guarantees at the same time the existence of a solution and the fact that $z (h)$ has positive imaginary part when $h<0$ (which results from combining \eqref{e:inequality-101} and \eqref{e:inequality-102}).
In order to complete the proof of Proposition \ref{p:5-7} we therefore just need to show Lemma \ref{l:will-apply-Rouche}.
\begin{proof}[Proof of Lemma \ref{l:will-apply-Rouche}]
In order to show holomorphicity we just need to show that, for each fixed $z$,
\[
z\mapsto \sum_{k=0}^\infty (- T^{-1} \circ \mathcal{R}_{m,z})^k
\]
is holomorphic. Since the series converges in the operator norm, it suffices to show that each map $z\mapsto (-T^{-1} \circ \mathcal{R}_{m,z})^k$ is holomorphic for every $k$, for which indeed it suffices to show that $z\mapsto \mathcal{R}_{m,z}$ is holomorphic. This is however obvious from the explicit formula. We therefore now come to the Taylor expansion \eqref{e:expansion}.
\medskip
{\bf Step 1} We will show here that
\begin{equation}\label{e:small-in-Csigma}
\|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)} \leq C (\sigma) (|h| + |z-z_0|)\,
\end{equation}
for every $\sigma\in ]0,1[$, where $\|L\|_{\mathcal{L} (C^\sigma)}$ is the operator norm of a bounded linear operator $L$ on $C^{\sigma}$.
The estimate will have the following consequence. First of all using \eqref{e:Neumann-series} and $\|\psi_0\|_{L^2}^2 =1$ we expand
\begin{equation}\label{e:Taylor-2}
H (h,z) = 1 - \langle T^{-1} \circ \mathcal{R}_{m_0+h,z} (\psi_0), \psi_0\rangle
+ \underbrace{\sum_{k=2}^\infty \langle (-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0), \psi_0\rangle}_{=: R_1 (z,h)}\, .
\end{equation}
Hence using \eqref{e:small-in-Csigma} we estimate
\begin{align}
|R_1 (z,h)| & \leq \sum_{k=2}^\infty \|(-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0)\|_\infty \|\psi_0\|_{L^1}\nonumber\\
&\leq C \sum_{k=2}^\infty (\|T^{-1}\|_{\mathcal{L} (C^\sigma)} \|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)})^k \|\psi_0\|_{C^\sigma}\|\psi_0\|_{L^1} = o (|h|+|z-z_0|)\, ,\label{e:resto-1}
\end{align}
for some fixed $\sigma$.
In order to show \eqref{e:small-in-Csigma} we write
\[
\mathcal{R}_{m_0+h,z} (\psi) = (z-z_0) \mathcal{K}_{m_0} \left(\frac{1}{\Xi-z} \left(\frac{A}{\Xi-z_0} \psi\right)\right) + (2m_0 h +h^2) \mathcal{K}_{m_0} (\psi)\, .
\]
Since $\frac{A}{\Xi-z_0}$ is smooth, it suffices to show that the operators $B_z:= \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ are uniformly bounded in $\mathcal{L} (C^\sigma)$. We first fix a smooth cut-off function $\varphi \in C^\infty_c (]a-2, a+2[)$ which equals $1$ on $[a-1,a+1]$ and write
\[
B_z= B_z^1 + B_z^2
:= \mathcal{K}_{m_0} \circ \left(\frac{1-\varphi}{\Xi-z}\right)+\mathcal{K}_{m_0} \circ \left(\frac{\varphi}{\Xi-z} \right)\, .
\]
But since $(1-\varphi)/(\Xi-z)$ enjoys a uniform bound in $C^k$, it is easy to conclude that $\|B^1_z\|_{\mathcal{L} (C^\sigma)}$ is bounded uniformly in $z$. We thus need to bound
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z} \psi (s)\, ds\, .
\end{align*}
We first bound $\|B^2_z (\psi)\|_{L^\infty}$. We write $z= x+iy$ and, since $x$ is close to $\Xi (a)$, we select the unique $a'$ such that $\Xi (a')=x$ and write
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s) (\psi (s)-\psi (a'))}{(\Xi (s) -\Xi (a')) - iy} \, ds}_{=: I_1 (t)}
+ \frac {\psi(a')}{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z}\, ds}_{=: I_2 (t)}
\end{align*}
Writing $\frac{1}{\Xi -z} = \frac{1}{\Xi'}\frac{d}{dt} \ln (\Xi -z)$ we can integrate by parts to get
\begin{align*}
I_2 (t) &= - \underbrace{\int m_0 \frac{t-s}{|t-s|} e^{-m_0 |t-s|} (\Xi' (s))^{-1} \ln (\Xi (s)-z) \varphi (s)\, ds}_{=:I_{2,1} (t)}\\
&\qquad -
\underbrace{\int e^{-m_0 |t-s|} \ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\, ds}_{=: I_{2,2} (t)}
\end{align*}
and use the uniform bound for $\ln (\Xi (s)-z)$ in $L^1 ([a-2,a+2])$ to conclude that $|I_{2,1}|$ and $|I_{2,2}|$ are both bounded uniformly. As for $I_1$, note that, on any compact interval $K$ around $a'$, we have, since $\Xi'$ is continuous and $\Xi'<0$,
\begin{equation*}
C(K):=\inf_{x\in K} |\Xi'(x)| = -\max_{x\in K} \Xi'(x) >0.
\end{equation*}
Therefore by the mean value theorem, for all $s\in K$, there exists a $\iota = \iota(s)\in K$ such that
\begin{equation*}
\abs{\Xi(s)-\Xi(a')-iy}\geq \abs{\Xi(s)-\Xi(a')}= \abs{s-a'}\abs{ \Xi'(\iota)}\ge \abs{s-a'} C(K).
\end{equation*}
By the definition of the Hölder semi-norm, we thus have, for all $s\in K$,
\[
\left|\frac{\psi (s)- \psi (a')}{\Xi (s) - \Xi (a') - iy}\right| \leq \frac{\|\psi\|_{C^\sigma}}{C(K)|s-a'|^{1-\sigma}},
\]
which is integrable. Furthermore, outside of $K$ the integrand of $I_1$ is bounded and decays exponentially, therefore one can uniformly bound $I_1$.
We next wish to bound the seminorm
\[
[B^2_z (\psi)]_\sigma:= \sup_{t\neq t'} \frac{|B^2_z (\psi) (t) - B^2_z (\psi) (t')|}{|t-t'|^\sigma}\, .
\]
We write
\[
B^2_z (\psi) (t) - B^2_z (\psi) (t') = \frac{1}{2m_0}\left((I_1 (t) - I_1 (t')) + \psi (a') (I_2 (t) - I_2 (t'))\right)\, .
\]
Using that $|e^{-m_0 |t-s|} - e^{-m_0 |t'-s|}|\leq C |t-t'|$ we can bound
\[
|I_1 (t) - I_1 (t')| \leq C |t-t'| \int |\varphi (s)| \frac{|\psi (s)-\psi (a')|}{|\Xi (s) - \Xi (a')|}\, ds
\leq C \|\psi\|_{C^\sigma} |t-t'|\, .
\]
Similarly we can write
\[
|I_{2,2} (t) - I_{2,2} (t')| \leq C |t-t'| \int \left|\ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\right|\, ds \leq C |t-t'|\, .
\]
Next denoting the function $(\Xi' (s))^{-1} \varphi (s) \ln (\Xi (s) -z)$ by $B (s)$ we assume $t> t'$ and write further
\begin{align*}
I_{2,1} (t) - I_{2,1} (t')&= m_0 \Bigg(\underbrace{\int_t^\infty e^{-m_0 (s-t)} B(s)\, ds - \int_{t'}^\infty e^{-m_0(s-t')} B(s)\, ds}_{=: J_+(t,t')}\Bigg)\\
&\qquad - m_0 \Bigg(\underbrace{\int_{-\infty}^t e^{-m_0 (t-s)} B(s)\, ds - \int_{-\infty}^{t'} e^{-m_0 (t'-s)} B(s)\, ds}_{=: J_- (t,t')}\Bigg)\, .
\end{align*}
Then we choose $p=\frac{1}{\sigma}$, let $p'$ be the dual exponent and estimate
\begin{align*}
|J_+ (t,t')| &\leq C |t-t'| \int_t^\infty |B(s)|\, ds + \int_{t'}^t |B (s)|\, ds\\
&\leq C |t-t'| \|B\|_{L^1} + |t-t'|^\sigma \|B\|_{L^{p'}}\, .
\end{align*}
A similar estimate for $J_- (t,t')$ finally shows the existence of a constant $C$ such that
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} \left(|t-t'|+|t-t'|^\sigma\right)\, .
\]
Clearly this implies
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma \qquad \mbox{if $|t-t'|\leq 1$.}
\]
On the other hand we can trivially bound
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')| \leq 2 \|B^2_z (\psi)\|_\infty \leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma
\quad\mbox{if $|t-t'|\geq 1$.}
\]
\medskip
{\bf Step 2.} In this second step we compute
\begin{align*}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
\langle T^{-1} \circ \mathcal{K}_{m_0} \left(A ((\Xi-z)^{-1} - (\Xi-z_0)^{-1})\psi_0\right), \psi_0\rangle\\
&\qquad
+ (2m_0 h + h^2) \langle T^{-1} \circ \mathcal{K}_{m_0} (\psi_0), \psi_0\rangle\, .
\end{align*}
Recalling \eqref{e:T-1K} (and using that both $T^{-1}$ and $\mathcal{K}_{m_0}$ are self-adjoint) we rewrite the expression as
\begin{align}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
(z-z_0) \langle T^{-1}\circ \mathcal{K}_{m_0} \big( A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0\big), \psi_0\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \langle A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0, T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \underbrace{\langle A (\Xi-z)^{-1} (\Xi -z_0)^{-1} \psi_0, \psi_0 \rangle}_{=: G (z)} + 2m_a h + h^2\label{e:Taylor-3}\, .
\end{align}
We thus want to show that the following limit exists and to compute its imaginary part:
\[
- c (a) := \lim_{{\rm Im}\, z >0, z\to \Xi (a)} G (z) =
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} \int \frac{1}{\Xi (s)-z} |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}\, ds\, .
\]
Observe indeed that inserting $G(z) = - c(a) + o (1)$ in \eqref{e:Taylor-3} and taking into account \eqref{e:Taylor-2} and \eqref{e:resto-1} we conclude that \eqref{e:expansion} holds.
In order to compute $c(a)$ we observe first that the function $\phi (s) := |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}$ is smooth and decays exponentially. We thus rewrite
\[
G (z) = \int \frac{1}{\Xi (s)-z} \phi (s)\, ds\, .
\]
Next we decompose $z$ into its real and imaginary part as $z = x + iy$ and observe that
\begin{align*}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Re}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds
\end{align*}
Here we are only interested in showing that the limit exists and we thus fix a cut-off function $\varphi\in C^\infty_c (]a-2, a+2[)$, identically $1$ on $[a-1, a+1]$ and split the integral into
\[
{\rm Re}\, G (z) = \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) \varphi (s)\, ds +
\int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) (1-\varphi (s))\, ds\, .
\]
The second integral has a limit, while in order to show that the first has a limit we write
\[
\frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} = \frac{1}{2\Xi' (s)} \frac{d}{ds} \ln ((\Xi(s)-x)^2 + y^2)\, .
\]
We then integrate by parts and use the fact that $\ln ((\Xi (s)-x)^2 +y^2)$ converges to $2 \ln |\Xi(s)-\Xi (a)|$ strongly in $L^q ([a-2,a+2])$ for every $q$ to infer the existence of the limit of the first integral.
As for the imaginary part we write instead
\begin{align}\label{e:arctan-integral}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Im}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{y}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds\, .
\end{align}
We wish to show that the latter integral converges to
\begin{equation}\label{e:arctan-integral-2}
I = \phi (a) \int \frac{ds}{(\Xi' (a))^2 s^2 +1} = \frac{\pi \phi (a)}{|\Xi' (a)|}\, .
\end{equation}
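For the reader's convenience, the elementary integral appearing in \eqref{e:arctan-integral-2} is computed with the substitution $u = |\Xi' (a)|\, s$:
\[
\int_{-\infty}^{\infty} \frac{ds}{(\Xi' (a))^2 s^2 + 1} = \frac{1}{|\Xi' (a)|} \int_{-\infty}^{\infty} \frac{du}{u^2+1} = \frac{\pi}{|\Xi' (a)|}\, .
\]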
On the other hand $\phi (a) = |\psi_0 (a)|^2 A' (a) (\Xi' (a))^{-1}$.
Since $A' (a) > 0$ and $\Xi' (a)<0$, we conclude that $ c (a)$ exists and it is a complex number with positive imaginary part, which completes the proof of the lemma.
It remains to show the convergence of \eqref{e:arctan-integral} to \eqref{e:arctan-integral-2}. First observe that for each $x$ sufficiently close to $\Xi (a)$ there is a unique $a' = \Xi^{-1} (x)$ such that $\Xi (a')=x$. Changing variables ($s$ becomes $a'+s$), the integral in \eqref{e:arctan-integral} becomes
\begin{equation}
\int \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds\,
\end{equation}
and we wish to show that its limit is $I$ as $(a',y)\to (a,0)$.
Next, fix any $\delta>0$ and observe that
\[
\lim_{y\to 0} \int_{|s|\geq \delta} \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds=0
\]
uniformly in $a' \in [a-1, a+1]$. We therefore define
\[
I (\delta, a', y) := \int_{-\delta}^\delta \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds
\]
and we wish to show that, for every $\varepsilon >0$ there is a $\delta>0$ such that
\begin{equation}\label{e:arctan-integral-3}
\limsup_{(a',y) \to (a,0)} \left| I (\delta, a', y) - I\right| \leq C \varepsilon\, ,
\end{equation}
where $C$ is a geometric constant.
We rewrite
\[
I (\delta, a', y) = \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{\phi (a'+ys)}{y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 +1}\, ds\, .
\]
Fix now $\varepsilon$ and observe that, since $\Xi'$ and $\phi$ are continuous, if $\delta$ is chosen sufficiently small, then
\begin{align}
&((\Xi' (a))^2 - \varepsilon^2) s^2 \leq y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 \leq ((\Xi' (a))^2 + \varepsilon^2) s^2\\
& |\phi (a' + ys) - \phi (a)| \leq \varepsilon\, .
\end{align}
for all $|a'-a|<\delta$ and $y |s| \leq \delta$. Choosing $\varepsilon>0$ so that $\varepsilon \leq \frac{|\Xi' (a)|}{2}$ we easily see that, when $|a'-a| < \delta$, we have
\[
\left|I (\delta, a', y) - \phi (a) \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{ds}{(\Xi' (a))^2 s^2 +1}\right| \leq C \varepsilon\, .
\]
In particular, as $y\downarrow 0$, we conclude \eqref{e:arctan-integral-3}.
\end{proof}
\section{Proof of Proposition \ref{p:almost-final}}
We reduce the proof of Proposition \ref{p:almost-final} to the following lemma.
\begin{lemma}\label{l:almost-final-2}
Consider $G:= \{m\in\, ]1, \infty[\, \setminus \{m_a, m_b\} : \mathscr{U}_m \neq \emptyset\}$. Then $G$ is relatively open and relatively closed in $]1, \infty[\setminus \{m_a, m_b\}$.
\end{lemma}
Proposition \ref{p:almost-final} is an obvious consequence of the latter lemma and of Proposition \ref{p:5-7}: Lemma \ref{l:almost-final-2} implies that $G$ is the union of connected components of $]1, \infty[\setminus \{m_a, m_b\}$. On the other hand the connected component $]m_b, m_a[$ intersects $G$ because of Proposition \ref{p:5-7} and thus it is contained in $G$.
We thus complete the proof of Proposition \ref{p:almost-final} by showing Lemma \ref{l:almost-final-2}.
\begin{proof}[Proof of Lemma \ref{l:almost-final-2}] We start with some preliminary considerations. Fix an interval $[c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$.
Recalling Proposition \ref{p:all-m} we know that, since the operator norm of $\mathcal{L}_m$ is bounded uniformly in $m\in [c,d]$,
\begin{itemize}
\item[(a)] There is $R>0$ such that $\mathscr{U}_m\subset B_R (0)$ for all $m\in [c,d]$.
\end{itemize}
However it also follows from Proposition \ref{p:3+4} that
\begin{itemize}
\item[(b)] There is a $\delta >0$ such that $\mathscr{U}_m\subset \{{\rm Im}\, z > \delta\}$ for all $m\in [c,d]$.
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $G$ is relatively closed. To that end we fix a sequence $m_j \to m\in ]1, \infty[\setminus \{m_a, m_b\}$ such that $m_j$ belongs to $G$. Without loss of generality we can assume $\{m_j\}\subset [c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$. For each $m_j$ we can then consider $z_j\in \mathscr{U}_{m_j}$, which by (a) and (b) we can assume to converge to some $z\in \mathbb C$ with positive imaginary part. We then let $\psi_j$ be a sequence of nontrivial elements in $L^2$ such that
\begin{equation}\label{e:eigenvalue-equation-21}
-\psi_j'' + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, ,
\end{equation}
and normalize them to $\|\psi_j\|_{L^2}=1$.
Since ${\rm Im}\, z_j \geq \delta >0$, the sequence of functions $\frac{A}{\Xi-z_j}$ enjoys uniform bounds in the spaces $L^1$ and $C^k$. We can then argue as in Section \ref{s:3+4} to find that
\begin{itemize}
\item[(i)] $\|\psi_j'\|_{L^2}$ enjoy a uniform bound;
\item[(ii)] There are uniformly bounded nonzero constants $\{C^\pm_j\}$ with the property that $\psi_j$ is asymptotic to $C^\pm_j e^{\mp m_j t}$ at $\pm \infty$;
\item[(iii)] There is a $T_0>0$ independent of $j$ with the property that
\[
|\psi_j (t) - C_j^\pm e^{\mp m_j t}| \leq \frac{|C_j^\pm|}{2} e^{\mp m_j t} \qquad \forall \pm t > T_0\, .
\]
\end{itemize}
These three properties together imply that a subsequence, not relabeled, converges strongly in $L^2$ to some $\psi$. Passing to the limit in \eqref{e:eigenvalue-equation-21} we conclude that
\[
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\]
This shows that $z\in \mathscr{U}_m$, i.e. that $m \in G$.
\medskip
{\bf Step 2.} Here we show that $G$ is relatively open. To that end we consider some sequence $m_j \to m \in ]1, \infty[\setminus \{m_a, m_b\}$ with the property that $m_j \not\in G$ and we show that $m\not \in G$. By (a) and (b) above, it suffices to show that the domain
\[
\Delta := \{z\in \mathbb C : |z|< R,\ {\rm Im}\, z > \delta\}
\]
does not contain any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$. Observe first that, since we know that ${\rm spec}\, m^{-1} \mathcal{L}_m$ does not intersect $\gamma = \partial \Delta$, the distance between $\gamma$ and any element in ${\rm spec}\, m^{-1} \mathcal{L}_m$ is larger than a positive constant $\varepsilon$. Recalling that the spectrum in the upper half of the complex plane is discrete, we have that
\[
P_m := \int_\gamma (m^{-1} \mathcal{L}_m -z)^{-1}\, dz
\]
is a projection on a finite-dimensional space which contains all eigenspaces of the elements $z\in {\rm spec}\, m^{-1} \mathcal{L}_m\cap \Delta = \mathscr{U}_m$. And since all such elements belong to the discrete spectrum, $\mathscr{U}_m = \emptyset$ if and only if $P_m = 0$. On the other hand
\[
P_{m_j} := \int_\gamma (m_j^{-1} \mathcal{L}_{m_j} -z)^{-1}\, dz
\]
equals $0$ precisely because $m_j \not \in G$. We thus just need to show that $P_{m_j}$ converges to $P_m$ to infer that $m\not\in G$. The latter follows from the following observations:
\begin{itemize}
\item[(i)] Since $\gamma$ is a compact set and does not intersect the spectrum of $m^{-1} \mathcal{L}_m$, there is a constant $M$ such that $\|(m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq M$ for all $z\in \gamma$;
\item[(ii)] $\mathcal{L}_{m_j}$ converges to $\mathcal{L}_m$ in the operator norm;
\item[(iii)] Writing
\[
(m_j^{-1} \mathcal{L}_{m_j} - z)^{-1} = ({\rm Id} + (m^{-1} \mathcal{L}_m -z)^{-1} (m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m))^{-1}(m^{-1} \mathcal{L}_m - z)^{-1}\, ,
\]
when $\|m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m\|_{\mathcal{L}} \leq \frac{1}{2M}$ we can use the Neumann series for the inverse to infer
\[
\sup_{z\in \gamma} \|(m_j^{-1} \mathcal{L}_{m_j} -z)^{-1} - (m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq C \|m^{-1} \mathcal{L}_m - m_j^{-1} \mathcal{L}_{m_j}\|_O\, ,
\]
for some constant $C$ independent of $j$.
\end{itemize}
We then conclude that $P_{m_j}$ converges to $P_m$ in the operator norm.
\end{proof}
\begin{remark}\label{rmk:algebraic dim const}
An immediate outcome of the argument above is that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant on any connected component of $]1, \infty[\setminus \{m_a, m_b\}$. Indeed, it coincides with the rank of the operator $P_m$ defined in Step 2.
\end{remark}
\section{Proof of Lemma \ref{l:bottom}}\label{s:choice-of-A}
Rather than looking for a suitable $\Xi$ we will write $G := \Xi' + 2\Xi$ and look for the latter function after expressing
\[
\Xi (t) := \int_{-\infty}^t e^{-2(t-\tau)} G (\tau)\, d\tau\, .
\]
To check that the above formula recovers $\Xi$ under our assumptions, observe first that the function defined by the integral solves the ODE
\[
X' + 2X = G
\]
by the classical solution formula for first order ODEs with constant coefficients, and so does $\Xi$, by the very definition of $G$. It thus suffices to show that the integral and $\Xi$ coincide in a neighborhood of $-\infty$. To that end recall
that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for any sufficiently negative $t$ and thus
\[
G (t) = 2\Xi (-\infty) - 4c_0 e^{2t}\, ,
\]
so that, for any such $t$,
\[
\Xi (t) = e^{-2t} \int_{-\infty}^t (2\Xi (-\infty) e^{2\tau} - 4c_0 e^{4\tau})\, d\tau =
\Xi (-\infty) - c_0 e^{2t}\, .
\]
We next read the conditions $\Xi\in \mathscr{C}$ in terms of $G$ to find that they are
\begin{itemize}
\item[(i)] $G (t) = 2 \Xi (-\infty) - 4 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii)] $G (t) = e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii)] There are exactly two zeros $a<b$ of $G'$ and $G'' (a)>0$, $G'' (b)<0$;
\item[(iv)] $\int_{-\infty}^t e^{-2(t-\tau)} G' (\tau) d\tau < 0$ for every $t$.
\end{itemize}
The conditions (i), (ii), and (iii) are obviously equivalent to the corresponding ones in Definition \ref{d:class-C}. As for (iv), we just need to check the formula
\[
\Xi' (t) = \int_{-\infty}^t e^{-2(t-\tau)} G' (\tau)\, d\tau\, .
\]
Arguing as above, the solution formula for first order ODEs with constant coefficients shows that the two sides of the above identity can differ at most by a multiple of $e^{-2t}$, while a direct verification using (i) shows that the two sides coincide for sufficiently negative $t$'s.
We next can read all the above conditions in terms of $A$, more precisely it suffices to impose
\begin{itemize}
\item[(i')] $A (t) = - 8 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii')] $A(t) = -{\bar\alpha} e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii')] There are exactly two zeros $a<b$ of $A$ and $A' (a) >0$, $A' (b)<0$;
\item[(iv')] $\int_{-\infty}^t e^{-2(t-\tau)} A(\tau) d\tau <0$ for every $t$.
\end{itemize}
In fact, assuming the four conditions above we easily recover $G$ by setting
\[
G (t) := - \int_t^\infty A(\tau)\, d\tau\, .
\]
Note in passing that since (i'), (ii'), (iii'), and (iv') imply (i), (ii), (iii), and (iv), which in turn imply the corresponding conditions in Definition \ref{d:class-C}, we derive
\[
\Xi (-\infty) = \frac{1}{2} G (-\infty) = - \frac{1}{2} \int_{-\infty}^\infty A(\tau)\, d\tau\, .
\]
In turn, since $\Xi (\infty) = 0$ and $\Xi'<0$, the latter equality implies
\[
\int_{-\infty}^\infty A (\tau)\, d\tau < 0\, .
\]
We next fix $a=0$ and $b=\frac{1}{2}$ and rather than imposing (iv') we impose the two conditions
\begin{itemize}
\item[(v')] $\int_{-\infty}^0 e^{2\tau} A (\tau) d\tau = -1$;
\item[(vi')] $\max A \leq \frac{1}{e}$.
\end{itemize}
Observe, indeed, that (iv') is equivalent to
\[
\int_{-\infty}^t e^{2\tau} A (\tau) \, d\tau < 0
\]
and that, since $A$ is negative on $]-\infty, 0[$ and $]\frac{1}{2}, \infty[$, the integral on the left-hand side is maximal for $t=\frac{1}{2}$. We then can use (v') and (vi') to estimate
\[
\int_{-\infty}^{\frac{1}{2}} e^{2\tau} A(\tau)\,d \tau \leq -1 + \frac{e}{2} \max A \leq -\frac{1}{2}\, .
\]
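Indeed, splitting the integral at $0$ and using (v') together with the fact that $A>0$ on $]0,\frac{1}{2}[$,
\[
\int_{-\infty}^{\frac{1}{2}} e^{2\tau} A(\tau)\, d\tau = -1 + \int_0^{\frac{1}{2}} e^{2\tau} A(\tau)\, d\tau
\leq -1 + \max A \int_0^{\frac{1}{2}} e^{2\tau}\, d\tau = -1 + \frac{e-1}{2}\, \max A\, ,
\]
and the right-hand side is at most $-1 + \frac{e}{2}\max A \leq -\frac{1}{2}$ by (vi').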
We next recall that, by the Rayleigh criterion,
\[
- \lambda_a = \min_{\|\psi\|_{L^2} = 1} \langle \psi, L_a \psi\rangle = \min_{\|\psi\|_{L^2} = 1} \int \left(|\psi'|^2 + \frac{A}{\Xi-\Xi (0)} |\psi|^2 \right)\, .
\]
We test the right-hand side with
\[
\psi (t) :=
\left\{
\begin{array}{ll}
0 \qquad & \mbox{for $|t|\geq \frac{1}{2}$}\\ \\
\sqrt{2} \cos (\pi t) \qquad &\mbox{for $|t|\leq \frac{1}{2}$.}
\end{array}\right.
\]
We therefore get
\begin{equation}\label{e:bottom-est-1}
- \lambda_a \leq 2 \pi^2 + 2 \int_{-1/2}^{1/2} \frac{A (t)}{\Xi (t) - \Xi (0)} \cos^2 \pi t\, dt\, .
\end{equation}
Next, for any fixed positive constant $B>0$, we impose that $A (t) = B t$ on the interval $]- \sqrt{B}^{-1}, 0]$ and we then continue it smoothly on $[0, \infty[$ so as to satisfy (ii'), (iii'), and (vi') on $[0, \infty[$ (the verification that this is possible is rather simple). We can also continue it smoothly on $]-\infty, - \sqrt{B}^{-1}]$ so as to ensure (i'). In order to ensure (v') as well we just need to show that
\[
\int_{-\sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq -\frac{1}{2}\, .
\]
The latter is certainly ensured by
\[
\int_{- \sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq \int_{-\sqrt{B}^{-1}}^0 A(\tau)\, d\tau = -\frac{1}{2}\, .
\]
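Indeed, on $]-\sqrt{B}^{-1}, 0[$ we have $A (\tau) = B\tau \leq 0$ and $e^{2\tau}\leq 1$, so that $e^{2\tau} A(\tau)\geq A(\tau)$, while
\[
\int_{-\sqrt{B}^{-1}}^0 B\tau\, d\tau = -\frac{B}{2}\cdot\frac{1}{B} = -\frac{1}{2}\, .
\]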
Now, observe that $\Xi' (0) = \int_{-\infty}^0 e^{2\tau} A (\tau) \, d\tau = -1$. For $t\in \, ]-\sqrt{B}^{-1}, 0[$ we wish to estimate $\Xi' (t)$ and to do it we compute
\begin{align*}
|\Xi' (t)- \Xi' (0)| & = \left|e^{-2t}\int_{-\infty}^t e^{2\tau} A (\tau)\, d\tau - \int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq e^{-2t} \left|\int_t^0 e^{2\tau} A(\tau)\, d\tau\right| + (e^{-2t}-1) \left|\int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq \frac{e^{2\sqrt{B}^{-1}}}{2} + (e^{2\sqrt{B}^{-1}}-1) \leq \frac{3}{4}\, ,
\end{align*}
which can be ensured by taking $B$ sufficiently large. In particular $-\frac{1}{4} \geq \Xi' (t) \geq -2 $ for $t\in \, ]-\sqrt{B}^{-1}, 0[$. We thus conclude that
\[
-\frac{t}{4} \leq \Xi (t) - \Xi (0) \leq -2t \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
In turn the latter can be used to show
\[
\frac{A (t)}{\Xi (t) - \Xi(0)} \leq - \frac{B}{2} \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
Since $\frac{A}{\Xi - \Xi (0)}$ is otherwise negative on $]-\frac{1}{2}, \frac{1}{2}[$, we conclude
\begin{equation}\label{e:bottom-est-2}
- \lambda_a \leq 2\pi^2 - B \int_{-\sqrt{B}^{-1}}^0 \cos^2 \pi t\, dt\, .
\end{equation}
By taking $B$ large enough we can ensure that $\cos^2 \pi t\geq \frac{1}{2}$ on the interval $]-\sqrt{B}^{-1}, 0[$. In particular we achieve
\[
- \lambda_a \leq 2\pi^2 - \frac{\sqrt{B}}{2}\, .
\]
Since we can choose $\sqrt{B}$ as large as we wish, the latter inequality completes the proof of the lemma.
\chapter{Linear theory: Part II}
\label{chapter:linearpartii}\label{sect:Proof-spectral-2}
This chapter is devoted to proving Theorem \ref{thm:spectral2}. Because of the discussions in the previous chapter, considering the decomposition
\[
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, ,
\]
the statement of Theorem \ref{thm:spectral2} can be reduced to the study of the spectra of the restrictions $L_{\text{st}}|_{U_{km}}$ of the operator $L_{\text{st}}$ to the invariant subspaces $U_{km}$. For this reason we introduce the notation ${\rm spec}\, (L_{\text{st}}, U_j)$ for the spectrum of the operator $L_{\text{st}}|_{U_j}$, understood as an operator from $U_j$ to $U_j$. The following is a very simple observation.
\begin{lemma}
The restriction of the operator $L_{\text{st}}$ to the radial functions $U_0$ is identically $0$. Moreover, $z\in {\rm spec}\, (L_{\text{st}}, U_j)$ if and only if $\bar z \in {\rm spec}\, (L_{\text{st}}, U_{-j})$.
\end{lemma}
We will then focus on proving the following statement, which is slightly stronger than what we need to infer Theorem \ref{thm:spectral2}.
\begin{theorem}\label{thm:spectral3}
For a suitable choice of $\bar \Omega$, there is $m_0\geq 2$ such that
${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ is nonempty and ${\rm spec}\, (L_{\text{st}}, U_{m_0})\cap \{{\rm Re}\, z \geq \bar a\}$ is finite for every positive $\bar a$.
\end{theorem}
\begin{remark}\label{r:better2}\label{R:BETTER2} As it is the case for Theorem \ref{thm:spectral2} we can deepen our analysis and prove the following stronger statement:
\begin{itemize}
\item[(i)] For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} we have ${\rm spec}\, (L_{\text{st}}, U_m) \subset i \mathbb R$ for every $m> m_0$.
\end{itemize}
This will be done in Appendix \ref{a:better}, where we will also show how conclusion (c) of Remark \ref{r:better} follows from it.
\end{remark}
{\color{red} Note that in \cite{Vishik2} Vishik claims the following stronger statement.
\begin{theorem}\label{thm:spectral-stronger-2}\label{THM:SPECTRAL-STRONGER-2}
For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} and to Remark \ref{r:better2}(i), we have also
\begin{itemize}
\item[(ii)] ${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ consists of a single eigenvalue with algebraic multiplicity $1$.
\end{itemize}
\end{theorem}
In Appendix \ref{a:better} we will show how to prove the latter conclusion and how Theorem \ref{thm:spectral-stronger} follows from it.}
\section{Preliminaries}
If we write an arbitrary element $\Omega\in U_m$ as $\Omega (x) = e^{im\theta} \gamma (r)$ using polar coordinates, we find an isomorphism of the Hilbert space $U_m$ with the Hilbert space
\begin{equation}\label{e:def-H}
\mathcal{H}:= \left\{\gamma : \mathbb R^+ \to \mathbb C : \int_0^\infty |\gamma (r)|^2\, r\, dr < \infty\right\}
\end{equation}
and thus the operator $L_{\text{st}}: U_m \to U_m$ can be identified with an operator $\mathcal{L}_m : \mathcal{H}\to \mathcal{H}$. In fact, since $L_{\text{st}} = S_1+\mathscr{K}$, where $S_1$ is skew-adjoint and $\mathscr{K}$ compact, $\mathcal{L}_m$ is also a compact perturbation of a skew-adjoint operator. In order to simplify our notation and terminology, we will then revert our considerations to the operator $i\mathcal{L}_m$, which will be written as the sum of a self-adjoint operator, denoted by $\mathcal{S}_m$, and a compact operator, denoted by $\mathscr{K}_m$.
\begin{lemma}\label{l:S-in-polar}
After the latter identification, if $\bar\Omega (x) = g (|x|)$ and $\zeta$ is given through the formula \eqref{e:def-zeta}, then $\mathcal{S}_m: \mathcal{H}\to \mathcal{H}$ is the following bounded self-adjoint operator:
\begin{equation}\label{e:explicit}
\gamma \mapsto \mathcal{S}_m (\gamma) = m \zeta \gamma\, .
\end{equation}
\end{lemma}
\begin{proof}
The formula is easy to check. The self-adjointness of \eqref{e:explicit} is obvious. Concerning the boundedness we need to show that $\zeta$ is bounded. Since $g$ is smooth (and hence locally bounded), $\zeta$ is smooth and locally bounded by \eqref{e:def-zeta}. To show that it is globally bounded recall that $g (r) = r^{-{\bar\alpha}}$ for $r\geq 2$, so that
\[
\zeta (r) = \frac{\tilde c_0}{r^2} + \frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{c_0}{r^2} + \frac{c_1}{r^{\bar\alpha}} \qquad \forall r\geq 2\, ,
\]
where $c_0$ and $c_1$ are two appropriate constants.
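For instance, assuming ${\bar\alpha}\neq 2$ (an assumption we use only for this explicit computation), the integral can be evaluated:
\[
\frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{r^{-{\bar\alpha}}}{2-{\bar\alpha}} - \frac{2^{2-{\bar\alpha}}}{(2-{\bar\alpha})\, r^2}\, ,
\]
which identifies the constants as $c_0 = \tilde c_0 - \frac{2^{2-{\bar\alpha}}}{2-{\bar\alpha}}$ and $c_1 = \frac{1}{2-{\bar\alpha}}$.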
\end{proof}
A suitable, more complicated, representation formula can be shown for the operator $\mathscr{K}_m$.
\begin{lemma}\label{l:K-in-polar}
Under the assumptions of Lemma \ref{l:S-in-polar}, the compact operator $\mathscr{K}_m: \mathcal{H}\to \mathcal{H}$ is given by
\begin{equation}\label{e:explicit2}
\gamma \mapsto \mathscr{K}_m (\gamma)= - \frac{m}{r} \psi g'\,
\end{equation}
where
\begin{equation}\label{e:explicit3}
\psi (r) = - \frac{1}{2m} r^{m} \int_r^\infty \gamma (s) s^{1-m}\, ds - \frac{1}{2m} r^{-m} \int_0^r \gamma (s) s^{1+m}\, ds\, .
\end{equation}
\end{lemma}
\begin{remark}\label{r:potential-theory}
When $\gamma$ is compactly supported, $\phi (\theta,r):= \psi (r) e^{im\theta}$ with $\psi$ as in \eqref{e:explicit3} gives the unique potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$, namely, $\phi$ obtained as the convolution of $\gamma e^{im\theta}$ with the Newtonian potential $\frac{1}{2\pi} \ln r$. For general $\gamma\in \mathcal{H}$ we do not have enough summability to define such convolution using Lebesgue integration, but, as already done before, we keep calling $\phi$ the potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{l:K-in-polar}]
First of all we want to show that the formula is correct when $\Omega = \gamma (r) e^{im \theta} \in C^\infty_c \cap L^2_m$. We are interested in computing $-i (K_2*\Omega\cdot \nabla) \bar \Omega$. First of all we recall that $K_2* \Omega = \nabla^\perp \phi$, where $\phi$ is the potential-theoretic solution of $\Delta \phi = \Omega$. Recall that for $\phi$ we have the explicit formula
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) \ln |y-x|\,\mathrm dy\, .
\]
$\phi$ is clearly smooth and hence locally bounded. Observe that $\Omega$ averages to $0$ and thus
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) (\ln |y-x| - \ln |x|)\,\mathrm dy\, .
\]
Fix $R$ larger than $1$ so that ${\rm spt}\, (\Omega) \subset B_R$ and choose $|x|\geq 2R$. We then have the following elementary inequality for every $y\in {\rm spt}\, (\Omega)$:
\[
|\ln |x| - \ln |x-y||\leq \ln (|x-y| + |y|) - \ln (|x-y|)\leq \frac{|y|}{|y-x|} \leq \frac{2|y|}{|x|}\, ,
\]
from which we conclude that $|\phi (x)|\leq C (1+|x|)^{-1}$. Hence $\phi$ is the only solution to $\Delta \phi = \Omega$ with the property that it converges to $0$ at infinity. This allows us to show that $\phi$ satisfies the formula
\[
\phi (x) = \psi (r) e^{im\theta}
\]
where $\psi$ is given by formula \eqref{e:explicit3}. We indeed just need to check that the Laplacian of
$\psi (r) e^{im\theta}$ equals $\gamma (r) e^{im\theta}$ and that $\lim_{r\to \infty} \psi (r) = 0$.
Using the formula $\Delta = \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2}$ the first claim is a direct verification. Next, since $\gamma (r) =0$ for $r\geq R$, we conclude $\psi (r) = C r^{-m}$ for all $r\geq R$, which shows the second claim.
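For the reader's convenience, here is the computation behind the first claim. Setting $I_+ (r) := \int_r^\infty \gamma (s) s^{1-m}\, ds$ and $I_- (r) := \int_0^r \gamma (s) s^{1+m}\, ds$, differentiation of \eqref{e:explicit3} gives (the boundary terms produced by the two integrals cancel)
\[
\psi' = -\frac{1}{2}\, r^{m-1} I_+ + \frac{1}{2}\, r^{-m-1} I_-\, ,
\qquad
\psi'' = -\frac{m-1}{2}\, r^{m-2} I_+ - \frac{m+1}{2}\, r^{-m-2} I_- + \gamma\, ,
\]
so that
\[
\psi'' + \frac{1}{r}\psi' - \frac{m^2}{r^2}\psi
= \Big(-\frac{m-1}{2} - \frac{1}{2} + \frac{m}{2}\Big) r^{m-2} I_+
+ \Big(-\frac{m+1}{2} + \frac{1}{2} + \frac{m}{2}\Big) r^{-m-2} I_- + \gamma = \gamma\, .
\]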
Observe next that
\[
\nabla \phi = \frac{m i}{r^2} \psi (r) e^{im\theta} \frac{\partial}{\partial \theta} - \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial r}\, ,
\]
which turns into
\[
\nabla \phi^\perp = - \frac{mi}{r} \psi (r) e^{im\theta} \frac{\partial}{\partial r} - \frac{1}{r} \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial \theta}\, .
\]
Since $\bar\Omega (x) = g(r)$, we then conclude that
\[
- (K_2*\Omega\cdot \nabla) \bar\Omega = \frac{mi}{r} \psi (r) e^{im\theta} g' (r)\, .
\]
Upon multiplication by $i$ we obtain formula \eqref{e:explicit2}. Since we know from the previous chapter that $\mathscr{K}$ is a bounded and compact operator and $\mathscr{K}_m$ is just the restriction of $i\mathscr{K}$ to a closed invariant subspace of it, the boundedness and compactness of $\mathscr{K}_m$ is obvious.
\end{proof}
Notice next that, while in all the discussion so far we have always assumed that $m$ is an integer larger than $1$, the operator $\mathcal{S}_m$ can in fact be easily defined for every {\em real} $m>1$, while, using the formulae \eqref{e:explicit2} and \eqref{e:explicit3} we can also make sense of $\mathscr{K}_m$ for every real $m>1$. In particular we can define as well the operator $\mathcal{L}_m$ for every $m>1$. The possibility of varying $m$ as a real parameter will play a crucial role in the rest of the chapter, and we start by showing that, for $m$ in the above range, the boundedness of $\mathcal{L}_m$ and $\mathcal{S}_m$ and the compactness of $\mathscr{K}_m$ continue to hold.
\begin{proposition}\label{p:all-m}
The operators $\mathcal{L}_m$, $\mathcal{S}_m$, and $\mathscr{K}_m$ are bounded operators from $\mathcal{H}$ to $\mathcal{H}$ for every real $m>1$, with a uniform bound on their norms if $m$ ranges in a compact set. Moreover, under the same assumption $\mathscr{K}_m$ is compact. In particular:
\begin{itemize}
\item[(i)] ${\rm spec}\, (\mathcal{L}_m)$ is compact;
\item[(ii)] for every $z$ with ${\rm Im}\, z \neq 0$ the operator $\mathcal{L}_m-z$ is a bounded Fredholm operator with index $0$;
\item[(iii)] every $z\in {\rm spec}\, (\mathcal{L}_m)$ with ${\rm Im}\, z \neq 0$ belongs to the discrete spectrum.
\end{itemize}
\end{proposition}
\begin{proof} The boundedness of $\mathcal{S}_m$ is obvious. Having shown the boundedness and compactness of $\mathscr{K}_m$, (i) follows immediately from the boundedness of $\mathcal{L}_m$, while (ii) follows immediately from the fact that $\mathcal{L}_m - z$ is a compact perturbation of the operator $\mathcal{S}_m -z$, which is invertible because $\mathcal{S}_m$ is self-adjoint, and (iii) is a standard consequence of (ii).
First of all let us prove that $\mathscr{K}_m$ is bounded (the proof is necessary because, from what was previously proved, we can only conclude the boundedness and compactness of the operator for {\em integer values} of $m$ larger than $1$). We observe first that $\| r^{-1}\psi\|_\infty \leq C (m) \|\gamma\|_{\mathcal{H}}$, as it follows from Cauchy-Schwarz that
\begin{align*}
r^{m-1} \int_r^\infty |\gamma (s)| s^{1-m}\, ds&\leq
r^{m-1} \left(\int_r^\infty |\gamma(s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_r^\infty s^{1-2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m-2}} \|\gamma\|_{\mathcal{H}}\\
r^{-m-1} \int_0^r |\gamma (s)| s^{1+m}\, ds &\leq r^{-m-1} \left(\int_0^r |\gamma (s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_0^r s^{1+2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m+2}} \|\gamma\|_{\mathcal{H}}\, .
\end{align*}
Since $|g' (r)| \leq C (1+r)^{-1-{\bar\alpha}}$, it follows immediately that
\begin{equation}\label{e:K_m-pointwise-bound}
|(\mathscr{K}_m (\gamma)) (r)| \leq \frac{C\|\gamma\|_{\mathcal{H}}}{(1+r)^{1+{\bar\alpha}}}
\end{equation}
and in particular
\[
\|\mathscr{K}_m (\gamma)\|_{\mathcal{H}} \leq
C\|\gamma\|_{\mathcal{H}} \left(\int_0^\infty \frac{s}{(1+s)^{2+2{\bar\alpha}}}\, ds\right)^{\frac{1}{2}}
\leq C \|\gamma\|_{\mathcal{H}} \, .
\]
This completes the proof of boundedness of the operator. In order to show compactness consider now a bounded sequence $\{\gamma_k\}\subset \mathcal{H}$. Observe that for every fixed $N$, \eqref{e:explicit3} gives the following obvious bound
\begin{equation}
\|\mathcal{K}_m(\gamma_k)\|_{W^{1,2} [N^{-1}, N]} \leq C (N) \|\gamma_k\|_{\mathcal{H}}\, .
\end{equation}
In particular, through a standard diagonal procedure, we can extract a subsequence of $\{\mathcal{K}_m(\gamma_k)\}$ (not relabeled) which converges strongly in $L^2 ([N^{-1}, N], rdr)$ for every $N$. It is now easy to show that $\{\mathcal{K}_m (\gamma_k)\}_k$ is a Cauchy sequence in $\mathcal{H}$. Fix indeed $\varepsilon>0$. Using \eqref{e:K_m-pointwise-bound} it is easy to show that there is a sufficiently large $N$ with the property that
\begin{equation}\label{e:Cauchy-1}
\sup_k \|\mathcal{K}_m (\gamma_k) \mathbf{1}_{[0, N^{-1}]\cup [N, \infty[}\|_{\mathcal{H}} < \frac{\varepsilon}{3}\, .
\end{equation}
Hence, given such an $N$, we can choose $k_0$ big enough so that
\begin{equation}\label{e:Cauchy-2}
\|(\mathcal{K}_m (\gamma_k) - \mathcal{K}_m (\gamma_j)) \mathbf{1}_{[N^{-1}, N]}\|_{\mathcal{H}} \leq
\frac{\varepsilon}{3} \qquad \forall k,j \geq k_0\, .
\end{equation}
Combining \eqref{e:Cauchy-1} and \eqref{e:Cauchy-2} we immediately conclude
\[
\|\mathcal{K}_m (\gamma_k) - \mathcal{K}_m (\gamma_j)\|_{\mathcal{H}} < \varepsilon
\]
for every $j,k \geq k_0$. This completes the proof that $\{\mathcal{K}_m (\gamma_j)\}$ is a Cauchy sequence and hence the proof that $\mathcal{K}_m$ is compact.
\end{proof}
\section{The eigenvalue equation and the class \texorpdfstring{$\mathscr{C}$}{scrC}}
\label{s:eigenvalue-equation}
Using the operators introduced above, we observe that Theorem \ref{thm:spectral3} is equivalent to showing that ${\rm spec}\, (\mathcal{L}_{m_0}) \cap \{{\rm Im}\, z>0\}$ is finite and nonempty.
We next notice that, thanks to Proposition \ref{p:all-m}, the latter is equivalent to showing that the equation\footnote{Recall that $\psi$ is defined through \eqref{e:explicit3}.}
\begin{equation}\label{e:eigenvalue-equation}
m \zeta \gamma - \frac{m}{r} g' \psi = z \gamma
\end{equation}
has a nontrivial solution $\gamma \in \mathcal{H}$ for some integer $m=m_0\geq 2$ and some complex number $z$ with positive imaginary part.
We thus turn \eqref{e:eigenvalue-equation} into an ODE problem by changing the unknown from $\gamma$ to the function $\psi$.
In particular, recall that the relation between the two is that $\Delta (\psi (r) e^{im\theta}) = \gamma (r) e^{im\theta}$, and $\psi e^{im\theta}$ is in fact the potential-theoretic solution. We infer that
\[
\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi = \gamma\,
\]
and hence \eqref{e:eigenvalue-equation} becomes
\begin{equation}\label{e:eigenvalue-equation-2}
- \psi'' - \frac{1}{r}\psi' + \frac{m^2}{r^2} \psi + \frac{g'}{r (\zeta -m^{-1} z)} \psi = 0 \, .
\end{equation}
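Let us spell out the elementary computation behind the last step. In polar coordinates $\Delta = \partial_{rr} + \frac{1}{r} \partial_r + \frac{1}{r^2} \partial_{\theta\theta}$, which yields the identity $\Delta (\psi (r) e^{im\theta}) = \big(\psi'' + \frac{1}{r}\psi' - \frac{m^2}{r^2}\psi\big) e^{im\theta}$ used above. On the other hand, since $\zeta$ is real-valued and we are interested in $z$ with ${\rm Im}\, z \neq 0$, we may solve \eqref{e:eigenvalue-equation} for $\gamma$, namely
\[
\gamma = \frac{m g'}{r (m \zeta - z)}\, \psi = \frac{g'}{r (\zeta - m^{-1} z)}\, \psi\, ,
\]
and inserting this expression into $\psi'' + \frac{1}{r}\psi' - \frac{m^2}{r^2}\psi = \gamma$ gives precisely \eqref{e:eigenvalue-equation-2}.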
Notice that, by classical estimates for ODEs, $\psi \in W^{2,2}_{\text{loc}} (\mathbb R^+)$.
Observe, moreover, that if $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-2} and $z$ has nonzero imaginary part, it follows that
\[
\gamma = \frac{mg'}{r (m \zeta -z)} \psi
\]
belongs to $L^2 (r dr)$ and solves \eqref{e:eigenvalue-equation}, because the function $\frac{mg'}{m \zeta -z}$ is bounded. Viceversa, assume that $\gamma \in L^2 (r dr)$ solves \eqref{e:eigenvalue-equation}. Then $\psi$ solves \eqref{e:eigenvalue-equation-2} and we claim that $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$. First of all notice that, by classical Calder{\'o}n-Zygmund estimates, $\phi (x) := \psi (r) e^{im\theta}$ is a $W^{2,2}_{\text{loc}}$ function of $\mathbb R^2$. As such $\phi\in C^\omega (B_1)$ for every $\omega<1$ and therefore $\psi\in C^\omega ([0,1])$ and, by symmetry considerations, $\psi (0) =0$. Thus it turns out that $|\psi (r)|\leq C r^\omega$ for every $r\in [0,1]$, which easily shows that $\psi\in L^2 ([0,1], \frac{dr}{r})$. It remains to show that
\begin{equation}\label{e:correzione-1}
\int_1^\infty \frac{|\psi (r)|^2}{r}\, dr < \infty\, .
\end{equation}
However recall that, for $r$ sufficiently large, $\zeta (r) = \frac{c_0}{r^2}+ \frac{c_1}{r^{\bar\alpha}}$ for some constants $c_0$ and $c_1$, while $g' (r) = -{\bar\alpha} r^{-1-{\bar\alpha}}$. We thus infer
\[
|\psi (r)| = \left|\frac{r (\zeta (r)-\frac{z}{m})}{g' (r)}\right| |\gamma (r)| \leq \frac{C|\gamma (r)|}{r^{\bar\alpha}}\, ,
\]
which in turn easily implies \eqref{e:correzione-1} because $\int_1^\infty |\gamma (r)|^2 r\, dr < \infty$.
Hence our problem is equivalent to understanding for which $m$ and $z$ with positive imaginary part there is an $L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solution of \eqref{e:eigenvalue-equation-2}. The next step is to change variables to $t = \ln r$: we thus set $\varphi (t) = \psi (e^t)$, namely, $\psi (r) = \varphi (\ln r)$. The condition that $\psi\in L^2 (\frac{dr}{r})$ translates then into $\varphi\in L^2 (\mathbb R)$ and $\psi\in W^{2,2}_{\text{loc}}$ translates into $\varphi\in W^{2,2}_{\text{loc}}$.
Moreover, upon renaming the complex number $\frac{z}{m}$ as $z$, we can rewrite
\begin{equation}\label{e:eigenvalue-equation-3}
- \varphi'' (t) + m^2 \varphi (t) + \frac{A(t)}{\Xi (t) - z} \varphi (t) = 0\, ,
\end{equation}
which is \emph{Rayleigh's stability equation},
where the functions $A$ and $\Xi$ are given by changing variables in the corresponding functions $g'$ and
$\zeta$:
\begin{align}
A (t) &= \frac{d}{dt} g(e^t)\\
\Xi (t) &= \int_{-\infty}^t e^{-2 (t-\tau)} g (e^\tau)\, d\tau\, .
\end{align}
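To keep the exposition self-contained, let us record the effect of the change of variables (everything below follows by direct computation from the definitions above). With $r = e^t$ one has $A (t) = e^t g' (e^t) = r g' (r)$, while the substitution $s = e^\tau$ in the definition of $\Xi$ gives
\[
\Xi (t) = \frac{1}{r^2} \int_0^r g(s)\, s\, ds \qquad \mbox{with } r = e^t\, ,
\]
consistently with the fact, stated above, that $\Xi$ is obtained from $\zeta$ by the change of variables. Moreover, writing $\psi (r) = \varphi (\ln r)$ we have $\psi' (r) = \frac{\varphi' (\ln r)}{r}$ and $\psi'' (r) = \frac{\varphi'' (\ln r) - \varphi' (\ln r)}{r^2}$, hence
\[
-\psi'' - \frac{1}{r}\psi' + \frac{m^2}{r^2} \psi = \frac{1}{r^2} \left(-\varphi'' + m^2 \varphi\right)
\qquad\mbox{and}\qquad
\frac{g'}{r (\zeta - m^{-1} z)}\, \psi = \frac{1}{r^2}\, \frac{A}{\Xi - m^{-1} z}\, \varphi\, ,
\]
so that, multiplying \eqref{e:eigenvalue-equation-2} by $r^2$ and renaming $m^{-1} z$ as $z$, we obtain \eqref{e:eigenvalue-equation-3}.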
Note in particular that we can express $A$ and $\Xi$ through the relation
\begin{equation}\label{e:A-Xi}
A = \Xi'' + 2 \Xi'\, .
\end{equation}
The function $g$ (and so our radial function $\bar\Omega$) can be expressed in terms of $\Xi$ through the formula
\begin{equation}\label{e:formula-g}
g (e^t) = e^{-2t} \frac{d}{dt} (e^{2t} \Xi (t))\, .
\end{equation}
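Both \eqref{e:A-Xi} and \eqref{e:formula-g} follow from an elementary differentiation: since $\Xi (t) = e^{-2t} \int_{-\infty}^t e^{2\tau} g (e^\tau)\, d\tau$, the Leibniz rule gives
\[
\Xi' (t) = - 2\, \Xi (t) + g (e^t)\, ,
\]
which is a rearrangement of \eqref{e:formula-g}, and differentiating once more yields $A (t) = \frac{d}{dt} g (e^t) = \Xi'' (t) + 2\, \Xi' (t)$, i.e. \eqref{e:A-Xi}.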
Rather than looking for $g$ we will then look for $\Xi$ in an appropriate class $\mathscr{C}$ which we next detail:
\begin{definition}\label{d:class-C}
The class $\mathscr{C}$ consists of those functions $\Xi: \mathbb R \to ]0, \infty[$ such that
\begin{itemize}
\item[(i)] $\Xi (-\infty) := \lim_{t\to - \infty} \Xi (t)$ is finite and there are constants $c_0>0$ and $M_0$ such that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for all $t\leq M_0$;
\item[(ii)] there is a constant $c_1$ such that $\Xi (t) = c_1 e^{-2t} + \frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}$ for $t\geq \ln 2$;
\item[(iii)] $A$ has exactly two zeros, denoted by $a<b$, and $A' (a)>0$ and $A' (b)<0$ (in particular $A<0$ on $]-\infty,a[ \cup ]b, \infty[$ and $A>0$ on $]a,b[$);
\item[(iv)] $\Xi ' (t) <0$ for every $t$.
\end{itemize}
\end{definition}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/fig1.pdf}
\caption{A sketch of the function in the class $\mathscr{C}$ which will be finally chosen in Section \ref{s:choice-of-A} to prove Theorem \ref{thm:spectral3}, on the $t= \log r$ axis. The graph of $A(t)$ is the solid curve, $G(t):=\Xi'(t)+2\Xi(t)$ the dashed one, and $\Xi'(t)$ the dotted one. Even though $A$ is smooth, its derivative undergoes a very sharp change around the point $t = \frac{1}{2}$ and the point $t= -\frac{1}{\sqrt{B}}$, where $B$ is an appropriately large constant, cf. Section \ref{s:choice-of-A}.}
\label{fig:fig1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{Figures/fig2.pdf}
\caption{The profile of the background vorticity $\bar \Omega(x) = g(r)$ in the original coordinates (the solid curve). Compare with the exact singular profile $r^{-{\bar\alpha}}$ (the dashed curve).}
\label{fig:fig2}
\end{figure}
Fix $\Xi\in \mathscr{C}$. By \eqref{e:formula-g}, $g$ is then smooth, it equals $2\Xi (-\infty)- 4 c_0 r^2$ in a neighborhood of $0$, and it is equal to $r^{-{\bar\alpha}}$ for $r\geq 2$, thanks to the conditions (i)-(ii). In particular the corresponding function $\bar \Omega (x) = g (|x|)$ satisfies the requirements of Theorem \ref{thm:spectral3}. We are now ready to turn Theorem \ref{thm:spectral3} into a (in fact stronger) statement for the eigenvalue equation \eqref{e:eigenvalue-equation-3}. In order to simplify its formulation and several other ones in the rest of these notes, we introduce the following sets
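For the reader's convenience, here is the computation behind the last claim. By \eqref{e:formula-g} we have $g (e^t) = \Xi' (t) + 2\, \Xi (t)$. For $t \leq M_0$, condition (i) gives
\[
g (e^t) = - 2 c_0 e^{2t} + 2\, \Xi (-\infty) - 2 c_0 e^{2t} = 2\, \Xi (-\infty) - 4 c_0 e^{2t}\, ,
\]
i.e. $g (r) = 2\, \Xi (-\infty) - 4 c_0 r^2$ in a neighborhood of $0$, while for $t \geq \ln 2$ condition (ii) gives
\[
g (e^t) = - 2 c_1 e^{-2t} - \frac{{\bar\alpha}}{2-{\bar\alpha}}\, e^{-{\bar\alpha} t} + 2 c_1 e^{-2t} + \frac{2}{2-{\bar\alpha}}\, e^{-{\bar\alpha} t} = e^{-{\bar\alpha} t}\, ,
\]
i.e. $g (r) = r^{-{\bar\alpha}}$ for $r \geq 2$.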
\begin{definition}\label{d:sets-U-m}
Having fixed $\Xi\in \mathscr{C}$ and a real number $m>1$, we denote by $\mathscr{U}_m$ the set of those complex $z$ with positive imaginary part with the property that there are nontrivial solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}} (\mathbb R, \mathbb C)$ of \eqref{e:eigenvalue-equation-3}.
\end{definition}
\begin{remark}\label{r:m-factor-eigenvalue}
Observe that $z$ belongs to $\mathscr{U}_m$ if and only if it has positive imaginary part and $m z$ is an eigenvalue of $\mathcal{L}_m$.
\end{remark}
\begin{theorem}\label{thm:spectral5}\label{THM:SPECTRAL5}
There is a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m_0}$ is finite and nonempty.
\end{theorem}
\section{Overview of the proof of Theorem \ref{thm:spectral5}}\label{sec:overviewspectralthm}
The rest of the chapter is devoted to proving Theorem \ref{thm:spectral5}. The proof will be achieved through a careful study of Rayleigh's stability equation~\eqref{e:eigenvalue-equation-3} and, in particular, the set $\mathscr{P}$ of pairs $(m,z)$ such that $z\in \mathscr{U}_m$ and $m>1$, i.e.,
\begin{equation}\label{e:def-pairs}
\mathscr{P}:= \left\{(m,z)\in \mathbb R \times \mathbb C: z\in \mathscr{U}_m, m>1\right\}\, .
\end{equation}
Given that $\Xi$ is strictly decreasing, we have
\[
\lim_{t\to -\infty} \Xi (t) > \Xi (a) > \Xi (b) > \lim_{t\to \infty} \Xi (t) = 0
\]
and in order to simplify our notation we will use $\Xi (-\infty)$ for $\lim_{t\to -\infty} \Xi (t)$ and, occasionally, $\Xi (\infty)$ for $0$.
The first step in the proof of Theorem \ref{thm:spectral5} is understanding which pairs $(m,z)$ belong to the closure of $\mathscr{P}$ and have ${\rm Im}\, z =0$. Solutions $(m,z,\varphi)$ to~\eqref{e:eigenvalue-equation-3} with $(m,z) \in \overline{\mathscr{P}}$ are sometimes called \emph{neutral limiting modes}~\cite{LinSIMA2003}.\footnote{The interested reader may compare with the strategy for bounded shear flows in~\cite{LinSIMA2003}.} To that end, it is convenient to introduce the following two self-adjoint operators:
\begin{align}
L_a &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (a)}\label{e:def-L_a}\\
L_b &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (b)}\label{e:def-L_b}\, .
\end{align}
Thanks to the definition of the class $\mathscr{C}$, it is easy to see that both functions $\frac{A(t)}{\Xi (t) - \Xi (a)}$ and $\frac{A(t)}{\Xi (t) - \Xi (b)}$ are bounded and that $\frac{A(t)}{\Xi (t) - \Xi (a)} < \frac{A(t)}{\Xi (t) - \Xi (b)}$. Moreover, the first is negative on $]-\infty, b[$ and positive on $]b, \infty[$, while the second is negative on $]-\infty, a[$ and positive on $]a, \infty[$. Recall that the spectra of these operators are necessarily real and denote by $-\lambda_a$ and $-\lambda_b$ the smallest element in the respective ones: observe that, by the Rayleigh quotient characterization, $-\lambda_a < -\lambda_b$.
The following proposition characterizes the possible neutral limiting modes:
\begin{proposition}\label{p:3+4}\label{P:3+4}
If $(m_0,z)\in \overline{\mathscr{P}}$ and ${\rm Im}\, z =0$, then either $z= \Xi (a)$ or $z= \Xi (b)$. Moreover, in either case, if $m_0>1$ then necessarily $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$. Assume in addition that $- \lambda_a < -1$. Then, for $z = \Xi (a)$, the unique $m\geq 1$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$. Moreover, any nontrivial solution has the property that $\psi_a (a) \neq 0$.
\end{proposition}
\begin{remark}\label{r:b-also}
We remark that the exact same argument applies with $b$ in place of $a$ when $\lambda_b >1$, even though this fact does not play any role in the rest of the notes.
\end{remark}
Observe that this does not yet show that $(m_a, \Xi (a))\in \overline{\mathscr{P}}$ corresponds to a neutral limiting mode. The latter property will be achieved in a second step, in which we seek a curve of unstable modes emanating from $(m_a, \Xi(a))$:
\begin{proposition}\label{p:5-7}\label{P:5-7}
Assume $- \lambda_a<-1$ and let $m_a=\sqrt{\lambda_a}$.
There are positive constants $\varepsilon >0$ and $\delta>0$ with the following property:
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) \neq \emptyset$.
\end{proposition}
\begin{remark}\label{r:b-also-2} In fact, the argument given for the proposition proves the stronger conclusion that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ consists of a single point $z$, with the property that $mz$ is an eigenvalue of $\mathcal{L}_m$ with geometric multiplicity $1$.
Moreover, the very same argument applies to $b$ in place of $a$ and $h \in ]-\delta,0[$ if $\lambda_b >1$.
\end{remark}
Combined with some further analysis, in which the curve of unstable modes is continued, the latter proposition will allow us to conclude the following:
\begin{proposition}\label{p:almost-final}\label{P:ALMOST-FINAL}
Assume $- \lambda_a<-1$, let $m_a = \sqrt{\lambda_a}$ and set $m_b:= \sqrt{\max \{1, \lambda_b\}}$: then
$\mathscr{U}_m\neq \emptyset$ for every $m\in ]m_b, m_a[$. \end{proposition}
Thus far, we have not selected our function $\Xi$: the above properties are valid for any element in the class $\mathscr{C}$. The choice of $\Xi$ comes in the very last step.
\begin{proposition}\label{p:final}
There is a choice of $\Xi\in \mathscr{C}$ with the property that $]m_b,m_a[$ contains an integer larger than $1$.
\end{proposition}
Clearly, the combination of Proposition \ref{p:almost-final} and Proposition \ref{p:final} gives Theorem \ref{thm:spectral5}: we first choose $\Xi$ as in Proposition \ref{p:final} and hence we select $m_0$ as the largest natural number which belongs to the interval $]m_b,m_a[$; the properties claimed in Theorem \ref{thm:spectral5} follow then from Proposition \ref{p:almost-final}. The proof of Proposition \ref{p:final} is in fact a rather straightforward application of the following.
\begin{lemma}\label{l:bottom}\label{L:BOTTOM}
Let $m_0$ be any integer. Then there exists $\Xi\in \mathscr{C}$ with $a=0$ and $b=\frac{1}{2}$ such that the smallest eigenvalue of the operator $L_a$ is smaller than $-m_0^2$.
\end{lemma}
\begin{remark}\label{rmk:veryunstablemodes}
A consequence of Lemma~\ref{l:bottom} is that the most unstable wavenumber $m_0$ can be made arbitrarily large. Only $m_0 \geq 2$ is necessary to prove non-uniqueness.
\end{remark}
The rest of the chapter will be devoted to proving the Propositions \ref{p:3+4} and \ref{p:5-7} and Lemma \ref{l:bottom}. We finish this section by giving the simple proof of Proposition \ref{p:final}.
\begin{proof} For simplicity we fix $a=0$ and $b=\frac{1}{2}$ and we look at the set of functions $\Xi$ with this particular choice of zeros for $A$. We then denote by $L_{\Xi, a}$ the operator in \eqref{e:def-L_a}. We fix an arbitrary $\Xi_0\in \mathscr{C}$ and let $-\lambda (0)$ be the smallest eigenvalue of $L_{\Xi_0,a}$. We then consider the smallest integer $m_0\geq 3$ such that $m_0^2 > \lambda (0)$. By Lemma \ref{l:bottom} there is an element $\Xi_1\in \mathscr{C}$ with the property that $a=0$, $b=\frac{1}{2}$ and, if $-\lambda (1)$ is the smallest element of the spectrum of $L_{\Xi_1, a}$, then $-\lambda (1) < -m_0^2$. For $\sigma\in [0,1]$ consider $L_{\Xi_\sigma,a}$ where
\[
\Xi_\sigma = (1-\sigma) \Xi_0 + \sigma \Xi_1\,
\]
and observe that $\Xi_\sigma \in \mathscr{C}$ for every $\sigma\in [0,1]$.
Since $\sigma \mapsto \Xi_\sigma$ is continuous with respect to uniform convergence, by the Rayleigh quotient characterization we see that the smallest element $-\lambda (\sigma)$ of the spectrum of $L_{\Xi_\sigma,a}$ is a continuous function of $\sigma$. There is thus one $\sigma\in [0,1[$ with $\lambda (\sigma)= m_0^2$. Let $\sigma_0$ be the largest $\sigma$ with $\lambda (\sigma)= m_0^2$. Observe now that, if we let $- \mu (\sigma_0)$ be the smallest eigenvalue of $L_{\Xi_{\sigma_0}, b}$, then $\mu (\sigma_0) < m_0^2$. In addition, $\sigma\mapsto \mu (\sigma)$ is also continuous and thus there is $h>0$ such that $\mu (\sigma) < m_0^2$ for all $\sigma\in [\sigma_0-h, \sigma_0+h]$. On the other hand $\lambda (\sigma_0+h)> m_0^2$. This shows that $m_b < m_0 < m_a$ if we choose $\Xi= \Xi_{\sigma_0+h}$, completing the proof of our claim.
\end{proof}
\section{ODE Lemmas}
An essential tool in the proofs of the Propositions \ref{p:3+4} and \ref{p:5-7} are the following two ODE lemmas.
\begin{lemma}\label{l:ODE1}
Let $m>0$. For every $f\in L^2 (\mathbb R)$ there is a unique $\psi\in L^2(\ensuremath{\mathbb R}) \cap W^{2,2}_{\text{loc}}$ s.t.
\begin{equation}\label{e:Laplacian-1d}
-\frac{d^2\psi}{dt^2} + m^2\psi = f
\end{equation}
and it is given by
\begin{equation}\label{e:potential-1d}
\psi (t) = \frac{1}{2m} \int_{\ensuremath{\mathbb R}} e^{-m|t-\tau|} f (\tau)\, d\tau\, .
\end{equation}
\end{lemma}
\begin{proof} The lemma is a classical well-known fact. At any rate the verification that
$\psi$ as in \eqref{e:potential-1d} solves \eqref{e:Laplacian-1d} is an elementary computation while, since obviously $e^{-m|t|}\in L^1$, Young's inequality gives $\psi\in L^2$ if $f\in L^2$. Moreover, any other solution $\hat\psi$ of \eqref{e:Laplacian-1d} must satisfy $\hat\psi (t) = \psi (t) + C_+ e^{mt} + C_- e^{-mt}$ for some constants $C_\pm$ and the requirement $\hat\psi\in L^2$ immediately implies $C_+=C_-=0$.
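For completeness, the elementary computation reads as follows: writing
\[
\psi (t) = \frac{e^{-mt}}{2m} \int_{-\infty}^t e^{m\tau} f (\tau)\, d\tau + \frac{e^{mt}}{2m} \int_t^\infty e^{-m\tau} f (\tau)\, d\tau\, ,
\]
the boundary terms produced by differentiating the two integrals cancel, so that
\[
\psi' (t) = -\frac{e^{-mt}}{2} \int_{-\infty}^t e^{m\tau} f (\tau)\, d\tau + \frac{e^{mt}}{2} \int_t^\infty e^{-m\tau} f (\tau)\, d\tau\, ,
\]
and differentiating once more gives $\psi'' (t) = m^2 \psi (t) - f (t)$ for a.e. $t$, which is \eqref{e:Laplacian-1d}.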
\end{proof}
The second ODE Lemma is the following:
\begin{lemma}\label{l:ODE2}
Let $v\in L^1 (\mathbb R, \mathbb C)$. Then for every constant $c_-$ there is a unique solution $y \in W^{2,1}_{\text{loc}} (\mathbb R, \mathbb C)$ of
\begin{equation}\label{e:ODE2}
- \frac{d^2y}{dt^2} + (m^2 + v) y =0
\end{equation}
with the property that
\begin{equation}\label{e:y=e^mt}
\lim_{t\to - \infty} e^{-mt} y (t) =c_-\, .
\end{equation}
Moreover we have $y(t) = e^{mt} (c_-+z(t))$ for a function $z(t)$ which satisfies the bounds
\begin{align}
|z(t)| &\leq |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z}\\
|z'(t)| &\leq 2m |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z'}
\end{align}
A symmetric statement, left to the reader, holds for solutions such that
\begin{equation}\label{e:y=e^mt-plus}
\lim_{t\to \infty} e^{mt} y (t) =c_+\, .
\end{equation}
\end{lemma}
Important consequences of the above Lemmas are the following:
\begin{corollary}\label{c:decay}
If $(m,z)\in \mathscr{P}$, then the space of solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ of \eqref{e:eigenvalue-equation-3} is $1$-dimensional. Moreover for any such $\varphi$ there is a constant $C$ with the property that
\begin{align}
|\varphi (t)| &\leq C e^{-m|t|}\,
\end{align}
and there are two constants $C_+$ and $C_-$ such that
\begin{align}
\lim_{t\to\infty} e^{mt} \varphi (t) &= C_+\\
\lim_{t\to -\infty} e^{-mt} \varphi (t) &= C_-\, .
\end{align}
The constants are either both nonzero or both zero, in which case $\varphi$ vanishes identically.
The same conclusions apply if $m>1$, $z\in \{\Xi (a), \Xi (b)\}$ and $\varphi$ solves \eqref{e:eigenvalue-equation-3}.
\end{corollary}
\begin{proof}
Observe that $|\Xi (t)-z|\geq |{\rm Im}\, z|$, while $A (t) = -8 c_0 e^{2t}$ for $-t$ sufficiently large and $|A(t)|\leq 2 e^{-2{\bar\alpha} t}$ for $t$ sufficiently large. In particular
\begin{equation}\label{e:estimate-A-over-Xi}
\frac{|A(t)|}{|\Xi (t)-z|} \leq C e^{-2{\bar\alpha} |t|}\, .
\end{equation}
First of all notice that, if $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-3}, by Lemma \ref{l:ODE1} (applied with $f= -\frac{A\varphi}{\Xi-z}$) we have
\begin{equation}\label{e:integral-equation}
|\varphi (t)| \leq \frac{C}{2m} \int e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)|\, d\tau\, .
\end{equation}
Using Cauchy-Schwarz and the fact that $\varphi\in L^2$ we immediately obtain that $\varphi\in L^\infty$, namely, that there is a constant $C$ such that $|\varphi|\leq C$. We now prove inductively that $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$ as long as $k{\bar\alpha} \leq m$. The case $k=0$ has already been shown. Assume thus that the inequality holds for $k-1$ and that $k{\bar\alpha} \leq m$. We then observe that
\begin{align*}
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| &\leq C_{k-1} e^{- m|t-\tau| - k{\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|}
\leq C_{k-1} e^{-k{\bar\alpha} (|t-\tau| + |\tau|)} e^{-{\bar\alpha} |\tau|}\\
&\leq C_{k-1} e^{-k{\bar\alpha} |t|} e^{-{\bar\alpha} |\tau|}\, .
\end{align*}
Inserting in \eqref{e:integral-equation} and using that $e^{-{\bar\alpha} |\tau|}\in L^1$ we then obtain $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$. Assuming now $k{\bar\alpha} \leq m < (k+1) {\bar\alpha}$ we can, likewise, bound
\[
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| \leq C_k e^{- m|t-\tau| - (k+1){\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|} \leq C_k e^{-m|t|} e^{-{\bar\alpha} |\tau|}
\]
and plugging into \eqref{e:integral-equation} one last time we conclude $|\varphi (t)|\leq C e^{-m|t|}$.
In order to show that $\varphi$ is unique up to a multiplicative constant, it suffices to show that $\lim_{t\to -\infty} e^{-mt} \varphi (t)$ exists and is finite. Indeed, Lemma \ref{l:ODE2} then implies that the solution is uniquely determined by $C_-$, and that the latter must be nonzero, since otherwise $\varphi\equiv 0$.
In order to show existence and finiteness of $C_-$ rewrite
\[
\varphi (t) = -\frac{e^{mt}}{2m} \int_t^\infty e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds
- \frac{e^{-mt}}{2m} \int_{-\infty} ^t e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\]
Since by our estimates both $e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ and $e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ are integrable, we conclude that $C_{\pm}$ exist and equal
\begin{align*}
C_\pm = -\frac{1}{2m}\int_{-\infty}^\infty e^{\pm ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\end{align*}
\medskip
As for the last sentence of the statement of the corollary, the same arguments can be used in the case $z\in \{\Xi (a), \Xi (b)\}$, since the crucial point is that, thanks to the assumption that $A (a) = A(b)=0$ and $\Xi' (a) \neq 0 \neq \Xi' (b)$, the estimate \eqref{e:estimate-A-over-Xi} remains valid.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:ODE2}] We distinguish between the case $c_-\neq 0$ and $c_-=0$. In the case $c_- \neq 0$ we can divide by $c_-$ and reduce the statement to $c_-=1$. For the existence it suffices to look for a solution of \eqref{e:ODE2} which satisfies \eqref{e:y=e^mt} on a half-line of type $]-\infty, T]$ for some $T$. Such a solution then has a $W^{2,1}_{\text{loc}}$ continuation on $[T, \infty[$ by standard ODE theory. Likewise, uniqueness is settled once we show that it holds on $]-\infty, T]$. Observe next that, if the solution exists, we would clearly conclude that $\frac{d^2 y}{dt^2}\in L^1 (]-\infty, T])$, hence implying that
\[
\lim_{t\to-\infty} y' (t)
\]
exists and is finite. On the other hand \eqref{e:y=e^mt} implies that such a limit must be $0$.
Let $\tilde{y} (t) = e^{-mt} y (t)$ and observe that we are looking for a solution of
\[
(e^{2mt} \tilde{y}')' = e^{2mt} v \tilde{y}\, .
\]
Integrating the latter identity between $-N$ and $t$ and letting $N\to \infty$ (note that $e^{2ms}\tilde{y}' (s) = e^{ms} (y' (s) - m y (s)) \to 0$ as $s\to -\infty$) we conclude
\begin{equation}\label{e:tildey'}
e^{2mt} \tilde{y}' (t) = \int_{-\infty}^t e^{2ms} v (s)\tilde{y} (s)\, ds\, .
\end{equation}
Divide by $e^{2mt}$ and integrate once more to reach
\begin{align*}
\tilde{y} (t) -1 = \int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds
\end{align*}
We then define the transformation
\begin{equation}\label{e:fixed-point}
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds + 1\,
\end{equation}
which we consider as a map from $L^\infty (]-\infty, T])$ into itself.
From our discussion we conclude that $y$ solves \eqref{e:ODE2} and obeys \eqref{e:y=e^mt} if and only if $\tilde{y}$ is a fixed point of $\mathscr{F}$. Choosing $T$ negative enough so that $\|v\|_{L^1 (]-\infty, T])}\leq m$ we see immediately that $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and it thus has a unique fixed point. We have thus shown existence and uniqueness of the solution in question.
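Indeed, the contraction property follows from the elementary estimate, valid for $\tilde{y}_1, \tilde{y}_2 \in L^\infty (]-\infty, T])$ and $t \leq T$,
\[
|\mathscr{F} (\tilde{y}_1) (t) - \mathscr{F} (\tilde{y}_2) (t)| \leq \frac{1}{2m} \int_{-\infty}^t |v (s)|\, |\tilde{y}_1 (s) - \tilde{y}_2 (s)|\, ds \leq \frac{\|v\|_{L^1 (]-\infty, T])}}{2m}\, \|\tilde{y}_1 - \tilde{y}_2\|_{L^\infty (]-\infty, T])} \leq \frac{1}{2}\, \|\tilde{y}_1 - \tilde{y}_2\|_{L^\infty (]-\infty, T])}\, ,
\]
where we used $0 \leq 1 - e^{-2m (t-s)} \leq 1$ for $s \leq t$.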
Observe now that $z(t) = \tilde{y} (t) -1$ and set
\[
Z(t) := \exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\, .
\]
$Z$ solves the ODE $Z' = \frac{|v|}{2m} Z + \frac{|v|}{2m}$ and, since $\lim_{t\to-\infty} Z(t) =0$, it also satisfies the integral equation
\[
Z (t) = \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\, .
\]
We first want to show that $|z(t)|\leq Z(t)$ on $]-\infty, T]$. We set $\tilde{y}_0 := Z+1$ and define inductively $\tilde{y}_{i+1} = \mathscr{F} (\tilde{y}_i)$. From the above discussion we know that $\tilde{y}_i$ converges uniformly to $\tilde{y}$ and it suffices thus to show that $|\tilde{y}_i -1| \leq Z$ for all $i$.
By definition we have $|\tilde{y}_0-1| = Z$ and thus we need to show the inductive step. We estimate
\begin{align*}
|\tilde{y}_{i+1} (t) -1| &\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| |\tilde{y}_i (s)|\, ds\\
&\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds = Z(t)\, .
\end{align*}
We have shown \eqref{e:est-z} on $]-\infty, T]$. In order to extend the inequality to the whole real axis observe first that we can assume, without loss of generality, that $\|v\|_{L^1 (\mathbb R)}>0$, otherwise we trivially have $|\tilde{y} (t)-1| = Z(t) =0$ for all $t$. In particular we can select $T$ so that all of the above holds and at the same time $\|v\|_{L^1 (]-\infty, T])}>0$. This implies $Z(T)>0$. Moreover, by \eqref{e:fixed-point} and $\mathscr{F} (\tilde{y})= \tilde{y}$, either
\begin{align*}
|\tilde{y} (T) -1| &< \frac{1}{2m} \int_{-\infty}^{T} |v(s)| |\tilde{y} (s)|\, ds
\end{align*}
or $|v||\tilde{y}|$ vanishes identically on $]-\infty, T]$. In both cases we conclude $|\tilde{y} (T)-1|< Z(T)$. Consider now $T_0 := \sup \{t\geq T: |\tilde{y} (s)-1|< Z (s)\ \forall s\in [T,t]\}$. Such a supremum cannot be finite, because in that case we would have $|\tilde{y} (T_0)-1| = Z(T_0)$, while the same argument leading to the strict inequality $|\tilde{y} (T)-1|< Z(T)$ implies $|\tilde{y} (T_0)-1|< Z (T_0)$.
Having shown \eqref{e:est-z} we now come to \eqref{e:est-z'}. Recalling \eqref{e:tildey'} we have
\begin{align*}
|z'(t)| &= \left|\int_{-\infty}^t e^{-2m (t-s)} v (s) (z(s)+1)\, ds\right|\\
&\leq \int_{-\infty}^t e^{-2m (t-s)} |v (s)| Z(s)\, ds + \int_{-\infty}^t e^{-2m (t-s)} |v (s)|\, ds
\leq 2m Z(t)\, .
\end{align*}
We now come to the case $c_- =0$. In that case we need to show that the unique solution is identically~$0$. Arguing as for the case $c_- =1$ we conclude that $\tilde{y}$ is a fixed point of the transformation
\[
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds\, .
\]
Again, for $T$ negative enough, $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and hence it has a unique fixed point. Since however $0$ is, trivially, a fixed point, we conclude that $\tilde{y}$, and hence $y$, vanishes identically on $]-\infty, T]$. Standard ODE theory implies then that $y$ vanishes identically on the whole $\mathbb R$.
\end{proof}
\section{Proof of Proposition \ref{p:3+4}}\label{s:3+4}
We start by showing the last statement of the proposition, namely:
\begin{itemize}
\item[(A)] For $z = \Xi (a)$ and under the assumption that $\lambda_a>1$, the unique $m$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$.
\end{itemize}
Before coming to its proof we also observe that the same argument applies with $b$ in place of $a$.
First of all observe that, for $z=\Xi (a)$, the equation \eqref{e:eigenvalue-equation-3}, which becomes
\begin{equation}\label{e:eigenvalue-equation-again}
-\frac{d^2\varphi}{dt^2} + m^2 \varphi + \frac{A}{\Xi-\Xi (a)} \varphi = 0,
\end{equation}
has nontrivial solutions $\varphi\in W^{2,2}_{\text{loc}}\cap L^2 (\mathbb R; \mathbb C)$ if and only if it has a nontrivial solution $\varphi \in W^{2,2}_{\text{loc}} \cap L^2 (\mathbb R;\mathbb R)$. That the equation has a nontrivial solution when $m=\sqrt{\lambda_a}$ follows from the classical theory of self-adjoint operators. We therefore only need to show that the existence of a nontrivial solution is only possible for a single $m\geq 1$. Arguing by contradiction, assume there are two such values, $1\leq m_1< m_2$, and denote by $\psi_1$ and $\psi_2$ the respective solutions. Then there is a nontrivial linear combination
\[
\psi = C_1 \psi_1 + C_2 \psi_2
\]
which vanishes at $a$. Observe that $\psi_1$ and $\psi_2$ can be interpreted as eigenfunctions of the self-adjoint operator $-\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t)-\Xi (a)}$ relative to distinct eigenvalues and they are, therefore, $L^2$-orthogonal. Taking the linear combination of the two equations with coefficients $C_1$ and $C_2$, multiplying by $\psi$ and integrating by parts we achieve
\begin{equation}\label{e:tested}
\underbrace{\int \left((\psi')^2 +\frac{A}{\Xi-\Xi (a)} \psi^2\right)}_{=:I} = - C_1^2 m_1^2 \int \psi_1^2 - C_2^2 m_2^2 \int \psi_2^2\, .
\end{equation}
Recalling that $A = \Xi'' + 2\Xi' = (\Xi' + 2\Xi)'$, we wish to integrate by parts the second integrand in the left-hand side. Observe that, because $\psi$ vanishes on $a$ and $\Xi' (a) \neq 0$, the function $\frac{\psi^2}{\Xi - \Xi (a)}$ is in fact continuously differentiable. In particular we can write
\[
\int\frac{A}{\Xi-\Xi (a)} \psi^2 = \int \left(\frac{\Xi'+2\Xi}{(\Xi-\Xi(a))^2} \Xi' \psi^2 - 2\frac{\Xi'+2\Xi}{\Xi-\Xi (a)}\psi\psi'\right)\, .
\]
Substituting it into I, we achieve
\begin{align*}
I &= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + \int \left(\frac{2\Xi\Xi'\psi^2}{(\Xi-\Xi (a))^2} - \frac{4\Xi\psi\psi'}{\Xi-\Xi (a)} \right)\\
&= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + 2 \int \frac{\Xi'}{\Xi-\Xi (a)} \psi^2\, ,
\end{align*}
where to reach the second line we have written the first term in the second integral as
\[
- 2 \Xi \frac{d}{dt} \left(\frac{1}{\Xi-\Xi (a)}\right) \psi^2
\]
and integrated it by parts. Again thanks to the fact that $\psi$ vanishes at $a$ we can write it as $\psi= (\Xi-\Xi (a)) \eta$ and hence conclude
\begin{align*}
I &= \int ((\Xi - \Xi (a))\eta')^2 + \int 2 (\Xi-\Xi (a))\Xi' \eta^2 = \int ((\Xi - \Xi (a))\eta')^2 - 2 \int (\Xi-\Xi (a))^2 \eta\eta'\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (\Xi-\Xi(a))^2 \eta^2\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (C_1^2\psi_1^2 + C_2^2\psi_2^2)\, .
\end{align*}
Inserting the latter in \eqref{e:tested} we conclude
\[
\int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 = - C_1^2 (m_1^2-1) \int \psi_1^2 - C_2^2 (m_2^2-1) \int \psi_2^2\, .
\]
Observe that, since $m_2>1$ and $\psi_2$ is nontrivial, we conclude that $C_2=0$. This would then imply that $\psi = C_1 \psi_1$ and we can thus assume $C_1=1$ in all our computations. In particular $\eta'=\eta$, which implies $\eta (t) = C e^t$. We can now write $\psi_1 (t) = (\Xi (t)-\Xi(a)) \eta (t)$ and given the properties of $\Xi (t)$ we easily see that this would violate the decay at $+\infty$ that we know for $\psi_1$ from Corollary \ref{c:decay}.
\begin{remark}\label{r:phi(a)-nonzero}
We record here a consequence of the above argument: a nontrivial solution $\varphi$ of \eqref{e:eigenvalue-equation-again} necessarily satisfies $\varphi (a) \neq 0$ (and thus it must be unique up to constant factors).
\end{remark}
\medskip
We next show that
\begin{itemize}
\item[(B)] If $(m_0,z)\in \overline{\mathscr{P}}$, $m_0\geq 1$ and $z\in \mathbb R$, then $z$ is in the closure of the range of $\Xi$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $z\in \mathbb R\setminus \overline{\Xi (\mathbb R)}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-5}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0
\end{equation}
\end{itemize}
By Corollary \ref{c:decay} we can normalize our functions $\psi_j$ so that $\psi_j (t) e^{-m_jt} \to 1$ as $t\to-\infty$ and $\psi_j (t) e^{m_jt} \to C_j\neq 0$ as $t\to\infty$. Observe also that there is a positive constant $c_0$ such that $|\Xi-z_j|\geq c_0$ for all $j$ sufficiently large, thanks to (ii). In particular, the functions $\frac{A}{\Xi-z_j}$ are uniformly bounded in $L^1$. By Lemma \ref{l:ODE2} there is a positive $T_0\geq b+1$, independent of $j$ such that
\begin{equation}\label{e:uniform-exp}
\left|\psi_j (t) - C_j e^{-m_j t}\right| \leq \frac{|C_j|}{2} e^{- m_j t} \qquad \forall t \geq T_0\, ,
\end{equation}
and there is a constant $C$, independent of $j$ such that
\begin{equation}\label{e:uniform-inside}
\|\psi_j\|_{L^\infty ([a,b])} \leq C\, .
\end{equation}
Next multiply \eqref{e:eigenvalue-5} by $\bar \psi_j$, integrate in $t$ and take the imaginary part of the resulting equality to conclude
\begin{equation}\label{e:imaginary-trick}
\int \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im\, z_j})^2} |\psi_j|^2 = 0\, .
\end{equation}
We break the integral into three integrals over the regions $]-\infty, a[$, $]a,b[$, and $]b, \infty[$, where the function $A$ is, respectively, negative, positive, and negative. This gives
\[
-\int_{T_0}^{2T_0} \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq
\int_a^b \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2
\]
Now, the right-hand side of the inequality can be bounded uniformly independently of $j$ by \eqref{e:uniform-inside} and (ii). On the other hand the function $\frac{-A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$ is larger than a positive constant $c$ independent of $j$ on $[T_0, 2 T_0]$. Using \eqref{e:uniform-exp} we can achieve a uniform bound $|C_j|\leq C$ for the constants $C_j$. The latter bound, combined with the estimates of Lemma \ref{l:ODE2} and the uniform bound on $\|\frac{A}{\Xi-z_j}\|_{L^1}$, easily implies that $\{\psi_j\}$ is precompact in $L^2$. We can thus extract a subsequence, not relabeled, converging to a nontrivial $L^2$ solution $\psi$ of
\begin{equation}\label{e:eigenvalue-equation-6}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\end{equation}
Without loss of generality we assume that $\psi$ is real valued, since $z$ is real. We can thus multiply \eqref{e:eigenvalue-equation-6} by $\psi$ and integrate to achieve
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \frac{\Xi''+2\Xi'}{\Xi- z} \psi^2 = 0\, .
\]
Integrating by parts $\int \frac{\Xi''}{\Xi-z} \psi^2$ we find
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \left(\frac{(\Xi')^2}{(\Xi-z)^2} \psi^2 - 2 \frac{\Xi'}{\Xi-z} \psi' \psi\right) +
\int \frac{2\Xi'}{\Xi-z} \psi^2 = 0 \, ,
\]
which we can rewrite as
\begin{equation}\label{e:energy-trick}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-z} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-z} \psi^2 = 0\, .
\end{equation}
As already done in the previous paragraphs we set $\eta = \frac{\psi}{\Xi-z}$ and write the identity as
\[
\int \left((\Xi-z)^2 (\eta')^2 + m_0^2 (\Xi-z)^2 \eta^2 + 2 \Xi' (\Xi-z) \eta^2\right) = 0
\]
Integrating by parts the last term we find
\[
\int (\Xi-z)^2 (\eta'-\eta)^2 + \int (m_0^2-1) (\Xi-z)^2 \eta^2 = 0\, .
\]
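In detail, since $2\, \Xi' (\Xi - z) = \frac{d}{dt} \big( (\Xi - z)^2 \big)$ and $\eta = \frac{\psi}{\Xi - z}$ decays at $\pm\infty$ (recall that $|\Xi - z|$ is bounded away from zero, so that the decay of $\psi$, obtained as in Corollary \ref{c:decay}, transfers to $\eta$), the integration by parts produces no boundary terms and one uses
\[
\int 2\, \Xi' (\Xi - z)\, \eta^2 = - 2 \int (\Xi - z)^2\, \eta\, \eta' \qquad \mbox{and} \qquad (\eta')^2 - 2\eta\eta' + m_0^2\, \eta^2 = (\eta' - \eta)^2 + (m_0^2 - 1)\, \eta^2\, .
\]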
We thus conclude that $m_0=1$ and $\eta'=\eta$, i.e. $\eta (t) = C e^t$, but again we see that this would violate $\psi\in L^2$.
\medskip
We next employ a suitable variation of the latter argument to show that
\begin{itemize}
\item[(C)] $(m_0, 0)$ and $(m_0, \Xi (-\infty))$ do not belong to $\overline{\mathscr{P}}$ if $m_0\geq 1$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $0$ or to $\Xi (- \infty)$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-7}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
We first focus on the case $z_j\to 0$. Normalize again the solutions so that $\psi_j (t)$ is asymptotic to $e^{m_jt}$ for $t$ negative, and to $C_j e^{-m_jt}$ for $t$ positive.
Observe that in this case we have $\frac{A}{\Xi}\in L^1 (]-\infty, N])$ for every $N$, while $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on any $]-\infty, N]$. We can thus apply Lemma \ref{l:ODE2} and conclude that the $\psi_j$ can be assumed to converge uniformly to a function $\psi$ on $]-\infty, N]$ for every $N$ and that likewise $\psi (t)$ is asymptotic to $e^{m_0 t}$ for $t$ negative.
As done previously we multiply the equation \eqref{e:eigenvalue-equation-7} by $\bar\psi_j$, integrate, and take the imaginary part. In particular we gain the inequality
\[
\int_b^\infty \frac{A}{(\Xi- {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq - \int_a^b \frac{A}{(\Xi- {\rm Re}\, z_j)^2+ ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Since $z_j\to 0$ and the range of $\Xi$ on $[a,b]$ is bounded away from $0$, we conclude that the right-hand side is uniformly bounded. In particular, passing to the limit we conclude that
\begin{equation}\label{e:info-L^1}
\Xi^{-2} A |\psi|^2 \in L^1 ([b, \infty[)\, .
\end{equation}
Observe however that
\[
\lim_{t\to\infty} \frac{A(t)}{\Xi (t)} = \lim_{t\to \infty} \frac{-{\bar\alpha} e^{-{\bar\alpha} t}}{c_1 e^{-2t}+\frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}} = - {\bar\alpha} (2-{\bar\alpha})\, .
\]
In particular we conclude that $\psi\in L^2$. Moreover, we can write
\[
\frac{A}{\Xi} = -{\bar\alpha} (2-{\bar\alpha}) + B
\]
for a function $B$ which belongs to $L^1 ([T, \infty[)$ for every $T$. We thus have that
\[
-\frac{d^2\psi}{dt^2} + (m_0^2 - {\bar\alpha} (2-{\bar\alpha})) \psi + B \psi = 0\, .
\]
Recalling that $0<{\bar\alpha} <1$ and $m_0\geq 1$, we have $m_0^2 - {\bar\alpha} (2-{\bar\alpha})>0$ and we can therefore apply Lemma \ref{l:ODE2} to conclude that, for $\bar m := \sqrt{m_0^2 - {\bar\alpha} (2-{\bar\alpha})}$
\[
\lim_{t\to \infty} e^{\bar m t} \psi (t)
\]
exists, is finite, and is nonzero (if it vanished, the uniqueness part of Lemma \ref{l:ODE2} would force $\psi\equiv 0$, contradicting its behavior at $-\infty$). Observe however that \eqref{e:info-L^1} forces $e^{{\bar\alpha} t} |\psi|^2\in L^1$, which in particular implies that $\bar m > \frac{{\bar\alpha}}{2}$.
We next argue as in the derivation of \eqref{e:energy-trick} to get
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi} \psi^2 = 0\, .
\]
We again set $\psi= \Xi \eta$ and observe that, by our considerations, $\eta$ decays exponentially at $-\infty$, while it is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$. We rewrite the latter identity as
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) = 0\, .
\]
We wish to integrate by parts the latter term to find
\begin{equation}\label{e:da-giustificare}
\int (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2)=0\, .
\end{equation}
Since we have exponential decay of $\eta$ at $-\infty$, while at $+\infty$ $\eta$ might grow, the latter integration by parts needs some careful justification. First of all we notice that $\Xi \Xi' \eta^2$ decays exponentially at $+\infty$ and thus, since the other two integrands are positive, we can write
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\lim_{N\to\infty} \int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) \, .
\]
Next, we can integrate by parts the second integrand (before passing to the limit) to write
\[
\int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\int_{-\infty}^N (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2) + \Xi^2 (N) \eta^2 (N)\, .
\]
Since $\Xi (N) \eta (N)$ converges to $0$ exponentially, passing to the limit we conclude \eqref{e:da-giustificare}.
As before this would imply $m_0=1$ and $\eta (t) = C e^t$, while $\eta$ cannot vanish identically (since $\psi$ does not) and we have already argued that it grows at most like $e^{({\bar\alpha}-\bar m) t}$ at $+\infty$, with ${\bar\alpha}-\bar m<1$.
We next tackle the case $z_j \to \Xi (-\infty)$. This time we observe that $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on $[T, \infty[$ for every $T$ and we thus normalize the functions $\psi_j$ so that $\psi_j (t)$ is asymptotic to $e^{-m_j t}$ for $t\to \infty$. Arguing as above, we assume that $\psi_j$ converges uniformly on all $[T, \infty[$ to a $\psi$ which is asymptotic to $e^{-m_0 t}$ and solves
\begin{equation}\label{e:eigenvalue-equation-9}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (-\infty)} \psi=0\, .
\end{equation}
As above we can assume that $\psi$ is real valued. Moreover, this time we infer (with the same method used to prove \eqref{e:info-L^1})
\begin{equation}\label{e:info-L1-2}
(\Xi-\Xi (-\infty))^{-2} A \psi^2 \in L^1 (\mathbb R)
\end{equation}
This time observe that, for $t$ sufficiently negative, $\frac{A(t)}{\Xi (t)- \Xi (-\infty)} = 8$. In particular we can explicitly solve the equation as
\[
\psi (t) = C_1 e^{-t\sqrt{m_0^2+8}} + C_2 e^{t\sqrt{m_0^2+8}}
\]
when $t$ is sufficiently negative.
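Let us check the constant: by (i) of Definition \ref{d:class-C} we have, for $t \leq M_0$, $\Xi (t) - \Xi (-\infty) = - c_0 e^{2t}$, $\Xi' (t) = - 2 c_0 e^{2t}$ and $\Xi'' (t) = - 4 c_0 e^{2t}$, so that \eqref{e:A-Xi} gives $A (t) = \Xi'' (t) + 2\, \Xi' (t) = - 8 c_0 e^{2t}$ and hence $\frac{A (t)}{\Xi (t) - \Xi (-\infty)} = 8$; the displayed formula is then just the general solution of $-\psi'' + (m_0^2 + 8) \psi = 0$.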
However, if $C_1$ were different from $0$, \eqref{e:info-L1-2} would not hold. In particular we infer exponential decay at $-\infty$. We can now argue as for the case $z_j\to 0$: we multiply \eqref{e:eigenvalue-equation-9} by $\psi$, integrate in time and perform an integration by parts to infer
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi - \Xi (-\infty)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (-\infty)} \psi^2 = 0\, .
\]
We then introduce $\eta$ so that $\psi = (\Xi-\Xi (-\infty)) \eta$. This time we infer exponential decay for $\eta$ at both $\infty$ and $-\infty$. Arguing as above we rewrite the last identity as
\[
\int ((\Xi- \Xi (-\infty))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (-\infty))^2 \eta^2)=0\, ,
\]
reaching again a contradiction.
\medskip
In order to complete the proof of the proposition we need to show
\begin{itemize}
\item[(D)] If $(m_0, \Xi (c)) \in \overline{\mathscr{P}}$ and $m_0> 1$, then either $c=a$ or $c=b$; moreover we have, respectively, $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$.
\end{itemize}
As before we argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in ]1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $\Xi (c)$ for some $c\not\in \{a,b\}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-10}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
This time we normalize the $\psi_j$'s so that
\begin{equation}\label{e:L2-normalization}
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) =1\, .
\end{equation}
By Lemma \ref{l:ODE2} we know that $\psi_j (t)$ is asymptotic to $\rho_j^\pm e^{\mp m_j t}$ for $t\to \pm \infty$, where $\rho_j^\pm \in \mathbb C \setminus \{0\}$. Since $\Xi (c)$ has a positive distance from both $0$ and $\Xi (-\infty)$, we can apply Lemma \ref{l:ODE2} to achieve uniform times $T_\pm$ with the properties that
\begin{align}
\left|\psi_j (t) - \rho_j^+ e^{-m_j t}\right|& \leq \frac{|\rho_j^+|}{2} e^{-m_j t} \qquad\qquad \forall t\geq T_+\, ,\label{e:exp-bound-1}\\
\left|\psi_j (t) - \rho_j^- e^{m_j t}\right| &\leq \frac{|\rho_j^-|}{2} e^{m_j t} \qquad\qquad \forall t\leq T_-\, .\label{e:exp-bound-2}
\end{align}
Combining the latter inequalities with \eqref{e:L2-normalization} we conclude that $\sup_j |\rho_j^\pm| < \infty$, and in particular $\{\psi_j\}_j$ is tight in $L^2$, i.e. for every $\varepsilon >0$ there is $N = N (\varepsilon)$ such that
\[
\sup_j \int_{|t|\geq N} |\psi_j|^2 < \varepsilon\, .
\]
The latter bound combined with \eqref{e:L2-normalization} implies, up to extraction of a subsequence which we do not relabel, the strong $L^2$ convergence of $\psi_j$ to a function $\psi$. Thanks to Sobolev embedding, the convergence is uniform on any compact set and, moreover, $\psi\in C^{1/2}$.
Arguing as for \eqref{e:imaginary-trick} we infer
\begin{equation}\label{e:imaginary-trick-2}
\int \frac{A}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 =0
\end{equation}
The latter bound implies $\psi (c)=0$. In fact first we
observe that $\frac{A}{|\Xi-z_j|^2} |\psi_j|^2$ converges in $L^1$ on $\mathbb R \setminus ]c-\delta, c+\delta[$ for every $\delta$. Choosing $\delta>0$ so that $|A (t) - A(c)| \leq \frac{|A(c)|}{2}$ for $t\in [c-\delta, c+\delta]$ and recalling that $|A(c)|>0$, we easily infer that
\[
\sup_j \int_{c-h}^{c+h} \frac{|\psi_j|^2}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty \qquad \forall h < \delta\, .
\]
If $\psi (c)$ were different from $0$, we could select a positive $h< \delta$ and a positive constant $c_0$ with the property that $|\psi (t)|^2 \geq 2c_0$ for all $t\in [c-h, c+h]$. In particular, for $j$ large enough we would infer $|\psi_j (t)|^2 \geq c_0$ for all $t\in [c-h, c+h]$. But then we would conclude
\[
\sup_j \int_{c-h}^{c+h} \frac{1}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty\, .
\]
Since the denominator converges to $(\Xi - \Xi (c))^2$, this is clearly not possible.
We now wish to pass to the limit in \eqref{e:eigenvalue-equation-10} and derive that
\begin{equation}\label{e:eigenvalue-equation-11}
- \psi'' + m_0^2 \psi + \frac{A}{\Xi-\Xi (c)} \psi =0\, ,
\end{equation}
where we notice that, thanks to $\psi (c)=0$ and the H\"older regularity of $\psi$, the function $\frac{A}{\Xi-\Xi (c)} \psi$ is indeed in $L^p$ for every $p<2$. We thus understand the equation distributionally.
The equation clearly passes to the limit outside the singularity $c$ of the denominator and thus we just need to pass it to the limit distributionally in some interval $]c-h,c+h[$. We write the third term as
\begin{align*}
\frac{A}{\Xi-z_j} \psi_j &= \left(\frac{d}{dt} \ln (\Xi-z_j) \right)\frac{A}{\Xi'} \psi_j\\
&= \frac{d}{dt} \left(\ln (\Xi-z_j) \frac{A}{\Xi'} \psi_j\right) - \ln (\Xi-z_j)\frac{A}{\Xi'} \psi'_j - \ln (\Xi-z_j) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi_j\, .
\end{align*}
Observe that we can define the logarithm unequivocally because $\Xi$ is real valued and ${\rm Im}\, z_j >0$.
Next, we remark that:
\begin{itemize}
\item[(i)] $\frac{A}{\Xi'}$ is smooth in $]c-h, c+h[$;
\item[(ii)] $\ln (\Xi-z_j)$ converges strongly \footnote{Since $\ln (\Xi-z_j)$ converges uniformly to $\ln (\Xi-\Xi(c))$ on any compact set which does not contain $c$, in order to reach the conclusion it suffices to prove a uniform $L^q$ bound on the functions, for every $q<\infty$. This can be easily concluded as follows. Choose an interval $[c-h, c+h]$ and recall that $\Xi$ does not change sign on it. For each $j$ large enough we then find a unique $c_j \in [c-h, c+h]$ such that $\Xi (c_j) = {\rm Re}\, z_j$. Using the mean value theorem we easily conclude that $|\Xi (t)-z_j|\geq |\Xi (t) - \Xi (c_j)|
\geq C^{-1} |t-c_j|$ for every $t\in [c-h, c+h]$, where $C^{-1} = \min \{|\Xi'(t)|: c-h\leq t \leq c+h\}$.} to $\ln (\Xi-\Xi(c))$ in $L^q (]c-h, c+h[)$ for every $q<\infty$;
\item[(iii)] $\psi_j' \to \psi'$ weakly in $L^2$, while $\psi_j\to \psi$ uniformly.
\end{itemize}
We thus conclude that $\frac{A}{\Xi-z_j} \psi_j$ converges distributionally to
\[
\frac{d}{dt} \left(\ln (\Xi-\Xi (c)) \frac{A}{\Xi'} \psi\right) - \ln (\Xi-\Xi(c)) \frac{A}{\Xi'} \psi' - \ln (\Xi-\Xi(c)) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi\, .
\]
Using now that $\psi\in W^{1,2}$ and $\psi (c)=0$ we can rewrite the latter distribution as
\[
\frac{A}{\Xi-\Xi (c)} \psi
\]
and hence conclude the validity of \eqref{e:eigenvalue-equation-11}.
Observe next that from \eqref{e:eigenvalue-equation-11} we infer $\psi''\in L^p$ for every $p< 2$, which in turn implies that $\psi$ is indeed $C^{1,\kappa}_{\text{loc}}$ for every $\kappa < \frac{1}{2}$. In turn this implies that $\frac{A}{\Xi-\Xi (c)} \psi$ is continuous at $c$, so that in particular $\psi$ is twice differentiable. We thus can argue as for the derivation of \eqref{e:energy-trick} and get
\begin{equation}\label{e:energy-trick-4}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-\Xi (c)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (c)} \psi^2 = 0\, .
\end{equation}
Once again we can set $\psi = (\Xi-\Xi (c)) \eta$ and observe that $\eta\in W^{1,2}$, to rewrite the latter identity as
\[
\int ((\Xi- \Xi (c))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (c))^2 \eta^2)=0\, ,
\]
inferring that $\eta=0$.
We thus have concluded that $\psi$ vanishes identically, but this is not yet a contradiction since the normalization \eqref{e:L2-normalization} and the strong $L^2$ convergence do not ensure that $\psi$ is nontrivial. In order to complete our argument, note first that, by the monotonicity of $\Xi$, for each $j$ large enough there is a unique $c_j$ such that $\Xi (c_j) = {\rm Re}\, z_j$. We then multiply the equation \eqref{e:eigenvalue-equation-10} by $\bar \psi_j - \overline{\psi_j (c_j)}$ and integrate to obtain
\[
\int \left(|\psi_j'|^2 + m_j^2 \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) + \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})\right) = 0\, .
\]
Note that $c_j$ must converge to $c$ and that the integrals
\[
\int \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\]
converge to $0$ because $\psi_j - \psi_j (c_j)$ converges to $0$ uniformly and, thanks to the uniform exponential decay of the $\psi_j$, the latter are uniformly bounded in $L^1$. For the same reason the first integral in the sum
\begin{equation}
\label{e:up}\int_{|t-c|\geq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) +\int_{|t-c|\leq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\end{equation}
converges to $0$ for every fixed $h$. On the other hand, $|\frac{A (t)}{\Xi (t)-z_j}| |\psi_j (t) - \psi_j (c_j)|\leq C |t-c_j|^{-1/2}$ and thus the second integral in \eqref{e:up}
converges to $0$ as well, since $\psi_j$ converges to $0$ uniformly on $[c-h, c+h]$ and $|t-c_j|^{-1/2}$ is integrable there. We thus conclude that the $L^2$ norm of $\psi'_j$ converges to $0$ as well. Since $\psi_j \to \psi \equiv 0$ strongly in $L^2$, this contradicts the normalization \eqref{e:L2-normalization}.
\section{Proof of Proposition \ref{p:5-7}: Part I}
We set $m_0=m_a$, $z_0 = \Xi (a)$, and
we fix a $\psi_0$ solution of
\[
-\frac{d^2\psi_0}{dt^2} + m_0^2 \psi_0 + \frac{A}{\Xi-z_0} \psi_0 = 0
\]
with $L^2$ norm equal $1$. Since the operator is self-adjoint we will indeed assume that $\psi_0$ is real. We then define the projector $P_0: L^2 (\mathbb R; \mathbb C) \to \{\kappa \psi_0:\kappa \in \mathbb C\}$ as
\[
P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0\, .
\]
Observe that $P_0$ is self-adjoint.
Next,
in a neighborhood of $(m_0, z_0)$ we will look for solutions of \eqref{e:eigenvalue-equation-3} by solving
\begin{equation}\label{e:Lagrange}
\left\{
\begin{array}{l}
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi + P_0 (\psi) = \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
which we can rewrite as
\begin{equation}\label{e:Lagrange-2}
\left\{
\begin{array}{l}
-\psi'' + m_0^2 \psi + \frac{A}{\Xi-z_0} \psi + P_0 (\psi) = A \left(((\Xi-z_0)^{-1} - (\Xi-z)^{-1}) \psi\right) + (m_0^2-m^2)\psi + \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
Next we observe that the operator $-\frac{d^2}{dt^2} + m_0^2$, considered as a closed unbounded self-adjoint operator in $L^2$ (with domain $W^{2,2}$) has an inverse $\mathcal{K}_{m_0}:L^2 \to L^2$ which is a bounded operator. We thus rewrite \eqref{e:Lagrange-2} as
\begin{equation}\label{e:Lagrange-3}
\left\{
\begin{array}{ll}
\underbrace{\psi + \mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_0} \psi + P_0 (\psi)\right)}_{=: T (\psi)}\\
\qquad\qquad \qquad= \underbrace{\mathcal{K}_{m_0}
\left(\left(A \left((\Xi-z_0)^{-1} - (\Xi-z)^{-1}\right) + (m_0^2 -m^2)\right) \psi\right)}_{=:- \mathcal{R}_{m,z} (\psi)} +
\mathcal{K}_{m_0} (\psi_0)\\ \\
\langle \psi, \psi_0 \rangle =1\, .
\end{array}
\right.
\end{equation}
The proof of Proposition \ref{p:5-7} will then be broken into two pieces. In this section we will show the first part, which we can summarize in the following
\begin{lemma}\label{l:solve-for-psi}
For every $\mu>0$, if $(m,z)$ is sufficiently close to $(m_0, z_0)$ and ${\rm Im}\, z\geq \mu |{\rm Re}\, (z-z_0)|$, then there is a unique $\psi= \psi (m,z) \in L^2 (\mathbb R)$ solving
\begin{equation}\label{e:solve-for-psi}
T (\psi) + \mathcal{R}_{m,z} (\psi) = \mathcal{K}_{m_0} (\psi_0)\, .
\end{equation}
\end{lemma}
Before coming to its proof we single out two important ingredients.
\begin{lemma}\label{l:invert-T}
$T$ is a bounded operator with bounded inverse on the spaces $L^2$ and $C^\sigma$, for any $\sigma \in ]0,1[$.
\end{lemma}
\begin{proof} Recall that the operator $\mathcal{K}_m$ is given by the convolution with $\frac 1{2m} e^{-m|\cdot|}$. We prove that $T$ is a bounded operator with bounded inverse in the spaces $L^2 (\mathbb R)$ and $C^\sigma (\mathbb R)$\footnote{Observe that $\mathcal{K}_m$ is well-defined on $C^\sigma$, and so are the multiplication by $\frac{A}{\Xi-z_0}$ (which is a smooth function with bounded derivatives) and the operator $P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0$: for the latter we just need to check that $\psi\overline{\psi_0}$ is integrable, which follows from the exponential decay of $\psi_0$, cf. Corollary \ref{c:decay}.}.
Recall that $\frac{A}{\Xi-z_0}= \frac{A}{\Xi-\Xi (a)}$ is indeed a bounded smooth function (thanks to the structural assumptions on $\Xi$: in particular recall that $\Xi' (a)\neq 0$ and $A(a) =0$, which implies that $\frac{A}{\Xi-\Xi(a)}$ is in fact smooth at $a$). Moreover the function and its derivatives decay exponentially at $\pm \infty$. It follows therefore that $\psi \mapsto \mathcal{K}_{m_0} (\frac{A}{\Xi-z_0} \psi + P_0 (\psi))$ is a compact operator, both on $L^2$ and on $C^\sigma$. Thus $T$ is a Fredholm operator with index $0$. We thus just need to check that the kernel is $0$ in order to conclude that it is invertible with bounded inverse. In both cases we need to show that the equation
\begin{equation}\label{e:kernel-T}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi) = 0
\end{equation}
has only the trivial solution. Observe that the kernel $V$ of the operator $\psi \mapsto -\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi$ is $1$-dimensional by Lemma \ref{l:ODE2} and Corollary \ref{c:decay}. In particular $V$ is generated by $\psi_0$. Since the operator $P_0$ is the orthogonal projection onto $V$ and the operator $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ is self-adjoint, the kernel of $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)} + P_0$ in $L^2$ must be trivial: if $\psi$ belongs to it, then $\big(-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}\big)\psi = - P_0 (\psi)$ lies in $V$ and, by self-adjointness, is orthogonal to $V$, hence it vanishes; thus $\psi\in V$ while $P_0 (\psi) = 0$, i.e. $\psi\perp V$, so that $\psi = 0$.
In order to argue that the kernel is $0$ on $C^\sigma$ we apply a variation of the same idea: first we observe that if $\psi$ is a $C^\sigma$ solution of \eqref{e:kernel-T}, then $\frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi)$ is also in $C^\sigma$ and hence $\psi''\in C^\sigma$. Observe also that the operator is self-adjoint and thus we can assume that $\psi$ is real-valued. We then multiply both sides of \eqref{e:kernel-T} by $\bar \psi_0$, integrate by parts and use the fact that $\psi_0$ is in the kernel of the self-adjoint operator $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ to conclude that $\langle \psi, \psi_0\rangle =0$. But then
$\psi$ is a bounded solution of $-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)}\psi =0$. Given that $\frac{A}{\Xi-\Xi (a)} \psi$ is a product of an exponentially decaying function and a bounded function, we conclude that $-\frac{d^2\psi}{dt^2} + m_0^2 \psi$ is an exponentially decaying function $f$. We thus have $\psi = \mathcal{K}_{m_0} (f) + C_1 e^{-m_0t} + C_2 e^{m_0t}$ for two constants $C_1$ and $C_2$. However $\mathcal{K}_{m_0} (f)$ decays exponentially at both $\pm \infty$ and thus, given that $\psi$ is bounded, we must have $C_1=C_2=0$. In particular $\psi$ decays exponentially at both $\pm \infty$ and so it is an $L^2$ function. But we already saw that every $L^2$ solution is trivial.
\end{proof}
\begin{lemma}\label{l:Rmz-small}
For every constant $\mu>0$ we define the cone $C_\mu := \{z: {\rm Im} z \geq \mu |{\rm Re}\, (z-z_0)|\}$. Then
\begin{equation}
\lim_{z\in C_\mu, (m,z)\to (m_0, z_0)} \|\mathcal{R}_{m,z}\|_O = 0\, ,
\end{equation}
where $\|L\|_O$ is the operator norm of $L$
when considered as a bounded operator from $L^2$ to $L^2$.
\end{lemma}
\begin{proof} Clearly, it suffices to show that
\begin{equation}
\lim_{z\in C_\mu, z\to z_0} \|\mathcal{K}_{m_0} \circ (A/(\Xi-z) - A/(\Xi-z_0))\|_O = 0\, .
\end{equation}
We can rewrite the operator as
\[
\psi \mapsto \mathcal{K}_{m_0} \left(\frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi\right) \, .
\]
First of all observe that the operators
\[
\psi \mapsto L_z (\psi) = \frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi
\]
are bounded in the operator norm uniformly in $z\in C_\mu$ by a constant $M$. Moreover the adjoint operator, which is given by $L_z^* (\psi)= \frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)} \psi$, converges strongly in $L^2$ to $0$: indeed the functions $\frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)}$ are uniformly bounded and they converge to $0$ pointwise on $\mathbb R \setminus \{a\}$. We now use an argument entirely similar to that used in the proof of Lemma \ref{l:three}: given any $\varepsilon >0$ we fix an orthogonal projection $P_N$ onto a finite-dimensional subspace of $L^2$ with the property that $\|\mathcal{K}_{m_0}\circ P_N - \mathcal{K}_{m_0}\|_O$ is smaller than $\frac{\varepsilon}{2M}$. We then argue that for $|z-z_0|$ sufficiently small $P_N \circ L_z$ has operator norm smaller than $\frac{\varepsilon}{2}$. Having chosen an orthonormal basis $\psi_1, \ldots, \psi_N$ of the subspace onto which $P_N$ projects, we recall that
\[
P_N (\psi)= \sum_i \langle \psi_i, \psi\rangle \psi_i\, .
\]
Therefore our claim amounts to showing that
\[
|\langle \psi_i, L_z (\psi)\rangle|\leq \frac{\varepsilon}{2N}
\]
for $z$ sufficiently close to $z_0$ and every $\psi$ with $\|\psi\|_{L^2}\leq 1$. For the latter we use
\[
|\langle \psi_i, L_z (\psi)\rangle| = |\langle L_z^* (\psi_i), \psi \rangle|\leq \|L_z^* (\psi_i)\|_{L^2}\, .
\]
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:solve-for-psi}]
We rewrite the equation that we want to solve as
\[
\psi + T^{-1} \circ \mathcal{R}_{m,z} (\psi) = T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\, .
\]
Note that $P_0(\psi_0)=\psi_0$. Furthermore, since $\mathcal K_{m_0}$ is, by definition, the inverse operator of $-\frac{\mathrm d^2}{\mathrm dt^2}+m_0^2\operatorname{Id}$,
\begin{equation*}
\mathcal K_{m_0}^{-1}\left(\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right)\right) = -\psi_0''+m_0^2\psi_0+\frac{A}{\Xi-z_0}\psi_0 = 0.
\end{equation*}
Therefore,
\begin{equation*}
\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right) = 0.
\end{equation*}
In combination with the definition of $T$ in \eqref{e:Lagrange-3}, we get
\begin{equation*}
T(\psi_0) = \psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0+\psi_0\right)=\mathcal K_{m_0}(\psi_0),
\end{equation*}
in other words,
\begin{equation}\label{e:T-1K}
T^{-1} \circ \mathcal{K}_{m_0} (\psi_0) = \psi_0\, .
\end{equation}
Therefore, \eqref{e:solve-for-psi} becomes
\begin{equation}\label{e:to-Neumann}
(\operatorname{ Id} + T^{-1} \circ \mathcal{R}_{m,z}) (\psi) = \psi_0\, ,
\end{equation}
so the existence of a unique solution is guaranteed as soon as $\|T^{-1} \circ \mathcal{R}_{m,z}\|_{O} < 1$.
\end{proof}
\begin{remark}\label{r:Neumann-series}
In the remaining part of the proof of Proposition \ref{p:5-7} we will take advantage of the representation of $\psi$ as a function of $\psi_0$ through the Neumann series coming from \eqref{e:to-Neumann}. More precisely, our proof of Lemma \ref{l:solve-for-psi} leads to the following representation:
\begin{equation}\label{e:Neumann-series}
\psi = \psi_0 - (T^{-1} \circ \mathcal{R}_{m,z}) (\psi_0) + \sum_{k=2}^\infty (-1)^k (T^{-1}\circ \mathcal{R}_{m,z})^k (\psi_0)\, .
\end{equation}
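In particular, whenever $\|T^{-1}\circ \mathcal{R}_{m,z}\|_O\leq \frac{1}{2}$ (which, by Lemma \ref{l:Rmz-small}, holds for $(m,z)$ sufficiently close to $(m_0,z_0)$ in the cone $C_\mu$), summing the geometric series gives the quantitative bound
\begin{equation*}
\|\psi - \psi_0\|_{L^2} \leq \sum_{k=1}^\infty \|T^{-1}\circ \mathcal{R}_{m,z}\|_O^k\, \|\psi_0\|_{L^2} \leq 2\, \|T^{-1}\circ \mathcal{R}_{m,z}\|_O\, \|\psi_0\|_{L^2}\, .
\end{equation*}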
\end{remark}
\section{Proof of Proposition \ref{p:5-7}: Part II}\label{s:5-7-part-II}
We now complete the proof of Proposition \ref{p:5-7}. The positive parameter $\mu>0$ in Lemma \ref{l:solve-for-psi} will have to be chosen sufficiently small: its choice will be specified in a few paragraphs, while for the moment we assume it to be fixed. We set $m_0 = m_a$ and $z_0 = \Xi (a)$. Thus, for each $(m_0+h ,z)$ in a set
\[
U_{\delta, \mu} := \{|h|< \delta, |z-z_0|< \delta, {\rm Im} z > \mu |{\rm Re}\, (z-z_0)|\}
\]
we know that there is a solution $\psi = \psi (m_0+h,z)$ of \eqref{e:solve-for-psi} which moreover satisfies the expansion \eqref{e:Neumann-series}.
We then define the function
\begin{equation}
H (h,z) := \langle \psi (m_0+h,z), \psi_0\rangle\, ,
\end{equation}
and obviously we are looking for those $z$ which solve
\begin{equation}\label{e:what-we-want-to-do}
H (h,z) =1
\end{equation}
The main point of our analysis is the following
\begin{lemma}\label{l:will-apply-Rouche}
The function $H$ is holomorphic in $z$ and moreover
\begin{equation}\label{e:expansion}
H (h,z) = 1 - 2m_a h + c (a) (z-z_0) + o (|z-z_0| + |h|)
\end{equation}
where $c(a)$ is a complex number with ${\rm Im}\, c(a) > 0 $.
\end{lemma}
Given Lemma \ref{l:will-apply-Rouche}, consider now $\xi (h)$ which we obtain by solving $c(a) (\xi-z_0)= 2m_a h$, namely,
\[
\xi (h) = \frac{2m_a h}{c(a)} +z_0 = \frac{2m_a h}{|c(a)|^2} \overline{c(a)} +z_0\, .
\]
The idea behind the latter definition is that, if the term $o (|z-z_0| + |h|)$ vanished identically, $z = \xi (h)$ would be the solution of $H (h,z)=1$. Even though $o (|z-z_0| + |h|)$ does not vanish, we nonetheless expect that the solution $z$ of $H (h,z)=1$ is relatively close to $\xi (h)$.
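Indeed, inserting $z = \xi (h)$ in \eqref{e:expansion} and using $c(a) (\xi (h) - z_0) = 2 m_a h$ together with $|\xi (h) - z_0| = \frac{2m_a |h|}{|c(a)|}$, we find
\[
H (h, \xi (h)) = 1 - 2 m_a h + c(a) (\xi (h) - z_0) + o (|\xi (h) - z_0| + |h|) = 1 + o (|h|)\, .
\]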
Since ${\rm Im}\, c(a)>0$, $\xi (h)$ has positive imaginary part if $h<0$. In particular we have
\[
{\rm Im}\, \xi (h) \geq \gamma |h| \qquad \forall h < 0\, .
\]
where $\gamma$ is a positive constant: indeed ${\rm Im}\, \xi (h) = \frac{2m_a h\, {\rm Im}\, \overline{c(a)}}{|c(a)|^2} = \frac{2m_a |h|\, {\rm Im}\, c(a)}{|c(a)|^2}$ for $h<0$, so that we can take $\gamma = \frac{2m_a {\rm Im}\, c(a)}{|c(a)|^2}$. We then rewrite
\[
H (h, z) = 1 + c (a) (z-\xi (h)) + \underbrace{o (|\xi (h)-z_0| + |h|)}_{=: r(h)} + o (|z-\xi (h)|)\, .
\]
Consider the disk $D_h := \{|z-\xi (h)| \leq 2 \beta |h|\}$, for a suitably chosen constant $\beta>0$. We will show below that, adjusting the constants $\mu$ and $\beta$ suitably, the disk is contained in the domain of the holomorphic function $H (h, \cdot)$. Leaving this aside for the moment, by Rouch\'e's theorem, if we choose $h$ sufficiently small the set $H(h, D_h)$ contains a disk of radius $|c(a)|\beta |h|$ centered at $1+ r (h)$. But then for $h$ sufficiently small we also have $|r(h)| \leq \frac{|c(a)|\beta |h|}{2}$ and so we conclude that $1\in H (h, D_h)$, namely that there is a point $z (h)$ in the disk $D_h$ which is mapped to $1$ by $H (h, \cdot)$. This would then complete the proof of Proposition \ref{p:5-7} if we were able to prove that ${\rm Im}\, z (h) >0$. We therefore need to show that $D_h$ is in the domain of $H (h, \cdot)$, namely,
\[
{\rm Im}\, z \geq \mu |{\rm Re}\, (z-z_0)|\, \qquad \forall z\in D_h\, .
\]
We first estimate
\[
{\rm Im}\, z \geq {\rm Im}\, \xi (h) - 2 \beta |h| \geq (\gamma- 2 \beta) |h|\, .
\]
Then
\begin{equation}\label{e:inequality-101}
|{\rm Re}\, (z-z_0)| \leq |\xi (h)-z_0| + |z-\xi (h)| \leq (|c(a)| + 2 \beta) |h|\, .
\end{equation}
We thus conclude that
\begin{equation}\label{e:inequality-102}
{\rm Im}\, z(h) \geq \frac{\gamma-2\beta}{|c(a)|+2\beta} |{\rm Re}\, (z (h)-z_0)|\, .
\end{equation}
Thus it suffices to choose $\beta = \frac{\gamma}{3}$ and $\mu = \frac{\gamma}{3 |c(a)|+\gamma}$. This guarantees at the same time the existence of a solution and the fact that $z (h)$ has positive imaginary part when $h<0$ (which results from combining \eqref{e:inequality-101} and \eqref{e:inequality-102}).
In order to complete the proof of Proposition \ref{p:5-7} we therefore just need to show Lemma \ref{l:will-apply-Rouche}.
\begin{proof}[Proof of Lemma \ref{l:will-apply-Rouche}]
In order to show holomorphicity we just need to show that, for each fixed $h$,
\[
z\mapsto \sum_{k=0}^\infty (- T^{-1} \circ \mathcal{R}_{m,z})^k
\]
is holomorphic. Since the series converges in the operator norm, it suffices to show that each map $z\mapsto (-T^{-1} \circ \mathcal{R}_{m,z})^k$ is holomorphic for every $k$, for which indeed it suffices to show that $z\mapsto \mathcal{R}_{m,z}$ is holomorphic. This is however obvious from the explicit formula. We therefore now come to the Taylor expansion \eqref{e:expansion}.
\medskip
{\bf Step 1.} We will show here that
\begin{equation}\label{e:small-in-Csigma}
\|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)} \leq C (\sigma) (|h| + |z-z_0|)\,
\end{equation}
for every $\sigma\in ]0,1[$, where $\|L\|_{\mathcal{L} (C^\sigma)}$ is the operator norm of a bounded linear operator $L$ on $C^{\sigma}$.
The estimate will have the following consequence. First of all using \eqref{e:Neumann-series} and $\|\psi_0\|_{L^2}^2 =1$ we expand
\begin{equation}\label{e:Taylor-2}
H (h,z) = 1 - \langle T^{-1} \circ \mathcal{R}_{m_0+h,z} (\psi_0), \psi_0\rangle
+ \underbrace{\sum_{k=2}^\infty \langle (-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0), \psi_0\rangle}_{=: R_1 (z,h)}\, .
\end{equation}
Hence using \eqref{e:small-in-Csigma} we estimate
\begin{align}
|R_1 (z,h)| & \leq \sum_{k=2}^\infty \|(-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0)\|_\infty \|\psi_0\|_{L^1}\nonumber\\
&\leq C \sum_{k=2}^\infty (\|T^{-1}\|_{\mathcal{L} (C^\sigma)} \|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)})^k \|\psi_0\|_{C^\sigma}\|\psi_0\|_{L^1} = o (|h|+|z-z_0|)\, ,\label{e:resto-1}
\end{align}
for some fixed $\sigma$.
In order to show \eqref{e:small-in-Csigma} we write
\[
\mathcal{R}_{m_0+h,z} (\psi) = (z-z_0) \mathcal{K}_{m_0} \left(\frac{1}{\Xi-z} \left(\frac{A}{\Xi-z_0} \psi\right)\right) + (2m_0 h +h^2) \mathcal{K}_{m_0} (\psi)\, .
\]
Since $\frac{A}{\Xi-z_0}$ is smooth, it suffices to show that the operators $B_z:= \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ are uniformly bounded in $\mathcal{L} (C^\sigma)$. We first fix a smooth cut-off function $\varphi \in C^\infty_c (]a-2, a+2[)$ which equals $1$ on $[a-1,a+1]$ and write
\[
B_z= B_z^1 + B_z^2
:= \mathcal{K}_{m_0} \circ \left(\frac{1-\varphi}{\Xi-z}\right)+\mathcal{K}_{m_0} \circ \left(\frac{\varphi}{\Xi-z} \right)\, .
\]
But since $(1-\varphi)/(\Xi-z)$ enjoys a uniform bound in $C^k$, it is easy to conclude that $\|B^1_z\|_{\mathcal{L} (C^\sigma)}$ is bounded uniformly in $z$. We thus need to bound
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z} \psi (s)\, ds\, .
\end{align*}
We first bound $\|B^2_z (\psi)\|_{L^\infty}$ in terms of $\|\psi\|_{C^\sigma}$. We write $z= x+iy$ and, since $x$ is close to $\Xi (a)$, we select the unique $a'$ such that $\Xi (a')=x$ and write
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s) (\psi (s)-\psi (a'))}{(\Xi (s) -\Xi (a')) - iy} \, ds}_{=: I_1 (t)}
+ \frac {\psi(a')}{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z}\, ds}_{=: I_2 (t)}
\end{align*}
Writing $\frac{1}{\Xi -z} = \frac{1}{\Xi'}\frac{d}{ds} \ln (\Xi -z)$ we can integrate by parts to get
\begin{align*}
I_2 (t) &= - \underbrace{\int m_0 \frac{t-s}{|t-s|} e^{-m_0 |t-s|} (\Xi' (s))^{-1} \ln (\Xi (s)-z) \varphi (s)\, ds}_{=:I_{2,1} (t)}\\
&\qquad -
\underbrace{\int e^{-m_0 |t-s|} \ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\, ds}_{=: I_{2,2} (t)}
\end{align*}
and use the uniform bound for $\ln (\Xi (s)-z)$ in $L^1 ([a-2,a+2])$ to conclude that $|I_{2,1}|$ and $|I_{2,2}|$ are both bounded uniformly. As for $I_1$, note that, on any compact interval $K$ around $a'$, we have, since $\Xi'$ is continuous and $\Xi'<0$,
\begin{equation*}
C(K):=\inf_{x\in K} |\Xi'(x)| = -\max_{x\in K} \Xi'(x) >0.
\end{equation*}
Therefore by the mean value theorem, for all $s\in K$, there exists a $\iota = \iota(s)\in K$ such that
\begin{equation*}
\abs{\Xi(s)-\Xi(a')-iy}\geq \abs{\Xi(s)-\Xi(a')}= \abs{s-a'}\abs{ \Xi'(\iota)}\ge \abs{s-a'} C(K).
\end{equation*}
By the definition of the Hölder semi-norm, we thus have, for all $s\in K$,
\[
\left|\frac{\psi (s)- \psi (a')}{\Xi (s) - \Xi (a') - iy}\right| \leq \frac{\|\psi\|_{C^\sigma}}{C(K)|s-a'|^{1-\sigma}},
\]
which is integrable. Furthermore, outside of $K$ the integrand of $I_1$ is bounded and decays exponentially, therefore one can uniformly bound $I_1$.
We next wish to bound the seminorm
\[
[B^2_z (\psi)]_\sigma:= \sup_{t\neq t'} \frac{|B^2_z (\psi) (t) - B^2_z (\psi) (t')|}{|t-t'|^\sigma}\, .
\]
We write
\[
B^2_z (\psi) (t) - B^2_z (\psi) (t') = \frac{1}{2m_0} \big( (I_1 (t) - I_1 (t')) + \psi (a') (I_2 (t) - I_2 (t'))\big)\, .
\]
Using that $|e^{-m_0 |t-s|} - e^{-m_0 |t'-s|}|\leq C |t-t'|$ we can bound
\[
|I_1 (t) - I_1 (t')| \leq C |t-t'| \int |\varphi (s)| \frac{|\psi (s)-\psi (a')|}{|\Xi (s) - \Xi (a')|}\, ds
\leq C \|\psi\|_{C^\sigma} |t-t'|\, .
\]
Similarly we can write
\[
|I_{2,2} (t) - I_{2,2} (t')| \leq C |t-t'| \int \left|\ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\right|\, ds \leq C |t-t'|\, .
\]
Next denoting the function $(\Xi' (s))^{-1} \varphi (s) \ln (\Xi (s) -z)$ by $B (s)$ we assume $t> t'$ and write further
\begin{align*}
I_{2,1} (t) - I_{2,1} (t')&= m_0 \Bigg(\underbrace{\int_t^\infty e^{-m_0 (s-t)} B(s)\, ds - \int_{t'}^\infty e^{-m_0(s-t')} B(s)\, ds}_{=: J_+(t,t')}\Bigg)\\
&\qquad - m_0 \Bigg(\underbrace{\int_{-\infty}^t e^{-m_0 (t-s)} B(s)\, ds - \int_{-\infty}^{t'} e^{-m_0 (t'-s)} B(s)\, ds}_{=: J_- (t,t')}\Bigg)\, .
\end{align*}
Then we choose $p=\frac{1}{\sigma}$, let $p'$ be the dual exponent and estimate
\begin{align*}
|J_+ (t,t')| &\leq C |t-t'| \int_t^\infty |B(s)|\, ds + \int_{t'}^t |B (s)|\, ds\\
&\leq C |t-t'| \|B\|_{L^1} + |t-t'|^\sigma \|B\|_{L^{p'}}\, .
\end{align*}
A similar estimate for $J_- (t,t')$ finally shows the existence of a constant $C$ such that
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} \left(|t-t'|+|t-t'|^\sigma\right)\, .
\]
Clearly this implies
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma \qquad \mbox{if $|t-t'|\leq 1$.}
\]
On the other hand we can trivially bound
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')| \leq 2 \|B^2_z (\psi)\|_\infty \leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma
\quad\mbox{if $|t-t'|\geq 1$.}
\]
\medskip
{\bf Step 2.} In this second step we compute
\begin{align*}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
\langle T^{-1} \circ \mathcal{K}_{m_0} \left(A ((\Xi-z)^{-1} - (\Xi-z_0)^{-1})\psi_0\right), \psi_0\rangle\\
&\qquad
+ (2m_0 h + h^2) \langle T^{-1} \circ \mathcal{K}_{m_0} (\psi_0), \psi_0\rangle\, .
\end{align*}
Recalling \eqref{e:T-1K} (and using that both $T^{-1}$ and $\mathcal{K}_{m_0}$ are self-adjoint) we rewrite the expression as
\begin{align}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
(z-z_0) \langle T^{-1}\circ \mathcal{K}_{m_0} \big( A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0\big), \psi_0\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \langle A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0, T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \underbrace{\langle A (\Xi-z)^{-1} (\Xi -z_0)^{-1} \psi_0, \psi_0 \rangle}_{=: G (z)} + 2m_a h + h^2\label{e:Taylor-3}\, .
\end{align}
We thus want to show that the following limit exists and to compute its imaginary part:
\[
- c (a) := \lim_{{\rm Im}\, z >0, z\to \Xi (a)} G (z) =
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} \int \frac{1}{\Xi (s)-z} |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}\, ds\, .
\]
Observe indeed that inserting $G(z) = - c(a) + o (1)$ in \eqref{e:Taylor-3} and taking into account \eqref{e:Taylor-2} and \eqref{e:resto-1} we conclude that \eqref{e:expansion} holds.
In order to compute $c(a)$ we observe first that the function $\phi (s) := |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}$ is smooth and decays exponentially. We thus rewrite
\[
G (z) = \int \frac{1}{\Xi (s)-z} \phi (s)\, ds\, .
\]
Next we decompose $z$ into its real and imaginary part as $z = x + iy$ and observe that
\begin{align*}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Re}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds
\end{align*}
Here we are only interested in showing that the limit exists and we thus fix a cut-off function $\varphi\in C^\infty_c (]a-2, a+2[)$, identically $1$ on $[a-1, a+1]$ and split the integral into
\[
{\rm Re}\, G (z) = \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) \varphi (s)\, ds +
\int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) (1-\varphi (s))\, ds\, .
\]
The second integral has a limit, while in order to show that the first has a limit we write
\[
\frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} = \frac{1}{2\Xi' (s)} \frac{d}{ds} \ln ((\Xi(s)-x)^2 + y^2)\, .
\]
We then integrate by parts and use the fact that $\ln ((\Xi (s)-x)^2 +y^2)$ converges to $2 \ln |\Xi(s)-\Xi (a)|$ strongly in $L^q ([a-2,a+2])$ for every $q$ to infer the existence of the limit of the first integral.
As for the imaginary part we write instead
\begin{align}\label{e:arctan-integral}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Im}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{y}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds\, .
\end{align}
We wish to show that the latter integral converges to
\begin{equation}\label{e:arctan-integral-2}
I = \phi (a) \int_{\mathbb R} \frac{ds}{(\Xi' (a))^2 s^2 +1} = \frac{\pi \phi (a)}{|\Xi' (a)|}\, .
\end{equation}
On the other hand $\phi (a) = |\psi_0 (a)|^2 A' (a) (\Xi' (a))^{-1}$.
Since $A' (a) > 0$ and $\Xi' (a)<0$, we conclude that $ c (a)$ exists and it is a complex number with positive imaginary part, which completes the proof of the lemma.
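For the reader's convenience we also record the explicit value which the computation yields (combining \eqref{e:arctan-integral-2} with the formula for $\phi (a)$ above):
\[
{\rm Im}\, c (a) = - I = - \frac{\pi\, |\psi_0 (a)|^2 A' (a)}{\Xi' (a)\, |\Xi' (a)|} = \frac{\pi\, |\psi_0 (a)|^2 A' (a)}{(\Xi' (a))^2} > 0\, .
\]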
It remains to show the convergence of \eqref{e:arctan-integral} to \eqref{e:arctan-integral-2}. First observe that for each $x$ sufficiently close to $\Xi (a)$ there is a unique $a' = \Xi^{-1} (x)$ such that $\Xi (a')=x$. Changing variables ($s$ becomes $a'+s$), the integral in \eqref{e:arctan-integral} becomes
\begin{equation}
\int \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds\,
\end{equation}
and we wish to show that its limit is $I$ as $(a',y)\to (a,0)$.
Next, fix any $\delta>0$ and observe that
\[
\lim_{y\to 0} \int_{|s|\geq \delta} \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds=0
\]
uniformly in $a' \in [a-1, a+1]$. We therefore define
\[
I (\delta, a', y) := \int_{-\delta}^\delta \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds
\]
and we wish to show that, for every $\varepsilon >0$ there is a $\delta>0$ such that
\begin{equation}\label{e:arctan-integral-3}
\limsup_{(a',y) \downarrow (a,0)} \left| I (\delta, a', y) - I\right| \leq C \varepsilon\, ,
\end{equation}
where $C$ is a geometric constant.
We rewrite
\[
I (\delta, a', y) = \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{\phi (a'+ys)}{y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 +1}\, ds\, .
\]
Fix now $\varepsilon$ and observe that, since $\Xi'$ and $\phi$ are continuous, if $\delta$ is chosen sufficiently small, then
\begin{align}
&((\Xi' (a))^2 - \varepsilon^2) s^2 \leq y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 \leq ((\Xi' (a))^2 + \varepsilon^2) s^2\\
& |\phi (a' + ys) - \phi (a)| \leq \varepsilon\, .
\end{align}
for all $|a'-a|<\delta$ and $y |s| \leq \delta$. Choosing $\varepsilon>0$ so that $\varepsilon \leq \frac{|\Xi' (a)|}{2}$ we easily see that, when $|a'-a| < \delta$, we have
\[
\left|I (\delta, a', y) - \phi (a) \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{ds}{(\Xi' (a))^2 s^2 +1}\right| \leq C \varepsilon\, .
\]
In particular, as $y\downarrow 0$, we conclude \eqref{e:arctan-integral-3}.
\end{proof}
\section{Proof of Proposition \ref{p:almost-final}}
We reduce the proof of Proposition \ref{p:almost-final} to the following lemma.
\begin{lemma}\label{l:almost-final-2}
Consider $G:= \{m> 1, m \neq m_a, m_b : \mathscr{U}_m \neq \emptyset\}$. Then $G$ is relatively open and relatively closed in $]1, \infty[\setminus \{m_a, m_b\}$.
\end{lemma}
Proposition \ref{p:almost-final} is an obvious consequence of the latter lemma and of Proposition \ref{p:5-7}: Lemma \ref{l:almost-final-2} implies that $G$ is the union of connected components of $]1, \infty[\setminus \{m_a, m_b\}$. On the other hand the connected component $]m_b, m_a[$ intersects $G$ because of Proposition \ref{p:5-7} and thus it is contained in $G$.
We thus complete the proof of Proposition \ref{p:almost-final} by showing Lemma \ref{l:almost-final-2}.
\begin{proof}[Proof of Lemma \ref{l:almost-final-2}] We start with some preliminary considerations. Fix an interval $[c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$.
Recalling Proposition \ref{p:all-m} we know that, since the operator norm of $\mathcal{L}_m$ is bounded uniformly in $m\in [c,d]$,
\begin{itemize}
\item[(a)] There is $R>0$ such that $\mathcal{U}_m\subset B_R (0)$ for all $m\in [c,d]$.
\end{itemize}
However it also follows from Proposition \ref{p:3+4} that
\begin{itemize}
\item[(b)] There is a $\delta >0$ such that $\mathcal{U}_m\subset \{{\rm Im}\, z > \delta\}$.
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $G$ is relatively closed. To that end we fix a sequence $m_j \to m\in ]1, \infty[\setminus \{m_a, m_b\}$ such that $m_j$ belongs to $G$. Without loss of generality we can assume $\{m_j\}\subset [c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$. For each $m_j$ we can then consider $z_j\in \mathscr{U}_{m_j}$, which by (a) and (b) we can assume to converge to some $z\in \mathbb C$ with positive imaginary part. We then let $\psi_j$ be a sequence of nontrivial elements in $L^2$ such that
\begin{equation}\label{e:eigenvalue-equation-21}
-\psi_j'' + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, ,
\end{equation}
and normalize them so that $\|\psi_j\|_{L^2}=1$.
Since ${\rm Im}\, z_j \geq \delta >0$, the functions $\frac{A}{\Xi -z_j}$ enjoy uniform bounds in the spaces $L^1$ and $C^k$. We can then argue as in Section \ref{s:3+4} to find that
\begin{itemize}
\item[(i)] $\|\psi_j'\|_{L^2}$ enjoy a uniform bound;
\item[(ii)] There are uniformly bounded nonzero constants $\{C^\pm_j\}$ with the property that $\psi_j$ is asymptotic to $C^\pm_j e^{\mp m_j t}$ at $\pm \infty$;
\item[(iii)] There is a $T_0>0$ independent of $j$ with the property that
\[
|\psi_j (t) - C^\pm_j e^{\mp m_j t}| \leq \frac{|C^\pm_j|}{2} e^{\mp m_j t} \qquad \forall \pm t > T_0\, .
\]
\end{itemize}
These three properties together imply that a subsequence, not relabeled, converges strongly in $L^2$ to some $\psi$. Passing into the limit in \eqref{e:eigenvalue-equation-21} we conclude that
\[
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\]
This shows that $z\in \mathscr{U}_m$, i.e. that $m \in G$.
\medskip
{\bf Step 2.} Here we show that $G$ is relatively open. To that end we consider some sequence $m_j \to m \in ]1, \infty[\setminus \{m_a, m_b\}$ with the property that $m_j \not\in G$ and we show that $m\not \in G$. By (a) and (b) above, it suffices to show that the domain
\[
\Delta := \{z \,:\, |z|< R,\ {\rm Im}\, z > \delta\}
\]
does not contain any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$. Observe first that, since we know that it does not intersect $\gamma = \partial \Delta$, the distance between $\gamma$ and any element in ${\rm spec}\, m^{-1} \mathcal{L}_m$ is larger than a positive constant $\varepsilon$. Recalling that the spectrum on the upper half complex space is discrete, we have that
\[
P_m := \int_\gamma (m^{-1} \mathcal{L}_m -z)^{-1}\, dz
\]
is a projection on a finite-dimensional space which contains all eigenspaces of the elements $z\in {\rm spec}\, m^{-1} \mathcal{L}_m\cap \Delta = \mathcal{U}_m$. And since all such elements belong to the discrete spectrum, $\mathscr{U}_m = \emptyset$ if and only if $P_m = 0$. On the other hand
\[
P_{m_j} := \int_\gamma (m_j^{-1} \mathcal{L}_{m_j} -z)^{-1}\, dz
\]
equals $0$ precisely because $m_j \not \in G$. We thus just need to show that $P_{m_j}$ converges to $P_m$ to infer that $P_m =0$, and hence that $m\not\in G$. The latter follows from the following observations:
\begin{itemize}
\item[(i)] Since $\gamma$ is a compact set and does not intersect the spectrum of $m^{-1} \mathcal{L}_m$, there is a constant $M$ such that $\|(m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq M$ for all $z\in \gamma$;
\item[(ii)] $\mathcal{L}_{m_j}$ converges to $\mathcal{L}_m$ in the operator norm;
\item[(iii)] Writing
\[
(m_j^{-1} \mathcal{L}_{m_j} - z)^{-1} = ({\rm Id} + (m^{-1} \mathcal{L}_m -z)^{-1} (m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m))^{-1}(m^{-1} \mathcal{L}_m - z)^{-1}\, ,
\]
when $\|m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m\|_{O} \leq \frac{1}{2M}$ we can use the Neumann series for the inverse to infer
\[
\sup_{z\in \gamma} \|(m_j^{-1} \mathcal{L}_{m_j} -z)^{-1} - (m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq C \|m^{-1} \mathcal{L}_m - m_j^{-1} \mathcal{L}_{m_j}\|_O\, ,
\]
for some constant $C$ independent of $j$.
\end{itemize}
We then conclude that $P_{m_j}$ converges to $P_m$ in the operator norm.
\end{proof}
\begin{remark}\label{rmk:algebraic dim const}
An immediate outcome of the argument above is that the sum of the algebraic multiplicities of $z\in \mathcal{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant on any connected component of $]1, \infty[\setminus \{m_a, m_b\}$. Indeed, it coincides with the rank of the operator $P_m$ defined in Step 2.
\end{remark}
\section{Proof of Lemma \ref{l:bottom}}\label{s:choice-of-A}
Rather than looking for a suitable $\Xi$ we will write $G := \Xi' + 2\Xi$ and look for the latter function after expressing
\[
\Xi (t) := \int_{-\infty}^t e^{-2(t-\tau)} G (\tau)\, d\tau\, .
\]
To check that the above formula recovers $\Xi$ under our assumptions, observe first that the function defined by the integral solves the ODE
\[
y' + 2y = G
\]
by the classical solution formula for first order linear ODEs with constant coefficients; since $\Xi$ solves the same equation by the very definition of $G$, the two functions can differ at most by a multiple of $e^{-2t}$. It thus suffices to show that the integral and $\Xi$ coincide in a neighborhood of $-\infty$. To that end recall
that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for any sufficiently negative $t$ and thus
\[
G (t) = 2\Xi (-\infty) - 4c_0 e^{2t}\, ,
\]
so that, for any such $t$,
\[
\Xi (t) = e^{-2t} \int_{-\infty}^t (2\Xi (-\infty) e^{2\tau} - 4c_0 e^{4\tau})\, d\tau =
\Xi (-\infty) - c_0 e^{2t}\, .
\]
We next read the conditions $\Xi\in \mathscr{C}$ in terms of $G$ to find that they are
\begin{itemize}
\item[(i)] $G (t) = 2 \Xi (-\infty) - 4 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii)] $G (t) = e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii)] There are exactly two zeros $a<b$ of $G'$ and $G'' (a)>0$, $G'' (b)<0$;
\item[(iv)] $\int_{-\infty}^t e^{-2(t-\tau)} G' (\tau) d\tau < 0$ for every $t$.
\end{itemize}
The conditions (i), (ii), and (iii) are obviously equivalent to the corresponding ones in Definition \ref{d:class-C}. As for (iv), we just need to check the formula
\[
\Xi' (t) = \int_{-\infty}^t e^{-2(t-\tau)} G' (\tau)\, d\tau\, .
\]
Arguing as above, the solution formula for first order linear ODEs with constant coefficients shows that the two sides of the above identity can differ at most by a multiple of $e^{-2t}$, while a direct verification using (i) shows that the two sides coincide for sufficiently negative $t$'s.
We next can read all the above conditions in terms of $A$, more precisely it suffices to impose
\begin{itemize}
\item[(i')] $A (t) = - 8 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii')] $A(t) = -{\bar\alpha} e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii')] There are exactly two zeros $a<b$ of $A$ and $A' (a) >0$, $A' (b)<0$;
\item[(iv')] $\int_{-\infty}^t e^{-2(t-\tau)} A(\tau) d\tau <0$ for every $t$.
\end{itemize}
In fact, assuming the four conditions above we easily recover $G$ by setting
\[
G (t) := - \int_t^\infty A(\tau)\, d\tau\, .
\]
Note in passing that since (i'), (ii'), (iii'), and (iv') imply (i), (ii), (iii), and (iv), which in turn imply the corresponding conditions in Definition \ref{d:class-C}, we derive
\[
\Xi (-\infty) = \frac{1}{2} G (-\infty) = - \frac{1}{2}\int_{-\infty}^\infty A(\tau)\, d\tau\, .
\]
In turn, since $\Xi (\infty)$ = 0 and $\Xi'<0$, the latter equality implies
\[
\int_{-\infty}^\infty A (\tau)\, d\tau < 0\, .
\]
We next fix $a=0$ and $b=\frac{1}{2}$ and rather than imposing (iv') we impose the two conditions
\begin{itemize}
\item[(v')] $\int_{-\infty}^0 e^{2\tau} A (\tau) d\tau = -1$;
\item[(vi')] $\max A \leq \frac{1}{e}$.
\end{itemize}
Observe, indeed, that (iv) is equivalent to
\[
\int_{-\infty}^t e^{2\tau} A (\tau) \, d\tau < 0
\]
and that, since $A$ is negative on $]-\infty, 0[$ and $]\frac{1}{2}, \infty[$, the integral on the left-hand side is maximal for $t=\frac{1}{2}$. We then can use (v') and (vi') to estimate
\[
\int_{-\infty}^{\frac{1}{2}} e^{2\tau} A(\tau)\,d \tau \leq -1 + \frac{e}{2} \max A \leq -\frac{1}{2}\, .
\]
We next recall that, by the Rayleigh criterion,
\[
- \lambda_a = \min_{\|\psi\|_{L^2} = 1} \langle \psi, L_a \psi\rangle = \min_{\|\psi\|_{L^2} = 1} \int \left(|\psi'|^2 + \frac{A}{\Xi-\Xi (0)} |\psi|^2 \right)\, .
\]
We test the right-hand side with
\[
\psi (t) :=
\left\{
\begin{array}{ll}
0 \qquad & \mbox{for $|t|\geq \frac{1}{2}$}\\ \\
\sqrt{2} \cos (\pi t) \qquad &\mbox{for $|t|\leq \frac{1}{2}$.}
\end{array}\right.
\]
We therefore get
\begin{equation}\label{e:bottom-est-1}
- \lambda_a \leq 2 \pi^2 + 2 \int_{-1/2}^{1/2} \frac{A (t)}{\Xi (t) - \Xi (0)} \cos^2 \pi t\, dt\, .
\end{equation}
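A quick computation confirms that this is an admissible competitor and quantifies its kinetic part (note that $\psi$ vanishes at $\pm\frac{1}{2}$, so it belongs to $H^1$):
\[
\int |\psi|^2 = 2\int_{-1/2}^{1/2} \cos^2 (\pi t)\, dt = 1\, , \qquad
\int |\psi'|^2 = 2\pi^2 \int_{-1/2}^{1/2} \sin^2 (\pi t)\, dt = \pi^2 \leq 2\pi^2\, ,
\]
so that the first term in \eqref{e:bottom-est-1} is a (non-sharp, but sufficient) bound for the kinetic part.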
Next, for any fixed positive constant $B>0$, we impose that $A (t) = B t$ on the interval $]- \sqrt{B}^{-1}, 0]$ and we then continue it smoothly on $[0, \infty[$ so as to satisfy (ii'), (iii'), and (vi') on $[0, \infty[$ (the verification that this is possible is rather simple). We also can continue it smoothly on $]-\infty, - \sqrt{B}^{-1}]$ so as to ensure (i'). In order to ensure (v') as well we just need to show that
\[
\int_{-\sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq -\frac{1}{2}\, .
\]
The latter is certainly ensured by
\[
\int_{- \sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq \int_{-\sqrt{B}^{-1}}^0 A(\tau)\, d\tau = -\frac{1}{2}\, .
\]
Now, observe that $\Xi' (0) = \int_{-\infty}^0 e^{2\tau} A (\tau) \, d\tau = -1$. For $t\in \, ]-\sqrt{B}^{-1}, 0[$ we wish to estimate $\Xi' (t)$ and to do it we compute
\begin{align*}
|\Xi' (t)- \Xi' (0)| & = \left|e^{-2t}\int_{-\infty}^t e^{2\tau} A (\tau)\, d\tau - \int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq e^{-2t} \left|\int_t^0 e^{2\tau} A(\tau)\, d\tau\right| + (e^{-2t}-1) \left|\int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq \frac{e^{2\sqrt{B}^{-1}}}{2} + (e^{2\sqrt{B}^{-1}}-1) \leq \frac{3}{4}\, ,
\end{align*}
which can be ensured by taking $B$ sufficiently large. In particular $-\frac{1}{4} \geq \Xi' (t) \geq -2 $ for $t\in \, ]-\sqrt{B}^{-1}, 0[$. We thus conclude that
\[
-\frac{t}{4} \leq \Xi (t) - \Xi (0) \leq -2t \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
In turn, since $A (t) = Bt < 0$ on this interval, dividing by the upper bound $\Xi (t) - \Xi (0)\leq -2t$ the latter can be used to show
\[
\frac{A (t)}{\Xi (t) - \Xi(0)} \leq - \frac{B}{2} \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
Since $\frac{A}{\Xi - \Xi (0)}$ is otherwise negative on $]-\frac{1}{2}, \frac{1}{2}[$, we conclude
\begin{equation}\label{e:bottom-est-2}
- \lambda_a \leq 2\pi^2 - \int_{-\sqrt{B}^{-1}}^0 B \cos^2 \pi t\, dt\, .
\end{equation}
By taking $B$ large enough we can ensure that $\cos^2 \pi t\geq \frac{1}{2}$ on the interval $]-\sqrt{B}^{-1}, 0[$. In particular we achieve
\[
- \lambda_a \leq 2\pi^2 - \frac{\sqrt{B}}{2}\, .
\]
Since we can choose $\sqrt{B}$ as large as we wish, the latter inequality completes the proof of the lemma.
\chapter{Nonlinear theory}\label{sect:Proof-main4}
This final chapter will prove Theorem~\ref{thm:main4} and hence complete the argument leading to Theorem~\ref{thm:main}. To that end we fix a choice of $\bar \Omega$, $\bar V$, $m$ and $\eta$ as given by Theorem~\ref{thm:spectral}, where $\bar a>0$ is a large parameter whose choice will be specified only later. We introduce a particular space and we will indeed prove an estimate corresponding to \eqref{e:H2-estimate} in this smaller space.
\begin{definition}\label{d:X}
We denote by $X$ the subspace of elements $\Omega\in L^2_m$ for which the following norm is finite:
\begin{equation}\label{e:X-norm}
\|\Omega\|_X:= \|\Omega\|_{L^2} + \||x| \nabla \Omega\|_{L^2} + \|\nabla \Omega\|_{L^4}\, .
\end{equation}
\end{definition}
The above norm has two features which will play a crucial role in our estimates. The first feature, which is obvious, is that it ensures an appropriate decay of the $L^2$ norm of $D\Omega$ on the complements of large disks $\mathbb R^2\setminus B_R$. The second feature is that it allows us to bound the $L^\infty$ norm of $\Omega$ and $\nabla (K_2 *\Omega)$ and to give a bound on the growth of $K_2*\Omega$ at infinity. More precisely, we have the following:
\begin{proposition}\label{p:X-bounds}\label{P:X-BOUNDS}
For all $\kappa \in ]0,1[$, there is a constant $C(\kappa) >0$ such that the following estimates hold for every $m$-fold symmetric $\Omega\in X$:
\begin{align}
|\nabla (K_2*\Omega) (x)|+|\Omega (x)| &\leq \frac{C(\kappa)}{1+|x|^{1-\kappa}}\|\Omega\|_X\qquad\forall x\in\ensuremath{\mathbb R}^2 \label{e:decay-Omega}\\
|K_2* \Omega (x)| &\leq C \|\Omega\|_X \min\{ |x| , 1\} \qquad \forall x\in \mathbb R^2\, \label{e:Hoelder}.
\end{align}
\end{proposition}
The aim of this chapter is therefore to give the bound
\begin{equation}\label{e:final-bound}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{X} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\,
\end{equation}
for some appropriately chosen constants $\delta_0>0$ and $\tau_0<0$, independent of $k$. Of course the main difficulty will be to give the explicit estimate \eqref{e:final-bound}. However a first point will be to show that the norm is indeed finite for every $\tau\geq -k$. This will be a consequence of the following:
\begin{lemma}\label{l:initial-bound}\label{L:INITIAL-BOUND}
Provided $a_0$ is large enough, the eigenfunction $\eta$ of Theorem \ref{thm:spectral} belongs to $C^2 (\mathbb R^2\setminus \{0\})$ and satisfies the pointwise estimates
\begin{equation}\label{e:Hk}
|D^\ell \eta| (x) \leq C (1+|x|)^{-\ell-\varrho} \qquad \forall \ell\in \{0,1,2\}, \forall \varrho\in [0, 2[\,
\end{equation}
(in particular $\eta\in W^{2,\infty}$).
Moreover $\Omega_{\text{per}, k} \in C ([-k, T]; X)$ for every $T<\infty$.
\end{lemma}
In fact we could prove even sharper estimates if $m$ were larger than $2$. At any rate, one relevant outcome of Lemma
\ref{l:initial-bound} is that the bound \eqref{e:final-bound} holds at least for $\tau$ sufficiently close to $-k$, given that $\Omega_{\text{per}, k} (\cdot, -k)\equiv 0$. The main point of \eqref{e:final-bound} is then that we will be able to deduce the following estimates.
\begin{lemma}\label{l:final-estimates}
Under the assumptions of Theorem \ref{thm:main4} there is a constant $C_0$ (independent of $k$) such that the following holds. Assume that $\bar\tau \leq 0$ is such that for all $\tau\in [-k, \bar\tau]$ we have the estimate
\begin{equation}\label{e:a-priori}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_X \leq e^{(a_0+\delta_0) \tau}.
\end{equation}
Then
\begin{align}
\|\Omega_{\text{per},k} (\cdot, \bar \tau)\|_{L^2} &\leq C_0 e^{(a_0+\delta_0+1/2)\bar\tau}\, ,
\label{e:stima-L2}\\
\||x| D\Omega_{\text{per}, k} (\cdot, \bar\tau)\|_{L^2} &\leq C_0 e^{(a_0+2\delta_0)\bar\tau}\, , \label{e:stima-H1-pesata}\\
\|D \Omega_{\text{per},k} (\cdot, \bar \tau)\|_{L^4} &\leq C_0 e^{(a_0+2\delta_0)\bar\tau}\label{e:stima-L4}\, .
\end{align}
\end{lemma}
With the above lemma we easily conclude \eqref{e:final-bound} (and hence Theorem \ref{thm:main4}). Indeed denote by $\tau_k$ the largest non-positive time such that \begin{equation}\label{e:assumed-for-the-moment}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_X \leq e^{(a_0+\delta_0) \tau} \qquad \forall \tau\in [-k, \tau_k]\, .
\end{equation}
Then we must have
\begin{equation}\label{e:forza}
\|\Omega_{\text{per}, k} (\cdot, \tau_k)\|_X = e^{(a_0+\delta_0) \tau_k}\, .
\end{equation}
On the other hand, summing the three estimates \eqref{e:stima-L2}, \eqref{e:stima-H1-pesata}, and \eqref{e:stima-L4} we conclude
\begin{equation}\label{e:forza2}
\|\Omega_{\text{per}, k} (\cdot, \tau_k)\|_X \leq \bar C e^{(a_0+2 \delta_0) \tau_k}
\end{equation}
for some constant $\bar C$ independent of $k$. However \eqref{e:forza} and \eqref{e:forza2} give
$e^{\delta_0 \tau_k}\geq \bar C^{-1}$, i.e. $\tau_k \geq - \frac{1}{\delta_0} \ln \bar C$, implying that \eqref{e:final-bound} holds with $\tau_0:= - \frac{1}{\delta_0} \ln \bar C$.
After proving Proposition \ref{p:X-bounds} and Lemma \ref{l:initial-bound}, we will dedicate two separate sections to the three estimates \eqref{e:stima-L2}, \eqref{e:stima-H1-pesata}, and \eqref{e:stima-L4}. The first estimate, which we will call {\em baseline estimate}, will differ substantially from the other two, and to it we will dedicate a section. In order to accomplish the gain in the exponent in \eqref{e:stima-L2} we will use crucially the information on the semigroup which comes from Theorem \ref{thm:spectral} and Theorem \ref{t:group}, namely, that the growth bound $\omega (L_{ss})$ is precisely $a_0$ (i.e., the growth achieved by $\Omega_{\text{lin}}$). Since, however, the terms in Duhamel's formula depend on derivatives, we need to invoke an a priori control of them, which is present in the norm $\|\cdot\|_X$. Indeed, one such term experiencing the derivative loss arises from the nonlinearity and is the following:
\begin{equation}
\int_{-k}^\tau e^{(\tau-s) L_{\rm ss}} [(K_2 * \Omega_{\text{per}, k}) \cdot \nabla \Omega_{\text{per}, k}](\cdot,s) \, ds \, .
\end{equation}
Note that $\|\cdot\|_X$ also includes the weighted $L^2$ norm $\||x| D\Omega\|_{L^2}$ because we encounter a term where $D\Omega_{\text{per}, k}$ is multiplied by the function $V^r$, which grows at $\infty$ like $|x|^{1-{\bar\alpha}}$ when ${\bar\alpha} \in ]0,2[$. In order to close the argument we then need to control the $L^4$ norm and the weighted $L^2$ norm of $D\Omega_{\text{per}, k}$. The latter estimates will not be accomplished through a growth bound on the semigroup $e^{\tau L_{ss}}$ (which would invoke controls on yet higher derivatives), but rather through some careful energy estimates. The structure of the problem will then enter crucially, since the term $(K_2 * \Omega_{\text{per}, k}) \cdot \nabla \bar{\Omega}$ which we need to bound in the energy estimates will take advantage of the improved baseline estimate on the $L^2$ norm. The above term,
which is responsible for the creation of unstable eigenvalues, actually \emph{gains a derivative}. Finally, there is one remaining difficulty when estimating $D \Omega_{\text{per}, k}$ due to the transport term. Namely, differentiating the equation in Cartesian coordinates contributes a term $(\partial_i \bar{V}) \cdot \nabla \Omega_{\text{per} ,k}$, which could destabilize the estimates. We exploit the structure of the problem again in a crucial way by estimating angular and radial derivatives, rather than derivatives in Cartesian coordinates, and by estimating the angular derivatives {\em first} and the radial derivatives {\em afterwards}.
\section{Proof of Proposition \ref{p:X-bounds}}
We start by bounding $|\Omega (x)|$. Since $W^{1,4} (B_2)$ embeds in $C^{1/2}$, the bound $|\Omega (x)|\leq C \|\Omega\|_X$ is true for every $x\in B_2$. Consider further $R:= \frac{|x|}{2} \geq 1$ and define $u_R (y) := \Omega (Ry)$. In particular let $B:= B_1 (\frac{x}{R})$ and notice that
\begin{align}
\|u_R\|_{L^2 (B)} &= R^{-1} \|\Omega\|_{L^2 (B_R (x))} \leq R^{-1}\|\Omega\|_X\, ,\\
\|Du_R\|_{L^2 (B)} &= \|D\Omega\|_{L^2 (B_R (x))} \leq R^{-1} \||\cdot |D\Omega\|_{L^2 (B_R (x))} \leq R^{-1} \|\Omega\|_X\, ,\\
\|D u_R\|_{L^4 (B)} &= R^{1/2} \|D\Omega\|_{L^4 (B_R (x))} \leq R^{1/2} \|\Omega\|_X\, .
\end{align}
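These identities are immediate from the change of variables $x' = Ry$; for instance, for the last one,
\[
\int_B |Du_R (y)|^4\, dy = R^4 \int_B |D\Omega (Ry)|^4\, dy = R^2 \int_{B_R (x)} |D\Omega (x')|^4\, dx'\, ,
\]
that is $\|Du_R\|_{L^4 (B)} = R^{1/2} \|D\Omega\|_{L^4 (B_R (x))}$; the inequality in the second line above uses in addition that $|x'|\geq |x|-R = R$ on $B_R (x)$.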
By interpolation, for $\frac{1}{p} = \frac{\lambda}{2} + \frac{1-\lambda}{4}$ we have
\[
\|Du_R\|_{L^p (B)} \leq C \|\Omega\|_X R^{-\lambda + (1-\lambda)/2}\, .
\]
Choosing $p$ very close to $2$, but larger, we achieve $\|Du_R\|_{L^p (B)} \leq C R^{-1+\kappa} \|\Omega\|_X$. Since the average of $u_R$ over $B$ is smaller than $\|u_R\|_{L^2 (B)} \leq C R^{-1} \|\Omega\|_X$, from Poincar\'e we conclude $\|u_R\|_{W^{1,p} (B)} \leq C R^{-1+\kappa} \|\Omega\|_X$. In particular using the embedding of $W^{1,p}$ in $L^\infty$ we conclude
\begin{equation}\label{e:Morrey}
\|u_R\|_{L^\infty (B)} \leq C (p) R^{-1+\kappa} \|\Omega\|_X\, .
\end{equation}
Since however $|\Omega (x)|\leq \|u_R\|_{L^\infty (B)}$, we reach the estimate
\begin{equation}\label{e:decay-Omega-2}
|\Omega (x)|\leq \frac{C}{|x|^{1-\kappa}} \|\Omega\|_X\, .
\end{equation}
Note that the constant $C$ depends on $\kappa$: $\kappa$ is positive, but a very small choice of it forces us to choose a $p$ very close to $2$, which in turn gives a dependence of the constant $C(p)$ in \eqref{e:Morrey} on $\kappa$.
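For concreteness, one admissible choice of the interpolation parameter above (any $\lambda$ sufficiently close to $1$ would do) is
\[
\lambda = 1 - \frac{2\kappa}{3}\, , \qquad\mbox{so that}\qquad -\lambda + \frac{1-\lambda}{2} = -1+\kappa \qquad\mbox{and}\qquad \frac{1}{p} = \frac{\lambda}{2} + \frac{1-\lambda}{4} = \frac{1}{2} - \frac{\kappa}{6}\, ,
\]
i.e. $p = \frac{6}{3-\kappa}\in \, ]2,3[$ for $\kappa\in\, ]0,1[$.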
We next come to the estimates for $\nabla K_2 * \Omega$. First of all, observe that
\begin{align*}
\|\nabla K_2 * \Omega\|_{L^2} \leq & C \|\Omega\|_{L^2} \leq C \|\Omega\|_X\\
\|D^2 K_2*\Omega\|_{L^2} + \|D^2 K_2* \Omega\|_{L^4} \leq & C \|D\Omega\|_{L^2} + C \|D\Omega\|_{L^4}
\leq C \|\Omega\|_X
\end{align*}
and thus $|\nabla K_2*\Omega (x)|\leq C \|\Omega\|_X$ follows for every $x\in B_4$. Consider now $|x|\geq 4$, set $R:= \frac{|x|}{4}$ and let $\varphi\in C^\infty_c (B_{2R} (x))$ be a cut-off function identically equal to $1$ on $B_R (x)$ and $\psi\in C^\infty_c (B_{2R})$ equal to $1$ on $B_R$. We choose them so that $\|D^k \psi\|_{C^0} + \|D^k \varphi\|_{C^0} \leq C (k) R^{-k}$.
We split
\[
\nabla K_2 * \Omega = \nabla K_2 * (\varphi \Omega) + \nabla K_2 * (\psi \Omega) + \nabla K_2 * ((1-\varphi -\psi) \Omega)\, =: F_1 + F_2 + F_3\, .
\]
We have
\begin{align*}
\|F_1\|_{L^2} &\leq C\|\varphi \Omega\|_{L^2} \leq C \|\Omega\|_X\\
\|DF_1\|_{L^2} &\leq C\|D (\varphi \Omega)\|_{L^2} \leq C R^{-1} \|\Omega\|_{L^2} + C \|\varphi D\Omega\|_{L^2}
\leq C R^{-1} \|\Omega\|_X\\
\|D F_1\|_{L^4} & \leq C \|\Omega\|_X\, .
\end{align*}
The argument used above implies then $|F_1 (x)| \leq C (\kappa) |x|^{\kappa-1} \|\Omega\|_X$. As for estimating $F_2$ we observe that $F_2$ is harmonic outside $B_{2R}$. On the other hand $\|F_2\|_{L^2} \leq C \|\Omega\|_X$. Using the mean-value inequality for harmonic functions we then get
\[
|F_2 (x)| \leq \frac{1}{\pi (2R)^2} \int_{B_{2R} (x)} |F_2| \leq \frac{C}{R} \|\Omega\|_X\, .
\]
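The last inequality is just the Cauchy--Schwarz inequality combined with $\|F_2\|_{L^2} \leq C \|\Omega\|_X$:
\[
\frac{1}{\pi (2R)^2} \int_{B_{2R} (x)} |F_2| \leq \frac{1}{\pi (2R)^2} \big(\pi (2R)^2\big)^{1/2} \|F_2\|_{L^2} \leq \frac{C}{R} \|\Omega\|_X\, .
\]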
As for $F_3$ we write, using the bound on $|\Omega|$ and $|\nabla K (x-y)|\leq C |x-y|^{-2}$,
\begin{align*}
|F_3 (x)| &\leq \int_{\mathbb R^2\setminus (B_{2R} (x) \cup B_{2R})} \frac{C(\kappa) \|\Omega\|_X}{|x-y|^2 |y|^{1-\kappa}}\\
&\leq \int_{(\mathbb R^2\setminus B_{2R})\cap \{|x-y|\geq |y|\}} \frac{C(\kappa) \|\Omega\|_X}{|y|^{3-\kappa}}
+ \int_{(\mathbb R^2\setminus B_{2R} (x)) \cap \{|y|\geq |x-y|\}} \frac{C(\kappa) \|\Omega\|_X }{|x-y|^{3-\kappa}}
\leq \frac{C(\kappa) \|\Omega\|_X}{R^{1-\kappa}}\, .
\end{align*}
Recalling that $K_2*\Omega (0)=0$, integrating \eqref{e:decay-Omega} on the segment with endpoints $0$ and $x$ we conclude \eqref{e:Hoelder} for $|x|\leq 2$. In order to prove the bound when $|x|\geq 1$, fix a point $y$ with $3 R:= |y|\geq 1$. Let $\varphi$ be a radial cut-off function which is identically equal to $1$ on $B_{R} (0)$, is supported in $B_{2R} (0)$ and whose gradient is bounded by $C R^{-1}$. We then write
\begin{equation}\label{e:decompose}
K_2 * \Omega = K_2 * (\varphi \Omega) + K_2 * ((1-\varphi) \Omega)\, .
\end{equation}
Since the distance between $y$ and the support of $\varphi \Omega$ is larger than $R$, we can estimate
\begin{equation}\label{e:cut-inside}
|K_2*(\varphi \Omega) (y)|\leq \frac{C}{R} \int |\varphi \Omega|
\leq C\|\Omega\|_{L^2}\leq C\|\Omega\|_X\, .
\end{equation}
Next observe that, by Calderon-Zygmund,
\[
\|D^2 K_2 * (\Omega (1-\varphi))\|_{L^2} = \|D ((1-\varphi) \Omega)\|_{L^2} \leq \frac{C}{R} \|\Omega\|_{L^2} + C \|D\Omega\|_{L^2 (\mathbb R^2\setminus B_R)}\leq \frac{C}{R} \|\Omega\|_X\, .
\]
Since $(1-\varphi) \Omega$ belongs to $L^2_m$, the average of $K_2* ((1-\varphi) \Omega)$ over any disk centered at the origin equals $0$; in what follows $B$ denotes such a disk, chosen with radius comparable to $R$ and containing the point $y$. Hence we conclude from the Poincar\'e inequality and Calderon-Zygmund that
$$
\|K_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)} \leq CR\|DK_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)} \leq CR\|(1-\varphi) \Omega \|_{L^{2}}\leq CR \|\Omega\|_X\, .
$$
From Gagliardo-Nirenberg interpolation inequality applied on $B$ we have
\begin{align*}
\|K_2* ((1-\varphi) \Omega)\|_{L^\infty(B)}&\leq C\|D^2 K_2 * (\Omega (1-\varphi))\|_{L^2}^{1/2}\|K_2* ((1-\varphi) \Omega)\|^{1/2}_{L^{2}(B)}
\\
&\qquad + \frac C R \|K_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)}
\leq C \|\Omega\|_X.
\end{align*}
\section{Proof of Lemma \ref{l:initial-bound}}
We will in fact prove the following more precise version of the estimates for $\eta$.
\begin{lemma}\label{l:pointwise}
Under the assumptions of Lemma \ref{l:initial-bound}, $\eta\in C^2 (\mathbb R^2\setminus \{0\})$ and its derivatives up to the second order satisfy the estimate
\begin{equation}\label{e:eta-pointwise-decay}
|D^j \eta (x)|\leq C (1+ |x|)^{-m-2-j-{\bar\alpha}} \qquad \forall x, \forall j\in \{0,1,2\}\, .
\end{equation}
In particular $\eta\in C^1 (\mathbb R^2)$ and its first derivatives are Lipschitz (namely, $\eta\in W^{2,\infty} (\mathbb R^2)$).
\end{lemma}
As for the second conclusion of the Lemma, observe that, by going back to the solutions $\omega_{\varepsilon, k}$, it follows from the regularity and decay of the initial data in \eqref{e:Euler-later-times} (just proved in the above lemma) and the regularity and decay of the forcing term $f$ for positive times, that $\omega_{\varepsilon, k}\in C ([t_k, T], X)$ for every $T> t_k$. Given the explicit transformation from $\omega_{\varepsilon, k}$ to $\Omega_{\varepsilon, k}$ we conclude that $\Omega_{\varepsilon, k} \in C ([-k, T], X)$ for every $T> -k$. Since the same regularity is enjoyed by $\tilde{\Omega}$ on $[-k, T]$ (the latter is in fact smooth and compactly supported on $\mathbb R^2\times [-k,T]$) and by $\Omega_{\text{lin}}$, we infer that $\Omega_{\text{per}, k} = \Omega_{\varepsilon, k} - \tilde{\Omega}-\Omega_{\text{lin}}$ belongs to $C ([-k, T], X)$.
\begin{proof}[Proof of Lemma \ref{l:pointwise}]
Consider $\eta\in L^2_m$, for $m\geq 2$ as in Theorem \ref{thm:spectral} and write it as $\eta (\theta, r) = \vartheta (r) e^{ik m\theta}$ when $b_0\neq 0$ or $\eta (\theta, r) = \vartheta (r) e^{ik m \theta} +\bar\vartheta (r) e^{-ikm\theta}$ if $b_0=0$ (where $\bar\vartheta$ denotes the complex conjugate of $\vartheta$).
In both cases $\vartheta (r) e^{ikm\theta}$ is an eigenfunction and through this property we will show that it satisfies the estimates of the Lemma. We can therefore assume without loss of generality that $\eta (x) = \vartheta (r) e^{ikm\theta}$. We will also see that the argument leads in fact to estimates \eqref{e:eta-pointwise-decay} with $km$ replacing $m$ and hence without loss of generality we assume
$k=1$. Furthermore an outcome of the argument below is that $\eta$ is smooth except possibly at the origin.
Note moreover that after having shown the pointwise bounds \eqref{e:eta-pointwise-decay} for $\eta$ outside of the origin, the $W^{2,\infty}$ regularity of $\eta$ follows immediately: indeed $\eta$ and $D\eta$ can be continuously extended to the origin by setting them equal to $0$, hence showing that $\eta\in C^1 (B_1)$, while the uniform bound of $|D^2 \eta|$ in $B_1\setminus \{0\}$ easily implies that $D\eta$ is Lipschitz.
\medskip
{\bf Step 1. Exponential coordinates.} We recall that the distribution $K_2* \eta$ is well defined, according to Lemma \ref{l:extension}, and its action on a test function in $\mathscr{S}$ is given by integrating the scalar product of the test function with a function $v\in W^{1,2}_{\text{loc}} (\mathbb R^2, \mathbb C^2)$, cf. the proof of Lemma \ref{l:extension} in the appendix. It follows from the argument given there that $R_{2\pi/m} v (R_{2\pi/m} x) = v (x)$. Given that ${\rm div}\, v =0$, $-v^\perp$ is the gradient of a continuous function $\psi$, which is determined up to a constant and which we normalize so that $\psi (0) = 0$. The function $\psi$ inherits the symmetry and can thus be written as $\psi (\theta, r) = f (r) e^{im \theta}$. We thus conclude that
\begin{align}
\vartheta =&f'' +\frac{1}{r} f' -\frac{m^2}{r^2} f\label{e:Poisson}\\
v = &\frac{f'}{r} \frac{\partial}{\partial \theta} - \frac{im f}{r} \frac{\partial}{\partial r}\, .
\end{align}
Observe moreover that $v\in W^{1,2}_{\text{loc}}$ implies $f'', \frac{f'}{r}$, $\frac{f}{r^2}\in L^2_{\text{loc}}$. Therefore $f$ is determined by \eqref{e:Poisson} and the boundary conditions
\begin{itemize}
\item[(a)] $f (0) =0$;
\item[(b)] $\int_0^1 \frac{|f'(r)|^2}{r} \, dr < \infty$.
\end{itemize}
(observe that condition (b) implies $f'(0)=0$ when $f\in C^1$ and we can therefore interpret it as a surrogate for a boundary condition). We recall also that we have the estimate $\|D^2\psi\|_{L^2} \leq C \|\vartheta\|_{L^2} < \infty$ and owing to $\int_{B_R} D\psi = 0$, by Poincar\'e we achieve
\[
\|D\psi\|_{L^p (B_R)} \leq C (p) R^{2/p}\, .
\]
Using Morrey's embedding and the fact that $\psi (0)=0$ we conclude, in turn
\[
|\psi (x)|\leq C \|D\psi\|_{L^p (B_{|x|})} |x|^{1-2/p} \leq C |x|\, .
\]
In particular we conclude
\begin{equation}\label{e:linear-bound-f}
|f(r)|\leq C r\, .
\end{equation}
The equation satisfied by $\eta$ can thus be written in terms of the function $f$ as
\begin{equation}\label{e:third-order}
\left(1+\frac{r}{\alpha}\frac{d}{dr} - z_0 - im\beta \zeta\right) \left(f''+\frac{f'}{r} - \frac{m^2}{r^2} f\right) - \frac{imf \beta g'}{r} = 0\, ,
\end{equation}
where $g$ is the smooth function such that $\bar\Omega (x) = g (|x|)$ (in particular $g$ is constant in a neighborhood of the origin, and equals $r^{-{\bar\alpha}}$ for $r\geq 2$) and $\zeta$ is given by the formula \eqref{e:def-zeta}. We next set $s = \ln r$ and in particular
\begin{align}
\tilde{g} (s) &= g (e^s)\\
h (s) & = f (e^s)\\
\tilde\zeta (s) &= \zeta (e^s)\, .
\end{align}
Note that
\begin{equation}\label{e:Gamma}
\vartheta (e^s) = e^{-2s} (h'' (s) - m^2 h (s)) =: e^{-2s} \Gamma (s)\, .
\end{equation}
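This is just the chain rule: with $r = e^s$ and $f (r) = h (\ln r)$ we have
\[
f' (r) = \frac{h' (s)}{r}\, , \qquad f'' (r) = \frac{h'' (s) - h' (s)}{r^2}\, ,
\qquad\mbox{hence}\qquad
f'' + \frac{f'}{r} - \frac{m^2}{r^2} f = \frac{h'' (s) - m^2 h (s)}{r^2}\, .
\]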
In these new coordinates we observe that the claim of the lemma corresponds then to showing that $\vartheta\in C^2_{\text{loc}} (\mathbb R)$ and
\begin{align}
|\Gamma (s)| + |\Gamma' (s)| + |\Gamma'' (s)|&\leq C e^{4s} \qquad &\forall s\leq 0\label{e:decad-negativo}\, ,\\
|\Gamma (s)| + |\Gamma' (s)| + |\Gamma'' (s)|&\leq C e^{- (m+{\bar\alpha})s} \qquad &\forall s\geq 0\label{e:decad-positivo}\, .
\end{align}
In order to achieve the latter estimates we will need the following bounds on $\tilde{g}'$, $\tilde{g}''$, $\tilde\zeta$, and $\tilde{\zeta}'$, which we record here and can be easily checked:
\begin{align}
|\tilde{g} (s)| + |\tilde{g}' (s)| + |\tilde\zeta (s)| + |\tilde\zeta' (s)| &\leq C e^{-{\bar\alpha} s}\qquad &\forall s \geq 0\, ,\label{e:stima-tilde-g-positivo}\\
|\tilde{g} (s) - g (0)| + |\tilde{g}' (s)| + |\tilde\zeta (s) - \zeta (0)| +
|\tilde\zeta' (s)| &\leq C e^{2s}\qquad &\forall s\leq 0\, .\label{e:stima-tilde-g-negativo}
\end{align}
We observe next that, by \eqref{e:linear-bound-f}
\begin{equation}\label{e:the-very-first-crappy-bound}
|h(s)|\leq C e^s\, .
\end{equation}
\medskip
{\bf Step 2. The equation for $h$.} In these new coordinates the equation \eqref{e:third-order} becomes then
\begin{equation}
\left(1+\frac{1}{\alpha} \frac{d}{ds} -z_0 - im \beta \tilde{\zeta}\right) \left(e^{-2s} (h''-m^2 h)\right) - im h e^{-2s} \tilde{g}' = 0\, ,
\end{equation}
which we simplify as
\begin{equation}
\left[ \frac{d}{ds} - \alpha \left(im\beta \tilde{\zeta} + z_0 - 1 +\frac{2}{\alpha}\right)\right] (h''-m^2 h) - i\alpha mh \tilde{g}' = 0\, .
\end{equation}
We then define the integrating factor
\[
I (s) = \exp \left[- \alpha \int_0^s \left(im \beta \tilde{\zeta} + z_0 -1 +\frac{2}{\alpha}\right)\, d\sigma\right]
\]
We can thus write
\[
\frac{d}{ds} \left[ I (h'' -m^2 h)\right] = i \alpha m I h \tilde{g}'\, .
\]
Given that $z_0= a_0 + i b_0$,
\begin{equation}\label{e:exact-identity-I}
|I (s)|\le C e^{-(2+\alpha (a_0-1)) s}
\end{equation}
and in particular, by \eqref{e:stima-tilde-g-positivo}
\begin{equation}\label{e:first-crappy-bound}
|I h \tilde{g}'| (s) \leq C e^{-(1+\alpha a_0 + {({\bar\alpha}-\alpha)}) s}\, .
\end{equation}
This implies that the latter is an integrable function on every halfline $[s, \infty[$ so that we can write
\begin{equation}\label{e:ODE-again}
\Gamma (s) = h'' (s) - m^2 h (s) = - \alpha I(s)^{-1} \int_s^\infty i m I h \tilde{g}' \, .
\end{equation}
Since $\Gamma (s) = e^{2s} \vartheta (e^s)$ and $\vartheta \in L^2 (rdr)$, $e^{-s} \Gamma \in L^2 (ds)$. We claim in particular that $e^{-m|s|} \Gamma (s)$ is integrable. Indeed:
\[
\int_{\mathbb R}|\Gamma (s)| e^{-m|s|}\, ds \leq \|e^{-s} \Gamma\|_{L^2 (\mathbb R)} \left(\int_{\mathbb R} e^{-2 (m|s|-s)}\, ds\right)^{1/2}< \infty\, .
\]
We claim then that for the function $h$ we have the formula
\begin{equation}\label{e:formulozza}
h (s) = -\frac{1}{2m} e^{-ms} \int_{-\infty}^s e^{ms'} \Gamma (s') ds' - \frac{1}{2m} e^{ms} \int_s^\infty e^{-ms'} \Gamma (s')\, ds'\, .
\end{equation}
In order to prove the identity denote the right hand side by $H$ and observe that it is a solution of the
same ODE satisfied by $h$, namely $H''-m^2 H = \Gamma$. Hence $(\frac{d^2}{ds^2} -m^2) (H-h) =0$, which implies that $H (s)-h(s) = C_1 e^{ms} + C_2 e^{-ms}$. On the other hand, using the information that $e^{-s}\Gamma\in L^2$, it can be readily checked that $H (s) = o (e^{m|s|})$ at both $\pm \infty$. Since this property is shared by $h$, thanks to the bound \eqref{e:the-very-first-crappy-bound}, we conclude that $C_1 e^{ms} + C_2 e^{-ms}$ must be $o (e^{m|s|})$ at $\pm \infty$, implying $C_1=C_2=0$.
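For completeness we record the short computation behind the latter claim: differentiating the right hand side of \eqref{e:formulozza} twice (its first derivative is the expression which appears below in \eqref{e:h'}) we get
\[
H'' (s) = -\frac{m}{2} e^{-ms} \int_{-\infty}^s e^{ms'} \Gamma (s')\, ds' - \frac{m}{2} e^{ms} \int_s^\infty e^{-ms'} \Gamma (s')\, ds' + \Gamma (s) = m^2 H (s) + \Gamma (s)\, .
\]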
\medskip
{\bf Step 3. Estimates at $+\infty$.} In this step we give bounds for the asymptotic behavior of $h$ at $+\infty$.
We recall \eqref{e:stima-tilde-g-positivo} and hence observe that
\begin{equation}\label{e:stima-Gamma}
|\Gamma (s)| \leq C e^{(2 + \alpha a_0 - \alpha) s} \int_s^\infty |h(\sigma)| e^{-(2+\alpha a_0+{({\bar\alpha}-\alpha)}) \sigma}\, d\sigma\, ,
\end{equation}
for $s$ positive.
On the other hand, for $s\geq 0$ we can also write from \eqref{e:formulozza}
\begin{equation}\label{e:stima-h-positiva}
|h(s)| \leq C e^{-ms} + C e^{-ms} \int_0^s e^{m\sigma} |\Gamma (\sigma)|\, d\sigma + C e^{ms} \int_s^\infty e^{-m\sigma} |\Gamma (\sigma)|\, d\sigma\, ,
\end{equation}
Starting with the information $|h(s)|\leq C e^s$ for $s>0$, we then infer from \eqref{e:stima-Gamma} that $|\Gamma (s)|\leq C e^{(1-{\bar\alpha}) s}$ for $s>0$. In turn plugging the latter into \eqref{e:stima-h-positiva} we infer $|h (s)|\leq C e^{(1-{\bar\alpha}) s}$ for $s>0$. The latter, plugged into \eqref{e:stima-Gamma} turns into $|\Gamma (s)|\leq C e^{(1-2{\bar\alpha}) s}$ for $s>0$. We then can keep iterating this procedure. The bootstrap argument can be repeated until we reach the largest integer $k$ such that $(1-k{\bar\alpha}) > -m$: one last iteration of the argument gives then
\begin{equation}\label{e:bound-finale-h-positivo}
|h(s)|\leq C e^{-ms}
\end{equation}
and hence, inserting one last time in \eqref{e:stima-Gamma}
\begin{equation}\label{e:decad-positivo-1}
|\Gamma (s)|\leq C e^{-(m+{\bar\alpha}) s}\, .
\end{equation}
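Each iteration of the bootstrap just performed rests on the following elementary estimate: if $|h (\sigma)| \leq C e^{(1-j{\bar\alpha}) \sigma}$ for some integer $j\geq 0$, then \eqref{e:stima-Gamma} gives, for $s\geq 0$,
\[
|\Gamma (s)| \leq C e^{(2+\alpha a_0 - \alpha) s} \int_s^\infty e^{(1-j{\bar\alpha})\sigma}\, e^{-(2+\alpha a_0 + ({\bar\alpha}-\alpha))\sigma}\, d\sigma \leq C e^{(1-(j+1){\bar\alpha}) s}\, ,
\]
since the exponent in the integrand is negative; the bound on $h$ is then upgraded in the same fashion through \eqref{e:stima-h-positiva}, as long as $1-j{\bar\alpha} > -m$.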
In order to estimate the first and second derivatives of $\Gamma$ we observe that
\[
\frac{I'}{I} = -\alpha \left(im \beta \tilde{\zeta} + z_0-1+\frac{2}{\alpha}\right)
\]
and we compute explicitly
\begin{align}
\Gamma' &= \alpha \left(im \beta \tilde\zeta + z_0-1+\tfrac{2}{\alpha}\right) \Gamma + i \alpha m h \tilde{g}'\label{e:Gamma'}\\
\Gamma'' &=\alpha im \beta \tilde\zeta' \Gamma + \alpha \left(im \beta \tilde\zeta + z_0-1+\tfrac{2}{\alpha}\right) \Gamma'
+ i \alpha m h \tilde{g}'' + i \alpha m h' \tilde{g}'\, .\label{e:Gamma''}
\end{align}
From \eqref{e:Gamma'} and the bounds \eqref{e:decad-positivo-1},\eqref{e:bound-finale-h-positivo}, and \eqref{e:stima-tilde-g-positivo}, we immediately conclude
\begin{equation}\label{e:decad-positivo-2}
|\Gamma' (s)|\leq C e^{-(m+{\bar\alpha})s}\, .
\end{equation}
As for the second derivative, using \eqref{e:decad-positivo-2}, \eqref{e:decad-positivo-1},\eqref{e:bound-finale-h-positivo}, and \eqref{e:stima-tilde-g-positivo}, we conclude
\[
|\alpha im \beta \tilde\zeta' \Gamma + \alpha (im \beta \tilde\zeta + z_0-1+\tfrac{2}{\alpha}) \Gamma'
+ i \alpha m h \tilde{g}''|\leq C e^{-(m+{\bar\alpha})s}\, .
\]
In order to estimate the term $i m h' \tilde{g}'$ we differentiate \eqref{e:formulozza} to infer
\begin{equation}\label{e:h'}
h' (s) = \frac{1}{2} e^{-ms} \int_{-\infty}^s e^{ms'} \Gamma (s') ds' - \frac{1}{2} e^{ms} \int_s^\infty e^{-ms'} \Gamma (s')\, ds'\,
\end{equation}
and thus derive the bound $|h'(s)|\leq C e^{-ms}$ using the same argument for bounding $h$. In turn, combined again with \eqref{e:stima-tilde-g-positivo} we conclude $|i \alpha m h' \tilde{g}' (s)|\leq C e^{-(m+{\bar\alpha})s}$, hence completing the proof of \eqref{e:decad-positivo}.
\medskip
{\bf Step 4. Estimates at $-\infty$.}
For the bound at $-\infty$ we use instead \eqref{e:stima-tilde-g-negativo} (which we observe holds for positive $s$ as well). This leads to the inequality
\begin{equation}\label{e:bootstrap-negative-1}
|\Gamma (s)|\leq C e^{(2+ \alpha (a_0-1))s} \int_s^{\infty} e^{-\alpha (a_0-1) \sigma} |h(\sigma)|\, d\sigma
\end{equation}
In this argument we assume that $a_0$ is selected very large, depending on $m$.
In turn we estimate $h$ for negative $s$ by
\begin{equation}\label{e:bootstrap-negative-2}
|h (s)|\leq C e^{ms} + C e^{ms} \int_s^0 e^{-m\sigma} |\Gamma (\sigma)|\, d\sigma
+ C e^{-ms} \int_{-\infty}^s e^{m\sigma} |\Gamma (\sigma)|\, d\sigma\, .
\end{equation}
Observe now that we have $|h(s)|\leq C e^s$ for every $s$. Inserting this bound in \eqref{e:bootstrap-negative-1} and assuming that $a_0$ is large enough we conclude $|\Gamma (s)|\leq C e^{3s}$. In turn we can insert the latter bound in \eqref{e:bootstrap-negative-2} to conclude
$|h (s)|\leq C (e^{ms} + e^{3s})$. Since $m\geq 2$ we can then conclude $|h(s)|\leq C e^{2s}$ and inserting it in \eqref{e:bootstrap-negative-1} we conclude $|\Gamma (s)|\leq C e^{4s}$.
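For instance, the first of these iterations can be carried out explicitly (a sketch; here we spell out one admissible reading of ``$a_0$ very large'', namely $\alpha (a_0-1)>1$): inserting $|h(\sigma)|\leq C e^{\sigma}$ into \eqref{e:bootstrap-negative-1} gives
\[
|\Gamma (s)|\leq C e^{(2+ \alpha (a_0-1))s} \int_s^{\infty} e^{(1-\alpha (a_0-1)) \sigma}\, d\sigma = \frac{C}{\alpha (a_0-1)-1}\, e^{3s}\, ,
\]
and the subsequent iterations are entirely analogous.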
For the first and second derivatives we use the formulae \eqref{e:Gamma'}, \eqref{e:Gamma''}, and \eqref{e:h'} and argue as above to conclude $|\Gamma' (s)| + |\Gamma'' (s)|\leq C e^{4s}$.
\end{proof}
\section{Proof of the baseline \texorpdfstring{$L^2$}{L2} estimate}
In this section we prove \eqref{e:stima-L2}. In order to simplify the notation, from now on we will use $\Omega$ in place of $\Omega_{\text{per}, k}$. We next recall equation \eqref{e:master}:
\begin{align}
(\partial_{\tau} - L_{\text{ss}}) \Omega
= &- \underbrace{(V_{\text{lin}}\cdot \nabla) \Omega}_{=:\mathscr{F}_1} - \underbrace{(V_r\cdot \nabla) \Omega}_{=:\mathscr{F}_2}- \underbrace{(V \cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_3} + \underbrace{(V\cdot \nabla) \Omega_r}_{=:\mathscr{F}_4} + \underbrace{(V \cdot \nabla) \Omega}_{=:\mathscr{F}_5}\nonumber\\
&-\underbrace{(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_6} - \underbrace{(V_r\cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_7} - \underbrace{(V_{\text{lin}}\cdot \nabla) \Omega_r}_{=:\mathscr{F}_8}\, .\label{e:master-2}
\end{align}
We then define $\mathscr{F} := - \sum_{i=1}^8 \mathscr{F}_i$.
Recalling Theorem \ref{t:group} and the fact that $\Omega (\cdot, -k) =0$, we estimate via Duhamel's formula
\begin{equation}\label{e:Duhamel}
\|\Omega (\cdot, \bar\tau)\|_{L^2} \leq C (\varepsilon) \int_{-k}^{\bar\tau} e^{(a_0+\varepsilon) (\bar\tau - s)} \|\mathscr{F} (\cdot, s)\|_{L^2}\, ds\, .
\end{equation}
We next estimate the $L^2$ norms of the various $\mathscr{F}_i$. In order to keep our notation simpler we use $\|\cdot\|_2$ for $\|\cdot\|_{L^2}$ and $\|\cdot\|_\infty$ for $\|\cdot\|_{L^\infty}$. $\mathscr{F}_1$ is simple:
\begin{equation}\label{e:F-1}
\|\mathscr{F}_1 (\cdot, s)\|_2 \leq \|V_{\text{lin}} (\cdot, s)\|_\infty \|D\Omega (\cdot, s)\|_{L^2} \leq C e^{a_0 s} e^{(a_0+\delta_0) s} \leq C e^{(2a_0+\delta_0) s}\, .
\end{equation}
As for $\mathscr{F}_2$ we use the fact that
\begin{align*}
\int |\mathscr{F}_2 (\xi, \tau)|^2\, d\xi & \leq C \int_{|\xi|\geq e^{-\tau/\alpha}} |\xi|^{2-{2{\bar\alpha}}} |D \Omega (\xi, \tau)|^2\, d\xi\\
& \leq C e^{{\frac{2{\bar\alpha}}{\alpha}}\tau} \int |\xi|^2 |D \Omega (\xi, \tau)|^2\, d\xi \leq C e^{{\frac{2{\bar\alpha}}{\alpha}}\tau} \|\Omega (\cdot, \tau)\|^2_X\, .
\end{align*}
We hence conclude
\begin{equation}\label{e:F2}
\|\mathscr{F}_2 (\cdot, \tau)\|_{L^2}
\leq C e^{(a_0+{\frac{{\bar\alpha}}{\alpha}}+\delta_0)\tau}
\leq C e^{(a_0+1+\delta_0)\tau}\, .
\end{equation}
As for $\mathscr{F}_3$, for every fixed $\tau$ we can use Proposition \ref{p:X-bounds} with $\kappa=\frac{1}{2}$ to conclude
\begin{align}\label{e:F3}
\|\mathscr{F}_3 (\cdot, \tau)\|_{L^2} &\leq \|V (\cdot, \tau)\|_{L^\infty} \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_{L^2} \leq C \|\Omega (\cdot, \tau)\|_X \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_{L^2}\leq C e^{(2a_0 +{\delta_0}) \tau}\, .
\end{align}
To estimate $\mathscr{F}_4$ we recall that
\begin{equation}
\Omega_r (\xi, \tau) = \beta (1-\chi (e^{\tau/\alpha} \xi)) \bar\Omega (\xi) + e^{\tau/\alpha} (\beta \zeta (|\xi|) |\xi|) \chi' (e^{\tau/\alpha} \xi)\, .
\end{equation}
Differentiating the latter identity we get:
\begin{align}
|\nabla \Omega_r (\xi, \tau)| \leq & C \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} |D \bar \Omega| (\xi)
+ C e^{\tau/\alpha} (|\bar\Omega| (\xi) + | D (\zeta (|\xi|) |\xi|)|) \mathbf{1}_{e^{-\tau/\alpha} R\geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
& + C e^{2\tau/\alpha} |\zeta (|\xi|)| |\xi| \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
\leq & C \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} |\xi|^{-1-{\bar\alpha}} + C (e^{\tau/\alpha} |\xi|^{-{\bar\alpha}} + e^{2\tau/\alpha} |\xi|^{1-{\bar\alpha}}) \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\label{e:sfava}
\end{align}
where we are assuming that ${\rm spt}\, (\chi) \subset B_R$. We next use Proposition \ref{p:X-bounds} with $\kappa = \alpha/2$ to get $\|V (\cdot, \tau)\|_{L^\infty} \leq C \|\Omega (\cdot, \tau)\|_X\leq C e^{(a_0+\delta_0)\tau}$.
In particular we can estimate
\begin{align*}
\int |\mathscr{F}_4 (\xi, \tau)|^2 d\xi \leq & C e^{2(a_0+\delta_0) \tau} \int_{e^{-\tau/\alpha}}^\infty r^{-1-{ 2{\bar\alpha}}} d r+ C e^{2(a_0+\delta_0 + 1/\alpha)\tau} \int_{e^{-\tau/\alpha}}^{e^{-\tau/\alpha} R} r^{-{ 2{\bar\alpha}} +1}\, dr\\
& + C e^{2(a_0+\delta_0
+ 2/\alpha)\tau} \int_{e^{-\tau/\alpha}}^{e^{-\tau/\alpha} R} r^{3 -{ 2{\bar\alpha}}}\, dr \leq C e^{(2 a_0+ 2\delta_0 + {2})\tau}\, .
\end{align*}
We thus conclude
\begin{equation}\label{e:F-4}
\|\mathscr{F}_4 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1) \tau}\, .
\end{equation}
For $\mathscr{F}_5$ we use again $\|V (\cdot, \tau)\|_{L^\infty} \leq C e^{(a_0+\delta_0) \tau}$ to get
\begin{equation}\label{e:F-5}
\|\mathscr{F}_5 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0)\tau} \| D\Omega (\cdot, \tau)\|_2 \leq C e^{2(a_0+\delta_0)\tau}\, .
\end{equation}
The estimate for $\mathscr{F}_6$ follows easily from Lemma \ref{l:pointwise} and Lemma \ref{l:initial-bound}:
\begin{equation}\label{e:F-6}
\|\mathscr{F}_6 (\cdot, \tau)\|_2 \leq \|V_{\text{lin}} (\cdot, \tau)\|_\infty \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_2
\leq C e^{2(a_0+\delta_0) \tau}\, .
\end{equation}
$\mathscr{F}_7$ and $\mathscr{F}_8$ can be easily estimated using the explicit formula for $\Omega_r$ and the decay estimates given by Lemma \ref{l:pointwise} for $\Omega_{\text{lin}}$, in particular they enjoy an estimate which is better than \eqref{e:F-4}, i.e.
\begin{equation}\label{e:F-7+8}
\|\mathscr{F}_7 (\cdot, \tau)\|_2 + \|\mathscr{F}_8 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
Assuming that $a_0$ is sufficiently large we then achieve the estimate
\begin{equation}\label{e:F-all-together}
\|\mathscr{F} (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
Inserting this into \eqref{e:Duhamel} and choosing $\varepsilon < 1/2 +\delta_0$, we then achieve
\begin{equation}\label{e:baseline-bar-tau}
\|\Omega (\cdot, \bar\tau)\|_2 \leq C e^{(a_0+\varepsilon) \bar \tau} \int_{-k}^{\bar \tau} e^{(\delta_0+1/2 -\varepsilon) s}\, ds
\leq C e^{(a_0+\delta_0 + 1/2)\bar \tau}\, .
\end{equation}
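Here we have used the elementary computation (the exponent is positive thanks to the choice $\varepsilon < 1/2+\delta_0$)
\[
\int_{-k}^{\bar \tau} e^{(\delta_0+1/2 -\varepsilon) s}\, ds \leq \frac{1}{\delta_0+1/2-\varepsilon}\, e^{(\delta_0+1/2-\varepsilon)\bar \tau}\, ,
\]
which, combined with the prefactor $e^{(a_0+\varepsilon)\bar\tau}$, gives the exponent $a_0+\delta_0+1/2$.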
In fact, observe that the argument just given implies the stronger conclusion
\begin{equation}\label{e:stronger-baseline}
\|\Omega (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0 + 1/2) \tau}\, , \qquad \forall \tau \in [-k, \bar \tau]\, .
\end{equation}
\section{Estimates on the first derivative}
In this section we prove \eqref{e:stima-H1-pesata} and \eqref{e:stima-L4}.
The proof will be achieved via $L^2$ and $L^4$ energy estimates, where we will differentiate \eqref{e:master} first with respect to the angular variable and then with respect to the radial variable. We start by rewriting \eqref{e:master} as
\begin{align}
& \partial_\tau \Omega - \Omega + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega\nonumber\\
= & - \beta (V \cdot \nabla) \bar \Omega - (V\cdot \nabla)\Omega_{\text{lin}} - (V\cdot \nabla) \Omega_r - (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r \cdot \nabla) \Omega_{\text{lin}}\nonumber\\
& - (V_{\text{lin}}\cdot \nabla) \Omega_r = : \mathscr{G}\, .\label{e:master-10}
\end{align}
We next differentiate in $\theta$. In order to simplify our notation we will write $\theta$ in the subscript (or $,\theta$ if there is already another subscript). We also recall that $\Omega_r$, $\bar\Omega$ are radial functions, while $(V_r \cdot \nabla)$ and $(\bar V \cdot \nabla)$ are angular derivatives times radial functions, and $\xi\cdot \nabla$ is a radial derivative times a radial function. So we can write
\begin{align}
& \partial_\tau \Omega_\theta - \Omega_\theta + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega_\theta\nonumber\\
= &\mathscr{G}_\theta - (V_{\text{lin}, \theta}\cdot \nabla) \Omega - (V_\theta \cdot \nabla) \Omega =: \mathscr{H}_1\, \label{e:master-angular-1}\\
& \partial_\tau \frac{\Omega_\theta}{r} + \left(\frac{1}{\alpha}-1\right) \frac{\Omega_\theta}{r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \frac{\Omega_\theta}{r}\nonumber\\
= &\frac{1}{r} \mathscr{G}_\theta - \frac{1}{r} (V_{\text{lin}, \theta}\cdot \nabla) \Omega - \frac{1}{r} (V_\theta \cdot \nabla) \Omega
+ \Omega_\theta ((V_{\text{lin}} + V) \cdot \nabla)\frac{1}{r}
= :\mathscr{H}_2\, . \label{e:master-angular-2}
\end{align}
We then multiply the first equation by $\Omega_\theta$ and integrate by parts the terms on the left-hand side to conclude
\begin{align}
\frac{d}{d\tau} \frac{1}{2} \|\Omega_\theta (\cdot, \tau)\|_2^2 &= \left(1 -\frac{1}{\alpha}\right) \|\Omega_\theta (\cdot, \tau)\|_2^2
+ \int \mathscr{H}_1 (\xi, \tau) \Omega_\theta (\xi, \tau)\nonumber\\
& \leq \|\mathscr{H}_1 (\cdot, \tau)\|_2 \|\Omega_\theta (\cdot, \tau)\|_2\label{e:first-energy-est}
\end{align}
Likewise we multiply the second identity by $(\frac{1}{r} \Omega_\theta)^3$ and integrate by parts to achieve
\begin{align}
\frac{d}{d\tau} \frac{1}{4} \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^4 &= \left(1 -\frac{1}{\alpha}\right) \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^4
+ \int \mathscr{H}_2 (\xi, \tau) (r^{-1} \Omega_\theta (\xi, \tau))^3\nonumber\\
&\leq \|\mathscr{H}_2 (\cdot, \tau)\|_4 \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^3\, .\label{e:second-energy-est}
\end{align}
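Before doing so, we record the elementary consequence of the two differential inequalities above: at every $\tau$ where the norms involved do not vanish, dividing \eqref{e:first-energy-est} by $\|\Omega_\theta (\cdot, \tau)\|_2$ and \eqref{e:second-energy-est} by $\|r^{-1}\Omega_\theta (\cdot, \tau)\|_4^3$ yields
\[
\frac{d}{d\tau} \|\Omega_\theta (\cdot, \tau)\|_2 \leq \|\mathscr{H}_1 (\cdot, \tau)\|_2\qquad \hbox{and} \qquad \frac{d}{d\tau} \|r^{-1}\Omega_\theta (\cdot, \tau)\|_4 \leq \|\mathscr{H}_2 (\cdot, \tau)\|_4\, .
\]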
We next wish to estimate the right-hand sides, namely $\|\mathscr{H}_1 (\cdot, \tau)\|_2$ and $\|\mathscr{H}_2 (\cdot, \tau)\|_4$.
We summarize the relevant estimates in Lemma \ref{l:ugly-lemma} below. Note that they imply
\begin{align}
\frac{d}{d\tau} \|\Omega_\theta (\cdot, \tau)\|_2 &\leq C e^{(a_0+2\delta_0)\tau}\, ,\\
\frac{d}{d\tau} \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4 &\leq C e^{(a_0+2\delta_0)\tau}\, ,
\end{align}
Integrating these estimates between $-k$ and $\bar\tau$ we conclude
\begin{align}
\|\Omega_\theta (\cdot, \bar \tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\bar \tau}\, ,\\
\|r^{-1} \Omega_\theta (\cdot, \bar \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\bar \tau}\, .
\end{align}
But in fact the very same argument gives the stronger conclusions:
\begin{align}
\|\Omega_\theta (\cdot, \hat\tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\hat \tau}\, \qquad &\forall \hat \tau\in [-k, \bar \tau ]\, ,\\
\|r^{-1} \Omega_\theta (\cdot, \hat \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\hat \tau}\qquad &\forall \hat \tau\in [-k, \bar\tau]\, .
\end{align}
\begin{lemma}\label{l:ugly-lemma}
Under the assumptions of Lemma \ref{l:final-estimates} we have
\begin{align}
\|D \mathscr{G} (\cdot, \tau)\|_{4} &\leq C e^{(a_0+2\delta_0) \tau}\label{e:DG}\\
\|r D\mathscr{G} (\cdot, \tau)\|_{2} &\leq C e^{(a_0+2\delta_0) \tau}\label{e:rDG}\\
\||D V_{\text{lin}}| |\nabla \Omega| (\cdot, \tau)\|_4 + \||D V| |\nabla \Omega| (\cdot, \tau) \|_4 &\leq C e^{(a_0+2 \delta_0) \tau}\label{e:DVDOmega}\\
\|r |D V_{\text{lin}}| |\nabla \Omega| (\cdot, \tau)\|_2 + \|r |D V| |\nabla \Omega| (\cdot, \tau) \|_2 &\leq C e^{(a_0+2 \delta_0) \tau}\label{e:rDVDOmega}\\
\|r^{-1}V_{\text{lin}} D \Omega (\cdot, \tau)\|_4 + \|r^{-1} V D\Omega (\cdot, \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\tau}\, .\label{e:comm-term}
\end{align}
\end{lemma}
\begin{proof}
{\bf Proof of \eqref{e:DG} and of \eqref{e:rDG}.} We break the terms as
\begin{align}
\|D\mathscr{G}\|_4 \leq & C \|DV D\bar\Omega\|_4 + C \|V D^2 \bar \Omega\|_4 + \|DVD\Omega_{\text{lin}}\|_4 + \|VD^2\Omega_{\text{lin}}\|_4\nonumber\\
&+ C \|DVD\Omega_r\|_4 + C \|VD^2\Omega_r\|_4 + C \|DV_{\text{lin}}D\Omega_{\text{lin}}\|_4 + C \|V_{\text{lin}}D^2 \Omega_{\text{lin}}\|_4\nonumber\\
& + \|D V_r D\Omega_{\text{lin}}\|_4 + \|V_r D^2 \Omega_{\text{lin}}\|_4 + \|DV_{\text{lin}}D\Omega_r\|_4 + \|V_{\text{lin}} D^2 \Omega_r\|_4\,
\end{align}
and
\begin{align}
\|rD\mathscr{G}\|_2 \leq & C \|rDV D\bar\Omega\|_2 + C \|rV D^2 \bar \Omega\|_2 + \|rDVD\Omega_{\text{lin}}\|_2 + \|rVD^2\Omega_{\text{lin}}\|_2\nonumber\\
&+ C \|rDVD\Omega_r\|_2 + C \|VrD^2\Omega_r\|_2 + C \|rDV_{\text{lin}}D\Omega_{\text{lin}}\|_2 + C \|rV_{\text{lin}}D^2 \Omega_{\text{lin}}\|_2\nonumber\\
& + \|rD V_r D\Omega_{\text{lin}}\|_2 + \|rV_r D^2 \Omega_{\text{lin}}\|_2 + \|rDV_{\text{lin}}D\Omega_r\|_2 + \|rV_{\text{lin}} D^2 \Omega_r\|_2\, .
\end{align}
The terms involving $\Omega$ and $\bar\Omega$ are where we use the baseline $L^2$ estimate. Observe that
\begin{equation}\label{e:interpolating-baseline}
\|\Omega (\cdot, \tau)\|_{4} \leq C \|\Omega (\cdot, \tau)\|_{2}^{1/2} \|D\Omega (\cdot, \tau)\|^{1/2}_{2}
\leq C e^{(a_0 + \delta_0+1/4)\tau}\,
\end{equation}
and, by Calder{\'o}n-Zygmund,
\begin{equation}\label{e:CZ-baseline}
\|D K_2* \Omega (\cdot, \tau)\|_{4} \leq C \|\Omega (\cdot, \tau)\|_{4} \leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{equation}
Next we estimate
\begin{align}
\|DV D\bar\Omega (\cdot, \tau)\|_4 \leq \|D \bar \Omega (\cdot, \tau)\|_\infty \|DV (\cdot, \tau)\|_4 \leq C \|\Omega (\cdot, \tau)\|_4
\leq C e^{(a_0+\delta_0+1/4) \tau}\, \label{e:per-bar-1}\\
\|r DV D\bar\Omega (\cdot, \tau)\|_{L^2} \leq \|r D \bar \Omega (\cdot, \tau)\|_\infty \|DV (\cdot, \tau)\|_2 \leq C \|\Omega (\cdot, \tau)\|_{L^2}
\leq C e^{(a_0+\delta_0+1/2) \tau}\label{e:per-bar-1-weight}
\end{align}
Next, recalling Lemma \ref{l:extension} we get
\begin{equation}
\|V (\cdot , \tau)\|_{L^2 (B_R)} \leq C R \|\Omega (\cdot, \tau)\|_2 \leq C R e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
However, using $\int_{B_R} V (\cdot, \tau) =0$, we can in fact estimate also
\begin{equation}
\|V (\cdot, \tau)\|_{L^4 (B_R)} \leq C R^{1/2} \|\Omega (\cdot, \tau)\|_2 \leq C R^{1/2} e^{(a_0+\delta_0+1/2)\tau}
\end{equation}
In particular we can infer
\begin{equation}
\| (1+|\xi|)^{-(1+\varepsilon)} V (\cdot, \tau)\|_2 \leq C(\varepsilon) e^{(a_0+\delta_0+1/2) \tau}
\end{equation}
and
\begin{equation}
\| (1+|\xi|)^{-(1/2+\varepsilon)} V (\cdot, \tau)\|_4 \leq C (\varepsilon) e^{(a_0+\delta_0+1/2) \tau}
\end{equation}
for every positive $\varepsilon$. On the other hand, given that $|D^2 \bar \Omega (\xi)|\leq C (1+|\xi|)^{-2-{\bar\alpha}}$, we easily infer
\begin{align}
\|V D^2 \bar \Omega (\cdot, \tau)\|_4 &\leq C \|(1+|\xi|)^{-{1}} V (\cdot, \tau)\|_4 \leq C e^{(a_0+\delta_0+1/2) \tau}\label{e:per-bar-2}\\
\|r V D^2 \bar \Omega (\cdot, \tau)\|_2 &\leq C \|(1+|\xi|)^{-1-{\bar\alpha}} V (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .\label{e:per-bar-2-r}
\end{align}
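For completeness we sketch how the bound $\| (1+|\xi|)^{-(1+\varepsilon)} V (\cdot, \tau)\|_2 \leq C(\varepsilon) e^{(a_0+\delta_0+1/2) \tau}$ above follows from the local estimate $\|V (\cdot , \tau)\|_{L^2 (B_R)} \leq C R e^{(a_0+\delta_0+1/2) \tau}$ (a dyadic decomposition, applied with $R=2^{j+1}$; the constants change from line to line and the $L^4$ version is entirely analogous):
\begin{align*}
\| (1+|\xi|)^{-(1+\varepsilon)} V (\cdot, \tau)\|_2^2 &\leq C \sum_{j\geq 0} 2^{-2j(1+\varepsilon)} \|V (\cdot, \tau)\|_{L^2 (B_{2^{j+1}})}^2\\
&\leq C \sum_{j\geq 0} 2^{-2j(1+\varepsilon)}\, 2^{2(j+1)}\, e^{2(a_0+\delta_0+1/2) \tau} \leq C(\varepsilon)\, e^{2(a_0+\delta_0+1/2) \tau}\, .
\end{align*}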
From now on we will not handle the terms with the weight $r$ as the proof is entirely analogous: we will just focus on the $L^4$ estimates and leave to the reader the computations with the weight.
For the two quadratic terms in $V_{\text{lin}}$ we can use Lemma \ref{l:pointwise} to achieve
\begin{equation}\label{e:lin-lin}
\|DV_{\text{lin}}D\Omega_{\text{lin}} (\cdot, \tau)\|_4 + \|V_{\text{lin}}D^2 \Omega_{\text{lin}} (\cdot, \tau)\|_4 \leq C e^{2a_0 \tau}\, .
\end{equation}
Likewise we can estimate
\begin{equation}\label{e:lin-per}
\|DV D \Omega_{\text{lin}} (\cdot, \tau)\|_4 + \|V D^2\Omega_{\text{lin}}\|_4 \leq C e^{a_0\tau}\|\Omega (\cdot, \tau)\|_X \leq C e^{(2a_0+\delta_0) \tau}\, ,
\end{equation}
(where for the second term we use the decay at infinity of $D^2 \Omega_{\text{lin}}$ to compensate for the moderate growth of $V$; the argument is the same as for \eqref{e:per-bar-2} and we do not repeat it here).
Observe next that, by \eqref{e:sfava}, $\|D \Omega_r (\cdot, \tau)\|_\infty \leq C$ for $\tau\leq 0$. Hence the term $DV D\Omega_r$ can be estimated as in \eqref{e:per-bar-1}:
\begin{equation}\label{e:per-err-1}
\|DV D\Omega_r (\cdot, \tau)\|_4 \leq C \|DV (\cdot, \tau)\|_4 \leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{equation}
As for the other term, differentiating once more and arguing as for \eqref{e:sfava} we get:
\begin{align}
& |D^2 \Omega_r (\xi, \tau)| \nonumber\\
\leq & C |\xi|^{-2-{\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + (e^{\tau/\alpha} |\xi|^{-1-{\bar\alpha}} + e^{2\tau/\alpha} |\xi|^{-{\bar\alpha}} + e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}}) \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
\leq & C |\xi|^{-2-{\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}} \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\, . \label{e:sfava-2}
\end{align}
We can thus argue similarly as for \eqref{e:per-bar-2} to conclude
\begin{align}\label{e:per-err-2}
\|V D^2 \Omega_r (\cdot, \tau)\|_4 &\leq C \|(1+|\xi|)^{-3/2} V (\cdot, \tau)\|_4 + C e^{3\tau/\alpha} \|V\|_{L^4 (B_{R e^{-\tau/\alpha}})}\nonumber\\ &\leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{align}
In order to handle the remaining four terms, we recall that, by Lemma \ref{l:pointwise},
\begin{align}
\|V_{\text{lin}} (\cdot, \tau) \|_\infty &\leq C e^{a_0 \tau}\\
|DV_{\text{lin}} (\xi, \tau)|
&\leq C e^{a_0\tau} |\xi|^{-2-{\bar\alpha}}\\
|D^k \Omega_{\text{lin}} (\xi, \tau)| &\leq C e^{a_0\tau} |\xi|^{-2-k-{\bar\alpha}}\, .
\end{align}
On the other hand, owing to the computations in this and the previous section we can also write
\begin{align*}
|V_r (\xi, \tau)| &\leq C |\xi|^{1- {\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}}\\
|DV_r (\xi, \tau)| + |\Omega_r (\xi, \tau)| &\leq C |\xi|^{-{\bar\alpha}}\mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{\tau/\alpha} |\xi|^{1-{\bar\alpha}}
\mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\\
|D^k \Omega_r (\xi, \tau)| &\leq C |\xi|^{-k-{\bar\alpha}}\mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{k \tau/\alpha} |\xi|^{1-{\bar\alpha}}
\mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\, .
\end{align*}
Integrating the estimates over the respective domains we easily get
\begin{equation}\label{e:lin-err}
\|D V_r D\Omega_{\text{lin}}\|_4 + \|V_r D^2 \Omega_{\text{lin}}\|_4 + \|DV_{\text{lin}}D\Omega_r\|_4 + \|V_{\text{lin}} D^2 \Omega_r\|_4\leq C e^{(a_0+1)\tau}\, .
\end{equation}
\medskip
{\bf Remaining estimates.} The two estimates \eqref{e:DVDOmega} and \eqref{e:rDVDOmega} have already been covered in the argument above. It remains to handle \eqref{e:comm-term}. Notice that, by Lemma \ref{l:pointwise} and Proposition \ref{p:X-bounds} we have
\begin{align}
\|r^{-1} V (\cdot, \tau)\|_\infty &\leq C \|\Omega (\cdot, \tau)\|_X \leq C e^{(a_0+\delta_0) \tau}\\
\|r^{-1} V_{\text{lin}} (\cdot, \tau)\|_\infty & \leq C e^{a_0\tau}\, .
\end{align}
We thus conclude easily
\begin{align}
\|r^{-1} V D\Omega (\cdot, \tau)\|_4 & \leq C e^{(a_0+\delta_0) \tau}\|D\Omega (\cdot, \tau)\|_4 \leq C e^{2(a_0+\delta_0) \tau}\, \\
\|r^{-1} V_{\text{lin}} D\Omega(\cdot, \tau)\|_4 &\leq C e^{a_0\tau}\|D\Omega (\cdot, \tau)\|_4 \leq C e^{(2a_0+\delta_0) \tau}\, .
\end{align}
\end{proof}
We next differentiate \eqref{e:master-10} in $r$ in order to obtain identities similar to \eqref{e:master-angular-1} and \eqref{e:master-angular-2}. This time, given the ambiguity with $V_r$ and $\Omega_r$, we write $,r$ in the subscript to denote the radial derivative of {\em any} function.
\begin{align}
& \partial_\tau \Omega_{,r} + \left(1-\frac{1}{\alpha}\right) \Omega_{,r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega_{,r}\nonumber\\
= &\mathscr{G}_{,r} - (V_{\text{lin}, r}\cdot \nabla) \Omega - (V_{,r} \cdot \nabla) \Omega - \beta (\bar V_{,r} \cdot \nabla) \Omega - (V_{r,r}\cdot \nabla ) \Omega \label{e:master-radial-1}\\
& \partial_\tau r\Omega_{,r} - r \Omega_{,r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_{\text{lin}} + V \right)\cdot \nabla\right) (r \Omega_{,r})\nonumber\\
= & r \mathscr{G}_{,r} - r (V_{\text{lin}, r}\cdot \nabla) \Omega - r (V_{,r} \cdot \nabla) \Omega - r (\bar V_{,r} \cdot \nabla) \Omega - r (V_{r,r}\cdot \nabla ) \Omega\nonumber\\
& + \Omega_{,r} (V_{\text{lin}} + V)\cdot \nabla r \, . \label{e:master-radial-2}
\end{align}
Multiplying by $(\Omega_{,r})^3$ and $r \Omega_{,r}$ respectively, and using the estimates \eqref{e:DG} and \eqref{e:rDG}, we achieve, in the respective cases:
\begin{align}
\frac{d}{d\tau} \|\Omega_{,r} (\cdot, \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\tau}
+ C \|D V_{\text{lin}} D\Omega (\cdot, \tau)\|_4 + C\|D V D \Omega (\cdot, \tau)\|_4\nonumber\\
& \qquad\qquad + C \|\bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_4 + \|DV_r D \Omega (\cdot, \tau)\|_4\label{e:master-radial}\\
\frac{d}{d\tau} \|r\Omega_{,r} (\cdot, \tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\tau} + C \|r V_{\text{lin}, r} D\Omega (\cdot, \tau)\|_2 + C\|r (V_{,r}\cdot \nabla) \Omega (\cdot, \tau)\|_2\nonumber\\
&+ C \|r \bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_2 + \|r (V_{r,r} \cdot\nabla) \Omega (\cdot, \tau)\|_2
+ \|V_{\text{lin}} D\Omega \|_2 + \|V D\Omega\|_2\label{e:master-radial-r}\, .
\end{align}
Note next that
\begin{align}
\|D V_{\text{lin}} D\Omega (\cdot, \tau)\|_4 + \|D V D \Omega (\cdot, \tau)\|_4 &\leq (\|DV_{\text{lin}} (\cdot, \tau)\|_\infty +
\|DV (\cdot, \tau)\|_\infty) \|D\Omega (\cdot, \tau)\|_4\nonumber\\
& \leq C e^{(2a_0+\delta_0)\tau}\, ,
\end{align}
and likewise
\begin{align}
\|r D V_{\text{lin}} D\Omega (\cdot, \tau)\|_2 + \|r D V D \Omega (\cdot, \tau)\|_2 &\leq (\|DV_{\text{lin}} (\cdot, \tau)\|_\infty +
\|DV (\cdot, \tau)\|_\infty) \|r D\Omega (\cdot, \tau)\|_2\nonumber\\
& \leq C e^{(2a_0+\delta_0)\tau}\, ,
\end{align}
The terms $(\bar V_{, r} \cdot \nabla) \Omega$ and $r(\bar V_{,r} \cdot \nabla)\Omega$ can be bounded by observing that they involve only the angular derivative and that $\bar V_{,r}$ is bounded. Since the angular derivative has already been estimated (this is the reason for estimating it {\em before} estimating the radial derivative), we get
\begin{align}
\|\bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_4 &\leq C \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4 \leq C e^{(a_0+2\delta_0)\tau}\, .\\
\|r \bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_2 &\leq C \|\Omega_\theta (\cdot, \tau)\|_2 \leq C e^{(a_0+2 \delta_0)\tau}\, .
\end{align}
As for $DV_r D\Omega$ and $r DV_r D\Omega$, we observe that $\|DV_r\|_\infty \leq e^\tau$ and thus
we easily get
\begin{align}
\|DV_r D\Omega (\cdot, \tau)\|_4 &\leq Ce^{\tau} \| D\Omega (\cdot, \tau)\|_4 \leq Ce^{(a_0+\delta_0+1)\tau}\\
\|r DV_r D\Omega (\cdot, \tau)\|_2 &\leq Ce^{\tau} \|r D\Omega (\cdot, \tau)\|_2 \leq Ce^{(a_0+\delta_0+1)\tau}
\end{align}
We finally need to estimate $\|V_{\text{lin}} D\Omega\|_2$ and $\|V D\Omega\|_2$, but we observe that this has already been done in the previous section, since they correspond to the terms $\mathscr{F}_1$ and $\mathscr{F}_5$ in \eqref{e:master-2}, cf. \eqref{e:F-1} and \eqref{e:F-5}.
Summarizing, we conclude
\begin{align}
\frac{d}{d\tau} \|\Omega_{,r} (\cdot, \tau)\|_4 &\leq C e^{(a_0+\delta_0+1/2)\tau}\\
\frac{d}{d\tau} \|r\Omega_{,r} (\cdot, \tau)\|_2 &\leq C e^{(a_0+\delta_0+1/2)\tau}\, ,
\end{align}
which we then integrate between $-k$ and $\bar\tau$ to achieve the desired conclusion.
\section{The context}
The question at issue [1] is the relation between the fully covariant
analysis of fluid flows in general relativity [2-5] and that
of Newtonian gravitational theory, and their
implications for the study of inhomogeneities in cosmology ([6-9]
and references therein). The particular point that has generated
interest has been the recent discovery of a family of General
Relativity exact solutions -- those with vanishing magnetic part
of the Weyl tensor -- where neighbouring flow lines evolve
independently of each other, in the sense that this evolution
depends only on the local fluid variables [10-14]. Actually it
has been known for a long time that this is true for spherically
symmetric solutions, but the solutions now under consideration
include the relativistic version of the Zeldovich theory
of gravitational collapse [15], however predicting filamentary
gravitational collapse rather than sheetlike gravitational
collapse as in the Zeldovich analysis. It has been suggested that
this difference arises because Newtonian theory is essentially
non-local, and that the relativistic theory should have a non-zero
magnetic part of the Weyl tensor to give corresponding non-local
physics. The issue then arises, does the magnetic part of the
Weyl tensor necessarily vanish in the Newtonian limit (as claimed
in [4]), and are the Lagrangian evolution equations in that
limit local in general? According to [1], the answers to both
questions are no.\\
However this discussion has not referred back to the analyses of
Newtonian cosmology by Heckmann and Schucking [16-18], where it
is shown firstly that {\it no}\, Newtonian cosmology -- based on a
potential and Poisson equation -- is possible without some
extension of Newtonian theory as normally understood; and that
the obvious extension of Newtonian theory to the cosmological
situation is essentially non-local -- so that local physics
cannot be decoupled from instantaneous boundary conditions at
infinity. This is because Newtonian theory is a singular limit of
general relativity theory, precluding the possibility of
gravitational waves proceeding at a finite speed. It does not in an
obvious way involve an analogue of the magnetic part of the
Weyl tensor [4].\\
By contrast, the Bertschinger-Hamilton theory [1] leads to influences
propagating at a finite speed, and so their effects do not arrive
at any event instantaneously from infinity. In this sense their
theory - which allows gravitational waves -- is a better
approximation to the correct classical theory of gravity (namely,
general relativity) than is Newtonian theory proper (or more
strictly, the Heckmann-Schucking version of Newtonian theory [16-18],
with boundary conditions allowing spatially
homogeneous cosmologies). In
what follows, to save repetition of these names we shall refer to
the different theories as BH (Bertschinger-Hamilton [1]), HS
(Heckmann-Schucking [16-18]), NG or NT (standard Newtonian
gravitational theory expressed in a potential formalism), and GR
(general relativity theory).
\section{General Relativity Theory}
In GR applied to cosmology in a covariant manner [2-5], one decomposes
geometrical and physical quantities relative to a preferred 4-velocity
vector $u^a$ - the fundamental velocity that underlies any coherent
cosmological model [2-5]. This defines the fluid expansion
$\Theta$, shear $\sigma_{ab}$, vorticity $\omega_{ab}$, and
acceleration $a_a \equiv \dot{u}_a$. Their time derivatives
(denoted by a dot, e.g. $\dot{\Theta})$ are determined by the
matter density $\mu$ and pressure $p$ (here we restrict our
consideration to perfect fluids) together with the electric part
$E_{ab}$ of the Weyl tensor, whose derivative in turn is
determined by the magnetic part $H_{ab}$ of the Weyl tensor,
where these quantities are defined respectively from the Weyl
tensor $C_{abcd}$ by
\begin{equation}
E_{ac} = C_{abcd} u^b u^d, ~~~ H_{ac} = C_{abef} u^b
\eta^{ef}{}_{ch} u^h
\end{equation}
(each being a trace-free symmetric tensor that is orthogonal to
$u^a$). The quantities $E_{ab}$ and $H_{ab}$ obey
equations very similar in structure to Maxwell's equations written
relative to a general family of observers [3-5]. A particular
consequence is that $E_{ab}$ and $H_{ab}$ each obey a wave
equation, with somewhat complicated source terms (see [3] for the
case of an almost-Robertson-Walker space time).\\
In the case of interest, we specialize to `dust', that is the
pressure $p$ vanishes and consequently the fundamental flow lines
are geodesic ($a_a = \dot{u}_a = 0$). The resulting equations are
given below.
\section{Newtonian Theory and Cosmology}
Attempts to use Newton's force law in the case of an infinitely
spread out medium, such as in a spatially homogeneous Newtonian
cosmology, are plagued by infinities (because the matter source
of the gravitational force is unbounded), so in setting up the equations of
Newtonian cosmology, an approach based on a potential $\Phi$
is preferable. \\
The potential must satisfy the Poisson equation:
\begin{equation}
\nabla^2 \Phi = - 4 \pi G \rho
\end{equation}
where $\rho$ is the matter density. This has to be supplemented
by boundary conditions at infinity in order to get a unique
solution; the usual ones are
\begin{equation}
\lim_{r \rightarrow \infty} \Phi = 0\,.
\end{equation}
We take eqns. (2,3) as defining standard Newtonian gravity. Now
the problem is that these cannot give us a sensible cosmological
model, for (3) is incompatible with a homogeneous distribution of
matter [16]; in order to deal with an infinite matter distribution we
have to drop (3) and replace it by some other boundary condition
which allows the potential to diverge at infinity, as for example
in the Newtonian version of the Robertson-Walker universe
models.\\
What are suitable boundary conditions? Following HS, we define
the Newtonian analogue of $E_{ab}$ by the equation
\begin{equation}
E_{\alpha \beta} = \Phi_{,\alpha\beta} - {1 \over 3}
h_{\alpha\beta} \nabla^2\Phi ~~~\Rightarrow ~~E_{\alpha\beta} =
E_{\beta\alpha}, ~~E^\alpha{}_\alpha = 0\, .
\end{equation}
Then the Newtonian version of Robertson-Walker models are compatible with
the condition
\begin{equation}
\lim_{r \rightarrow \infty} E_{\alpha \beta} = 0\,,
\end{equation}
which is thus a weakening of condition (3) that allows one to
handle spatially homogeneous cosmological models, which
necessarily have uniform density\footnote{One also has to
generalise the idea of inertial motion to free-fall motion, in
analogy with general relativity, in order to define spatially
homogeneous Newtonian models such as the Newtonian version of the
FRW models; that issue is not of concern here.}. However this
condition excludes the Newtonian analogs of the Kantowski-Sachs
and Bianchi spatially homogeneous but anisotropic universe models - which
are amongst the simplest generalisations of the FRW universes. Thus HS
proposed instead the condition
\begin{equation}
\lim_{r \rightarrow \infty} E_{\alpha \beta} = E_{\alpha
\beta}(t)|_\infty
\end{equation}
where $E_{\alpha \beta}(t)|_\infty$ are arbitrary functions of time
\footnote{in general they should depend on polar angles $\theta$
and $\phi$ also; however in the context of spatially homogeneous analogues
of the Bianchi models, they can depend only on $t$.}.
Thus the limit of the components $E_{\alpha \beta}$ at spatial
infinity may be arbitrarily prescribed as functions of time; they
then propagate in to any point with infinite speed, according to
the equation
\begin{equation}
E^\alpha{}_{\beta,\alpha} = {8\pi\over3}G \rho_{,\beta}
\end{equation}
which follows from the Poisson equation (2) and definition (4).\\
There has been some debate over these boundary conditions, with
Narlikar [19] suggesting that (5) are in fact better conditions to use
for Newtonian cosmology than (6) - but thereby excluding from his
consideration that family of NT models that correspond to the
Bianchi GR solutions allowed by (6) ([17,20]). Both are of course
generalisations of the `true' Newtonian conditions (3).
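As a simple check of these statements (using the sign convention of equation (2) as written here, and Cartesian coordinates so that $h_{\alpha\beta}=\delta_{\alpha\beta}$): for a spatially homogeneous density $\rho(t)$ a particular solution of (2) is $\Phi = -{2\pi\over3}G\rho(t) r^2$, which diverges at infinity and so violates (3), while by the definition (4)
$$
E_{\alpha\beta} = \Phi_{,\alpha\beta} - {1\over3}h_{\alpha\beta}\nabla^2\Phi
= -{4\pi\over3}G\rho\,\delta_{\alpha\beta} + {4\pi\over3}G\rho\,\delta_{\alpha\beta} = 0\, ,
$$
so that (5), and a fortiori (6), are satisfied. This is the sense in which (5) and (6) relax (3) just enough to admit the homogeneous models.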
Incidentally, one can ask here why one does not specify either
$\lim_{r\rightarrow\infty} \Phi = \Phi(t)|_\infty$ or $\lim_{r
\rightarrow \infty} \nabla^2 \Phi$; the reason is that the first
diverges, while the second is determined by the limiting mass
density at infinity (through the Poisson equation (2)).\\
The point that is fundamental to us here is that NT itself gives
{\it no} equation for $\Phi\,\dot{}$, and consequently can give
{\it no} equation for $E\,\dot{}_{\alpha\beta}$ (with
$E_{\alpha\beta}$ defined by (4)). This is true whether one
writes NT in Lagrangian or Eulerian coordinates. Any
specification one may obtain for these time derivatives thus {\it
either} results from positing a theory that is not NT itself, but
rather some generalisation of NT; {\it or} from positing some set
of boundary conditions, such as (5) or (6). The
latter option does not directly give any local {\it equation} for
$E\,\dot{}_{\alpha\beta}$, which is then determined non-locally by
the chosen condition at infinity plus the local matter
conservation equations and the divergence equation (7) for
$E_{\alpha\beta}$. NT itself (which obeys (3)) gives no clue as
to which of these generalised boundary conditions ((6) or its
specialisation (5)) we should choose. \\
We support the HS view that (6) rather than (5) is the better option.
In both cases, information immediately propagates in from infinity to
determine the local physical response; the fact that the information
imparted in (5) is the `null' case - there is no change in
$E_{\alpha\beta}$ at infinity - does not change the fact that we are
determining what happens locally by a choice of conditions at infinity;
and according to (6), these are arbitrarily specifiable.\\
One might ask what are the suitable conditions to use at infinity
for a Newtonian version of a perturbed FRW model, that has
statistically spatially homogeneous inhomogeneities superimposed
on a FRW background (which is necessarily spatially homogeneous
and therefore stretches to infinity). It would seem we are in
trouble here, for {\it none} of the conditions above seem
adequate to this case - because then there will be {\it no}
regular limit for $E_{\alpha\beta}$ as we go to spatial infinity
(because of the statistical fluctuations in the matter, this
quantity too may be expected to fluctuate statistically). We can
get a good description {\it either} by imposing periodic boundary
conditions, as is done in most numerical simulations - thus
avoiding the problem in the way first suggested by Einstein
through his proposal of spatially compact universe models (and
corresponding to the possibility of finding Newtonian versions of
the `small universe' idea [21]); {\it or} by insisting that the
perturbations die away outside some bounded domain, thus allowing
(5) for example as a boundary condition, at the expense of
denying the assumption of spatial homogeneity of conditions in
the universe - which is one of the central tenets of current
cosmological dogma. \\
Any use of NT to discuss the growth of perturbations in a cosmological
context should make it quite clear which of these options is adopted -
and why. It may perhaps be claimed that {\it physically} these both give
reasonable results - but that has to be shown. In our view the best
alternative would be a third option that has not so far been systematically
developed: namely, to adopt a Newtonian version of the `Finite Infinity'
proposal for the GR case [22], that is, to isolate the considered local
system by a sphere that is far enough away to be regarded as infinity
for all practical purposes but, because it is at a finite distance, can be
investigated easily and used as a surface where boundary conditions can
be imposed (and the residual influence of the outer regions on the
effectively `isolated' interior can thus be determined).
\section{The Bertschinger-Hamilton Theory}
The BH theory is developed as follows [1]:
the continuity and Poisson equations are written in `comoving' coordinates,
with a background FRW expansion factored out; thus these equations are
written for the fractional density perturbation $\delta$ and an associated
part $\phi$ of the full potential $\Phi$. This hides a tricky gauge
problem, because `comoving' here actually means relative to a fictitious
background FRW space-time (the matter moves relative to the chosen frame);
one might suggest it would in this context be preferable to use the
Newtonian version [23] of the GR gauge-invariant and covariant theory [6-9].
In the BH approach, the Newtonian gravity vector
$\vec{g}$ is defined from the perturbative potential $\phi$ by
$g = - \vec{\nabla}\phi$, and then in turn defines the tensor $E_{ij}$
by a perturbative version of (4), written in comoving coordinates. \\
BH then develop a series of equations that are very similar to those
obtained from the linearised GR equations. Substituting $\vec{g}$ into the
continuity equation gives
\begin{equation}
\vec{\nabla}.\left({\partial a \vec{g} \over \partial \tau}\right) =
4 \pi G a^3 \vec{\nabla}.\vec{f}, ~~~\vec{f} \equiv \rho \vec{v} =
\vec{f}_{||} + \vec{f}_\perp
\end{equation}
where the mass current has been decomposed in the comoving frame into
longitudinal and transverse parts obeying
\begin{equation}
\vec{\nabla} \times \vec{f}_{||} = 0, ~~~ \vec{\nabla}.\vec{f}_\perp = 0
\end{equation}
Then the transverse mass current is replaced by a transverse vector
field $\vec{H}$ obeying
\begin{equation}
\vec{\nabla}\times \vec{H} = - 16 \pi G a^2 \vec{f}_\perp,
{}~~~\vec{\nabla}.\vec{H} = 0
\end{equation}
From this in turn they define a traceless tensor
\begin{equation}
{\cal H}_{ij} \equiv - {1 \over 2} \nabla_j H_i + 2 v_k \epsilon^{kl}{}_i
\nabla_j g_l = H_{ij} + \epsilon _{ijk} A^k
\end{equation}
where $H_{ij}$ is the symmetric part of ${\cal H}_{ij}$ and $A^i$ the dual
of the anti-symmetric part. Now BH perform a series of manipulations, the key
ones of which are integrating (8) and substituting to get
\begin{equation}
{d\vec{g} \over d\tau} + {\dot{a}\over a}\vec{g} = 4 \pi G a^2
\bar{\rho} \vec{v} + \vec{A}
\end{equation}
and then taking the spatial gradient of this equation and substituting to
get
\begin{equation}
{d E_{ij} \over d\tau} + {\dot{a} \over a} E_{ij} - \nabla_k
\epsilon^{kl}{}_{(i} H_{j)l} + \Theta E_{ij} - h_{ij} \sigma^{kl} E_{kl} -
3 \sigma^k{}_{(i} E_{j)k} - \omega^k{}_{(i} E_{j)k}
= - 4 \pi G a^2 \rho \sigma_{ij}
\end{equation}
At this point there is something of a non-sequitur where a GR equation for
$E_{ij}$ is introduced; taking the curl of this equation and making some
substitutions BH obtain the time-derivative equation for $H_{ij}$:
\begin{equation}
{d H_{ij} \over d\tau} + {\dot{a} \over a} H_{ij} + \nabla_k
\epsilon^{kl}{}_{(i} E_{j)l} + \Theta H_{ij} - h_{ij} \sigma^{kl} H_{kl} -
3 \sigma^k{}_{(i} H_{j)k} - \omega^k{}_{(i} H_{j)k} = 0.
\end{equation}
Together with the divergence equations for $E_{ij}$ and $H_{ij}$, these give
the set of Maxwell-like equations for $E_{ij}$ and $H_{ij}$ that occur
in the linearised version of GR.\\
At this point it is quite clear that we have a theory that is something other
than NG, because (1) NG cannot determine the time evolution of $E_{ab}$ by
an equation like (13), as discussed in the previous section,
and (2) if we substitute (14) into the time derivative of (13) we
get a wave equation for $E_{ab}$, showing the possibility of gravitational
waves (with speed unity in the chosen units) in the BH theory (as in the
linearised GR theory, see [3]). However it is also clear on comparison with
the GR equations (see [1]) that the BH theory is a good generalisation of
Newtonian theory, in that it gives a good approximation to the GR equations.\\
How has it come about that the BH theory has produced a time evolution
equation for $E_{ij}$, and hence (with the time evolution equation for
$H_{ij}$) a wave equation for $E_{ij}$? The first point is
that, in going from (8) to (12), the solutions to the equations are
very under-determined, for example, there is considerable arbitrariness
in $\vec{g}$. Again in the series of manipulations that leads from
$\vec{f}$ to $\vec{H}$ to $H_{ij}$ and $A^k$, there is considerable
arbitrariness in these quantities if one allows for the general possible
solutions. In fact, if one traces what happens in these equations
(from the Newtonian viewpoint), {\it the tensor $H_{ij}$ is arbitrarily
specifiable}. This is because $\vec{f}$ determines only the curl of
$\vec{H}$ (by (10)); on integrating to determine $\vec{H}$, the trace-free
symmetric part of the gradient of $\vec{H}$ is (at least locally) arbitrarily specifiable;
but this is just $H_{ij}$. More precisely, the integrability equations
for $H_i$ to exist, given $H_{ij}$, are just the `divergence' equations
for $H_{ij}$ ((43) in BH); so $H_{ij}$ can be chosen as an arbitrary
solution of these spatial equations, allowing arbitrary time evolution.
Furthermore because of the arbitrary functions that occur in integrating
(8), there is also considerable arbitrariness in $A^i$.\\
How then do we get an equation ((13) here, (49) in BH) that gives the
time derivative of $E_{ij}$ in terms of $H_{ij}$? Our claim is that
in fact in this equation, {\it both $dE_{ij}/d\tau$ and $H_{ij}$ are
arbitrarily specifiable in NT}. From this viewpoint, $E_{ij}$ is arbitrarily
specifiable as a function of time, and (13) then tells us what $H_{ij}$ has
to be, given the definition (11) (or one can run this the other way: choose
$H_{ij}$ as you like, and (13) then determines $dE_{ij}/d\tau$). Hence this
equation does not actually determine the time evolution of $E_{ij}$ - for
there is no Newtonian equation for the time evolution of $H_{ij}$ (in a
truly NT approach, we might {\it define} $H_{ij}$ by (13), and accept
whatever identities follow from this definition; none then determine the
time evolution of the system [17]).\\
What then of the BH equations that determine the time derivative of $H_{ij}$
(and hence lead to the wave equation already commented on)? Well, in order
to derive them BH introduce their equation (52), which follows from GR rather
than NT. Thus it is here that they close the equations in causal terms -
by importing relations that do not in fact follow from NT. The time
derivative equation (14) for $H_{ij}$ (their equation (55)) then follows, and
hence the existence of gravitational waves (and the implied speed
of travel of those waves).\\
In summary, the $E_{ij}$ evolution equations arise from use of definitions
that bring the NT theory into the form of the GR equations, but do
not in fact determine the time evolution uniquely, as long
as we remain within the confines of NG (rather these equations define the
variable $H_{ij}$ which can absorb any chosen time evolution).
This does not prove the set of equations given wrong; indeed they allow
rather arbitrary solutions. However the appearance of a uniquely
determined local evolution is misleading, because the solutions of the
equations presented are not unique. One can obtain uniqueness by use of
special rather than general solutions to the equations (just as (5)
is a special case of (6)), but this is an arbitrary restriction on the
allowed solutions of the theory. The $H_{ij}$ evolution equation does not
result directly from NT at all, but rather is imported from GR. \\
Thus the BH theory
becomes determinate locally (and in good accord with linearised GR) only
by introducing extra equations which do not in fact follow from NG. The
result is a theory that is {\it a better approximation to GR than NT is},
in particular because it allows gravitational waves. This theory is more
local than Newtonian theory proper, as it has Cauchy development properties
like GR. It also has the advantage of having a quantity $H_{ij}$ that
corresponds in the appropriate way (in terms of its role in the equations)
to the magnetic part of the Weyl tensor in GR (NG theory proper has no such
tensor [4]).
\section{The better theory?}
Which then of these theories is the better theory? Our viewpoint is that the
best is GR because it is the most fundamental of all these theories - indeed
it is {\it the} fundamental classical gravitational theory. The true theory
is GR. The issue is what are usable approximations in Newtonian-like
conditions (for example, in studying local astrophysics). \\
BH is a good candidate, but is certainly a generalisation of NT
rather than being classical NT. It reflects the causal structure of GR,
but is closer to linearised GR than to full GR (this is no
accident; it was constructed that way). From this viewpoint, NG in turn
is an acceptable approximation to BH theory in some circumstances; but the
range of conditions adequately covered by BH theory is wider than that
of NG but less than that of GR.\\
Which is more useful in astrophysical contexts will depend on those contexts.
Thus we have not considered here the issue of which theories give adequate
description of anisotropic collapse situations leading to the formation
of structure in the expanding universe. However the line of argument above
suggests that where NT and BH theory disagree, we should believe the latter
(but where BH and GR disagree, we should believe GR). \\
Finally does Newtonian theory itself demand a non-zero magnetic Weyl
tensor analogue? No - it is probably not even well-defined. But that
fact does not undermine the claims of BH theory to give a better
description of what will happen in some collapse scenarios. Nevertheless, that does not
mean we should necessarily expect a non-zero Magnetic Weyl tensor (or its
quasi-Newtonian analogue) in realistic collapse situations. The point is
that one should treat carefully the claim that neighbouring world-lines do
not influence each other in `silent universe' [11], for they do indeed feel
each other's gravitational influence [24] through the set of constraint
equations -- including the relativistic version of the Poisson equation --
that are necessarily satisfied in a full solution of the field equations (and
are consistent with the evolution equations in such `silent' universes [25]).
It is gravitational induction and gravitational waves that are precluded
in these solutions - but these effects are not likely to be important
during the Newtonian-like phase of gravitational collapse.\\ \\
We thank the FRD (South Africa) for financial support.\\ \\
{\bf References}\\
\noindent [1] E Bertschinger and A J S Hamilton: Lagrangian Evolution of the Weyl Tensor. MIT preprint (1994), submitted to ApJ.\\
\noindent [2] W Kundt and M Trumper: Akad Wiss u Lit Mainz Math-Nat Kl, 12 (1961).\\
\noindent [3] S W Hawking: ApJ {\bf 145}, 544 (1966).\\
\noindent [4] G F R Ellis: in General Relativity and Cosmology, ed. R K Sachs (New York: Academic Press), 104 (1971).\\
\noindent [5] G F R Ellis: in Cargese Lectures in Physics, Vol 6, ed. E Schatzmann (New York: Gordon and Breach), 1 (1973).\\
\noindent [6] G F R Ellis and M Bruni: ``A covariant and gauge-free approach to density fluctuations in cosmology". {\em Phys Rev} {\bf D40}, 1804-1818 (1989).\\
\noindent [7] G F R Ellis, J Hwang, and M Bruni: ``Covariant and gauge-independent perfect fluid Robertson-Walker perturbations". {\em Phys Rev} {\bf D40}, 1819-1826 (1989).\\
\noindent [8] G F R Ellis, M Bruni, and J C Hwang: ``Density-Gradient Vorticity relation in perfect fluid Robertson-Walker perturbations". {\em Phys Rev} {\bf D42}, 1035-1046 (1990).\\
\noindent [9] M Bruni, P K S Dunsby and G F R Ellis: ``Cosmological perturbations and the physical meaning of gauge-invariant variables". {\em Astrophys Journ} {\bf 395}, 34-53 (1992).\\
\noindent [10] S Matarrese, O Pantano, and D Saez: {\it Phys. Rev.} D {\bf 47}, 1311 (1993).\\
\noindent [11] S Matarrese, O Pantano, and D Saez: {\it Phys. Rev. Lett.} {\bf 72}, 320 (1994).\\
\noindent [12] K M Croudace, J Parry, D S Salopek, and J M Stewart: Applying the Zel'dovich Approximation to general relativity (1993).\\
\noindent [13] M Bruni, S Matarrese, and O Pantano: Dynamics of silent universes (1994), submitted to {\it Ap.J}.\\
\noindent [14] M Bruni, S Matarrese, and O Pantano: A local view of the observable universe (1994), submitted to {\it Ap.J}.\\
\noindent [15] Ya B Zeldovich: Ann Rev Fluid Mech {\bf 9}, 215-228 (1977).\\
\noindent [16] O Heckmann and E Schucking: Zs f Astrophysik {\bf 38}, 95-109 (1955).\\
\noindent [17] O Heckmann and E Schucking: Zs f Astrophysik {\bf 40}, 81-92 (1956).\\
\noindent [18] O Heckmann and E Schucking: Handbuch d Physik, Vol 53 (1959).\\
\noindent [19] J V Narlikar: Mon Not Roy Ast Soc {\bf 126}, 203-208 (1963); {\bf 131}, 501-522 (1966).\\
\noindent [20] Ya B Zeldovich: Mon Not Roy Ast Soc {\bf 129}, 19 (1965).\\
\noindent [21] G F R Ellis and G Schreiber: {\em Phys Lett} {\bf A115}, 97-107 (1986).\\
\noindent [22] G F R Ellis: in {\em General Relativity and Gravitation}, ed. B Bertotti et al (Reidel, 1984), 215-288.\\
\noindent [23] G F R Ellis: {\em Mon Not Roy Ast Soc} {\bf 243}, 509-516 (1990).\\
\noindent [24] G F R Ellis and D W Sciama: in {\em General Relativity}, ed. L O'Raifeartaigh (Oxford University Press, 1972), 35-59.\\
\noindent [25] W M Lesame, P K S Dunsby, and G F R Ellis: UCT preprint (1994).
\end{document}
\section{\normalsize\bf Introduction}
In his seminal work, Gribov$^{1}$ pointed out that the
non-triviality of the fundamental modular region in Coulomb or Landau
gauge has dynamical consequences. [The fundamental modular region is
defined explicitly in Eq.\eqn{4} below.] Recently$^{2}$, the
fundamental modular region was studied in the infinite-volume or
thermodynamic limit of a periodic lattice. It was found that the
fundamental modular region is characterized by a "horizon condition"
in the sense that the Euclidean probability gets concentrated where
the horizon function $H(U)$, a bulk quantity of order $V$, defined in
Eq.\eqn{39} below, vanishes. More precisely, it was shown
that at large volumes the vacuum expectation value of $H(U)$ is of
order unity, $\langle H(U)\rangle=O(1)$ (instead of $V$), and that its
variance is of order $V$, $\langle H^2(U)\rangle=O(V)$ (instead of $V^2$).
It was also found$^{2}$ that the horizon condition may be implemented by a
Boltzmann factor $\exp[-\alpha H(U)]$, where $\alpha$ is a
thermodynamic parameter determined by the constraint $\langle
H(U)\rangle=0$. Recently a
calculation of glueball masses has been carried out$^{3}$ in which
the mass scale is set by $\alpha$. Previous calculations of glueball
masses from the
fundamental modular region of gauge theory have been done using a mode
expansion by
Cutkosky and co-workers$^{4}$ and by Van Baal and co-workers$^{5}$.
The argument whereby the horizon condition is established in
the {\it infinite-volume} limit of the {\it periodic} lattice relies on two
technical hypotheses$^{2}$. In the present contribution, we shall prove
that, remarkably, the horizon condition holds {\it point-wise} for every
transverse configuration on the {\it finite} lattice with {\it free boundary
conditions}, $H(U)=0$.
\section{\normalsize\bf Lattice with Free Boundary Conditions in Minimal Landau
Gauge}
We consider a finite hypercubic lattice without
periodicity conditions, in $D$ Euclidean space-time dimensions.
Along each principal axis of the lattice there are $L$ links and $L+1$
sites. Lattice configurations $U$ are described by link
variables $U_{xy}=U_{yx}^\dagger$, and local gauge transformations $g$ by
the site variables $g_x$, where $U_{xy}$ and $g_x$ are both elements
of $SU(n)$.
The gauge-transform $U^g$ of the configuration $U$ by a local gauge
transformation $g$, is given explicitly by
\begin{equation}\label{1}
(U^g)_{xy}=g^\dagger_x U_{xy} g_y\,.
\end{equation}
For the purposes of analytic calculations, it is convenient to
minimize gauge fluctuations by choosing a gauge which makes the link
variables $U_{xy}$, on each link $(xy)$ of the lattice, as close to unity as
possible, in an equitable way over the whole lattice. For this
purpose, we take as a measure of the deviation of the link variables
from unity, the quantity
\begin{equation}\label{2}
I(U) = \sum_{(xy)} n^{-1} {\rm Re\ tr}(1 - U_{xy}) \,,
\end{equation}
where the sum extends over all links $(xy)$ of the lattice. It is
positive, $I(U)\geq 0$, and vanishes, $I(U)=0$, if and only
if $U_{xy}=1$ on every link $(xy)$. [The continuum analog of this
expression is the Hilbert norm of the connection $A$, $I[A] = \int
{\rm d}^D x |A(x)|^2$.] The restriction of this quantity to the gauge
orbit through an arbitrary configuration $U$ is given by the Morse function
\begin{equation}\label{3}
F_U(g)\equiv I(U^g)\,,
\end{equation}
regarded as a function on the local gauge group $G$, for fixed $U$. Gauge
fluctuations are minimized by the gauge-fixing which consists in
choosing as the representative, on each gauge orbit, that
configuration $U$ which yields the absolute minimum of $F_U(g)$ at
$g_x=1$. The set
$\Lambda$ of all configurations $U$ such that this function is an absolute
minimum on each gauge orbit at $g_x=1$, constitutes the
fundamental modular region,
\begin{equation}\label{4}
\Lambda\equiv \{U: F_U(1)\leq F_U(g)\ {\rm for\ all}\ g\}\,.
\end{equation}
Degenerate absolute minima occur only on the boundary of the
fundamental modular region, and are identified topologically.
At any local or absolute minimum, the minimizing function is
stationary, which gives the local gauge condition. The second
variation is non-negative. To obtain the explicit form of these
quantities, we write the local gauge transformation
$g_x=\exp (\omega_x)$, where $\omega_x=\omega_x^a t^a$. Here the $t^a$ form an
anti-hermitian basis of the defining representation of the Lie algebra
of $SU(n)$, $[t^a, t^b]=f^{abc} t^c$, normalized to
${\rm tr}(t^a t^b)=-{\scriptstyle \frac{1}{2}}\delta^{ab}$. We have, to second order in $\omega$,
\begin{eqnarray}\label{5}
F_U(g) &=& F_U(1) - (2n)^{-1}\sum_{(xy)}[{\rm tr}\{ [U_{xy} -
U_{xy}^\dagger] (\omega_y - \omega_x + {\scriptstyle \frac{1}{2}}[\omega_x, \omega_y])
\}\nonumber\\
& &\mbox{}{\hskip 10em} +{\scriptstyle \frac{1}{2}} {\rm tr}\{[U_{xy} + U_{xy}^\dagger ] (\omega_y -
\omega_x)^2 \}
]\nonumber\\
& &\mbox{}\nonumber\\
F_U(g) &=& F_U(1) + (2n)^{-1}\sum_{(xy)}[A_{xy}^c (\omega_y^c - \omega_x^c +
{\scriptstyle \frac{1}{2}} f^{abc} \omega_x^a \omega_y^b)\nonumber\\
& &\mbox{}{\hskip 10em} +{\scriptstyle \frac{1}{2}} (\omega_y^a - \omega_x^a) G_{xy}^{ac}
(\omega_y^c - \omega_x^c) ]\,.
\end{eqnarray}
Here we have introduced the real link variables
\begin{equation}\label{6}
A_{xy}^c\equiv - {\rm tr}[ (U_{xy} - U_{xy}^\dagger) t^c ]\,,
\end{equation}
which approach the classical connection in the continuum limit, and
\begin{equation}\label{7}
G_{xy}^{ab}\equiv - {\scriptstyle \frac{1}{2}}{\rm tr}\{ (t^a t^b + t^b t^a ) [ U_{xy} +
U_{xy}^\dagger ] \}\,.
\end{equation}
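As a quick consistency check of these definitions (a sketch: write $U_{xy}=\exp(\theta_{xy}^c t^c)$ with small $\theta$, and use ${\rm tr}(t^a t^b)=-{\scriptstyle \frac{1}{2}}\delta^{ab}$), one finds
$$
A_{xy}^c = \theta_{xy}^c + O(\theta^3)\, , \qquad\qquad G_{xy}^{ab} = \delta^{ab} + O(\theta^2)\, ,
$$
so that, as stated above, $A_{xy}^c$ reduces to the classical connection and $G_{xy}^{ab}$ to the identity as the link variables approach unity.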
We introduce some elementary geometry of this lattice that
will be useful. Let $\phi_x\equiv \phi(x)$ be a site variable, and
let $(xy)$ denote a link of a lattice. Corresponding link
variables are defined by
\begin{equation}\label{8}
(\nabla \phi)_{xy}\equiv \phi_x - \phi_y
\end{equation}
\begin{equation}\label{9}
(a\phi)_{xy}\equiv{\scriptstyle \frac{1}{2}} (\phi_x + \phi_y)\,.
\end{equation}
These are defined for all links, including those for which $x$ or $y$ may
lie on a face, edge or vertex. (For $\nabla\phi$ we could alternatively use
the Cartan notation d$\phi$). We may also uniquely designate links by
$(xy)=(x,\mu)$, for $y=x+e_\mu$, where $e_\mu$ points in
the positive $\mu$-direction, and we shall also use, as convenient, the
notation for link variables
\begin{equation}\label{10}
\nabla_\mu\phi(x)\equiv (\nabla\phi)_{yx}
\end{equation}
\begin{equation}\label{11}
a_\mu\phi(x)\equiv (a\phi)_{xy}
\end{equation}
(These expressions are \underline{undefined} for sites $x$ where
$y=x+e_\mu$ is not a site of the lattice.) Given two link
variables $V_{xy}$ and $W_{xy}$, we may form the link variable
$P_{xy}=V_{xy}W_{xy}$, etc.
We also introduce the lattice divergence of a link variable
$A_\mu(x)$, which is a site variable $(\nabla\cdot A)_x$, defined by the dual
\begin{equation}\label{12}
(\nabla\cdot A, \phi)\equiv - (A, \nabla\phi) = - \sum_{(xy)} A_{xy}
(\nabla \phi)_{xy} = \sum_x (\nabla\cdot A)_x \phi_x\,,
\end{equation}
where the sums extend over all links and sites of the lattice.
Similarly the site variable $(a\cdot A)_x$ is defined by
\begin{equation}\label{13}
(a\cdot A, \phi)\equiv (A, a\phi) = \sum_{(xy)} A_{xy} (a\phi)_{xy} = \sum_x
(a\cdot A)_x \phi_x\,.
\end{equation}
The lattice divergence is defined for all sites $x$, and represents the
sum of all link variables leaving the site $x$. (It is {\it not} formed
simply of differences of link variables when $x$ is a boundary point of
the lattice.)
With the help of these definitions, we rewrite Eq.\eqn{5} in the form
\begin{equation}\label{14}
F_U(g) = F_U(1) + (2n)^{-1} [ - (A, \nabla \omega) + (\nabla \omega,
D(U)\omega)
]\,.
\end{equation}
Here $[D(U)\omega]_{yx}$ is a well-defined link variable which we call the
lattice gauge-covariant derivative of the site variable $\omega_x$,
\begin{equation}\label{15}
D_\mu^{ac} \omega^c(x)=
G_\mu^{ac}(x)\nabla_\mu\omega^c(x)+f^{abc}A_\mu^b(x)a_\mu
\omega^c(x)\,.
\end{equation}
It is the infinitesimal change in $A_\mu(x)$ under an infinitesimal gauge
transformation $\omega_x$. We also define the lattice Faddeev-Popov
matrix $M(U)$ by
\begin{equation}\label{16}
(\omega, M(U)\phi) = (\nabla\omega, D(U)\phi)\,.
\end{equation}
At a minimum, the minimizing function $F_U(g)$ is stationary, namely
$(A,\nabla\omega)=-(\nabla\cdot A,\omega)=0$. This holds
for all $\omega$, so the gauge condition is expressed by the
transversality of A,
\begin{equation}\label{17}
\nabla\cdot A=0\,.
\end{equation}
Because of transversality, the gauge just defined falls into the class
of lattice Landau gauges, and we call it the "minimal Landau gauge".
At a stationary point of the minimizing function, the matrix of second
derivatives is symmetric, so when $A$ is transverse, the Faddeev-Popov
matrix is symmetric,
\begin{equation}\label{18}
M \equiv -\nabla\cdot D(U) = - D(U)\cdot\nabla\qquad ({\rm for\ }
\nabla\cdot A=0)\,.
\end{equation}
This matrix is non-negative at a minimum, and the two conditions
together define the Gribov region $\Omega$,
\begin{equation}\label{19}
\Omega\equiv \{U: \nabla\cdot A(U) = 0\ {\rm and}\ M(U)\geq 0 \}\,.
\end{equation}
Because the set $\Lambda$ of absolute minima is contained in the set of
relative minima, we have the inclusion
\begin{equation}\label{20}
\Lambda\subseteq\Omega\,,
\end{equation}
where $\Lambda$ is the fundamental modular region.
\section{\normalsize\bf Vanishing of the Horizon Function on the Finite Lattice
with Free
Boundary Conditions}
Because there are $L+1$ sites, but only $L$ links on each row of
the lattice with free boundary conditions, the transversality
condition is more restrictive for free boundary conditions than on a
periodic lattice, so various additional constraints hold, as we shall
now show.
Choose a particular direction, $\mu=0$, on the hypercubic
lattice, and call the corresponding (Euclidean) coordinate $t$ and
the other coordinates ${\bf x}$. Consider the sum over the hyperplane
labelled by $t$, for a transverse configuration, $\nabla\cdot A =0$,
\begin{equation}\label{21}
\sum_{\bf x} (\nabla\cdot A)(t,{\bf x}) = 0\,.
\end{equation}
Contributions to this sum from links that lie within the hyperplane
vanish, because all such links are connected to 2 sites within the
hyperplane and these 2 contributions cancel. There remain the
contributions from perpendicular links. For all $t$ that label interior
hyperplanes $1\leq t\leq L-1$, they give
\begin{equation}
Q_0(t) - Q_0(t-1) = 0\,,\nonumber
\end{equation}
whereas on the boundary hyperplanes they give
\begin{equation}
Q_0(L-1) = Q_0(0)= 0\,,\nonumber
\end{equation}
where
\begin{equation}\label{22}
Q_0(t)\equiv\sum_{\bf x} A_0(t,{\bf x})\,.
\end{equation}
We thus have proved the \underline{lemma}:\hfill\\
Let $A_\mu(x)$ be a transverse configuration, $\nabla\cdot A(x)=0$, on a
lattice with free boundary conditions. Then the ``charges'' vanish,
\begin{equation}\label{23}
Q_0(t)\equiv\sum_{\bf x} A_0(t, {\bf x}) = 0\qquad (t = 0,1,\dots, L-1)\,.
\end{equation}
[For a lattice with periodic boundary conditions, only the weaker
condition $Q_0(t)={\rm const.}$ holds.] If we sum the last equation
over $t$, for generic $\mu$, we obtain:\hfill\\
\underline{Corollary}. Let $A_\mu(x)$ be a transverse configuration
$\nabla\cdot A(x)=0$ on a lattice with free boundary conditions.
Then the zero-momentum component of A vanishes,
\begin{equation}\label{24}
\sum_x A_\mu(x) = 0\,.
\end{equation}
Note that for the periodic lattice, transversality imposes no
condition whatsoever on the constant component of $A_\mu(x)$. On the other
hand, for the periodic lattice, the core of the fundamental modular
region was found$^{2}$ to satisfy Eq.\eqn{24}.
We shall now prove that the horizon condition is satisfied
point-wise for a finite lattice with free boundary conditions.
Consider the integral
\begin{eqnarray}\label{25}
I &\equiv&\int {\rm d}\phi {\rm d}\phi^* \exp[ ( \phi^*_i, M(U)\phi_i)
]\,,\nonumber\\
&=& \int {\rm d}\phi {\rm d}\phi^* \exp[ (\nabla_\lambda\phi^{*a}_i,
D_\lambda^{ac}(U)\phi_i^c) ]\,,
\end{eqnarray}
where the $\phi_i^a(x)$ variables are integrated over the real axis,
and the $\phi^{*a}_i(x)$ are independent variables that are integrated over the
imaginary axis. The lattice gauge-covariant derivative $D_\lambda^{ac}(U)$ acts
on the upper index of $\phi_i^c(x)$, and $i=1,\dots, f$ is a dummy
index. For transverse $A=A(U)$, as we assume, the
symmetric matrix $M(U)\equiv-\nabla\cdot D(U)=-D(U)\cdot\nabla$ has a
null-space, $H_0$, consisting of
constant functions, so the integral \eqn{25} would diverge if $\phi(x)$
and $\phi^*(x)$
were independent integration variables for every $x$. The
integral is made finite by the constraint at a fixed point $y$ of the
lattice
\begin{equation}\label{26}
\phi_i^c(y) = \phi_i^{*c}(y) = 0\,,
\end{equation}
and the definition
\begin{equation}\label{27}
{\rm d}\phi {\rm d}\phi^*\equiv \prod_{x\not=y, i, a} {\rm
d}\phi_i^a(x) {\rm d}\phi^{*a}_i(x)\,.
\end{equation}
With this specification, the integral has the value
\begin{equation}\label{28}
I=\int {\rm d}\phi\prod_{i=1}^f\delta(M\phi_i) = {\det}^{-f}M_y\,,
\end{equation}
where $M_y$ is the matrix obtained from $M$ by deleting the rows and
columns that bear the label $y$.
Because $M(U)$ has a null space, $H_0$, consisting of constant
functions $\nabla\omega=0$, an alternative expression for $I$ is
\begin{equation}\label{29}
I= \int {\rm d}\phi {\rm d}\phi^* \exp[ (\phi^*_{i\perp},
M(U)\phi_{i\perp}) ]\,,
\end{equation}
where $\phi_\perp$ and $\phi^*_\perp$ are the projections of $\phi$
onto the orthogonal subspace $H_\perp$,
\begin{equation}\label{30}
\phi_\perp (x)= \phi(x)- S^{-1}\sum_{x'}\phi(x')\,,
\end{equation}
and $S$ is the total number of sites in the lattice. The variables $\phi(x)$
and $\phi^*(x)$ for $x\not=y$ constitute a complete set of (linear)
coordinates on $H_\perp$, and the change of basis \eqn{30} is configuration
independent. We conclude that
\begin{equation}\label{31}
I = {\det}^{-f}M_y = {\det}^{-f}M_\perp\,,
\end{equation}
which shows that $I$ is independent of $y$. (The equation
${\det} M_y={\det} M_\perp$ holds for any symmetric matrix $M$ with the
property
that the sum of each row and of each column vanishes.)
Now let the dummy index $i$ represent the pair
$i=(\mu,a)$, so $\phi_{\mu,a}^c(x)$ is a site variable with a preferred
direction $\mu$, and make the shift on $\phi$ and $\phi^*$ given by
\begin{eqnarray}\label{32}
\phi_{\mu,a}^c(x) &=& \phi_{\mu,a}^{\prime c}(x) + x_\mu
\delta^c_a\nonumber\\
\phi_{\mu,a}^{*c}(x) &=& \phi_{\mu,a}^{*\prime c}(x) + x_\mu \delta^c_a
\end{eqnarray}
which are the lattice analogs of shifts previously introduced in
continuum theory$^{6}$. This shift makes no sense on a (finite) periodic
lattice, but it is well defined on a finite lattice with free boundary
conditions. (Note that the constraint at the lattice point $y$ for the
shifted fields differs from \eqn{26}, unless it is imposed at the
``origin'' $y=0$.) For $\nabla\cdot A=0$, one finds after a simple
calculation
\begin{eqnarray}\label{33}
I&\equiv&\int {\rm d}\phi {\rm d}\phi^* \exp[
(\nabla_\lambda\phi^{*b}_{\mu,a}, D_\lambda^{bc}(U)\phi_{\mu,a}^c) +
(D_\lambda^{ac}(U)\phi_{\lambda,a}^c)\nonumber\\
& &\mbox{}{\hskip 10em}+(D_\lambda^{ac}(U)\phi^{*c}_{\lambda,a}) +
\sum_{x,\mu} G_\mu^{aa}(x) ]\,.
\end{eqnarray}
It is convenient to introduce a field variable $B_{\lambda,a}^c(x)$,
by duality
\begin{equation}\label{34}
( B_{\lambda,a}^c, \phi_{\lambda,a}^c)\equiv -
(D_\lambda^{ac}(U)\phi_{\lambda,a}^c)\, .
\end{equation}
Like $\phi_{\lambda,a}^c$, the new variable $B_{\lambda,a}^c$ is a
site variable with a preferred direction $\mu$. We have
\begin{equation}\label{35}
B_{\lambda,a}^c(x) = (\nabla\cdot G)_\lambda^{ac}(x) - f^{abc} (a\cdot
A)_\lambda^b(x)\,,
\end{equation}
where $(\nabla\cdot G)_\lambda^{ac}(x)$ represents the contribution
to the lattice divergence $\nabla\cdot G^{ac}(x)$ associated with the
$\lambda$ axis, and similarly
for $(a\cdot A)_\lambda^b(x)$. With this definition,
\begin{eqnarray}\label{36}
I&\equiv&\int {\rm d}\phi {\rm d}\phi^* \exp[
(\nabla_\lambda\phi^{*b}_{\mu,a}, D_\lambda^{bc}(U)\phi_{\mu,a}^c) -
(B_{\lambda,a}^{c},\phi_{\lambda,a}^c)\nonumber\\
& &\mbox{}{\hskip 10em}-(\phi^{*c}_{\lambda,a},B_{\lambda,a}^c) +
\sum_{x,\mu} G_\mu^{aa}(x) ]\,.
\end{eqnarray}
We may now effect the $\phi$ and $\phi^*$ integrations by making the shifts
\begin{eqnarray}\label{37}
\phi_{\lambda,a}^b &=& \phi_{\lambda,a}^{\prime b} +
(M^{-1})^{bc}B_{\lambda,a}^c \nonumber\\
\phi_{\lambda,a}^{*b} &=& \phi_{\lambda,a}^{*\prime b} + (M^{-1})^{bc}
B_{\lambda,a}^c\,.
\end{eqnarray}
This expression is well-defined because $B_{\lambda,a}^c$ is
orthogonal to the null-space of $M$,
\begin{equation}\label{38}
\sum_x B_{\lambda,a}^c(x) = 0\,,
\end{equation}
and we may choose coordinates adapted to these subspaces. To see
this, observe that $\sum_x(\nabla\cdot G)_\lambda^{ac}(x)$ vanishes
because each link
is connected to two sites and gives opposite contributions from each.
Moreover $\sum_x(a\cdot A)_\lambda^b(x) = \sum_x A_\lambda^b(x)$
vanishes by the preceding corollary. This gives $I = I \exp[-H(U)]$,
where $H(U)$ is given by Eq.\eqn{39}. We have proven:\hfill\\
\underline{Theorem}. Let $U$ be a transverse configuration
$\nabla\cdot A(U)=0$,
and let the horizon function $H(U)$ on a finite lattice with free
boundary conditions be defined by
\begin{equation}\label{39}
H(U) = (B_{\lambda,a}^b, (M^{-1})^{bc} B_{\lambda,a}^c) - \sum_{x,\mu}
G_\mu^{aa}(x)\,,
\end{equation}
where $B$ is defined in \eqn{35} and $G_\mu^{ab}(x)$ in \eqn{7}. Then
$H(U)$ vanishes
\begin{equation}\label{40}
H(U) = 0\,.
\end{equation}
\noindent\underline{Remark}. On a periodic lattice, the horizon
condition does not hold point-wise. For example, the vacuum
configuration $U_\mu(x)=1$ on
a {\it periodic} lattice gives $B_{\lambda,a}^c(x)=0$, so $H(U)=-(n^2-1)DV$.
For the vacuum configuration on the finite lattice with {\it free} boundary
conditions, $B_{\lambda,a}^c(x)$ is entirely supported on the boundary. Thus
boundary contributions remain important at arbitrarily large volume.
Long range boundary effects have also been found by Patrascioiu and
Seiler$^{7}$.
\noindent\underline{Discussion}. The result \eqn{40} is quite
remarkable and unexpected.
Recall that as the {\it volume of the periodic lattice approaches infinity},
the probability gets concentrated where the horizon condition holds$^{2}$
$H(U)=0$, and where $\sum_x A(x) = 0$. Here we have just
proven that these conditions are satisfied for all transverse
configurations $U$ on the finite lattice with free boundary
conditions.
\section{\normalsize\bf References}
\newcounter{ref}
\begin{list}%
{[\arabic{ref}]}{\usecounter{ref}\setlength{\leftmargin}{\parindent}}
\item V. N. Gribov, Nucl. Phys. B139 (1978) 1.
\item D. Zwanziger, Nucl. Phys. B412 (1994) 657.
\item M. Schaden and D. Zwanziger, {\it Glueball Masses from
the Gribov Horizon: Basic Equations and Numerical Estimates}, NYU
preprint ThPhSZ94-1.
\item R. E. Cutkosky, J. Math. Phys. 25 (1984) 939; R. E. Cutkosky
and K. Wang, Phys. Rev. D37 (1988) 3024; R. E. Cutkosky, Czech J.
Phys. 40 (1990) 252.
\item P. van Baal and N. D. Hari Dass, Nucl. Phys B385 (1992) 185;
J. Koller and P. van Baal, Nucl. Phys. B302 (1991) 1; P. van Baal,
Acta Physica Pol. B20 (1989) 295; P. Van Baal and B. van den Heuvel,
{\it Zooming in on the SU(2) Fundamental Domain}, (preprint) University of
Leiden, INLO-PUB-12/93.
\item N. Maggiore and M. Schaden, {\it Landau Gauge within the Gribov
Horizon}, NYU preprint, October 1993, Phys. Rev. D (to be published).
\item A. Patrascioiu and E. Seiler, {\it Super-Instantons in
Gauge Theories and Troubles with Perturbation Theory}, hep-lat
9402003, MPI-PhT/94-07, AZPH-TH/94-03, Phys. Rev. Lett (to be published).
\end{list}
\end{document}
\section{INTRODUCTION}
\indent
Reference resolution is one of the important tasks in natural
language processing. In Japanese newspaper articles, pronouns are not
often used as referential expressions for company names, but
shortened company names and {\em dousha} (``the same company'') are used
more often (Muraki {\em et al}. 1993).
Although there have been studies of
reference resolution for various noun phrases in Japanese (Shibata {\em et al}.
1990; Kitani 1994), with the exception of Kitani's work they do not clearly
show how to find the referents in
computationally plausible ways for large amounts of data, such as a
newspaper database. In this paper\footnote[1]{This paper was written
when the author was at the Computing Research Laboratory of New Mexico
State University. The author has been at University of Sheffield
since January 1994.}, we determine the referents of {\em dousha} and
their locations by hand, and then propose one simple and two
heuristic methods which use semantic information in the text, such as
company names and their patterns, and test how accurately these three
methods find the correct referents.\\
\\
\indent
{\em Dousha} is found with several particles such as ``{\em ha}'',
``{\em ga}'', ``{\em no}'', and ``{\em to}'' in newspaper articles.
Those which co-occur with {\em ha} and {\em ga} are chosen for the data
since they are the two most frequent particles when {\em dousha} is in the
subject position in a sentence. Typically, {\em ha} marks the topic of
the sentence and {\em ga} marks the subject of the sentence.
A typical use of {\em dousha} is as follows:
\begin{quotation}
\noindent
Nihon Kentakii Furaido Chikin ha,\\
Japan Kentucky Fried Chicken ha,\\
\\
sekai saidai no piza chien,\\
world's largest pizza chain store,\\
\\
Piza Hatto to teikei wo musubi,\\
Pizza Hut to tie-up establish,\\
\\
kotoshi gogatsu kara zenkoku de\\
starting May this year, nation-wide,\\
\\
takuhai piza chien no tenkai wo \\
pizza delivery chain store extension\\
\\
hajimesu to happyou shita.\\
begin announced.\\
\\
sarani {\em dousha ha} furaido chikin no\\
Moreover, the same company fried chicken of\\
\\
takuhai saabisu nimo noridasu.\\
delivery service as well will start.
\end{quotation}
\indent
A rough translation is:\\
``Kentucky Fried Chicken Japan announced that it had
established a tie-up with the world's largest pizza chain store, Pizza
Hut, and would begin to expand pizza delivery chain stores nation-wide
starting in May this year. Moreover, {\em the company} will start
delivery of fried chicken as well.''\\
\\
\indent
{\em Dousha} in the second sentence refers to Kentucky Fried
Chicken Japan as ``{\em the company}'' does in the English translation. As
shown in this example, some articles contain more than one possible
referent or company, and the reference resolution of {\em dousha}
should identify the referent correctly.
\section{LOCATIONS AND CONTEXTS OF THE REFERENTS}
\indent
Most of the Japanese newspaper articles examined in this study are in the
domain of Joint-Ventures. The sources of the newspaper articles are mostly
{\em the Nikkei} and {\em the Asahi}. The total number of the
articles is 1375, and there are 42 cases of {\em dousha} with {\em ga}
and 66 cases of {\em dousha} with {\em ha} in the entire set of articles.\\
\\
\indent
The following tables,
{\bf Table} 1 and {\bf Table} 2, show the locations and contexts where the
referents of both subsets of {\em dousha} appear.\\
\newpage
\onecolumn
\begin{center}
{\bf Table} 1 Locations and contexts of the referents of {\em dousha} with
{\em ga}\\
\vspace{4 mm}
\begin{tabular}{|l|l|r|} \hline
\multicolumn{3}{|c|}{{\em dousha} with {\em ga}} \\ \hline
location & context & number of cases \\ \hline
\multicolumn{2}{|c}{Within the same sentence} &
\multicolumn{1}{|r|}{19} \\ \hline
Subject & company name $+$ {\em ha} & 7 \\
& part of the subject $\ast $ & 1 \\
Non-subject & company name $+$ {\em niyoruto} & 3 \\
& others $\ast \ast \ast$ & 8 \\ \hline
\multicolumn{2}{|c}{In the previous sentence} &
\multicolumn{1}{|r|}{13} \\ \hline
Subject & company name $+$ {\em ha} & 8 \\
& company name $+$ {\em ga} & 1 \\
& emphasis structure $\ast \ast $ & 1 \\
& part of the subject $\ast $ & 1 \\
Non-subject & company name $+$ {\em to} & 2 \\ \hline
\multicolumn{2}{|c}{In two sentences before} &
\multicolumn{1}{|r|}{6} \\ \hline
Subject & company name $+$ {\em ha} & 5 \\
& company name $+$ {\em ga} & 1 \\ \hline
\multicolumn{2}{|c}{In previous paragraph} &
\multicolumn{1}{|r|}{1} \\ \hline
Topic of the paragraph & company name $+$ {\em ha} & 1 \\ \hline
\multicolumn{2}{|c}{In two paragraphs before} &
\multicolumn{1}{|r|}{3} \\ \hline
Topic of the paragraph & company name $+$ {\em ha} & 3 \\ \hline
\end{tabular}\\
\vspace{6 mm}
{\bf Table} 2 Locations and contexts of the referents of {\em dousha} with
{\em ha}\\
\vspace{4 mm}
\begin{tabular}{|l|l|r|} \hline
\multicolumn{3}{|c|}{{\em dousha} with {\em ha}} \\ \hline
location & context & number of cases \\ \hline
\multicolumn{2}{|c}{Within the same sentence} &
\multicolumn{1}{|r|}{2} \\ \hline
Subject & company name $+$ {\em ga} & 1 \\
& company name $+$ {\em deha} & 1 \\ \hline
\multicolumn{2}{|c}{In the previous sentence} &
\multicolumn{1}{|r|}{32} \\ \hline
Subject & company name $+$ {\em ha} & 21 \\
& emphasis structure $\ast \ast $ & 5 \\
& part of the subject $\ast $ & 4 \\
Non-subject & others & 2 \\ \hline
\multicolumn{2}{|c}{In two sentences before} &
\multicolumn{1}{|r|}{17} \\ \hline
Subject & company name $+$ {\em ha} & 16 \\
& part of the subject $\ast $ & 1 \\ \hline
\multicolumn{2}{|c}{In three sentences before (in the same paragraph)} &
\multicolumn{1}{|r|}{2} \\ \hline
Subject & company name $+$ {\em ha} & 2 \\ \hline
\multicolumn{2}{|c}{In previous paragraph} &
\multicolumn{1}{|r|}{7} \\ \hline
Topic of the paragraph & company name $+$ {\em ha} & 6 \\
Topic of the paragraph & company name $+$ {\em ga} & 1 \\ \hline
\multicolumn{2}{|c}{In two paragraphs before} &
\multicolumn{1}{|r|}{2} \\ \hline
Topic of the paragraph & company name $+$ {\em ha} & 2 \\ \hline
\multicolumn{2}{|c}{In three paragraphs before} &
\multicolumn{1}{|r|}{2} \\ \hline
Topic of the paragraph & company name $+$ {\em ha} & 2 \\ \hline
\end{tabular}
\vspace{6 mm}
\\
Note for Table 1 and Table 2 \\
\vspace{3 mm}
\begin{tabular}{|l|l|} \hline
$\ast $ & company name referred to is a part of a larger subject noun
phrase. \\ \hline
$\ast \ast$ & company name referred to comes at the end of the \\
& sentence, a way of emphasising the company name in Japanese. \\ \hline
$\ast \ast \ast$ & company name with {\em to} (with), {\em kara}
(from),\\
& {\em wo tsuuji} (through), {\em tono aidade} (between or among). \\
\hline
\end{tabular}
\end{center}
\vspace{2 mm}
\twocolumn
\indent
For {\em dousha} with {\em ga} ({\bf Table} 1), the referred company names,
or referents, appear in non-subject positions from time to time,
especially if the referent appears in the same sentence as {\em
dousha} does. For {\em dousha} with {\em ha} ({\bf Table} 2), compared with
{\bf Table} 1, very few referents are located in the same sentence, and most
of the referents are in the subject position. For both occurrences of {\em
dousha}, a considerable number of the referents appear two or more
sentences before, and a few of them show up even two or three
paragraphs before.
\section{THREE HEURISTIC METHODS TESTED}
\subsection{Three Heuristic Methods}
\indent
One simple and two heuristic methods to find the referents of {\em dousha}
are described below. The first, the simple method, is to take
the closest company name (the one which appears most recently before
{\em dousha}) as its referent
({\bf Simple Closest Method} or {\bf SCM}). It is used in this paper
to indicate the baseline performance for reference resolution of {\em
dousha}.\\
\\
\indent
The second method is a modified Simple Closest
Method for {\em dousha} with {\em ga}. It is basically the same
as SCM except that:
\begin{itemize}
\item{if there is one or more company name in the same
sentence before the {\em dousha}, take the closest company name as
the referent.}
\item{if there is a company name immediately followed by {\em ha},
{\em ga}, {\em deha}, or {\em niyoruto} somewhere before {\em dousha},
use the closest such company name as the referent.}
\item{if the previous sentence ends with a company name, thus
putting an emphasis on the company name, make
it the referent.}
\item{if there is a pattern ``company name {\em no} human name
title...'' (equivalent to ``title human name of company name...'' in
English) in the previous sentence, then use the company name as the
referent.
Typical titles are {\em shachou} (president) and {\em kaichou}
(Chairman of Board).}
\end{itemize}
\noindent
The third heuristic method is used for {\em dousha} with {\em
ha} cases. It is also based on SCM except the following points:
\begin{itemize}
\item{if there is a company name immediately followed by {\em ha},
{\em ga}, {\em deha}, or {\em niyoruto} somewhere before {\em dousha},
use the closest such company name as the referent.}
\item{if the previous sentence ends with a company name, thus
putting an emphasis on the company name, make
it the referent.}
\item{if there is a pattern ``company name {\em no} human name
title...'' (equivalent to ``title human name of company name...'' in
English) in the previous sentence, then use the company name as the
referent.}
\end{itemize}
\indent
The third method is in fact a subset of the second method, and both of them
use semantic information (i.e. company name, human name, title), syntactic
patterns (i.e. where a company
name, a human name, or a title appears in a sentence) and
several specific lexical items which come immediately
after the company names.
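As an illustration only (this is not the implementation used in the study; the mention records, attribute names and rule ordering below are our own reading of the description), the third method can be written as a simple backward scan over the company-name mentions that precede a given {\em dousha}; the second method adds one further rule for company names in the same sentence.
\begin{verbatim}
# Hypothetical sketch of the third heuristic method (dousha + ha).
PARTICLES = ("ha", "ga", "deha", "niyoruto")

def resolve_dousha(mentions):
    """mentions: company-name mentions preceding dousha, oldest first.
    Each mention is a dict with keys:
      'name'               -- the company name string
      'particle'           -- particle immediately following the name
      'ends_prev_sentence' -- True if the name ends the previous sentence
      'company_no_title'   -- True if it occurs in a
                              "company-name no human-name title" pattern
                              in the previous sentence
    Returns the chosen referent, or None if no candidate exists."""
    # Rule 1: closest company name followed by ha/ga/deha/niyoruto.
    for m in reversed(mentions):
        if m["particle"] in PARTICLES:
            return m["name"]
    # Rule 2: company name that ends the previous sentence (emphasis).
    for m in reversed(mentions):
        if m["ends_prev_sentence"]:
            return m["name"]
    # Rule 3: "company-name no human-name title" in the previous sentence.
    for m in reversed(mentions):
        if m["company_no_title"]:
            return m["name"]
    # Fallback: Simple Closest Method.
    return mentions[-1]["name"] if mentions else None
\end{verbatim}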
\subsection{Test Results}
\indent
The three methods have been tested on the development data
from which the methods were produced and on the set of
unseen test data.
\subsubsection{Against the development data}
As mentioned in section two, there are 42 cases of {\em dousha} with
{\em ga} and 66 cases of {\em dousha} with {\em ha}.\\
\\
\indent
For the {\em dousha} with {\em ga} cases, the Simple Closest Method
identifies the referents {\bf 67}\% correctly (27 correct out of 42), and
the second method does so {\bf 90}\% (38 out of 42) correctly. SCM
misses a number of referents which appear in
previous sentences, and most of those which appear two or more sentences
previously.\\
\\
\indent
For the cases of {\em dousha} with {\em ha}, SCM
identifies the referents correctly only {\bf 52}\% (34 correct out of 66);
however, the third heuristic method correctly identifies {\bf
94}\% (62 out of 66).
\subsubsection{Against the test data}
The test data was taken from Japanese newspaper articles on micro-electronics.
There are 1078 articles, and 51 cases of {\em dousha} with
{\em ga} and 250 cases of {\em dousha} with {\em ha}. The test has been
conducted against all the {\em ga} cases (51 of them) and the first
100 {\em ha} cases.\\
\\
\indent
For the {\em dousha} with {\em ga} cases, the Simple Closest Method
identifies the referents {\bf 80}\% correctly (41 correct out of 51), and
the second method does so {\bf 96}\% (49 out of 51) correctly.\\
\\
\indent
For the cases of {\em dousha} with {\em ha}, SCM
identifies the referents correctly only {\bf 83}\% (83 correct out of 100);
however, the third heuristic method correctly identifies {\bf
96}\% (96 out of 100).\\
\\
The following table, {\bf Table} 3, shows the summary of the test
results.\\
\begin{center}
{\bf Table} 3 Summary of Test Results\\
\vspace{4 mm}
\begin{tabular}{|l|r|r|} \hline
& Development Data & Test Data \\ \hline
\multicolumn{3}{|c|}{{\em dousha} with {\em ga}} \\ \hline
SCM & 67 \% & 80 \% \\ \hline
2nd method & 90 \% & 96 \% \\ \hline
\multicolumn{3}{|c|}{{\em dousha} with {\em ha}} \\ \hline
SCM & 52 \% & 83 \% \\ \hline
3rd method & 94 \% & 96 \% \\ \hline
\end{tabular}
\end{center}
\section{DISCUSSION}
\indent
The second and third heuristic methods show high accuracy in
finding the referents of {\em dousha} with {\em ga} and {\em ha}.
This means that partial semantic parsing (in which key semantic
information such as company name, human name, and title is marked) is
sufficient for reference resolution of important referential
expressions such as {\em dousha} in Japanese. Moreover, since
the two modified methods are simple, they will be easily implemented by
computationally inexpensive finite-state pattern matchers (Hobbs {\em et al}.
1992; Cowie {\em et al}. 1993). Therefore, they will be suitable for large
scale text processing (Jacobs 1992; Chinchor {\em et al}. 1993).\\
\\
\indent
One important point to realize is that the second and third
methods, although they are simple to implement,
achieve something that is rather complicated and may be
computationally expensive otherwise. For example, in order to find
the correct referent of
a given {\em dousha}, you may have to skip one entire paragraph and
find the referent two paragraphs before, or you may have to choose
the right company name from several possible company names which
appear before the given {\em dousha}. The modified methods
do this correctly most of the time without worrying about
constructing sometimes complicated syntactic structures of the
sentences in the search window for the possible referent.\\
\\
\indent
Another important point is that the modified methods make good use
of post-nominal particles, especially {\em ha} and {\em ga}.
For example, if the referent is located two sentences or
more before, then the referent (the company name) comes with {\em ha}
almost all the time (35 out of 38 such cases for both {\em dousha}).
It seems that if the referent of the {\em dousha} in consideration is
more than a certain distance before, two sentences in this case,
then the referent is marked with {\em ha} most of the time.
Kitani also uses these {\em ha}- or {\em ga}-marked company names
as key information in his reference resolution algorithm for
{\em dousha} (Kitani 1994).
\section {CONCLUSION}
\indent
The locations and contexts of the referents of {\em dousha} in
Japanese Joint-Venture articles are determined by hand. Three
heuristic methods are proposed and tested. The methods which use
semantic information in the text and its patterns show high accuracy
in finding the referents (96\% for {\em dousha} with {\em ga} and
96\% for {\em dousha} with {\em ha} for the unseen test data).
The high success rates suggest that
a semantic pattern-matching approach is not only a valid method but
also an efficient method for reference resolution in the newspaper article
domains. Since the Japanese language is highly case-inflected, case
(particle)
information is used effectively in these methods for reference
resolution. How much one can do with semantic pattern matching for
reference resolution of similar expressions such as ``the company'' or
``the Japanese company'' in English newspaper articles is a
topic for future research.
\section{ACKNOWLEDGEMENT}
\indent
I would like to thank the Tipster project group at the CRL for their
inspiration and suggestions. I would also like to thank Dr. Yorick
Wilks, Dr. John Barnden, Mr. Steve Helmreich, and Dr. Jim Cowie for
their productive comments.
The newspaper articles used in this study are from the Tipster
Information Extraction project provided by ARPA.
\section{REFERENCES}
Chinchor, N., L. Hirschman, and D. Lewis (1993). Evaluating Message
Understanding Systems: An Analysis of the Third Message Understanding
Conference (MUC-3). {\em Computational Linguistics, 19(3)},
{\em pp.} 409-449.\\
\\
Cowie, J., T. Wakao, L. Guthrie, W. Jin, J. Pustejovsky, and
S. Waterman (1993). The {\em Diderot} Information Extraction
System. In the proceedings of {\em The First Conference of the
Pacific Association for Computational Linguistics (PACLING 93)}
Simon Fraser University, Vancouver, B.C. Canada, {\em pp.} 23-32. \\
\\
Jacobs, P.S. (1992). Introduction: Text Power and Intelligent
Systems. {\em In} P.S. Jacobs {\em Ed}.,
{\em Text-Based Intelligent Systems}.
Lawrence Erlbaum Associates, Hillsdale New Jersey, {\em pp.} 1-8.\\
\\
Hobbs, J., D. Appelt, M. Tyson, J. Bear, and D. Israel (1992).
SRI International Description of the FASTUS System used for MUC-4.
In the proceedings of {\em Fourth Message Understanding Conference
(MUC-4)}, Morgan Kaufmann Publishers, San Mateo, {\em pp.} 269-275.\\
\\
Kitani, T. (1994). Merging Information by Discourse Processing
for Information Extraction. In the proceedings of {\em the tenth
IEEE Conference on Artificial Intelligence for Applications},
{\em pp.} 168-173.\\
\\
\\
Muraki, K., S. Doi, and S. Ando (1993). Context Analysis in
Information Extraction System based on Keywords and Text Structure.
In the proceedings of {\em the 47th National Conference of
Information Processing Society of Japan}, {\em 3-81}. (In Japanese).\\
\\
Shibata, M., O. Tanaka, and J. Fukumoto (1990). Anaphora in
Newspaper Editorials. In the proceedings of {\em the 40th National
Conference of Information Processing Society of Japan}, {\em 5F-4}. (In
Japanese).
\end{document}
\section{Check of solution}
\label{appchechsol}
In this appendix we are going to show that
\begin{eqnarray}
\rho(x,\lambda)
& = &
\frac{\c}{\sqrt{x}} + \int_{0}^{\infty}\, \sqrt{x}\,\cos(k\, \sqrt{x}) \, \ak\,dk
\end{eqnarray}
is a solution to the integro-differential equation
\begin{eqnarray}
\lambda\,\rho(x,\lambda)
&=&
{\frac{\partial}{\partial x}} \frac{1}{\sqrt{x}} \int_{0}^{\infty} \frac{\rho(y,\lambda)}{x- y } dy.
\label{appfokkerplankhalvakse}
\end{eqnarray}
To obtain better convergence in the manipulations
we will instead show $\rho(x,\lambda)$ fulfills the equation
obtained by integrating Eq.(\ref{appfokkerplankhalvakse}) from
$0$ to $z$. From the boundary condition Eq.(\ref{fluxboundarycondition})
we see that the lower boundary term on the right hand side vanishes.
Using this, Eq.(\ref{appfokkerplankhalvakse}) becomes
\begin{eqnarray}
\lambda\,\int_{0}^{z}\,dx \,\rho(x,\lambda)
&=&
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \frac{\rho(y,\lambda)}{z - y } dy.
\label{appintegralfokkerplankhalvakse}
\end{eqnarray}
This equation is a stronger statement than
Eq. (\ref{appfokkerplankhalvakse}) since any constant will vanish when
Eq. (\ref{appintegralfokkerplankhalvakse}) is differentiated to give
Eq. (\ref{appfokkerplankhalvakse}).
We start by evaluating the right hand side of Eq.(62),
\begin{eqnarray}
\mbox{R.H.S}
&=&
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \frac{\rho(y,\lambda)}{z - y } dy\\
&=&
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \frac{\c}{z - y }\frac{dy}{\sqrt{y}}
+
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \,dy \int_{0}^{\infty}\,
\frac{\sqrt{y}\,\cos(k\, \sqrt{y})}{z-y} \, \ak\,dk
\nonumber\\
&=&
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \,dy \int_{0}^{\infty}\,
\frac{\sqrt{y}\,\cos(k\,\sqrt{ y})}{z-y} \, \ak\,dk.
\end{eqnarray}
where the first integral vanishes as a principal value. Using the identity
$$
\int_{0}^{\infty}\, \frac{\sqrt{y}\,\cos(k\, \sqrt{y})}{z-y} \,dy
=
-2\pi\,\delta(k) + \pi\,\sqrt{z}\, \sin(k\,\sqrt{z}),
$$ in Eq. (64) gives
\begin{eqnarray}
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \frac{\rho(y,\lambda)}{z - y } dy
&=&
\pi \,\int_{0}^{\infty}\,\sin(k\,\sqrt{z}) \,\ak\,dk.
\end{eqnarray}
\noindent
We now turn to the left hand side of Eq. (\ref{appintegralfokkerplankhalvakse})
\begin{eqnarray}
\mbox{L.H.S}&=&\lambda\,\int_{0}^{z}\,dx \,\rho(x,\lambda)\\
&=&
\lambda\int_0^{z}\frac{\c\,dx}{\sqrt{x}}
+
\lambda\int_0^{z}dx\,\int_{0}^{\infty}\, \sqrt{x}\,\cos(k\, \sqrt{x}) \, \ak\,dk
\nonumber\\
&=&
2\,\lambda\,\c\,\sqrt{z}
+
2\,\lambda\,\int_{0}^{\infty}\,\int_0^{\sqrt{z}}\,dt\,t^2\,\cos(k\,t)\,\ak\,dk.
\end{eqnarray}
The integral with respect to $t=\sqrt{x}$ is
\begin{eqnarray}
\int_0^{\sqrt{z}}\,dt\,t^2\,\cos(k\,t)
&=&
-\frac{\partial^2}{\partial k^2} \left\{\, \frac{\sin(k\,\sqrt{z})}{k}\right\}.
\end{eqnarray}
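(This elementary step is easily verified symbolically; the following short check is ours and not part of the original derivation, and uses the computer algebra package sympy.)
\begin{verbatim}
# Symbolic check of  int_0^s t^2 cos(k t) dt = -d^2/dk^2 [ sin(k s)/k ],
# where s stands for sqrt(z).
import sympy as sp

t, k, s = sp.symbols('t k s', positive=True)
lhs = sp.integrate(t**2 * sp.cos(k*t), (t, 0, s))
rhs = -sp.diff(sp.sin(k*s)/k, k, 2)
print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}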
The left hand side is then
\begin{eqnarray}
\mbox{L.H.S}
&=&
2\,\lambda\,\c\,\sqrt{z}
-
2\,\lambda\int_{0}^{\infty}\,
\frac{\partial^2}{\partial k^2} \left\{\, \frac{\sin(k\,\sqrt{z})}{k}\right\}\,\ak\,dk
\nonumber\\
&=&
2\,\lambda\,\c\,\sqrt{z}
-2\,\lambda\,\left[\frac{\partial}{\partial k}
\left\{\,
\frac{\sin(k\,\sqrt{z})}{k}\right\}\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\right]_{0}^{\infty}\nonumber\\
&& + 2\,\lambda\int_{0}^{\infty}\,
\frac{\partial}{\partial k} \left\{\,
\frac{\sin(k\,\sqrt{z})}{k}\right\}\,\frac{\partial}{\partial k}\left\{\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\,\right\}\,dk.
\end{eqnarray}
The boundary terms from the partial integration are zero,
since
$$
\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\sim {\rm const}\,k^{-\frac{1}{4}}
\cos({\rm const}\,k^{\frac{3}{2}}-{\rm phase}),\;\;\;k\rightarrow \infty,
$$
\begin{equation}
\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)=0,\;\;\;k=0.
\end{equation}
Now,
\begin{eqnarray}
\mbox{L.H.S}
&=&
2\,\lambda\,\c\,\sqrt{z}
+
2\,\lambda\,\left[\, \frac{\sin(k\,\sqrt{z})}{k}\,
\frac{\partial}{\partial k}\left\{\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\,\right\}\right]_{0}^{\infty}
\nonumber\\
&&
-2\,\lambda\int_{0}^{\infty}\,\frac{\sin(k\,\sqrt{z})}{k}\,
\frac{\partial^2}{\partial k^2}\left\{\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\,\right\}\,dk.
\end{eqnarray}
The boundary term at infinity vanishes. The boundary term at the origin
is
\begin{eqnarray}
- 2\,\lambda\,\sqrt{z}\,\left.\frac{\partial}{\partial k}\left\{\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\,\right\}\right|_{k=0}
&=&
- 2\, \lambda\,\c\,\sqrt{z}.
\end{eqnarray}
By differentiation,
\begin{eqnarray}
\frac{1}{k}\,
\frac{\partial^2}{\partial k^2}\left\{\,\sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right)\,\right\}
&=&
- \frac{\pi}{2\,\lambda}\, \sqrt{k}\,\mbox{J}_{\frac{1}{3}}\left(\frac{2}{3}\sqrt{\frac{\pi}{2\lambda}}\,k^{3/2}\right).
\end{eqnarray}
Gathering the pieces we have
\begin{eqnarray}
\lambda\,\int_{0}^{z}\,dx \,\rho(x,\lambda)
&=&
\pi \,\int_{0}^{\infty}\,\sin(k\,\sqrt{z}) \,\ak\,dk
\label{appsoltodiffresult1}\\
\lambda\,\int_{0}^{z}\,dx \,\rho(x,\lambda)
&=&
\frac{1}{\sqrt{z}} \int_{0}^{\infty} \frac{\rho(y,\lambda)}{z - y } dy
\label{appchecksoltodifresult}
\end{eqnarray}
Here we make two remarks on the above derivations:\\
1. Since the number of particles in the interval
$(0,z)$ goes to zero as $z$ tends to zero, the last equation
shows that the zero flux condition at the origin is fulfilled.\\
2. This remark concerns the $z\rightarrow \infty$ limit
in Eq. (\ref{appsoltodiffresult1}).
The integrand on the right hand side is the product of two strongly
oscillating functions,
which oscillate out of phase. Since none of them are
absolutely integrable, a Riemann--Lebesgue lemma cannot be used to conclude
that the integral tends to zero. We may appeal to
the theory of generalized functions to conclude that it is 0.
The R. H. S. of Eq. (74) can be re-written up to irrelevant constants as
$$ {\cal J}(a)=\int_{0}^{\infty}dy\;\sin(ya){\sqrt y}J_{1/3}(by^{3/2}),$$
where $a:={\sqrt z}$ and
$b:=\frac{2}{3}\left(\frac{\pi}{2\lambda}\right)^{1/2}.$ We wish to determine
${\cal J}(a)$ in the limit $a\rightarrow \infty,$ with $b (>0)$ fixed.
Consider instead,
$${\cal J}_{\mu}(a)=\int_{0}^{\infty}dy \frac{\sin(ya)}{y}y^{1-\mu}
J_{1/3}(by^{3/2}),\;\; -\frac{5}{4}<\Re\mu<\frac{5}{2}.$$
The integral we need can be defined as the analytic continuation
of ${\cal J}_{\mu}(a)$ to $\mu=-1/2.$ Now, since
$\lim_{a\to\infty}\frac{\sin(ax)}{\pi x}=\delta(x),$ we conclude by
integrating over the $\delta$ function that
$\lim_{a\to\infty}{\cal J}_{\mu}(a)=0.$ Hence ${\cal J}(\infty)=0.$
Physically this means that the total number
of particles in each lambda mode is zero. The zero mode, of course,
contains particles.
\vfill\eject
\section{The equal time variance of a linear statistic.}
\label{chvarianceofalinearstatistic}
The equal time variance of a linear statistic ${\cal Q}$ is
\begin{eqnarray}
\mbox{Var}({\cal Q},0) &=&
\int_{0}^\infty dx \int_{0}^\infty dy \,{\cal Q}(x)\,{\cal Q}(y) \, \mbox{Corr}(x,y,0),
\label{equaltimevarianceexpressedwithcor}
\end{eqnarray}
a result first derived in \cite{Beenakeruniversal} by the method of
functional derivatives. The claim in \cite{Beenakeruniversal} is that the
method is valid for all potentials; however,
this is flawed by the use of the ${N}=\infty$ density, which may not exist
for {\it all} potentials. For the
sake of completeness, we present in this
appendix a small modification of the arguments given in
\cite{Beenakeruniversal} and show that the result has a more general
validity. Instead of focusing on
Eq. (\ref{equaltimevarianceexpressedwithcor}),
we prefer to work with the partially integrated expressions
\begin{eqnarray}
\mbox{Var}({\cal Q},t) &=& -
\int_{0}^\infty dx \int_{0}^{\infty} dy \,{\cal Q}(x)\,{\cal Q}^{\prime}(y) \,
\mbox{ICorr}(x,y,t),\;\;\;
\label{timevarianceexpressedwithicor}
\end{eqnarray}
and
\begin{eqnarray}
\mbox{Var}({\cal Q},t) &=&
\int_{0}^\infty dx \int_{0}^{\infty} dy \,{\cal Q}^{\prime}(x)\,{\cal Q}^{\prime}(y) \,
\mbox{Int}(x,y,t),\;\;\;\footnotemark
\label{timevarianceexpressedwithiicor}
\end{eqnarray}
\footnotetext{To avoid the boundary terms it has been
assumed that the linear statistic fulfills
$\left.\sqrt{x}\,{\cal Q}(x)\right|_{x=0}=0$. The linear
statistic corresponding to the conductance $\frac{1}{1+x}$
satisfies this criterion.}
where
\begin{eqnarray}
\mbox{ICorr}(x,y,t) &\equiv& \int_{0}^{y} dz \,
\mbox{Corr}(x,z,t),\\
\mbox{Int}(x,y,t) &\equiv& \int_{0}^{x} dz \,
\mbox{ICorr}(z,y,t) = \int_{0}^{x} dz_1 \int_{0}^{y} dz_2 \,
\mbox{Corr}(z_1,z_2,t).
\end{eqnarray}
We will start by finding $\mbox{ICorr}(x,y,0)$.
By definition
$\sigma_{\gaslager} (x) = \sum_{n=1}^{\infty} \delta(x-x_{n}^{0})$
and
\begin{eqnarray}
<\,\sigma_{\gaslager} (x) \,>_{\mbox{eq}} &=& \frac{\int_0^{\infty} dx_{1}^{0}\ldots
\int_0^{\infty} dx_{{N}}^{0}\, \sigma_{\gaslager} (x)\,
e^{-W[u]}}
{\int_0^{\infty} dx_{1}^{0}\ldots\int_0^{\infty} dx_{{N}}^{0}\,e^{-W[u]}}.
\end{eqnarray}
The dependence of the energy, $W$, on the coordinates
$x_{1}^{0},\,x_{2}^{0},\ldots,x_{{N}}^{0}$ has been suppressed.
The external potential $u(x)$ is assumed to be bounded at the origin
which is the physically interesting situation. Now write
$u(x) = -\int_0^x f(z) \,dz + u(0)$, where $f(x)$ is the force.
The constant $u(0)$ can be set to zero as this can always be
accomplished by a redefinition of the zero point energy. The energy
is now
\begin{equation}
W[u]=W\left[-\int_0^x f(z)dz\right]=-\sum_{n=1}^{{N}}\int_0^\infty f(z)\,
\theta(x_n^{0}-z) \,dz
-\,\sum_{i <j} \ln|x_{j}^{0} - x_{i}^{0}|,
\end{equation}
where $\theta$ is the Heaviside step function.
The functional derivative of $W[-\int_0^x f(z)\,dz]$ with respect to
$f$ is
\begin{eqnarray}
\frac{\delta W[-\int_0^x f(z)\,dz]}{\delta f(y)} &=&
- \sum_{n=1}^{\infty} \theta(x_n^{0}-y)
=
- \int_0^{y}\sigma_{\gaslager} (x,0)\,dz.
\end{eqnarray}
The functional derivative of $\sigma_{\mbox{eq}}(x) = <\,\sigma_{\gaslager}(x,0) \,>_{\mbox{eq}}$
with respect to $f$ is recognized as $\mbox{ICorr}(x,y,0)$,
\begin{eqnarray}
\mbox{ICorr}(x,y,0) &=& \frac{\delta \,\sigma_{\mbox{eq}}(x) }{\beta\delta f(y)}.
\end{eqnarray}
\noindent
The density $\sigma_{\mbox{eq}}(x)$ is approximated
by
\begin{eqnarray}
\sigma(x,{N};f) &=&
- \frac{1}{\pi^2}\,\sqrt{\frac{b({N},f) - x}{x}}
\int_0^{b({N},f)}
\frac{dz}{z-x}\,\sqrt{\frac{z}{b({N},f) - z}}
\,\, f(z),
\end{eqnarray}
where $b({N},f)$ is the upper limit of support of $\sigma(x,N;f)$.
$\frac{\delta \,\sigma(x,{N};f) }{\delta f(y)}$ is computed as the linear term
in $\epsilon$, when $\epsilon\,\delta(x-y)$ is added to
the force $f(x)$.
We have
\begin{eqnarray}
\sigma(x,{N};f +\epsilon\,\delta(x-y))
&=&
- \frac{1}{\pi^2}\,\sqrt{\frac{b - x}{x}}
\int_0^{b}
\frac{dz}{z-x}\,\sqrt{\frac{z}{b - z}}
\,\,[\;f(z) + \epsilon\,\delta(z-y)\;]\\
&=& \epsilon\;\frac{1}{\pi^2}\,\sqrt{\frac{y}{x}}\,\frac{1}{x-y}
\,\;\;\sqrt{\frac{b - x}{b - y}}
+ \sigma(x,{N}-\eta;f),
\end{eqnarray}
where $\eta = \eta(\epsilon,y,{N},f) =\epsilon\;\frac{1}{\pi^2}\, \int_0^{b}
\sqrt{\frac{y}{x}}\,\frac{1}{x-y}\,
\sqrt{\frac{b - x}{b - y}}\,dx = -\epsilon \, \frac{1}{\pi}\,\frac{y}{b-y} $
is the
number of particles associated with the force $\epsilon\,\delta(x-y)$
and $b:=b({N},f +\epsilon\,\delta(x-y)) = b({N}-\eta,f)$
is the upper limit of support of $\sigma(x;{N},f +\epsilon\,\delta(x-y))$
and $\sigma(x,{N}-\eta;f)\,$\footnote{
They have the same upper limit, because the part of
$\sigma(x,{N};f +\epsilon\,\delta(x-y))$ corresponding to $f$:
$- \frac{1}{\pi^2}\,\sqrt{\frac{b - x}{x}}
\int_0^{b}
\frac{dz}{z-x}\,\sqrt{\frac{z}{b - z}}\, f(z)$
is a solution, with ${N}-\eta$ particles, to the problem with
the external force $f$, obeying the right boundary conditions. In other
words $- \frac{1}{\pi^2}\,\sqrt{\frac{b - x}{x}}
\int_0^{b}
\frac{dz}{z-x}\,\sqrt{\frac{z}{b - z}}\, f(z) = \sigma(x,{N}-\eta;f)$.}.
The functional derivative is
\begin{eqnarray}
\frac{\delta \,\sigma(x,{N}) }{\beta\delta f(y)}
&=&
\frac{1}{\beta\pi^2}\,\sqrt{\frac{y}{x}}\,\frac{1}{x-y}\;\;\sqrt{\frac{b-x}{b-y}}
\,+\, \frac{1}{\beta\pi}\,\frac{y}{b-y} \;\;\;
\frac{\partial \,\sigma(x,{N}) }{\partial {N}}.
\end{eqnarray}
In the limit ${N} \rightarrow \infty$,
$\sqrt{\frac{b - x}{b - y}} =1$, $\eta$ and
$\frac{\partial \,\sigma(x,{N}) }{\partial {N}}$ tend to zero.
The extra density, $\Delta\sigma(x),$ when $\epsilon$ particles is added
to the system, experiences no force in the interval
$(0,b)$. Therefore $\Delta \sigma(x)$ is less than
the corresponding density with
$\epsilon$ particles in the box $(0,b)$:
$\frac{1}{\pi}\frac{\epsilon}{\sqrt{x(b-x)}},\;\;x\in(0,b).$
Outside $(0,b)$, $\Delta \sigma(x)$ tends monotonically to zero. We conclude,
in this limit, the functional derivative is reduced to
\begin{eqnarray}
\frac{\delta \,\sigma(x,{N}) }{\beta\delta f(y)}
&=&
\frac{1}{\beta\pi^2}\,\sqrt{\frac{y}{x}}\,\frac{1}{x-y}\\
\mbox{ICorr}(x,y,0)
&=&
\frac{1}{\beta\pi^2}\,{\frac{\partial}{\partial x}}\,\ln\left|\frac{\sqrt{x} - \sqrt{y}}
{\sqrt{x} + \sqrt{y}}\right|.
\label{expresforequaltimeicor}
\end{eqnarray}
{}From this it is seen that
\begin{eqnarray}
\mbox{Int}(x,y,0)
&=&
\frac{1}{\beta\pi^2}\,\ln\left|\frac{\sqrt{x} - \sqrt{y}}
{\sqrt{x} + \sqrt{y}}\right|
\label{expresforequaltimeiicor}
\end{eqnarray}
and
\begin{eqnarray}
\mbox{Var}({\cal Q},0)
&=&
\frac{1}{\beta\pi^2}\,\int_{0}^\infty dx \int_{0}^{\infty} dy
\,{\cal Q}^{\prime}(x)\,{\cal Q}^{\prime}(y) \,
\ln\left|\frac{\sqrt{x} - \sqrt{y}}
{\sqrt{x} + \sqrt{y}}\right|.
\end{eqnarray}
Thus at equal time, variances are independent of the potential $u$.
\vfill\eject
\section{Surrogate models}
\subsection{Functional forms}
Surrogate models used in this paper are analytical correlations of the following functional forms:
\begin{equation}
\rho = \left(\rho^{*}_c + C_1(T^*_c-T^*)^{1/3} + C_2(T^*_c-T^*) + C_3 (T^*_c-T^*)^{3/2}\right)\sigma^{-3}
\end{equation}
\begin{equation}
\ln P_{sat}(\sigma, \epsilon, L, Q) = c_1(\sigma, \epsilon, L, Q) + \frac{c_2(\sigma, \epsilon, L, Q)}{T^*} + \frac{c_3(\sigma, \epsilon, L, Q)}{T^{*4}} \\
\end{equation}
\begin{equation}
\gamma = A(\sigma, \epsilon, L, Q)\left( 1- \frac{T}{T_c}\right)^B
\end{equation}
\begin{equation}
T^* = Tk_B/\epsilon
\end{equation}
\begin{equation}
T^*_c = f(\sigma, \epsilon, L, Q)
\end{equation}
For full details and values of constants, see Stoll~\cite{stollComprehensiveStudyVapourliquid2009a} and Werth~\cite{werthSurfaceTensionTwo2015a}.
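The correlations are inexpensive to evaluate once the coefficient functions are known. As a purely schematic illustration (the numerical coefficients below are placeholders, not the fitted functions of $(\sigma, \epsilon, L, Q)$ from Stoll or Werth), the vapor-pressure and surface-tension forms can be coded as:
\begin{verbatim}
import numpy as np

def ln_psat(T_star, c1, c2, c3):
    # ln P_sat as a function of reduced temperature T* = T k_B / epsilon
    return c1 + c2 / T_star + c3 / T_star**4

def surface_tension(T, T_c, A, B):
    # gamma = A (1 - T/T_c)^B
    return A * (1.0 - T / T_c)**B

# example with made-up coefficients (illustration only)
T, eps_over_kB = 120.0, 100.0
print(np.exp(ln_psat(T / eps_over_kB, c1=1.5, c2=-6.0, c3=-0.3)))
print(surface_tension(T, T_c=150.0, A=50.0, B=1.26))
\end{verbatim}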
\subsection{Model uncertainty/error estimates}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c}
Property & Temperature Range (fraction of $T_c$) & $\%$ error\\
\hline
& $< 0.9 $ & $0.3$ \\
$\rho_l$ & $0.9 - 0.95$ & $ 0.3 + \frac{1-0.3}{0.95-0.9}\times (T - 0.9) $\\
& $>0.95$ & $1.0$\\
\hline
& $< 0.55 $ & $20$ \\
$P_{sat}$ & $0.55 - 0.7$ & $ 20 + \frac{2-20}{0.7-0.55}\times (T - 0.55) $\\
& $>0.7$ & $2.0$\\
\hline
& $< 0.75 $ & $4$ \\
$\gamma$ & $0.75 - 0.95$ & $ 4 + \frac{12-4}{0.95-0.75}\times (T - 0.75) $\\
& $>0.95$ & $12.0$\\
\end{tabular}
\caption{Piecewise uncertainty $u_{surr}$ developed for 2CLJQ surrogate models by Stoll and Werth \cite{stollComprehensiveStudyVapourliquid2009,werthSurfaceTensionTwo2015a} from those authors' simulation results. Piecewise behavior attempts to capture the temperature dependency of uncertainty without adding unjustified complex functions.}
\label{tbl:Uncertainty}
\end{table}
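The piecewise rule above translates directly into code; the sketch below (ours, not from the original workflow) implements the $\rho_l$ branch as a function of the reduced temperature $T_r = T/T_c$, and the $P_{sat}$ and $\gamma$ branches follow the same pattern with their own breakpoints.
\begin{verbatim}
def u_surr_rho_l(T_r):
    """Percent surrogate uncertainty for rho_l at reduced temperature
    T_r = T / T_c, following the piecewise rule in the table above."""
    if T_r < 0.9:
        return 0.3
    if T_r <= 0.95:
        # linear ramp from 0.3% at T_r = 0.9 to 1.0% at T_r = 0.95
        return 0.3 + (1.0 - 0.3) / (0.95 - 0.9) * (T_r - 0.9)
    return 1.0
\end{verbatim}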
\newpage
\section{Data temperature ranges for all Bayes factor calculations}
If data from 55-95 \% of $T_c$ was available for all properties in a given target, then data points were selected in that range. However, some properties had limited data ranges, and in those cases temperature ranges were selected so that all property data was within the same range. Temperature ranges are listed in table \ref{tbl:TempRanges}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c}
Compound & $\rho_l, P_{sat}$ temperature range & $\rho_l,P_{sat},\gamma$ temperature range \\
\hline
$\mathrm{Br_2}$ & $(0.55, 0.95)\times T_c$ & $(0.47, 0.55)\times T_c$ \\
$\mathrm{F_2}$ & $(0.5, 0.6)\times T_c$ & $(0.5, 0.55)\times T_c$ \\
$\mathrm{N_2}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{O_2}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{C_2H_2}$ & $(0.55, 0.95)\times T_c$ & $(0.62, 0.7)\times T_c$ \\
$\mathrm{C_2H_4}$ & $(0.55, 0.95)\times T_c$ & $(0.41, 0.65)\times T_c$ \\
$\mathrm{C_2H_6}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{C_2F_4}$ & $(0.55, 0.95)\times T_c$ & --- \\
\end{tabular}
\caption{Temperature ranges used to select property data points for Bayes factor calculation. Temperature ranges chosen so that all data points from all properties fall within temperature range.}
\label{tbl:TempRanges}
\end{table}
\newpage
\section{Bayes factor calculation with MBAR}
Bayes factors in the ``Bridge sampling with intermediates'' method are calculated using MBAR. For a given model posterior, first the normalizing constant $c_{ref}$ of the auxiliary reference distribution is calculated. This is trivial because these distributions are analytical.
Then, MBAR is used to calculate the ratio of normalizing constants $c_{post}/c_{ref}$ between the posterior distribution $P(D|\theta, M)$ and the reference distribution $P_{ref}(\theta | M)$ by finding the ratio of MBAR normalizing constants $\hat{c}_{post}/\hat{c}_{ref}$. We note that only the ratio is defined, since the normalizing constants are only known up to a multiplicative constant. $\hat{c}_{post}$ and $\hat{c}_{ref}$ are calculated in equations \ref{equation:MBARCpost} and \ref{equation:MBARCref}. In these equations, the variables $j$ and $k$ iterate over the probability distributions which samples are taken from, and the variable $n$ iterates over the $N_j$ samples from the unnormalized probability distributions labeled by $j$. The $K$ total distributions include all unnornalized distributions from which samples are collected from, specifically the posterior $P(D|\theta,M)$, the reference distribution $P_{ref}(\theta,M)$, as well as any auxiliary intermediate distributions. \begin{equation}
\label{equation:MBARCpost}
\hat{c}_{post} = \sum_{j=1}^{K} \sum_{n=1}^{N_j} \frac{P(D|\theta_{jn},M)}{\sum_{k=1}^K N_k \hat{c}_k^{-1} P_k(\theta_{jn})}
\end{equation}
\begin{equation}
\label{equation:MBARCref}
\hat{c}_{ref} = \sum_{j=1}^{K} \sum_{n=1}^{N_j} \frac{P_{ref}(\theta_{jn}|M)}{\sum_{k=1}^K N_k \hat{c}_k^{-1} P_k(\theta_{jn})}
\end{equation}
These equations must be solved self-consistently, since $\hat{c}_{post}$ and $\hat{c}_{ref}$ are included in $c_k$.
We note that $P_{ref}$ may be normalized or unnormalized, as long as the actual normalization constant of the version used is what is used as $c_{ref}$ in eq.~\ref{equation:PosteriorNormConstant}. These equations are only unique up to a multiplicative constant, so one of the constants must be set (usually to 1) rather than estimated, and all ratios can then be calculated uniquely.
Self-consistent solution is performed by converting into log probability space to produce effective ``energies'' and then using the Python package \texttt{pymbar} (\url{https://github.com/choderalab/pymbar}). For more details on the MBAR equations, see~\cite{shirtsStatisticallyOptimalAnalysis2008a}.
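For concreteness, the following toy example (ours; it uses simple Gaussian densities rather than the force-field posteriors, and the variable names are illustrative) shows the call pattern by which a ratio of normalizing constants is obtained from \texttt{pymbar}; the free-energy attribute is named \texttt{f\_k} in recent \texttt{pymbar} versions.
\begin{verbatim}
# Toy bridge-sampling estimate of a normalizing constant with MBAR.
# Reference: standard normal (normalized, so c_ref = 1).
# "Posterior": unnormalized Gaussian exp(-(x-1)^2/(2*0.5^2)),
#              true constant sqrt(2*pi)*0.5.
import numpy as np
from pymbar import MBAR

rng = np.random.default_rng(1)
n = 20000
theta = np.concatenate([rng.normal(0.0, 1.0, n),    # reference samples
                        rng.normal(1.0, 0.5, n)])   # "posterior" samples

log_q_ref  = -0.5 * theta**2 - 0.5 * np.log(2.0 * np.pi)
log_q_post = -0.5 * ((theta - 1.0) / 0.5)**2

u_kn = np.vstack([-log_q_ref, -log_q_post])   # MBAR "reduced potentials"
mbar = MBAR(u_kn, np.array([n, n]))
ln_ratio = -(mbar.f_k[1] - mbar.f_k[0])       # ln(c_post / c_ref)

print(ln_ratio, np.log(np.sqrt(2.0 * np.pi) * 0.5))   # should agree
\end{verbatim}
In the force-field application the two ``states'' are the analytical reference distribution and the unnormalized posterior (together with any intermediate distributions), and $c_{post}$ then follows by the multiplication described next.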
The posterior normalizing constant $c_{post}$ is then calculated by multiplication as in equation \ref{equation:PosteriorNormConstant}.
\begin{equation}
\label{equation:PosteriorNormConstant}
c_{post} = c_{ref} \times \hat{c}_{post}/\hat{c}_{ref}
\end{equation}
At this point we note that the posterior normalizing constant $c_{post}$ is the model marginal likelihood $P(D|M)$. So, for models 1 and 2, we can estimate the Bayes factor $B_{1/2}$ as in equation \ref{equation:BayesFactorCalculate}.
\begin{equation}
\label{equation:BayesFactorCalculate}
B_{1/2} \approx \frac{P(D|M_1)}{P(D|M_2)} = \frac{c_{post,1}}{c_{post,2}}
\end{equation}
\newpage
\section{ln Bayes Factor values for prior training samples (n=3, n=5, n=8)}
\subsection{Low information prior (n=3 data points per property)}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -10.91 $\pm$ 0.08 & 0 & -0.45 $\pm$ 0.13 \\
$\mathrm{F_2}$ & 0 & -6.64 $\pm$ 0.12 & -10.49 $\pm$ 0.18 \\
$\mathrm{N_2}$ & 0 & -4.53 $\pm$ 0.12 & -0.85 $\pm$ 0.2 \\
$\mathrm{O_2}$ & -4.19 $\pm$ 0.08 & 0 & -1.32 $\pm$ 0.14\\
$\mathrm{C_2H_2}$ & -488.13 $\pm$ 0.20 & -43.13 $\pm$ 0.20 & 0\\
$\mathrm{C_2H_4}$ & -93.21 $\pm$ 0.11 & 0 & -3.01 $\pm$ 0.21\\
$\mathrm{C_2H_6}$ & -43.40 $\pm$ 0.10 & 0 & -1.01 $\pm$ 0.24 \\
$\mathrm{C_2F_4}$ & -630.54 $\pm$ 0.20 & -18.91 $\pm$ 0.20 & 0\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with low information (n=3 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_2crit_low.png}
\includegraphics[width=0.85\textwidth]{figures/hfc_2crit_low.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}$ data, with low information (n=3 data points per property) training sample.}
\label{fig:2crit_low}
\end{figure}
\newpage
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -63.37 $\pm$ 0.18 & -10.98 $\pm$ 0.18 & 0 \\
$\mathrm{F_2}$ & -3.25 $\pm$ 0.19 & -1.54 $\pm$ 0.19 & 0 \\
$\mathrm{N_2}$ & -22.51 $\pm$ 0.12 & 0 & -1.00 $\pm$ 0.14 \\
$\mathrm{O_2}$ & 0 & -4.54 $\pm$ 0.10 & -5.62 $\pm$ 0.12\\
$\mathrm{C_2H_2}$ & -346.15 $\pm$ 0.08 & 0 & -142.68 $\pm$ 0.11\\
$\mathrm{C_2H_4}$ & -130.41 $\pm$ 0.12 & 0 & -0.72 $\pm$ 0.18\\
$\mathrm{C_2H_6}$ & -28.27 $\pm$ 0.09 & 0 & -1.32 $\pm$ 0.12\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with low information (n=3 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_3crit_low.png}
\includegraphics[width=0.64\textwidth]{figures/hfc_3crit_low.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}, \gamma$ data, with low information (n=3 data points per property) training sample.}
\label{fig:3crit_low}
\end{figure}
\newpage
\subsection{Medium information prior (n=5 data points per property)}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -9.19 $\pm$ 0.08 & 0 & -1.00 $\pm$ 0.15 \\
$\mathrm{F_2}$ & 0 & -6.34 $\pm$ 0.09 & -6.61 $\pm$ 0.24 \\
$\mathrm{N_2}$ & 0 & -3.33 $\pm$ 0.09 & -2.33 $\pm$ 0.18 \\
$\mathrm{O_2}$ & 0 & -3.99 $\pm$ 0.09 & -2.85 $\pm$ 0.15\\
$\mathrm{C_2H_2}$ & -591.29 $\pm$ 0.19 & -74.29 $\pm$ 0.19 & 0\\
$\mathrm{C_2H_4}$ & -117.64 $\pm$ 0.09 & 0 & -4.77 $\pm$ 0.19\\
$\mathrm{C_2H_6}$ & -41.55 $\pm$ 0.11 & 0 & -1.82 $\pm$ 0.19\\
$\mathrm{C_2F_4}$ & -489.93 $\pm$ 0.17 & -96.63 $\pm$ 0.17 & 0\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with medium information (n=5 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_2crit_med.png}
\includegraphics[width=0.85\textwidth]{figures/hfc_2crit_med.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}$ data, with medium information (n=5 data points per property) training sample.}
\label{fig:2crit_med}
\end{figure}
\newpage
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -48.98 $\pm$ 0.17 & -8.87 $\pm$ 0.17 & 0 \\
$\mathrm{F_2}$ & -2.00 $\pm$ 0.16 & -0.02 $\pm$ 0.16 & 0 \\
$\mathrm{N_2}$ & -17.51 $\pm$ 0.10 & 0 & -0.39 $\pm$ 0.12 \\
$\mathrm{O_2}$ & 0 & -6.81 $\pm$ 0.09 & -6.40 $\pm$ 0.14\\
$\mathrm{C_2H_2}$ & -287.72 $\pm$ 0.08 & 0 & -132.83 $\pm$ 0.10\\
$\mathrm{C_2H_4}$ & -118.62 $\pm$ 0.12 & 0 & -0.47 $\pm$ 0.15\\
$\mathrm{C_2H_6}$ & -29.98 $\pm$ 0.10 & 0 & -0.51 $\pm$ 0.12\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with medium information (n=5 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_3crit_med.png}
\includegraphics[width=0.64\textwidth]{figures/hfc_3crit_med.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}, \gamma$ data, with medium information (n=5 data points per property) training sample.}
\label{fig:3crit_med}
\end{figure}
\newpage
\subsection{High Information Prior (n=8 data points per property), used in final Bayes factor calculations.}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -7.94 $\pm$ 0.08 & 0 & -1.20 $\pm$ 0.12 \\
$\mathrm{F_2}$ & 0 & -2.87 $\pm$ 0.08 & -2.77 $\pm$ 0.13 \\
$\mathrm{N_2}$ & 0 & -4.21 $\pm$ 0.08 & -3.65 $\pm$ 0.18 \\
$\mathrm{O_2}$ & -0.66 $\pm$ 0.09 & 0 & -1.65 $\pm$ 0.12\\
$\mathrm{C_2H_2}$ & -382.28 $\pm$ 0.21 & -38.07 $\pm$ 0.21 & 0\\
$\mathrm{C_2H_4}$ & -115.57 $\pm$ 0.08 & 0 & -2.81 $\pm$ 0.19\\
$\mathrm{C_2H_6}$ & -38.46 $\pm$ 0.08 & 0 & -1.39 $\pm$ 0.14\\
$\mathrm{C_2F_4}$ & -424.41 $\pm$ 0.23 & -84.85 $\pm$ 0.23 & 0 \\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with high information (n=8 data points per property) training sample.}
\end{table}
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & 19.76 $\pm$ 0.18 & -3.46 $\pm$ 0.18 & 0 \\
$\mathrm{F_2}$ & 0 & -0.79 $\pm$ 0.08 & -1.90 $\pm$ 0.16 \\
$\mathrm{N_2}$ & -16.33 $\pm$ 0.10 & 0 & -0.19 $\pm$ 0.12 \\
$\mathrm{O_2}$ & 0 & -6.51 $\pm$ 0.10 & -6.72 $\pm$ 0.13\\
$\mathrm{C_2H_2}$ & -206.30 $\pm$ 0.10 & -50.14 $\pm$ 0.10 & 0\\
$\mathrm{C_2H_4}$ & -78.90 $\pm$ 0.13 & -0.96 $\pm$ 0.17 & 0\\
$\mathrm{C_2H_6}$ & -23.50 $\pm$ 0.10 & 0 & -0.24 $\pm$ 0.13\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with high information (n=8 data points per property) training sample.}
\end{table}
\newpage
\section{ELPPD Benchmarking Results}
\subsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{3}{|c|}{ELPPD Avg. over test points} & \multicolumn{3}{|c|}{Avg. Stdev. from Exp.} \\
\hline
Compound & UA & AUA & AUA+Q & UA & AUA & AUA+Q \\
\hline
\multicolumn{7}{|c|}{$\rho_l$}\\
\hline
$\mathrm{Br_2}$ &--- & --- & --- & --- & --- & --- \\
$\mathrm{F_2}$ & --- & --- & --- & --- & --- & --- \\
$\mathrm{N_2}$ & 1.51 & 1.13 & 0.94 & 1.74 & 1.50 & 1.37\\
$\mathrm{O_2}$ & 1.36 & 1.26 & 1.22 & 1.65 & 1.59 & 1.57\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & --- \\
$\mathrm{C_2H_4}$ & 9.59 & 1.59 & 1.53 & 4.38 & 1.78 & 1.75\\
$\mathrm{C_2H_6}$ & 3.07 & 0.81 & 0.82 & 2.48 & 1.27 & 1.28\\
$\mathrm{C_2F_4}$ & --- & --- & --- &--- & --- & --- \\
\hline
\multicolumn{7}{|c|}{$P_{sat}$}\\
\hline
$\mathrm{Br_2}$ & 0.31 & 0.10 & 0.09 & 0.79 & 0.44 & 0.43\\
$\mathrm{F_2}$ & 0.03 & 0.06 & 0.06 & 0.26 & 0.33 & 0.35 \\
$\mathrm{N_2}$ & 0.15 & 0.20 & 0.19 & 0.54 & 0.63 & 0.62 \\
$\mathrm{O_2}$ & 0.87 & 0.25 & 0.38 & 1.32 & 0.70 & 0.88\\
$\mathrm{C_2H_2}$ & 12.98 & 0.95 & 0.14 & 5.09 & 1.38 & 0.54\\
$\mathrm{C_2H_4}$ & 3.78 & 0.33 & 0.52 & 2.75 & 0.81 & 1.02\\
$\mathrm{C_2H_6}$ & 1.95 & 0.19 & 0.21 & 1.98 & 0.62 & 0.64\\
$\mathrm{C_2F_4}$ & 17.84 & 2.83 & 0.21 & 5.97 & 2.38 & 0.64\\
\hline
\end{tabular}
\caption{ELPPD Benchmarking for the $\rho_l, P_{sat}$ target with high information priors. ELPPD averaged over test points is (total ELPPD value/number of test points). While this is not a true average due to the nature of the ELPPD, it allows for comparison when numbers of test data points are different. Average standard deviations from experimental value over test points also shown. Larger values indicate worse overall model performance. ELPPD measurements omitted for properties with insufficient (n$<$10) measurements not already used in prior fitting or Bayes factor calculations.}
\end{table}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{3}{|c|}{ELPPD} & \multicolumn{3}{|c|}{Avg. Stdev. from Experiment} \\
\hline
Compound & UA & AUA & AUA+Q & UA & AUA & AUA+Q \\
\hline
\multicolumn{7}{|c|}{$\rho_l$}\\
\hline
$\mathrm{Br_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 0.81 & 0.67 & 0.66 & 1.27 & 1.16 & 1.15 \\
$\mathrm{O_2}$ & 1.94 & 2.21 & 2.18 & 1.97 & 2.10 & 2.09\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_4}$ & 0.59 & 1.20 & 0.89 & 1.09 & 1.55 & 1.33\\
$\mathrm{C_2H_6}$ & 2.01 & 0.88 & 0.88 & 2.01 & 1.33 & 1.32\\
\hline
\multicolumn{7}{|c|}{$P_{sat}$}\\
\hline
$\mathrm{Br_2}$ & 1.63 & 0.75 & 0.50 & 1.81 & 1.22 & 1.00 \\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 1.52 & 1.18 & 1.17 & 1.74 & 1.54 & 1.53 \\
$\mathrm{O_2}$ & 1.50 & 1.75 & 1.72 & 1.73 & 1.87 & 1.85\\
$\mathrm{C_2H_2}$ & 9.01 & 0.73 & 0.08 & 4.25 & 1.21 & 0.41\\
$\mathrm{C_2H_4}$ & 6.85 & 2.70 & 3.43 & 3.70 & 2.33 & 2.62\\
$\mathrm{C_2H_6}$ & 3.97 & 1.09 & 1.18 & 2.82 & 1.47 & 1.53\\
\hline
\multicolumn{7}{|c|}{$\gamma$}\\
\hline
$\mathrm{Br_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 9.03 & 6.99 & 7.04 & 4.25 & 3.74 & 3.75\\
$\mathrm{O_2}$ & 8.12 & 7.71 & 7.76 & 4.03 & 3.93 & 3.94\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_4}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_6}$ & 2.97 & 4.18 & 4.11 & 2.44 & 2.89 & 2.87\\
\hline
\end{tabular}
\caption{ELPPD Benchmarking for the $\rho_l, P_{sat}, \gamma$ target with high information priors. ELPPD averaged over test points is (total ELPPD value/number of test points). Average standard deviations from experimental value over test points also shown. Larger values indicate worse overall model performance. ELPPD measurements omitted for properties with insufficient (n$<$10) measurements not already used in prior fitting or Bayes factor calculations.}
\end{table}
\newpage
\section{Benchmarking Figures}
\subsection{$\rho_l, P_{sat}$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_F2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_F2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for F$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{Br$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_Br2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_Br2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for Br$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_N2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_N2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for N$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{O$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_O2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_O2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for O$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H4_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H4__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_4$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H6_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H6__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_6$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$F$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2F4_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2F4__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$F$_4$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for F$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{Br$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for Br$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for N$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{O$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for O$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_4$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_6$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\section{Parameter Distributions from Bayes factor MCMC samples}
These triangle plots are taken from the MCMC sampling of the model posteriors used in the MBAR Bayes factor calculations for all three models (UA, AUA, AUA+Q), with priors set from the high information training samples (n=8 data points per property).
Units are nm for $\sigma$, K for $\epsilon$, nm for $L$, and $\mathrm{D}\cdot \mathrm{nm}$ for $Q$.
\subsection{$\rho_l, P_{sat}$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_AUA+Q_corner.pdf}
\caption{Parameter distributions for F$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_F2_triangle}
\end{figure}
\subsubsection{Br$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_AUA+Q_corner.pdf}
\caption{Parameter distributions for Br$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_Br2_triangle}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_AUA+Q_corner.pdf}
\caption{Parameter distributions for N$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_N2_triangle}
\end{figure}
\subsubsection{O$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_AUA+Q_corner.pdf}
\caption{Parameter distributions for O$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_O2_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H2_triangle}
\end{figure}
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_4$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H4_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_6$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H6_triangle}
\end{figure}
\subsubsection{C$_2$F$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$F$_4$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2F4_triangle}
\end{figure}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_AUA+Q_corner.pdf}
\caption{Parameter distributions for F$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_F2_triangle}
\end{figure}
\subsubsection{Br$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_AUA+Q_corner.pdf}
\caption{Parameter distributions for Br$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_Br2_triangle}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_AUA+Q_corner.pdf}
\caption{Parameter distributions for N$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_N2_triangle}
\end{figure}
\subsubsection{O$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_AUA+Q_corner.pdf}
\caption{Parameter distributions for O$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_O2_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H2_triangle}
\end{figure}
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_4$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H4_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_6$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H6_triangle}
\end{figure}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{section:Introduction}
Parameterization of molecular force fields is a long-standing problem for the classical molecular simulation community~\cite{dauber-osguthorpeBiomolecularForceFields2019,haglerForceFieldDevelopment2019,rinikerFixedChargeAtomisticForce2018}, as the choice of models and their respective parameters is critical for achieving quantitative accuracy in modeling molecular interactions. The treatment of electrostatic and dispersion-repulsion interactions, commonly referred to as \emph{non-bonded interactions}, is a difficult parameterization task because these interactions are described by relatively simple models that are crude approximations of the underlying electronic behavior, requiring electronic probability distributions to be mapped onto pairwise interactions between a small number of points. Dispersion-repulsion interactions, commonly modeled with a Lennard-Jones (LJ) potential, are especially challenging to derive~\cite{monticelliBiomolecularSimulationsMethods2013}, as values of the Lennard-Jones parameters are not directly extractable from quantum mechanics (QM) calculations.
Most force fields train these parameters against macroscopic condensed phase physical properties, though there are some attempts to obtain them with reference to QM calculations~\cite{kantonenDataDrivenMappingGasPhase2020,tkatchenkoAccurateMolecularVan2009}.
Although most commonly employed classical force fields use electrostatic point charges and pairwise LJ potentials~\cite{wangDevelopmentTestingGeneral2004a,banksIntegratedModelingProgram2005}, other models for these interactions exist. Atomic multipoles~\cite{ponderCurrentStatusAMOEBA2010,jiaoCalculationProteinLigand2008}, Drude oscillators~\cite{lemkulPolarizableForceField2017}, and fluctuating charges~\cite{patelCHARMMFluctuatingCharge2004} for electrostatic interactions and alternate functional forms~\cite{dauber-osguthorpeBiomolecularForceFields2019,messerlyMie16Force2019,chiuCoarseGrainedModelBased2010,halgrenMerckMolecularForce1996} for dispersion-repulsion interactions have been proposed over the years. Even considering only LJ potentials, many different choices for atom typing~\cite{schauperlDatadrivenAnalysisNumber2020,boulangerOptimizedLennardJonesParameters2018a} and combination rules~\cite{halgrenRepresentationVanWaals1992a,waldmanNewCombiningRules1993} exist.
Choosing a potential energy function with a more complex functional form can improve agreement with experiment, however, increasing the complexity may also make the parameterization more difficult due to an increased likelihood of overfitting~\cite{harrisonReviewForceFields2018,frohlkingEmpiricalForceFields2020}.
This creates two distinct problems in designing force fields: (1) selecting among discrete choices of models and (2) optimizing continuous choice of parameters within models. It is common, although often expensive, to compare the quantitative performance of different parameter sets within a single model, but it is difficult to quantitatively compare fitness between models.
A common statistical formalism for comparing the overall fitness of discrete models is Bayesian inference~\cite{vontoussaintBayesianInferencePhysics2011a}, which allows users to evaluate models in a way that automatically penalizes unnecessary complexity~\cite{mackayInformationTheoryInferencea}.
By integrating the likelihood over the prior distribution of the parameters to obtain the marginal likelihood of each model, one can calculate \emph{Bayes factors}, which are interpreted as odds ratios between the separate models~\cite{kassBayesFactors1995}.
The Bayesian framework combines specific information on a model's ability to reproduce target data, with more general prior knowledge based on data collected with previous parameter sets, physical constraints or invariances, or chemical intuition. In this way, the prior distribution generalizes and influences the comparison between models. In any parameterization process, the influence of the prior knowledge about the system is always present; using a Bayesian approach makes the influence of any prior information on the parameters explicit. Bayesian inference has previously been used to incorporate uncertainty quantification~\cite{duttaBayesianCalibrationForcefields2018,angelikopoulosBayesianUncertaintyQuantification2012,farrellBayesianFrameworkAdaptive2015,wus.HierarchicalBayesianFramework2016} and model selection~\cite{bacalladoBayesianComparisonMarkov2009,farrellBayesianFrameworkAdaptive2015} into molecular models.
To compute Bayes factors, samples must be drawn from the model posterior distributions, which involves comparing model outputs to macroscopic observables. Typically this is done through an MCMC sampling scheme in the parameter space, since posteriors are generally non-analytical.
Once sufficient posterior samples are obtained, a range of statistical techniques can be used to estimate the Bayes factors and make quantitative judgements on the relative fitness of models.
One established MCMC technique for cross-model sampling is reversible jump Monte Carlo (RJMC)~\cite{greenReversibleJumpMarkov1995a}, which simultaneously samples over several models and their respective parameter spaces. This technique can be used to compare model posterior parameter distributions and directly compare the support for each model.
Bayes factors can also be computed with \emph{bridge sampling} methods which compute the probability ratio between a simple analytical reference distribution and the posterior distribution in order to calculate model evidences. This method can be enhanced by using intermediate probability distributions to bridge the gap between the reference distribution and the posterior, increasing the overlap between distributions being compared and thus improving the accuracy of the calculations~\cite{nealAnnealedImportanceSampling1998}.
These statistical approaches for calculating Bayes factors have close analogues in statistical physics techniques used to calculate free energy differences in molecular simulations. Specifically, Bayes factors are analogous to the ratio of partition functions of two models, but integrated over the \emph{parameter space} of each model, rather than over atomic coordinates. Bridge sampling techniques are analogous to free energy perturbation and reweighting methods~\cite{mengSIMULATINGRATIOSNORMALIZING1996,gelmanSimulatingNormalizingConstants1998}, and RJMC is analogous to expanded ensemble techniques or Hamiltonian replica exchange techniques~\cite{sugitaReplicaexchangeMulticanonicalAlgorithm2000}, where the multiple ensembles are different models rather than different potential functions or temperatures.
Estimating physical observables with molecular simulations is often computationally expensive, especially if some molecular degrees of freedom are slow. To perform MCMC sampling in parameter space, observables must be calculated many times.
To avoid this cost and efficiently sample the posterior distribution, surrogate models~\cite{sidkyMolecularLatentSpace2020,kadupitiyaMachineLearningSurrogates2020} can be used to cheaply approximate the response surface of the observables with respect to the force field parameters.
To demonstrate the Bayesian model comparison strategy for atomistic models, we have selected a problem for which a surrogate model is already well-defined. The two-center Lennard-Jones + quadrupole (2CLJQ) model~\cite{vrabecSetMolecularModels2001} is a simple 4-parameter fluid model that serves as a useful test bed for this problem. Analytical surrogate models exist for predicting saturated densities ($\rho_l$), saturated vapor pressure ($P_{sat}$), and surface tension ($\gamma$) for a number of small nonpolar fluids~\cite{stollComprehensiveStudyVapourliquid2009a,werthSurfaceTensionTwo2015a}. A previous parameterization study~\cite{stobenerParametrizationTwocenterLennardJones2016} of the model demonstrated the potential to optimize the 2CLJQ model using surrogate models. In this study we use quantitative evidence from Bayes factors to examine whether including an adjustable bond length and/or quadrupole parameter is justified in this model.
\section{Methods}
\label{section:Methods}
\subsection{Molecular Model}
\label{subsection:Molecular_Model}
We test our Bayesian model selection strategy on the 2CLJQ fluid model~\cite{vrabecSetMolecularModels2001}, a simple 2-site fluid model that fits diatomic and other similar molecules~\cite{stollSetMolecularModels2003,mollerDeterminationEffectiveIntermolecular1994,cheungPropertiesLiquidNitrogen1976,murthyInteractionSiteModels1981}. In particular, we consider this model for a range of compounds: 4 diatomic ($\mathrm{O_2, N_2, Br_2, F_2}$) and 4 ``diatomic-like'' hydro/fluorocarbon compounds \\ ($\mathrm{C_2H_2}$, $\mathrm{C_2H_4}$, $\mathrm{C_2H_6}$, $\mathrm{C_2F_4}$), all of which are well parameterized by the 2CLJQ model~\cite{vrabecSetMolecularModels2001}. Interactions within the 2CLJQ model are controlled by four parameters, as illustrated in Figure \ref{fig:2cljq_model}: a Lennard-Jones $\sigma$ (nm) and $\epsilon$ (K), a constant bond length $L$ (nm) between the two LJ sites, and a quadrupole interaction strength parameter $Q$ ($\mathrm{D} \cdot \mathrm{nm}$). The LJ $\sigma$ represents the distance at which the potential energy between two particles is equal to zero; the LJ $\epsilon$ represents the depth of the potential well and the strength of the attraction between two particles. The functional form of the molecular model is similar to the Lennard-Jones potential, but adapted for a 2-center model and with a quadrupole interaction at the model's geometric center~\cite{vrabecSetMolecularModels2001}:
\begin{equation}
u_{2CLJQ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L, Q) = u_{2CLJ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L) + u_{Q}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, Q)
\end{equation}
\begin{equation}
u_{2CLJ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L) = \sum_{a=1}^2 \sum_{b=1}^2 4\epsilon \left[\left(\frac{\sigma}{r_{ab}}\right)^{12} - \left(\frac{\sigma}{r_{ab}}\right)^{6} \right]
\end{equation}
\begin{equation}
u_{Q}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, Q) = \frac{3}{4}\frac{Q^2}{|\mathbf{r}_{ij}|^5}f(\mathbf{\omega}_{i}, \mathbf{\omega}_{j})
\end{equation}
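As a concrete illustration, the following minimal Python sketch evaluates only the $u_{2CLJ}$ part of the interaction for two rigid two-site molecules; the quadrupole term is omitted because its angular factor $f(\mathbf{\omega}_{i}, \mathbf{\omega}_{j})$ is not reproduced here, and the positions and parameter values shown are purely illustrative.
\begin{verbatim}
import numpy as np

def lj_pair(r, sigma, epsilon):
    # Lennard-Jones site-site energy for one separation r
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def u_2clj(com_i, axis_i, com_j, axis_j, sigma, epsilon, L):
    # Sum of the four site-site LJ terms for two rigid two-site molecules.
    # com_* : center-of-mass positions (nm); axis_* : unit vectors along the bonds
    sites_i = [com_i + 0.5 * L * axis_i, com_i - 0.5 * L * axis_i]
    sites_j = [com_j + 0.5 * L * axis_j, com_j - 0.5 * L * axis_j]
    return sum(lj_pair(np.linalg.norm(a - b), sigma, epsilon)
               for a in sites_i for b in sites_j)

# Illustrative call: two parallel molecules 0.4 nm apart
z = np.array([0.0, 0.0, 1.0])
print(u_2clj(np.zeros(3), z, np.array([0.4, 0.0, 0.0]), z,
             sigma=0.32, epsilon=100.0, L=0.10))
\end{verbatim}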
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{figures/2cljq_model_figure.png}
\caption{\textbf{The 2CLJQ model consists of two LJ sites parameterized by $\sigma$ and $\epsilon$, a bond length $L$, and a quadrupole interaction strength parameter $Q$.}
In this example, an ethane ($\mathrm{C_2H_6}$) molecule is parameterized as two identical sites, each consisting of a carbon atom and three hydrogens.}
\label{fig:2cljq_model}
\end{figure}
\subsubsection{Defining models of varying complexity}
To investigate which parameters are justified to reproduce the physical properties of interest, we split the model into three levels of complexity, as shown in Table \ref{tbl:Model Definitions}: united atom (UA), anisotropic united atom (AUA), and anisotropic united atom + quadrupole (AUA+Q).
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|}
\hline
Model & $\sigma$ (nm) & $\epsilon $ (K) & $L$ (nm) & $Q$ ($\mathrm{D}\cdot \mathrm{nm}$) \\
\hline
UA & Variable & Variable & Fixed & Fixed (at 0)\\
AUA & Variable & Variable & Variable & Fixed (at 0)\\
AUA+Q & Variable & Variable & Variable & Variable \\
\hline
\end{tabular}
\caption{{\bf Different 2CLJQ models considered in this study.}
While all models share the same function form, some models have parameters/interactions fixed, while others let them vary.
{\bf UA}: united atom; {\bf AUA}: anisotropic united atom; {\bf AUA+Q}: anisotropic united atom + quadrupole.
}
\label{tbl:Model Definitions}
\end{table}
The models here are nested---for example, the AUA model is the same as the AUA+Q model, but with the quadrupole moment permanently set to zero. The UA model is a subset of the AUA model, but with a fixed bond length chosen from the literature (typically an ``experimental'' value)~\cite{johnsonrusselld.NISTComputationalChemistry2018}. This construction allows us to test whether the variable quadrupole and bond length parameters are useful in reproducing the chosen physical properties.
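The nesting can also be made explicit in code; the sketch below is a hypothetical configuration (the parameter names and the fixed UA bond length are placeholders, not literature values) marking which parameters are sampled in each model.
\begin{verbatim}
# Hypothetical model configuration: None marks a sampled parameter,
# a number marks a fixed value (the UA bond length below is a placeholder).
MODELS = {
    "UA":    {"sigma": None, "epsilon": None, "L": 0.10, "Q": 0.0},
    "AUA":   {"sigma": None, "epsilon": None, "L": None, "Q": 0.0},
    "AUA+Q": {"sigma": None, "epsilon": None, "L": None, "Q": None},
}

def variable_parameters(model):
    # Names of the parameters that are free to vary in a given model
    return [name for name, fixed in MODELS[model].items() if fixed is None]

print(variable_parameters("AUA"))  # ['sigma', 'epsilon', 'L']
\end{verbatim}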
\subsubsection{Surrogate Models for 2CLJQ output}
Analytical surrogate models, which predict molecular properties as a function of molecular parameters, were developed for saturated liquid density ($\rho_l$) and saturated vapor pressure ($P_{sat}$) by Stoll~\cite{stollComprehensiveStudyVapourliquid2009a}; similar surrogate models for surface tension ($\gamma$) were developed by Werth~\cite{werthSurfaceTensionTwo2015a} by fitting to 2CLJQ simulation results at a variety of parameter and temperature conditions.
St\"{o}bener~\cite{stobenerParametrizationTwocenterLennardJones2016} previously optimized parameters for the 2CLJQ model by using surrogate models to drive fast optimization and a Pareto optimization approach to choose parameter sets. Parameter set fitness in this study was based on comparison to data from the Design Institute for Physical Properties (DIPPR) correlations; here we train against NIST experimental data for similar properties instead. In this work, we expand the idea of extensive parameter sampling through surrogate models to include comparisons between disparate models.
To apply this technique to an arbitrary force field, one will usually need to construct such surrogate models for different properties. Common techniques to build these surrogate models might include reweighting~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a}, Gaussian processes~\cite{befortMachineLearningDirected2021}, and machine learning methods~\cite{kadupitiyaMachineLearningSurrogates2020}.
Since the methods needed to construct such a model depend substantially on the property and the parameters of interest, that question is beyond the scope of this study; we focus only on applying Bayesian inference given a surrogate model.
\subsection{Bayesian inference}
\label{subsection:Bayesian_Inference}
Bayesian inference characterizes the fitness of models and parameters by combining specific information about a set of experiments with more general prior information. The core of Bayesian inference is \emph{Bayes' rule}:
\begin{equation}
\underbrace{P(\theta | D, M)}_{\mathrm{Posterior}} \propto \underbrace{P(D| \theta, M)}_{\mathrm{Likelihood}} \underbrace{P(\theta | M)}_{\mathrm{Prior}}
\end{equation}
where $\theta$ represents a set of parameters belonging to a model $M$, and $D$ represents a dataset (which can be compared to experimental data) generated by the model $M$ and parameter set $\theta$.
Bayes factors, computed as the ratio of the model marginal likelihood ($P(D|M)$) between separate models, facilitate comparisons between those models.
\begin{equation}
B_{1/2} = \frac{P(D|M_1)}{P(D|M_2)} = \frac{\int_{\theta_1}P(D|\theta_1,M_1)P(\theta_1|M_1)\mathrm{d}\theta_1}{\int_{\theta_2}P(D|\theta_2,M_2)P(\theta_2|M_2)\mathrm{d}\theta_2} = \frac{P(M_1|D)}{P(M_2|D)}\frac{P(M_2)}{P(M_1)}
\end{equation}
In order to compute the model evidence $P(D|M)$, one must integrate the likelihood weighted by the prior over the model's entire parameter space (as shown in eq. 2). For most molecular systems this Bayes factor integral cannot be calculated analytically and must be estimated.
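In the simplest setting, this integral could be estimated by brute-force Monte Carlo over the prior, as in the sketch below (the \texttt{log\_likelihood} and \texttt{prior\_sampler} callables are hypothetical placeholders); in practice this naive estimator is far too noisy for the models in this work, which motivates the bridge-sampling scheme of section 2.5.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(log_likelihood, prior_sampler, n_samples=100000, rng=None):
    # Naive Monte Carlo estimate of ln P(D|M) = ln E_prior[ P(D|theta, M) ]
    # log_likelihood : callable theta -> ln P(D|theta, M)
    # prior_sampler  : callable (n, rng) -> n parameter sets drawn from P(theta|M)
    rng = rng or np.random.default_rng()
    thetas = prior_sampler(n_samples, rng)
    log_l = np.array([log_likelihood(t) for t in thetas])
    return logsumexp(log_l) - np.log(n_samples)

# The ln Bayes factor between two models is the difference of the two estimates:
# ln_B12 = log_marginal_likelihood(loglik_1, prior_1) - log_marginal_likelihood(loglik_2, prior_2)
\end{verbatim}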
Common interpretations for levels of significance of Bayes Factor evidence were suggested by Kass and Raftery~\cite{kassBayesFactors1995} and are listed, with labels renamed for clarity, in Table \ref{tab:bayes-evidence}.
\begin{table}[h]
\caption{\label{tab:bayes-evidence}
{\bf The interpretation of Bayes factors for model 0 over model 1, $B_{01}$, due to Kass and Raftery~\cite{kassBayesFactors1995}.}
Bayes factors provide a quantitative measure of the \emph{model evidence} for choosing one model over another. }
\centering
\begin{tabular}[t]{c|c|c}
$\ln{\left(B_{01} \right)}$ & $B_{01}$ & Evidence in favor of model 0\\
\hline
0 - 1 & 1 - 3 & Inconclusive \\
1-3 & 3 - 20 & Significant \\
3-5 & 20 - 150 & Strong \\
$>$ 5 &$>$ 150 & Very Strong
\end{tabular}
\label{tbl:Bayes factors}
\end{table}
The advantage of using Bayesian inference to choose between models is that it captures both model performance and model parsimony, striking a balance between accuracy and generality.
More precisely, more complex models are penalized because their additional complexity allows them to make a wider range of predictions. Unless this additional complexity makes the predictions uniformly more accurate, many of these predictions will be extraneous to the properties of interest. The lower the proportion of useful predictions to total predictions, the more a model will be penalized.
\subsection{Construction of Posterior}
Since model evidence is based on the model posterior, one must consider the choice of the prior distributions and likelihood function that form that posterior carefully.
\subsubsection{Priors enable model parsimony}
The use of Bayesian priors allows us to consider model parsimony in the evaluation of models. The prior distribution defines a multi-dimensional probability landscape in parameter space containing all parameter values to be considered, weighted by our prior estimates of which values are most likely. As the model's complexity grows, so does the volume of this parameter space. Since priors are normalized probability distributions and must integrate to one, growing the parameter space volume \emph{must} reduce the probability of any specific parameter set.
It is also important to emphasize that Bayesian model selection chooses the best model \emph{given the parameter space}. If parameter values that produce good models are excluded from this space, or assigned very low probability, those values will not factor into the calculated Bayes factors.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/complexity_penalty.png}
\caption{\textbf{ Demonstration of how increased model complexity incurs a penalty.} In the left panel, with the model only having the set $\theta_1$ as variable parameters, a higher proportion of parameter sets are high probability. When the additional parameter $\theta_2$ is added, even though more high probability parameter sets are available, they represent a lower proportion of the total parameter space available to the model, which incurs a complexity penalty in the Bayesian context.}
\label{fig:qual_prior}
\end{figure}
In this way, as demonstrated in figure \ref{fig:qual_prior}, the prior distribution creates a complexity penalty that the model must overcome with increased predictive power in order to justify its increased complexity. Setting a prior is crucial because it encodes this penalty, which depends on the amount of initial information about the parameter.
\subsubsection{Choosing priors}
\label{subsubsection:Choosing_priors}
In order to avoid excluding useful parameter space, or including too much extraneous parameter space, we form the prior distribution using a training sample method. In this method, data is split into a ``training sample'', used to inform the prior, and a ``test sample'', used to calculate the Bayes factor. This use of training samples to include partial information is a relatively simple way to construct priors that are effective for each model and is established in the statistical literature~\cite{bergerIntrinsicBayesFactor1996,bergerRobustBayesianAnalysis1990}.
In this case, we start from a wide, non-negative uniform distribution and simulate a posterior distribution (using a likelihood calculated as in sections 2.3.3 and 2.3.4) with training samples of varying amounts of experimental data. We then fit a simple analytical distribution (Gaussian for $\epsilon,\sigma,L$; exponential or gamma for $Q$) to this \emph{training sample posterior}. This analytical distribution becomes the prior for calculation of Bayes factors under a separate set of experimental data points. We set the priors using 3 levels of information in the training sample (low, medium, and high: 3, 5, and 8 data points per property, respectively). These data points are selected for each property in the target criteria and are distributed across the available temperature range, so they do not exclude information based on temperature or property.
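As a minimal sketch of this prior-fitting step (assuming one-dimensional arrays of MCMC draws from the training-sample posterior are already available), the analytical priors could be fit as follows; the array names and the near-zero test are illustrative placeholders.
\begin{verbatim}
from scipy import stats

def fit_priors(eps_draws, sig_draws, L_draws, Q_draws, quadrupole_near_zero):
    # Fit analytical priors to training-sample posterior draws (sketch)
    priors = {
        "epsilon": stats.norm(*stats.norm.fit(eps_draws)),
        "sigma":   stats.norm(*stats.norm.fit(sig_draws)),
        "L":       stats.norm(*stats.norm.fit(L_draws)),
    }
    if quadrupole_near_zero:
        # exponential prior: location fixed at zero, only the scale is fit
        loc, scale = stats.expon.fit(Q_draws, floc=0.0)
        priors["Q"] = stats.expon(loc, scale)
    else:
        a, loc, scale = stats.gamma.fit(Q_draws, floc=0.0)
        priors["Q"] = stats.gamma(a, loc, scale)
    return priors
\end{verbatim}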
\subsubsection{Evaluating the likelihood}
\label{subsubsection:Likelihood_Evaluations}
Likelihood evaluation requires a probabilistic model of observations in terms of model parameters (referred to in the statistical literature as a ``forward model''), as well as an error model for comparing those observations with experimental ones. In this case the forward model is composed of the surrogate models for physical properties ($\rho_l$, $P_{sat}$, $\gamma$) as a function of the model parameters ($\sigma, \epsilon, L, Q$) using the functional forms as defined in Stoll et al.~\cite{stollComprehensiveStudyVapourliquid2009a} (density, saturation pressure) and Werth et al.~\cite{werthSurfaceTensionTwo2015a} (surface tension). More details of the surrogate models are available in Supporting Information, section 1.1.
Experimental measurements for the diatomic ($\mathrm{O_2,N_2,Br_2,F_2}$) and hydro/fluorocarbon compounds ($\mathrm{C_2H_2}$, $\mathrm{C_2H_4}$, $\mathrm{C_2H_6}$, $\mathrm{C_2F_4}$) for saturated liquid densities, saturation pressure, and surface tension between 55\% and 95\% of critical temperature (some training sets have slightly different temperature ranges due to data availability, full information available in Supporting Information section 1.3) are taken from the NIST ThermoData Engine database~\cite{frenkelThermoDataEngineTDE2005a}, with uncertainties assigned as a typical class uncertainty for each compound and property. The use of class uncertainties does not heavily impact the uncertainty as experimental uncertainties are much smaller than correlation uncertainties.
These surrogate models are used to calculate the likelihood $P(D|\Theta)$ by estimating each physical property at the temperatures corresponding to experimental physical property data points and comparing those values to experimental with an error model.
\subsubsection{Error Model}
\label{subsubsection:Error_Model}
We use a Gaussian error model to compare properties calculated from the surrogate models to the corresponding experimental properties.
This error model is chosen based on the assumption that the experimental value $x$ is the ``true'' value and that the surrogate model produces a measurement of that value, $\hat{x}$, with some error. We assume this error is Gaussian distributed, so the probability of observing a measurement $\hat{x}$ is given by:
\begin{equation}
P(\hat{x} | x) = \frac{1}{u_{tot}\sqrt{2\pi}}\exp\left( -\frac{1}{2}\left(\frac{x-\hat{x}}{u_{tot}} \right)^2 \right)
\end{equation}
where the total uncertainty $u_{tot}$ is obtained by adding the average experimental uncertainty $u_{exp}$ and the surrogate model error $u_{surr}$ in quadrature:
\begin{equation}
u_{tot}^2 = u_{exp}^2 + u_{surr}^2
\end{equation}
This surrogate model is an analytical form fitted to several chemical species at a range of thermodynamic conditions, so there is systematic uncertainty when the model is applied to any specific compound. While the creators of the surrogate models used in this study did not provide analytical correlations for the uncertainty, they provided uncertainty estimates at different temperature and parameter values. We used these estimates to assign model uncertainties $u_{surr}$ in our process. This uncertainty model is piecewise, depending on the temperature regime (defined as a fraction of the critical temperature) of the system. Details of this error model are listed in the Supporting Information, section 1.2.
The full forms of the likelihood function (eq. 10) and prior distribution (eq. 11) are as follows:
\begin{equation}
P(D | \Theta, M) =\left( \prod_{k=1}^3 \left( \prod_{i=1}^{n} \frac{1}{u_{tot}\sqrt{2\pi}} \exp \left( -\frac{1}{2}\left( \frac{\Vec{D_i}_k-\mathbf{f(\theta, T)}}{u_{tot}}\right)^2\right) \right)\right).
\end{equation}
The likelihood function evaluates agreement with experiment over a vector of $i$ temperature data points from $k$ different properties. $\mathbf{f(\theta, T)}$ is the output of the analytical surrogate model for the $k^{th}$ physical property at the temperatures corresponding to the data vector $\Vec{D_i}_k$. It is important to note that for the saturation pressure, the data vector $\Vec{D_i}_{P_{sat}}$ is $\ln{P_{sat}}$ instead of $P_{sat}$ due to the wide range of values at different temperatures. The prior function is defined as:
\begin{equation}
P(\Theta) = \left( \prod_{j=1}^3\frac{1}{\sigma_j \sqrt{2\pi}} \exp \left( -\frac{1}{2}\left( \frac{\theta_j - \mu_j}{\sigma_j}\right)^2\right) \right) \left( \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta_4^{\alpha-1}e^{-\beta \theta_4} \right).
\end{equation}
The first term of the total prior is the prior for ($\sigma=\theta_1, \epsilon=\theta_2, L=\theta_3)$, and the second term is the prior for $Q=\theta_4$, either an exponential ($\alpha=1, \beta$ determined by prior fitting) or gamma prior ($\alpha, \beta$ determined by prior fitting). Distributions are fit to samples using the SciPy \texttt{distributions} module and the \texttt{fit} functions. For compounds with quadrupole distributions centered at values larger than zero, gamma priors are chosen; for compounds with quadrupole support near zero, exponential distributions are chosen. For the UA and AUA models, the prior terms for $L,Q$ (UA) or $Q$ (AUA) are set to 1, as the values are fixed.
Combined, they form the posterior distribution as described in eq. 1.
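A minimal sketch of evaluating this unnormalized log posterior is shown below; the \texttt{surrogate} callable, the dictionaries of data and uncertainties, and the \texttt{free\_names} list are hypothetical placeholders, and the priors are assumed to be frozen SciPy distributions as in the prior-fitting sketch above.
\begin{verbatim}
import numpy as np

def log_likelihood(theta, data, surrogate, u_exp, u_surr):
    # Gaussian log-likelihood over all properties and temperatures (sketch).
    # data      : dict property -> (temperatures, experimental values); Psat stored as ln(Psat)
    # surrogate : hypothetical callable (property, theta, T) -> predicted values
    logl = 0.0
    for prop, (T, y_exp) in data.items():
        y_model = surrogate(prop, theta, T)
        u_tot = np.sqrt(u_exp[prop] ** 2 + u_surr[prop] ** 2)
        logl += np.sum(-0.5 * ((y_exp - y_model) / u_tot) ** 2
                       - np.log(u_tot * np.sqrt(2.0 * np.pi)))
    return logl

def log_posterior(theta, free_names, priors, data, surrogate, u_exp, u_surr):
    # Unnormalized log posterior: log-likelihood plus log prior of the free parameters
    log_prior = sum(priors[name].logpdf(theta[name]) for name in free_names)
    return log_likelihood(theta, data, surrogate, u_exp, u_surr) + log_prior
\end{verbatim}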
\subsection{Sampling of Posteriors}
\label{subsection:Sampling}
For any method of calculating Bayes factors, obtaining samples from the parameter posterior distribution is essential. To draw samples from complex distributions that lack simple closed forms, Markov chain Monte Carlo (MCMC) methods are used~\cite{chipmanPracticalImplementationBayesian2001}.
\subsubsection{MCMC parameter proposals}
\label{subsubsection:MCMC_param_proposals}
Within a particular model, we propose moves using component-wise Metropolis-Hastings Monte Carlo~\cite{johnsonComponentWiseMarkovChain2013}, where at each step an individual variable parameter is chosen with uniform probability and perturbed. This perturbation is done by proposing a new value of the parameter from a normal distribution centered at the current value, with any proposed negative values rejected. The standard deviation of this distribution is initially set to 1/100 of the initial value of the variable (or a minimum of 0.001 if the initial value is 0) and then tuned to obtain an acceptance ratio between 20\% and 50\% during a ``burn-in''/tuning simulation run for 1/5th the length of the production simulation.
After the burn-in period, the simulation proceeds with fixed proposal distributions, as is required to satisfy detailed balance. Tuning steps are discarded when computing model evidences.
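A minimal sketch of one such component-wise step is given below; it assumes the unnormalized log posterior is available as a callable taking a dictionary of parameter values, and the tuning of \texttt{step\_sizes} is left outside the sketch.
\begin{verbatim}
import numpy as np

def componentwise_mh_step(theta, free_names, step_sizes, log_posterior, rng):
    # One component-wise Metropolis-Hastings step (sketch).
    # theta : dict of current parameter values; free_names : sampled parameters;
    # step_sizes : dict of Gaussian proposal standard deviations (tuned during burn-in)
    name = free_names[rng.integers(len(free_names))]  # pick one parameter uniformly
    proposal = dict(theta)
    proposal[name] = rng.normal(theta[name], step_sizes[name])
    if proposal[name] < 0.0:                          # proposed negative values are rejected
        return theta, False
    log_alpha = log_posterior(proposal) - log_posterior(theta)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return theta, False
\end{verbatim}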
\subsubsection{Reversible jump Monte Carlo}
\label{subsubsection:RJMC}
We next turn to the more complicated question of choosing moves between models. The reversible jump Monte Carlo (RJMC) technique~\cite{greenReversibleJumpMarkov1995a} can be employed to sample between multiple models with different numbers of parameters.
RJMC is an extension of traditional MCMC sampling from parameter space to combined parameter \emph{and} model space. It allows for the simultaneous sampling of parameter probability distributions from several models, as well as the discrete model probability distribution of the collection of models. RJMC is a powerful tool to simultaneously sample multiple models, but requires careful construction in cases with dissimilar models. Good chain mixing requires (1) the construction of high-quality mappings between the parameters in each model and (2) efficient ways of proposing new values when the number of parameters between models is uneven~\cite{karagiannisAnnealedImportanceSampling2013b,brooksClassicalModelSelection2003,brooksEfficientConstructionReversible2003,hastieModelChoiceUsing2012}.
For the implementation presented in this paper, proposals for variables not present in the current model, such as a quadrupole moment parameter when proposing a move from a model without a quadrupole (UA, AUA) to the model with a quadrupole (AUA+Q), are drawn independently from a Gaussian ``proposal distribution'' that approximates the 1-D posterior distribution of that parameter in the destination model. These distributions are obtained by fitting Gaussians to short simulations of the model posterior in order to facilitate acceptance of inter-model moves.
\subsubsection{Parameter mappings}
\label{subsubsection:Parameter Mappings}
To efficiently propose inter-model moves, one must define a \emph{parameter mapping function} that transforms a point in one parameter space to another point in a different parameter space (e.g. $\epsilon$ in the AUA model $\longrightarrow$ $\epsilon$ in the AUA+Q model). These mapping functions may range from a simple direct mapping to a highly complex one, depending on the situation. The form of the mapping function is often an important factor in model chain mixing and convergence; as long as there is \emph{some} probability overlap, sampling between models is theoretically possible, but it may be impossibly inefficient. We explored several simple strategies for mapping common parameters between models, described below and illustrated in figure \ref{fig:distribution_mapping}, although others exist, such as non-equilibrium candidate Monte Carlo~\cite{nilmeierNonequilibriumCandidateMonte2011b}.
\begin{itemize}
\item \textbf{Direct mapping:} In the simplest cases, parameters can be taken directly from one model and plugged into the other model. This is only useful in the case where the distributions of the parameter in the two models overlap significantly.
\item \textbf{Maximum a posteriori (MAP) mapping:} Another possibility is to run a short MCMC simulation of each model, and then map the highest probability (MAP) values of each common parameter to each other, either by translation (additive mapping) or rescaling (multiplicative mapping). This technique can improve overlap but will ultimately fail when the shapes of the distributions differ significantly, where only parameter sets near the MAP value will have good overlap.
\item \textbf{Gaussian affine mapping:} One can also run short MCMC simulations of each model, fit Gaussian distributions to each model posterior distribution, and then map these distributions on top of each other using an affine map that transforms the shape of one distribution to the shape of the other (a minimal sketch of such a mapping follows this list). This has the advantage of capturing differently shaped posterior distributions, allowing for more consistent quality mapping, especially away from the posterior maximum, but can still fail if the distributions are multi-modal (or more precisely, differently modal).
\end{itemize}
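A minimal sketch of the Gaussian affine mapping is shown below, assuming the means and covariances of the Gaussian fits to the two model posteriors are already available; the returned log-Jacobian is the term that would enter the RJMC acceptance probability.
\begin{verbatim}
import numpy as np

def gaussian_affine_map(theta, mean_a, cov_a, mean_b, cov_b):
    # Affine map sending the Gaussian fit of model A's posterior onto model B's (sketch).
    # theta : 1-D array of the shared parameters, expressed in model A's space.
    chol_a = np.linalg.cholesky(cov_a)
    chol_b = np.linalg.cholesky(cov_b)
    z = np.linalg.solve(chol_a, theta - mean_a)    # whiten with respect to model A
    theta_b = mean_b + chol_b @ z                  # color with respect to model B
    log_jacobian = (np.sum(np.log(np.diag(chol_b)))
                    - np.sum(np.log(np.diag(chol_a))))
    return theta_b, log_jacobian
\end{verbatim}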
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/mapping_strategies.png}
\caption{\textbf{Mappings between distributions with poor overlap, visualized. } In the first panel, there is enough overlap between the distributions to map without any transformation. Second panel shows maximum a posteriori mapping, improving overlap, but not matching shape. Third panel shows Gaussian affine mapping, transforming location and matching shape.}
\label{fig:distribution_mapping}
\end{figure}
\subsubsection{Biasing Factors}
\label{subsubsection:BiasingFactors}
Even with perfect mapping between the shape of the distributions between models, inter-model moves will be rejected at a high rate if one model has much stronger unnormalized evidence than the other. In this case a biasing factor may be used to equalize the magnitude of the evidences. After achieving improved sampling with the biased probabilities, one can easily unbias the results if the bias is constant across the model. The biasing factors that result in even sampling of models are the Bayes factors between models, similar to how the weights leading to equal probability sampling in expanded ensemble or simulated tempering simulations are the free energies of each state. However, even approximate biasing factors can enable sampling such that model swaps are feasible and Bayes factors may be estimated.
\subsection{Calculation of Bayes Factors}
\label{subsection:CalculationBayesFactors}
There are several approaches that can be used to calculate the model evidences and Bayes factors, including analysis of the reversible jump Monte Carlo model posteriors~\cite{bartolucciEfficientBayesFactor2006} and bridge sampling~\cite{mengSIMULATINGRATIOSNORMALIZING1996} from the prior to the posterior.
\subsubsection{Bridge sampling with intermediate states}
\label{subsubsection:BridgeSampling}
One method for computing Bayes factors involves calculating the marginal likelihoods (model normalizing constants) for each model $M$ individually, and then calculating the Bayes factor as in eq. 5. The calculation of normalizing constants using Monte Carlo methods is a well-known problem in statistical inference and many methods are described in the literature~\cite{gewekeBayesianInferenceEconometric1989,nealProbabilisticInferenceUsing1993,diciccioComputingBayesFactors1997,gelmanSimulatingNormalizingConstants1998}.
A technique common to statistical inference and statistical physics is bridge sampling~\cite{mengSIMULATINGRATIOSNORMALIZING1996}, which calculates the ratio of normalizing constants between two distributions by utilizing information from Monte Carlo samples from both distributions.
The Bennett acceptance ratio (BAR)~\cite{bennettEfficientEstimationFree1976} is a specific case of bridge sampling in which the bridge function $\alpha$ is chosen to make the estimator unbiased and of minimum variance.
The BAR estimator is limited in cases where overlap between the two distributions is poor. In that case, one can introduce intermediate distributions designed to create a path between the target distributions with sufficient overlap. In this approach, which we call \emph{bridge sampling with intermediates}, the ratio of normalizing constants can be estimated with the unbiased, lowest-variance MBAR estimator~\cite{shirtsStatisticallyOptimalAnalysis2008a}, which extends BAR to data collected over multiple states. The process for calculating log Bayes factors from MBAR is analogous to the process of calculating free energies as described by Shirts and Chodera~\cite{shirtsStatisticallyOptimalAnalysis2008a}, with the Boltzmann distributions replaced with the prior $P(\theta|M)$, posterior $P(\theta|D,M)$, and intermediate distributions. For a detailed discussion of how Bayes factors are calculated with MBAR, see Supporting Information, section 2.
Our strategy, described visually in figure \ref{fig:mbar_calculations}, is to calculate the model evidence (normalizing constant) of each model posterior $M$ by using MBAR to calculate the ratio of normalizing constants between that model's prior and posterior. Since all priors used here are normalized analytical distributions, the prior normalizing constant is known, and we can calculate the posterior normalizing constant by combining it with the ratio from MBAR. With this information we can calculate the Bayes factors by comparing the posterior normalizing constants between models.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/mbar_procedure.png}
\caption{\textbf{ Bayes factor calculation scheme for 2CLJQ models.} Bridge sampling is used to calculate ratio of marginal likelihoods (normalizing constants) between analytical reference distribution (with known normalizing constant) and posterior distribution. Posterior distribution marginal likelihoods are then calculated and used to compute Bayes factors.}
\label{fig:mbar_calculations}
\end{figure}
Preliminary testing showed that the prior and posterior did not generally have enough overlap to produce reliable estimates of normalizing constant ratios with BAR.
In order to improve overlap, we replace the prior in this sampling process with a more informed auxiliary distribution.
By running a short MCMC simulation of the posterior distribution, we fit a simple multivariate normal distribution to the posterior sample; with more complex posteriors, Gaussian mixture models could be used, although we do not investigate them here. This approach is not common in molecular simulation, where the distribution of samples as a function of the coordinates is usually far more complicated, with the possible exception of simple systems like the 2CLJQ model or crystals with near-harmonic behavior~\cite{abrahamThermalGradientApproach2018}.
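As a minimal sketch (not the implementation used in this work), such an auxiliary distribution can be fit to a short chain of posterior samples with NumPy and SciPy; here \texttt{posterior\_samples} is a hypothetical array of shape (number of samples, number of parameters):
\begin{verbatim}
# Sketch: fit a multivariate-normal auxiliary (reference) distribution to a
# short posterior sample. `posterior_samples` is a hypothetical
# (n_samples, n_parameters) array of MCMC draws.
import numpy as np
from scipy.stats import multivariate_normal

def fit_reference(posterior_samples):
    mu = posterior_samples.mean(axis=0)            # per-parameter sample mean
    cov = np.cov(posterior_samples, rowvar=False)  # sample covariance matrix
    # The returned distribution is normalized, so its log normalizing
    # constant is exactly zero -- the property the bridge calculation needs.
    return multivariate_normal(mean=mu, cov=cov)
\end{verbatim}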
Testing also showed that intermediate distributions were required to get sufficient overlap for bridge sampling. Distributions $P_{int}$ are defined as:
\begin{equation}
P_{int} = P_{ref}(\theta)^{f(\lambda)}\times P(\theta|D,M)^{f(1-\lambda)}
\end{equation}
where $P_{ref}$ is the analytical auxiliary distribution.
Through testing we determined that using one intermediate state between the reference distribution and the posterior is enough to consistently achieve sufficient overlap for Bayes factor calculations.
Using intermediate distributions has an analog in free energy simulations with $\lambda$-dynamics simulations~\cite{knightLDynamicsFreeEnergy2009}, and in staged methods of calculating host/guest binding affinities through multiple windows.
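The following sketch illustrates the staged evidence estimate under the assumptions above. For brevity it chains one-sided importance-sampling (exponential-averaging) estimates between adjacent $\lambda$ states rather than the two-sided BAR/MBAR estimator actually used in this work; \texttt{log\_q\_ref} (the normalized reference log-density) and \texttt{log\_q\_post} (the unnormalized log-posterior, i.e. log-likelihood plus log-prior) are placeholders:
\begin{verbatim}
# Sketch: staged estimate of a model's log evidence using one intermediate
# state. This uses simple one-sided exponential averaging between adjacent
# states, not the BAR/MBAR estimator used in the paper.
import numpy as np
from scipy.special import logsumexp

def log_ratio(samples, log_q_num, log_q_den):
    """log(Z_num / Z_den) from samples drawn from the denominator state."""
    log_w = np.array([log_q_num(t) - log_q_den(t) for t in samples])
    return logsumexp(log_w) - np.log(len(log_w))

def log_evidence(ref_samples, int_samples, log_q_ref, log_q_post, lam=0.5):
    # Geometric bridge between the reference and the unnormalized posterior.
    log_q_int = lambda t: lam * log_q_ref(t) + (1.0 - lam) * log_q_post(t)
    # Chain the two ratios; the analytical reference has log Z_ref = 0.
    return (log_ratio(ref_samples, log_q_int, log_q_ref)
            + log_ratio(int_samples, log_q_post, log_q_int))

# Log Bayes factor between two models = difference of their log evidences.
\end{verbatim}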
\subsubsection{Model distributions from RJMC}
\label{subsubsection:ModelDistributionsRJMC}
RJMC allows for the direct estimation of Bayes factors through two methods: (1) estimating the ratio of normalizing constants by taking the sampling ratio (SR) of the visits to each model and (2) estimating this ratio with what the statistics literature calls warped bridge sampling, using the BAR estimator.
The sampling ratio estimate is simply the ratio of the number of Monte Carlo samples from each model; assuming each model's parameter space is sampled well, this converges to the model Bayes factor. The warped bridge sampling approach involves using bridge sampling on a set of samples from two models, then using the proposal distributions from section 2.4.2 and the parameter mappings from section 2.4.3 to ``warp'' the distributions to improve overlap, similar to configurational-space mappings~\cite{paliwalMultistateReweightingConfiguration2013,schieberConfigurationalMappingSignificantly2019}.
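As a simple illustration of the sampling ratio estimate, the log Bayes factor can be read off an RJMC model-index trace (the array and function names here are hypothetical):
\begin{verbatim}
# Sketch: sampling-ratio (SR) estimate of a log Bayes factor from an RJMC run.
# `model_trace` is a hypothetical integer array recording, for each MCMC step,
# which model the chain occupied.
import numpy as np

def log_bayes_factor_sr(model_trace, model_a, model_b):
    n_a = np.sum(model_trace == model_a)
    n_b = np.sum(model_trace == model_b)
    # With well-mixed sampling the visit ratio converges to the Bayes factor.
    return np.log(n_a) - np.log(n_b)
\end{verbatim}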
\subsubsection{Biased RJMC}
\label{subsubsection:BiasedRJMC}
As discussed in section 2.4.4, biasing factors can be used to equalize the evidence between models and facilitate cross model jumps in cases where there is a large difference in model evidences. In the case of our data, UA models often have very low model evidence for this data set, compared to AUA or AUA+Q. We can validate Bayes factor calculations as described in section 2.5.1 by performing RJMC with model evidences estimated from the importance sampling method as biasing factors. If the biasing factors from the importance sampling method in 2.5.1 are accurate, then biased RJMC should yield roughly equal sampling between models. In practice, the sampling will not be perfectly equal due to the stochastic nature of the simulation, but large deviations from equal sampling would indicate that these biasing factors are not accurate.
\subsection{Measuring Posterior Predictive Accuracy}
\label{subsection:MeasuringELPPD}
To benchmark the models independently of the Bayesian model comparison, we can calculate the \emph{expected log pointwise predictive density} for a new dataset (ELPPD), a metric introduced by Gelman \emph{et al.}~\cite{gelmanUnderstandingPredictiveInformation2014}. This quantity is a sum, over new data points $\Tilde{D}_i$, of the \emph{expected log predictive density} (ELPD), the expectation value of a new data point's log objective function over the previously obtained posterior distribution $P(\theta | D)$. For the objective function, we choose our standard likelihood function with the new data point as the target, $\log P(\Tilde{D}_i | \theta)$.
\begin{equation}
\mathrm{ELPPD}(\tilde{D} | D) = \sum_{i=1}^{n} \int_{\theta} \log P(\Tilde{D}_i | \theta) P(\theta | D) \mathrm{d}\theta
\end{equation}
In practice, we estimate this quantity by simulating the posterior of each model, then drawing $k$ points at random from the posterior samples:
\begin{equation}
\mathrm{ELPPD}_k (\tilde{D}|D) \approx \sum_{i=1}^{n} \frac{1}{k} \sum_{j=1}^{k} \log P(\Tilde{D}_i | \theta_j) \, ; \, \, \, \theta_j \sim P(\theta | D)
\end{equation}
These quantities are useful in that they assess the model's performance on independent datasets and do not include the prior information, so they are a more ``traditional'' metric to assess model performance.
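A minimal sketch of this Monte Carlo estimate, assuming a hypothetical function \texttt{log\_likelihood(d, theta)} that returns $\log P(\Tilde{D}_i|\theta)$ for a single benchmark point and a list \texttt{posterior\_samples} of $k$ parameter sets drawn from $P(\theta|D)$:
\begin{verbatim}
# Sketch: Monte Carlo estimate of the ELPPD over a set of new benchmark points.
import numpy as np

def elppd(new_data, posterior_samples, log_likelihood):
    total = 0.0
    for d in new_data:
        # Average the log predictive density of this point over posterior draws.
        total += np.mean([log_likelihood(d, theta) for theta in posterior_samples])
    return total
\end{verbatim}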
\subsection{Chosen methods for this investigation}
\label{subsection:ChosenMethods}
Model priors are set using the training sample method described in section 2.3.3. To calculate Bayes factors, we use the MBAR bridge-sampling approach with an intermediate state (section \ref{subsubsection:BridgeSampling}), with three total $\lambda$ windows (analytical reference distribution, intermediate distribution, posterior distribution) simulated for $4\times 10^6$ steps each. This method was chosen over the RJMC method because it yields more accurate Bayes factor estimates.
To validate these results, RJMC is run for $2\times10^6$ steps with biasing factors taken as the model evidences from the MBAR calculation. In all cases, the biased RJMC calculations agree with the MBAR calculations. ELPPD and average deviation calculations are taken from benchmarking posterior simulations performed for $10^6$ steps for each model.
\section{Results \& Discussion}
\label{section:ResultsDiscussion}
\subsection{Effect of priors on Bayes factors}
\label{subsection:EffectPriors}
In order to compute meaningful Bayes factors between 2CLJQ models, we must address the effect of priors on the model posteriors.
\subsubsection{Illustration with uniform priors}
\label{subsubsection:IllustrationUniformPrior}
One of the most important effects of the choice of prior distribution is the amount of \emph{parameter uncertainty} that the prior distribution encodes. In the case of a Bayes factor between the model without a quadrupole (AUA) and the model including a quadrupole (AUA+Q), different uniform priors over the quadrupole have a strong effect on the Bayes factor. This Bayes factor, shown in figure \ref{fig:disjoint_prior}, is calculated with a target of ethane (C$_2$H$_6$) $\rho_l$ and $P_{sat}$ data.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/bayeschange.png}
\caption{\textbf{Bayes factor comparison of AUA and AUA+Q models with varied quadrupole priors.}
Left: Bayes factors and uncertainties between AUA and AUA+Q, tested against C$_2$H$_6$ ($\rho_L$+$P_{sat}$), for a uniform prior distribution on the 2CLJQ quadrupole parameter $Q$ with several different bounds.
Wider prior distributions over the quadrupole decrease the evidence in favor of the AUA+Q model. Right: the corresponding uniform prior distributions (matched by color), with the sampled quadrupole posterior distribution superimposed.}
\label{fig:disjoint_prior}
\end{figure}
As the uniform prior is widened, covering more parameter space, its probability density decreases proportionally, which, in turn, lowers the model evidence for the AUA+Q model. This simple example illustrates how a less certain prior over a parameter (encoding more parameter uncertainty) penalizes a model that includes that parameter. This prior uncertainty penalty is an intended part of Bayesian model selection. In the Bayes factors calculated in this work, we fit the priors for each model in any given comparison to the same training sample, so any differences in prior uncertainty should be due to model differences rather than prior information.
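A toy numerical illustration (not data from this study) makes the mechanism concrete: once a uniform prior fully covers the region where the likelihood has support, the marginal likelihood scales as the inverse of the prior width, so each doubling of the width costs roughly $\ln 2$ in log evidence:
\begin{verbatim}
# Toy illustration: evidence under a uniform prior of width W scales ~ 1/W
# once the prior covers the likelihood. The "likelihood" here is a made-up
# peaked function of the quadrupole parameter Q.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

likelihood = lambda q: norm.pdf(q, loc=0.5, scale=0.05)
for width in (1.0, 2.0, 4.0):
    evidence, _ = quad(lambda q: likelihood(q) / width, 0.0, width)
    print(width, np.log(evidence))   # log evidence drops by ~ln(2) per doubling
\end{verbatim}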
\subsubsection{Using more data in prior fitting increases prior certainty}
\label{subsubsection:PriorFittingTrainingSample}
To evaluate the effect of different amounts of prior information on the Bayes factors between 2CLJQ models, we calculated Bayes factors with priors fit to training samples containing different amounts of experimental data. Our priors become more certain as more training data is included. While there is no objectively correct Bayes factor, owing to its dependence on the prior, the prior should effectively constrain the parameter space so that extraneous parameter values do not affect the model comparison. To ensure that the priors for the 2CLJQ model were effectively constraining the parameter space, we constructed them with three levels of data in the training sample: n=3, 5, or 8 data points per property (referred to as low, medium, and high information priors).
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,trim={0 0 0 1.5cm},clip]{figures/AUA+Q_prior_sigma.png}
\caption{\textbf{ Changes in prior distribution shape of the AUA+Q model with different amounts of data in the training set.} Lack of overlap between the low data (n=3) training sample and the higher data training samples indicates that the prior is not well constrained with n=3 data points.}
\label{fig:prior_changes}
\end{figure}
In this case, shown in figure \ref{fig:prior_changes}, the overlap between the low information prior (n=3 data points) and the medium and high information priors (n=5 and n=8, respectively) is poor, indicating that the low information prior does not constrain the parameter space in the same way as the other priors. In our Bayes factor calculations, the high information prior is used, since it has excellent overlap with the medium prior, and the increased amount of data in the training sample increases our confidence that the prior represents parameters that can model the whole range of experimental data points. We varied the amount of information in this experiment to determine the effect of the amount of data included on the prior, and therefore on the Bayes factor; in general, we would use as much information as reasonably possible to inform the prior.
\subsection{Results for $\rho_l$, $P_{sat}$ targets}
\label{subsection:2crit}
For the $\rho_l, P_{sat}$ target, the preferred model varies significantly based on the molecule that it is applied to.
\subsubsection{Diatomic molecules}
\label{subsubsection:2crit_diatomic}
For the set of diatomic molecules in figure \ref{fig:diatomic_2crit}, Bayes factor evidence favors either the simplest UA model or the AUA model. For bromine, the AUA model produces slightly different LJ parameters than the UA model, which yield improved physical property accuracy for this target but are incompatible with the fixed bond length value of the UA model (sampling bond lengths of $\sim$ 0.18 nm, rather than 0.23 nm, as in the UA model). While the value in the UA model reflects a physical bond length, the anisotropic model's assumption that the interaction sites are not necessarily separated by the equilibrium bond length produces better physical property estimates. This is similar to the behavior of oxygen, which also selects a different bond length than the experimental value, but with weaker evidence for the AUA model.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/diatomics_2crit.png}
\caption{\textbf{Bayes factors between UA, AUA, AUA+Q models for diatomic molecules, tested against $\rho_l, P_{sat}$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. Simpler models (UA/AUA) are favored in these cases.}
\label{fig:diatomic_2crit}
\end{figure}
\subsubsection{Hydro/fluorocarbons}
\label{subsubsection:2crit_hfc}
For the hydrocarbons/fluorocarbons in figure \ref{fig:hfc_2crit}, the evidence generally points towards the more complex models (AUA/AUA+Q), with the strength of the experimental quadrupole moment generally dictating whether the $Q$ parameter is required. C$_2$H$_2$ and C$_2$F$_4$ have the largest sampled quadrupole moments (Q $\approx$ 0.5, 0.83) and overwhelming evidence in favor of the AUA+Q model. C$_2$H$_4$ and C$_2$H$_6$ have lower sampled quadrupole moments (Q $\approx$ 0.41, $<$0.25) and less evidence in favor of including this parameter; C$_2$H$_4$ and C$_2$H$_6$ have significant evidence in favor of the AUA model.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/hfc_2crit.png}
\caption{\textbf{ Top panel: Bayes factors between UA, AUA, AUA+Q models for hydro/fluorocarbon molecules, tested against $\rho_l, P_{sat}$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) between the favored model and the named model. Size of blue bar indicates the strength of the evidence for the favored model (a larger bar indicates stronger evidence for the favored model).}
\label{fig:hfc_2crit}
\end{figure}
\subsection{Results for $\rho_l, P_{sat}, \gamma $ targets}
\label{subsection:3crit}
The addition of the surface tension target tends to push the evidence in favor of the more complex models, which is expected given the need to reproduce a very different type of molecular data.
\subsubsection{Diatomics}
\label{subsubsection:diatomic_3crit}
Among diatomics in figure \ref{fig:diatomic_3crit}, both Br$_2$ and N$_2$ require more complex models, with very strong evidence against the UA model, with weak evidence in favor of AUA over AUA+Q for N$_2$, and strong evidence in favor of AUA+Q in the case of Br$_2$. In the cases of F$_2$ and O$_2$, there is very little difference in model performance (before accounting for parsimony) between UA and the more complex models, leading Bayesian inference to favor UA due to parsimony. In the cases of Br$_2$ and N$_2$, the more complex models (AUA and AUA+Q) correct a systematic overprediction of both $P_{sat}$ and $\gamma$ compared to the UA model, overcoming its parsimony advantage.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/diatomics_3crit.png}
\caption{\textbf{ Bayes factors between UA, AUA, AUA+Q models for diatomic molecules, tested against $\rho_l, P_{sat}, \gamma$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. Addition of $\gamma$ data pushes the model decision for Br$_2$ and N$_2$ towards more complex models (AUA/AUA+Q).}
\label{fig:diatomic_3crit}
\end{figure}
\subsubsection{Hydro/fluorocarbons}
\label{subsubsection:3crit_hfc}
For the hydro/fluorocarbons in figure \ref{fig:hfc_3crit} (C$_2$F$_4$ omitted due to lack of experimental $\gamma$ data), C$_2$H$_2$ still has extremely strong evidence in favor of AUA+Q, and the inclusion of $\gamma$ in the target yields weak evidence in support of the AUA+Q model for C$_2$H$_4$. C$_2$H$_6$ has very weak evidence against the inclusion of a quadrupole as the AUA and AUA+Q model sample very similar values. All of these molecules have very strong evidence against the simplest UA model.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{figures/hfc_3crit.png}
\caption{\textbf{ Bayes factors between UA, AUA, AUA+Q models for hydro/fluorocarbon molecules, tested against $\rho_l, P_{sat}, \gamma$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. C$_2$F$_4$ omitted due to lack of $\gamma$ data.}
\label{fig:hfc_3crit}
\end{figure}
\subsection{Parameter Correlations}
\label{subsection:ParameterCorrelations}
An advantage of this Bayesian inference process is the information about parameter probability distributions obtained from the MCMC process. This information is valuable for examining parameter correlations and understanding parameter sensitivity.
\subsubsection{Correlation of Lennard-Jones parameters}
\label{subsubsection:LJCorrelation}
One of the notable parameter trends in this model, shown in figure \ref{fig:LJ_Q_param_correlation}, is the high degree of correlation between the $\epsilon, \sigma, L$ parameters that represent the Lennard-Jones interactions. A degree of correlation between LJ $\epsilon$ and $\sigma$ has been observed previously~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a,shirtsSolvationFreeEnergies2005}, but the linear/planar nature of the $(\epsilon,\sigma, L)$ probability surface suggests that the 2CLJQ model has a number of degenerate or nearly degenerate parameter sets that do a relatively good job of reproducing experimental data. Since the sampling is based on the surrogate models, the correlation is probably overestimated compared to the full model.
The quadrupole parameter is also correlated with the other parameters, but more weakly. Corner plots for all models and targets are available in the Supporting Information, section 7.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA+Q_corner.pdf}
\caption{\textbf{ Parameter correlations shown in triangle plot of AUA+Q model's 2-d marginal parameter distributions for C$_2$H$_4$, $\rho_l+P_{sat}$ target.}
Plot shows high degree of correlation between $\epsilon,\sigma,L$, and weaker correlation with $Q$.}
\label{fig:LJ_Q_param_correlation}
\end{figure}
Notably, we do not find strong multi-modalities in any of these situations; most have a single maximum of parameter probability. This is probably due to the simplicity of the model and its surrogate models; we do not necessarily expect this to be true for more complex models.
\subsection{Benchmarking}
\label{subsection:Benchmarking}
Benchmarking based on ELPPD results was performed for all compounds with enough experimental data available after prior fitting and Bayes factor calculation. In general, measures of model performance based on Bayes factor and likelihood-only ELPPD benchmarking agree, especially when the evidence in favor of one model is strong. There are some situations where these methods disagree; this is where the parsimonious nature of the Bayesian approach is important.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/c2h4_benchmark.png}
\caption{\textbf{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_4$.} Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points). Data shows AUA+Q model (yellow triangles) improves performance in reproducing $P_{sat}$ data at low temperatures, but Bayes factor evidence supports AUA model (blue circles) due to parsimony.}
\label{fig:C2H4_benchmark}
\end{figure}
This pattern is illustrated in figure \ref{fig:C2H4_benchmark}, for the $\rho_l, P_{sat}$ criteria for C$_2$H$_4$. Although the performance of the AUA/AUA+Q models is very similar in reproducing $\rho_l$ (ELPPD average over test points is 1.78 for AUA vs 1.75 for AUA+Q) and $P_{sat}$ (0.81 for AUA vs 1.02 for AUA+Q), the Bayes factor favors the AUA model due to the complexity penalty that the Bayesian method assesses.
\subsection{Challenges of implementation}
\label{subsection:ImplementationChallenges}
A common challenge of using MCMC to sample points from a probability distribution is ensuring good sampling and mixing within the MCMC chains. In the posteriors sampled here, exhaustive sampling can be achieved with simple MCMC techniques and long chains. In higher-dimension spaces with rougher probability landscapes, directed sampling techniques like Hamiltonian Monte Carlo~\cite{nealMCMCUsingHamiltonian2012} (HMC), Langevin Dynamics~\cite{leimkuhlerRobustEfficientConfigurational2013a} or a No U-turn Sampler (NUTS)~\cite{hoffmanNoUTurnSamplerAdaptively2011} may be required to efficiently sample the space. Running multiple chains from different starting points may also be important in situations with significant multi-modality; advanced sampling techniques to ``bridge the gap'' between disparate basins may also be helpful.
A more practical challenge to MCMC sampling is the implementation of the surrogate models used to sample parameter sets. For most parameterization problems, implementing surrogate models will be much more difficult than for the 2CLJQ model. For more complex models like biomolecular force fields, surrogate models will not be general enough to capture model responses for large classes of molecules or physical properties, so surrogate models will need to be purpose-built for a process like this. This is a challenging task, but should be possible by simulating properties of interest for multiple parameter sets, and then using modeling techniques well suited to sparse data, such as Gaussian process (GP) regression~\cite{liuWhenGaussianProcess2019,oliverKrigingMethodInterpolation1990}. Befort et al.~\cite{befortMachineLearningDirected2021} recently had success using GPs to build physical property surrogate models based on small molecule force fields. These methods could be enhanced with reweighting techniques like MBAR on simulated points to add gradient information to the surrogate models~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a}.
Another implementation challenge is selecting a technique to compute Bayes factors from these MCMC chains. As discussed in section \ref{subsubsection:RJMC}, RJMC, while attractive due to its simplicity and simultaneous sampling, will be difficult to implement in large model spaces, requiring much fine-tuning to facilitate model jumps. The MBAR-based technique we use is more attractive in a large model space, but requires some testing to ensure a good reference model, and probability overlap between the reference model and the posterior.
\section{Conclusions}
\label{section:Conclusions}
In this work, we introduce Bayesian inference with surrogate modeling as a paradigm for making model decisions for non-bonded molecular interactions. By testing this strategy on the 2CLJQ model, we demonstrate that the technique can choose between models in a way that balances model performance with parsimony. We anticipate that this will be useful in force field parameterization, where tradeoffs between model complexity and performance are extremely common. We also assess the role the Bayesian prior has on the choice of model, and offer insight into how these choices may be made in a force field parameterization context.
The model decisions made in the Bayesian inference process vary widely due to the compound and data targets considered, which highlights the variability of the targets. Although this does not yield a blanket recommendation for a level of model complexity, it identifies which compounds and properties require a more complex model. This problem-specific parsimony is of particular interest to the molecular modeling community, because the goal for molecular model selection is choosing the simplest and computationally cheapest model that describes the data of interest with sufficient experimental accuracy.
The most significant challenge of applying this technique to more complex systems will be creating the surrogate models required for this process. For biomolecular force fields, it is possible to build problem-specific Gaussian process surrogate models built from simulation data and MBAR reweighting to explore the parameter space of combinations of Lennard-Jones parameters, though the sampling of parameter space required to construct them is a significant challenge. This technique is also applicable to other common dispersion-repulsion model selection problems such as LJ typing schemes, as well as LJ combination rules. It is also potentially useful for choices between fixed-charge electrostatic models, and could help identify where virtual sites or polarizable sites could be most useful. Overall, this technique appears to provide a systematic method of evaluating models based on performance and parsimony applied to molecular simulations.
\section{Data and Software Availability}
Datasets and code used to generate the results shown in this study are available from \url{https://github.com/SimonBoothroyd/bayesiantesting/tree/combined_run}.
\section{Author Contributions}
Contributions based on CRediT taxonomy:
\noindent O.C.M.: Conceptualization, data curation, formal analysis, investigation, methodology, software, visualization, original draft, review and editing \\
S.B.: Methodology, software, investigation, visualization, review and editing \\
R.A.M.: Conceptualization, data curation, resources, software, methodology, review and editing \\
J.F.: Methodology, review and editing \\
J.D.C.: Resources, funding acquisition, review and editing \\
M.R.S.: Conceptualization, methodology, funding acquisition, resources, project administration, supervision, review and editing. \\
\section{Acknowledgements and Funding}
We thank the Open Force Field Consortium for funding, including our industry partners as listed at the Open Force Field website, and the Molecular Sciences Software Institute (MolSSI) for its support of the Open Force Field Initiative. We gratefully acknowledge all current and former members of the Open Force Field Initiative and the Open Force Field Scientific Advisory Board. Research reported in this publication was in part supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number R01GM132386, specifically partial support of OCM, MRS, and JDC. OCM, MRS, JDC, and JF acknowledge support from NSF CHE-1738975 for parts of the project. These findings are solely those of the authors and do not necessarily represent the official views of the NIH or NSF.
\section{Disclosures}
The Chodera laboratory receives or has received funding from multiple sources, including the National Institutes of Health, the National Science Foundation, the Parker Institute for Cancer Immunotherapy, Relay Therapeutics, Entasis Therapeutics, Silicon Therapeutics, EMD Serono (Merck KGaA), AstraZeneca, Vir Biotechnology, XtalPi, Foresite Labs, the Molecular Sciences Software Institute, the Starr Cancer Consortium, the Open Force Field Consortium, Cycle for Survival, a Louis V. Gerstner Young Investigator Award, and the Sloan Kettering Institute. A complete funding history for the Chodera lab can be found at \url{http://choderalab.org/funding}.
JDC is a current member of the Scientific Advisory Boards of OpenEye Scientific Software, Interline, and Redesign Science, and holds equity interests in Interline and Redesign Science.
MRS is an Open Science Fellow for Silicon Therapeutics.
SB is a director of Boothroyd Scientific Consulting Ltd.
\section{Introduction}
\label{section:Introduction}
Parameterization of molecular force fields is a long-standing problem for the classical molecular simulation community~\cite{dauber-osguthorpeBiomolecularForceFields2019,haglerForceFieldDevelopment2019,rinikerFixedChargeAtomisticForce2018}, as the choice of models and their respective parameters is critical for achieving quantitative accuracy in modeling of molecular interactions. The treatment of electrostatic and dispersion-repulsion interactions, commonly referred to as \emph{non-bonded interactions}, is a difficult parameterization task because these interactions are described by relatively simple models that are crude approximations of the underlying electronic behavior, requiring mapping electronic probability distributions onto pairwise interactions between a small number of points. Dispersion-repulsion interactions, commonly modeled with a Lennard-Jones (LJ) potential, are especially challenging to derive~\cite{monticelliBiomolecularSimulationsMethods2013}, as values of the Lennard-Jones parameters are not directly extractable from quantum mechanics (QM) calculations.
Most force fields train these parameters against macroscopic condensed phase physical properties, though there are some attempts to obtain them with reference to QM calculations~\cite{kantonenDataDrivenMappingGasPhase2020,tkatchenkoAccurateMolecularVan2009}.
Although most commonly employed classical force fields use electrostatic point charges and pairwise LJ potentials~\cite{wangDevelopmentTestingGeneral2004a,banksIntegratedModelingProgram2005}, other models for these interactions exist. Atomic multipoles~\cite{ponderCurrentStatusAMOEBA2010,jiaoCalculationProteinLigand2008}, Drude oscillators~\cite{lemkulPolarizableForceField2017}, and fluctuating charges~\cite{patelCHARMMFluctuatingCharge2004} for electrostatic interactions and alternate functional forms~\cite{dauber-osguthorpeBiomolecularForceFields2019,messerlyMie16Force2019,chiuCoarseGrainedModelBased2010,halgrenMerckMolecularForce1996} for dispersion-repulsion interactions have been proposed over the years. Even considering only LJ potentials, many different choices for atom typing~\cite{schauperlDatadrivenAnalysisNumber2020,boulangerOptimizedLennardJonesParameters2018a} and combination rules~\cite{halgrenRepresentationVanWaals1992a,waldmanNewCombiningRules1993} exist.
Choosing a potential energy function with a more complex functional form can improve agreement with experiment, however, increasing the complexity may also make the parameterization more difficult due to an increased likelihood of overfitting~\cite{harrisonReviewForceFields2018,frohlkingEmpiricalForceFields2020}.
This creates two distinct problems in designing force fields: (1) selecting among discrete choices of models and (2) optimizing continuous choice of parameters within models. It is common, although often expensive, to compare the quantitative performance of different parameter sets within a single model, but it is difficult to quantitatively compare fitness between models.
A common statistical formalism for comparing the overall fitness of discrete models is Bayesian inference~\cite{vontoussaintBayesianInferencePhysics2011a}, which allows users to evaluate models in a way that automatically penalizes unnecessary complexity~\cite{mackayInformationTheoryInferencea}.
By integrating over the marginal distribution of the parameters one can calculate \emph{Bayes factors} which are interpreted as odds ratios between the separate models~\cite{kassBayesFactors1995}.
The Bayesian framework combines specific information on a model's ability to reproduce target data, with more general prior knowledge based on data collected with previous parameter sets, physical constraints or invariances, or chemical intuition. In this way, the prior distribution generalizes and influences the comparison between models. In any parameterization process, the influence of the prior knowledge about the system is always present; using a Bayesian approach makes the influence of any prior information on the parameters explicit. Bayesian inference has previously been used to incorporate uncertainty quantification~\cite{duttaBayesianCalibrationForcefields2018,angelikopoulosBayesianUncertaintyQuantification2012,farrellBayesianFrameworkAdaptive2015,wus.HierarchicalBayesianFramework2016} and model selection~\cite{bacalladoBayesianComparisonMarkov2009,farrellBayesianFrameworkAdaptive2015} into molecular models.
To compute Bayes factors, samples must be drawn from the model posterior distributions, which involves comparing model outputs to macroscopic observables. Typically this is done through an MCMC sampling scheme in the parameter space, since posteriors are generally non-analytical.
Once sufficient posterior samples are obtained, a range of statistical techniques can be used to estimate the Bayes factors and make quantitative judgements on the relative fitness of models.
One established MCMC technique for cross-model sampling is reversible jump Monte Carlo (RJMC)~\cite{greenReversibleJumpMarkov1995a}, which simultaneously samples over several models and their respective parameter spaces. This technique can be used to compare model posterior parameter distributions and directly compare the support for each model.
Bayes factors can also be computed with \emph{bridge sampling} methods which compute the probability ratio between a simple analytical reference distribution and the posterior distribution in order to calculate model evidences. This method can be enhanced by using intermediate probability distributions to bridge the gap between the reference distribution and the posterior, increasing the overlap between distributions being compared and thus improving the accuracy of the calculations~\cite{nealAnnealedImportanceSampling1998}.
These statistical approaches for calculating Bayes factors have close analogues in statistical physics techniques used to calculate free energy differences in molecular simulations. Specifically, Bayes factors are analogous to the ratio of partition functions of two models, but integrated over the \emph{parameter space} of each model, rather than over atomic coordinates. Bridge sampling techniques are analogous to free energy perturbation and reweighting methods~\cite{mengSIMULATINGRATIOSNORMALIZING1996,gelmanSimulatingNormalizingConstants1998}, and RJMC is analogous to expanded ensemble techniques or Hamiltonian replica exchange techniques~\cite{sugitaReplicaexchangeMulticanonicalAlgorithm2000}, where the multiple ensembles are different models rather than different potential functions or temperatures.
Estimating physical observables with molecular simulations is often computationally expensive, especially if some molecular degrees of freedom are slow. To perform MCMC sampling in parameter space, observables must be calculated many times.
To avoid this cost and efficiently sample the posterior distribution, surrogate models~\cite{sidkyMolecularLatentSpace2020,kadupitiyaMachineLearningSurrogates2020} can be used to cheaply approximate the response surface of the observables with respect to the force field parameters.
To demonstrate the Bayesian model comparison strategy for atomistic models, we have selected a problem for which a surrogate model is already well-defined. The two-center Lennard-Jones + quadrupole (2CLJQ) model~\cite{vrabecSetMolecularModels2001} is a simple 4-parameter fluid model that serves as a useful test bed for this problem. Analytical surrogate models exist for predicting saturated densities ($\rho_l$), saturated vapor pressure ($P_{sat}$), and surface tension ($\gamma$) for a number of small nonpolar fluids~\cite{stollComprehensiveStudyVapourliquid2009a,werthSurfaceTensionTwo2015a}. A previous parameterization study~\cite{stobenerParametrizationTwocenterLennardJones2016} of the model demonstrated the potential to optimize the 2CLJQ model using surrogate models. In this study we use quantitative evidence from Bayes factors to examine whether including an adjustable bond length and/or quadrupole parameter is justified in this model.
\section{Methods}
\label{section:Methods}
\subsection{Molecular Model}
\label{subsection:Molecular_Model}
We test our Bayesian model selection strategy on the 2CLJQ fluid model~\cite{vrabecSetMolecularModels2001}, a simple 2-site fluid model that fits diatomic and other similar molecules~\cite{stollSetMolecularModels2003,mollerDeterminationEffectiveIntermolecular1994,cheungPropertiesLiquidNitrogen1976,murthyInteractionSiteModels1981}. In particular we consider this model for a range of compounds, 4 diatomic ($\mathrm{O_2, N_2, Br_2, F_2}$) and 4 ``diatomic-like'' hydro/fluoro carbon compounds \\ ($\mathrm{C_2H_2}$, $\mathrm{C_2H_4}$, $\mathrm{C_2H_6}$, $\mathrm{C_2F_4}$), which are both well parameterized by the 2CLJQ model~\cite{vrabecSetMolecularModels2001}. Interactions within the 2CLJQ model are controlled by four parameters, as illustrated in Figure \ref{fig:2cljq_model}: a Lennard-Jones $\sigma$ (nm) and $\epsilon$ (K), a constant bond length $L$ (nm) between the two LJ sites, and a quadrupole interaction strength parameter $Q(\mathrm{D} \cdot \mathrm{nm})$. The LJ $\sigma$ represents the distance at which the potential energy between two particles is equal to zero; the LJ $\epsilon$ represents the depth of the potential well and the strength of the attraction between two particles. The functional form of the molecular model is similar to the Lennard-Jones potential, but adapted for a 2-center model and with a quadrupole interaction at the model's geometric center~\cite{vrabecSetMolecularModels2001}:
\begin{equation}
u_{2CLJQ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L, Q) = u_{2CLJ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L) + u_{Q}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, Q)
\end{equation}
\begin{equation}
u_{2CLJ}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, L) = \sum_{a=1}^2 \sum_{b=1}^2 4\epsilon \left[\left(\frac{\sigma}{r_{ab}}\right)^{12} - \left(\frac{\sigma}{r_{ab}}\right)^{6} \right]
\end{equation}
\begin{equation}
u_{Q}(\mathbf{r}_{ij}, \mathbf{\omega}_{i}, \mathbf{\omega}_{j}, Q) = \frac{3}{4}\frac{Q^2}{|\mathbf{r}_{ij}|^5}f(\mathbf{\omega}_{i}, \mathbf{\omega}_{j})
\end{equation}
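For illustration, a minimal sketch of the dispersion-repulsion part of this energy (the four LJ site-site terms) is given below; the site coordinates are assumed to be supplied directly, and the quadrupole term, which requires the orientation-dependent factor $f(\mathbf{\omega}_{i}, \mathbf{\omega}_{j})$, is omitted:
\begin{verbatim}
# Sketch: the 2CLJ (dispersion-repulsion) part of the 2CLJQ pair energy.
# `sites_i` and `sites_j` are (2, 3) arrays holding the two LJ site positions
# of each molecule; the quadrupole term is not included here.
import numpy as np

def u_2clj(sites_i, sites_j, sigma, epsilon):
    u = 0.0
    for a in range(2):
        for b in range(2):
            r = np.linalg.norm(sites_i[a] - sites_j[b])
            sr6 = (sigma / r) ** 6
            u += 4.0 * epsilon * (sr6 ** 2 - sr6)   # LJ 12-6 site-site term
    return u
\end{verbatim}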
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{figures/2cljq_model_figure.png}
\caption{\textbf{The 2CLJQ model consists of two LJ sites parameterized by $\sigma$ and $\epsilon$, a bond length $L$, and a quadrupole interaction strength parameter $Q$.}
In this example, an ethane ($\mathrm{C_2H_6}$) molecule is parameterized as two identical sites, each consisting of a carbon atom and three hydrogens.}
\label{fig:2cljq_model}
\end{figure}
\subsubsection{Defining models of varying complexity}
To investigate which parameters are justified to reproduce the physical properties of interest, we split the model into three levels of complexity, as shown in Table \ref{tbl:Model Definitions}: united atom (UA), anisotropic united atom (AUA), and anisotropic united atom + quadrupole (AUA+Q).
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|}
\hline
Model & $\sigma$ (nm) & $\epsilon $ (K) & $L$ (nm) & $Q$ ($\mathrm{D}\cdot \mathrm{nm}$) \\
\hline
UA & Variable & Variable & Fixed & Fixed (at 0)\\
AUA & Variable & Variable & Variable & Fixed (at 0)\\
AUA+Q & Variable & Variable & Variable & Variable \\
\hline
\end{tabular}
\caption{{\bf Different 2CLJQ models considered in this study.}
While all models share the same functional form, some models have parameters/interactions fixed, while others let them vary.
{\bf UA}: united atom; {\bf AUA}: anisotropic united atom; {\bf AUA+Q}: anisotropic united atom + quadrupole.
}
\label{tbl:Model Definitions}
\end{table}
The models here are nested---for example, the AUA model is the same as the AUA+Q model, but with the quadrupole moment permanently set to zero. The UA model is a subset of the AUA model, but with a fixed bond length chosen from literature (typically taking an ``experimental'' value)~\cite{johnsonrusselld.NISTComputationalChemistry2018}. This construction allows us to test whether the variable quadrupole and bond length parameters are useful in reproducing the chosen physical properties.
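One possible in-code representation of this nested family (illustrative only; it is not the data structure used in the accompanying repository) simply records which parameters are variable and which are pinned:
\begin{verbatim}
# Sketch: the nested 2CLJQ model family from the table above. "L0" stands for
# a fixed literature bond length; Q is pinned to zero where it is not sampled.
MODELS = {
    "UA":    {"variable": ("sigma", "epsilon"),
              "fixed": {"L": "L0", "Q": 0.0}},
    "AUA":   {"variable": ("sigma", "epsilon", "L"),
              "fixed": {"Q": 0.0}},
    "AUA+Q": {"variable": ("sigma", "epsilon", "L", "Q"),
              "fixed": {}},
}
\end{verbatim}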
\subsubsection{Surrogate Models for 2CLJQ output}
Analytical surrogate models, which predict molecular properties as a function of molecular parameters, were developed for saturated liquid density ($\rho_l$) and saturated vapor pressure ($P_{sat}$) by Stoll~\cite{stollComprehensiveStudyVapourliquid2009a}; similar surrogate models for surface tension ($\gamma$) were developed by Werth~\cite{werthSurfaceTensionTwo2015a} by fitting to 2CLJQ simulation results at a variety of parameter and temperature conditions.
St\"{o}bener~\cite{stobenerParametrizationTwocenterLennardJones2016} previously optimized parameters for the 2CLJQ model by using surrogate models to drive fast optimization and a Pareto optimization approach to choose parameter sets. Parameter set fitness in this study was based on comparison to data from the Design Institute for Physical Properties (DIPPR) correlations; here we train against NIST experimental data for similar properties instead. In this work, we expand the idea of extensive parameter sampling through surrogate models to include comparisons between disparate models.
To apply this technique to an arbitrary force field, one will usually need to construct such surrogate models for different properties. Common techniques to build these surrogate models might include reweighting~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a}, Gaussian processes~\cite{befortMachineLearningDirected2021}, and machine learning methods~\cite{kadupitiyaMachineLearningSurrogates2020}.
Since the methods needed to construct such a model depend substantially on the property and the parameters of interest, that question is beyond the scope of this study; we focus only on applying Bayesian inference given a surrogate model.
\subsection{Bayesian inference}
\label{subsection:Bayesian_Inference}
Bayesian inference characterizes the fitness of models and parameters by combining specific information about a set of experiments with more general prior information. The core of Bayesian inference is \emph{Bayes' rule}:
\begin{equation}
\underbrace{P(\theta | D, M)}_{\mathrm{Posterior}} \propto \underbrace{P(D| \theta, M)}_{\mathrm{Likelihood}} \underbrace{P(\theta | M)}_{\mathrm{Prior}}
\end{equation}
where $\theta$ represents a set of parameters belonging to a model $M$, and $D$ represents a dataset (which can be compared to experimental data) generated by the model $M$ and parameter set $\theta$.
Bayes factors, computed as the ratio of the model marginal likelihood ($P(D|M)$) between separate models, facilitate comparisons between those models.
\begin{equation}
B_{1/2} = \frac{P(D|M_1)}{P(D|M_2)} = \frac{\int_{\theta_1}P(D|\theta_1,M_1)P(\theta_1|M_1)\mathrm{d}\theta_1}{\int_{\theta_2}P(D|\theta_2,M_2)P(\theta_2|M_2)\mathrm{d}\theta_2} = \frac{P(M_1|D)}{P(M_2|D)}\frac{P(M_2)}{P(M_1)}
\end{equation}
In order to compute the model evidence $P(D|M)$, one must integrate the parameter posterior over all parameter space in the model (as shown in eq. 2). For most molecular systems this Bayes factor integral cannot be calculated analytically and must be estimated.
Common interpretations for levels of significance of Bayes Factor evidence were suggested by Kass and Raftery~\cite{kassBayesFactors1995} and are listed, with labels renamed for clarity, in Table \ref{tab:bayes-evidence}.
\begin{table}[h]
\caption{\label{tab:bayes-evidence}
{\bf The interpretation of Bayes factors for model 0 over model 1, $B_{01}$, due to Kass and Raftery~\cite{kassBayesFactors1995}.}
Bayes factors provide a quantitative measure of the \emph{model evidence} for choosing one model over another. }
\centering
\begin{tabular}[t]{c|c|c}
$\ln{\left(B_{01} \right)}$ & $B_{01}$ & Evidence in favor of model 0\\
\hline
0 - 1 & 1 - 3 & Inconclusive \\
1-3 & 3 - 20 & Significant \\
3-5 & 20 - 150 & Strong \\
$>$ 5 &$>$ 150 & Very Strong
\end{tabular}
\label{tbl:Bayes factors}
\end{table}
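A small helper reflecting this interpretation scale (purely illustrative) can be convenient when scanning many computed $\ln B_{01}$ values:
\begin{verbatim}
# Sketch: map ln(B01) onto the evidence categories of the table above.
def interpret_log_bayes_factor(ln_b01):
    if ln_b01 < 1.0:
        return "Inconclusive"
    if ln_b01 < 3.0:
        return "Significant"
    if ln_b01 < 5.0:
        return "Strong"
    return "Very Strong"
\end{verbatim}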
The advantage of using Bayesian inference to choose between models is that it captures both model performance and model parsimony, striking a balance between accuracy and generality.
More precisely, more complex models are penalized because their additional complexity allows them to make a wider range of predictions. Unless this additional complexity makes the predictions uniformly more accurate, many of these predictions will be extraneous to the properties of interest. The lower the proportion of useful predictions to total predictions, the more a model will be penalized.
\subsection{Construction of Posterior}
Since model evidence is based on the model posterior, one must consider the choice of the prior distributions and likelihood function that form that posterior carefully.
\subsubsection{Priors enable model parsimony}
The use of Bayesian priors allows us to consider model parsimony in the evaluation of models. The prior distribution defines a multi-dimensional probability landscape in parameter space containing all parameter values to be considered, weighted by our prior estimate of which values are most likely. As the model's complexity grows, so does the volume of this parameter space. Since priors are normalized probability distributions and must integrate to one, growing the parameter space volume \emph{must} reduce the probability of any specific parameter set.
It is also important to emphasize that Bayesian model selection chooses the best model \emph{given the parameter space}. If parameter values that produce good models are excluded from this space, or assigned very low probability, those values will not factor into the calculated Bayes factors.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/complexity_penalty.png}
\caption{\textbf{ Demonstration of how increased model complexity incurs a penalty.} In the left panel, with the model only having the set $\theta_1$ as variable parameters, a higher proportion of parameter sets are high probability. When the additional parameter $\theta_2$ is added, even though more high probability parameter sets are available, they represent a lower proportion of the total parameter space available to the model, which incurs a complexity penalty in the Bayesian context.}
\label{fig:qual_prior}
\end{figure}
In this way, demonstrated in figure \ref{fig:qual_prior}, the prior distribution creates a complexity penalty that the model must overcome with increased predictive power in order to justify its increased complexity. Setting a prior is crucial because it encodes this penalty, which depends on the amount of initial information about the parameter.
\subsubsection{Choosing priors}
\label{subsubsection:Choosing_priors}
In order to avoid excluding useful parameter space, or including too much extraneous parameter space, we form the prior distribution using a training sample method. In this method, data is split into a ``training sample'', used to inform the prior, and a ``test sample'' used to calculate the Bayes factor. This method of including partial information in training samples is a relatively simple method for calculating priors that are effective for each model and is established in the statistical literature~\cite{bergerIntrinsicBayesFactor1996,bergerRobustBayesianAnalysis1990}.
In this case, we start from a wide, non-negative uniform distribution, and simulate a posterior distribution (using a likelihood calculated as in section 2.3.4) with training samples of varying amounts of experimental data. We then fit a simple analytical distribution (Gaussian for $\epsilon,\sigma,L$; exponential or gamma for $Q$) to this \emph{training sample posterior}. This analytical distribution becomes the prior for calculation of Bayes factors under a separate set of experimental data points. We set the priors using 3 levels of information in the training sample (low, medium, high; 3, 5, 8 data points per property respectively). These data points are selected for each property in the target criteria and are distributed across the available temperature range, so they are not excluding information based on temperature or property.
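A minimal sketch of this fitting step with SciPy, assuming 1-D arrays of training-sample posterior draws for each parameter (the dictionary layout is hypothetical):
\begin{verbatim}
# Sketch: fit analytical priors to training-sample posterior draws.
# `samples` is a hypothetical dict of 1-D arrays keyed by parameter name.
from scipy import stats

def fit_priors(samples, quadrupole_near_zero=True):
    priors = {name: stats.norm(*stats.norm.fit(samples[name]))
              for name in ("sigma", "epsilon", "L")}
    if quadrupole_near_zero:
        loc, scale = stats.expon.fit(samples["Q"], floc=0.0)
        priors["Q"] = stats.expon(loc, scale)
    else:
        a, loc, scale = stats.gamma.fit(samples["Q"], floc=0.0)
        priors["Q"] = stats.gamma(a, loc, scale)
    return priors
\end{verbatim}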
\subsubsection{Evaluating the likelihood}
\label{subsubsection:Likelihood_Evaluations}
Likelihood evaluation requires a probabilistic model of observations in terms of model parameters (referred to in the statistical literature as a ``forward model''), as well as an error model for comparing those observations with experimental ones. In this case the forward model is composed of the surrogate models for the physical properties ($\rho_l$, $P_{sat}$, $\gamma$) as a function of the model parameters ($\sigma, \epsilon, L, Q$), using the functional forms defined in Stoll et al.~\cite{stollComprehensiveStudyVapourliquid2009a} (density, saturation pressure) and Werth et al.~\cite{werthSurfaceTensionTwo2015a} (surface tension). More details of the surrogate models are available in Supporting Information, section 1.1.
Experimental measurements for the diatomic ($\mathrm{O_2,N_2,Br_2,F_2}$) and hydro/fluorocarbon compounds ($\mathrm{C_2H_2}$, $\mathrm{C_2H_4}$, $\mathrm{C_2H_6}$, $\mathrm{C_2F_4}$) for saturated liquid densities, saturation pressure, and surface tension between 55\% and 95\% of critical temperature (some training sets have slightly different temperature ranges due to data availability, full information available in Supporting Information section 1.3) are taken from the NIST ThermoData Engine database~\cite{frenkelThermoDataEngineTDE2005a}, with uncertainties assigned as a typical class uncertainty for each compound and property. The use of class uncertainties does not heavily impact the uncertainty as experimental uncertainties are much smaller than correlation uncertainties.
These surrogate models are used to calculate the likelihood $P(D|\Theta)$ by estimating each physical property at the temperatures corresponding to the experimental physical property data points and comparing those values to the experimental values with an error model.
\subsubsection{Error Model}
\label{subsubsection:Error_Model}
We use a Gaussian error model to compare properties calculated from the surrogate models to the corresponding experimental properties.
This error model is chosen based on the assumption that the experimental value $x$ is the ``true'' value and that the surrogate model produces a measurement of that value, $\hat{x}$, with some error. We assume this error is Gaussian distributed, so the probability of observing a measurement $\hat{x}$ is given by:
\begin{equation}
P(\hat{x} | x) = \frac{1}{u_{tot}\sqrt{2\pi}}\exp\left( -\frac{1}{2}\left(\frac{x-\hat{x}}{u_{tot}} \right)^2 \right)
\end{equation}
where the total uncertainty $u_{tot}$ combines the average experimental uncertainty $u_{exp}$ and the surrogate model error $u_{surr}$ in quadrature.
\begin{equation}
u_{tot}^2 = u_{exp}^2 + u_{surr}^2
\end{equation}
The surrogate models are analytical forms fitted to several chemical species at a range of thermodynamic conditions, so there is systematic uncertainty when they are applied to any specific compound. While the creators of the surrogate models used in this study did not provide analytical correlations for the uncertainty, they provided uncertainty estimates at different temperature and parameter values. We used these estimates to assign model uncertainties $u_{surr}$ in our process. This uncertainty model is piecewise, depending on the temperature regime (defined as a fraction of the critical temperature) of the system. Details of this error model are listed in the Supporting Information section 1.2.
The full forms of the likelihood function (eq. 10) and prior distribution (eq. 11) are as follows:
\begin{equation}
P(D | \Theta, M) =\left( \prod_{k=1}^3 \left( \prod_{i=1}^{n} \frac{1}{u_{tot}\sqrt{2\pi}} \exp \left( -\frac{1}{2}\left( \frac{\Vec{D_i}_k-\mathbf{f(\theta, T)}}{u_{tot}}\right)^2\right) \right)\right).
\end{equation}
The likelihood function evaluates agreement with experiment over a vector of $n$ temperature data points for each of the $k$ properties. $\mathbf{f(\theta, T)}$ is the output of the analytical surrogate model for the $k^{th}$ physical property at the temperatures corresponding to the data vector $\Vec{D_i}_k$. It is important to note that for the saturation pressure, the data vector $\Vec{D_i}_{P_{sat}}$ is $\ln{P_{sat}}$ instead of $P_{sat}$ due to the wide range of values at different temperatures. The prior function is defined as:
\begin{equation}
P(\Theta) = \left( \prod_{j=1}^3\frac{1}{\sigma_j \sqrt{2\pi}} \exp \left( -\frac{1}{2}\left( \frac{\theta_j - \mu_j}{\sigma_j}\right)^2\right) \right) \left( \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta_4^{\alpha-1}e^{-\beta \theta_4} \right).
\end{equation}
The first term of the total prior is the prior for ($\sigma=\theta_1, \epsilon=\theta_2, L=\theta_3)$, and the second term is the prior for $Q=\theta_4$, either an exponential ($\alpha=1, \beta$ determined by prior fitting) or gamma prior ($\alpha, \beta$ determined by prior fitting). Distributions are fit to samples using the SciPy \texttt{distributions} module and the \texttt{fit} functions. For compounds with quadrupole distributions centered at values larger than zero, gamma priors are chosen; for compounds with quadrupole support near zero, exponential distributions are chosen. For the UA and AUA models, the prior terms for $L,Q$ (UA) or $Q$ (AUA) are set to 1, as the values are fixed.
Combined, they form the posterior distribution as described in eq. 1.
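A minimal sketch of the resulting unnormalized log-posterior for the AUA+Q model is given below; \texttt{surrogate}, \texttt{u\_surr}, \texttt{priors} and the data layout are placeholders rather than the code used in this work:
\begin{verbatim}
# Sketch: unnormalized log-posterior for the AUA+Q model following the
# likelihood and prior above. `surrogate(prop, T, ...)` stands in for the
# Stoll/Werth correlations, `data` maps each property to (T, values, u_exp),
# and `u_surr(prop, T)` returns the surrogate-model uncertainty.
import numpy as np
from scipy import stats

def log_posterior(theta, data, surrogate, u_surr, priors):
    sigma, epsilon, L, Q = theta
    log_p = 0.0
    for prop, (T, values, u_exp) in data.items():
        pred = surrogate(prop, T, sigma, epsilon, L, Q)
        if prop == "Psat":                     # compare ln(Psat), not Psat
            pred, values = np.log(pred), np.log(values)
        u_tot = np.sqrt(u_exp ** 2 + u_surr(prop, T) ** 2)
        log_p += np.sum(stats.norm.logpdf(values, loc=pred, scale=u_tot))
    for name, value in zip(("sigma", "epsilon", "L", "Q"), theta):
        log_p += priors[name].logpdf(value)    # Gaussian / exponential / gamma
    return log_p
\end{verbatim}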
\subsection{Sampling of Posteriors}
\label{subsection:Sampling}
For any method of calculating Bayes factors, obtaining samples from the parameter posterior distribution is essential. To draw samples from complex distributions without simple closed forms, Markov chain Monte Carlo (MCMC) methods are used~\cite{chipmanPracticalImplementationBayesian2001}.
\subsubsection{MCMC parameter proposals}
\label{subsubsection:MCMC_param_proposals}
Within a particular model, we propose moves using component-wise Metropolis-Hastings Monte Carlo~\cite{johnsonComponentWiseMarkovChain2013}, where at each step an individual variable parameter is chosen with uniform probability and perturbed. This perturbation is done by proposing a new value of the parameter from a normal distribution centered at the current value, with any proposed negative values rejected. The standard deviation of this distribution is initially set to 1/100 of the initial value of the variable (or a minimum of 0.001 if the initial value is 0) and then tuned to obtain an acceptance ratio between 20--50\% during a ``burn-in''/tuning simulation run for 1/5th the length of the production simulation.
After the burn-in period, the simulation proceeds with fixed proposal distributions, as is required to satisfy detailed balance. Tuning steps are discarded when computing model evidences.
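A minimal sketch of a single component-wise Metropolis-Hastings step with these proposal rules (function and variable names are illustrative):
\begin{verbatim}
# Sketch: one component-wise Metropolis-Hastings step. `theta` is the current
# parameter vector, `proposal_sd` the per-parameter proposal widths, and
# `rng` a numpy.random.Generator.
import numpy as np

def mh_step(theta, log_posterior, proposal_sd, rng):
    i = rng.integers(len(theta))                  # pick one variable parameter
    proposal = theta.copy()
    proposal[i] += rng.normal(0.0, proposal_sd[i])
    if proposal[i] < 0.0:                         # reject negative proposals
        return theta, False
    log_alpha = log_posterior(proposal) - log_posterior(theta)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return theta, False
\end{verbatim}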
\subsubsection{Reversible jump Monte Carlo}
\label{subsubsection:RJMC}
We next turn to the more complicated question of choosing moves between models. The reversible jump Monte Carlo (RJMC) technique~\cite{greenReversibleJumpMarkov1995a} can be employed to sample between multiple models with different numbers of parameters.
RJMC is an extension of traditional MCMC sampling from parameter space to combined parameter \emph{and} model space. It allows for the simultaneous sampling of parameter probability distributions from several models, as well as the discrete model probability distribution of the collection of models. RJMC is a powerful tool to simultaneously sample multiple models, but requires careful construction in cases with dissimilar models. Good chain mixing requires (1) the construction of high-quality mappings between the parameters in each model and (2) efficient ways of proposing new values when the number of parameters between models is uneven~\cite{karagiannisAnnealedImportanceSampling2013b,brooksClassicalModelSelection2003,brooksEfficientConstructionReversible2003,hastieModelChoiceUsing2012}.
For the implementation presented in this paper, proposals for variables not present in the current model, such as a quadrupole moment parameter when proposing a move from a model without a quadrupole (UA, AUA) to the model with a quadrupole (AUA+Q), are drawn independently from a Gaussian ``proposal distribution'' that approximates the 1-D posterior distribution of that parameter in the destination model. These distributions are obtained by fitting Gaussians to short simulations of the model posterior in order to facilitate acceptance of inter-model moves.
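As a sketch of how such an independent proposal enters the acceptance criterion, consider a jump from AUA to AUA+Q in which the shared parameters are carried over unchanged (unit Jacobian), the new $Q$ is drawn from a fitted Gaussian proposal, and the up/down model jumps are proposed with equal probability; the function names are placeholders:
\begin{verbatim}
# Sketch: acceptance log-ratio for an AUA -> AUA+Q birth move with an
# independent Gaussian proposal for Q (unit Jacobian, symmetric model-jump
# probabilities). `log_post_aua` and `log_post_auaq` are the two models'
# unnormalized log-posteriors.
from scipy import stats

def log_accept_up(theta_shared, log_post_aua, log_post_auaq, q_prop):
    Q_new = q_prop.rvs()
    log_alpha = (log_post_auaq(theta_shared, Q_new)
                 - log_post_aua(theta_shared)
                 - q_prop.logpdf(Q_new))          # subtract the proposal density
    return Q_new, log_alpha

# q_prop = stats.norm(loc=Q_fit_mean, scale=Q_fit_sd), fitted to a short
# AUA+Q posterior simulation as described above.
\end{verbatim}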
\subsubsection{Parameter mappings}
\label{subsubsection:Parameter Mappings}
To efficiently propose inter-model moves, one must define a \emph{parameter mapping function} that transforms a point in one parameter space to a point in a different parameter space (e.g. $\epsilon$ in the AUA model $\longrightarrow$ $\epsilon$ in the AUA+Q model). These mapping functions may range from a simple direct mapping to a highly complex one, depending on the situation. The form of the mapping function is often an important factor in model chain mixing and convergence; as long as there is \emph{some} probability overlap, sampling between models is theoretically possible, but may be impossibly inefficient. We explored several simple strategies for mapping common parameters between models, described below and illustrated in figure \ref{fig:distribution_mapping}; other approaches, such as non-equilibrium candidate Monte Carlo~\cite{nilmeierNonequilibriumCandidateMonte2011b}, also exist.
\begin{itemize}
\item \textbf{Direct mapping:} In the simplest cases, parameters can be taken directly from one model and plugged into the other model. This is only useful in the case where the distributions of the parameter in the two models overlap significantly.
\item \textbf{Maximum a posteriori (MAP) mapping:} Another possibility is to run a short MCMC simulation of each model, and then map the highest probability (MAP) values of each common parameter to each other, either by translation (additive mapping) or rescaling (multiplicative mapping). This technique can improve overlap but will ultimately fail when the shapes of the distributions differ significantly, where only parameter sets near the MAP value will have good overlap.
\item \textbf{Gaussian affine mapping:} One can also run short MCMC simulations of each model, fit Gaussian distributions to each model posterior distribution, and then map these distributions on top of each other using an affine map that transforms the shape of one distribution to the shape of the other. This has an advantage of capturing differently shaped posterior distributions, allowing for more consistent quality mapping, especially away from the posterior maximum, but can still fail if the distributions are multi-modal (or more precisely, differently modal).
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/mapping_strategies.png}
\caption{\textbf{Mappings between distributions with poor overlap, visualized.} In the first panel, there is enough overlap between the distributions to map without any transformation. The second panel shows maximum a posteriori mapping, which improves overlap but does not match shape. The third panel shows Gaussian affine mapping, which transforms the location and matches the shape.}
\label{fig:distribution_mapping}
\end{figure}
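As a concrete illustration of the Gaussian affine mapping described above, the following sketch (written against NumPy, with illustrative function names; not the exact implementation in the repository) maps a point drawn under a multivariate Gaussian fit of one model's posterior onto the Gaussian fit of another model's posterior, returning the log-Jacobian of the map because it enters the reversible jump acceptance probability:
\begin{verbatim}
import numpy as np

def gaussian_affine_map(x, mu_a, cov_a, mu_b, cov_b):
    """Affine map sending a point x distributed (approximately) as
    N(mu_a, cov_a) onto N(mu_b, cov_b), together with the log-Jacobian
    of the transformation, which enters the RJMC acceptance ratio."""
    L_a = np.linalg.cholesky(cov_a)
    L_b = np.linalg.cholesky(cov_b)
    x_mapped = mu_b + L_b @ np.linalg.solve(L_a, x - mu_a)
    log_jacobian = np.sum(np.log(np.diag(L_b))) - np.sum(np.log(np.diag(L_a)))
    return x_mapped, log_jacobian
\end{verbatim}
In the 1-D case this reduces to a shift and rescale, $x' = \mu_b + (\sigma_b/\sigma_a)(x - \mu_a)$; the direct and MAP mappings can be viewed as special cases with unit (or purely translational) transformations.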
\subsubsection{Biasing Factors}
\label{subsubsection:BiasingFactors}
Even with a perfect mapping between the shapes of the distributions of the two models, inter-model moves will be rejected at a high rate if one model has much stronger unnormalized evidence than the other. In this case a biasing factor may be used to equalize the magnitude of the evidences. After achieving improved sampling with the biased probabilities, one can easily unbias the results if the bias is constant across the model. The biasing factors that result in even sampling of the models are the Bayes factors between them, similar to how the weights leading to equal-probability sampling in expanded ensemble or simulated tempering simulations are the free energies of each state. However, even approximate biasing factors can enable sampling such that model swaps are feasible and Bayes factors may be estimated.
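A minimal sketch of a biased inter-model acceptance test, written in the usual Metropolis--Hastings form (the function and argument names are illustrative, not the implementation used in this work), is:
\begin{verbatim}
import numpy as np

def accept_model_jump(rng, log_post_new, log_post_old,
                      log_q_reverse, log_q_forward,
                      log_jacobian, log_bias_new, log_bias_old):
    """Metropolis-Hastings test for a (possibly biased) inter-model jump.
    Biasing each model's posterior by an estimate of its evidence adds the
    term log_bias_old - log_bias_new; the bias is divided back out of the
    sampled model counts afterwards."""
    log_alpha = (log_post_new - log_post_old
                 + log_q_reverse - log_q_forward
                 + log_jacobian
                 + log_bias_old - log_bias_new)
    return np.log(rng.uniform()) < min(0.0, log_alpha)
\end{verbatim}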
\subsection{Calculation of Bayes Factors}
\label{subsection:CalculationBayesFactors}
There are several approaches that can be used to calculate the model evidences and Bayes factors, including analysis of the reversible jump Monte Carlo model posteriors~\cite{bartolucciEfficientBayesFactor2006} and bridge sampling~\cite{mengSIMULATINGRATIOSNORMALIZING1996} from the prior to the posterior.
\subsubsection{Bridge sampling with intermediate states}
\label{subsubsection:BridgeSampling}
One method for computing Bayes factors involves calculating the marginal likelihoods (model normalizing constants) for each model $M$ individually, and then calculating the Bayes factor as in eq. 5. The calculation of normalizing constants using Monte Carlo methods is a well-known problem in statistical inference and many methods are described in the literature~\cite{gewekeBayesianInferenceEconometric1989,nealProbabilisticInferenceUsing1993,diciccioComputingBayesFactors1997,gelmanSimulatingNormalizingConstants1998}.
A technique common to statistical inference and statistical physics is bridge sampling~\cite{mengSIMULATINGRATIOSNORMALIZING1996}, which calculates the ratio of normalizing constants between two distributions by utilizing information from Monte Carlo samples from both distributions.
The Bennett acceptance ratio (BAR)~\cite{bennettEfficientEstimationFree1976} is a specific case of bridge sampling in which the bridging function $\alpha$ is chosen so that the estimator is unbiased and has minimum variance.
The BAR estimator is limited in cases when overlap between the two distributions is poor. In this case, one can introduce intermediate distributions designed to create a path between the target distributions with sufficient overlap. In this case, which we call \emph{bridge sampling with intermediates}, the ratio of normalizing constants can be estimated with the unbiased, lowest-variance MBAR estimator~\cite{shirtsStatisticallyOptimalAnalysis2008a}, which extends BAR to data collected over multiple states. The process for calculating log Bayes factors from MBAR is analogous to the process of calculating free energies as described by Shirts and Chodera~\cite{shirtsStatisticallyOptimalAnalysis2008a}, with the Boltzmann distributions replaced with the prior $P(\theta|M)$, posterior $P(\theta|D,M)$, and intermediate distributions. For a detailed discussion of how Bayes factors are calculated with MBAR, see Supporting Information section 2.
Our strategy, described visually in figure \ref{fig:mbar_calculations}, is to calculate the model evidence (normalizing constant) of each model posterior $M$ by using MBAR to calculate the ratio of normalizing constants between that model's prior and posterior. Since all priors used here are normalized analytical distributions, the prior normalizing constant is known, and the posterior normalizing constant follows by combining it with the ratio from MBAR. With this information we can calculate the Bayes factors by comparing the posterior normalizing constants between models.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/mbar_procedure.png}
\caption{\textbf{ Bayes factor calculation scheme for 2CLJQ models.} Bridge sampling is used to calculate ratio of marginal likelihoods (normalizing constants) between analytical reference distribution (with known normalizing constant) and posterior distribution. Posterior distribution marginal likelihoods are then calculated and used to compute Bayes factors.}
\label{fig:mbar_calculations}
\end{figure}
Preliminary testing showed that the prior and posterior did not generally have enough overlap to produce reliable estimates of normalizing constant ratios with BAR.
In order to improve overlap, we replace the prior in this sampling process with a more informed auxiliary distribution.
By running a short MCMC simulation of the posterior distribution, we fit a simple multivariate normal distribution to the posterior sample; for more complex posteriors, Gaussian mixture models could be used, although we do not investigate them here. This approach is not common in molecular simulation, where the distribution of samples as a function of the coordinates is very complicated, with the possible exception of simple systems like the 2CLJQ model or crystals with near-harmonic behavior~\cite{abrahamThermalGradientApproach2018}.
Testing also showed that intermediate distributions were required to get sufficient overlap for bridge sampling. Distributions $P_{int}$ are defined as:
\begin{equation}
P_{int} = P_{ref}(\theta)^{f(\lambda)}\times P(\theta|D,M)^{f(1-\lambda)}
\end{equation}
where $P_{ref}$ is the analytical auxiliary distribution.
Through testing we determined that using one intermediate state between the reference distribution and the posterior is enough to consistently achieve sufficient overlap for Bayes factor calculations.
Using intermediate distributions has an analog in free energy simulations with $\lambda$-dynamics simulations~\cite{knightLDynamicsFreeEnergy2009}, and in staged methods of calculating host/guest binding affinities through multiple windows.
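Assuming the simple choice $f(\lambda) = \lambda$ (the exact form of $f$ used in the implementation is not restated here), the log of the unnormalized intermediate density can be sketched as:
\begin{verbatim}
def log_p_intermediate(theta, log_p_ref, log_p_post, lam):
    """Log unnormalized density of a bridge state between the analytical
    reference distribution (lam = 1) and the posterior (lam = 0),
    following P_int = P_ref^f(lam) * P_post^f(1 - lam) with f(x) = x."""
    return lam * log_p_ref(theta) + (1.0 - lam) * log_p_post(theta)
\end{verbatim}
With a single intermediate state, $\lambda = 0.5$ corresponds to the geometric mean of the reference and posterior densities.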
\subsubsection{Model distributions from RJMC}
\label{subsubsection:ModelDistributionsRJMC}
RJMC allows for the direct estimation of Bayes factors through two methods: (1) estimating the ratio of normalizing constants by taking the sampling ratio (SR) of the visits to each model, and (2) estimating this ratio with what is known in the statistics literature as warped bridge sampling, using the BAR estimator.
The sampling ratio estimate is simply the ratio of the number of Monte Carlo samples from each model; assuming each model's parameter space is sampled well, this converges to the model Bayes factor. The warped bridge sampling approach involves applying bridge sampling to sets of samples from two models, using the proposal distributions from section 2.4.2 and parameter mappings from section 2.4.3 to ``warp'' the distributions to improve overlap, similar to mappings in configurational space~\cite{paliwalMultistateReweightingConfiguration2013,schieberConfigurationalMappingSignificantly2019}.
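A sketch of the sampling-ratio estimate (with an illustrative function name), given the chain of visited model indices from an RJMC run and any biasing factors that were applied, is:
\begin{verbatim}
import numpy as np

def sampling_ratio_log_bayes_factor(model_chain, model_a, model_b,
                                    log_bias_a=0.0, log_bias_b=0.0):
    """Estimate ln B_{a/b} from an RJMC chain as the log ratio of the
    number of samples spent in each model, adding back any biasing
    factors that were used to equalize sampling."""
    counts = np.bincount(np.asarray(model_chain))
    return (np.log(counts[model_a]) - np.log(counts[model_b])
            + log_bias_a - log_bias_b)
\end{verbatim}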
\subsubsection{Biased RJMC}
\label{subsubsection:BiasedRJMC}
As discussed in section 2.4.4, biasing factors can be used to equalize the evidence between models and facilitate cross-model jumps in cases where there is a large difference in model evidences. For our data set, the UA model often has very low model evidence compared to AUA or AUA+Q. We can validate the Bayes factor calculations described in section 2.5.1 by performing RJMC with model evidences estimated from the importance sampling method as biasing factors. If the biasing factors from the importance sampling method in 2.5.1 are accurate, then biased RJMC should yield roughly equal sampling between models. In practice, the sampling will not be perfectly equal due to the stochastic nature of the simulation, but large deviations from equal sampling would indicate that these biasing factors are not accurate.
\subsection{Measuring Posterior Predictive Accuracy}
\label{subsection:MeasuringELPPD}
To benchmark the models independently of the posterior performance, we can calculate the \emph{expected log pointwise predictive density} for a new dataset (ELPPD), a metric introduced by Gelman \emph{et al.}~\cite{gelmanUnderstandingPredictiveInformation2014}. This quantity is a sum over new data points of the \emph{expected log predictive density} (ELPD), which evaluates the expectation of a new data point's ($\Tilde{D}_i$) log objective function over the previously obtained posterior distribution $P(\theta | D)$. For the objective function, we choose our standard likelihood function with the new data point as the target, $\log P(\Tilde{D}_i | \theta)$.
\begin{equation}
\mathrm{ELPPD (\tilde{D} | D)} = \sum_{i=1}^{n} \int_{\theta} \log P(\Tilde{D}_i | \theta) P(\theta | D) \mathrm{d}\theta
\end{equation}
In practice, we estimate this quantity by simulating the posterior of each model, then drawing $k$ points at random from the posterior samples:
\begin{equation}
\mathrm{ELPPD}_k (\tilde{D}|D) \approx \sum_{i=1}^{n} \frac{1}{k} \sum_{j=1}^{k} \log P(\Tilde{D}_i | \theta_j) \, ; \, \, \, \theta_j \sim P(\theta | D)
\end{equation}
These quantities are useful in that they assess the model's performance on independent datasets and do not include the prior information, so they are a more ``traditional'' metric to assess model performance.
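A sketch of this estimator (with illustrative names; the posterior samples and pointwise log-likelihood are assumed to be available from the MCMC machinery described above) is:
\begin{verbatim}
import numpy as np

def elppd_estimate(new_data, posterior_samples, log_likelihood_point, k, rng):
    """Monte Carlo estimate of the ELPPD: for each new data point, average
    the pointwise log-likelihood over k parameter sets drawn at random from
    the posterior samples, then sum over the new data points."""
    idx = rng.choice(len(posterior_samples), size=k, replace=False)
    thetas = [posterior_samples[i] for i in idx]
    total = 0.0
    for d_i in new_data:
        total += np.mean([log_likelihood_point(d_i, theta) for theta in thetas])
    return total
\end{verbatim}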
\subsection{Chosen methods for this investigation}
\label{subsection:ChosenMethods}
Model priors are set using the training sample method described in section 2.3.3. To calculate Bayes factors, the MBAR bridge sampling approach (section \ref{subsubsection:BridgeSampling}) is used, with three total $\lambda$ windows (analytical reference distribution, intermediate distribution, posterior distribution) each simulated for $4\times 10^6$ steps. This method was chosen over the RJMC method due to its more accurate Bayes factor calculations.
To validate these results, RJMC is run for $2\times10^6$ steps with biasing factors taken as the model evidences from the MBAR calculation. In all cases, the biased RJMC calculations agree with the MBAR calculations. ELPPD and average deviation calculations are taken from benchmarking posterior simulations performed for $10^6$ steps for each model.
\section{Results \& Discussion}
\label{section:ResultsDiscussion}
\subsection{Effect of priors on Bayes factors}
\label{subsection:EffectPriors}
In order to compute meaningful Bayes factors between 2CLJQ models, we must address the effect of priors on the model posteriors.
\subsubsection{Illustration with uniform priors}
\label{subsubsection:IllustrationUniformPrior}
One of the most important effects of the choice of prior distribution is the amount of \emph{parameter uncertainty} that the prior encodes. In the case of a Bayes factor between the model without a quadrupole (AUA) and the model including a quadrupole (AUA+Q), different uniform priors over the quadrupole have a strong effect on the Bayes factor. This Bayes factor, shown in figure \ref{fig:disjoint_prior}, is calculated with a target of ethane (C$_2$H$_6$) $\rho_l$ and $P_{sat}$ data.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/bayeschange.png}
\caption{\textbf{Bayes factor comparison of AUA and AUA+Q models with varied quadrupole priors.}
Left: Bayes factors and uncertainties between AUA and AUA+Q, tested against C$_2$H$_6$ ($\rho_L$+$P_{sat}$) data, for a uniform prior distribution on the 2CLJQ quadrupole parameter $Q$ with several different bounds.
Wider prior distributions over the quadrupole decrease the evidence in favor of the AUA+Q model. Right: the corresponding uniform prior distributions (matched by color), with the sampled quadrupole posterior distribution superimposed over the priors.}
\label{fig:disjoint_prior}
\end{figure}
As the uniform prior is widened, covering more parameter space, its probability density decreases proportionally, which in turn lowers the model evidence for the AUA+Q model. This simple example illustrates how a less certain prior over a parameter (encoding more parameter uncertainty) penalizes a model that includes that parameter. This prior uncertainty penalty is an intended part of Bayesian model selection. In the Bayes factors calculated in this work, we fit the priors for each model in any given comparison to the same training sample, so any differences in prior uncertainty should be due to model differences rather than prior information.
\subsubsection{Using more data in prior fitting increases prior certainty}
\label{subsubsection:PriorFittingTrainingSample}
To evaluate the effect of different amounts of prior information on the Bayes factors between 2CLJQ models, we calculated Bayes factors with priors fit to training samples containing different amounts of experimental data. Our priors become more certain as more training data is included. While there is no objectively correct Bayes factor, due to its dependence on the prior, the prior should effectively constrain the parameter space so that extraneous parameter values do not affect the model comparison. To ensure that the priors for the 2CLJQ model effectively constrained the parameter space, we constructed them with three levels of data in the training sample: n=3, 5, or 8 data points per property (referred to as low, medium, and high information priors).
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,trim={0 0 0 1.5cm},clip]{figures/AUA+Q_prior_sigma.png}
\caption{\textbf{Changes in prior distribution shape of the AUA+Q model with different amounts of data in the training set.} Lack of overlap between the low data (n=3) training sample and the higher data training samples indicates that the prior is not well constrained with n=3 data points.}
\label{fig:prior_changes}
\end{figure}
In this case, shown in figure \ref{fig:prior_changes}, the overlap between the low information prior (n=3 data points) and the medium and high information priors (n=5 and n=8, respectively) is poor, indicating that the low information prior does not constrain the parameter space in the same way as the other priors. In our Bayes factor calculations, the high information prior is used, since it has excellent overlap with the medium information prior, and the increased amount of data in the training sample increases our confidence that the prior represents parameters that can model the whole range of experimental data points. We varied the amount of information in this experiment to determine the effect of the amount of data included on the prior and therefore on the Bayes factor; in general, we would use as much information as reasonably possible to inform the prior.
\subsection{Results for $\rho_l$, $P_{sat}$ targets}
\label{subsection:2crit}
For the $\rho_l, P_{sat}$ target, the preferred model varies significantly based on the molecule that it is applied to.
\subsubsection{Diatomic molecules}
\label{subsubsection:2crit_diatomic}
For the set of diatomic molecules in figure \ref{fig:diatomic_2crit}, Bayes factor evidence favors either the simplest UA model or the AUA model. For bromine, the AUA model produces slightly different LJ parameters than the UA model, which yield improved physical property accuracy for this target but are incompatible with the fixed bond length value of the UA model (sampling bond lengths of $\sim$ 0.18 nm, rather than 0.23 nm as in the UA model). While the value in the UA model reflects a physical bond length, the anisotropic model's assumption that the interaction sites are not necessarily separated by the equilibrium bond length produces better physical property estimates. This is similar to the behavior of oxygen, which also selects a bond length different from the experimental value, but with weaker evidence for the AUA model.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/diatomics_2crit.png}
\caption{\textbf{Bayes factors between UA, AUA, AUA+Q models for diatomic molecules, tested against $\rho_l, P_{sat}$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. Simpler models (UA/AUA) are favored in these cases.}
\label{fig:diatomic_2crit}
\end{figure}
\subsubsection{Hydro/fluorocarbons}
\label{subsubsection:2crit_hfc}
For the hydrocarbons/fluorocarbons in figure \ref{fig:hfc_2crit}, the evidence generally points towards the more complex models (AUA/AUA+Q), with the strength of the experimental quadrupole moment generally dictating whether the $Q$ parameter is required. C$_2$H$_2$ and C$_2$F$_4$ have the largest sampled quadrupole moments (Q $\approx$ 0.5, 0.83) and overwhelming evidence in favor of the AUA+Q model. C$_2$H$_4$ and C$_2$H$_6$ have lower sampled quadrupole moments (Q $\approx$ 0.41, <0.25) and less evidence in favor of including this parameter; C$_2$H$_4$ and C$_2$H$_6$ have significant evidence in favor of the AUA model.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/hfc_2crit.png}
\caption{\textbf{ Top panel: Bayes factors between UA, AUA, AUA+Q models for hydro/fluorocarbon molecules, tested against $\rho_l, P_{sat}$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) between the favored model and the named model. Size of blue bar indicates the strength of the evidence for the favored model (a larger bar indicates stronger evidence for the favored model).}
\label{fig:hfc_2crit}
\end{figure}
\subsection{Results for $\rho_l, P_{sat}, \gamma $ targets}
\label{subsection:3crit}
The addition of the surface tension target tends to push the evidence in favor of the more complex models, which is expected given the need to reproduce a very different type of molecular data.
\subsubsection{Diatomics}
\label{subsubsection:diatomic_3crit}
Among the diatomics in figure \ref{fig:diatomic_3crit}, both Br$_2$ and N$_2$ require more complex models, with very strong evidence against the UA model; the evidence weakly favors AUA over AUA+Q for N$_2$ and strongly favors AUA+Q for Br$_2$. For F$_2$ and O$_2$, there is very little difference in model performance (before accounting for parsimony) between UA and the more complex models, leading Bayesian inference to favor UA due to parsimony. For Br$_2$ and N$_2$, the more complex models (AUA and AUA+Q) correct a systematic overprediction of both $P_{sat}$ and $\gamma$ relative to the UA model, overcoming its parsimony advantage.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{figures/diatomics_3crit.png}
\caption{\textbf{Bayes factors between UA, AUA, AUA+Q models for diatomic molecules, tested against $\rho_l, P_{sat}, \gamma$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. Addition of $\gamma$ data pushes model decision for Br$_2$ and N$_2$ towards more complex models (AUA/AUA+Q).}
\label{fig:diatomic_3crit}
\end{figure}
\subsubsection{Hydro/fluorocarbons}
\label{subsubsection:3crit_hfc}
For the hydro/fluorocarbons in figure \ref{fig:hfc_3crit} (C$_2$F$_4$ omitted due to lack of experimental $\gamma$ data), C$_2$H$_2$ still has extremely strong evidence in favor of AUA+Q, and the inclusion of $\gamma$ in the target yields weak evidence in support of the AUA+Q model for C$_2$H$_4$. C$_2$H$_6$ has very weak evidence against the inclusion of a quadrupole, as the AUA and AUA+Q models sample very similar values. All of these molecules have very strong evidence against the simplest UA model.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{figures/hfc_3crit.png}
\caption{\textbf{Bayes factors between UA, AUA, AUA+Q models for hydro/fluorocarbon molecules, tested against $\rho_l, P_{sat}, \gamma$ data.} Red bar indicates favored model, blue bars indicate log Bayes factor (strength of evidence) against corresponding models. C$_2$F$_4$ omitted due to lack of $\gamma$ data.}
\label{fig:hfc_3crit}
\end{figure}
\subsection{Parameter Correlations}
\label{subsection:ParameterCorrelations}
An advantage of this Bayesian inference process is the information about parameter probability distributions obtained from the MCMC process. This information is valuable for examining parameter correlations and understanding parameter sensitivity.
\subsubsection{Correlation of Lennard-Jones parameters}
\label{subsubsection:LJCorrelation}
One of the notable parameter trends in this model, shown in figure \ref{fig:LJ_Q_param_correlation}, is the high degree of correlation between the $\epsilon, \sigma, L$ parameters that represent the Lennard-Jones interactions. A degree of correlation between the LJ $\epsilon$ and $\sigma$ has been observed previously~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a,shirtsSolvationFreeEnergies2005}, but the linear/planar nature of the $(\epsilon,\sigma, L)$ probability surface suggests that the 2CLJQ model has a number of degenerate or nearly degenerate parameter sets that do a relatively good job of reproducing experimental data. Since the sampling is based on the surrogate models, the correlation is probably overestimated compared to the full model.
The quadrupole parameter is also correlated with the other parameters, but more weakly. Corner plots for all models and targets are available in the Supporting Information, section 7.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA+Q_corner.pdf}
\caption{\textbf{ Parameter correlations shown in triangle plot of AUA+Q model's 2-d marginal parameter distributions for C$_2$H$_4$, $\rho_l+P_{sat}$ target.}
Plot shows high degree of correlation between $\epsilon,\sigma,L$, and weaker correlation with $Q$.}
\label{fig:LJ_Q_param_correlation}
\end{figure}
Notably, we do not find strong multi-modalities in any of these situations; most have a single maximum of parameter probability. This is probably due to the simplicity of the model and its surrogate models; we do not necessarily expect this to be true for more complex models.
\subsection{Benchmarking}
\label{subsection:Benchmarking}
Benchmarking based on ELPPD results was performed for all compounds with enough experimental data available after prior fitting and Bayes factor calculation. In general, measures of model performance based on Bayes factor and likelihood-only ELPPD benchmarking agree, especially when the evidence in favor of one model is strong. There are some situations where these methods disagree; this is where the parsimonious nature of the Bayesian approach is important.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/c2h4_benchmark.png}
\caption{\textbf{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_4$.} Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points). Data shows AUA+Q model (yellow triangles) improves performance in reproducing $P_{sat}$ data at low temperatures, but Bayes factor evidence supports AUA model (blue circles) due to parsimony.}
\label{fig:C2H4_benchmark}
\end{figure}
This pattern is illustrated in figure \ref{fig:C2H4_benchmark}, in the $\rho_l, P_{sat}$ criteria for C$_2$H$_4$. Although the AUA and AUA+Q models perform very similarly at reproducing $\rho_l$ (average deviation from experiment of 1.78 standard deviations for AUA vs 1.75 for AUA+Q) and $P_{sat}$ (0.81 for AUA vs 1.02 for AUA+Q), the Bayes factor favors the AUA model due to the complexity penalty that the Bayesian method assesses.
\subsection{Challenges of implementation}
\label{subsection:ImplementationChallenges}
A common challenge of using MCMC to sample points from a probability distribution is ensuring good sampling and mixing within the MCMC chains. In the posteriors sampled here, exhaustive sampling can be achieved with simple MCMC techniques and long chains. In higher-dimensional spaces with rougher probability landscapes, directed sampling techniques like Hamiltonian Monte Carlo~\cite{nealMCMCUsingHamiltonian2012} (HMC), Langevin dynamics~\cite{leimkuhlerRobustEfficientConfigurational2013a}, or the No-U-Turn Sampler (NUTS)~\cite{hoffmanNoUTurnSamplerAdaptively2011} may be required to sample the space efficiently. Running multiple chains from different starting points may also be important in situations with significant multi-modality; advanced sampling techniques to ``bridge the gap'' between disparate basins may also be helpful.
A more practical challenge to MCMC sampling is the implementation of the surrogate models used to sample parameter sets. For most parameterization problems, implementing surrogate models will be much more difficult than for the 2CLJQ model. For more complex models like biomolecular force fields, surrogate models will not be general enough to capture model responses for large classes of molecules or physical properties, so surrogate models will need to be purpose-built for a process like this. This is a challenging task, but should be possible by simulating properties of interest for multiple parameter sets and then using modeling techniques well suited to sparse data, such as Gaussian process (GP) regression~\cite{liuWhenGaussianProcess2019,oliverKrigingMethodInterpolation1990}. Befort et al.~\cite{befortMachineLearningDirected2021} recently had success using GPs to build physical property surrogate models for small molecule force fields. These methods could be enhanced with reweighting techniques like MBAR on simulated points to add gradient information to the surrogate models~\cite{messerlyConfigurationSamplingBasedSurrogateModels2018a}.
Another implementation challenge is selecting a technique to compute Bayes factors from these MCMC chains. As discussed in section \ref{subsubsection:RJMC}, RJMC, while attractive due to its simplicity and simultaneous sampling, will be difficult to implement in large model spaces, requiring much fine-tuning to facilitate model jumps. The MBAR-based technique we use is more attractive in a large model space, but requires some testing to ensure a good reference model, and probability overlap between the reference model and the posterior.
\section{Conclusions}
\label{section:Conclusions}
In this work, we introduce Bayesian inference with surrogate modeling as a paradigm for making model decisions for non-bonded molecular interactions. By testing this strategy on the 2CLJQ model, we demonstrate that the technique can choose between models in a way that balances model performance with parsimony. We anticipate that this will be useful in force field parameterization, where tradeoffs between model complexity and performance are extremely common. We also assess the role the Bayesian prior has on the choice of model, and offer insight into how these choices may be made in a force field parameterization context.
The model decisions made in the Bayesian inference process vary widely due to the compound and data targets considered, which highlights the variability of the targets. Although this does not yield a blanket recommendation for a level of model complexity, it identifies which compounds and properties require a more complex model. This problem-specific parsimony is of particular interest to the molecular modeling community, because the goal for molecular model selection is choosing the simplest and computationally cheapest model that describes the data of interest with sufficient experimental accuracy.
The most significant challenge of applying this technique to more complex systems will be creating the surrogate models required for the process. For biomolecular force fields, it is possible to build problem-specific Gaussian process surrogate models from simulation data and MBAR reweighting to explore the parameter space of combinations of Lennard-Jones parameters, though the sampling of parameter space required to construct them is a significant challenge. This technique is also applicable to other common dispersion-repulsion model selection problems such as LJ typing schemes, as well as LJ combination rules. It is also potentially useful for choices between fixed-charge electrostatic models, and could help identify where virtual sites or polarizable sites would be most useful. Overall, this technique appears to provide a systematic method of evaluating models for molecular simulation based on performance and parsimony.
\section{Data and Software Availability}
Datasets and code used to generate the results shown in this study are available from \url{https://github.com/SimonBoothroyd/bayesiantesting/tree/combined_run}.
\section{Author Contributions}
Contributions based on CRediT taxonomy:
\noindent O.C.M.: Conceptualization, data curation, formal analysis, investigation, methodology, software, visualization, original draft, review and editing \\
S.B.: Methodology, software, investigation, visualization, review and editing \\
R.A.M.: Conceptualization, data curation, resources, software, methodology, review and editing \\
J.F.: Methodology, review and editing \\
J.D.C.: Resources, funding acquisition, review and editing \\
M.R.S.: Conceptualization, methodology, funding acquisition, resources, project administration, supervision, review and editing. \\
\section{Acknowledgements and Funding}
We thank the Open Force Field Consortium for funding, including our industry partners as listed at the Open Force Field website, and the Molecular Sciences Software Institute (MolSSI) for its support of the Open Force Field Initiative. We gratefully acknowledge all current and former members of the Open Force Field Initiative and the Open Force Field Scientific Advisory Board. Research reported in this publication was in part supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number R01GM132386, specifically partial support of OCM, MRS, and JDC. OCM, MRS, JDC, and JF acknowledge support from NSF CHE-1738975 for parts of the project. These findings are solely those of the authors and do not necessarily represent the official views of the NIH or NSF.
\section{Disclosures}
The Chodera laboratory receives or has received funding from multiple sources, including the National Institutes of Health, the National Science Foundation, the Parker Institute for Cancer Immunotherapy, Relay Therapeutics, Entasis Therapeutics, Silicon Therapeutics, EMD Serono (Merck KGaA), AstraZeneca, Vir Biotechnology, XtalPi, Foresite Labs, the Molecular Sciences Software Institute, the Starr Cancer Consortium, the Open Force Field Consortium, Cycle for Survival, a Louis V. Gerstner Young Investigator Award, and the Sloan Kettering Institute. A complete funding history for the Chodera lab can be found at \url{http://choderalab.org/funding}.
JDC is a current member of the Scientific Advisory Boards of OpenEye Scientific Software, Interline, and Redesign Science, and holds equity interests in Interline and Redesign Science.
MRS is an Open Science Fellow for Silicon Therapeutics.
SB is a director of Boothroyd Scientific Consulting Ltd.
\section{Surrogate models}
\subsection{Functional forms}
Surrogate models used in this paper are analytical correlations of the following functional forms:
\begin{equation}
\rho = \left(\rho^{*}_c + C_1(T^*_c-T^*)^{1/3} + C_2(T^*_c-T^*) + C_3 (T^*_c-T^*)^{3/2}\right)\sigma^{-3}
\end{equation}
\begin{equation}
\ln P_{sat}(\sigma, \epsilon, L, Q) = c_1(\sigma, \epsilon, L, Q) + \frac{c_2(\sigma, \epsilon, L, Q)}{T^*} + \frac{c_3(\sigma, \epsilon, L, Q)}{T^{*4}}
\end{equation}
\begin{equation}
\gamma = A(\sigma, \epsilon, L, Q)\left( 1- \frac{T}{T_c}\right)^B
\end{equation}
\begin{equation}
T^* = Tk_B/\epsilon
\end{equation}
\begin{equation}
T^*_c = f(\sigma, \epsilon, L, Q)
\end{equation}
For full details and values of the constants, see Stoll~\cite{stollComprehensiveStudyVapourliquid2009a} and Werth~\cite{werthSurfaceTensionTwo2015a}.
\subsection{Model uncertainty/error estimates}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c}
Property & Temperature Range ($\%$ of $T_c$) & $\%$ error\\
\hline
& $< 0.9 $ & $0.3$ \\
$\rho_l$ & $0.9 - 0.95$ & $ 0.3 + \frac{1-0.3}{0.95-0.9}\times (T - 0.9) $\\
& $>0.95$ & $1.0$\\
\hline
& $< 0.55 $ & $20$ \\
$P_{sat}$ & $0.55 - 0.7$ & $ 20 + \frac{2-20}{0.7-0.55}\times (T - 0.55) $\\
& $>0.7$ & $2.0$\\
\hline
& $< 0.75 $ & $4$ \\
$\gamma$ & $0.75 - 0.95$ & $ 4 + \frac{12-4}{0.95-0.75}\times (T - 0.75) $\\
& $>0.95$ & $12.0$\\
\end{tabular}
\caption{Piecewise uncertainty $u_{surr}$ developed for 2CLJQ surrogate models by Stoll and Werth \cite{stollComprehensiveStudyVapourliquid2009,werthSurfaceTensionTwo2015a} from those authors' simulation results. Piecewise behavior attempts to capture the temperature dependency of uncertainty without adding unjustified complex functions.}
\label{tbl:Uncertainty}
\end{table}
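For reference, a direct transcription of this piecewise uncertainty model into Python (an illustrative helper, taking the reduced temperature $T/T_c$ as input) is:
\begin{verbatim}
def surrogate_uncertainty(prop, t_r):
    """Piecewise relative uncertainty (in %) of the 2CLJQ surrogate models
    as a function of reduced temperature t_r = T / T_c, transcribing the
    table above."""
    if prop == "rho_l":
        if t_r < 0.9:
            return 0.3
        if t_r <= 0.95:
            return 0.3 + (1.0 - 0.3) / (0.95 - 0.9) * (t_r - 0.9)
        return 1.0
    if prop == "P_sat":
        if t_r < 0.55:
            return 20.0
        if t_r <= 0.7:
            return 20.0 + (2.0 - 20.0) / (0.7 - 0.55) * (t_r - 0.55)
        return 2.0
    if prop == "gamma":
        if t_r < 0.75:
            return 4.0
        if t_r <= 0.95:
            return 4.0 + (12.0 - 4.0) / (0.95 - 0.75) * (t_r - 0.75)
        return 12.0
    raise ValueError(f"unknown property: {prop}")
\end{verbatim}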
\newpage
\section{Data temperature ranges for all Bayes factor calculations}
If data from 55--95\% of $T_c$ was available for all properties in a given target, then data points were selected in that range. However, some properties had limited data ranges, and in those cases temperature ranges were selected so that all property data fell within the same range. Temperature ranges are listed in table \ref{tbl:TempRanges}.
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c}
Compound & $\rho_l, P_{sat}$ temperature range & $\rho_l,P_{sat},\gamma$ temperature range \\
\hline
$\mathrm{Br_2}$ & $(0.55, 0.95)\times T_c$ & $(0.47, 0.55)\times T_c$ \\
$\mathrm{F_2}$ & $(0.5, 0.6)\times T_c$ & $(0.5, 0.55)\times T_c$ \\
$\mathrm{N_2}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{O_2}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{C_2H_2}$ & $(0.55, 0.95)\times T_c$ & $(0.62, 0.7)\times T_c$ \\
$\mathrm{C_2H_4}$ & $(0.55, 0.95)\times T_c$ & $(0.41, 0.65)\times T_c$ \\
$\mathrm{C_2H_6}$ & $(0.55, 0.95)\times T_c$ & $(0.55, 0.95)\times T_c$ \\
$\mathrm{C_2F_4}$ & $(0.55, 0.95)\times T_c$ & --- \\
\end{tabular}
\caption{Temperature ranges used to select property data points for Bayes factor calculation. Temperature ranges chosen so that all data points from all properties fall within temperature range.}
\label{tbl:TempRanges}
\end{table}
\newpage
\section{Bayes factor calculation with MBAR}
Bayes factors in the ``Bridge sampling with intermediates'' method are calculated using MBAR. For a given model posterior, first the normalizing constant $c_{ref}$ of the auxiliary reference distribution is calculated. This is trivial because these distributions are analytical.
Then, MBAR is used to calculate the ratio of normalizing constants $c_{post}/c_{ref}$ between the posterior distribution $P(D|\theta, M)$ and the reference distribution $P_{ref}(\theta | M)$ by finding the ratio of MBAR normalizing constants $\hat{c}_{post}/\hat{c}_{ref}$. We note that only the ratio is defined, since the normalizing constants are only known up to a multiplicative constant. $\hat{c}_{post}$ and $\hat{c}_{ref}$ are calculated in equations \ref{equation:MBARCpost} and \ref{equation:MBARCref}. In these equations, the variables $j$ and $k$ iterate over the probability distributions from which samples are taken, and the variable $n$ iterates over the $N_j$ samples from the unnormalized probability distribution labeled by $j$. The $K$ total distributions include all unnormalized distributions from which samples are collected, specifically the posterior $P(D|\theta,M)$, the reference distribution $P_{ref}(\theta|M)$, and any auxiliary intermediate distributions.
\begin{equation}
\label{equation:MBARCpost}
\hat{c}_{post} = \sum_{j=1}^{K} \sum_{n=1}^{N_j} \frac{P(D|\theta_{jn},M)}{\sum_{k=1}^K N_k \hat{c}_k^{-1} P_k(\theta_{jn})}
\end{equation}
\begin{equation}
\label{equation:MBARCref}
\hat{c}_{ref} = \sum_{j=1}^{K} \sum_{n=1}^{N_j} \frac{P_{ref}(\theta_{jn}|M)}{\sum_{k=1}^K N_k \hat{c}_k^{-1} P_k(\theta_{jn})}
\end{equation}
These equations must be solved self-consistently, since $\hat{c}_{post}$ and $\hat{c}_{ref}$ are included among the $\hat{c}_k$.
We note that $P_{ref}$ may be normalized or unnormalized, as long as the actual normalization constant of the version used is what is used as $c_{ref}$ in eq.~\ref{equation:PosteriorNormConstant}. These equations are only unique up to a multiplicative constant, so one of the constants must be set (usually to 1) rather than estimated, and all ratios can then be calculated uniquely.
Self-consistent solution is performed by converting into log probability space to produce effective ``energies'' and then using the python package \texttt{pymbar} (\url{https://github.com/choderalab/pymbar}). For more details on the MBAR equations, see~\cite{shirtsStatisticallyOptimalAnalysis2008a}.
The posterior normalizing constant $c_{post}$ is then calculated by multiplication as in equation \ref{equation:PosteriorNormConstant}.
\begin{equation}
\label{equation:PosteriorNormConstant}
c_{post} = c_{ref} \times \hat{c}_{post}/\hat{c}_{ref}
\end{equation}
At this point we note that the posterior normalizing constant $c_{post}$ is the model marginal likelihood $P(D|M)$. So, for models 1 and 2, we can estimate the Bayes factor $B_{1/2}$ as in equation \ref{equation:BayesFactorCalculate}.
\begin{equation}
\label{equation:BayesFactorCalculate}
B_{1/2} \approx \frac{P(D|M_1)}{P(D|M_2)} = \frac{c_{post,1}}{c_{post,2}}
\end{equation}
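As a minimal, self-contained sketch of this calculation (not the \texttt{pymbar}-based implementation used in this work), the self-consistent MBAR equations above can be iterated directly in log space, and the resulting log normalizing constants combined as in equations \ref{equation:PosteriorNormConstant} and \ref{equation:BayesFactorCalculate}:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def mbar_log_constants(log_q, N_k, n_iter=10000, tol=1e-10):
    """Self-consistent MBAR solution in log space. log_q[k, n] is the log
    of the k-th unnormalized density at the n-th pooled sample; N_k[k] is
    the number of samples drawn from state k. Returns ln c_hat_k, defined
    up to a common additive constant (ln c_hat_0 is pinned to 0)."""
    K, _ = log_q.shape
    log_c = np.zeros(K)
    log_N = np.log(np.asarray(N_k, dtype=float))
    for _ in range(n_iter):
        # mixture denominator evaluated at every pooled sample
        log_denom = logsumexp(log_N[:, None] - log_c[:, None] + log_q, axis=0)
        log_c_new = logsumexp(log_q - log_denom[None, :], axis=1)
        log_c_new -= log_c_new[0]
        if np.max(np.abs(log_c_new - log_c)) < tol:
            return log_c_new
        log_c = log_c_new
    return log_c

def log_model_evidence(log_c_ref_analytic, log_c_hat_post, log_c_hat_ref):
    """ln c_post = ln c_ref + ln(c_hat_post / c_hat_ref)."""
    return log_c_ref_analytic + log_c_hat_post - log_c_hat_ref
\end{verbatim}
The log Bayes factor between two models is then the difference of their log evidences, $\ln B_{1/2} = \ln c_{post,1} - \ln c_{post,2}$.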
\newpage
\section{ln Bayes Factor values for prior training samples (n=3, n=5, n=8)}
\subsection{Low information prior (n=3 data points per property)}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -10.91 $\pm$ 0.08 & 0 & -0.45 $\pm$ 0.13 \\
$\mathrm{F_2}$ & 0 & -6.64 $\pm$ 0.12 & -10.49 $\pm$ 0.18 \\
$\mathrm{N_2}$ & 0 & -4.53 $\pm$ 0.12 & -0.85 $\pm$ 0.2 \\
$\mathrm{O_2}$ & -4.19 $\pm$ 0.08 & 0 & -1.32 $\pm$ 0.14\\
$\mathrm{C_2H_2}$ & -488.13 $\pm$ 0.20 & -43.13 $\pm$ 0.20 & 0\\
$\mathrm{C_2H_4}$ & -93.21 $\pm$ 0.11 & 0 & -3.01 $\pm$ 0.21\\
$\mathrm{C_2H_6}$ & -43.40 $\pm$ 0.10 & 0 & -1.01 $\pm$ 0.24 \\
$\mathrm{C_2F_4}$ & -630.54 $\pm$ 0.20 & -18.91 $\pm$ 0.20 & 0\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with low information (n=3 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_2crit_low.png}
\includegraphics[width=0.85\textwidth]{figures/hfc_2crit_low.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}$ data, with low information (n=3 data points per property) training sample.}
\label{fig:2crit_low}
\end{figure}
\newpage
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -63.37 $\pm$ 0.18 & -10.98 $\pm$ 0.18 & 0 \\
$\mathrm{F_2}$ & -3.25 $\pm$ 0.19 & -1.54 $\pm$ 0.19 & 0 \\
$\mathrm{N_2}$ & -22.51 $\pm$ 0.12 & 0 & -1.00 $\pm$ 0.14 \\
$\mathrm{O_2}$ & 0 & -4.54 $\pm$ 0.10 & -5.62 $\pm$ 0.12\\
$\mathrm{C_2H_2}$ & -346.15 $\pm$ 0.08 & 0 & -142.68 $\pm$ 0.11\\
$\mathrm{C_2H_4}$ & -130.41 $\pm$ 0.12 & 0 & -0.72 $\pm$ 0.18\\
$\mathrm{C_2H_6}$ & -28.27 $\pm$ 0.09 & 0 & -1.32 $\pm$ 0.12\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with low information (n=3 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_3crit_low.png}
\includegraphics[width=0.64\textwidth]{figures/hfc_3crit_low.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}, \gamma$ data, with low information (n=3 data points per property) training sample.}
\label{fig:3crit_low}
\end{figure}
\newpage
\subsection{Medium information prior (n=5 data points per property)}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -9.19 $\pm$ 0.08 & 0 & -1.00 $\pm$ 0.15 \\
$\mathrm{F_2}$ & 0 & -6.34 $\pm$ 0.09 & -6.61 $\pm$ 0.24 \\
$\mathrm{N_2}$ & 0 & -3.33 $\pm$ 0.09 & -2.33 $\pm$ 0.18 \\
$\mathrm{O_2}$ & 0 & -3.99 $\pm$ 0.09 & -2.85 $\pm$ 0.15\\
$\mathrm{C_2H_2}$ & -591.29 $\pm$ 0.19 & -74.29 $\pm$ 0.19 & 0\\
$\mathrm{C_2H_4}$ & -117.64 $\pm$ 0.09 & 0 & -4.77 $\pm$ 0.19\\
$\mathrm{C_2H_6}$ & -41.55 $\pm$ 0.11 & 0 & -1.82 $\pm$ 0.19\\
$\mathrm{C_2F_4}$ & -489.93 $\pm$ 0.17 & -96.63 $\pm$ 0.17 & 0\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with medium information (n=5 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_2crit_med.png}
\includegraphics[width=0.85\textwidth]{figures/hfc_2crit_med.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}$ data, with medium information (n=5 data points per property) training sample.}
\label{fig:2crit_med}
\end{figure}
\newpage
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -48.98 $\pm$ 0.17 & -8.87 $\pm$ 0.17 & 0 \\
$\mathrm{F_2}$ & -2.00 $\pm$ 0.16 & -0.02 $\pm$ 0.16 & 0 \\
$\mathrm{N_2}$ & -17.51 $\pm$ 0.10 & 0 & -0.39 $\pm$ 0.12 \\
$\mathrm{O_2}$ & 0 & -6.81 $\pm$ 0.09 & -6.40 $\pm$ 0.14\\
$\mathrm{C_2H_2}$ & -287.72 $\pm$ 0.08 & 0 & -132.83 $\pm$ 0.10\\
$\mathrm{C_2H_4}$ & -118.62 $\pm$ 0.12 & 0 & -0.47 $\pm$ 0.15\\
$\mathrm{C_2H_6}$ & -29.98 $\pm$ 0.10 & 0 & -0.51 $\pm$ 0.12\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with medium information (n=5 data points per property) training sample.}
\end{table}
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{figures/diatomics_3crit_med.png}
\includegraphics[width=0.64\textwidth]{figures/hfc_3crit_med.png}
\caption{Bayes factors between UA, AUA, AUA+Q models for all molecules, tested against $\rho_l, P_{sat}, \gamma$ data, with medium information (n=5 data points per property) training sample.}
\label{fig:3crit_med}
\end{figure}
\newpage
\subsection{High Information Prior (n=8 data points per property), used in final Bayes factor calculations.}
\subsubsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -7.94 $\pm$ 0.08 & 0 & -1.20 $\pm$ 0.12 \\
$\mathrm{F_2}$ & 0 & -2.87 $\pm$ 0.08 & -2.77 $\pm$ 0.13 \\
$\mathrm{N_2}$ & 0 & -4.21 $\pm$ 0.08 & -3.65 $\pm$ 0.18 \\
$\mathrm{O_2}$ & -0.66 $\pm$ 0.09 & 0 & -1.65 $\pm$ 0.12\\
$\mathrm{C_2H_2}$ & -382.28 $\pm$ 0.21 & -38.07 $\pm$ 0.21 & 0\\
$\mathrm{C_2H_4}$ & -115.57 $\pm$ 0.08 & 0 & -2.81 $\pm$ 0.19\\
$\mathrm{C_2H_6}$ & -38.46 $\pm$ 0.08 & 0 & -1.39 $\pm$ 0.14\\
$\mathrm{C_2F_4}$ & -424.41 $\pm$ 0.23 & -84.85 $\pm$ 0.23 & 0 \\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}$ data, with high information (n=8 data points per property) training sample.}
\end{table}
\subsubsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{c|c|c|c}
Compound & UA & AUA & AUA+Q \\
\hline
$\mathrm{Br_2}$ & -19.76 $\pm$ 0.18 & -3.46 $\pm$ 0.18 & 0 \\
$\mathrm{F_2}$ & 0 & -0.79 $\pm$ 0.08 & -1.90 $\pm$ 0.16 \\
$\mathrm{N_2}$ & -16.33 $\pm$ 0.10 & 0 & -0.19 $\pm$ 0.12 \\
$\mathrm{O_2}$ & 0 & -6.51 $\pm$ 0.10 & -6.72 $\pm$ 0.13\\
$\mathrm{C_2H_2}$ & -206.30 $\pm$ 0.10 & -50.14 $\pm$ 0.10 & 0\\
$\mathrm{C_2H_4}$ & -78.90 $\pm$ 0.13 & -0.96 $\pm$ 0.17 & 0\\
$\mathrm{C_2H_6}$ & -23.50 $\pm$ 0.10 & 0 & -0.24 $\pm$ 0.13\\
\end{tabular}
\caption{$\ln$ Bayes factors relative to the most favored model, tested against $\rho_l, P_{sat}, \gamma$ data, with high information (n=8 data points per property) training sample.}
\end{table}
\newpage
\section{ELPPD Benchmarking Results}
\subsection{$\rho_l, P_{sat}$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{3}{|c|}{ELPPD Avg. over test points} & \multicolumn{3}{|c|}{Avg. Stdev. from Exp.} \\
\hline
Compound & UA & AUA & AUA+Q & UA & AUA & AUA+Q \\
\hline
\multicolumn{7}{|c|}{$\rho_l$}\\
\hline
$\mathrm{Br_2}$ &--- & --- & --- & --- & --- & --- \\
$\mathrm{F_2}$ & --- & --- & --- & --- & --- & --- \\
$\mathrm{N_2}$ & 1.51 & 1.13 & 0.94 & 1.74 & 1.50 & 1.37\\
$\mathrm{O_2}$ & 1.36 & 1.26 & 1.22 & 1.65 & 1.59 & 1.57\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & --- \\
$\mathrm{C_2H_4}$ & 9.59 & 1.59 & 1.53 & 4.38 & 1.78 & 1.75\\
$\mathrm{C_2H_6}$ & 3.07 & 0.81 & 0.82 & 2.48 & 1.27 & 1.28\\
$\mathrm{C_2F_4}$ & --- & --- & --- &--- & --- & --- \\
\hline
\multicolumn{7}{|c|}{$P_{sat}$}\\
\hline
$\mathrm{Br_2}$ & 0.31 & 0.10 & 0.09 & 0.79 & 0.44 & 0.43\\
$\mathrm{F_2}$ & 0.03 & 0.06 & 0.06 & 0.26 & 0.33 & 0.35 \\
$\mathrm{N_2}$ & 0.15 & 0.20 & 0.19 & 0.54 & 0.63 & 0.62 \\
$\mathrm{O_2}$ & 0.87 & 0.25 & 0.38 & 1.32 & 0.70 & 0.88\\
$\mathrm{C_2H_2}$ & 12.98 & 0.95 & 0.14 & 5.09 & 1.38 & 0.54\\
$\mathrm{C_2H_4}$ & 3.78 & 0.33 & 0.52 & 2.75 & 0.81 & 1.02\\
$\mathrm{C_2H_6}$ & 1.95 & 0.19 & 0.21 & 1.98 & 0.62 & 0.64\\
$\mathrm{C_2F_4}$ & 17.84 & 2.83 & 0.21 & 5.97 & 2.38 & 0.64\\
\hline
\end{tabular}
\caption{ELPPD Benchmarking for the $\rho_l, P_{sat}$ target with high information priors. ELPPD averaged over test points is (total ELPPD value/number of test points). While this is not a true average due to the nature of the ELPPD, it allows for comparison when numbers of test data points are different. Average standard deviations from experimental value over test points also shown. Larger values indicate worse overall model performance. ELPPD measurements omitted for properties with insufficient (n$<$10) measurements not already used in prior fitting or Bayes factor calculations.}
\end{table}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\begin{table}[h]
\centering
\begin{tabular}[t]{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{3}{|c|}{ELPPD Avg. over test points} & \multicolumn{3}{|c|}{Avg. Stdev. from Exp.} \\
\hline
Compound & UA & AUA & AUA+Q & UA & AUA & AUA+Q \\
\hline
\multicolumn{7}{|c|}{$\rho_l$}\\
\hline
$\mathrm{Br_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 0.81 & 0.67 & 0.66 & 1.27 & 1.16 & 1.15 \\
$\mathrm{O_2}$ & 1.94 & 2.21 & 2.18 & 1.97 & 2.10 & 2.09\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_4}$ & 0.59 & 1.20 & 0.89 & 1.09 & 1.55 & 1.33\\
$\mathrm{C_2H_6}$ & 2.01 & 0.88 & 0.88 & 2.01 & 1.33 & 1.32\\
\hline
\multicolumn{7}{|c|}{$P_{sat}$}\\
\hline
$\mathrm{Br_2}$ & 1.63 & 0.75 & 0.50 & 1.81 & 1.22 & 1.00 \\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 1.52 & 1.18 & 1.17 & 1.74 & 1.54 & 1.53 \\
$\mathrm{O_2}$ & 1.50 & 1.75 & 1.72 & 1.73 & 1.87 & 1.85\\
$\mathrm{C_2H_2}$ & 9.01 & 0.73 & 0.08 & 4.25 & 1.21 & 0.41\\
$\mathrm{C_2H_4}$ & 6.85 & 2.70 & 3.43 & 3.70 & 2.33 & 2.62\\
$\mathrm{C_2H_6}$ & 3.97 & 1.09 & 1.18 & 2.82 & 1.47 & 1.53\\
\hline
\multicolumn{7}{|c|}{$\gamma$}\\
\hline
$\mathrm{Br_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{F_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{N_2}$ & 9.03 & 6.99 & 7.04 & 4.25 & 3.74 & 3.75\\
$\mathrm{O_2}$ & 8.12 & 7.71 & 7.76 & 4.03 & 3.93 & 3.94\\
$\mathrm{C_2H_2}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_4}$ & --- & --- & --- &--- & --- & ---\\
$\mathrm{C_2H_6}$ & 2.97 & 4.18 & 4.11 & 2.44 & 2.89 & 2.87\\
\hline
\end{tabular}
\caption{ELPPD Benchmarking for the $\rho_l, P_{sat}, \gamma$ target with high information priors. ELPPD averaged over test points is (total ELPPD value/number of test points). Average standard deviations from experimental value over test points also shown. Larger values indicate worse overall model performance. ELPPD measurements omitted for properties with insufficient (n$<$10) measurements not already used in prior fitting or Bayes factor calculations.}
\end{table}
\newpage
\section{Benchmarking Figures}
\subsection{$\rho_l, P_{sat}$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_F2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_F2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for F$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{Br$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_Br2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_Br2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for Br$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_N2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_N2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for N$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{O$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_O2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_O2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for O$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H2_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H2__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H4_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H4__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_4$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H6_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2H6__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$H$_6$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$F$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2F4_Density__deviation.png}
\includegraphics[width=0.48\textwidth]{figures/supplementary_figures/benchmarks/2crit_C2F4__Saturation_pressure__deviation.png}
\caption{ Average $\rho_l$ (left panel), $P_{sat}$ (right panel) \% deviation plots for C$_2$F$_4$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_F2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for F$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{Br$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_Br2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for Br$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_N2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), $\gamma$ (bottom panel) \% deviation plots for N$_2$. Parameter sets drawn from posterior probability distribution, evaluated against separate benchmark data points (open points) as well as points used in calculated Bayes factor (filled points).}
\end{figure}
\newpage
\subsubsection{O$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_O2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), and $\gamma$ (bottom panel) \% deviation plots for O$_2$. Parameter sets are drawn from the posterior probability distribution and evaluated against separate benchmark data points (open points) as well as the points used in the Bayes factor calculation (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H2_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), and $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_2$. Parameter sets are drawn from the posterior probability distribution and evaluated against separate benchmark data points (open points) as well as the points used in the Bayes factor calculation (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H4_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), and $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_4$. Parameter sets are drawn from the posterior probability distribution and evaluated against separate benchmark data points (open points) as well as the points used in the Bayes factor calculation (filled points).}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6_Density__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6__Saturation_pressure__deviation.png}
\includegraphics[width=0.4\textwidth]{figures/supplementary_figures/benchmarks/3crit_C2H6_Surface_tension__deviation.png}
\caption{ Average $\rho_l$ (top left panel), $P_{sat}$ (top right panel), and $\gamma$ (bottom panel) \% deviation plots for C$_2$H$_6$. Parameter sets are drawn from the posterior probability distribution and evaluated against separate benchmark data points (open points) as well as the points used in the Bayes factor calculation (filled points).}
\end{figure}
\newpage
\section{Parameter Distributions from Bayes factor MCMC samples}
These triangle plots are taken from the MCMC sampling of the model posteriors from the MBAR Bayes factor calculations for all 3 models (UA, AUA, AUA+Q), with priors set from the high-information training samples (n=8 data points per property).
Parameter units are nm for $\sigma$ and $L$, K for $\epsilon$, and $\mathrm{D}\cdot\mathrm{nm}$ for $Q$.
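Plots of this type can be generated directly from the MCMC samples. As a minimal sketch (the sample file name and column ordering below are hypothetical, not the exact script used to produce these figures), a corner plot can be made with the \texttt{corner} Python package:
\begin{verbatim}
import numpy as np
import corner

# Hypothetical text file of posterior samples for one compound/model:
# one row per MCMC draw, columns sigma [nm], epsilon [K], L [nm], Q [D*nm].
samples = np.loadtxt("2crit_C2H6_AUA+Q_samples.txt")

fig = corner.corner(
    samples,
    labels=["sigma (nm)", "epsilon (K)", "L (nm)", "Q (D nm)"],
    show_titles=True,  # add summary titles above each 1D histogram
)
fig.savefig("2crit_C2H6_AUA+Q_corner.pdf")
\end{verbatim}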
\subsection{$\rho_l, P_{sat}$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_F2_AUA+Q_corner.pdf}
\caption{Parameter distributions for F$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_F2_triangle}
\end{figure}
\subsubsection{Br$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_Br2_AUA+Q_corner.pdf}
\caption{Parameter distributions for Br$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_Br2_triangle}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_N2_AUA+Q_corner.pdf}
\caption{Parameter distributions for N$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_N2_triangle}
\end{figure}
\subsubsection{O$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_O2_AUA+Q_corner.pdf}
\caption{Parameter distributions for O$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_O2_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H2_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_2$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H2_triangle}
\end{figure}
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_4$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H4_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2H6_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_6$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2H6_triangle}
\end{figure}
\subsubsection{C$_2$F$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/2crit_C2F4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$F$_4$, $\rho_l, P_{sat}$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:2crit_C2F4_triangle}
\end{figure}
\newpage
\subsection{$\rho_l, P_{sat}, \gamma$ target}
\subsubsection{F$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_F2_AUA+Q_corner.pdf}
\caption{Parameter distributions for F$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_F2_triangle}
\end{figure}
\subsubsection{Br$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_Br2_AUA+Q_corner.pdf}
\caption{Parameter distributions for Br$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_Br2_triangle}
\end{figure}
\newpage
\subsubsection{N$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_N2_AUA+Q_corner.pdf}
\caption{Parameter distributions for N$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_N2_triangle}
\end{figure}
\subsubsection{O$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_O2_AUA+Q_corner.pdf}
\caption{Parameter distributions for O$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_O2_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_2$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H2_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_2$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H2_triangle}
\end{figure}
\subsubsection{C$_2$H$_4$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H4_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_4$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H4_triangle}
\end{figure}
\newpage
\subsubsection{C$_2$H$_6$}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_UA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_AUA_corner.pdf}
\includegraphics[width=0.3\textwidth]{figures/supplementary_figures/triangle/3crit_C2H6_AUA+Q_corner.pdf}
\caption{Parameter distributions for C$_2$H$_6$, $\rho_l, P_{sat}, \gamma$ target. From left to right: UA, AUA, AUA+Q}
\label{fig:3crit_C2H6_triangle}
\end{figure}
\bibliographystyle{ieeetr}
\section{Introduction}
Throughout this paper, all varieties will be over the complex numbers.
\par In \cite{PS14}, Popa and Schnell proposed the following
relative version of Fujita's conjecture:
\begin{citedconj}[{\cite[Conjecture 1.3]{PS14}}]\label{conj:ps14}
Let $f\colon Y \to X$ be a morphism of smooth projective varieties, with $\dim
X = n$, and let $\mathcal{L}$ be an ample line bundle on $X$.
For each $k \ge 1$, the sheaf
\begin{equation*}
f_*\omega_Y^{\otimes k} \otimes \mathcal{L}^{\otimes \ell}
\end{equation*}
is globally generated for all $\ell \ge k(n+1)$.
\end{citedconj}
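(For instance, taking $Y = X$, $f = \mathrm{id}_X$, and $k = 1$, the statement specializes to Fujita's original conjecture: for an ample line bundle $\mathcal{L}$ on a smooth projective variety $X$ of dimension $n$, the sheaf $\omega_X \otimes \mathcal{L}^{\otimes \ell}$ should be globally generated for all $\ell \ge n+1$.)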
Additionally assuming that $\mathcal{L}$ is globally generated, Popa and Schnell proved
Conjecture \ref{conj:ps14} more generally for log canonical pairs $(Y,\Delta)$.
Previously, Deng \cite[Theorem C]{Den17} and the first author \cite[Proposition 1.2]{Dut17} have studied this conjecture for klt $\mathbb{Q}$-pairs,
and were able to remove the global generation assumption on $\mathcal{L}$ to obtain generic effective generation statements.
In this paper, we obtain similar generic generation results, more generally for
log canonical pairs $(Y,\Delta)$.
\par First, when $X$ is arbitrarily singular and $\mathcal{L}$ is only big and
nef, we obtain the following quadratic bound on $\ell$.
The case when $(Y,\Delta)$ is klt and $k = 1$ is due to de Cataldo \cite[Theorem
2.2]{dc98}.
\begin{alphtheorem}\label{thm:sing}
Let $f\colon Y \to X$ be a surjective morphism of projective varieties where
$X$ is of dimension $n$.
Let $(Y,\Delta)$ be a log canonical $\mathbb{R}$-pair and let $\mathcal{L}$ be a big and nef
line bundle on $X$.
Consider a Cartier divisor $P$ on $Y$ such that $P \sim_\mathbb{R}
k(K_Y+\Delta)$ for some integer $k \ge 1$.
Then, the sheaf
\[
f_*\mathcal{O}_Y(P)\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}
\]
is generated by global sections on an open set $U$ for every integer $\ell \ge
k(n^2 + 1)$.
\end{alphtheorem}
\par On the other hand, we have the following linear bound when $X$ is smooth
and $\mathcal{L}$ is ample.
The statement in $(\ref{thm:e1})$ extends \cite[Theorem C]{Den17} to log
canonical pairs.
As we were writing this, we learned that a statement similar to $(\ref{thm:e2})$
was also obtained by Iwai \cite[Theorem 1.5]{Iwa17}.
\begin{alphtheorem}\label{thm:pluri}
Let $f\colon Y \to X$ be a fibration of projective varieties where
$X$ is smooth of dimension $n$.
Let $(Y,\Delta)$ be a log canonical $\mathbb{R}$-pair and let $\mathcal{L}$ be an ample
line bundle on $X$.
Consider a Cartier divisor $P$ on $Y$ such that $P \sim_\mathbb{R}
k(K_Y+\Delta)$ for some integer $k \ge 1$.
Then, the sheaf
\[
f_*\mathcal{O}_Y(P)\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}
\]
is globally generated on an open set $U$ for
\begin{enumerate}[label=$(\roman*)$,ref=\roman*]
\item every integer $\ell \ge k(n+1) + n^2-n$; and\label{thm:e1}
\item every integer $\ell > k(n+1) + \frac{n^2-n}{2}$ when $(Y,\Delta)$ is
a klt $\mathbb{Q}$-pair.\label{thm:e2}
\end{enumerate}
\end{alphtheorem}
Here, a \textsl{fibration} is a morphism whose generic fiber is irreducible.
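To compare the two bounds concretely: when $n = 3$ and $k = 2$, Theorem \ref{thm:sing} requires $\ell \ge 20$, whereas Theorem \ref{thm:pluri} requires only $\ell \ge 14$ in case $(\ref{thm:e1})$ and $\ell \ge 12$ in case $(\ref{thm:e2})$.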
\par In both Theorems \ref{thm:sing} and \ref{thm:pluri}, when $Y$ is smooth and
$\Delta$ has simple normal crossings support, we have explicit descriptions of
the open set $U$. See Remark \ref{rmk:loci}.
Thus, we have descriptions of the loci where global generation holds up
to a log resolution.
\par When $X$ is smooth of dimension $\leq 3$ and $\mathcal{L}$ is ample, the bound on $\ell$ can be
improved. This gives the predicted bound in Conjecture \ref{conj:ps14} for
surfaces; see Remark
\ref{rem:lowdim}.
\begin{remark}[Effective non-vanishing]
Theorems \ref{thm:sing} and \ref{thm:pluri} can be interpreted as effective non-vanishing statements. With
notation as in the theorems, it follows that $f_*\mathcal{O}_Y(P)\otimes \mathcal{L}^{\otimes \ell}$ admits
global sections for all $\ell \ge k(n^2+1)$ when $\mathcal{L}$ is big and nef, and for all $\ell \ge k(n+1)+n^2-n$
when $\mathcal{L}$ is ample and $X$ is smooth. Moreover, just as in Theorem \ref{thm:pluri}$(\ref{thm:e2})$, the effective bound of the second non-vanishing statement
can be improved in case $(Y,\Delta)$ is a klt $\mathbb{Q}$-pair.
\end{remark}
We now state the technical results used in proving Theorems \ref{thm:sing} and
\ref{thm:pluri}.
\subsection*{An extension theorem}
Recall that if $\mu\colon X' \to X$ is the blow-up of a projective variety $X$ at
$x$ with exceptional divisor $E$, then the \textsl{Seshadri constant} of a nef
Cartier divisor $L$ at $x$ is
\[
\varepsilon(L;x) \coloneqq \sup\bigl\{t \in \mathbb{R}_{\ge 0} \bigm\vert
\mu^*L-tE\ \text{is nef}\bigr\}.
\]
The following replaces the role of Deng's extension theorem
\cite[Theorem 2.11]{Den17} in our proofs.
\begin{alphtheorem}\label{thm:dc}
Let $f\colon Y \to X$ be a surjective morphism of projective varieties, where
$X$ is of dimension $n$ and $Y$ is smooth.
Let $\Delta$ be an $\mathbb{R}$-divisor on $Y$ with simple normal crossings support
and coefficients in $(0,1]$,
and let $L$ be a big and nef $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$.
Suppose there exists a closed point $x \in U(f,\Delta)$ and
a real number $\ell > \frac{n}{\varepsilon(L;x)}$ such that
\[
P_\ell \sim_\mathbb{R} K_Y+\Delta+\ell f^*L
\]
for some Cartier divisor $P_\ell$ on $Y$.
Then, the restriction map
\begin{equation}\label{eq:thmcrest}
H^0\bigl(Y,\mathcal{O}_Y(P_\ell)\bigr) \longrightarrow
H^0\bigl(Y_x,\mathcal{O}_{Y_x}(P_\ell) \bigr)
\end{equation}
is surjective, and the sheaf $f_*\mathcal{O}_Y(P_\ell)$
is globally generated at $x$.
\end{alphtheorem}
See Notation \ref{notn:goodopen}$(\ref{notn:goodopena})$ for the definition of
the open set $U(f,\Delta)$.
\begin{remark}[Comments on the proofs]
The proofs of Theorems \ref{thm:sing} and
\ref{thm:pluri}$(\ref{thm:e1})$ are in a way an algebraization of Deng's techniques,
exploiting a generic lower bound for Seshadri constants due to
Ein, K\"uchle, and Lazarsfeld (Theorem \ref{thm:ekl95}). In the algebraic setting, this
lower bound was first used by de Cataldo to
prove a version of Theorem \ref{thm:sing} for klt pairs when $k = 1$.
One of our main challenges was to extend de Cataldo's theorem to the log canonical case (see Theorem \ref{thm:dc}).
\par To obtain the better bound in Theorem \ref{thm:pluri}$(\ref{thm:e2})$ for klt $\mathbb{Q}$-pairs, we use
\cite[Proposition 1.2]{Dut17} instead of Seshadri constants.
\par In Theorems \ref{thm:sing} and \ref{thm:dc}, in order to work with line
bundles $\mathcal{L}$ that are big and nef instead of ample, we needed to study the
augmented base locus $\Bsp(\mathcal{L})$ of $\mathcal{L}$ (see Definition \ref{def:baseloci}).
We used Birkar's generalization of Nakamaye's theorem \cite[Theorem 1.4]{Bir17} and a result by
K\"uronya \cite[Proposition 2.7]{Kur13} which capture how $\mathcal{L}$ fails to be
ample.
\par The proof of Theorem \ref{thm:dc} relies on a cohomological injectivity theorem
due to Fujino \cite[Theorem 5.4.1]{Fuj17}.
If $(Y,\Delta)$ is replaced by an arbitrary log canonical $\mathbb{R}$-pair, then the
global generation statement in Theorem \ref{thm:dc} still holds over some open
set (Corollary \ref{cor:dclcsing}).
\end{remark}
\begin{remark}[Effective vanishing]
With the new input of weak positivity, which is discussed next, we give some effective
vanishing statements for certain cases of such pushforwards under smooth morphisms (see Theorem \ref{thm:effvanish}). This improves similar statements
in \cite{Dut17} and is in the spirit of \cite[Proposition 3.1]{PS14}, where Popa and Schnell showed a similar statement
with the assumption that $\mathcal{L}$ is ample and globally generated.
\end{remark}
\subsection*{Effective twisted weak positivity} In order to prove Theorem \ref{thm:pluri}, we also use the following
weak positivity result for log canonical pairs. This may be of independent
interest.
\par In this setting, weak positivity was partially known due to
Campana \cite[Theorem 4.13]{Cam04}, and later more generally due to Fujino \cite[Theorem 1.1]{Fuj15},
but using a slightly
weaker notion of weak positivity (see \cite[Definition 7.3]{Fuj15} and the
comments thereafter). Theorem \ref{thm:wp} extends both of these results.
\begin{alphtheorem}[Twisted Weak Positivity]\label{thm:wp}
Let $f\colon Y\to X$ be a fibration of normal projective varieties such that
$X$ is Gorenstein of dimension $n$.
Let $\Delta$ be an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor on $Y$ such that
$(Y, \Delta)$ is log canonical and $k(K_{Y}+\Delta)$ is $\mathbb{R}$-linearly
equivalent to a Cartier divisor for some integer $k \ge 1$.
Then, the sheaf
\[
f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)
\]
is weakly positive.
\end{alphtheorem}
\par Recall that a torsion-free coherent sheaf $\mathscr{F}$ is \textsl{weakly positive} if there exists a non-empty
open set $U$ such that for every positive integer $a$ and every ample line bundle $H$, there is an integer $b\ge 1$ such that
\[\Sym^{[ab]}\mathscr{F} \otimes H^{\otimes b}\]
is generated by global sections on $U$. Here, $\bullet^{[s]}$ is the
reflexive hull of $\bullet^{s}$ (see Notation \ref{notn:wp}).
\par In \cite[Theorem 4.2]{PS14}, Popa and Schnell showed that if $\Delta = 0$, the morphism $f$ has
generically reduced fibers in codimension 1, and $H = \omega_X\otimes \mathcal{L}^{\otimes n+1}$
with $\mathcal{L}$ ample and globally generated, then weak positivity in Theorem \ref{thm:wp} holds over $U(f,0)$
for all $b\ge k$. In a similar spirit, we prove the following ``effective'' version of twisted weak positivity
when $Y$ is smooth and $\Delta$ has
simple normal crossings support.
Moreover, Theorem \ref{thm:wp} is deduced from
this result and therefore we also obtain an
explicit description, up to a log resolution, of the locus over which weak positivity holds.
This extends \cite[Theorem 4.2]{PS14} to arbitrary fibrations.
\begin{alphtheorem}[Effective Weak Positivity]\label{thm:wpnc}
Let $f\colon Y\to X$ be a fibration of projective varieties where $Y$ is
smooth and $X$ is normal and Gorenstein of dimension $n$.
Let $\Delta$ be an $\mathbb{R}$-divisor on $Y$ with simple normal crossings support
and with coefficients of $\Delta^h$ in $(0,1]$.
Consider a Cartier divisor $P$ on $Y$ such that
$P \sim_\mathbb{R} k(K_Y + \Delta)$ for some integer $k \ge 1$.
Let $U$ be the intersection of $U(f,\Delta)$ with the largest
open set over which $f_*\mathcal{O}_Y(P)$ is locally free, and
let $H = \omega_X\otimes \mathcal{L}^{\otimes n+1}$ for $\mathcal{L}$ an ample
and globally generated line bundle on $X$.
Then, the sheaf
\[
\bigl(f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)\bigr)^{[s]}\otimes H^{\otimes
\ell}
\]
is generated by global sections on $U$ for all integers $\ell\geq k$ and $s\ge
1$.
\end{alphtheorem}
Here, $\Delta^h$ is the \textsl{horizontal part} of $\Delta$ (see Notation
\ref{notn:goodopen}$(\ref{notn:goodopenb})$).
\par When $\lfloor\Delta\rfloor=0$, one can, in a way, get rid of the assumption that
$f_*\mathcal{O}_Y(P)$ is locally free on $U$
using invariance of log plurigenera \cite[Theorem 4.2]{HMX};
see Remark \ref{rem:hmx}.
\par
The proof of Theorem \ref{thm:wpnc} relies on Viehweg's fiber product trick
(see \cite[\S3]{Vie83}, \cite[Theorem 4.2]{PS14}, or \cite[\S3]{Horing} for an exposition).
\subsection*{Acknowledgments}
We would like to thank our advisors Mihnea Popa and Mircea Musta\c{t}\u{a} for their unconditional support,
many enlightening conversations, and useful comments. We are also grateful to Karl Schwede for helpful discussions regarding Grothendieck duality theory.
Finally, we would like to thank Lawrence Ein, Mihai Fulger, Emanuel Reinecke and the anonymous referee for some insightful
comments.
\section{Definitions and Preliminary Results}
In this section, we discuss some definitions and preliminary results.
Throughout this paper, a \textsl{variety} is an integral separated scheme of
finite type over the complex numbers.
We will also fix the following notation:
\begin{notation}\label{notn:goodopen}
Let $f\colon Y \to X$ be a morphism of projective varieties, where $Y$ is
smooth, and let $\Delta$ be an $\mathbb{R}$-divisor with simple normal crossings
support on $Y$.
\begin{enumerate}[label=$(\alph*)$,ref=\alph*]
\item We denote by $U(f,\Delta)$ the largest open subset of $X$ such that
\begin{itemize}
\item $U(f,\Delta)$ is contained in the smooth locus $X_{\mathrm{reg}}$ of $X$;
\item $f\colon f^{-1}(U(f,\Delta)) \to U(f,\Delta)$ is smooth; and
\item The fibers $Y_x \coloneqq f^{-1}(x)$ intersect each component of
$\Delta$ transversely for all closed points $x \in U(f,\Delta)$.
\end{itemize}
This open set $U(f,\Delta)$ is non-empty by generic smoothness; see
\cite[Corollary III.10.7]{Har77} and \cite[Lemma 4.1.11]{Laz04a}.
\label{notn:goodopena}
\item\label{notn:goodopenb} We write
\[
\Delta = \Delta^v+\Delta^h,
\]
where $\Delta^v$ and $\Delta^h$ do not share any components, such that
\begin{itemize}
\item every component of $\Delta^h$ is \textsl{horizontal} over $X$,
i.e., surjects onto $X$; and
\item $\Delta^v$ is \textsl{vertical} over $X$, i.e.,
$f(\Supp(\Delta^v))\subsetneq X$.
\end{itemize}
\end{enumerate}
Note that $U(f,\Delta)$ satisfies $U(f,\Delta)\cap f(\Delta^v)
=\emptyset$.
\end{notation}
\subsection{Reflexive sheaves and weak positivity}
In this section, fix an integral noetherian scheme $X$.
To prove Theorem \ref{thm:wpnc}, we need some basic results on reflexive
sheaves, which we collect here.
\begin{definition}\label{def:reflnormal}
A coherent sheaf $\mathscr{F}$ on $X$ is \textsl{reflexive} if the
natural morphism $\mathscr{F}\to\mathscr{F}^{\vee\vee}$ is an isomorphism, where
$\mathscr{G}^\vee \coloneqq \HHom_{\mathcal{O}_X}(\mathscr{G},\mathcal{O}_X)$.
In particular, locally free
sheaves are reflexive.
\par A coherent sheaf $\mathscr{F}$ on $X$ is \textsl{normal} if the restriction map
\[
\Gamma(U,\mathscr{F}) \longrightarrow \Gamma(U\smallsetminus Z,\mathscr{F})
\]
is bijective for every open set $U\subseteq X$ and every closed subset $Z$ of
$U$ of codimension at least 2.
\end{definition}
\begin{proposition}[see {\cite[Proposition 1.11]{Har94}}]\label{prop:reflexivenormal}
If $X$ is normal, then
every reflexive coherent sheaf $\mathscr{F}$ is normal.
\end{proposition}
\begin{citedlem}[{\cite[\href{http://stacks.math.columbia.edu/tag/0AY4}{Tag
0AY4}]{stacks-project}}]\label{lem:homreflexive}
Let $\mathscr{F}$ and $\mathscr{G}$ be coherent sheaves on $X$, and assume that $\mathscr{F}$ is
reflexive. Then, $\HHom_{\mathcal{O}_X}(\mathscr{G},\mathscr{F})$ is also reflexive.
\end{citedlem}
We will often use these facts
to extend morphisms defined away from a closed subset of codimension at least 2, as recorded in the following:
\begin{corollary}\label{cor:extendsections}
Suppose $X$ is normal, and let $\mathscr{F}$ and $\mathscr{G}$ be coherent sheaves on $X$ such
that $\mathscr{F}$ is reflexive.
If $U \subseteq X$ is an open subset such that $\codim(X \smallsetminus U) \ge
2$, then every morphism $\varphi\colon \mathscr{G}\rvert_U \to \mathscr{F}\rvert_U$ extends
uniquely to a morphism $\widetilde{\varphi}\colon\mathscr{G} \to \mathscr{F}$.
\end{corollary}
\begin{proof}
The morphism $\varphi$ corresponds to a section of the sheaf
$\HHom_{\mathcal{O}_X}(\mathscr{G},\mathscr{F})$ over $U$.
The sheaf $\HHom_{\mathcal{O}_X}(\mathscr{G},\mathscr{F})$ is reflexive by Lemma
\ref{lem:homreflexive}, hence the section $\varphi$ extends uniquely to a
section $\widetilde{\varphi}$ of $\HHom_{\mathcal{O}_X}(\mathscr{G},\mathscr{F})$ over $X$ by
Proposition \ref{prop:reflexivenormal}.
\end{proof}
We will use the following notation throughout this paper:
\begin{citednot}[{\cite[Notation 3.3]{Horing}}]\label{notn:wp}
Let $\mathscr{F}$ be a torsion-free coherent sheaf on a normal variety $X$.
Let $i \colon X^* \hookrightarrow X$ be the largest open set such that
$\mathscr{F}\rvert_{X^*}$
is locally free.
We define
\begin{alignat*}{5}
\Sym^{[b]}\mathscr{F}&\coloneqq i_*\Sym^{b}(\mathscr{F}\bigr\rvert_{X^*})
&\quad \text{and}& \quad& \mathscr{F}^{[b]} &\coloneqq i_*\bigl((\mathscr{F}\bigr\rvert_{X^*})^{\otimes b}\bigr).
\intertext{We can also describe these sheaves as
follows:}
\Sym^{[b]}\mathscr{F} &\simeq \bigl(\Sym^{b}(\mathscr{F})\bigr)^{\vee\vee}
&\quad \text{and}& \quad& \mathscr{F}^{[b]} &\simeq (\mathscr{F}^{\otimes b})^{\vee\vee}.
\end{alignat*}
Indeed, these pairs of reflexive sheaves coincide in codimension 1 and hence are isomorphic by
\cite[Theorem 1.12]{Har94}.
\end{citednot}
We can now define the positivity notion appearing in Theorem \ref{thm:wp}.
\begin{definition}[Weak positivity {\cite[Definition 1.2]{Vie83}}]
\label{def:wp}
Let $X$ be a normal variety, and let $U \subseteq X$ be an open set.
A torsion-free coherent sheaf $\mathscr{F}$ on $X$ is said to be
\textsl{weakly positive on $U$} if
for every positive integer
$a$ and every ample line bundle $\mathcal{L}$ on $X$, there exists an integer
$b \ge 1$ such that $\Sym^{[ab]}\mathscr{F}\otimes \mathcal{L}^{\otimes b}$ is globally
generated on $U$.
We say $\mathscr{F}$ is \textsl{weakly positive} if $\mathscr{F}$ is weakly positive on some
open set $U$.
\end{definition}
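For instance, a nef line bundle $\mathcal{N}$ on $X$ is weakly positive on all of $X$: for every $a$ and every ample $\mathcal{L}$, the line bundle $\mathcal{N}^{\otimes a} \otimes \mathcal{L}$ is ample, so
\[
\Sym^{[ab]}\mathcal{N} \otimes \mathcal{L}^{\otimes b} \simeq (\mathcal{N}^{\otimes a} \otimes \mathcal{L})^{\otimes b}
\]
is globally generated for all sufficiently large $b$. Similarly, every globally generated torsion-free coherent sheaf is weakly positive.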
\subsection{Dualizing complexes and canonical sheaves}
The main reference for this section is \cite{Har66}.
We define the following:
\begin{definition}
Let $X$ be an equidimensional scheme of finite type over a field $k$, with structure morphism $h \colon X \to \Spec k$.
Then the \textsl{normalized dualizing complex} for $X$ is $\omega_X^\bullet
\coloneqq h^!k$, where $h^!$ is the exceptional pullback of Grothendieck
duality \cite[Corollary VII.3.4]{Har66}.
One defines the \textsl{canonical sheaf} on $X$ to be the
coherent sheaf
\[
\omega_X\coloneqq \mathbf{H}^{-\dim X}\omega_X^{\bullet}.
\]
\end{definition}
When $X$ is smooth and equidimensional over a field, the canonical
sheaf $\omega_X$ is isomorphic to the invertible sheaf of volume forms
$\Omega_X^{\dim X}$ \cite[III.2]{Har66}.
\par We will need the explicit description of the exceptional pullback functor
for finite morphisms.
Let $\nu\colon Y\to X$ be a finite morphism of equidimensional schemes of finite type over a field.
Consider the functor
\[
\overline{\nu}^*\colon \mathsf{Mod}(\nu_*\mathcal{O}_Y)\longrightarrow \mathsf{Mod}(\mathcal{O}_Y)
\]
obtained from the morphism $\overline{\nu}\colon (Y,\mathcal{O}_Y) \to (X,\nu_*\mathcal{O}_Y)$
of ringed spaces.
This functor $\overline{\nu}^*$ satisfies the following properties
(see \cite[III.6]{Har66}):
\begin{enumerate}[label=$(\alph*)$,ref=\alph*]
\item\label{item:uppershriekprop1} The functor $\overline{\nu}^*$ is exact since the morphism
$\overline{\nu}$ of ringed spaces is flat.
We define the functor
\begin{align*}
\nu^!\colon \mathsf{D}^+(\mathsf{Mod}(\mathcal{O}_X)) &\longrightarrow \mathsf{D}^+(\mathsf{Mod}(\mathcal{O}_Y))\\
\mathscr{F} &\longmapsto \overline{\nu}^*\RHHom_{\mathcal{O}_X}(\nu_*\mathcal{O}_Y,\mathscr{F})
\end{align*}
\item\label{item:uppershriekprop3} For every $\mathcal{O}_X$-module $\mathscr{G}$, we have $\nu^*\mathscr{G} \simeq
\overline{\nu}^*(\mathscr{G}\otimes_{\mathcal{O}_X} \nu_*\mathcal{O}_Y)$.
\item\label{item:uppershriekprop4}
If $\omega_X^{\bullet}$ is the normalized dualizing complex
for $X$, then $\nu^{!}\omega_X^{\bullet}$ is the normalized dualizing
complex for $Y$.
\end{enumerate}
\par Using the above description, we construct the following \textsl{pluri-trace
map} for integral schemes over fields, which we will use in the
proof of Theorem \ref{thm:wpnc}. We presume that this construction is already known to the experts,
but we could not find a reference.
\begin{lemma}\label{lem:pluritr}
Let $d\colon Y'\to Y$ be a dominant proper birational morphism of
integral schemes of finite type over a field, where $Y'$ is normal and $Y$ is
Gorenstein.
Then, there is a map of pluricanonical sheaves
\[
d_*\omega^{\otimes k}_{Y'} \longrightarrow \omega^{\otimes k}_Y
\]
which is an isomorphism where $d$ is an isomorphism.
\end{lemma}
\begin{proof}
By the universal property of normalization
\cite[\href{http://stacks.math.columbia.edu/tag/035Q}{Tag 035Q}]{stacks-project},
we can factor $d$ as
\[
\begin{tikzcd}[column sep=1.475em]
Y' \rar{d'}\arrow[bend right=25]{rr}[swap]{d} & \overline{Y} \rar{\nu} & Y
\end{tikzcd}
\]
where $\nu$ is the normalization.
Note that $d'$ is proper and birational since $d$ is.
We first construct a similar morphism for $\nu$. Let $n=\dim Y$.
Since $Y$ is Gorenstein, the canonical sheaf $\omega_Y$ is invertible and the
normalized dualizing complex is $\omega_Y[n]$
\cite[Proposition V.9.3]{Har66}.
Using property $(\ref{item:uppershriekprop4})$ above we have
\begin{align*}
\omega_{\overline{Y}} = \mathbf{H}^{-n}(\nu^{!}\omega_Y^{\bullet}) &\simeq
\overline{\nu}^*\big(\mathbf{R}^{-n}\HHom_{\mathcal{O}_Y}(\nu_*\mathcal{O}_{\overline{Y}},
\mathcal{O}_Y[n])\otimes_{\mathcal{O}_Y} \omega_Y\big)\\
&\simeq \overline{\nu}^*\big(\!\HHom_{\mathcal{O}_Y}(\nu_*\mathcal{O}_{\overline{Y}}, \mathcal{O}_Y)\otimes_{\mathcal{O}_Y} \omega_Y\big)
\end{align*}
where we get the first isomorphism since $\overline{\nu}^*$ is exact by
$(\ref{item:uppershriekprop1})$ and since $\omega_Y$ is invertible.
Now $\HHom_{\mathcal{O}_Y}(\nu_*\mathcal{O}_{\overline{Y}}, \mathcal{O}_Y)$ admits a morphism to $\nu_*\mathcal{O}_{\overline{Y}}$ which makes it the largest ideal in $\nu_*\mathcal{O}_{\overline{Y}}$ that is also
an ideal in $\mathcal{O}_Y$. It is the so-called \textsl{conductor ideal} of the
normalization map \cite[(5.2)]{Kol13}. Thus, we get a morphism
\begin{equation*}
\omega_{\overline{Y}} \lhook\joinrel\longrightarrow
\overline{\nu}^*(\nu_*\mathcal{O}_{\overline{Y}} \otimes \omega_Y) \simeq \nu^*\omega_Y.
\end{equation*}
The last isomorphism follows from $(\ref{item:uppershriekprop3})$ above. By
taking the $(k-1)$-fold tensor product of the above morphism we
have
\begin{equation}\label{norm-trace}
\omega_{\overline{Y}}^{\otimes(k-1)}\lhook\joinrel\longrightarrow
\nu^*\omega_Y^{\otimes(k-1)}.
\end{equation}
Finally, we use \eqref{norm-trace} to construct a map
\[
d_*\omega_{Y'}^{\otimes k} \longrightarrow \nu^*\omega_Y^{\otimes(k-1)}\otimes_{\mathcal{O}_{\overline{Y}}} \omega_{\overline{Y}}.
\]
First, we construct the above morphism over the open set $U \subseteq \overline{Y}$ over which $d'$ is an isomorphism.
Denote $V\coloneqq d'^{-1}(U)$. The identity map
\[
\mathrm{id}\colon d'_*\omega_{V}^{\otimes k} \longrightarrow \omega^{\otimes k}_{U}
\]
composed with the map obtained from \eqref{norm-trace} gives the following map
\[
\tau\colon\omega_{U}^{\otimes k}\lhook\joinrel\longrightarrow \nu^*\omega_{Y}^{\otimes(k-1)}\bigr\rvert_U\otimes_{\mathcal{O}_U} \omega_{U}.
\]
Since $\nu^*\omega_{Y}^{\otimes(k-1)}$ is invertible and $\omega_{\overline{Y}}$ is reflexive, the sheaf
$\nu^*\omega_Y^{\otimes(k-1)}\otimes \omega_{\overline{Y}}$ is also reflexive.
Now $\codim(\overline{Y}\smallsetminus U)\geq 2$ by Zariski's Main Theorem (see
\cite[Theorem V.5.2]{Har77}). Therefore by Corollary \ref{cor:extendsections}
we obtain
\[
\widetilde{\tau}\colon d'_*\omega_{Y'}^{\otimes k} \longrightarrow \nu^*\omega_{Y}^{\otimes(k-1)}\otimes_{\mathcal{O}_{\overline{Y}}}\omega_{\overline{Y}}.
\]
Composing $\nu_*\widetilde{\tau}$ with one copy of the trace morphism $\nu_*\omega_{\overline{Y}} \to \omega_Y$ \cite[Proposition III.6.5]{Har66}, we get
\begin{equation}\label{desing-trace}
d_*\omega_{Y'}^{\otimes k}\xrightarrow{\nu_*\widetilde{\tau}}
\nu_*(\nu^*\omega_{Y}^{\otimes
(k-1)}\otimes_{\mathcal{O}_{\overline{Y}}}\omega_{\overline{Y}}) \simeq\omega^{\otimes
(k-1)}_Y\otimes_{\mathcal{O}_Y} \nu_*\omega_{\overline{Y}}\xrightarrow{\mathrm{id}\otimes
\mathrm{Tr}}\omega_Y^{\otimes k}.
\end{equation}
The last part of the statement holds by construction of the maps above.
Indeed, in \eqref{desing-trace}
the trace morphism is compatible with flat base change
\cite[Proposition III.6.6(2)]{Har66}, hence compatible with restriction to the
open set where $d$ is an
isomorphism.
\end{proof}
\subsection{Singularities of pairs}
We follow the conventions of \cite[\S2.3]{Fuj17}; see also
\cite[\S\S1.1,2.1]{Kol13}.
Recall that $X_\mathrm{reg}$ denotes the regular locus of a scheme $X$ (Notation
\ref{notn:goodopen}$(\ref{notn:goodopena})$).
\begin{definition}[Canonical divisor]
Let $X$ be a normal variety of dimension $n$.
A \textsl{canonical divisor} $K_X$ on $X$ is a Weil divisor such that
\[
\mathcal{O}_{X_\mathrm{reg}}(K_X) \simeq \Omega^n_{X_\mathrm{reg}}.
\]
The choice of a canonical divisor $K_X$ is unique up to linear equivalence. Then
one defines $\mathcal{O}_X(K_X)$ to be the reflexive sheaf of rank 1 associated to $K_X$.
\end{definition}
The following lemma allows us to freely pass between divisor and sheaf notation on normal varieties:
\begin{lemma}\label{lem:canonicalnormal}
Let $X$ be a normal variety of dimension $n$.
Then, $\mathcal{O}_X(K_X)$ is isomorphic to $\omega_X$.
\end{lemma}
\begin{proof}
The sheaf $\mathcal{O}_X(K_X)$ is reflexive by definition
and the canonical sheaf $\omega_X$ is S2 (by
\cite[\href{http://stacks.math.columbia.edu/tag/0AWE}{Tag
0AWE}]{stacks-project}), hence reflexive (by \cite[Theorem 1.9]{Har94}).
Since they are both isomorphic to $\Omega^n_{X_\mathrm{reg}}$ on $X_\mathrm{reg}$ and
$\codim(X \smallsetminus X_\mathrm{reg}) \ge 2$, we have $\mathcal{O}_X(K_X) \simeq
\omega_X$ by \cite[Theorem 1.12]{Har94}.
\end{proof}
\begin{definition}[Discrepancy]
Let $(X,\Delta)$ be a pair consisting of a normal variety $X$ and an
$\mathbb{R}$-divisor $\Delta$ on $X$ such that $K_X+\Delta$ is $\mathbb{R}$-Cartier.
Suppose $f\colon Y \to X$ is a proper birational morphism from a normal
variety $Y$, and choose canonical divisors $K_Y$ and $K_X$ such that $f_*K_Y =
K_X$.
In this case, we may write
\[
K_Y = f^*(K_X+\Delta) + \sum_i a(E_i,X,\Delta)E_i,
\]
where the $E_i$ are irreducible Weil divisors.
The real number $a(E_i,X,\Delta)$ is called the \textsl{discrepancy of $E_i$}
with respect to $(X,\Delta)$, and the \textsl{discrepancy} of $(X,\Delta)$ is
\[
\discrep(X,\Delta) = \inf_E \bigl\{ a(E,X,\Delta) \bigm\vert E\ \text{is an
exceptional divisor over $X$} \bigr\},
\]
where the infimum runs over all irreducible exceptional divisors of all proper
birational morphisms $f\colon Y \to X$.
\end{definition}
\begin{definition}[Singularities of pairs]
Let $(X,\Delta)$ be a pair consisting of a normal variety $X$ and an effective
$\mathbb{R}$-divisor $\Delta$ on $X$ such that $K_X+\Delta$ is $\mathbb{R}$-Cartier.
We say that $(X,\Delta)$ is \textsl{klt} if $\discrep(X,\Delta) > -1$ and
$\lfloor \Delta \rfloor = 0$.
We say that $(X,\Delta)$ is \textsl{log canonical} if $\discrep(X,\Delta) \ge
-1$.
\end{definition}
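For instance, if $X$ is smooth and $\Delta = 0$, then $a(E,X,0) \ge 1$ for every exceptional divisor $E$ over $X$, so $(X,0)$ is klt. More generally, if $X$ is smooth and $\Delta$ has simple normal crossings support with coefficients in $(0,1]$ (resp.\ in $(0,1)$), then $(X,\Delta)$ is log canonical (resp.\ klt); this is the situation appearing repeatedly in the statements above.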
We will repeatedly use the following results about log resolutions of log
canonical $\mathbb{R}$-pairs.
\begin{lemma}\label{lem:logres}
Let $(Y,\Delta)$ be a log canonical (resp.\ klt) $\mathbb{R}$-pair, and
consider a Cartier divisor $P$ on $Y$ such that $P\sim_{\mathbb{R}} k(K_Y+\Delta+H)$
for some integer $k \geq 1$ and some $\mathbb{R}$-Cartier $\mathbb{R}$-divisor $H$.
Then, for every proper birational morphism $\mu\colon\widetilde{Y}\to Y$
such that $\widetilde{Y}$ is smooth and $\mu^{-1}(\Delta) + \exc(\mu)$ has
simple normal crossings support,
there exists a divisor $\widetilde{P}$ on $\widetilde{Y}$ and an
$\mathbb{R}$-divisor $\widetilde{\Delta}$ such that
\begin{enumerate}[label=$(\roman*)$]
\item $\widetilde{\Delta}$ has coefficients in $(0,1]$ (resp.\ $(0,1)$) and
simple normal crossings support;
\item The divisor $\widetilde{P} - \mu^*P$ is an effective divisor with
support in $\Supp\bigl(\exc(\mu)\bigr)$;
\item The divisor $\widetilde{P}$ satisfies $\widetilde{P} \sim_\mathbb{R}
k(K_{\widetilde{Y}}+\widetilde{\Delta}+\mu^*H)$; and
\item There is an isomorphism $\mu_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P})\simeq
\mathcal{O}_Y(P)$.
\end{enumerate}
\end{lemma}
\begin{proof}
On $\widetilde{Y}$, we can write
\[
K_{\widetilde{Y}} - \mu^*(K_Y+\Delta) = Q - N
\]
where $Q$ and $N$ are effective $\mathbb{R}$-divisors without common components, such
that $Q - N$ has simple normal crossings support and $Q$ is $\mu$-exceptional.
Note that since $(Y,\Delta)$ is log canonical (resp.\ klt), all coefficients
in $N$ are less than or equal to $1$ (resp.\ less than 1).
Let
\[
\widetilde{\Delta} \coloneqq N + \lceil Q \rceil - Q,
\]
so that by definition, $\widetilde{\Delta}$ has simple normal crossings
support and coefficients in $(0,1]$ (resp.\ $(0,1)$).
Now setting $\widetilde{P} \coloneqq \mu^*P + k\lceil Q \rceil$, we have
\begin{align*}
\widetilde{P} &\sim_\mathbb{R}
k\mu^*(K_Y+\Delta+H) + k\lceil Q \rceil\\
&\sim_\mathbb{R}
kK_{\widetilde{Y}} + k(N + \lceil Q \rceil - Q) + k\mu^*H =
k(K_{\widetilde{Y}} + \widetilde{\Delta}+\mu^*H).
\end{align*}
Since $\lceil Q \rceil$ is $\mu$-exceptional,
we get $\mu_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P}) \simeq \mathcal{O}_Y(P)$
by using the projection formula.
\end{proof}
We also use the following stronger notion of log resolution due to Szab\'o:
\begin{citedthm}[{\cite[Theorem 10.45.2]{Kol13}}]\label{citedthm:Szabo}
Let $X$ be a variety, and
let $D$ be a Weil divisor on $X$.
Then, there is a log resolution $\mu\colon\widetilde{X}\to X$ of $(X,D)$
such that $\mu$ is an isomorphism over
the locus where $X$ is smooth and $D$ has simple normal crossings support.
\end{citedthm}
\subsection{A few tools from Popa--Schnell}
The following result is a slight generalization of \cite[Variant 1.6]{PS14}.
This will be instrumental in proving Theorems \ref{thm:wp} and \ref{thm:wpnc}.
\begin{theorem}\label{thm:ps14}
Let $f\colon Y \to X$ be a morphism of projective varieties where $Y$ is
normal and $X$ is of dimension $n$.
Let $\Delta$ be an $\mathbb{R}$-divisor on $Y$ and $H$ a semiample $\mathbb{Q}$-divisor on $X$
such that for some integer $k \ge 1$, there is a Cartier divisor $P$ on $Y$ satisfying
\[
P \sim_\mathbb{R} k(K_Y + \Delta+f^*H).
\]
Suppose, moreover, that $\Delta$ can be written as
$\Delta= \Delta'+\Delta^v$ where
$(Y,\Delta')$ is log canonical and $\Delta^v$ is an $\mathbb{R}$-Cartier
$\mathbb{R}$-divisor that is vertical over $X$.
Let $\mathcal{L}$ be an ample and globally generated line bundle on $X$. Then, the
sheaf
$$f_*\mathcal{O}_Y(P)\otimes \mathcal{L}^{\otimes\ell}$$
is generated by global sections on some open set $U$ for all $\ell\geq
k(n+1)$. Moreover, when $\Delta'$ has simple normal crossings
support, we have $U=X\smallsetminus f(\Supp(\Delta^v))$.
\end{theorem}
\begin{proof}
Possibly after a log resolution of $(Y,\Delta)$, we may assume that $\Delta=\Delta^h+\Delta^v$ in the sense of Notation
\ref{notn:goodopen}$(\ref{notn:goodopenb})$, such that $(Y,\Delta^h)$ is log
canonical and $\Delta$ has simple normal crossing support. Indeed, let
$\mu\colon \widetilde{Y} \to Y$ be a log resolution of $(Y,\Delta)$.
Then, by Lemma \ref{lem:logres} applied to the pair $(Y,\Delta')$ and $H =
\Delta^v$, we obtain a log canonical $\mathbb{R}$-divisor
$\widetilde{\Delta}$ with simple normal crossings support
on $\widetilde{Y}$ satisfying
\[K_{\widetilde{Y}}+\widetilde{\Delta}+\mu^*\Delta^v \sim_{\mathbb{R}} \mu^*(K_Y+\Delta)+N\]
where $N$ is an effective $\mu$-exceptional divisor. We rename $\widetilde{Y}$
and $\widetilde{\Delta}+\mu^*\Delta^v$ as $Y$ and $\Delta$ respectively.
Now $\Delta$ has simple normal crossings support and $\Delta^h$ is log canonical.
Moreover, since $f^*H$ is semiample, by Bertini's theorem we can pick a $\mathbb{Q}$-divisor
$D\sim_{\mathbb{Q}}f^*H$ with smooth support and satisfying the conditions that $D+\Delta$ has simple
normal crossing support and $D$ does not share any components with $\Delta$.
Letting $\Delta'' \coloneqq
\Delta^v-\lfloor \Delta^v \rfloor$, we have that
\[
\Delta = \Delta^h+\Delta''+\lfloor\Delta^v\rfloor
\]
and $(Y,\Delta^h+\Delta''+D)$ is log canonical. Since $\mathcal{L}$ is ample and globally generated, we therefore obtain that
\[
f_*\mathcal{O}_Y\big(k(K_Y+\Delta^h+\Delta''+f^*H)\big)\otimes \mathcal{L}^{\otimes \ell}
\]
is generated by global sections for all $\ell \geq k(n+1)$ by \cite[Variant 1.6]{PS14}. But
\[
f_*\mathcal{O}_Y\big(k(K_Y+\Delta^h+\Delta''+f^*H)\big)\otimes
\mathcal{L}^{\otimes\ell}\lhook\joinrel\longrightarrow f_*\mathcal{O}_Y(P)\otimes \mathcal{L}^{\otimes\ell},
\]
and they have the same stalks at every point $x\in U$. Thus, the sheaf on the right hand side is generated by global sections at $x$ for all $x\in U$ and for all $\ell \geq k(n+1)$.
\end{proof}
We will also need the following result, which is used in the proof of
\cite[Variant 1.6]{PS14}:
\begin{lemma}[cf.\ {\cite[p.\ 2280]{PS14}}]\label{lem:ps14trick}
Let $f\colon Y \to X$ be a morphism of projective varieties, and let $\mathscr{F}$ be
a coherent sheaf on $Y$ such that the image of the counit map
\[
f^*f_*\mathscr{F} \longrightarrow \mathscr{F}
\]
of the adjunction $f^* \dashv f_*$
is of the form $\mathscr{F}(-E)$ for some effective Cartier divisor $E$ on $Y$.
Then, for every effective Cartier divisor $E' \preceq E$, we have
$f_*\bigl( \mathscr{F}(-E') \bigr) \simeq f_*\mathscr{F}$.
\end{lemma}
\begin{proof}
We have the factorization
\[
\begin{tikzcd}[column sep=1.475em]
f^*f_*\mathscr{F} \rar & \mathscr{F}(-E') \rar[hook] & \mathscr{F}
\end{tikzcd}
\]
and by applying the adjunction $f^* \dashv f_*$, we have a factorization
\[
\begin{tikzcd}[column sep=1.475em]
f_*\mathscr{F} \rar\arrow[bend right=20]{rr}[swap]{\mathrm{id}} & f_*\bigl(\mathscr{F}(-E')\bigr)
\rar[hook] & f_*\mathscr{F}
\end{tikzcd}
\]
of the identity.
\end{proof}
Finally, we record the following numerical argument that will appear in the proofs of Theorems \ref{thm:sing} and
\ref{thm:pluri}.
\begin{lemma}[cf.\ {\cite[Theorem 1.7, Step 2]{PS14}}]\label{lem:numeric}
Let $X$ be a smooth projective variety.
Let $\Delta$ be an effective $\mathbb{R}$-Cartier divisor and $E$ an effective
$\mathbb{Z}$-divisor with simple normal crossings support such that
$\Delta+E$ also has simple normal crossing support and $\Delta$ has coefficients
in $(0,1]$. Let
$0\le c< 1$ be a real number. Then, there exists an effective Cartier divisor $E'\preceq E$ such that
$\Delta+cE - E'$ has simple normal crossings support and coefficients in
$(0,1]$.
\end{lemma}
\subsection{Seshadri constants}
The effectivity of our results in Theorems \ref{thm:sing} and \ref{thm:pluri}
rely on Seshadri constants.
These were originally introduced by Demailly to measure local positivity of
line bundles and thereby study Fujita-type conjectures.
See \cite[Chapter 5]{Laz04a} for more on these invariants.
\begin{definition}
Let $X$ be a projective variety, and let $x
\in X$ be a closed point.
Let $L$ be a nef $\mathbb{R}$-Cartier $\mathbb{R}$-divisor on $X$.
Denote by $\mu\colon X' \to X$ the blow-up of $X$ at $x$ with exceptional
divisor $E$.
The \textsl{Seshadri constant} of $L$ at $x$ is
\[
\varepsilon(L;x) \coloneqq \sup\bigl\{t \in \mathbb{R}_{\ge 0} \bigm\vert
\mu^*L-tE\ \text{is nef}\bigr\}.
\]
If $\mathcal{L}$ is a nef line bundle, then we denote by
$\varepsilon(\mathcal{L};x)$ the Seshadri constant of the associated Cartier divisor
$L$ at $x$.
\end{definition}
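For example, if $H$ is a hyperplane divisor on $X = \mathbb{P}^n$, then $\varepsilon(H;x) = 1$ for every closed point $x$: on the blow-up of $\mathbb{P}^n$ at $x$, the divisor $\mu^*H - E$ is globally generated, hence nef, while $(\mu^*H - tE)^n = 1 - t^n < 0$ for $t > 1$. More generally, taking $V = X$ in \cite[Proposition 5.1.9]{Laz04a} gives the upper bound $\varepsilon(L;x) \le \sqrt[n]{L^n}$ for a nef Cartier divisor $L$ on an $n$-dimensional projective variety.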
The following result is crucial in making our results effective.
\begin{citedthm}[{\cite[Theorem 1]{EKL95}}]\label{thm:ekl95}
Let $X$ be a projective variety of dimension $n$.
Let $L$ be a big and nef Cartier divisor on $X$.
Then, for every $\delta >0$, the locus
\[
\biggl\{ x \in X \biggm\vert \varepsilon(L;x) > \frac{1}{n+\delta}
\biggr\}
\]
contains an open dense set.
\end{citedthm}
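In particular, if $\ell \ge n^2+1$ is an integer, then $\ell > \frac{n}{\varepsilon(L;x)}$ for a general closed point $x \in X$: choosing $\delta < \frac{1}{n}$ in Theorem \ref{thm:ekl95} gives $\frac{n}{\varepsilon(L;x)} < n(n+\delta) < n^2+1$ at such points.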
\begin{remark}\label{rem:betterbounds}
If in the notation of Theorem \ref{thm:ekl95}, we also assume that $X$ is
smooth and $L$ is ample, then better lower bounds are known if $n = 2,3$.
Under these additional assumptions, the locus
\[
\biggl\{ x \in X \biggm\vert \varepsilon(L;x) > \frac{1}{(n-1)+\delta}
\biggr\}
\]
contains an open dense set if $n =2$ \cite[Theorem]{EL93} or $n=3$
\cite[Theorem 1.2]{CN14}.
Here, we use \cite[Lemma 1.4]{EKL95} to obtain results for general points from
the cited results, which are stated for very general points.
In general, it is conjectured that in the
situation of Theorem \ref{thm:ekl95}, the locus
\[
\biggl\{ x \in X \biggm\vert \varepsilon(L;x) > \frac{1}{1+\delta}
\biggr\}
\]
contains an open dense set \cite[Conjecture 5.2.5]{Laz04a}.
\end{remark}
\subsection{The stable and augmented base locus}
In order to deal with big and nef line bundles in Theorems \ref{thm:sing} and \ref{thm:dc}, we will need some facts about
base loci, following \cite{ELMNP09}.
We start with the following:
\begin{definition}\label{def:baseloci}
Let $X$ be a projective variety.
If $L$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, then the \textsl{stable base
locus} of $L$ is the closed set
\begin{align*}
\SB(L) &\coloneqq \bigcap_m \Bs\lvert mL \rvert_\mathrm{red},
\intertext{where $m$ runs over all integers such that $mL$ is Cartier.
If $L$ is an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor on $X$, the \textsl{augmented base
locus} of $L$ is the closed set}
\Bsp(L) &\coloneqq \bigcap_A \SB(L - A)
\end{align*}
where $A$ runs over all ample $\mathbb{R}$-Cartier $\mathbb{R}$-divisors $A$ such that $L-A$
is $\mathbb{Q}$-Cartier.
By definition, if $L$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, then
\[
\SB(L) \subseteq \Bsp(L).
\]
\end{definition}
Note that $\Bsp(L) \ne X$ if and only if $L$ is big by Kodaira's lemma
\cite[Proposition 2.2.22]{Laz04a}.
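For example, $\Bsp(L) = \emptyset$ if and only if $L$ is ample, and for a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor $L$, $\SB(L) = \emptyset$ if and only if $L$ is semiample; both loci thus measure where the corresponding positivity property fails.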
\medskip
\par We will also need the following result, which shows how augmented base loci
and Seshadri constants are related.
The result follows from \cite[\S6]{ELMNP09} if the scheme $X$ is a smooth
variety, but we will need it more generally for singular varieties.
\begin{corollary}\label{cor:seshadribs}
Let $X$ be a projective variety, and let $x
\in X$ be a closed point.
Suppose $L$ is a big and nef $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
If $\varepsilon(L;x) > 0$, then $x \notin \Bsp(L)$.
\end{corollary}
\begin{proof}
If $x \in \Bsp(L)$, then by \cite[Theorem 1.4]{Bir17} there exists
a closed subvariety $V \subseteq X$ containing $x$ such that $L^{\dim V}
\cdot V = 0$, in which case $\varepsilon(L;x) = 0$ by
\cite[Proposition 5.1.9]{Laz04a}.
\end{proof}
\section{An Extension Theorem}
We now turn to the proof of Theorem \ref{thm:dc}.
The proof relies on the following application of
cohomology and base change.
\begin{lemma}\label{lem:cohbasechangeapp}
Let $f\colon Y \to X$ be a proper morphism of separated noetherian schemes,
and let $\mathscr{F}$ be a coherent sheaf on $Y$.
Let $x \in X$ be a point that has an open neighborhood
$U \subseteq X$ where $\mathscr{F}\bigr\rvert_{f^{-1}(U)}$ is flat over $U$.
Consider the following cartesian square:
\[
\begin{tikzcd}
Y_x \rar\dar & Y\dar{f}\\
\Spec\bigl(\kappa(x)\bigr) \rar & X
\end{tikzcd}
\]
If the restriction map $H^0(Y,\mathscr{F}) \to H^0(Y_x,\mathscr{F}\bigr\rvert_{Y_x})$ is
surjective, then the restriction map
\[
H^0(X,f_*\mathscr{F}) \longrightarrow f_*\mathscr{F} \otimes_{\mathcal{O}_X} \kappa(x)
\]
is also surjective.
\end{lemma}
\begin{proof}
Let $f_U \coloneqq f\bigr\rvert_{f^{-1}(U)}$ and $\mathscr{F}_U
\coloneqq \mathscr{F}\bigr\rvert_{f^{-1}(U)}$.
We have the commutative diagram
\[
\begin{tikzcd}
H^0(X,f_*\mathscr{F}) \rar\arrow[equal]{dd} & f_*\mathscr{F} \otimes_{\mathcal{O}_X}
\kappa(x)\arrow{d}[sloped,below]{\sim}[right]{\beta}\\
& f_{U*}\mathscr{F}_U
\otimes_{\mathcal{O}_U} \kappa(x)\dar{\alpha^0(x)}\\
H^0(Y,\mathscr{F}) \rar[twoheadrightarrow] & H^0(Y_x,\mathscr{F}\bigr\rvert_{Y_x})
\end{tikzcd}
\]
where the bottom arrow is surjective by assumption, $\beta$ is an isomorphism
by computing affine-locally, and
$\alpha^0(x)$ is the natural base change map \cite[(8.3.2.3)]{Ill05}.
By the commutativity of the diagram, this map $\alpha^0(x)$ is surjective,
hence is an isomorphism by cohomology and base change \cite[Corollary
8.3.11]{Ill05}.
Thus, the top horizontal arrow is also surjective.
\end{proof}
Before proving Theorem \ref{thm:dc}, we first explain how to deduce a generic
global generation statement for arbitrary log canonical $\mathbb{R}$-pairs $(Y,\Delta)$
from Theorem \ref{thm:dc} by passing to a log resolution.
\begin{corollary}\label{cor:dclcsing}
Let $f\colon Y \to X$ be a surjective morphism of projective varieties, where
$X$ is of dimension $n$.
Let $(Y,\Delta)$ be a log canonical $\mathbb{R}$-pair,
and let $L$ be a big and nef $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$.
Let $\ell$ be a real number for which there
exists a Cartier divisor $P_\ell$ on $Y$ such that
\[
P_\ell \sim_\mathbb{R} K_Y+\Delta+\ell f^*L.
\]
If $\ell > \frac{n}{\varepsilon(L;x)}$ for general $x \in X$, then
the sheaf $f_*\mathcal{O}_Y(P_\ell)$ is generically globally generated.
\end{corollary}
\begin{proof}
Applying Lemma \ref{lem:logres} for $H = \ell f^*L$ to a log resolution
$\mu\colon \widetilde{Y} \to Y$ of $(Y,\Delta)$, we have
the following commutative diagram:
\[
\begin{tikzcd}
H^0\bigl(X,(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell)
\bigr) \rar & (f \circ
\mu)_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell)
\otimes \kappa(x)\\
H^0\bigl(X,f_*\mathcal{O}_Y(P_\ell) \bigr) \rar
\arrow{u}[sloped,below]{\sim} &
f_*\mathcal{O}_Y(P_\ell) \otimes \kappa(x)
\arrow{u}[sloped,below]{\sim}
\end{tikzcd}
\]
where $\widetilde{P}_\ell$ is the divisor on $\widetilde{Y}$ satisfying the
properties in Lemma \ref{lem:logres}.
Then, Theorem \ref{thm:dc} for $(\widetilde{Y},\widetilde{\Delta})$ implies
that for some open subset $U \subseteq X$, the top horizontal arrow is
surjective for all closed points $x \in U$ such that $\ell >
\frac{n}{\varepsilon(L;x)}$, hence the bottom horizontal arrow
is also surjective at these closed points $x$.
We therefore conclude that $f_*\mathcal{O}_Y(P_\ell)$ is generically globally
generated.
\end{proof}
To prove Theorem \ref{thm:dc}, we need the following result on augmented base
loci.
\begin{lemma}\label{lem:nakamaye}
Let $X$ be a projective variety of dimension $n$, and let $L$ be a big and nef $\mathbb{R}$-Cartier
$\mathbb{R}$-divisor on $X$.
Let $x \in X$ be a closed point, and suppose $\varepsilon(L;x) > 0$.
Let $\mu\colon X' \to X$ be the blow-up of $X$ at $x$ with exceptional divisor
$E$.
For every positive real number $\delta < \varepsilon(L;x)$, we have
\[
\Bsp(\mu^*L - \delta E) \cap E = \emptyset.
\]
In particular, if $\mu^*L - \delta E$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, then
\[
\Bs\bigl\lvert m(\mu^*L - \delta E)\bigr\rvert \cap E = \emptyset
\]
for all sufficiently large and divisible integers $m$.
\end{lemma}
\begin{proof}
First, the $\mathbb{R}$-Cartier $\mathbb{R}$-divisor $\mu^*L - \delta E$ is big and nef
since
\begin{equation}\label{lem:nakamayerequiv}
\mu^*L - \delta E \sim_\mathbb{R} \frac{\delta}{\varepsilon(L;x)} \bigl(\mu^*L -
\varepsilon(L;x)E\bigr) + \biggl( 1 - \frac{\delta}{\varepsilon(L;x)}
\biggr) \mu^*L
\end{equation}
is the sum of a nef $\mathbb{R}$-Cartier $\mathbb{R}$-divisor and a big and nef
$\mathbb{R}$-Cartier $\mathbb{R}$-divisor.
Thus, by \cite[Theorem 1.4]{Bir17}, we know that
$\Bsp(\mu^*L - \delta E)$
is the union of positive-dimensional
closed subvarieties $V$ of $X'$ such that
$(\mu^*L - \delta E)^{\dim V} \cdot V = 0$.
\par It suffices to show such a $V$ cannot contain any point $y \in E$.
First, if $V \subseteq E$, then
\begin{align*}
(\mu^*L - \delta E)^{\dim V} \cdot V &= (-\delta E)^{\dim V} \cdot V =
\delta^{\dim V} (-E\rvert_E)^{\dim V} \cdot V > 0,
\intertext{since $\mathcal{O}_E(-E) \simeq \mathcal{O}_E(1)$ is very ample.
On the other hand, if $V \not\subseteq E$, then $V$ is the strict transform of
some closed subvariety $V_0 \subseteq X$ containing $x$, and by
\eqref{lem:nakamayerequiv}, we have}
(\mu^*L - \delta E)^{\dim V} \cdot V
&= \biggl( \frac{\delta}{\varepsilon(L;x)} \bigl(\mu^*L -
\varepsilon(L;x)E\bigr) + \biggl( 1 - \frac{\delta}{\varepsilon(L;x)}
\biggr) \mu^*L \biggr)^{\dim V} \cdot V\\
&\ge \biggl( 1 - \frac{\delta}{\varepsilon(L;x)}
\biggr)^{\dim V} (\mu^*L)^{\dim V} \cdot V\\
&= \biggl( 1 - \frac{\delta}{\varepsilon(L;x)}
\biggr)^{\dim V} L^{\dim V} \cdot V_0 > 0,
\end{align*}
where the first inequality is by nefness of $\mu^*L - \varepsilon(L;x)E$,
and the last inequality is by \cite[Proposition 5.1.9]{Laz04a} and the
condition $\varepsilon(L;x) > 0$.
\par The last statement about base loci follows from the fact that
\[
\Bsp(\mu^*L - \delta E) \supseteq \SB(\mu^*L - \delta E) =
\Bs\bigl\lvert m(\mu^*L - \delta E)\bigr\rvert_{\mathrm{red}}
\]
for all sufficiently large and divisible integers $m$, where the last equality
holds by \cite[Proposition 2.1.21]{Laz04a} since $\mu^*L - \delta E$ is
a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
\end{proof}
Finally, we need the following cohomological injectivity theorem due to Fujino.
\begin{citedthm}[{\cite[Theorem 5.4.1]{Fuj17}}]\label{thm:fujinonj}
Let $Y$ be a smooth complete variety and let $\Delta$ be an $\mathbb{R}$-divisor on
$Y$ with coefficients in $(0,1]$ and simple normal crossings support.
Let $L$ be a Cartier divisor on $Y$ and let $D$ be an effective Weil divisor on
$Y$ whose support is contained in $\Supp\Delta$.
Assume that $L \sim_\mathbb{R} K_Y + \Delta$.
Then, the natural homomorphism
\[
H^i\bigl(Y,\mathcal{O}_Y(L)\bigr) \longrightarrow H^i\bigl(Y,\mathcal{O}_Y(L+D)\bigr)
\]
induced by the inclusion $\mathcal{O}_Y \to \mathcal{O}_Y(D)$ is injective for every $i$.
\end{citedthm}
We can now prove Theorem \ref{thm:dc}.
\begin{proof}[Proof of Theorem \ref{thm:dc}]
Fix a closed point $x \in U(f,\Delta)$ as in the statement, and consider the cartesian square
\[
\begin{tikzcd}
Y' \rar{B}\dar[swap]{f'} & Y\dar{f}\\
X' \rar{b} & X
\end{tikzcd}
\]
where $b$ is the blow-up of $X$ at $x$.
Since $f$ is flat in a neighborhood of $x$, the morphism $B$ can be identified
with the blow-up of $Y$ along $Y_x$, which is a smooth subvariety of
codimension $n$ \cite[\href{http://stacks.math.columbia.edu/tag/0805}{Tag
0805}]{stacks-project}.
Moreover, if $E$ is the exceptional divisor of $b$ and $D$ is the exceptional
divisor of $B$, then $f^{\prime*}E = D$.
By Lemma \ref{lem:cohbasechangeapp}, the surjectivity of \eqref{eq:thmcrest}
in the statement of Theorem \ref{thm:dc}
implies the generic global generation statement, so it suffices to show that
the map in \eqref{eq:thmcrest} is surjective.
\par First, we note that $(Y',B^*\Delta)$ is log canonical:
since $Y_x$ intersects every component of $\Delta$ transversely,
the pullback $B^*\Delta$ of $\Delta$ is equal to the strict transform
$\Delta'$ of $\Delta$ \cite[Corollary 6.7.2]{Ful98}, and so in particular,
$(Y',\Delta')$ is log canonical.
\par Since $\varepsilon(L;x) > n/\ell$, we can choose a sufficiently small
$\delta > 0$ such that $(n+\delta)/\ell \in \mathbb{Q}$ and $\varepsilon(L;x) >
(n+\delta)/\ell$.
Thus, using the fact that $L$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, for real
numbers $m$ of the form $m_0/\ell$ for sufficiently large
and divisible integers $m_0$, we have that $m(\ell b^*L -
(n+\delta)E)$ is Cartier.
Lemma \ref{lem:nakamaye} then implies
\[
S \coloneqq \Bs\bigl\lvert m(\ell b^*L -
(n+\delta)E)\bigr\rvert_\mathrm{red}
\]
does not intersect $E$, i.e., $m(\ell b^*L - (n+\delta)E)$ is
globally generated away from $S$, and in particular, is globally generated on
an open set containing $E$.
Thus, the pullback $m(\ell B^*f^*L - (n+\delta)D)$ of this divisor is
globally generated away from $S' \coloneqq f^{\prime-1}(S)$, and in particular
is globally generated on an open set containing $D$.
Choose
\[
\mathfrak{D}_x \in \bigl\lvert m (\ell B^*f^*L - (n+\delta)D)
\bigr\rvert
\]
which is smooth and irreducible away from $f^{\prime-1}(S)$,
and is such that the component of $\mathfrak{D}_x$ not contained in
$S'$ intersects each component of the support of $\Delta'$
transversely away from $S'$.
Note that such a choice is possible by applying Bertini's theorem
\cite[Corollary III.10.9 and Remark III.10.9.3]{Har77}.
Since $\mathfrak{D}_x$ may have singularities along $S'$, however, we will need
to pass to a log resolution before applying Theorem \ref{thm:fujinonj}.
\par By Theorem \ref{citedthm:Szabo}, there exists a common log resolution
$\mu\colon\widetilde{Y} \to Y'$
for $\mathfrak{D}_x$ and $(Y',\Delta')$ that is an isomorphism away
from $f^{\prime-1}(S) \subsetneq Y'$.
We then write
\[
\mu^*\mathfrak{D}_x = D' + F, \qquad \mu^*\Delta' = \mu_*^{-1}\Delta' + F_1
\]
where $D'$ is a smooth divisor intersecting $Y_x$ transversely and
$F,F_1$ are supported on $\mu^{-1}(S')$.
Define
\[
F' \coloneqq \biggl\lfloor \frac{1}{m} F + F_1 \biggr\rfloor, \qquad
\widetilde{\Delta} \coloneqq \mu^*\Delta' + \frac{1}{m}\mu^*\mathfrak{D}_x
- F' + \delta \mu^*D, \qquad
\widetilde{P}_\ell \coloneqq \mu^*B^*P_\ell + K_{\widetilde{Y}/Y'}.
\]
Note that $\widetilde{\Delta}$ has simple normal crossings support containing
$\mu^*D$, and has coefficients in $(0,1]$ by assumption on the log resolution
and by definition of $F'$.
Note also that
\begin{align*}
\widetilde{P}_\ell - F' &\sim_\mathbb{R} \mu^*B^*(K_Y+\Delta + \ell f^*L)
+ K_{\widetilde{Y}/Y'} - F'\\
&\sim_\mathbb{R} K_{\widetilde{Y}} + \mu^*\Delta' - F' + \mu^*\bigl(\ell B^*f^*L -
(n-1)D\bigr)\\
&\sim_\mathbb{R} K_{\widetilde{Y}} + \mu^*\Delta' + \frac{1}{m}\mu^*
\mathfrak{D}_x - F' + (1+\delta)\mu^*D\\
&\sim_\mathbb{R} K_{\widetilde{Y}} + \widetilde{\Delta} + \mu^*D
\end{align*}
where the second equivalence follows from the fact that $B$ is the blow-up of
the smooth subvariety $Y_x$, which is of codimension $n$, hence
\[
K_{\widetilde{Y}} = \mu^*K_{Y'} + K_{\widetilde{Y}/Y'} = \mu^*B^*K_{Y} +
(n-1)\mu^*D + K_{\widetilde{Y}/Y'}.
\]
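For the reader's convenience, the third equivalence in the chain above is the following routine substitution (all notation as before): since $\mathfrak{D}_x \in \bigl\lvert m(\ell B^*f^*L - (n+\delta)D)\bigr\rvert$, we have $\frac{1}{m}\mu^*\mathfrak{D}_x \sim_\mathbb{R} \mu^*\bigl(\ell B^*f^*L - (n+\delta)D\bigr)$, and therefore
\[
\mu^*\bigl(\ell B^*f^*L - (n-1)D\bigr)
= \mu^*\bigl(\ell B^*f^*L - (n+\delta)D\bigr) + (1+\delta)\mu^*D
\sim_\mathbb{R} \frac{1}{m}\mu^*\mathfrak{D}_x + (1+\delta)\mu^*D.
\]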
\par We can now apply the injectivity theorem \ref{thm:fujinonj} to $\widetilde{P}_\ell - F' -
\mu^*D \sim_{\mathbb{R}} K_{\widetilde{Y}} + \widetilde{\Delta}$ to see that
\begin{equation}\label{eq:dcbninjapp}
H^1\bigl(\widetilde{Y},\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell - F' -
\mu^*D) \bigr) \longrightarrow
H^1\bigl(\widetilde{Y},\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell - F'
) \bigr)
\end{equation}
is injective.
Next, consider the following commutative diagram:
\[
\begin{tikzcd}
H^0\bigl(\widetilde{Y},\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell - F')
\bigr) \rar[twoheadrightarrow]\dar[hook]
& H^0\bigl(\mu^{*}(D),\mathcal{O}_{\mu^{*}(D)}(\widetilde{P}_\ell - F') \bigr)
\arrow[hook]{d}[sloped,above]{\sim}\\
H^0\bigl(\widetilde{Y},\mathcal{O}_{\widetilde{Y}}(\widetilde{P}_\ell)\bigr) \rar
& H^0\bigl(\mu^{*}(D),\mathcal{O}_{\mu^{*}(D)}(\widetilde{P}_\ell) \bigr)\\
H^0\bigl(Y',\mathcal{O}_{Y'}(B^*P_\ell)\bigr)
\rar \arrow{u}[sloped,below]{\sim}
& H^0\bigl(D,\mathcal{O}_{D}(B^*P_\ell)\bigr)
\arrow{u}[sloped,below]{\sim}\\
H^0\bigl(Y,\mathcal{O}_Y(P_\ell)\bigr) \rar\arrow{u}[sloped,below]{\sim}
& H^0\bigl(Y_x,\mathcal{O}_{Y_x}(P_\ell)\bigr)\arrow{u}[sloped,below]{\sim}
\end{tikzcd}
\]
The top right vertical arrow is an isomorphism since $F'$ is disjoint
from $\mu^{*}(D)$.
The bottom right vertical arrow is an isomorphism since $B\rvert_D$ realizes
$D$ as a projective bundle over $Y_x$, hence $(B\rvert_D)_*\mathcal{O}_D \simeq
\mathcal{O}_{Y_x}$.
The other vertical isomorphisms follow from the projection formula and the
fact that $\mu$ and $B$ are birational.
Finally, the top horizontal arrow is surjective by the long exact sequence on
cohomology and the injectivity of \eqref{eq:dcbninjapp}.
The commutativity of the diagram implies the bottom row is surjective, which
is exactly the map in \eqref{eq:thmcrest}.
\end{proof}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
\section{Effective Twisted Weak Positivity}
We now prove Theorem \ref{thm:wpnc} using Viehweg's fiber product trick.
This trick enables us to reduce the global generation of
the reflexivized $s$-fold tensor product
$f_*\mathcal{O}_Y\big(k(K_Y+\Delta)\big)^{[s]}$ to $s=1$ with $Y$ replaced by a
suitable $\widetilde{Y}^s$. The main obstacle is picking a suitable
boundary divisor on $\widetilde{Y}^s$. We tackle this using Theorem \ref{thm:ps14}.
Readers are encouraged to consult \cite[\S4]{PS14}, \cite[\S3]{Vie83}, or
\cite[\S3]{Horing}.
\par Throughout the proof we use $\mathcal{O}_X(K_X)$ and $\omega_X$ interchangeably whenever
$X$ is a normal variety. We can do so by Lemma \ref{lem:canonicalnormal}.
\begin{proof}[Proof of Theorem \ref{thm:wpnc}]\label{pf:wpnc}
For every positive integer $s$, let $Y^s$ denote the reduction of the unique
irreducible component of
\[
\underbrace{Y\times_XY\times_X\cdots\times_XY}_{s\ \text{times}}
\]
that surjects onto $X$; note that it is unique since $f$ has irreducible generic
fiber. Denoting $V \coloneqq f^{-1}(U)$,
we define $V^s$ similarly.
Let $d\colon Y^{(s)} \to Y^s$ be a desingularization of $Y^s$, and note that $d$
is an isomorphism over $V^s$.
We will also denote by $V^s$ the image of $V^s$ under any birational
modification of $Y^s$ which is an isomorphism along $V^s$.
Denote $d_i = \pi_i \circ d$ for $i\in \{1,2,\ldots,s\}$, where $\pi_i\colon Y^s
\to Y$ is the $i$\textsuperscript{th} projection. Since $d_i$ is a
surjective morphism between integral varieties, the pullback $d_i^*\Delta_j$ of the Cartier divisor $\Delta_j$ is well
defined for every component $\Delta_j$ of $\Delta$ (see \cite[\href{http://stacks.math.columbia.edu/tag/02OO}{Tag 02OO}(1)]{stacks-project}).
\par Let $\mu\colon \widetilde{Y}^s \to Y^{(s)}$ be a log resolution as in Theorem \ref{citedthm:Szabo} of the pair
$\bigl(Y^{(s)}, \sum_id_i^*\Delta\bigr)$
so that $\mu$ is an isomorphism over $V^s$.
Denote
\[
\widetilde{\Delta} = \mu^* \displaystyle\sum_id_i^*\Delta.
\]
\begin{claim}\label{step:wpnc2}
There exists a map
\begin{equation}\label{eq:wpncstep2}
\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s/ X}+
\widetilde{\Delta})\bigr) \longrightarrow
\bigl(f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)\bigr)^{[s]}
\end{equation}
which is an isomorphism over $U$.
\end{claim}
Let $X_0$ be the open set in $X$ such that
\begin{itemize}\label{assumptions}
\item The map $f$ is flat over $X_0$;
\item The regular locus of $X$ contains $X_0$; and
\item The sheaf $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ is locally free over $X_0$.
\end{itemize}
Then, $\codim(X\smallsetminus X_0) \geq 2$. Indeed, $X$ is normal and both $f_*\mathcal{O}_Y$ and $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$
are torsion-free.
Now by construction, we have $U \subseteq X_0$.
Since $(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta)))^{[s]}$ is reflexive and
is isomorphic to $(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta)))^{\otimes s}$
on $X_0$, a map
\[
\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s/ X}+
\widetilde{\Delta})\bigr) \longrightarrow
\bigl(f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)\bigr)^{\otimes s}
\]
over $X_0$ will extend to a map of the form in \eqref{eq:wpncstep2} on $X$ by
Corollary \ref{cor:extendsections}.
Denote $Y_0 \coloneqq f^{-1}(X_0)$, and denote by $\widetilde{Y}^s_0$ the
preimage of $X_0$ in $\widetilde{Y}^s$.
This, together with flat base change
\cite[Proposition III.9.3]{Har77}, implies that it suffices to construct a map
\[
\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s_0}\bigl(k(K_{\widetilde{Y}_0^s/ X_0}+
\widetilde{\Delta}\rvert_{\widetilde{Y}^s_0})\bigr) \longrightarrow
\bigl(f_*\mathcal{O}_{Y_0}\bigl(k(K_{Y_0/X_0}+\Delta\rvert_{Y_0})\bigr)\bigr)^{\otimes
s}
\]
which is an isomorphism over $U$.
In this case, by \cite[Corollary
5.24]{Horing} we know that
\[
Y_0^s \coloneqq{} \underbrace{Y_0 \times_X Y_0 \times_X \cdots \times_X
Y_0}_{s\ \text{times}}{}\simeq{}
\underbrace{Y_0\times_{X_0}Y_0\times_{X_0}\cdots\times_{X_0}Y_0}_{s\ \text{times}}
\]
and that $Y_0^s$ is Gorenstein.
We can therefore apply Lemma \ref{lem:pluritr} to $d \circ \mu$, to obtain a
morphism
\[
(d \circ \mu)_*\omega_{\widetilde{Y}_0^{s}/X_0}^{\otimes k} \longrightarrow \omega^{\otimes k}_{Y_0^s/X_0}
\]
which is an isomorphism over $V^s$.
Here $\omega_{Y_0^s/X_0}\coloneqq\omega_{Y_0^s}\otimes {f^s}^*\omega_{X_0}^{-1}$
and similarly for $\omega_{\widetilde{Y}_0^{s}/X_0}$.
This induces a map
\begin{equation}\label{eq:isooveru}
\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s_0}\bigl(k(K_{\widetilde{Y}^s_0/X_0}+\widetilde{\Delta}\rvert_{\widetilde{Y}^s_0}) \bigr)
\longrightarrow f^s_*\Bigl(\omega_{Y^s_0/X_0}^{\otimes k}\otimes
\bigotimes_i\pi_i^*\mathscr{M}\bigr\rvert_{Y^s_0}\Bigr)
\end{equation}
which is an isomorphism over $U$, where
$\mathscr{M} \coloneqq \mathcal{O}_Y(P-kK_Y)$ is the line bundle associated to the
Cartier divisor $P-kK_Y \sim_\mathbb{R} k\Delta$.
We will now show that the sheaf on the right-hand side of \eqref{eq:isooveru}
admits an isomorphism to
\[
\bigl(f_*\mathcal{O}_{Y_0}\bigl(k(K_{Y_0/X_0}+\Delta\rvert_{Y_0})\bigr)\bigr)^{\otimes
s}.
\]
Note that this would show Claim \ref{step:wpnc2}, since \eqref{eq:isooveru}
is an isomorphism over $U$.
We proceed by induction, adapting the argument in \cite[Lemma 3.15]{Horing}
to our twisted setting.
Note that the case $s = 1$ is clear, since in this case $Y^s = Y$ and the
sheaves in question are equal.
By \cite[Corollary 5.24]{Horing} we have that
\[
\omega_{Y_0^s/X_0}^{\otimes k}\otimes\bigotimes_i\pi_i^*\bigl(\mathscr{M}
\bigr\rvert_{Y_0}\bigr)\simeq
\pi_s^*\bigl(\omega_{Y_0/X_0}^{\otimes k}\otimes \mathscr{M}\bigr\rvert_{Y_0}\bigr)\otimes \pi'^*\bigl(\omega_{Y_0^{s-1}/X_0}^{\otimes k}\otimes \mathscr{M}^{s-1}\bigr\rvert_{Y^{s-1}_0}\bigr)
\]
where $\pi'\colon Y^{s}\to Y^{s-1}$ and $\mathscr{M}^{s-1} \coloneqq \bigotimes_{i=1}^{s-1}\pi_i^*\mathscr{M}$. Since
$\omega_{Y_0^{s-1}/X_0}^{\otimes k}\otimes \mathscr{M}^{s-1}\bigr\rvert_{Y^{s-1}_0}$ is locally free, by the projection formula we obtain
\[
f^s_*\Bigl(\omega_{Y_0^{s}/X_0}^{\otimes k} \otimes \bigotimes_{i=1}^{s}
\pi_i^*\mathscr{M}\bigr\rvert_{Y_0}\Bigr) \simeq
f_*\Bigl(\bigl(\omega_{Y_0/X_0}^{\otimes k}\otimes
\mathscr{M}\bigr\rvert_{Y_0}\bigr)\otimes
\pi_{s_*}\pi'^*\bigl(\omega_{Y_0^{s-1}/X_0}^{\otimes k}\otimes
\mathscr{M}^{s-1}\bigr\rvert_{Y^{s-1}_0}\bigr)\Bigr).
\]
Now by flat base change \cite[Proposition III.9.3]{Har77},
\[
\pi_{s_*}\pi'^*\bigl(\omega_{Y_0^{s-1}/X_0}^{\otimes k}\otimes
\mathscr{M}^{s-1}\bigr\rvert_{Y^{s-1}_0}\bigr) \simeq
f^*f^{s-1}_*\bigl(\omega_{Y_0^{s-1}/X_0}^{\otimes k}\otimes
\mathscr{M}^{s-1}\bigr\rvert_{Y^{s-1}_0}\bigr).
\]
By induction the latter is isomorphic to
\[
f^*\bigl(f_*\mathcal{O}_{Y_0}\bigl(k(K_{Y_0/X_0}+\Delta\rvert_{Y_0})\bigr)^{\otimes
s-1}\bigr).
\]
Therefore
\[
f^s_*\Bigl(\omega_{Y_0^{s}/X_0}^{\otimes k} \otimes \bigotimes_i
\pi_i^*\mathscr{M}\bigr\rvert_{Y_0}\Bigr) \simeq f_*\Bigl(\omega_{Y_0/X_0}^{\otimes
k}\otimes \mathscr{M}\bigr\rvert_{Y_0}\otimes
f^*\bigl(f_*\mathcal{O}_{Y_0}\bigl(k(K_{Y_0/X_0}+\Delta\rvert_{Y_0})\bigr)^{\otimes
s-1}\bigr)\Bigr).
\]
Since $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ is locally free over $X_0$, we can
apply the projection formula to obtain
\[
f^s_*\Bigl(\omega_{Y_0^{s}/X_0}^{\otimes k} \otimes \bigotimes_i \pi_i^*\mathscr{M}\bigr\rvert_{Y_0}\Bigr) \simeq
\bigl(f_*\mathcal{O}_{Y_0}\bigl(k(K_{Y_0/X_0}+\Delta\rvert_{Y_0})\bigr)\bigr)^{\otimes
s}.
\]
This concludes the proof of Claim \ref{step:wpnc2}.
We now use Theorem \ref{thm:ps14} to finish the proof of Theorem \ref{thm:wpnc}.
We first claim $\widetilde{\Delta}$ satisfies the hypothesis of Theorem
\ref{thm:ps14}. To do so, first note that $\pi_i$ is flat over $Y_0$, and therefore by flat pullback of cycles we have
\begin{equation*}
\pi_i^*(\Delta_j)\big|_{Y_0^s} = \pi_i^{-1}(\Delta_j\big|_{Y_0})=
Y_0\times_{X_0}\cdots\times_{X_0}\underbrace{\Delta_j}_{i^{\mathrm{th}}\text{ position }}\times_{{X_0}}\cdots\times_{X_0} Y_0.
\end{equation*}
Since $Y_0\supseteq V$ and both $d$ and $\mu$ are isomorphisms over $V^s$, the pullbacks
$\mu^*(\pi_i\circ d)^*\Delta^h_j\big|_{V^s}$ of the horizontal components of $\Delta$ are smooth above $U$ for all $i\in\{1,2,\ldots,s\}$.
In other words, the components of $\widetilde{\Delta}$ either do not intersect
$V^s$, or intersect the fiber over
$x$ transversely for all $x\in U$. Thus,
\[
\widetilde{\Delta}\big|_{V^s}=
\mu^{-1}d^{-1}\sum_i\pi_i^{-1}\big(\Delta^h\big|_{V}\big).
\]
In particular, in the notation of Notation
\ref{notn:goodopen}$(\ref{notn:goodopenb})$, we have that the horizontal part
$\widetilde{\Delta}^h$ equals the closure
$\overline{\widetilde{\Delta}\big|_{V^s}}$ of $\widetilde{\Delta}\big|_{V^s}$ in
$\widetilde{Y}^s$. We can therefore write
\[
\widetilde{\Delta} = \widetilde{\Delta}^h+\widetilde{\Delta}^v,
\]
where by construction, the coefficients of $\widetilde{\Delta}^h$ are in $(0,1]$ and $\widetilde{f}^s\big(\widetilde{\Delta}^v\big) \cap U = \emptyset$.
Finally, we note from
Mori's cone theorem \cite[Theorem 1.24]{KM98} that $H=\omega_X\otimes\mathcal{L}^{\otimes n+1}$ is nef
and hence semiample by the base point free theorem \cite[Theorem 3.3]{KM98}. Therefore $f^*H^{\otimes (\ell-k)}$ is also semiample for all $\ell\ge k$.
Using $H$ again to denote a divisor class of $H$, we argue that since
\begin{equation}\label{eq:applyps14}
\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s/ X}+ \widetilde{\Delta})\bigr)\otimes H^{\otimes \ell}
\simeq \widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s}+ \widetilde{\Delta}+(\ell-k)\widetilde{f}^{s*}H)\bigr)\otimes \mathcal{L}^{\otimes k(n+1)}
\end{equation}
with $\mathcal{L}$ ample and globally generated, we can apply Theorem \ref{thm:ps14} to conclude that the sheaf above in (\ref{eq:applyps14}) is generated by global sections over $U$ for all $\ell\geq k$.
Now fix a closed point $x \in U$.
We have the commutative diagram
\[
\begin{tikzcd}
H^0\bigl(X,\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s/
X}+ \widetilde{\Delta})\bigr)\otimes H^{\otimes \ell} \bigr)
\rar[twoheadrightarrow]
\arrow{d}
& \bigl(\widetilde{f}^s_*\mathcal{O}_{\widetilde{Y}^s}\bigl(k(K_{\widetilde{Y}^s/
X}+ \widetilde{\Delta})\bigr)\otimes H^{\otimes \ell}\bigr)
\otimes \kappa(x)\arrow{d}[sloped,above]{\sim}\\
H^0\bigl(X,\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{[s]}\otimes H^{\otimes
\ell} \bigr) \rar
& \bigl(\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{[s]}\otimes H^{\otimes
\ell}\bigr) \otimes \kappa(x)
\end{tikzcd}
\]
where the vertical arrows are induced by the map \eqref{eq:wpncstep2} from
Claim \ref{step:wpnc2}, and the top horizontal arrow is surjective by the global
generation of the sheaves in \eqref{eq:applyps14} over $U$.
Since \eqref{eq:wpncstep2} is an isomorphism over $U$, the right vertical arrow
is an isomorphism, hence by the commutativity of the diagram, the bottom
horizontal arrow is surjective.
We therefore conclude that
$$\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{[s]}\otimes H^{\otimes \ell}$$
is generated by global sections over $U$ for all $\ell\geq k$.
\end{proof}
\begin{remark}\label{rem:hmx}
When $\lfloor\Delta\rfloor = 0$, if we moreover take $U(f,\Delta)$ to be an open set over which every stratum of $(Y,\Delta)$ is smooth, then applying invariance of log plurigenera
\cite[Theorem 4.2]{HMX}, we can assert that $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big|_{U(f,\Delta)}$ is locally free. In this case we can take $X_0$ to be simply the locus inside $X_{\mathrm{reg}}$ over which $f$ is flat. Moreover, the isomorphism
\begin{equation*}
\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{\otimes s} \simeq
\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{[s]}
\end{equation*}
automatically holds over $U(f,\Delta)$. Thus, Theorem \ref{thm:wpnc} holds more generally over $U(f,\Delta)$.
\end{remark}
We now deduce Theorem \ref{thm:wp} from Theorem \ref{thm:wpnc}.
\begin{proof}[Proof of Theorem \ref{thm:wp}]\label{pf:wp}
Using Lemma \ref{lem:logres}, we assume that $Y$ is smooth and $\Delta$ has simple normal crossing
support.
Then, Theorem \ref{thm:wpnc} implies
\[
\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)^{[s]}\otimes H^{\otimes \ell}
\]
is generated by global sections for all $\ell\geq k$ on an open set $U
\subseteq X$.
Since $f_*\mathcal{O}_Y\big(k(K_{Y/X}+\Delta))$ is locally free over $U$, the map
\[
\bigl(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\bigr)^{[s]}
\longrightarrow \Sym^{[s]}\bigl(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\bigr)
\]
is surjective over $U$, hence
\[
\Sym^{[s]}\big(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\big)\otimes H^{\otimes \ell}
\]
is also generated by global sections for all $\ell\geq k$ on $U$.
Note that for any ample line bundle $\mathcal{L}$, there is an integer
$b\ge 1$ such that $H^{\otimes -k}\otimes \mathcal{L}^{\otimes b}$ is globally
generated.
For such a $b$, the sheaf
\[
\Sym^{[s]}\bigl(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\bigr)\otimes \mathcal{L}^{\otimes b}
\]
is also generated by global sections on $U$.
Since $b$ depends only on $k$ and $H$ and
is independent of $s$, we can set $s=ab$. This implies weak positivity of $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ over $U$.
\end{proof}
\begin{remark}\label{rem:thmwpopen}
The proof of Theorem \ref{thm:wp} shows that when $Y$ is smooth and $\Delta$
has simple normal crossings support, the sheaf $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ is
weakly positive over the open set in the statement of Theorem \ref{thm:wpnc}.
\end{remark}
\section{Generic Generation for Pluricanonical Sheaves}
\subsection{Proof of Theorem \ref{thm:sing}}\label{proof:sing}
We now prove Theorem \ref{thm:sing}, following the strategy in \cite[Theorem 1.7]{PS14} and \cite[Theorem
A]{Dut17}.
The idea is to reduce to the case where $Y$ is smooth and $\Delta$ has simple
normal crossings support, and then maneuver into a situation to which Theorem
\ref{thm:dc} applies.
\begin{proof}[Proof of Theorem \ref{thm:sing}]
We start with some preliminary reductions.
\setcounter{step}{-1}
\begin{step}\label{step:pluripair0}
We may assume that the image of the counit morphism
\begin{equation}\label{eq:pluripaircounit}
f^*f_*\mathcal{O}_Y(P) \longrightarrow \mathcal{O}_Y(P)
\end{equation}
for the adjunction $f^* \dashv f_*$ is nonzero.
\end{step}
Suppose the image of \eqref{eq:pluripaircounit} is the zero sheaf.
Then, the natural isomorphism
\[
\Hom_{\mathcal{O}_Y}\bigl(f^*f_*\mathcal{O}_Y(P),\mathcal{O}_Y(P)\bigr) \simeq
\Hom_{\mathcal{O}_X}\bigl(f_*\mathcal{O}_Y(P),f_*\mathcal{O}_Y(P)\bigr)
\]
from the adjunction $f^* \dashv f_*$ implies that the identity morphism
$\mathrm{id}\colon f_*\mathcal{O}_Y(P) \to f_*\mathcal{O}_Y(P)$ is the zero morphism.
This implies $f_*\mathcal{O}_Y(P) = 0$, hence the conclusion of Theorem \ref{thm:sing}
trivially holds.
\begin{step}[cf.\ {\cite[Theorem 1.7, Step 1]{PS14}}]
\label{step:pluripair1}
We can reduce to the case where
\begin{enumerate}[label=$(\alph*)$,ref=\ensuremath{\alph*}]
\item $Y$ is smooth;\label{step:pluripair1i}
\item $\Delta$ has simple normal crossings support and coefficients in
$(0,1]$; and\label{step:pluripair1ii}
\item The image of \eqref{eq:pluripaircounit}
is of the form $\mathcal{O}_Y(P-E)$ for a
divisor $E$ such that $\Delta+E$ has simple normal crossings
support.\label{step:pluripair1iii}
\end{enumerate}
\end{step}
A priori, the image of the counit \eqref{eq:pluripaircounit} is of the form
$\mathfrak{b} \cdot \mathcal{O}_Y(P)$, where $\mathfrak{b} \subseteq \mathcal{O}_Y$ is the
\textsl{relative base ideal} of $\mathcal{O}_Y(P)$.
By Step \ref{step:pluripair0}, this ideal is nonzero, and so consider
a simultaneous log resolution $\mu\colon \widetilde{Y} \to Y$
of $\mathfrak{b}$ and $(Y,\Delta)$.
The image of the counit morphism
\begin{equation}\label{eq:pluripaircounitres}
\mu^*f^*f_*\mathcal{O}_Y(P) \longrightarrow \mu^*\mathcal{O}_Y(P) = \mathcal{O}_{\widetilde{Y}}(\mu^*P)
\end{equation}
is the sheaf $\mathcal{O}_{\widetilde{Y}}(\mu^*P-E')$ \cite[Generalization 9.1.17]{Laz04b}.
\par We then apply Lemma \ref{lem:logres} to $\mu$. With the notation of the lemma we
note that on $\widetilde{Y}$ the counit morphism \eqref{eq:pluripaircounitres} becomes the surjective
morphism
\[
(f \circ \mu)^*(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P})
\mathrel{\text{\tikz \draw [-cm double to] (0,0) (0.05em,0.5ex) -- (1.525em,0.5ex);}\hspace{0.05em}} \mathcal{O}_{\widetilde{Y}}(\mu^*P-E') =
\mathcal{O}_{\widetilde{Y}}\bigl(\widetilde{P} - (\widetilde{P} - \mu^*P) -
E'\bigr).
\]
Setting $E \coloneqq (\widetilde{P} - \mu^*P) + E'$, we see that
$(\ref{step:pluripair1iii})$ holds for $\widetilde{P}$.
\par Finally, Theorem \ref{thm:sing} for
$(\widetilde{Y},\widetilde{\Delta})$ and $\widetilde{P}$ implies that
\[
(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P})\otimes_{\mathcal{O}_X}
\mathcal{L}^{\otimes\ell} \simeq f_*\mathcal{O}_Y(P) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}
\]
is generated by global sections on some open set $U$ for $\ell \ge
k(n^2 + 1)$.
This concludes Step \ref{step:pluripair1}.
\medskip
\par Henceforth, we work in the situation of Step \ref{step:pluripair1}.
Before moving on to Step \ref{step:pluripair2}, we fix some notation.
Let $L$ denote the divisor class of $\mathcal{L}$.
Let $U$ be the subset of $U(f,\Delta+E)$ where
\[
\varepsilon(\mathcal{L};x) > \frac{1}{n+\frac{1}{kn}}
\]
for every $x \in U$, which is nonempty by Notation
$\ref{notn:goodopen}(\ref{notn:goodopena})$ and Theorem
\ref{thm:ekl95}.
\par We set $m$ to be the smallest positive integer such that
$f_*\mathcal{O}_Y(P) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes m}$ is globally generated on $U$.
This integer $m$ exists by \cite[Proposition 2.7]{Kur13} since
$U \cap \Bsp(L) = \emptyset$ by Corollary \ref{cor:seshadribs}.
\par Finally, we set
$B \coloneqq \Bs\lvert P - E + mf^*L \rvert_{\mathrm{red}} \subsetneq Y$ and note that $B\cap f^{-1}(U) = \emptyset$.
\begin{step}
\label{step:pluripair2}
Reducing the problem to $k = 1$ and a suitable pair.
\end{step}
\par From now on, fix a closed point $x \in U$.
\par The surjection
\[
f^*f_*\mathcal{O}_Y(P) \otimes_{\mathcal{O}_Y} f^*\mathcal{L}^{\otimes m} \mathrel{\text{\tikz \draw [-cm double to] (0,0) (0.05em,0.5ex) -- (1.525em,0.5ex);}\hspace{0.05em}}
\mathcal{O}_Y(P-E) \otimes_{\mathcal{O}_Y} f^*\mathcal{L}^{\otimes m}
\]
implies that $\mathcal{O}_Y(P-E) \otimes_{\mathcal{O}_Y} f^*\mathcal{L}^{\otimes m}$ is
globally generated on $f^{-1}(U)$.
Choose a general member
\[
\mathfrak{D}_x \in \lvert P-E+mf^*L \rvert.
\]
By Bertini's theorem \cite[Corollary III.10.9 and Remark
III.10.9.3]{Har77}, we
may assume that $\mathfrak{D}_x$ is smooth away from the base locus $B$ of the
linear system $\lvert P-E+mf^*L \rvert$.
We may also assume that $\mathfrak{D}_x$ intersects the fiber $Y_x$
transversely, and the support of $\Delta$ and $E$ transversely away
from $B$ \cite[Lemma 4.1.11]{Laz04a}.
We then have
\begin{align*}
k(K_Y+\Delta) &\sim_\mathbb{R} K_Y+\Delta + \frac{k-1}{k}\mathfrak{D}_x +
\frac{k-1}{k}E - \frac{k-1}{k} mf^*L,
\intertext{hence for every integer $\ell$,}
k(K_Y+\Delta) + \ell f^*L &\sim_\mathbb{R} K_Y+\Delta +
\frac{k-1}{k}\mathfrak{D}_x + \frac{k-1}{k}E + \biggl( \ell - \frac{k-1}{k}
m\biggr)f^*L.
\end{align*}
\par We now adjust the coefficients of $\Delta$ and $E$ so they do not share
any components.
Applying Lemma \ref{lem:numeric} to $c = \frac{k-1}{k}$, we see that
there exists an effective divisor $E' \preceq E$ such that
\[
\Delta' \coloneqq \Delta + \frac{k-1}{k}E - E',
\]
is effective with simple
normal crossings support, with components intersecting $Y_x$
transversely, and with coefficients in $(0,1]$.
We can then write
\begin{equation}\label{eq:pluripairstep2decomp}
P - E' + \ell f^*L \sim_\mathbb{R} K_Y + \Delta'
+ \frac{k-1}{k}\mathfrak{D}_x + \biggl(\ell - \frac{k-1}{k}
m\biggr)f^*L.
\end{equation}
\begin{step}\label{step:pluripair3}
Applying Theorem \ref{thm:dc} to obtain global generation.
\end{step}
By Lemma \ref{lem:ps14trick}, we have $f_*\mathcal{O}_Y(P-E') \simeq f_*\mathcal{O}_Y(P)$.
It therefore suffices to show that
\begin{equation}\label{eq:pluripairstep3sheaf}
f_*\mathcal{O}_Y(P - E') \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}
\end{equation}
is globally generated at $x$.
We first modify $\mathfrak{D}_x$ to allow us to apply Theorem \ref{thm:dc}.
By Theorem \ref{citedthm:Szabo}, there exists a common log resolution
$\mu\colon \widetilde{Y} \to Y$
for $\mathfrak{D}_x$ and $(Y,\Delta)$ that is an isomorphism
away from $B \subsetneq Y$.
We then write
\[
\mu^*\mathfrak{D}_x = D + F, \qquad \mu^*\Delta' = \mu_*^{-1}\Delta' + F_1
\]
where $D$ is a smooth prime divisor intersecting the fiber over $x$ transversely and
$F,F_1$ are supported on $\mu^{-1}(B)$.
Define
\[
F' \coloneqq \biggl\lfloor \frac{k-1}{k} F + F_1 \biggr\rfloor, \qquad
\widetilde{\Delta} \coloneqq \mu^*\Delta' + \frac{k-1}{k}\mu^*\mathfrak{D}_x
- F', \qquad
\widetilde{P} \coloneqq \mu^*P + K_{\widetilde{Y}/Y}.
\]
Note that $\widetilde{\Delta}$ has simple normal crossings support and
coefficients in $(0,1]$ by assumption on the log resolution and by definition
of $F'$.
Moreover, the support of $\widetilde{\Delta}$ intersects the fiber over $x$
transversely.
Pulling back the decomposition in \eqref{eq:pluripairstep2decomp} and adding
$K_{\widetilde{Y}/Y} - F'$ yields
\begin{align}
\widetilde{P} - \mu^*E' - F' + \ell (f \circ \mu)^*L
&\sim_\mathbb{R} K_{\widetilde{Y}}
+ \mu^*\Delta' + \frac{k-1}{k}\mu^*\mathfrak{D}_x - F' + \biggl( \ell -
\frac{k-1}{k}m \biggr)(f \circ \mu)^*L\nonumber\\
&\sim_\mathbb{R} K_{\widetilde{Y}} + \widetilde{\Delta} + \biggl( \ell -
\frac{k-1}{k}m \biggr)(f \circ \mu)^*L.\label{eq:pluripairlastrequiv}
\end{align}
\par We now claim that it suffices to show
\begin{equation}\label{eq:pluripairlastsheaf}
(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}
(\widetilde{P} - \mu^*E' - F' )
\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}
\end{equation}
is globally generated at $x$.
Consider the commutative diagram
\begin{equation*}
\setbox0=\hbox\bgroup\ignorespaces
\begin{tikzcd}
H^0\bigl(X,(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}
(\widetilde{P} - \mu^*E' - F' )
\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \dar[hook] \rar &
\bigl((f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}
(\widetilde{P} - \mu^*E' - F' ) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \otimes
\kappa(x)\arrow[hook]{d}[sloped,above]{\sim}\\
H^0\bigl(X,(f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}
(\widetilde{P} - \mu^*E')
\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \rar\arrow{d}[sloped,above]{\sim}
& \bigl((f \circ \mu)_*\mathcal{O}_{\widetilde{Y}}
(\widetilde{P} - \mu^*E' ) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \otimes
\kappa(x)\arrow{d}[sloped,above]{\sim}\\
H^0\bigl(X,f_*\bigl(\mu_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P}) \otimes
\mathcal{O}_Y(-E')\bigr) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr)
\arrow{d}[sloped,above]{\sim}\rar &
\bigl(f_*\bigl(\mu_*\mathcal{O}_{\widetilde{Y}}(\widetilde{P}) \otimes
\mathcal{O}_Y(-E')\bigr) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \otimes \kappa(x)
\arrow{d}[sloped,above]{\sim}\\
H^0\bigl(X,f_*\mathcal{O}_Y(P-E') \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell}\bigr) \rar
& f_*\mathcal{O}_Y(P-E') \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes\ell} \otimes \kappa(x)
\end{tikzcd}
\unskip\egroup\noindent\makebox[\textwidth]{\box0}
\end{equation*}
where the top right isomorphism holds since $F'$ is supported away
from $(f \circ \mu)^{-1}(U)$, hence the stalks of the two sheaves are
isomorphic, and the other isomorphisms follow from
the projection formula and the fact that $K_{\widetilde{Y}/Y}$ is
$\mu$-exceptional.
If the top horizontal arrow is surjective, then the commutativity of the
diagram implies that the bottom horizontal arrow is also surjective, i.e.,
the sheaf in \eqref{eq:pluripairstep3sheaf} is globally generated at $x$.
\par We now apply Theorem \ref{thm:dc} to the decomposition
\eqref{eq:pluripairlastrequiv} to
see that the sheaf in \eqref{eq:pluripairlastsheaf} is globally generated at
$x$ for all
\[
\ell - \frac{k-1}{k}m > \frac{n}{\varepsilon(\mathcal{L};x)}.
\]
By choice of $U$, we know that $\varepsilon(\mathcal{L};x) >
\frac{1}{n+\frac{1}{kn}}$ at all $x \in U$, and so
by applying the same argument so far to all $x \in U$, we see
$f_*\mathcal{O}_Y(P) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes \ell}$ is globally generated on $U$
for all
\[
\ell > n\biggl(n+\frac{1}{kn}\biggr) + \frac{k-1}{k}m = n^2 +
\frac{1}{k} + \frac{k-1}{k}m.
\]
By minimality of $m$, we know that
\[
m \le \biggl\lfloor n^2 + \frac{1}{k} + \frac{k-1}{k}m \biggr\rfloor
+ 1 \le n^2 + \frac{k-1}{k}m + 1.
\]
The inequality between the leftmost and rightmost quantities is equivalent to
$m \le k(n^2 + 1)$,
that is, $f_*\mathcal{O}_Y(P) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes \ell}$ is globally generated
on $U$ for $\ell \ge k(n^2 + 1)$.
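For clarity, the equivalence invoked in the last step is the elementary rearrangement
\[
m \le n^2 + \frac{k-1}{k}m + 1
\;\Longleftrightarrow\;
m - \frac{k-1}{k}m \le n^2 + 1
\;\Longleftrightarrow\;
\frac{m}{k} \le n^2 + 1
\;\Longleftrightarrow\;
m \le k(n^2 + 1).
\]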
\end{proof}
\subsection{Proof of Theorem \ref{thm:pluri}}\label{pf:pluri}
Restricting to $X$ smooth and $\mathcal{L}$ ample, we now show a slightly better bound.
The strategy of Theorem \ref{thm:pluri} is the same as that for Theorem
\ref{thm:sing}: We first reduce to
the case when $Y$ is smooth and $\Delta$ has simple normal crossing support.
Then, using twisted weak positivity this time, we maneuver to a situation in
which we can apply Theorem \ref{thm:dc} or \cite[Proposition 1.2]{Dut17}.
\begin{proof}[Proof of Theorem \ref{thm:pluri}]
We begin with \textbf{Step \ref{step:pluripair0}} and \textbf{Step \ref{step:pluripair1}} of the proof
of Theorem \ref{thm:sing} to reduce to a situation where $Y$ is smooth and $\Delta$ has simple
normal crossing support. Following Step \ref{step:pluripair1}, we also assume that there exists an effective
divisor $E$ with simple normal crossing support such that
\begin{equation}\label{eq:counit}
f^*f_*\mathcal{O}_Y(P) \longrightarrow \mathcal{O}_Y(P-E)
\end{equation} is surjective.
\setcounter{step}{1}
\begin{step}\label{step:pluri2}
Reducing the problem to $k = 1$ and a suitable pair.
\end{step}
Unless otherwise mentioned, throughout this proof we fix $U$ to denote
the intersection of $U(f,\Delta+E)$ with the open set over which $f_*\mathcal{O}_Y(P)$ is locally free.
In the diagram
\begin{equation*}
\begin{tikzcd}
f^*\bigl(\bigl(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\bigr)^{\otimes b}\bigr)
\arrow[twoheadrightarrow]{r}\ar[d] &
\mathcal{O}_{Y}\big(bk(K_{Y/X}+\Delta)-bE\bigr)\dar[equals]\\
f^*\bigl(\bigl(f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))\bigr)^{[b]}\bigr)
\ar[r,dashed] & \mathcal{O}_{Y}\big(bk(K_{Y/X}+\Delta)-bE\big)
\end{tikzcd}
\end{equation*}
the dashed map exists making the diagram commute.
Indeed, the map exists over the locus $X_1$ where
$f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ is locally free. Since $X_1$ has a complement of
codimension $\ge 2$,
and the bottom right sheaf is locally free, we can extend the dashed map to all of $X$ (Corollary \ref{cor:extendsections}).
Now the top arrow is the surjective map obtained by taking the $b$th tensor
power of \eqref{eq:counit}.
Then the commutativity of the diagram implies that the bottom
arrow is also surjective. By Theorem \ref{thm:wpnc} we know that over $U$,
\[f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)^{[b]} \otimes \mathcal{L}^{\otimes b}\] is generated by global sections
for $b\gg 1$. Therefore so is
$\mathcal{O}_{Y}\big(bk(K_{Y/X}+\Delta)-bE\big)\otimes f^*\mathcal{L}^{\otimes b}$ over $f^{-1}(U)$.
We now fix a point $x \in U$.
\par Letting $L$ denote a Cartier divisor class of $\mathcal{L}$, we can apply Bertini's
theorem to choose a divisor
\[
D\in \big|bk(K_{Y/X}+\Delta)-bE+bf^*L\big|
\]
such that on $f^{-1}(U)$, $D$ is smooth, $D+\Delta+E$ has
simple normal crossing support, $D$ is not contained in the support of
$\Delta+E$, and $D$ intersects the fiber over $x$ transversely. Then write
\[\frac{1}{b}D \sim_{\mathbb{R}} k(K_{Y/X}+\Delta)-E+f^*L.\]
Multiplying both sides by $\frac{k-1}{k}$, and then adding
$K_{Y/X}+\Delta+\frac{k-1}{k}E$, we have
\begin{equation}\label{red}K_{Y/X}+\Delta+\frac{k-1}{kb}D +\frac{k-1}{k}E \sim_{\mathbb{R}} k(K_{Y/X}+\Delta) +\frac{k-1}{k}f^*L.\end{equation}
\par Now applying Lemma \ref{lem:numeric} for $c=\frac{k-1}{k}$, there exists an effective divisor
$E' \preceq E$ such that
\[
\Delta' \coloneqq \Delta + \frac{k-1}{k}E - E'
\]
has coefficients in $(0,1]$.
Subtracting $E'+\frac{k-1}{k}f^*L$ from both sides in \eqref{red}, we can therefore write
\[
K_{Y/X}+\frac{k-1}{kb}D + \Delta'-\frac{k-1}{k}f^*L \sim_{\mathbb{R}}
k(K_{Y/X}+\Delta)-E'.
\]
Let us now write $H$ both for the line bundle $\omega_X\otimes \mathcal{L}^{\otimes (n+1)}$ and for a divisor class representing it.
For a positive integer $\ell$, we add $f^*K_X+(k-1)f^*H+(\ell-(k-1)(n+1))f^*L$ to both sides to obtain
\begin{equation}\label{red1}K_{Y}+\frac{k-1}{kb}D + \Delta'+(k-1)f^*H+\left(\ell-\frac{k-1}{k}-(k-1)(n+1)\right)f^*L\sim_{\mathbb{R}} P - E'+\ell f^*L.\end{equation}
As noted earlier $E'\preceq E$ is an effective Cartier divisor and therefore $f_*\mathcal{O}_Y(P-E')\simeq f_*\mathcal{O}_Y(P)$
by Lemma \ref{lem:ps14trick}. Moreover since the right hand side of \eqref{red1} is a Cartier divisor, it is enough to
tackle the generation of the left side.
\begin{step}\label{step:pluri3}
Applying Theorem \ref{thm:dc} to obtain global generation.
\end{step}
First, we need to modify $D$ to be able to apply Theorem \ref{thm:dc}.
\par Let $\mu\colon Y'\to Y$ be a log resolution of $\frac{k-1}{kb}D + \Delta'$ as in Theorem \ref{citedthm:Szabo}.
Such a modification is an isomorphism
over $f^{-1}(U)$ by choice of $D$.
Write
\[
\mu^*D = \widetilde{D}+F, \qquad \mu^*\Delta' = \widetilde{\Delta}' + F_1
\]
where $\widetilde{D}$ is the strict transform
of the components of $D$ that lie above $U$
and $\widetilde{\Delta}'$ is the strict transform of $\Delta'$.
Note that both $F$ and $F_1$ has support outside of $f^{-1}(U)$.
Denote
\[F'\coloneqq \left\lfloor\frac{k-1}{kb}F+F_1\right\rfloor, \qquad
\widetilde{\Delta} \coloneqq \frac{k-1}{kb}\mu^*D+\mu^*\Delta'-F', \qquad \widetilde{P}\coloneqq \mu^*P+K_{Y'/Y}.\]
By definition $\widetilde{\Delta}$ has coefficients in $(0,1]$. Now, pulling back \eqref{red1} and adding $K_{Y'/Y}-F'$ to both sides, we can rewrite \eqref{red1} as:
\begin{equation*}
K_{Y'}+\widetilde{\Delta}+ (k-1)\mu^*f^*H+\left(\ell-\frac{k-1}{k}-(k-1)(n+1)\right) \mu^*f^*L \sim_{\mathbb{R}} \widetilde{P} - \mu^*E'+\ell\mu^*f^*L - F'.
\end{equation*}
This can be compared to \eqref{eq:pluripairlastrequiv}.
By the arguments following \eqref{eq:pluripairlastrequiv}
we can say that it is enough to show global generation for the pushforward of the left side
under $f\circ\mu$
to deduce desired global generation for $f_*\mathcal{O}_Y(P)\otimes \mathcal{L}^{\otimes \ell}$ for suitable $\ell$.
To do so, we note once again that from Mori theory it follows that $H=\omega_X\otimes\mathcal{L}^{\otimes n+1}$ is
semiample. Therefore $(k-1)\mu^*f^*H$ is also semiample.
Applying Bertini's theorem one more time, we can
pick an effective fractional $\mathbb{Q}$-divisor $D' \sim_{\mathbb{Q}} (k-1)\mu^*f^*H$ with smooth support,
such that its support intersects the components of $\widetilde{\Delta}$ and the fiber over $x$ transversely.
We can now rewrite the linear equivalence as
\begin{equation}\label{newred}
K_{Y'}+\widetilde{\Delta}+D'+\left(\ell-\frac{k-1}{k}-(k-1)(n+1)\right)\mu^*f^*L \sim_\mathbb{R}
\widetilde{P} -\mu^*E'+ \ell \mu^*f^*L - F' .
\end{equation}
Note that $\widetilde{\Delta}+D'$ on the left-hand side of \eqref{newred}
has simple normal crossing support with coefficients in $(0,1]$ and $\Supp(\widetilde{\Delta}+D')$
intersects the fiber over $x$ transversely.
Thus, we can apply Theorem \ref{thm:dc} on the left hand side
to conclude that
\[f_*\mathcal{O}_Y(P)\otimes \mathcal{L}^{\otimes \ell}\] is generated by global sections over $U$
for all $\ell> \frac{n}{\varepsilon(L;x)}+k(n+1)-n-\frac{1}{k}$.
After possibly shrinking $U$ we assume that
$\varepsilon(\mathcal{L};x)> \frac{1}{n+\frac{1}{n(k+1)}}$ for all points $x\in U$, and hence
\[\ell> n\left(n+\frac{1}{n(k+1)}\right)+k(n+1)-n-\frac{1}{k} = k(n+1)+n^2-n -\frac{1}{k(k+1)}.\]
Therefore, $\displaystyle \ell \ge k(n+1)+n^2-n$. This
proves $(\ref{thm:e1})$.
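For clarity, the numerical simplification above uses
\[
n\biggl(n+\frac{1}{n(k+1)}\biggr)-\frac{1}{k}
\,=\, n^2 + \frac{1}{k+1} - \frac{1}{k}
\,=\, n^2 - \frac{1}{k(k+1)},
\]
so the bound reads $\ell > k(n+1)+n^2-n-\frac{1}{k(k+1)}$; since $0<\frac{1}{k(k+1)}\leq \frac{1}{2}$ and $\ell$ is an integer, the smallest admissible value is $\ell = k(n+1)+n^2-n$.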
\begin{step}The case of klt $\mathbb{Q}$-pairs.
\end{step}
When $\Delta$ is a klt $\mathbb{Q}$-pair, we apply \cite[Proposition 1.2]{Dut17}
on the left hand side of
\eqref{newred}.
To do so,
we first trace the construction of $\widetilde{\Delta}+D'$ to note that its coefficients lie in $(0,1)$.
We then apply the Proposition with $H= \frac{1}{k}\mu^*f^*L$ and $A=(\ell -k(n+1)+n)\mu^*f^*L$
to obtain global generation on $U$ for all
$\ell> k(n+1)+\frac{n^2-n}{2}$. This proves $(\ref{thm:e2})$.
\end{proof}
We summarize below the locus of global generation for Theorems \ref{thm:sing} and \ref{thm:pluri}:
\begin{remark}\label{rmk:loci}
When $Y$ is smooth and the relative base locus of $P$ is an effective divisor
$E$ such that $\Delta+E$ has simple normal crossings support,
the open set $U$
for Theorem \ref{thm:sing} contains the largest open subset of $U(f,\Delta+E)$
such that $\varepsilon(\mathcal{L};x)> (n+\frac{1}{kn})^{-1}$
and for
Theorem \ref{thm:pluri}$(\ref{thm:e1})$, $U$ contains the intersection of $U(f,\Delta+E)$, the locus where $f_*\mathcal{O}_Y(P)$ is
locally free, and the open set where
$\varepsilon(\mathcal{L};x)>\bigl(n+\frac{1}{n(k+1)}\bigr)^{-1}$. Finally, for Theorem \ref{thm:pluri}$(\ref{thm:e2})$,
$U$ contains the intersection of $U(f,\Delta+E)$ and the locus where
$f_*\mathcal{O}_Y(P)$ is locally free.
\end{remark}
\begin{remark}\label{rem:lowdim}
Using the better bounds in Remark \ref{rem:betterbounds} for low dimensions ($n = 2,3$),
one can show that the lower bound $\ell \ge k(n^2-n+1)$ in Theorem
\ref{thm:sing} and $\ell \ge k(n+1)+n^2-2n$ in Theorem \ref{thm:pluri}
suffice when $X$ is smooth and $\mathcal{L}$ is ample.
In particular, Conjecture \ref{conj:ps14} for generic global generation holds
for $n=2$.
In the klt case, the conjectured lower bound in fact holds when $n\leq 4$ as was
observed in \cite{Dut17}.
\par If the conjectured lower bound for Seshadri constants in Remark \ref{rem:betterbounds} holds,
then Theorem \ref{thm:sing} would hold for the lower bound $\ell \ge k(n+1)$,
thereby proving this generic version of Conjecture
\ref{conj:ps14} in higher dimensions for big and nef line bundles.
\end{remark}
\subsection{An Effective Vanishing Theorem}
With the help of our effective twisted weak positivity, we improve the effective vanishing statement in
\cite[Theorem 3.1]{Dut17}:
\begin{theorem}
\label{thm:effvanish}
Let $f\colon Y \to X$ be a smooth fibration of smooth projective varieties with
$\dim X=n$.
Let $\Delta$ be a $\mathbb{Q}$-divisor with simple normal crossing support with coefficients in $[0,1)$,
such that every stratum of $(Y,\Delta)$
is smooth and dominant over $X$, and let $\mathcal{L}$ be an ample
line bundle on $X$.
Assume also that for some fixed integer $k\geq 1$,
$k(K_Y+\Delta)$ is Cartier and
$\mathcal{O}_Y(k(K_Y+\Delta))$ is relatively
base point free.
Then, for every $i > 0$ and all $\ell \geq k(n+1)-n$, we have
\[
H^i\bigl(X, f_* \mathcal{O}_Y\bigl(k(K_Y+\Delta)\bigr)\otimes \mathcal{L}^{\otimes \ell}\bigr)=0.
\]
\par
Moreover, if $K_X$ is semiample, for every $i > 0$ and every ample line bundle $\mathcal{L}$ we have
\[
H^i\bigl(X, f_* \mathcal{O}_Y\bigl(k(K_Y+\Delta)\bigr)\otimes \mathcal{L}\bigr)=0.
\]
\end{theorem}
\begin{proof}
The hypothesis on $f$ and $\Delta$ ensures invariance of log-plurigenera, as noted in Remark \ref{rem:hmx}, hence $f_*\mathcal{O}_Y(k(K_{Y/X}+\Delta))$ is locally free. This
means $U(f,\Delta) = X$. Furthermore, by the description of the open set in the proof of Theorem \ref{thm:wp}, we have that
there exists a positive
integer $b$ such that
\[
\bigl(f_*\mathcal{O}_Y\bigl(k(K_{Y/X}+\Delta)\bigr)\bigr)^{[b]}\otimes \mathcal{L}^{\otimes b}
\]
is globally generated everywhere on $X$. Now since $\mathcal{O}_Y(k(K_Y+\Delta))$ is relatively base point free, we
can choose a divisor $\frac{1}{b}D\sim_{\mathbb{R}} k(K_{Y/X}+\Delta)+f^*L$,
satisfying the Bertini-type properties as in Step \ref{step:pluri2} of Theorem \ref{thm:pluri}.
Denote $H\coloneqq K_X+(n+1)L$,
which is semiample by Mori's cone theorem and the base point free theorem. As before, we then write
\[
K_Y+\Delta+\frac{k-1}{kb}D+(k-1)f^*H+\biggl(\ell -
\frac{k-1}{k}-(k-1)(n+1)\biggr)f^*L \sim_\mathbb{R} k(K_Y+\Delta)+\ell f^*L.
\]
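For the reader's convenience, the displayed equivalence can be checked directly: multiplying $\frac{1}{b}D\sim_{\mathbb{R}} k(K_{Y/X}+\Delta)+f^*L$ by $\frac{k-1}{k}$ and substituting, together with $f^*H=f^*K_X+(n+1)f^*L$, the left-hand side is $\mathbb{R}$-linearly equivalent to
\[
K_Y+\Delta+(k-1)(K_{Y/X}+\Delta)+(k-1)f^*K_X+\ell f^*L \,=\, k(K_Y+\Delta)+\ell f^*L.
\]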
Since the divisor $\Delta+\frac{k-1}{kb}D$ is klt and $(k-1)H+\bigl(\ell -
\frac{k-1}{k}-(k-1)(n+1)\bigr)L$ is ample for all $\ell\geq k(n+1)-n$, by Koll\'ar's vanishing theorem \cite[Theorem 10.19]{Kol95}
we obtain that
\[H^i\bigl(X, f_* \mathcal{O}_Y\bigl(k(K_Y+\Delta)\bigr)\otimes \mathcal{L}^{\otimes
\ell}\bigr)=0\]
for all $\ell \geq k(n+1)-n$ and for all $i>0$.
Moreover, when $K_X$ is already semiample, we take $H=K_X$. In this case, the linear
equivalence above looks as follows:
\[
K_Y+\Delta+\frac{k-1}{kb}D+(k-1)f^*H+ \bigl(\ell - \frac{k-1}{k}\bigr)f^*L \sim_\mathbb{R} k(K_Y+\Delta)+\ell f^*L.
\]
Then, we obtain the desired vanishing for all $\ell\geq 1$ and $i>0$.
\end{proof}
\section{Introduction}
Let $X$ be a smooth irreducible complex projective curve of genus $g$, with
$g\,\geq\, 2$. Fix an integer $r\,\geq\, 2$ and a line bundle $L$ over $X$ of
degree $d$ such that $r$ and $d$ are coprime. Let $M_{r,L}$ denote the moduli
space of isomorphism classes of stable bundles of rank $r$ with
$\wedge^{r}E\,\simeq\, L$. A vector bundle $V$ over $X\times M_{r,L}$ is called a
{\it Poincar\'e bundle} if its restriction $V\vert_{X\times \{[E]\}}$ is
isomorphic to $E$ for all closed points $[E]\,\in\, M_{r,L}$. It is known that a
Poincar\'e bundle exists; moreover, any two of them differ by tensoring with a
line bundle pulled back from $M_{r,L}$. Balaji, Brambila-Paz and Newstead proved
in \cite{BBN} that any such Poincar\'e bundle is stable with respect to any ample
divisor in $X \times M_{r,L}$. Recently, Biswas, Gomez and Hoffman studied in
\cite{BGH} the similar question for the moduli space of principal $G$-bundles.
In this short note we consider certain moduli spaces of stable parabolic bundles on
$X$. Let us fix a finite set $D$ of $n$ closed points in $X$. We denote by
$M_{\alpha}(\Lambda)$ the moduli space of stable parabolic bundles of rank $r$ on
$X$ with fixed determinant $\Lambda$ having full flags at each points of $D$ and
rational parabolic weights $\alpha\,=\,\{\alpha_{j}^i\}$, $1\,\leq\, j \,\leq\, r$
and $1 \,\leq\, i \,\leq\, n$. In this case it is known that there
exists a vector bundle $\mathcal{U}_{\alpha}$ over $X\times M_{\alpha}(\Lambda)$
which has a natural parabolic structure over the divisor $D\times
M_{\alpha}(\Lambda)$, and moreover, its restriction to each closed point $[E_*]$
is isomorphic to $E_*$ as a parabolic bundle \cite{BD}. Any two such bundles differ
by tensoring with a line bundle pulled back from $M_{\alpha}(\Lambda)$. We call
such bundles {\it Poincar\'e parabolic bundles}. So, it is natural to ask whether
these bundles are parabolic (slope) stable.
We prove the following:
\begin{theorem}
Let $\mathcal{U}$ be a Poincar\'e parabolic bundle over $X\times M_{\alpha}(\Lambda)$.
Then $\mathcal{U}$ is a parabolic (slope) stable bundle with respect to a
natural ample divisor.
\end{theorem}
We adopt the strategy of proof in \cite{BBN} in the given context.
\section{Preliminaries}
Let $X$ be an irreducible smooth complex projective curve of genus $g \,\geq\, 2$.
Fix $n$ distinct points $x_{1},\, \cdots\, ,x_{n}$ on $X$, and denote the
divisor $x_{1}+\cdots +x_{n}$ on $X$ by $D$. Let $E$ be a holomorphic vector bundle on $X$ of
rank $r$.
A {\it quasi-parabolic structure} over $E$ is a strictly decreasing
filtration of linear subspaces
\[E_{_{x_i}}=F_i^1 \supset F_i^2 \supset \cdots \supset F_i^{k_i}\supset
F_i^{{k_i}+1} = 0
\]
for every $x_i\in D$. We set \[r_j^i \,:= \,\mbox{dim}F_i^j - \mbox{dim}F_i^{j+1}\, .\]
The integer $k_i$ is called the {\it length} of the flag and the sequence $(r_1^{i},
r_2^{i},\cdots, r_{k_i}^{i})$ is called the {\it type} of the flag at $x_i$. A {\it
parabolic structure} on $E$ over the divisor $D$ is a quasi-parabolic structure
as above together with a sequence of real numbers
\[0\leq \alpha_1^i < \alpha_2^i < \cdots < \alpha_{k_i}^i < 1.\]
The {\it parabolic degree} of $E$ is defined to be
\[\mbox{par-deg}(E):=\mbox{deg}(E)+\sum_{x_i\in D}\sum_{j=1}^{k_i}\alpha_j^ir_j^i\]
and the {\it parabolic slope} of $E$ is
\[\mbox{par-}\mu(E) :=\frac{\mbox{par-deg}(E)}{r}.\]
(See \cite{MS}.)
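To illustrate these definitions (the example is not used later): if $E$ has rank $2$ and degree $d$, with a full flag at a single parabolic point $x_1$ (so $n=1$, $k_1=2$ and $r_1^1=r_2^1=1$) and weights $0\leq \alpha_1^1<\alpha_2^1<1$, then
\[\mbox{par-deg}(E)\,=\,d+\alpha_1^1+\alpha_2^1, \qquad \mbox{par-}\mu(E)\,=\,\frac{d+\alpha_1^1+\alpha_2^1}{2}.\]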
For any subbundle $F\subseteq E$, there exists an induced parabolic structure on $F$
whose quasi-parabolic filtration over $x_i$ is given by the distinct subspaces in
\[F_{x_i}=F_i^1 \cap F_{x_i} \supset F_i^2 \cap F_{x_i} \supset \cdots \supset F_i^{k_0}\cap F_{x_i}\supset 0\, ,\]
where $k_0 := {\rm max}\{j\in \{1,\cdots, k_i\} \mid F_i^j\cap F_{x_i}\neq 0\}$;
the parabolic weight of $F_i^j \cap F_{x_i}$ is the maximum of all $\alpha_\ell^i$ such that
$F_i^j \cap F_{x_i}\,=\, F_i^\ell \cap F_{x_i}$.
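In the simplest case, if $F\subset E$ is a line subbundle, the induced quasi-parabolic structure at $x_i$ consists of the single subspace $F_{x_i}$, and its weight is $\alpha_{j_0}^i$, where $j_0$ is the largest index with $F_{x_i}\subseteq F_i^{j_0}$; this is immediate from the definition above.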
A parabolic vector bundle $E$ with parabolic structure over $D$ is said to be
{\it stable} (respectively, {\it semistable}) if for every
subbundle $0\neq F\subsetneq E$ equipped with the induced parabolic structure, we have
\[ \mbox{par-}\mu(F) \,<\, \mbox{par-}\mu(E)\ \
\text{(respectively,}\ \mbox{par-}\mu(F) \,\leq\,\mbox{par-}\mu(E)\text{)}\, .\]
\subsection{Poincar\'e parabolic bundle}
Fix integers $r>1$ and $d$ and for each $i=1,2,\cdots, n$ a sequence of positive
integers $\{r_j^i\}_{j=1}^{k_i}$ such that $\sum_{j=1}^{k_i}r_j^i =
r$ for each $i$. Then the coarse moduli space $M_{X}(d,r, \{\alpha_j^i\},\{r_j^i\})$ of
semistable parabolic vector bundles of rank $r$, degree $d$, flag types
$\{r_j^i\}$ and parabolic weights $\{\alpha_j^i\}_{j=1}^{k_i}$ at $x_i \,\in\, D$, $1
\,\leq\, i \,\leq\, n$, is a normal projective variety \cite{MS}. The open subvariety
$M_{X}(d,r, \{\alpha_j^i\},\{r_j^i\})^s$ of it consisting of stable parabolic bundles is
smooth.
For a scheme $S$, let $\pi_X\,:\, X\times S\,\longrightarrow\, X$ and $\pi_S
\,:\, X\times S\,\longrightarrow\, S$ be the natural projections.
For a vector bundle $U$ over $X \times S$, and $s\,\in\, S$, set $U_s \,:= \, U|_{X \times \{s\}}$.
Given a flag type $m_i=(r_1,\cdots,r_{k_i})$, $1\, \leq\, i\, \leq\, n$, with $\sum_{j=1}^{{k_i}}r_j\,=\,r$, define $\mathcal{F}_{m_i}$ to be the
variety of flags of type $m_i$. Furthermore, for a vector bundle $U\,\longrightarrow\, S$ of rank
$r$, let $\mathcal{F}_{m_i}(U)\,\longrightarrow\, S$ be the bundle of flags of type $m_i$.
For each $x_i\,\in\, D$ we fix the flag type $m_i\,=\,(r_1^i, r_2^i, \cdots, r_{k_i}^i)$. A {\it family}
of quasi-parabolic vector bundles parametrized by a scheme $S$ is defined to be a vector bundle
$U$ over $X\times S$ together with sections $\phi_{x_i}\,:\,S \,\longrightarrow\,
\mathcal{F}_{m_i}(U|_{x_{i}\times
S})$, $1\,\leq\, i \,\leq\, n$. Note that the section $\phi_{x_i}$ corresponds to a flag of subbundles of
$U|_{x_{i}\times S}$ with flag type $m_i$ for each $i$. A family of parabolic bundles is given
by associating weights $\{\alpha_j^i\}$ to each flag of subbundles over $x_{i}\times S$, $x_i\in
D$. We denote the family of parabolic bundles by $U_*=(U,\phi,\alpha)$ and by $U_{s,*}$ the
parabolic bundle $(U_s,\phi_s,\alpha)$ above $s \in S$.
It is known that if the elements of the set $\{d,r_j^i\mid 1\leq i \leq n, 1 \leq j \leq k_i\}$
have greatest common divisor equal to one then $M_{\alpha}^{s}:=M_{X}(d,r,
\{\alpha_j^i\},\{r_j^i\})^s$ is a fine moduli space, meaning there exists a family
$\mathcal{U}_{*}^{\alpha}:=(\mathcal{U},\phi,\alpha)$ parametrized by $M_{\alpha}^{s}$ with the
property that $\mathcal{U}_{e,*}^{{\alpha}}$ is a stable parabolic bundle isomorphic to $E_*$
for all $[E_*]\,=\,e\,\in\, M_{\alpha}^{s}$ \cite[Proposition 3.2]{boden},
\cite{BD}. Moreover, if the
parabolic weights $\{\alpha_j^i\}$ are chosen to be {\it generic}, i.e.,
the notions of stability and
semi-stability coincide, then the moduli space $M_{X}(d,r, \{\alpha_j^i\},\{r_j^i\})$ is a
smooth, irreducible, projective variety. We denote this variety by $M_{\alpha}$.
Now assume that the weights are generic, $\alpha_j^i$ are rational numbers and $r_j^i=1$, so we
are choosing full flags at each point of $D$. Note that this is the generic case.
There is a well defined {\it determinant
morphism} $\det\,:\,M_{\alpha}\,\longrightarrow\, J^d(X)$, where $J^d(X)$ denotes the component
of the Picard group of $X$ consisting of line bundles
of degree $d$. For $\Lambda\in J^d(X)$, denote the fiber $\det^{-1}(\Lambda)$ by
$M_{\alpha}(\Lambda)$, and the restriction of the vector bundle $\mathcal{U}$ to $X\times
M_{\alpha}(\Lambda)$ by (with a mild abuse of notation) $\mathcal{U}$. From the earlier discussions
it is clear that the vector bundle $\mathcal{U}$ over $X\times
M_{\alpha}(\Lambda)$ gets a natural parabolic structure over the smooth divisor $D\times
M_{\alpha}(\Lambda)$.
\subsection{Strongly Parabolic Higgs Fields}
In this subsection we will briefly recall some properties of strongly parabolic Higgs fields and
the Hitchin map; for details see \cite[Sections 2,3]{GL}. As before, we assume that the
weights are generic and that the flags at each point of $D$ are full.
Let $K_X$ denote the holomorphic cotangent bundle of $X$.
A {\it parabolic Higgs field} on a parabolic vector bundle $E_*$ is a
homomorphism $$\Phi\,:\, E \,\longrightarrow\, E\otimes K_X\otimes{\mathcal O}_X(D)\,=\,
E\otimes K_X(D)$$
such that
\begin{enumerate}
\item $\text{trace}(\Phi)\,=\, 0$, and
\item $\Phi$ is a {\it strongly parabolic} homomorphism, meaning for each $x_i \in D$ we have
$\Phi(F_i^j) \,\subset\, F_i^{j+1} \otimes (K_X(D)|_{x_i})$.
\end{enumerate}
The pair $(E_*,\, \Phi)$ is called a parabolic
Higgs bundle.
A parabolic Higgs bundle $(E_*, \, \Phi)$ is called {\it stable} (respectively,
{\it semistable}) if for all proper non-zero $\Phi$-invariant
sub-bundles $F$ of $E$, we have $\mbox{par-}\mu(F) \,<\, \mbox{par-}\mu(E)$
(respectively, $\mbox{par-}\mu(F) \,\leq\, \mbox{par-}\mu(E)$).
The cotangent space at $[E]\,\in\, M_{\alpha}(\Lambda)$ can be identified
with $H^0(X, \mbox{SParEnd}_0(E)\otimes K_X(D))$ where $\mbox{SParEnd}_0(E)$ is the sheaf of
strongly parabolic traceless endomorphisms. Then the coefficients of the characteristic polynomial of
$\phi\,\in\, H^0(X, \mbox{SParEnd}_0(E)\otimes K_X(D))$ lie in $W\,:=\,\bigoplus_{j=2}^{r}
H^0(X,\, K_{X}^j((j-1)D))$.
Let $N_{\alpha}(\Lambda)$ be the moduli space of isomorphism classes of strongly parabolic
stable Higgs bundles with parabolic structures over $D$ and weights $\{\alpha_j^i\}_{j=1}^{r}$
at $x_i \,\in\, D$, $1 \,\leq\, i \,\leq\, n$, with fixed determinant $\Lambda$. The total space
$T^{*}M_{\alpha}(\Lambda)$ of the cotangent bundle is an open subvariety of the moduli space
$N_{\alpha}(\Lambda)$. The map $$h\,:\,N_{\alpha}(\Lambda)\,\longrightarrow\,
W\, , \ \ (E,\,\phi)\,\longmapsto \, ({\rm trace}(\wedge^2\phi),\cdots ,{\rm trace}(\wedge^r\phi))$$
is proper and surjective; it is called the {\it Hitchin
map}. If $s\,\in\, W$ is such
that the corresponding spectral curve $X_{s}$ is smooth, then the fiber $h^{-1}(s)$
is identified with the Prym variety
$$\mbox{Prym}^{\delta}(X_s)\,=\,\{L\,\in\, J^{\delta}(X_s)\,\mid\,
\det(\pi_*L)\simeq \Lambda\}$$
associated to $X_{s}$, where $\delta\,:=\,d-\mbox{deg}(\pi_*(\mathcal{O}_{X_s}))$.
A parabolic bundle on $X$ is called {\it very stable} if there is no non-trivial nilpotent strongly
parabolic Higgs field on it. It is known that, if the genus $g(X)\geq 2$, a very stable parabolic bundle is stable. There
exist very stable parabolic bundles in any moduli space. In fact the subset of very stable parabolic
bundles is a dense open set in $M_{\alpha}(\Lambda)$. This follows from the fact that the dimension
of the nilpotent cone $h^{-1}(0)$ is the same as the dimension of the moduli space
$M_{\alpha}(\Lambda)$ \cite[Corollary 3.10]{gmg}. Let
$$S'\, \subset\, S\, :=\, T^*M_{\alpha}(\Lambda)\cap
h^{-1}(0)$$ be the open subset consisting of all $(E,\phi)\,\in\, S$ such
that $\phi$ is nonzero. The image of $S'$ in $M_{\alpha}(\Lambda)$ under the forgetful map
$(E,\phi)\,\longmapsto\, E$ will be denoted by $B$. Note that $B$ is the non-very stable
locus in $M_{\alpha}(\Lambda)$. On the other hand, there is a free action of
${\mathbb C}^*$ on $S'$; namely the action of any $c\,\in\, {\mathbb C}^*$ sends any
$(E,\phi)$ to $(E,c\cdot\phi)$. Hence we have
$$
\dim M_{\alpha}(\Lambda) \,=\, \dim S\, =\, \dim S' \, >\, \dim B\, .
$$
This implies that the complement $M_{\alpha}(\Lambda) \setminus B$ is nonempty.
\subsection{Determinant bundle}
Let $T$ be a variety. For any coherent sheaf $\mathcal{E}$ on $X\times T$, flat over $T$, let
$\det R\pi_{T}\mathcal{E}$ denote the {\it determinant line bundle} defined as:
\[\{\det R\pi_{T}\mathcal{E}\}_t :=\{\det H^0(X,\mathcal{E}_t)\}^{-1}\otimes \{\det H^1(X,\mathcal{E}_t)\}\]
for $t\,\in\, T$ (\cite{BR}, \cite{nara}, \cite{Sunn}).
Let $x$ be a fixed closed point of $X$. We fix rational numbers $0\leq \alpha_1<\alpha_2<\cdots< \alpha_r<1$
and a positive integer
$k$ such that $\beta_j=k\cdot \alpha_j$ is an integer for each $j=1, \cdots, r$.
Set $d_j:= \beta_{j+1}-\beta_j, 1\leq j \leq r$, with the assumption that
$\beta_{r+1}\,=\,1$.
Let $\mathcal{V}_*^{\alpha}=(\mathcal{V}^{\alpha}, \alpha=\{\alpha_j\}_{j=1}^{r},\phi)$ be a family of
rank $r$ stable parabolic bundles over $X$ with parabolic divisor $\{x\}$ parametrized by a variety
$T$ and
\[\mathcal{V}^{\alpha}|_{x\times T}=\mathcal{F}_{1,x}\supset \mathcal{F}_{2,x}\supset \cdots \supset
\mathcal{F}_{r,x}\supset \mathcal{F}_{r+1,x}=(0)\]
be the full flag of subbundles over $x\times T$ determined by the section $\phi$.
Set $L_j := \frac{\mathcal{F}_{j,x}}{\mathcal{F}_{j+1,x}}$. Let $\Psi: T \longrightarrow M_{\alpha}(\Lambda)$
be the morphism induced by this family. Define a line bundle
\[\theta_{T} :=(\det R\pi_{T}\mathcal{V}^{\alpha})^{k}\otimes \det (\mathcal{V}_x^{\alpha})^l
\otimes \bigotimes_{j=1}^{r} L_j^{d_j}\]
where $l$ is a positive integer determined by \cite[Equation (*), page 6]{Sunn}. Then there exists a unique
(up to algebraic equivalence) ample line bundle $\Theta_{M_{\alpha}}$ over $M_{\alpha}(\Lambda)$ such that
$\Psi^*\Theta_{M_{\alpha}}=\theta_{T}$ \cite{BR}, \cite[Theorem 1.2]{Sunn}.
\section{Parabolic stability of the parabolic Poincar\'e bundle}
In this section we continue with the notation of the previous section.
Let $X$ be a smooth projective irreducible complex curve of genus $g \geq 2$,
$x_1, \cdots, x_n \,\in\, X$ distinct points and $D=x_1+\cdots+x_n$. Let $Y$ be a
smooth projective irreducible complex variety.
For each point $x_{i}\in \text{Supp}(D)$
fix real numbers $0\leq \alpha_1^i < \alpha_2^i < \cdots <\alpha_{k_i}^i < 1$
and $m_i=(r_1^i,\cdots, r_{k_i}^i)$, where each $r_j^i$ is a positive integer. Let $U$ be a rank $r$ vector
bundle over $X\times Y$ with parabolic structure over the smooth divisor $D\times Y$, of flag types $m_i$
and weights $\{\alpha_j^i\}, 1 \leq i\leq n, 1 \leq j \leq k_i$. Fix ample divisors $\theta_{X}$ on $X$ and
$\theta_{Y}$ on $Y$. Then for any integers $a,b>0$, the class $a\theta_{X}+b\theta_{Y}$ is ample on $X\times Y$.
Let $\Theta := a\theta_{X}+b\theta_{Y}$ for some fixed integers $a,b>0$.
\begin{lemma}\label{two stable}
Suppose that for a general point $x\in X,$ the vector
bundle $U_x\,=\, U\vert_{\{x\}\times Y}$ is semi-stable with respect to $\theta_Y$
over $Y$, and
for a general point $y\,\in\, Y$ the parabolic vector bundle $U_y\,=\,
U\vert_{X\times\{y\}}$ over $X$ with parabolic divisor $D$ is semi-stable with respect
to $\theta_X$. Then the parabolic vector bundle $U$ with
parabolic divisor $D\times Y$ is parabolic semi-stable with respect to
$\Theta$. Moreover, if $U_x$ is stable or $U_y$ is parabolic stable, then
$U$ is also parabolic stable.
\end{lemma}
\begin{proof}
The proof essentially follows from the proof of \cite[Lemma 2.2]{BBN}. Let us
indicate the modification needed in this case. Let $F\,\subset\, U$ be a torsionfree
subsheaf. Then it has an induced parabolic structure. To compute the parabolic degrees
of $U$ and $F$, one needs to
compute the degree of certain vector bundles supported on the smooth divisor
$D\times Y$. But this is the same as computing the degree of certain subsheaves of $U$
and $F$, which can be done as in \cite[Lemma 2.2]{BBN}.
\end{proof}
For the rest of this section we assume that the parabolic weights are rational and generic, and that the
flags are full at each point of $\text{Supp}(D)$.
Set $s \,=\, (s_2,\cdots,s_r)\in W$, and let $\pi:X_s \longrightarrow X$ be the associated spectral cover.
Then for $z\in X$, the fiber $\pi^{-1}(z)$ is given by the points $y\,\in\,
K_{X}(D)|_{z}$ which satisfy the polynomial
\[y^r+s_{2}(z)\cdot y^{r-2}+\cdots +s_r(z)\,=\,0\, .\]
Let us denote this polynomial by $f$. The morphism $\pi$ is unramified over $z$ if
and only if the resultant $R(f,f')$ of $f$ and its derivative $f'$ is nonzero. Since
all $s_j$ vanish over $D$, the ramification locus of $\pi$ contains $D$.
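To illustrate this criterion in the simplest case, take $r\,=\,2$: then $f(y)\,=\,y^{2}+s_{2}(z)$ and
$f'(y)\,=\,2y$, and a direct computation with the Sylvester matrix gives
$$R(f,f')\,=\,4\,s_{2}(z)\, .$$
Hence $\pi$ is unramified over $z$ precisely when $s_{2}(z)\,\neq\,0$; in particular, over the points
of $D$, where $s_{2}$ vanishes, the cover is ramified.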
\begin{lemma}\label{unram}
Let $X$ be a smooth, irreducible, projective curve of genus $g\geq 2$ and $z\notin \mbox{Supp}(D)$. Then there exists a smooth, projective spectral curve $Y$ and a finite morphism
$\pi:Y\longrightarrow X$ of degree $r$ which is unramified over $z$.
\end{lemma}
\begin{proof}
Since the linear system $|K_X^{j}((j-1)D)|$ is base point free outside $D$ and $z\notin \mbox{Supp}(D)$,
there exists $(s_2,\cdots, s_r)\in W$ such that \[ R(f,f')(s_2(z),\cdots, s_r(z))\neq 0.\]
Clearly this is an open condition in $W$. Thus there exists a non-empty
open subset $V$ of $W$ such that for each $s \in V$, the corresponding spectral cover
$X_s \longrightarrow X$ is unramified over $z$. Now, since the genus $g\geq 2$, by \cite[Lemma 3.1]{GL}
the set of points in $W$ where the corresponding spectral curve is smooth is a dense open subset of $W$.
Thus we can always choose a spectral curve which is smooth and unramified over $z$.
\end{proof}
\begin{lemma}
Let $X_s \longrightarrow X$ be a spectral curve. Let $P^{\delta}$ be the associated Prym
variety, where $\delta\,:=\,d-\mbox{deg}(\pi_*(\mathcal{O}_{X_s}))$. Then there is a
dominant rational map $f\,:\, P^{\delta}\,\dashrightarrow \,M_{\alpha}(\Lambda)$.
\end{lemma}
\begin{proof}
Let $h'$ be the restriction of $h$ to the total space of the cotangent bundle $T^{*}M_{\alpha}(\Lambda)$.
Then for any very stable parabolic bundle $E \,\in\, M_{\alpha}(\Lambda)$, the
restriction
$$h'_E\,:\,T_E^*M_{\alpha}(\Lambda)\,\longrightarrow\, W$$ of $h'$ is surjective (for a proof see \cite[Lemma 1.4]{KP}).
Thus, for any $s\in W$, the intersection $h'^{-1}(s)\cap T_E^{*}M_{\alpha}(\Lambda)$
is nonempty for every very stable parabolic bundle $E \,\in\, M_{\alpha}(\Lambda)$.
Consequently, for all $s\in W$, the image of the map $h'^{-1}(s)\longrightarrow M_{\alpha}(\Lambda)$
contains the dense open set $U$ of all very stable parabolic bundles. Thus the morphism
$h'^{-1}(s) \longrightarrow M_{\alpha}(\Lambda)$ is dominant.
Since $h'^{-1}(s)\subseteq h^{-1}(s)\simeq P^{\delta}$ is an open set, we have a dominant rational
map $f\,:\, P^{\delta} \,\dashrightarrow\, M_{\alpha}(\Lambda)$.
\end{proof}
Now we discuss the `parabolic stability' of $\mathcal{U}$ with respect to a `naturally' defined ample
divisor on $X\times M_{\alpha}(\Lambda)$. For simplicity of exposition we assume that $D=x$
(for an arbitrary reduced divisor the same arguments will hold).
\begin{theorem}\label{th1}
Let $z\notin \text{Supp}(D)$. Then $\mathcal{U}_z$ is semi-stable with respect to
the ample divisor $\Theta_{M_{\alpha}}$.
\end{theorem}
\begin{proof}
By Lemma \ref{unram} we get a spectral cover $\pi: Y\longrightarrow X$ which is
unramified over $z$. Let $\pi^{-1}(z)\,=\,\{y_1,\,\cdots,\,y_{r}\}$, with $y_i$
being distinct points in $Y$.
Let $\pi\times 1: Y\times P^{\delta}\longrightarrow X\times P^{\delta}$ denote
the product morphism. Let $\mathcal{L}$ denote the restriction of a Poincar\'e
line bundle on $Y\times J^{\delta}(Y)$ to $Y\times P^{\delta}$. Then the
direct image $(\pi\times
1)_{*}\mathcal{L}$ is a rank $r$ vector bundle and the $\mathcal{O}_{X\times
P^{\delta}}$--algebra structure on $(\pi\times 1)_{*}\mathcal{L}$ defines a
section $$\Phi\,\in\, H^0(X\times P^{\delta},\,\mbox{End}((\pi\times
1)_{*}(\mathcal{L}))\otimes p_{X}^*K_{X}(D))\, .$$ This
$\Phi$ induces a parabolic
structure on $(\pi\times 1)_{*}(\mathcal{L})$ over $x\times P^{\delta}$. Thus we
have a family of parabolic bundles parametrized by $P^{\delta}$. Clearly, the
rational map $f:P^{\delta}\dashrightarrow M_{\alpha}(\Lambda)$ is induced by the
above family. Let $T^{\delta}$ be the open set where $f$ is defined. Then
$\mbox{Codim}(P^{\delta}\setminus T^{\delta})\geq 2$.
Let $\mathcal{E}:=((\pi\times 1)_*\mathcal{L})|_{X\times T^{\delta}}$. Since
$M_{\alpha}(\Lambda)$ is a fine moduli space we have
\[
(1\times f)^*\mathcal{U}
\simeq \mathcal{E}\otimes p_{_{T^{\delta}}}^*(L_0)
\]
for some line bundle $L_{0}$ on $T^{\delta}$. Thus \[f^*\mathcal{U}_z \simeq
\oplus_{i=1}^{r}\mathcal{L}_{y_i}\otimes L_0\] on $T^{\delta}$. Since
$\mbox{Codim}(P^{\delta}\setminus T^{\delta})\geq 2$ and $P^{\delta}$ is smooth,
the line bundles $\mathcal{L}_{y_i}$ and $L_0$ uniquely extend over $P^{\delta}$.
The line bundles $\mathcal{L}_{y_i}$ are already defined over $P^{\delta}$. Let
$L'_0$ be the unique extension of $L_0$ over $P^{\delta}$. Since
$\mathcal{L}_{y_i}$ are algebraically equivalent, it follows that
$\oplus_{i=1}^{r}\mathcal{L}_{y_i}\otimes L'_0$ is semistable with respect to any
ample line bundle on $P^{\delta}$. Thus if we can find an ample line bundle $H$
over $P^{\delta}$ such that $H|_{T^{\delta}} \simeq f^*(\Theta_{M_{\alpha}}^n)$
for some positive integer $n$, then by \cite[Lemma 2.1]{BBN}, $\mathcal{U}_{z}$ is
semistable with respect to $\Theta_{M_{\alpha}}^n$. Hence it is semistable with
respect to $\Theta_{M_{\alpha}}$.
We have,
\[f^*\Theta_{M_{\alpha}}=\theta_{T^{\delta}}\,=\,
(\det R{\pi_{T^{\delta}}}\mathcal{E})^{k}\otimes
\det(\mathcal{E}_x)^{l}\otimes \otimes_{j=1}^{r}L_j^{d_j}\, .
\]
By \cite[Theorem 4.3]{Li} we get that
\[(\det R{\pi_{T^{\delta}}}\mathcal{E})^{k}\,=\,
m\Theta_{P^{\delta}}^{k}|_{_{T^{\delta}}}
\]
for some positive integer $m$, where $\Theta_{P^{\delta}}$ is the restriction
of the canonical theta divisor on $J^{\delta}(Y)$ to $P^{\delta}$. Let $M$ be the
unique extension of $\det(\mathcal{E}_x)^l\otimes \otimes_{j=1}^{r}L_j^{d_j}$. Set
$H:= m\Theta_{P^{\delta}}^{k}\otimes M$. Then for some positive integer $q$, $H^q$ is
ample on $P^{\delta}$. Thus
$f^*\Theta_{M_{\alpha}}^n$ is a restriction of an ample line bundle $H$ on
$P^{\delta}$.
\end{proof}
As a corollary of Theorem \ref{th1} and Lemma \ref{two stable} we obtain the main
result:
\begin{theorem}
The parabolic bundle $\mathcal{U}$ over $X\times M_{\alpha}(\Lambda)$ is parabolic stable with respect
to any integral ample divisor of the form $aD_X+b\Theta_{M_{\alpha}}$, where $D_X$ is an ample divisor
on $X$ and $a,b>0$.
\end{theorem}
\section*{Acknowledgements}
The third-named author is supported by NBHM Post-doctoral Fellowship, DAE
(Government of India). The second-named author is supported by a J. C. Bose
Fellowship.
\section{Introduction}
Wall modeling via function enrichment is a spatial discretization technique that allows the resolution of the sharp boundary layer gradients present in high-Reynolds-number flows with relatively coarse meshes. The basic idea is to make use of the flexibility in Galerkin methods regarding the choice of the solution space: a few additional, problem-tailored shape functions are used to approximate the solution, in addition to the common polynomials. Using these enriched elements, the full Navier--Stokes equations are solved in the whole boundary layer in a consistent manner. As a result, the wall model can take into account high adverse pressure gradients and convective effects, unlike most other wall modeling approaches.
The idea of wall modeling via function enrichment was proposed by Krank and Wall~\cite{Krank16} within the continuous Galerkin method (standard FEM) as a wall modeling technique for large-eddy simulation (LES). While that work showed promising results in separated flows, the limiting factor in terms of accuracy was the turbulence model employed in the near-wall region. A residual-based approach was used, supported by a structural LES model in the outer layer, a model that was originally not intended for underresolved boundary-layer simulations.
The wall modeling approach has since been applied in conjunction with RANS~\cite{Krank16c}, employing the Spalart--Allmaras (SA) model within the high-order discontinuous Galerkin (DG) method.
In this article, we show that the widely used delayed detached-eddy simulation (DDES) methodology~\cite{Spalart06} may be used to model the unresolved turbulence in the near-wall region in wall modeling via function enrichment. This can be done by extending the implementation of the SA model~\cite{Krank16c} in a straightforward way. The idea of the original DES approach~\cite{Spalart97} is that the wall distance function $y$ present in the SA model is limited by a characteristic cell length $\Delta$ according to
\begin{equation}
y_{\mathrm{DES}} = \min(y,C_{\mathrm{DES}}\Delta),
\end{equation}
where the parameter $C_{\mathrm{DES}}$ has been calibrated to $C_{\mathrm{DES}}=0.65$ and the grid length scale is defined as the maximum of the cell length over the space dimensions $\Delta=\max(\Delta_x,\Delta_y,\Delta_z)$~\cite{Shur99}. As a result, the RANS model acts as a one-equation LES subgrid model if $y>C_{\mathrm{DES}} \Delta$. DDES represents an enhancement of that methodology by defining the wall-distance parameter as
\begin{equation}
y_{\mathrm{DDES}} = y - f_d \max(0,y-C_{\mathrm{DES}}\Delta),
\end{equation}
with the functions
\begin{align}
f_d&=1-\mathrm{tanh}\left((8r_d)^3\right),\\
r_d&=\frac{\nu+\nu_t}{\sqrt{(\nabla \bm{u})_{ij}(\nabla \bm{u})_{ij}} \kappa^2 y^2},
\end{align}
where $\bm{u}$ is the velocity vector, $\nu$ the kinematic and $\nu_t$ the eddy viscosity, and $\kappa=0.41$.
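For illustration, the following minimal Python sketch evaluates the shielding function $f_d$ and the modified wall distance $y_{\mathrm{DDES}}$ pointwise; the variable names and the scalar velocity-gradient magnitude passed in are assumptions made for this example only, and the snippet is not taken from the solver described in Section~\ref{sec:num}.
\begin{verbatim}
import numpy as np

C_DES = 0.65
KAPPA = 0.41

def y_ddes(y, delta, nu, nu_t, grad_u_norm):
    """Pointwise DDES wall distance.

    y            -- wall distance
    delta        -- grid length scale, max(Delta_x, Delta_y, Delta_z)
    nu, nu_t     -- kinematic and eddy viscosity
    grad_u_norm  -- sqrt( (du_i/dx_j)*(du_i/dx_j) )
    """
    r_d = (nu + nu_t) / max(grad_u_norm * KAPPA**2 * y**2, 1.0e-16)
    f_d = 1.0 - np.tanh((8.0 * r_d)**3)
    return y - f_d * max(0.0, y - C_DES * delta)

# Close to the wall r_d is large, f_d ~ 0 and the RANS wall distance y
# is kept; further away f_d ~ 1 and y_DDES is limited by C_DES*delta.
print(y_ddes(y=0.5, delta=0.1, nu=1.0e-5, nu_t=1.0e-3, grad_u_norm=10.0))
\end{verbatim}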
(D)DES is widely used in research and industry, see, e.g.,~\cite{Frohlich08,Piomelli08} and is today even used for the aerodynamics of entire vehicles~\cite{Blacha16} due to its good accuracy in separated flows and the ability to investigate acoustic noise sources in the flow. Regarding the application of DES, two main branches are frequently used. The original idea was to simulate the whole boundary layer in RANS mode and to compute free shear layers in LES mode only~\cite{Spalart97}. As an alternative, DES can be seen as an approach to wall-modeled LES (WMLES), in which only the inner layer is computed in RANS mode and the outer boundary layer in LES mode, see, e.g.,~\cite{Nikitin00}.
Wall modeling via function enrichment has the potential of significantly reducing the computational cost of (D)DES. The grid saving of the standard (D)DES in comparison to LES is achieved by using relatively coarse meshes in the wall-parallel directions of up to $0.1\delta$ (WMLES) and $\delta$ (classical DES)~\cite{Larsson16} with the boundary layer thickness $\delta$. The wall-normal direction necessitates many grid points in order to resolve the laminar sublayer due to the requirement of placing the first off-wall node at $y^+_1\sim 1$, however. For example, if a boundary layer of a thickness of $10{,}000$ wall units is computed with a constant grid stretching factor of 1.15~\cite{Nikitin00}, a total of 53 grid layers would be required. This is a quite high cost compared to the relatively low engineering interest in that region. Wall modeling via function enrichment allows the first grid point to be located in the range $y^+_1\sim 10$ to $100$, saving 17--33 grid layers for that example, without noteworthy loss in accuracy, in addition to much better conditioned equation systems through the lower grid anisotropy.
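The layer count quoted above is a simple geometric-series estimate; it can be reproduced, for instance, with the following few lines of Python (a sketch of the arithmetic only, assuming a first cell of one wall unit and a constant growth factor of 1.15):
\begin{verbatim}
# Number of wall-normal grid layers needed to cover a boundary layer of
# 10,000 wall units when the first cell has a height of one wall unit
# and each further cell is stretched by a factor of 1.15.
growth, target = 1.15, 10000.0
dy, height, layers = 1.0, 0.0, 0
while height < target:
    height += dy
    dy *= growth
    layers += 1
print(layers)  # 53
\end{verbatim}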
In the next section, we give details on how the enrichment shape functions are constructed. In Section~\ref{sec:num}, the high-order DG code employed for the validation is outlined and numerical examples are presented in Section~\ref{sec:examples}.
\section{Wall modeling via function enrichment}
The primary idea of wall modeling via function enrichment is as follows. In a single element row at the wall, the discrete velocity solution $\bm{u}_h$ is composed of two parts, the standard polynomial component, $\bar{\bm{u}}_h$, and an additional enrichment component, $\widetilde{\bm{u}}_h$, yielding
\begin{equation}
\bm{u}_h(\bm{x},t)=\bar{\bm{u}}_h(\bm{x},t)+\widetilde{\bm{u}}_h(\bm{x},t).
\label{eq:dg_xw_rans:space}
\end{equation}
The polynomial component is given in each cell as an FE-expansion according to
\begin{equation}
\bar{\bm{u}}_h(\bm{x},t)=\sum_{B \in N^{k}} N_B^{k}(\bm{x}) \bar{\bm{u}}_B(t)
\label{eq:dg_xw_rans:std_fe}
\end{equation}
with the shape functions $N_B^{k}$ of polynomial degree $k$ and corresponding degrees of freedom $\bar{\bm{u}}_B$. There are several ways of constructing the enrichment component. In its simplest form, an enrichment function $\psi$ is weighted in each element with one additional node $\widetilde{\bm{u}}_{0}$, i.e., one degree of freedom per space dimension,
with
\begin{equation}
\widetilde{\bm{u}}_h(\bm{x},t)=\psi(\bm{x},t)\,
\widetilde{\bm{u}}_{0}(t).
\label{eq:dg_xw_rans:enrichment}
\end{equation}
The enrichment function can additionally be weighted using a low-order polynomial to yield a higher level of flexibility in the function space~\cite{Krank16,Krank16c}, which is not considered herein.
It is this enrichment function that is responsible for the efficiency of the approach. By taking $\psi$ as a wall function, the solution space of the Galerkin method is capable of resolving a sharp attached boundary layer with very few degrees of freedom. It is noted that this wall function is not prescribed as a solution, but the Galerkin method automatically ``chooses'' the best possible solution within the high-order polynomials and the enrichment component in a least squares sense. As a wall function, we consider Spalding's law~\cite{spalding61} in the form
\begin{equation}
y^+=\frac{\psi}{\kappa}+e^{-\kappa B}\left(e^{\psi}-1-\psi-\frac{\psi^2}{2!}-\frac{\psi^3}{3!}-\frac{\psi^4}{4!}\right),
\label{eq:dg_xw_rans:spald}
\end{equation}
with $\kappa=0.41$ and $B=5.17$, as it was implemented in~\cite{Krank16}. Several alternative wall functions have been discussed in~\cite{Krank16c}. In the wall-normal direction, Spalding's law scales with the wall coordinate $y^+=y u_{\tau}/ \nu$ with $u_{\tau}=\sqrt{\tau_w/\rho}$ and the density $\rho$ such that the wall shear stress $\tau_w$ is represented correctly. In turn, the wall function has to be adapted according to the local wall shear stress in the numerical method, and its temporal evolution has to be taken into account.
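Since \eqref{eq:dg_xw_rans:spald} gives $y^+$ explicitly in terms of $\psi$, evaluating the wall function at a prescribed wall distance amounts to inverting this relation numerically. A minimal Python sketch of such an inversion by bisection is shown below; it is only meant to illustrate the relation, it is not taken from the implementation of~\cite{Krank16c}, and the logarithmic bracketing bound is an assumption based on the behavior of \eqref{eq:dg_xw_rans:spald} for large $y^+$.
\begin{verbatim}
import math

KAPPA, B = 0.41, 5.17

def yplus_of_psi(psi):
    """Right-hand side of Spalding's law: y+ as a function of psi."""
    return (psi / KAPPA
            + math.exp(-KAPPA * B) * (math.exp(psi) - 1.0 - psi
                                      - psi**2 / 2.0 - psi**3 / 6.0
                                      - psi**4 / 24.0))

def psi_of_yplus(yplus, tol=1.0e-12):
    """Invert Spalding's law for psi by bisection."""
    lo = 0.0                                      # y+(0) = 0 <= yplus
    hi = math.log(yplus + 1.0) + KAPPA * B + 1.0  # log-law based guess
    while yplus_of_psi(hi) < yplus:               # enlarge bracket if needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if yplus_of_psi(mid) < yplus:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# psi/kappa recovers the familiar u+ profile: linear in the viscous
# sublayer and logarithmic further out.
for yp in (1.0, 30.0, 300.0):
    print(yp, psi_of_yplus(yp) / KAPPA)
\end{verbatim}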
We have developed an algorithm in~\cite{Krank16,Krank16c}, which enables such an adaptation. Therein, the wall shear stress is computed on discrete nodes via the velocity derivative according to
\begin{equation}
\tau_{w,B}=\frac{\lVert\int_{\partial \Omega^D} N_B^{c,m}(\bm{x})\rho \nu \frac{\partial \bm{u}_h}{\partial y} \big|_{y=0} \ \mathrm{d}A \rVert}{\int_{\partial \Omega^D} N_B^{c,m}(\bm{x}) \ \mathrm{d}A},
\end{equation}
with linear continuous shape functions $N_B^{c,m}$ of degree $m=1$. The nodal values are interpolated by
\begin{equation}
\tau_{w,h}=\sum_{B \in N^{c,m}}N_B^{c,m} \tau_{w,B},
\end{equation}
yielding a continuous representation of the wall shear stress. Through the choice of $m=1$, the wall shear stress $\tau_{w,h}$ is a coarsened field, since usually $k>1$; the coarsening is mandatory because the wall functions are relations for the mean quantities, meaning that the mean wall shear stress is related to the mean velocity, and the average wall shear stress would otherwise be overpredicted, see Reference~\cite{Krank16}. This field is updated prior to each time step, such that the function space of the velocity changes continuously and adapts to the local flow conditions. Further details on the adaptation algorithm are given in~\cite{Krank16c}. Near separation or reattachment locations, it may happen that the wall shear stress becomes zero, which renders the function space linearly dependent. However, considering that the first off-wall point is located very close to the wall in terms of $y^+$ at these locations, a simple and consistent solution is to temporarily ``switch off'' the enrichment in the respective cells, see~\cite{Krank16c} for details and~\cite{Krank17} for an evaluation of the method in WMLES in the context of another turbulence modeling approach. If the wall shear stress becomes larger at these locations at a later instant, the enrichment is ``switched on'' again.
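The coarsening step can be illustrated with a one-dimensional analogue: the following Python sketch (a simplified stand-in and not the actual implementation of~\cite{Krank16c}; the trapezoidal quadrature weights and the fluctuating sample traction are assumptions for the example) averages a traction field $\rho\nu\,\partial u_h/\partial y$ onto piecewise-linear nodal values and thereby mimics the two formulas above.
\begin{verbatim}
import numpy as np

def hat(xi, nodes, B):
    """Piecewise-linear (m = 1) shape function of wall node B."""
    val = np.zeros_like(xi)
    if B > 0:
        m = (xi >= nodes[B - 1]) & (xi <= nodes[B])
        val[m] = (xi[m] - nodes[B - 1]) / (nodes[B] - nodes[B - 1])
    if B < len(nodes) - 1:
        m = (xi >= nodes[B]) & (xi <= nodes[B + 1])
        val[m] = (nodes[B + 1] - xi[m]) / (nodes[B + 1] - nodes[B])
    return val

def coarsen_wall_shear_stress(xq, wq, traction, nodes):
    """Nodal values tau_{w,B}: weighted averages of the wall traction.

    xq, wq   -- quadrature points and weights along the wall
    traction -- rho*nu*du/dy evaluated at the quadrature points
    nodes    -- coordinates of the linear (m = 1) wall nodes
    """
    tau = np.empty(len(nodes))
    for B in range(len(nodes)):
        NB = hat(xq, nodes, B)
        tau[B] = abs(np.sum(wq * NB * traction)) / np.sum(wq * NB)
    return tau

# A fluctuating (instantaneous) traction is averaged onto a coarse
# piecewise-linear representation of the wall shear stress.
xq = np.linspace(0.0, 1.0, 201)
wq = np.full_like(xq, 1.0 / 200.0)
wq[[0, -1]] *= 0.5                                # trapezoidal weights
traction = 1.0 + 0.2 * np.sin(8.0 * np.pi * xq)   # sample signal
nodes = np.linspace(0.0, 1.0, 5)
print(coarsen_wall_shear_stress(xq, wq, traction, nodes))
\end{verbatim}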
Finally, we comment on the two additional variables that have to be discretized: the pressure and the working variable of the SA model. Neither variable exhibits high gradients at the wall, so both are represented sufficiently well by the standard FE space alone, according to~\cite{Krank16c}.
\section{Numerical method}
\label{sec:num}
The present wall modeling approach may be implemented in any FEM and DG flow solver. In this work, we consider the implementation of the incompressible Navier--Stokes equations with the SA model in~\cite{Krank16c}, which in turn is based on the incompressible high-performance high-order semi-explicit DG code INDEXA~\cite{Krank16b}. An extension of the present wall modeling approach to the compressible Navier--Stokes equations would be straightforward, since high gradients are commonly not present in the energy variable, such that the latter may be considered analogous to the pressure variable herein. Numerical methods based on the continuous FEM would require a small modification of the enrichment component as described in~\cite{Krank16}.
The solver is based on weak forms, which are described in detail in~\cite{Krank16c}. These weak forms include volume and surface terms that have to be integrated over cells and faces. The integrals are evaluated in our solver using the high-performance kernels by Kronbichler and Kormann~\cite{Kronbichler12} within the deal.II finite element library~\cite{Arndt17}. In particular, the integrals have polynomial and nonpolynomial parts, the latter due to the nonpolynomial character of the enrichment function. The polynomial parts are integrated using the quadrature formulas given in~\cite{Krank16b} and are evaluated exactly on affine cells. The nonpolynomial contributions have to be evaluated with more quadrature points, in particular in the wall-normal direction~\cite{Krank16}. From our extensive experience with wall modeling via function enrichment, we can give the following guidelines: If the enriched cells extend up to approximately $y_{1e}^+=90$ in the statistical quantities, 8 quadrature points in the wall-normal direction are sufficient. Further, we have $y_{1e}^+<110$ (10 points), $y_{1e}^+=130$ (12 points), $y_{1e}^+=200$ (17 points); see also the monograph~\cite{Krank18} for further details. All simulation cases presented herein use an adaptive time stepping method presented in~\cite{Krank16c} with a temporal accuracy of second order, a Courant number of $\mathrm{Cr}=0.14$, and a diffusion number of $D=0.02$. In the particular formulation used with the enrichment, the solver has a formal spatial order of accuracy of $k$. Finally, we note that we apply no-slip boundary conditions weakly according to~\cite{Krank16b} in all steps of the scheme for the examples presented in this article, which limits the width of the first off-wall cell to a few hundred wall units, as the no-slip condition would otherwise be violated severely.
The increasing resolution power of the DG scheme with increasing polynomial degree should be taken into account in the (D)DES grid length scale $\Delta$~\cite{Wurst13}. Based on the analysis of the resolution power of DG schemes performed in~\cite{Moura16}, we choose
\begin{equation}
\Delta = \frac{\Delta_e}{k+1}
\label{eq:k_scaling}
\end{equation}
as a length scale, based on the respective cell size $\Delta_e$, in contrast to the factor of $1/k$ chosen in Reference~\cite{Wurst13}.
\begin{table}[t]
\caption{Overview of simulation cases for the turbulent channel flow. The number of polynomial grid points per direction $i$ is $N_i=(k+1)N_{ie}$ with the number of cells per direction $N_{ie}$ and the polynomial degree $k=4$, $\Delta y_{1e}^+$ is the thickness of the first off-wall cell, in which the enrichment is active, $y = C_{\mathrm{DES}} \Delta$ is the RANS--LES switching location in terms of channel half-height $\delta$, and err($\tau_w$) is the relative error of the computed wall shear stress.}
\label{tab:ch_flows}
\begin{tabular*}{\linewidth}{l @{\extracolsep{\fill}} l l l l l}
\hline
$Re_{\tau}$ & $N_{1e} {\times} N_{2e} {\times} N_{3e} $ & $\gamma$ & $\Delta y_{1e}^+ $ & $y = C_{\mathrm{DES}} \Delta $ & err($\tau_w$)
\\ \hline \noalign{\smallskip}
$395$ & $16 {\times} 8 {\times} 8$ & $0.8$ & $76$ & $0.05\delta$ & $0.4\%$\\
$950$ & $16 {\times} 8 {\times} 8$ & $1.6$ & $91$& $0.05\delta$ & $4.9\%$ \\
$2{,}000$ & $16 {\times} 8 {\times} 8$ & $1.9$ & $137$& $0.05\delta$ & $-4.5 \%$\\
& $16 {\times} 16 {\times} 8$ & $1.9$ & $54$& $0.05\delta$ & $0.8\%$\\
& $32 {\times} 16 {\times} 16$ & $1.9$ & $54$& $0.025\delta$ & $1.3\%$\\
$5{,}200$ & $16 {\times} 16 {\times} 8$ & $2.2$ & $93$& $0.05\delta$ & $0.9\%$\\
$10{,}000$\ & $16 {\times} 16 {\times} 8$ & $2.5$ & $116$& $0.05\delta$ & $-1.9\%$\\
$20{,}000$ \ & $16 {\times} 24 {\times} 8$ & $2.5$& $139$& $0.05\delta$ & $1.1\%$\\
$50{,}000$ \ & $16 {\times} 40 {\times} 8$ & $2.5$& $191$& $0.05\delta$ & $-1.4\%$\\
\noalign{\smallskip}
\hline
\end{tabular*}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[trim= 10mm 40mm 10mm 40mm,clip,width=0.95\linewidth]{ch950_mesh.png}
\caption{Mesh for turbulent channel flow at $Re_{\tau}=950$. Red indicates enriched cells and blue standard polynomial cells, i.e., a single layer of cells at the wall is enriched. In each cell, the solution consists of a polynomial of $4^{\mathrm{th}}$ degree plus one enrichment shape function in the enriched cells.}
\label{fig:ch_mesh}
\includegraphics[trim= 10mm 40mm 10mm 30mm,clip,width=0.95\linewidth]{ch950_umagnitude.png}
\caption{Instantaneous numerical solution of turbulent channel flow at $Re_{\tau}=950$ via velocity magnitude. Red indicates high and blue low values.}
\label{fig:ch_flow}
\end{figure}
\section{Numerical examples}
\label{sec:examples}
Wall modeling via function enrichment is assessed by considering DDES in the WMLES branch. In the first example, we investigate the method for attached equilibrium boundary layer flows present in turbulent channel flow. The second example considers flow over periodic hills in order to analyze the behavior of the enrichment in conjunction with DDES in a nonequilibrium flow. As a result of earlier studies~\cite{Krank16b,Krank17b}, the polynomial degree of $k=4$ has proven to be a good compromise between accuracy and time-to-solution, so this polynomial degree is used for all simulation cases presented.
\begin{figure*}[t]
\centering
\begin{minipage}[b]{0.497\linewidth}
\centering
\includegraphics[trim= 8mm 9mm 13mm 10mm,clip,width=1.\textwidth]{ch_ddes_um-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[b]{0.497\linewidth}
\centering
\includegraphics[trim= 8mm 9mm 13mm 10mm,clip,width=1.\textwidth]{ch_ddes_rms-eps-converted-to.pdf}
\end{minipage}
\caption{DDES (WMLES) of turbulent channel flow at several Reynolds numbers. Mean velocity (left) and RMS-velocities as well as Reynolds shear stress (right). All quantities are normalized according to $u^+=\langle u_1\rangle/u_{\tau}$, $u^{\prime+}= \sqrt{\langle u_1^{\prime2}\rangle}/u_{\tau}$, $v^{\prime+}= \sqrt{\langle u_2^{2}\rangle}/u_{\tau}$, $w^{\prime+}=\sqrt{\langle u_3^{2}\rangle}/u_{\tau}$, and $(u^{\prime}v^{\prime})^{+}=\langle u_1 u_2\rangle/u_{\tau}^2$.}
\label{fig:ch_u}
\end{figure*}
\subsection{Turbulent channel flow}
We consider flow in a stream- and spanwise periodic channel of the dimensions $2\pi\delta{\times}2\delta{\times}\pi\delta$ in streamwise, wall-normal, and spanwise direction, respectively, with the channel half-height $\delta$. The flow is driven by a constant body force, which is derived from the nominal quantities. We investigate this flow in a wide range of friction Reynolds numbers $Re_{\tau}=u_{\tau}\delta/\nu$, which are chosen according to the available DNS data at $Re_{\tau}=395$~\cite{Moser99}, $Re_{\tau}=950$~\cite{Alamo03}, $Re_{\tau}=2{,}000$~\cite{Hoyas06}, $Re_{\tau}=5{,}200$~\cite{Lee15}, and additionally $Re_{\tau}=10{,}000$, $Re_{\tau}=20{,}000$, and $Re_{\tau}=50{,}000$. All simulation cases, meshes, and resolution criteria are presented in Table~\ref{tab:ch_flows}.
The meshes considered are chosen such that the wall-parallel grid length scale yields approximately $\Delta=0.08\delta$ for most cases, so the RANS--LES switching point is located at $C_{\mathrm{DES}}\Delta=0.05\delta$. One simulation case uses twice the number of grid cells in streamwise and spanwise direction, resulting in a RANS--LES switching point near $C_{\mathrm{DES}}\Delta=0.025\delta$. As for the wall-normal resolution, the enrichment is taken into account in the wall-nearest cell layer in all simulation cases, see Figure~\ref{fig:ch_mesh}. As it was discussed earlier, the enrichment shape functions allow the resolution of the averaged near-wall flow with very coarse cell sizes. The width of the first off-wall cell lies in this work in the range of 51 to 191 wall units. In order to enable an application to high Reynolds numbers, a hyperbolic grid stretching is additionally considered, according to $f$: $[0,1] \to [-\delta, \delta]$:
\begin{equation}
x_2 \mapsto f(x_2)=\delta \frac{\tanh(\gamma (2x_2-1))}{\tanh(\gamma)},
\end{equation}
with the mesh stretching parameter $\gamma$. The values of $\gamma$ for all simulation cases are included in Table~\ref{tab:ch_flows}. In the numerical method, the velocity solution is postprocessed at a large number of wall-normal layers inside each cell using the definition of the velocity variable~\eqref{eq:dg_xw_rans:space} such that the behavior of the enrichment may be analyzed. Statistics were acquired over a fixed simulation time interval, corresponding to approximately 60--95 flow-through times.
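The resulting wall-normal cell widths are easily tabulated; the short Python sketch below (illustrative only, reusing parameters from Table~\ref{tab:ch_flows}) evaluates the width of the first off-wall cell in wall units for the $Re_{\tau}=950$ case.
\begin{verbatim}
import numpy as np

def stretched_nodes(n_cells, gamma, delta=1.0):
    """Cell boundaries f(x2) of the hyperbolically stretched mesh."""
    x2 = np.linspace(0.0, 1.0, n_cells + 1)
    return delta * np.tanh(gamma * (2.0 * x2 - 1.0)) / np.tanh(gamma)

# Width of the first off-wall cell in wall units for Re_tau = 950,
# gamma = 1.6 and 8 cells across the channel height
# (see the channel-flow overview table).
Re_tau, gamma, n_cells = 950.0, 1.6, 8
y = stretched_nodes(n_cells, gamma)
dy1_plus = (y[1] - y[0]) * Re_tau    # delta = 1, so y+ = y * Re_tau
print(dy1_plus)                      # approximately 91
\end{verbatim}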
\begin{table*}[t]
\caption{Simulation cases and resolutions of the periodic hill flow. The cases use a coarse mesh with $32 {\times} 16 {\times} 16$ grid cells and a fine mesh with $64 {\times} 32 {\times} 32$ elements. The polynomial degree is $k=4$ for all simulation cases, and the number of grid points per direction is $k+1$ in each cell. The separation and reattachment lengths $x_{1,\mathrm{sep}}$ and $x_{1,\mathrm{reatt}}$ correspond to the zero-crossings of the skin friction.}
\label{tab:ph_flows}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} l l l l l l l}
\hline
Case & $N_{e1} {\times} N_{e2} {\times} N_{e3}$ &$N_{1} {\times} N_{2} {\times} N_{3}$ &$Re_{H}$ & $\mathrm{max}(\Delta y^+_{1e})$ & $x_{1,\mathrm{sep}}/H$ & $x_{1,\mathrm{reatt}}/H$
\\ \hline \noalign{\smallskip}
ph10595\_coarse \ & $32 {\times} 16 {\times} 16$ & $160 {\times} 80 {\times} 80$ & $10{,}595$ & $76$ & $0.25$ & $4.51$\\
ph10595\_fine \ & $64 {\times} 32 {\times} 32$ & $320 {\times} 160 {\times} 160$ & $10{,}595$ & $36$ & $0.16$ & $4.40$\\
KKW\_DNS~\cite{Krank17b} \ & - & $896 {\times} 448 {\times} 448$ & $10{,}595$ & - & $0.19$ & $4.51$ \\
\noalign{\smallskip}
ph37000\_coarse \ & $32 {\times} 16 {\times} 16$ & $160 {\times} 80 {\times} 80$ & $37{,}000$ & $144$ & $0.40$ & $3.37$\\
ph37000\_fine \ & $64 {\times} 32 {\times} 32$ & $320 {\times} 160 {\times} 160$ & $37{,}000$ & $79$ & $0.26$ & $4.53$\\
RM\_Exp~\cite{Rapp11} & - & - & $37{,}000$ & - & - & $3.76$\\
CM\_WMLES\_coarse~\cite{Wiart17} & - & $128 {\times} 64 {\times} 64$ & $37{,}000$ & - & - & $2.3$\\
CM\_WMLES\_fine~\cite{Wiart17} & - & $256 {\times} 128 {\times} 128$ & $37{,}000$ & - & - & $2.8$\\
\hline
\end{tabular*}
\end{table*}
\begin{figure}[tb]
\centering
\includegraphics[trim= 10mm 40mm 10mm 40mm,clip,width=0.95\linewidth]{ph37000_N32x16x16_k4k0_gt15_ddes_cdes065_hmax_mesh.png}
\begin{picture}(100,0)
\put(-44,16){\footnotesize $x_1$}
\put(-62,32){\footnotesize $x_2$}
\end{picture}
\begin{tikzpicture}[overlay]
\draw[->,black, thick,>=latex]
(-5.87,0.44) -- (-5,0.44);
\draw[->,black, thick,rotate around={90:(-5.87,0.44)},>=latex]
(-5.87,0.44) -- (-5,0.44);
\end{tikzpicture}
\caption{Mesh for flow over periodic hills of the case ph37000\_coarse. Red indicates enriched cells and blue standard polynomial cells, i.e., a single layer of cells at the wall is enriched. In each cell, the solution consists of a polynomial of $4^{\mathrm{th}}$ degree plus one enrichment shape function in the enriched cells.}
\label{fig:ph_mesh}
\includegraphics[trim= 10mm 40mm 10mm 30mm,clip,width=0.95\linewidth]{ph37000_N32x16x16_k4k0_gt15_ddes_cdes065_hmax_umagnitude.png}
\begin{picture}(100,0)
\put(-44,16){\footnotesize $x_1$}
\put(-62,32){\footnotesize $x_2$}
\end{picture}
\begin{tikzpicture}[overlay]
\draw[->,black, thick,>=latex]
(-5.87,0.44) -- (-5,0.44);
\draw[->,black, thick,rotate around={90:(-5.87,0.44)},>=latex]
(-5.87,0.44) -- (-5,0.44);
\end{tikzpicture}
\caption{Instantaneous numerical solution of flow over periodic hills of the case ph37000\_coarse via velocity magnitude. Red indicates high and blue low values.}
\label{fig:ph_flow}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[trim= 0mm 0mm 0mm 4mm,clip,width=0.7\linewidth]{ph10595_yp1-eps-converted-to.pdf}
\includegraphics[trim= 0mm 0mm 0mm 4mm,clip,width=0.7\linewidth]{ph37000_yp1-eps-converted-to.pdf}
\caption{Width of wall-layer (width of first off-wall cell) for $Re_H=10{,}595$ (top) and $Re_H=37{,}000$ (bottom). The shallower curves correspond to the upper wall.}
\label{fig:ph_yp1}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[trim= 9mm 0mm 9mm 4mm,clip,scale=0.65]{ph10595_cfcp-eps-converted-to.pdf}
\caption{Skin friction coefficient at the lower wall (left) and pressure coefficient at the lower and upper boundary (right). The shallower pressure coefficient curves correspond to the upper wall.}
\label{fig:ph10595_cfcp}
\end{figure*}
The turbulent flow is visualized at one time instant in Figure~\ref{fig:ch_flow}. Time-averaged results are presented in Figure~\ref{fig:ch_u}. Therein, the results are plotted in terms of the normalized mean velocity $u^+=\langle u_1\rangle/u_{\tau}$, the RMS velocity components $u^{\prime+}= \sqrt{\langle u_1^{\prime2}\rangle}/u_{\tau}$, $v^{\prime+}= \sqrt{\langle u_2^{2}\rangle}/u_{\tau}$, and $w^{\prime+}=\sqrt{\langle u_3^{2}\rangle}/u_{\tau}$, as well as the Reynolds shear stress $(u^{\prime}v^{\prime})^{+}=\langle u_1 u_2\rangle/u_{\tau}^2$, which are all normalized using the numerical value of $u_{\tau}$. The mean velocity is generally predicted very accurately in the laminar sublayer and the log-layer, where the enrichment shape functions are active. In order to get a better impression of the role of the enrichment, the numerical enrichment solution is plotted in Figure~\ref{fig:ch_u} alongside the full mean velocity solution. The enrichment solution represents the largest part of the near-wall solution in most cases, including the high velocity gradient. In particular in cases where the first off-wall cell spans a range of more than 100 wall units, the enrichment is the main contributor to the mean velocity. Solely at the lowest Reynolds number, the enrichment solution plays a minor role, which essentially means that the polynomial component is capable of resolving most of the flow. Further away from the wall we observe the characteristic log-layer mismatch that we expect in wall-attached simulations using DDES~\cite{Nikitin00,Yang17}. The log-layer mismatch is especially visible for the lower Reynolds numbers. We note that there are several techniques available in the literature that reduce this effect, for example~\cite{Shur08}. In the framework of the present enrichment methodology, it is possible to construct an alternative hybrid RANS/LES turbulence model, which does not show a log-layer mismatch by definition. We have recently developed such an approach, which is the topic of a subsequent publication~\cite{Krank17}.
The RMS velocities and the Reynolds shear stress are also presented in Figure~\ref{fig:ch_u} up to $Re_{\tau}=5{,}200$ and compared with the DNS data. These quantities show that the RANS--LES transition extends up to approximately $0.4\delta$ and the flow is in full LES mode further away from the wall. This means that we do not expect agreement with the DNS below $0.4\delta$, and the curves match the DNS above this value very well. Only in the refined case at $Re_{\tau}=2{,}000$, the RANS--LES transition happens closer to the wall.
Finally, a major advantage of the present method is the accurate prediction of the wall shear stress. In Table~\ref{tab:ch_flows}, we list the relative error of the computed wall shear stress compared to the nominal simulation parameters for each simulation case. The error lies within a few percent for all cases. Comparing the values with the errors in the skin friction coefficient presented in~\cite{Nikitin00} of up to 22\%, this is an excellent result.
We conclude from this section that wall modeling via function enrichment allows an accurate computation of the near-wall region in turbulent boundary layers with very coarse cells, while still computing the full incompressible Navier--Stokes equations in the whole boundary layer. DDES is a suitable turbulence modeling approach for wall modeling via function enrichment.
\begin{figure*}[t!]
\centering
\includegraphics[trim= 0mm 0mm 0mm 0mm,clip,width=0.65\linewidth]{ph10595_u-eps-converted-to.pdf}
\caption{Streamwise $u=\langle u_1\rangle$ and vertical $v=\langle u_2\rangle$ mean velocity, Reynolds shear stress $u'v'=\langle u_1 u_2\rangle - \langle u_1 \rangle \langle u_2\rangle$, and turbulence kinetic energy $K=1/2(u'u'+v'v'+w'w')$ of the periodic hill flow at $Re_H=10{,}595$.}
\label{fig:ph10595_um}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[trim= 0mm 63mm 0mm 0mm,clip,width=0.65\linewidth]{ph37000_u-eps-converted-to.pdf}
\caption{Streamwise $u=\langle u_1\rangle$ and vertical $v=\langle u_2\rangle$ mean velocity as well as Reynolds shear stress $u'v'=\langle u_1 u_2\rangle - \langle u_1 \rangle \langle u_2\rangle$ of the periodic hill flow at $Re_H=37{,}000$. The results of the cases CM\_WMLES\_coarse and CM\_WMLES\_fine are only available for the streamwise velocity.}
\label{fig:ph37000_um}
\end{figure*}
\subsection{Flow over periodic hills}
As a second benchmark example, we consider flow over periodic hills at the Reynolds numbers based on the hill height $H$ and bulk velocity $u_b$ of $Re_H=10{,}595$ and $Re_H=37{,}000$. Several hybrid RANS/LES methods were assessed using this flow configuration within the European initiative ``Advanced Turbulence Simulation for Aerodynamic Application Challenges'' (ATAAC)~\cite{Schwamborn12}, including DDES (see the final report by Jakirli{\'c} for cross-comparison of results). A strong adverse pressure gradient and flow separation from the curved boundary are challenging for many statistical modeling approaches, but DDES yielded very good agreement with a reference LES in that study. Also, all previous publications on wall modeling via function enrichment~\cite{Krank16,Krank16c,Krank17} used this benchmark example, and very promising results were obtained if a turbulence resolving approach was used. Reference data for this flow is provided by DNS at the lower Reynolds number~\cite{Krank17b} (available for download at~\cite{Krank17d}) and water-channel experiments~\cite{Rapp11} at the higher Reynolds number.
The computational domain is of the dimensions $9H{\times}3.036H{\times}4.5H$ in streamwise, vertical and spanwise direction, respectively, and the lower wall is given by the smoothly curved hill shape. The domain is extended periodically in the streamwise and spanwise direction, and no-slip boundary conditions are applied on the upper and lower wall. The computational setup is very similar to the simulations of the DNS~\cite{Krank17b}. Two meshes are considered at each Reynolds number, a coarser mesh with $32{\times}16{\times}16$ cells, and a finer one with $64{\times}32{\times}32$ cells. As for the previous example, the solution is represented by a polynomial of degree 4 in each cell, plus one enrichment shape function in the wall-nearest cell layer. The mesh is moderately stretched towards the no slip walls to yield a better resolution of the near-wall area, and the geometry is mapped onto the exact hill shape using an isogeometric approach. One representative mesh is displayed in Figure~\ref{fig:ph_mesh}. The wall-normal width of the enrichment layer is plotted in Figure~\ref{fig:ph_yp1} in wall coordinates. An overview of all simulation cases and resolution parameters is given in Table~\ref{tab:ph_flows}. Statistics were averaged in a simulation time interval of 61 flow-through times. One snapshot of the instantaneous velocity field is visualized in Figure~\ref{fig:ph_flow}.
We begin the discussion of the results with the skin friction and pressure coefficients $c_f$ and $c_p$. They are defined as
\begin{equation*}
c_f=\frac{\tau_w}{\frac{1}{2} \rho u_b^2},\hspace{1cm}
c_p=\frac{p-p_{\mathrm{ref}}}{\frac{1}{2} \rho u_b^2},
\label{eq:cfcpdef}
\end{equation*}
where the reference pressure $p_{\mathrm{ref}}$ is taken at $x_1=0$ at the upper wall. The results of the lower Reynolds number are compared to the DNS in Figure~\ref{fig:ph10595_cfcp}. All profiles yield very good agreement with the DNS. Solely the skin friction coefficient predicted by the coarse mesh shows an overprediction of the magnitude between $x_1/H=2$ and $x_1/H=4$. Even the characteristic peak in the skin friction on the windward side of the hill crest is predicted very well for both cases. The overall excellent agreement is also observed in the estimation of the length of the reattachment zone of $x_{1,\mathrm{reatt}}/H=4.51$ and $4.40$ (see Table~\ref{tab:ph_flows}) in comparison to the DNS result of $x_{1,\mathrm{reatt}}/H=4.51\pm0.06$.
The velocity profiles of the same Reynolds number are compared to the DNS data at ten streamwise stations in Figure~\ref{fig:ph10595_um}. The streamwise velocity agrees exceptionally well with the reference DNS. The vertical velocity shows a minor difference at $x_1/H=2$ for the coarser simulation case, but the remaining profiles essentially lie on the DNS curves. A similar level of accuracy is observed in the Reynolds shear stress distribution. The turbulence kinetic energy computed with the coarse simulation case shows an underprediction of the magnitude in the shear layer. These results also exhibit ticks in the shear layer, which are typical for high-order DG, since the discontinuity present in the velocity yields higher fluctuations near the element boundaries, see also~\cite{Krank17b}.
The excellent results obtained at the lower Reynolds number motivate an application of the wall model to a significantly higher Reynolds number. The velocity statistics are compared to the available experimental reference data at $Re_H=37{,}000$ in Figure~\ref{fig:ph37000_um}. In order to allow for a critical assessment of the present wall modeling approach, we additionally compare the results of the mean streamwise velocity with a recent implementation of an equilibrium wall model within the high-order DG~\cite{Wiart17} (cases baseline and fine in that publication). These simulations employ grids comparable to the respective coarse and fine case presented in this work and are also included in the overview in Table~\ref{tab:ph_flows}. Regarding the mean velocity, all wall-modeled cases yield larger errors as compared to the lower Reynolds number. The equilibrium wall model overpredicts the velocity in the recirculation zone, yielding a shorter reattachment length of $x_{1,\mathrm{reatt}}/H=2.3$ and $x_{1,\mathrm{reatt}}/H=2.8$ in comparison to the experiments ($x_{1,\mathrm{reatt}}/H=3.76$, see Table~\ref{tab:ph_flows}). The present wall-enriched DDES simulations overpredict the mean streamwise velocity in that region with the coarse mesh and underpredict the velocity in the fine case. Yet, the DDES cases are closer to the reference than the equilibrium model, both for the coarse and fine mesh. The reattachment lengths are computed as $x_{1,\mathrm{reatt}}/H=3.37$ and $4.53$ and confirm the observations of the mean velocity. The profiles of the vertical velocity yield differences with the reference on the lee side of the hill as a result of the different length of the separation bubble. The magnitude of the Reynolds shear stress is overpredicted by the coarse case and is accurately estimated by the fine case.
We conclude from the results of the periodic hill flow that wall modeling via function enrichment with DDES as turbulence model is well capable of computing nonequilibrium flows. This is due to the full consistency of the method, as all terms of the Navier--Stokes equations are satisfied discretely.
\section{Conclusions}
In this work, we have used the DDES methodology to model the unresolved turbulent motions in wall modeling via function enrichment. The idea of this wall model is that an additional shape function is included in each cell, which has the shape of a wall function. As a result, the Galerkin method can resolve typical attached boundary layer profiles with very coarse meshes. Since the standard high-order polynomial shape functions are still available in all cells, the method is sufficiently flexible to represent nonequilibrium boundary layers with a high pressure gradient and separated boundary layers.
Wall modeling via function enrichment with the DDES turbulence model does not provide a solution to the problems in the hybrid RANS--LES transition region in attached boundary layers. However, an alternative hybrid RANS/LES turbulence modeling approach can be constructed based on the enrichment, which a priori circumvents these problems and the associated log-layer mismatch. This turbulence model is described in a follow-up paper~\cite{Krank17}.
\section*{Acknowledgements}
Computational resources on SuperMUC in Garching, Germany, provided by the Leibniz Supercomputing Centre, under the project pr83te are gratefully acknowledged.
\section{Introduction}
Let $x:M\rightarrow S^{n}$ be a minimal immersion of a closed
surface $M$ into an n-dimensional unit sphere $S^{n}$. Let
$K_{s}={2}/(s(s+1))$ for each natural number $s$. Using an idea of
Hopf and the global coordinates on $S^{n}$, Calabi \cite{calabi}
proved that if $M$ is a 2-sphere with constant Gauss curvature
$K_{s}$ and $x$ is linearly full, then $n=2s$ and $x$ is
congruent to the $s$-th standard minimal immersion. For a more general
minimal immersion of a 2-sphere into $S^{n}$, Chern [4, p. 38]
obtained an important equality about some local invariants by
choosing a local orthonormal frame field on $S^{n}$. As a special
case, Chern showed that if the Gauss curvature $K$ is constant then
$K=K_{s}$. Furthermore, for general minimal immersion of a surface
into $S^{n}$, Kenmotsu \cite{kenmotsu1,kenmotsu2} also obtained an
important equality (see Theorem 1 in \cite{kenmotsu1}) by choosing
the frame field, which generalized Chern's result for the minimal
immersion of a 2-sphere into $S^{n}$. We observe that the choice
of the frame field plays an important role in studying minimal
surfaces. The first purpose of this paper is to establish a best
local orthonormal frame field on the closed surface minimally
immersed in $S^{n}$ with positive Gauss curvature, under which the
shape operators take the most simple forms, see Theorem \ref{thm1}
for detail.
Furthermore, using the frame field introduced in Theorem
\ref{thm1}, we can obtain an interesting result $K+K^{N}=1$ for the
Gauss curvature $K$ and the normal curvature $K^{N}$, which means
that closed minimal surfaces immersed in $S^{n}$ with positive Gauss
curvature and flat or nowhere flat normal bundle are Wintgen ideal
surfaces, see Theorem \ref{thm4}. Wintgen ideal submanifolds are a
family of submanifolds for which equality holds exactly in a DDVV
type inequality, see \cite{Chen} for instance.
Recently, a remarkable result due to Baker and Nguyen \cite{baker}
says that codimension-two surfaces satisfying a nonlinear
curvature condition depending on normal curvature smoothly evolve by
mean curvature flow to round points. In the course of estimating the
nonlinearity in the Simons identity, the authors announced an
interesting result depending on a pointwise pinching of the
intrinsic and normal curvatures.
\begin{theorem}\label{thm77} {\rm ({\cite{baker}})}
Suppose a two-surface $M$ minimally immersed in $\mathbb{S}^{4}$
satisfies $K^{N}\leq2|K|$. Then either\\
$(1)$ $S=0$
and $M$ is a geodesic sphere; or\\
$(2)$ $S\neq0$, in which
case either\\
\indent $(a)$ $K^{N}=0$ and the surface is the
Clifford torus, or\\
\indent $(b)$ $K^{N}\neq0$ and it is the Veronese surface.
\end{theorem}
By using the frame field obtained in Theorem \ref{thm1}, we
generalize this result to minimal surfaces in a unit sphere $S^{n}$
of arbitrary dimension. We prove that closed minimal surfaces
immersed in $S^{n}$ with nonnegative Gauss curvature and flat or
nowhere flat normal bundle satisfying $K^{N}\leq2K$ are the geodesic
sphere,
the Clifford torus, or the Veronese surface in $S^{4}$, see Theorem
\ref{thm7} for details.
Based on this result, we continue to consider the next pinching
$2K\leq K^{N}\leq5K$, see Theorem \ref{thm8}. Then we study the
first pinching of normal curvature $0\leq K^{N}\leq 2/3$, see
Theorem \ref{thm5}, and the next pinching $2/3\leq K^{N}\leq 5/6$,
see Theorem \ref{thm6}. Finally, we prove that closed surfaces
minimally immersed in $S^{n}$ with positive Gauss curvature and
non-zero constant normal curvature are generalized Veronese surfaces
studied by Calabi \cite{calabi} and do-Carmo-Wallach \cite{wallach}.
The paper is organized as follows. In Section 2, we introduce some
basic formulae for the theory of submanifolds and establish an
orthonormal frame field on the closed surfaces minimally immersed in
a unit sphere, which is crucial for obtaining the main theorems. In Section
3, we give some pinching theorems and their proofs.
\section{\bf Basic formulae and the frame field}\label{sec:proof1}
Let $M$ be a closed surface immersed in a
unit sphere $S^{n}$. We identify $M$ with its immersed image and agree
on the following index ranges:
$$
1\leq i, j, k, l, m ,\cdots\leq 2;\quad 3\leq
\alpha,\beta,\gamma,\delta, \cdots \leq n;\quad 1\leq A, B, C,
D,\cdots\leq n,
$$
and use the Einstein convention. We take a local orthonormal frame
field $\{e_{1},\cdots,e_{n}\}$ in $TS^{n}$ such that, restricted to
$M$, at each point of $M$, $\{e_{1},e_{2}\}$ lies in the tangent
bundle $T(M)$ and $\{e_{3},\cdots,e_{n}\}$ in the normal bundle
$N(M)$. Let $\{\omega_{1},\cdots,\omega_{n}\}$ be the dual coframe
field of $\{e_{1},\cdots,e_{n}\}$ and $(\omega_{AB})$ the Riemannian
connection form matrix associated with
$\{\omega_{1},\cdots,\omega_{n}\}$. Then $(\omega_{ij})$ defines a
Riemannian connection in $T(M)$ and $(\omega_{\alpha\beta})$ defines
a normal connection in $N(M)$. The second fundamental form of $M$
can be expressed as
$$II=\omega_{i}\otimes\omega_{i\alpha}\otimes e_{\alpha}
=h_{ij}^{\alpha}\omega_{i}\otimes\omega_{j}\otimes e_{\alpha},$$
where
$$\omega_{i\alpha}=h_{ij}^{\alpha}\omega_{j};\qquad
h_{ij}^{\alpha}=h_{ji}^{\alpha}.
$$
Let $L^{\alpha}=(h_{ij}^{\alpha})_{2\times 2}$. We denote the square
of the norm of the second fundamental form $S$ by
$$S=\sum_{(\alpha,i,j)}(h_{ij}^{\alpha})^{2}.$$
The mean curvature vector field of $M$ is expressed as
$$
h=\frac{1}{2}\sum_{\alpha=3}^{n}(h^{\alpha}_{11}+h^{\alpha}_{22})
e_{\alpha},
$$
and $M$ is minimal if and only if $h=0$. The Riemannian curvature
tensor $\{R_{ijkl}\}$ and the normal curvature tensor
$\{R_{\alpha\beta kl}\}$ are expressed as
\begin{eqnarray}\label{eq2}
R_{ijkl}=(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})
+h_{ik}^{\alpha}h_{jl}^{\alpha}-h_{il}^{\alpha}h_{jk}^{\alpha},\quad
R_{\alpha\beta
kl}=h_{km}^{\alpha}h_{ml}^{\beta}-h_{lm}^{\alpha}h_{mk}^{\beta}.
\end{eqnarray}
We
denote the normal scalar curvature $K^{N}$ by
$$K^{N}=\frac{1}{2}\sqrt{\sum_{(\alpha,\beta,i,j)}(R_{\alpha\beta ij})^{2}}.$$
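For instance, when $n=4$ the only essentially distinct component is $R_{3412}$ and the
definition reduces to
$$K^{N}=\frac{1}{2}\sqrt{4(R_{3412})^{2}}=|R_{3412}|.$$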
The first and the second order covariant derivatives of
$\{h_{ij}^{\alpha}\}$, say $\{h_{ijk}^{\alpha}\}$ and
$\{h_{ijkl}^{\alpha}\}$, are defined as follows:
$$\nabla h_{ij}^{\alpha}=h_{ijk}^{\alpha}\omega_{k}=dh_{ij}^{\alpha}
+h_{mj}^{\alpha}\omega_{mi}+h_{im}^{\alpha}\omega_{mj}+h_{ij}^{\beta}\omega_{\beta\alpha},$$
$$\nabla
h_{ijk}^{\alpha}=h_{ijkl}^{\alpha}\omega_{l}=dh_{ijk}^{\alpha}
+h_{mjk}^{\alpha}\omega_{mi}+h_{imk}^{\alpha}\omega_{mj}+h_{ijm}^{\alpha}\omega_{mk}+h_{ijk}^{\beta}\omega_{\beta\alpha}.$$
Then we have the Codazzi equation
\begin{eqnarray}
h_{ijk}^{\alpha}=h_{ikj}^{\alpha},\label{abc}
\end{eqnarray}
and the Ricci's formula
\begin{eqnarray}
h_{ijkl}^{\alpha}-h_{ijlk}^{\alpha}=h_{pj}^{\alpha}R_{pikl}
+h_{ip}^{\alpha}R_{pjkl}+h_{ij}^{\beta}R_{\beta\alpha kl}.\label{a}
\end{eqnarray}
The Laplacian of $\{h_{ij}^{\alpha}\}$ and $\{h_{ijk}^{\alpha}\}$
are defined by
$$
\Delta h_{ij}^{\alpha}=h_{ijmm}^{\alpha},\quad \Delta
h_{ijk}^{\alpha}=h_{ijkmm}^{\alpha}.
$$
It follows from (\ref{abc}) and (\ref{a}) that
\begin{eqnarray}
\Delta
h_{ij}^{\alpha}=h_{mmij}^{\alpha}+h_{pi}^{\alpha}R_{pmjm}+h_{mp}^{\alpha}R_{pijm}
+ h_{mi}^{\delta}R_{\delta\alpha jm},\label{aa}
\end{eqnarray}
\begin{eqnarray}
\Delta
h_{ijk}^{\alpha}&=& (\Delta h_{ij}^{\alpha})_{k}
+2h_{pjm}^{\alpha}R_{pikm}+2h_{ipm}^{\alpha}R_{pjkm}+h_{ijp}^{\alpha}R_{pmkm}\label{bb}\nonumber\\
&+&h_{pj}^{\alpha}R_{pikm,m}+h_{ip}^{\alpha}R_{pjkm,m}+2h_{ijm}^{\delta}R_{\delta\alpha
km} +h_{ij}^{\delta}R_{\delta\alpha km,m}.
\end{eqnarray}
In the following we
will choose an orthonormal frame field on the closed surfaces
minimally immersed in a unit sphere, under which the shape operators
have very simple forms.
If the normal bundle of $M$ immersed in $S^{n}$ is flat, the shape
operators $L^{\alpha}$ with respect to $e_{\alpha}$ can be
diagonalized simultaneously for $\alpha=3,\cdots,n$. Otherwise, at
least one of the $h^{\beta}_{12}$ is nonzero. Choosing a unit normal
vector field $\widetilde e_3=e/|e|$ where
$e=\sum_{\beta=3}^{n}h^{\beta}_{12}e_{\beta}$, and taking an
orthogonal transformation in the normal space $N_x(M)$, we have
$$
\begin{pmatrix}
\widetilde{e}_{3}\\
\\
\widetilde{e}_{4}\\
\\
\vdots \\
\\
\widetilde{e}_{n}
\end{pmatrix}
=\begin{pmatrix}
{h^{3}_{12}}{|e|^{-1}} & {h^{4}_{12}}{|e|^{-1}} & \cdots & {h^{n}_{12}}{|e|^{-1}}\\
\\
a_{43} & a_{44} & \cdots & a_{4n} \\
\\
\vdots & \vdots & & \vdots \\
\\
a_{n3} & a_{n4} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix}
e_{3} \\
\\
e_{4}\\
\\
\vdots \\
\\
e_{n}
\end{pmatrix}.
$$
Let $\widetilde{L}^{\alpha}=(\widetilde{h}^\alpha_{ij})$ be the
shape operator with respect to $\widetilde{e}_{\alpha}$. Then
$$\left\{
\begin{array}{l}
h_{ij}^{3}={h_{12}^{3}}{|e|^{-1}}\widetilde{h}_{ij}^{3}+a_{43}\widetilde{h}_{ij}^{4}
+\cdots+a_{n3}\widetilde{h}_{ij}^{n},\\
\\
h_{ij}^{4}={h_{12}^{4}}{|e|^{-1}}\widetilde{h}_{ij}^{3}+a_{44}\widetilde{h}_{ij}^{4}
+\cdots+a_{n4}\widetilde{h}_{ij}^{n},\\
\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
\\
h_{ij}^{n}={h_{12}^{n}}{|e|^{-1}}\widetilde{h}_{ij}^{3}+a_{4n}\widetilde{h}_{ij}^{4}
+\cdots+a_{nn}\widetilde{h}_{ij}^{n}.
\end{array}\right.\eqno{(\ast)}$$
Put $i=1$ and $j=2$ in $(\ast)$. Then $(\ast)$ has a unique solution
$$b:=\widetilde{h}_{12}^{3}=|e|\neq 0,\qquad
\widetilde{h}_{12}^{\beta}=0,\quad 4\leq\beta\leq n.$$ We denote
$\lambda^{\beta}=\tilde{h}_{11}^{\beta}$ for $\beta\geq 3$. So the
shape operators $\tilde{L}^{\alpha}$ have the following forms:
\begin{eqnarray}\label{d}
\tilde{L}^{3}=\begin{pmatrix} \lambda^{3} & b \\b&
\nu^{3}\end{pmatrix}, \qquad
\tilde{L}^{\beta}=\begin{pmatrix}\lambda^{\beta} & 0\\0&
\nu^{\beta}\end{pmatrix},
\end{eqnarray}
where $\beta=4,\cdots, n$.
When $M$ is a minimal surface immersed in $S^{n}$, we have
\begin{eqnarray}\label{6}
\lambda^{\alpha}+\nu^{\alpha}=0,
\end{eqnarray}
for $\alpha=3,\cdots,n.$ For convenience, we still denote the new
frame field by $\{e_{\alpha}\}$ and the corresponding second
fundamental form by $\{h_{ij}^{\alpha}\}$. We denote
$$\overline{S}:=\sum_{(i,j,\beta>3)}(h_{ij}^{\beta})^{2}
=2\sum_{(\beta>3)} (\lambda^{\beta})^{2},\quad
S_{3}:=\sum_{(i,j)}(h_{ij}^{3})^{2}=2(\lambda^{3})^{2}+2b^{2}.$$
Then
$$S=\overline{S}+S_{3}=2\sum_{(\beta>3)}(\lambda^{\beta})^{2}+2(\lambda^{3})^{2}+2b^{2}.$$
According to (\ref{6}), we define
\begin{eqnarray}
\lambda_{i}^{\alpha}:=h_{11i}^{\alpha}=-h_{22i}^{\alpha},\qquad
\lambda_{ij}^{\alpha}:=h_{11ij}^{\alpha}=-h_{22ij}^{\alpha}.
\end{eqnarray}
By the symmetry of
$h_{ijk}^{\alpha}$ and $h_{ijkl}^{\alpha}$ with respect to indices
$i,j,k$, we denote
\begin{eqnarray}
P&:=&\sum(h_{ijk}^{\alpha})^{2}
=4\sum_{\alpha=3}^{n}\left((\lambda^{\alpha}_{1})^{2}+(\lambda^{\alpha}_{2})^{2}\right),\nonumber\\
Q&:=&\sum(h_{ijkl}^{\alpha})^{2}
=4\sum_{\alpha=3}^{n}\left((\lambda^{\alpha}_{11})^{2}+(\lambda^{\alpha}_{22})^{2}
+(\lambda^{\alpha}_{12})^{2}+(\lambda^{\alpha}_{21})^{2}\right).\label{38}
\end{eqnarray}
It follows from \eqref{eq2} and \eqref{d} that the Riemannian
curvature tensor, the normal curvature tensor and the first
covariant differentials of the normal curvature tensor become
\begin{eqnarray}\label{e}
R_{ijkl}=\left(1-S/2\right)(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}),\quad
R_{3\beta 12}=-2b\lambda^{\beta},\quad
R_{\gamma\beta12}=0,
\end{eqnarray}
\begin{eqnarray}\label{f}
R_{3\beta12,k}=2(\lambda^{3}h^{\beta}_{12k}-\lambda^{\beta}h^{3}_{12k}-b\lambda^{\beta}_{k}),
\;
R_{\beta\gamma12,k}=2(\lambda^{\beta}h_{12k}^{\gamma}-\lambda^{\gamma}h_{12k}^{\beta}),
\end{eqnarray}
where $\beta,\gamma=4,\cdots,n$. It is not difficult to check that
\begin{eqnarray*}
S_{k}=2\sum h_{ij}^{\alpha}h_{ijk}^{\alpha}
=4\sum_{\beta=4}^{n}\lambda^{\beta}\lambda^{\beta}_{k}+4(\lambda^{3}\lambda^{3}_{k}
+bh^{3}_{12k}).
\end{eqnarray*}
Hence
\begin{eqnarray}
\frac{1}{4}S_{1}=\sum_{\beta=4}^{n}\lambda^{\beta}\lambda^{\beta}_{1}
+\lambda^{3}\lambda^{3}_{1}+b\lambda^{3}_{2},\qquad
\frac{1}{4}S_{2}=\sum_{\beta=4}^{n}\lambda^{\beta}\lambda^{\beta}_{2}
+\lambda^{3}\lambda^{3}_{2}-b\lambda^{3}_{1}.\label{h}
\end{eqnarray}
From Ricci's formula (\ref{a}), we have
\begin{eqnarray}\label{35}
\lambda^{3}_{12}-\lambda^{3}_{21}=-(2-S-\overline{S})b,\qquad
\lambda^{3}_{11}+\lambda^{3}_{22}=(2-S)\lambda^{3},\nonumber\\
\\
\lambda^{\beta}_{11}+\lambda^{\beta}_{22}=(2-S-2b^2)\lambda^{\beta},\qquad
\lambda^{\beta}_{12}-\lambda^{\beta}_{21}=-2b\lambda^{3}\lambda^{\beta},\nonumber
\end{eqnarray}
for $\beta=4,\cdots,n$.
Using the above formulae, we can obtain the following proposition
for later use.
\begin{proposition}\label{prop2}
Let $M$ be a surface minimally immersed in a unit sphere $S^{n}$.
Then
\begin{eqnarray}\label{45}
\frac{1}{2}\Delta S=P+(2-S)S-4b^{2}\overline{S}.
\end{eqnarray}
\end{proposition}
\begin{proof}
From (\ref{aa}) and (\ref{e}), we have
\begin{eqnarray*}
\sum_{i,j,\alpha}h_{ij}^{\alpha}\Delta h_{ij}^{\alpha}
&=&\sum_{i,j,p,m,\alpha}(h_{ij}^{\alpha}h_{pi}^{\alpha}R_{pmjm}
+h_{ij}^{\alpha} h_{mp}^{\alpha}R_{pijm})
+\sum_{i,j,m,\alpha,\delta}h_{ij}^{\alpha} h_{mi}^{\delta}R_{\delta\alpha
jm}\nonumber\\
&=&(2-S)\sum_{i,j,\alpha}(h_{ij}^{\alpha})^2+\sum_{\beta=4}^{n}4b\lambda^{\beta}R_{3\beta
12} =(2-S)S-4b^2\overline{S}.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\frac{1}{2}\Delta
S=\sum_{\alpha,i,j,k}(h_{ijk}^{\alpha})^2+\sum_{\alpha,i,j}h_{ij}^{\alpha}\Delta
h_{ij}^{\alpha}=P+(2-S)S-4b^{2}\overline{S}.
\end{eqnarray*}
\end{proof}
\begin{remark}\label{rem1}
If the normal bundle of the surface $M$ minimally immersed in $S^{n}$ is flat, we choose $e_{1}, e_{2}$
such that $b$ is zero. We easily get
$$\frac{1}{2}\Delta S=P+(2-S)S.$$
\end{remark}
Next we consider the case that the normal bundle of $M$ is nowhere
flat. In this case, $b\neq 0$ and we can establish the following
Theorem \ref{thm1}. A partial result was obtained in \cite{hou}.
Here, we give a detailed proof of the theorem for
completeness.
\begin{theorem}\label{thm1}
Let $M$
be a surface minimally immersed in a unit sphere $S^{n}$ with
nowhere flat normal bundle. If the Gauss curvature of $M$ is
positive, we can establish a local orthonormal frame field
$\{e_{3},\cdots,e_{n}\}$ normal to $M$ such that the shape operators
$L^{\alpha}$ with respect to $e_{\alpha}$ have the following forms:
\begin{eqnarray*}
L^{3}= \begin{pmatrix} 0 & b \\ b& 0\end{pmatrix}, \qquad L^{4}=
\begin{pmatrix} b & 0\\0&
-b\end{pmatrix}, \qquad L^{\beta}=
\begin{pmatrix} 0 & 0\\0&
0\end{pmatrix},
\end{eqnarray*}
where $\beta=5,\cdots,n$. Furthermore
\begin{eqnarray}\label{7}
b^2=S/{4},\qquad
\lambda^{3}_{1}=-\lambda^{4}_{2}=-\frac{1}{4\sqrt{S}}S_{2},\qquad\lambda^{3}_{2}=\lambda^{4}_{1}=\frac{1}{4\sqrt{S}}S_{1},
\end{eqnarray}
\begin{eqnarray}\label{12}
&&\lambda^{3}_{11}=-\lambda^{4}_{21}=-\frac{1}{4\sqrt{S}}S_{21},\quad\quad\quad\lambda^{3}_{12}=-\lambda^{4}_{22}=-\frac{1}{4\sqrt{S}}(S_{22}-P),\nonumber
\\\\
&&\lambda^{3}_{22}=\lambda^{4}_{12}=\frac{1}{4\sqrt{S}}S_{12},\qquad\qquad\lambda^{3}_{21}=\lambda^{4}_{11}=\frac{1}{4\sqrt{S}}(S_{11}-P).\nonumber
\end{eqnarray}
\end{theorem}
\begin{proof} We
take the orthonormal frame field
$\{e_{1},e_{2},e_{3},\cdots,e_{n}\}$ on $M$ such that the shape
operators have the form
\begin{eqnarray}\label{DDDDD}
L^{3}=\begin{pmatrix} \lambda^{3} & b
\\b& -\lambda^{3}\end{pmatrix}; \qquad
L^{\beta}=\begin{pmatrix}\lambda^{\beta} & 0\\0&
-\lambda^{\beta}\end{pmatrix},
\end{eqnarray}
where $ \beta=4,\cdots,n$. It is easy to check from (\ref{aa}) that
\begin{eqnarray}\label{l}
\sum_{i,j,k,\alpha}(h_{ijk}^{\alpha}\Delta h_{ij}^{\alpha})_{k} &=&
\sum_{i,j,\alpha}(\Delta h_{ij}^{\alpha})^2
+\sum_{i,j,k,l,p,\alpha}(h_{ijk}^{\alpha}h_{pik}^{\alpha}R_{pljl}
+h_{ijk}^{\alpha}h_{lpk}^{\alpha}R_{pijl})
\nonumber\\
&+&\sum_{i,j,k,l,p,\alpha}(h_{ijk}^{\alpha}h_{pi}^{\alpha}R_{pljl,k}
+h_{ijk}^{\alpha}h_{lp}^{\alpha}R_{pijl,k})
\nonumber\\
&+&\sum_{i,j,k,l,\alpha,\delta}h_{ijk}^{\alpha}h_{lik}^{\delta}R_{\delta\alpha
jl}
+\sum_{i,j,k,l,\alpha,\delta}h_{ijk}^{\alpha}h_{li}^{\delta}R_{\delta\alpha
jl,k}.
\end{eqnarray}
Firstly, by (\ref{aa}) and (\ref{e}), we get
\begin{eqnarray*}
&&\Delta h_{11}^{\beta}=(2-S-2b^2)\lambda^{\beta}, \qquad \Delta
h_{12}^{\beta}=2b\lambda^{3}\lambda^{\beta}, \\
&&\Delta h_{12}^{3}=(2-S-\overline{S})b, \quad\quad\quad \Delta
h_{11}^{3}=(2-S)\lambda^{3},
\end{eqnarray*}
for $\beta=4,\cdots,n$, so
\begin{eqnarray}\label{m}
\sum_{i,j,\alpha}(\Delta h_{ij}^{\alpha})^2
&=&2\sum_{\beta=4}^{n}(\Delta h_{11}^{\beta})^2+2(\Delta
h_{11}^{3})^2+2\sum_{\beta=4}^{n}(\Delta h_{12}^{\beta})^2+2(\Delta
h_{12}^{3})^2\nonumber\\
&=&(2-S)^2S+2(5S-8)b^2\overline{S}.
\end{eqnarray}\par\noindent
Secondly, using (\ref{e}) and (\ref{h}), we get
\begin{eqnarray}\label{n}
\sum_{i,j,k,l,p,\alpha}
\Big(h_{ijk}^{\alpha}h_{pik}^{\alpha}R_{pljl}
&+&h_{ijk}^{\alpha}h_{lpk}^{\alpha}R_{pijl}\Big)
+\sum_{i,j,k,l,\alpha,\delta} h_{ijk}^{\alpha}h_{lik}^{\delta}R_{\delta\alpha jl}\nonumber\\
=(2-S)P &+&\sum_{\gamma=4}^{n}8(\lambda^{3}_{1}\lambda^{\gamma}_{2}
-\lambda^{3}_{2}\lambda^{\gamma}_{1})R_{\gamma312}\nonumber\\
=(2-S)P
&+&16b\sum_{\gamma=4}^{n}(\lambda^{3}_{1}\lambda^{\gamma}\lambda^{\gamma}_{2}
-\lambda^{3}_{2}\lambda^{\gamma}\lambda^{\gamma}_{1})\nonumber\\
=(2-S)P
&+&4b^2\sum_{i,j,k}(h_{ijk}^{3})^2+4b(\lambda^{3}_{1}S_{2}-\lambda^{3}_{2}S_{1}).
\end{eqnarray}
Thirdly, by the first formula of (\ref{e}), we have
\begin{eqnarray}\label{qqq}
\sum_{i,j,k,l,p,\alpha}(h_{ijk}^{\alpha}h_{pi}^{\alpha}R_{pljl,k}
+h_{ijk}^{\alpha}h_{lp}^{\alpha}R_{pijl,k})=-\sum_{i,j,k,\alpha}h_{ij}^{\alpha}h_{ijk}^{\alpha}S_{k}=-\frac{1}{2}|\nabla
S|^2.
\end{eqnarray}
Lastly, using (\ref{f}), we have
\begin{eqnarray}\label{s}
\sum_{i,j,k,l,\alpha,\delta}
h_{ijk}^{\alpha}h_{li}^{\delta}R_{\delta\alpha jl,k}
&=&\sum_{\gamma=4}^{n}2(\lambda^{\gamma}\lambda^{3}_{2}+b\lambda^{\gamma}_{1}
-\lambda^{3}\lambda^{\gamma}_{2})R_{3\gamma12,1}\nonumber\\
&+&\sum_{\gamma=4}^{n}2(-\lambda^{\gamma}\lambda^{3}_{1}
+b\lambda^{\gamma}_{2}+\lambda^{3}\lambda^{\gamma}_{1})R_{3\gamma12,2}\nonumber\\
&+&\sum_{\beta,\gamma=4}^{n}(-2\lambda^{\gamma}\lambda^{\beta}_{2}R_{\gamma\beta12,1}
+2\lambda^{\gamma}\lambda^{\beta}_{1}R_{\gamma\beta12,2})\nonumber\\
&=&-\frac{1}{2}SP+4b^2\sum_{i,j,k}(h_{ijk}^{3})^2
+4b(\lambda^{3}_{1}S_{2}-\lambda^{3}_{2}S_{1})+\frac{1}{4}|\nabla
S|^2.\nonumber\\
\end{eqnarray}
Substituting (\ref{m}), (\ref{n}),(\ref{qqq}) and (\ref{s}) into
(\ref{l}), we have
\begin{eqnarray}\label{ttt} \sum_{i,j,k,\alpha}
(h_{ijk}^{\alpha}\Delta h_{ij}^{\alpha})_{k}
&=&(2-\frac{3}{2}S)P+(2-S)^2S+2(5S-8)b^2\overline{S}\nonumber\\
&+&8b^{2}\sum_{i,j,k}(h_{ijk}^{3})^2+8b(\lambda^{3}_{1}S_{2}-\lambda^{3}_{2}S_{1})-\frac{1}{4}|\nabla
S|^2,
\end{eqnarray}
which together with $P=\frac{1}{2}\Delta S-(2-S)S+4b^2\overline{S}$
and $S\Delta S=\frac{1}{2}\Delta S^2-|\nabla S|^2$ forces that
\begin{eqnarray}\label{tt}
\sum_{i,j,k,\alpha} (h_{ijk}^{\alpha}\Delta h_{ij}^{\alpha})_{k}
&=&\frac{1}{2}(2-S)S^2+4(S-2)b^2\overline{S}+\Delta
S-\frac{3}{8}\Delta S^2\nonumber\\
&&+8b^{2}\sum_{i,j,k}(h_{ijk}^{3})^2+8b(\lambda^{3}_{1}S_{2}-\lambda^{3}_{2}S_{1})+\frac{1}{2}|\nabla
S|^2.
\end{eqnarray}
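Note that
$$8b^{2}\sum_{i,j,k}(h_{ijk}^{3})^2+8b(\lambda^{3}_{1}S_{2}-\lambda^{3}_{2}S_{1})+\frac{1}{2}|\nabla S|^2
=2\Big(4b\lambda^{3}_{1}+\frac{1}{2}S_{2}\Big)^2+2\Big(4b\lambda^{3}_{2}-\frac{1}{2}S_{1}\Big)^2,$$
since $\sum_{i,j,k}(h_{ijk}^{3})^2=4\big((\lambda^{3}_{1})^{2}+(\lambda^{3}_{2})^{2}\big)$ and $|\nabla S|^2=S_{1}^{2}+S_{2}^{2}$.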
Integrating both sides of (\ref{tt}) over $M$, and using $\int_{M}\sum_{i,j,k,\alpha}(h_{ijk}^{\alpha}\Delta h_{ij}^{\alpha})_{k}=\int_{M}\Delta S=\int_{M}\Delta S^{2}=0$, we have
\begin{eqnarray*}
\int_{M}\left\{\frac{1}{2}S^2(2-S)+4(S-2)b^2\overline{S}
+2(4b\lambda^{3}_{1}+\frac{1}{2}S_{2})^2+2(4b\lambda^{3}_{2}-\frac{1}{2}S_{1})^2\right\}=0,
\end{eqnarray*}
that is
\begin{eqnarray}\label{u}
\int_{M}(2-S)b^2\overline{S}&=&\int_{M}\left\{\frac{1}{8}S^2(2-S)+\frac{1}{2}(4b\lambda^{3}_{1}+\frac{1}{2}S_{2})^2+\frac{1}{2}(4b\lambda^{3}_{2}-\frac{1}{2}S_{1})^2\right\}\nonumber\\
&\geq&\int_{M}\frac{1}{8}S^2(2-S),
\end{eqnarray}
and the equality holds if and only if
$$b\lambda^{3}_{1}=-\frac{1}{8}S_{2},\qquad b\lambda^{3}_{2}=\frac{1}{8}S_{1}.$$
On the other hand, it is easy to check
\begin{eqnarray}\label{v}
b^2\overline{S}\leq\frac{1}{2}S_{3}\overline{S}\leq\frac{1}{8}(S_{3}
+\overline{S})^2=\frac{1}{8}S^2,
\end{eqnarray}
and the equality holds if and only if
\begin{eqnarray*}\lambda^{3}=0,\quad \overline{S}=S_{3}=2b^2.\end{eqnarray*}
Since the Gauss curvature of $M$ is positive, we have $S<2$. Multiplying
both sides of (\ref{v}) by $2-S>0$ and integrating over $M$, we obtain
\begin{eqnarray}\label{w}
\int_{M}(2-S)b^2\overline{S}\leq\int_{M}\frac{1}{8}S^2(2-S).
\end{eqnarray}
It follows from (\ref{u}) and (\ref{w}) that
$$\int_{M}(2-S)b^2\overline{S}=\int_{M}\frac{1}{8}S^2(2-S),$$
which implies that the equalities in (\ref{u}) and (\ref{v}) hold
always. Therefore
\begin{eqnarray*}\label{x}
\lambda^{3}=0,\quad b\lambda^{3}_{1}=-\frac{1}{8}S_{2},\quad
b\lambda^{3}_{2}=\frac{1}{8}S_{1},\quad\overline{S}=2b^2.
\end{eqnarray*}
This
together with the fact that $S=\overline{S}+2b^2$ yields
$$b^2=\frac{1}{4}S,\qquad \overline{S}=\frac{1}{2}S,\qquad\lambda^{3}_{1}=-\frac{1}{4\sqrt{S}}S_{2},\qquad\lambda^{3}_{2}=\frac{1}{4\sqrt{S}}S_{1}.$$
Therefore, since $\overline{S}=S/2>0$, there must exist an index $\beta\geq4$ such that
$\lambda^{\beta}\neq 0$. We choose a unit normal
vector field $\overline{e}_{4}=e/|e|$ where
$e=\sum_{\gamma=4}^{n}h_{11}^{\gamma}e_{\gamma}$, and take an
orthogonal transformation in the normal space $N_{x}(M)$:
$\overline{e}_{3}=e_{3}$ and
$$
\begin{pmatrix}
\overline{e}_{4}\\
\\
\overline{e}_{5}\\
\\
\vdots \\
\\
\overline{e}_{n}
\end{pmatrix}
=\begin{pmatrix}
{h^{4}_{11}}{|e|^{-1}} & {h^{5}_{11}}{|e|^{-1}} & \cdots & {h^{n}_{11}}{|e|^{-1}}\\
\\
b_{54} & b_{55} & \cdots & b_{5n} \\
\\
\vdots & \vdots & & \vdots \\
\\
b_{n4} & b_{n5} & \cdots & b_{nn}
\end{pmatrix}
\begin{pmatrix}
e_{4} \\
\\
e_{5}\\
\\
\vdots \\
\\
e_{n}
\end{pmatrix}.
$$
Let $\overline{L}^{\alpha}=(\overline{h}^\alpha_{ij})$ be the shape
operators with respect to $\overline{e}_{\alpha}$, $3\le \alpha\le
n$. It follows that
$$\left\{
\begin{array}{l}
h_{ij}^{3}=\overline{h}_{ij}^{3},\\
h_{ij}^{4}={h_{11}^{4}}{|e|^{-1}}\overline{h}_{ij}^{4}+b_{54}\overline{h}_{ij}^{5}
+\cdots+b_{n4}\overline{h}_{ij}^{n},\\
\\
h_{ij}^{5}={h_{11}^{5}}{|e|^{-1}}\overline{h}_{ij}^{4}+b_{55}\overline{h}_{ij}^{5}
+\cdots+b_{n5}\overline{h}_{ij}^{n},\\
\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
\\
h_{ij}^{n}={h_{11}^{n}}{|e|^{-1}}\overline{h}_{ij}^{4}+b_{5n}\overline{h}_{ij}^{5}
+\cdots+b_{nn}\overline{h}_{ij}^{n}.
\end{array}\right.\eqno{(\star)}$$
Put $i=1$ and $j=1$ in $(\star)$. Then it is easy to check that
$(\star)$ has the unique solution
$$\overline{h}_{11}^{4}=|e|> 0,\qquad
\overline{h}_{11}^{\gamma}=0,\qquad 5\leq\gamma\leq n.$$ Put $i=1$
and $j=2$ in $(\star)$. Then it is easy to check that $(\star)$ has
the unique solution $\overline{h}_{12}^{\gamma}=0$, $5\leq\gamma\leq n.$
Therefore
the shape operators with respect to $\{e_1, e_2; \overline{e}_{\gamma}\}^n_{\gamma=3}$
have the following forms:
\begin{eqnarray*}
\overline{L}^{3}= \begin{pmatrix} 0 & b \\ b& 0\end{pmatrix},
\qquad \overline{L}^{4}=
\begin{pmatrix} \overline{\lambda}^{4} & 0\\0&
-\overline{\lambda}^{4}\end{pmatrix}, \qquad \overline{L}^{\beta}=
\begin{pmatrix} 0 & 0\\0&
0\end{pmatrix},\qquad 5\leq\beta\leq n,
\end{eqnarray*}
where $b^2=(\overline{\lambda}^{4})^2=S/{4}$, that is
$\overline{\lambda}^{4}=b=\sqrt{S}/2.$ For convenience, we denote
the new frame field by $\{e_1, e_2; e_{\gamma}\}^n_{\gamma=3}$. So
far, we have built a frame field on $M$ such that the shape
operators have the following forms:
\begin{eqnarray*}
L^{3}= \begin{pmatrix} 0 & b \\ b& 0\end{pmatrix}, \qquad L^{4}=
\begin{pmatrix} b & 0\\0&
-b\end{pmatrix}, \qquad L^{\beta}=
\begin{pmatrix} 0 & 0\\0&
0\end{pmatrix},\qquad 5\leq\beta\leq n.
\end{eqnarray*}
Furthermore \begin{eqnarray}\label{14}
b^2=S/{4},\qquad
\lambda^{3}_{1}=-\frac{1}{4\sqrt{S}}S_{2},\qquad\lambda^{3}_{2}=\frac{1}{4\sqrt{S}}S_{1}.
\end{eqnarray}
It follows from Chern \cite{chern} and the choice of the normal
vector fields $e_{3},e_{4}$ that
\begin{eqnarray}\label{13}
\sum_{\gamma=5}^{n}\lambda^{\gamma}_{1}\lambda^{\gamma}_{2}=0,\qquad
\sum_{\gamma=5}^{n}\left((\lambda^{\gamma}_{1})^2-(\lambda^{\gamma}_{2})^2\right)=0.
\end{eqnarray}
Next we take the covariant differential of $h_{11}^{4}$ and obtain
$$h_{11k}^{4}\omega_{k}=dh_{11}^{4}+2h_{12}^{4}\omega_{21}+\sum_{\alpha=3}^{n}h_{11}^{\alpha}\omega_{\alpha4}=dh_{11}^{4}=\frac{1}{4\sqrt{S}}S_{k}\omega_{k},$$
which implies
\begin{eqnarray}\label{15}
\lambda^{4}_{1}= \lambda^{3}_{2}=\frac{1}{4\sqrt{S}}S_{1},\qquad\lambda^{4}_{2}=-\lambda^{3}_{1}=\frac{1}{4\sqrt{S}}S_{2}.
\end{eqnarray}
We take the covariant differentials of $h_{11}^{\gamma}$ and
$h_{12}^{\gamma}$ for $5\leq \gamma\leq n$,
\begin{eqnarray*}
&&h_{11k}^{\gamma}\omega_{k}=dh_{11}^{\gamma}+2h_{12}^{\gamma}\omega_{21}+\sum_{\alpha=3}^{n}h_{11}^{\alpha}\omega_{\alpha\gamma}=b\omega_{4\gamma},\\
&&h_{12k}^{\gamma}\omega_{k}=dh_{12}^{\gamma}+h_{22}^{\gamma}\omega_{21}+h_{11}^{\gamma}\omega_{12}+\sum_{\alpha=3}^{n}h_{12}^{\alpha}\omega_{\alpha\gamma}=b\omega_{3\gamma},
\end{eqnarray*}
which imply
\begin{eqnarray}
\omega_{3\gamma}=\frac{1}{b}(\lambda^{\gamma}_{2}\omega_{1}-\lambda^{\gamma}_{1}\omega_{2}),\qquad
\omega_{4\gamma}=\frac{1}{b}(\lambda^{\gamma}_{1}\omega_{1}+\lambda^{\gamma}_{2}\omega_{2}).
\end{eqnarray}
We take the covariant differentials of $h_{111}^{3}$, $h_{112}^{3}$,
$h_{111}^{4}$ and $h_{112}^{4}$, respectively:
\begin{eqnarray}
&&h_{111k}^{3}\omega_{k}=dh_{111}^{3}+3h_{211}^{3}\omega_{21}+h_{111}^{4}\omega_{43}+\sum_{\gamma=5}^{n}h_{111}^{\gamma}\omega_{\gamma3},\label{16}\\
&&h_{112k}^{3}\omega_{k}=dh_{112}^{3}+2h_{212}^{3}\omega_{21}+h_{111}^{3}\omega_{12}+h_{112}^{4}\omega_{43}+\sum_{\gamma=5}^{n}h_{112}^{\gamma}\omega_{\gamma3},\label{17}\\
&&h_{111k}^{4}\omega_{k}=dh_{111}^{4}+3h_{211}^{4}\omega_{21}+h_{111}^{3}\omega_{34}+\sum_{\gamma=5}^{n}h_{111}^{\gamma}\omega_{\gamma4},\label{18}\\
&&h_{112k}^{4}\omega_{k}=dh_{112}^{4}+2h_{212}^{4}\omega_{21}+h_{111}^{4}\omega_{12}+h_{112}^{3}\omega_{34}+\sum_{\gamma=5}^{n}h_{112}^{\gamma}\omega_{\gamma4}.\label{19}
\end{eqnarray}
Then from (\ref{16}) and (\ref{19}), using (\ref{15}) and (\ref{13}), we
have
\begin{eqnarray*}
(\lambda^{3}_{11}+\lambda^{4}_{21})\omega_{1}&+&(\lambda^{3}_{12}+\lambda^{4}_{22})\omega_{2}=\sum_{\gamma=5}^{n}(\lambda_{1}^{\gamma}\omega_{\gamma3}+\lambda_{2}^{\gamma}\omega_{\gamma4})\nonumber\\
&=&-\frac{2}{b}\sum_{\gamma=5}^{n}\lambda_{1}^{\gamma}\lambda_{2}^{\gamma}\omega_{1}+\frac{1}{b}\sum_{\gamma=5}^{n}\{(\lambda_{1}^{\gamma})^2-(\lambda_{2}^{\gamma})^2\}\omega_{2}=0.
\end{eqnarray*}
It follows from (\ref{17}), (\ref{18}), (\ref{15}) and (\ref{13})
that
\begin{eqnarray*}
(\lambda^{3}_{21}-\lambda^{4}_{11})\omega_{1}&+&(\lambda^{3}_{22}-\lambda^{4}_{12})\omega_{2}=\sum_{\gamma=5}^{n}(\lambda_{2}^{\gamma}\omega_{\gamma3}-\lambda_{1}^{\gamma}\omega_{\gamma4})\nonumber\\
&=&\frac{2}{b}\sum_{\gamma=5}^{n}\lambda_{1}^{\gamma}\lambda_{2}^{\gamma}\omega_{2}+\frac{1}{b}\sum_{\gamma=5}^{n}\left((\lambda_{1}^{\gamma})^2-(\lambda_{2}^{\gamma})^2\right)\omega_{1}=0,
\end{eqnarray*}
therefore
\begin{eqnarray}\label{20}
\lambda^{3}_{11}+\lambda^{4}_{21}=0,\quad
\lambda^{3}_{12}+\lambda^{4}_{22}=0,\quad
\lambda^{3}_{21}-\lambda^{4}_{11}=0,\quad
\lambda^{3}_{22}-\lambda^{4}_{12}=0.
\end{eqnarray}
On the other hand, we study the second covariant differentials of
$S$. It is not difficult to check that, for $k,l=1,2$,
\begin{eqnarray*}
S_{kl}&=&2\sum
(h_{ijl}^{\alpha}h_{ijk}^{\alpha}+h_{ij}^{\alpha}h_{ijkl}^{\alpha})=4\sum_{\alpha}(\lambda^{\alpha}_{k}\lambda^{\alpha}_{l}+h_{12k}^{\alpha}h_{12l}^{\alpha})+4b\lambda^{4}_{kl}+4bh_{12kl}^{3},
\end{eqnarray*}
which, together with $h^{\alpha}_{121}=\lambda^{\alpha}_{2}$, $h^{\alpha}_{122}=-\lambda^{\alpha}_{1}$ and $2b=\sqrt{S}$, gives
\begin{eqnarray*}
&&\lambda^{4}_{11}+\lambda^{3}_{21}=\frac{1}{2\sqrt{S}}(S_{11}-P),\qquad
\lambda^{4}_{12}+\lambda^{3}_{22}=\frac{1}{2\sqrt{S}}S_{12}\nonumber\\
&&\lambda^{4}_{22}-\lambda^{3}_{12}=\frac{1}{2\sqrt{S}}(S_{22}-P),\qquad
\lambda^{4}_{21}-\lambda^{3}_{11}=\frac{1}{2\sqrt{S}}S_{21}.
\end{eqnarray*}
This together with (\ref{20}) gives (\ref{12}). So we complete the
proof of Theorem \ref{thm1}.
\end{proof}
From now on we use the orthonormal frame field
established by Theorem \ref{thm1}. We conclude this section with
some interesting and elementary formulas which will be useful in the
next section. Firstly, it follows from Theorem \ref{thm1} that
$b^{2}\bar{S}=S^{2}/8$. So we can rewrite Proposition \ref{prop2} as
follows.
\begin{proposition}\label{prop3}
Suppose that $M$ is a closed surface minimally immersed in a unit
sphere $S^{n}$ with positive Gauss curvature and nowhere flat normal
bundle. We have
\begin{eqnarray}\label{eq1}
\frac{1}{2}\Delta S=P-\frac{1}{2}S(3S-4).
\end{eqnarray}
\end{proposition}
The Riemannian curvature tensor, the normal curvature tensor and the
first covariant differentials of the normal curvature tensor in
\eqref{e} and \eqref{f} can be simplified as
\begin{eqnarray}\label{0}
R_{ijkl}=\left(1-\frac{S}{2}\right)(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}),\quad
R_{ijkl,m}=-\frac{1}{2}S_{m}(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}),
\end{eqnarray}
\begin{eqnarray}\label{1}
R_{3412}=-\frac{1}{2}S,\quad
R_{3\beta12}=R_{4\beta12}=R_{\beta\gamma12}=0,
\end{eqnarray}
\begin{eqnarray}\label{2}
R_{3412,k}=-\frac{1}{2}S_{k},\quad
R_{3\beta12,1}=-2b\lambda^{\beta}_{1},\quad
R_{3\beta12,2}=-2b\lambda^{\beta}_{2},
\end{eqnarray}
\begin{eqnarray}
R_{\beta\gamma12,k}=0,\quad
R_{4\beta12,2}=-2b\lambda^{\beta}_{1},\quad
R_{4\beta12,1}=2b\lambda^{\beta}_{2}
\end{eqnarray}
for $5\leq\beta,\gamma\leq n$. The Ricci's formula in \eqref{35}
becomes
\begin{eqnarray}\label{e3}
&&\lambda_{11}^{3}+\lambda_{22}^{3}=0,\qquad\lambda_{12}^{3}-\lambda_{21}^{3}=\dfrac
14\sqrt{S} (3S-4),\nonumber\\
&&\lambda_{11}^{\beta}+\lambda_{22}^{\beta}=0,\qquad
\lambda_{12}^{\beta}-\lambda_{21}^{\beta}=0
\end{eqnarray}
for $5\leq\beta\leq n$.
\section{\bf Main Results}
In \cite{calabi}, Calabi considered minimal immersions of compact
surfaces without boundary and with constant Gauss curvature $K$ into
$S^{n}$. He gave a complete list of all such immersions and proved
that the set of possible values of $K$ is discrete, namely
$K=K(s)=2/(s(s+1))$, $s\in \mathbb{N}$. This led to the Simon
conjecture as follows (see \cite{simon}).
{\bf{Simon conjecture} (intrinsic version)}: Let $M$ be a compact
surface minimally immersed into $S^{n}$. If $K(s+1)\leq K\leq K(s)$
for some $s\in \mathbb{N}$, then either $K=K(s+1)$ or $K=K(s)$, and the
immersion is one of Calabi's standard minimal immersions.
There is another version of this conjecture for the extrinsic
curvature function $S$. For minimal surfaces in $S^{n}$, the two
curvature functions are related by $2K=2-S$, so Calabi's values $K=K(s)$
correspond to
$$S=S(s)=\frac{2(s-1)(s+2)}{s(s+1)}, \quad s\in \mathbb{N}.$$
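Indeed, substituting $K(s)=2/(s(s+1))$ into $S=2-2K$ gives
$$S(s)=2-\frac{4}{s(s+1)}=\frac{2\left(s^{2}+s-2\right)}{s(s+1)}=\frac{2(s-1)(s+2)}{s(s+1)}.$$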
Thus, for Calabi's standard immersions, the extrinsic version of the conjecture reads as follows.
{\bf{Simon conjecture} (extrinsic version)}: Let $M$ be a compact
surface minimally immersed into $S^{n}$. If
$$\frac{2(s-1)(s+2)}{s(s+1)}\leq S\leq \frac{2s(s+3)}{(s+1)(s+2)},\quad s\in \mathbb{N}$$
then either $S=\frac{2(s-1)(s+2)}{s(s+1)}$ or
$S=\frac{2s(s+3)}{(s+1)(s+2)}$, and the immersion is one of
Calabi's standard minimal immersions.
For a minimal immersion as considered above, $K=K(s)=1$ for $s=1$
gives $S=0$, and the immersion is an equator in $S^{3}(1)$.
$K=K(s)=\frac{1}{3}$ for $s=2$ gives $S=\frac{4}{3}$ and the
immersion is a Veronese surface in $S^{4}(1)$. $K=K(s)=\frac{1}{6}$
for $s=3$ gives $S=\frac{5}{3}$ and the immersion is a generalized
Veronese surface in $S^{6}(1)$.
So far, the Simon conjecture has been solved in the cases $s=1$ and
$s=2$; see \cite{L.H.B,B.K,K.M}. Using the frame field established
by Theorem \ref{thm1}, we give a very simple proof of the Simon
conjecture for minimal surfaces in $S^{n}$ with flat or nowhere
flat normal bundle, which is crucial for later use.
\begin{theorem}\label{thm2}
Let $M$ be a closed minimal surface
immersed in $S^{n}$ with flat or nowhere flat normal bundle. If
$0\leq S\leq {4}/{3}$, then $S=0$ or $S={4}/{3}$.
\end{theorem}
\begin{proof}
If the normal bundle is flat, it follows from Remark \ref{rem1} and $0\leq S\leq {4}/{3}$ that
$$\frac{1}{2}\Delta S=P+(2-S)S\geq (2-S)S\geq0.$$
Integrating over $M$ gives $(2-S)S\equiv0$; since $S\leq 4/3<2$, we have $S=0$.
If the normal bundle is nowhere flat, the assumption $0\leq S \leq
4/3$ implies that the Gauss curvature of the minimal surface is
positive. It follows from (\ref{eq1}) in Proposition \ref{prop3} that
\begin{eqnarray}\label{29}
\frac{1}{2}\Delta
S=P-\frac{1}{2}S(3S-4)\geq-\frac{1}{2}S(3S-4)\geq0.
\end{eqnarray}
By integration over $M$, we have $S(3S-4)=0$. Since the normal bundle
is nowhere flat, $S=4b^{2}>0$, and it follows that $S=4/3$. We complete the proof of Theorem \ref{thm2}.
\end{proof}
\begin{theorem}\label{thm3}
Let $M$ be a closed minimal surface immersed in $S^{n}$ with flat or
nowhere flat normal bundle. If ${4}/{3}\leq S \leq {5}/{3}$, then
$S={4}/{3}$ or $S={5}/{3}$.
\end{theorem}
\begin{proof}
If the normal bundle is flat, from the condition ${4}/{3}\leq S \leq
{5}/{3}$, we have $$\frac{1}{2}\Delta S=P+(2-S)S\geq (2-S)S>0.$$ By
integration, we get a contradiction.
If the normal bundle is nowhere flat, it follows from ${4}/{3}\leq S
\leq {5}/{3}$ that the Gauss curvature is positive. So we can use
the frame field introduced in Theorem \ref{thm1}. From (\ref{bb}),
we have
\begin{eqnarray}\label{a3}
\sum_{i,j,k,\alpha}h_{ijk}^{\alpha}\Delta h_{ijk}^{\alpha}
&=&\sum_{i,j,k,\alpha}(h_{ijk}^{\alpha}\Delta h_{ij}^{\alpha})_{k}
-\sum_{i,j,\alpha}(\Delta h_{ij}^{\alpha})^2+\sum_{i,j,k,p,m,\alpha}2h_{ijk}^{\alpha}h_{pj}^{\alpha}R_{pikm,m}\nonumber\\
&+&\sum_{i,j,k,p,m,\alpha}(h_{ijk}^{\alpha}h_{ijp}^{\alpha}R_{pmkm}+4h_{ijk}^{\alpha}h_{pjm}^{\alpha}R_{pikm})\nonumber\\
&+&\sum_{i,j,k,m,\alpha,\delta}2h_{ijk}^{\alpha}h_{ijm}^{\delta}R_{\delta\alpha
km}+\sum_{i,j,k,m,\alpha,\delta}h_{ijk}^{\alpha}h_{ij}^{\delta}R_{\delta\alpha
km,m}.
\end{eqnarray}
From (\ref{m}), we have
\begin{eqnarray}\label{b3}
\sum_{i,j,\alpha}(\Delta
h_{ij}^{\alpha})^2=(2-S)^2S+2(5S-8)b^2\overline{S}
=\frac{1}{4}S(3S-4)^2.
\end{eqnarray}
From (\ref{0}), we have
\begin{eqnarray}\label{10}
2\sum_{i,j,k,p,m,\alpha}h_{ijk}^{\alpha}h_{pj}^{\alpha}R_{pikm,m}=-\sum_{i,j,k,\alpha}h_{ij}^{\alpha}h_{ijk}^{\alpha}S_{k}=-\frac{1}{2}|\nabla
S|^2,
\end{eqnarray}
and
\begin{eqnarray}\label{9}
\sum_{i,j,k,p,m,\alpha}(4h_{ijk}^{\alpha}h_{pjm}^{\alpha}R_{pikm}
+h_{ijk}^{\alpha}h_{ijp}^{\alpha}R_{pmkm})=5(1-\frac{S}{2})P.
\end{eqnarray}
From (\ref{1}), we have
\begin{eqnarray}\label{dd6}
2\sum_{i,j,k,m,\alpha,\delta}h_{ijk}^{\alpha}h_{ijm}^{\delta}R_{\delta\alpha
km}=16(\lambda^{3}_{1}\lambda^{4}_{2}
-\lambda^{3}_{2}\lambda^{4}_{1})R_{4312}=-\frac{1}{2}|\nabla S|^2.
\end{eqnarray}
From (\ref{2}), we have
\begin{eqnarray*}
\sum_{i,j,k,m,\alpha,\delta}h_{ijk}^{\alpha}h_{ij}^{\delta}R_{\delta\alpha
km,m}=-\frac{1}{2}S\sum_{i,j,k}\sum_{\gamma=5}^{n}(h_{ijk}^{\gamma})^2-\frac{1}{4}|\nabla
S|^2,\label{dd3}
\end{eqnarray*}
which together with
\begin{eqnarray*}
\sum_{i,j,k}(h_{ijk}^{3})^2+\sum_{i,j,k}(h_{ijk}^{4})^2=\frac{1}{2S}|\nabla
S|^2
\end{eqnarray*}
forces that
\begin{eqnarray}\label{11}
\sum_{i,j,k,m,\alpha,\delta}h_{ijk}^{\alpha}h_{ij}^{\delta}R_{\delta\alpha
km,m}=-\frac{1}{2}SP\label{dd5}.
\end{eqnarray}
Substituting (\ref{b3}),(\ref{10}), (\ref{9}), (\ref{dd6}) and
(\ref{11}) into (\ref{a3}), we get
\begin{eqnarray}\label{dd}
\sum_{i,j,k,\alpha}h_{ijk}^{\alpha}\Delta h_{ijk}^{\alpha}
&=&\sum_{i,j,k,\alpha}(h_{ijk}^{\alpha}\Delta
h_{ij}^{\alpha})_{k}+\frac{5}{2}\Delta S-\frac{3}{4}\Delta
S^2\nonumber\\
&+&\frac{1}{2}|\nabla S|^2-\frac{1}{4}S(3S-4)(9S-14).
\end{eqnarray}
On the other hand, it follows from (\ref{38}) and (\ref{20}) that
$$Q=8\left((\lambda_{11}^{3})^{2}+(\lambda_{12}^{3})^{2}+(\lambda_{21}^{3})^{2}
+(\lambda_{22}^{3})^{2}\right)+4\sum_{\beta=5}^{n}\left((\lambda_{11}^{\beta})^{2}+(\lambda_{12}^{\beta})^{2}
+(\lambda_{21}^{\beta})^{2}+(\lambda_{22}^{\beta})^{2}\right).$$
Since $x^{2}+y^{2}\geq\frac{1}{2}(x-y)^{2}$, the constraint (\ref{e3}) implies
$$Q\geq 8\left((\lambda_{12}^{3})^{2}+(\lambda_{21}^{3})^{2}\right)
\geq 4\left(\lambda_{12}^{3}-\lambda_{21}^{3}\right)^{2}=\frac{1}{4}S(3S-4)^2,$$
which together with (\ref{dd}) forces that
\begin{eqnarray}\label{gg}
\frac{1}{2}\Delta P &=& \sum_{i,j,k,\alpha}h_{ijk}^{\alpha}\Delta
h_{ijk}^{\alpha}
+\sum_{i,j,k,l,\alpha}(h_{ijkl}^{\alpha})^{2}\nonumber\\
&\geq&\sum_{i,j,k,\alpha}(h_{ijk}^{\alpha}\Delta
h_{ij}^{\alpha})_{k}+\frac{5}{2}\Delta S-\frac{3}{4}\Delta
S^2-\frac{1}{2}S(3S-4)(3S-5).
\end{eqnarray}
Taking integration over $M$ on both sides of (\ref{gg}) and using
the Stokes formula, we have
\begin{eqnarray*}
0\geq-\int_{M}\frac{1}{2}S(3S-4)(3S-5).
\end{eqnarray*}
It follows that $S(3S-4)(3S-5)\equiv0$; since ${4}/{3}\leq S \leq
{5}/{3}$ and $M$ is connected, either $S\equiv{4}/{3}$ or $S\equiv{5}/{3}$. We complete the proof of Theorem \ref{thm3}.
\end{proof}
Now an important result can be obtained immediately as follows.
\begin{theorem}\label{thm4}
Let $M$ be a closed minimal surface
immersed in $S^{n}$ with positive Gauss curvature $K$ and flat or
nowhere flat normal bundle. Then $K+K^{N}=1$, i.e. $M$ is a minimal
Wintgen ideal surface.
\end{theorem}
\begin{proof}
If the normal bundle is flat, since the Gauss curvature of $M$ is
positive, we have $S<2$. It follows from Remark \ref{rem1} that
$$\frac{1}{2}\Delta S=P+(2-S)S\geq (2-S)S\geq0.$$
Integrating over $M$ gives $(2-S)S\equiv0$, and since $S<2$ we obtain $S=0$ and $K=1$; thus the Gauss curvature $K$ and the normal curvature
$K^{N}=0$ satisfy $K+K^{N}=1$.
If the normal bundle is nowhere flat, it follows
from formula \eqref{1} that the normal curvature $K^{N}=S/2=1-K$, so
$K+K^{N}=1$. This completes the proof of Theorem \ref{thm4}.
\end{proof}
\begin{remark}
In submanifold theory, the famous DDVV inequality relates the
scalar curvature, the normal scalar curvature and the mean curvature; it
was proved independently by Ge-Tang \cite{getang} and Lu \cite{Lu}.
Submanifolds are called Wintgen ideal when equality holds in the DDVV
inequality. By the equality characterization of
the DDVV inequality proved by Ge and Tang \cite{getang}, the shape
operators in Theorem \ref{thm1} attain equality in the DDVV
inequality. In this way, Theorem \ref{thm4} can also be deduced.
\end{remark}
\begin{remark}
There are also some important properties concerning the sum of the
Gauss curvature and normal curvature for closed surfaces immersed in
space forms (see \cite{pengtang}).
\end{remark}
As is well known, there are many results concerning the
pinching of the second fundamental form in a unit sphere $S^{n}$,
but few results concerning the pinching of the normal
curvature. In the following, we provide some new results
for closed surfaces immersed in a unit sphere
$S^{n}$ of arbitrary dimension, depending on a pinching of the intrinsic and
normal curvatures.
\begin{theorem}\label{thm7}
Let $M$ be a closed surface minimally
immersed in $S^{n}$ with flat or nowhere flat normal bundle
satisfying $K^{N}\leq2K$.\\
\indent$(1)$ If $K^{N}=0$, then either\\
\indent\indent $(a)$ $S=0$ and the surface is the geodesic sphere, or \\
\indent\indent $(b)$ $S=2$ and the surface is the Clifford torus.\\
\indent$(2)$ If $K^{N}\neq0$, then $K^{N}=2K$ and the surface is the
Veronese surface in $S^{4}$.
\end{theorem}
\begin{proof}
If the normal bundle is flat, we have $K^{N}=0$ and the Gauss
curvature $K$ is nonnegative. It follows from Remark \ref{rem1} that
$$\frac{1}{2}\Delta S=P+(2-S)S\geq (2-S)S\geq0.$$
Integrating over $M$ gives $(2-S)S\equiv0$; since $M$ is connected, either $S\equiv0$ and the surface is a geodesic sphere, or $S\equiv2$ and
the surface is the Clifford torus.
If the normal bundle is nowhere
flat, we have that the Gauss curvature $K$ is positive from
$K^{N}\leq2K$. It follows from $K+K^{N}=1$ in Theorem \ref{thm4}
that the assumption $K^{N}\leq2K$ is equivalent to
$S\leq 4/3$. By Theorem
\ref{thm2} we get $S=4/3$ and $M$ is the Veronese surface in
$S^{4}$.
\end{proof}
\begin{remark}
Theorem \ref{thm7} generalizes Theorem \ref{thm77} obtained by Baker
and Nguyen \cite{baker} to arbitrary codimension for surfaces
with flat or nowhere flat normal bundle.
\end{remark}
\begin{theorem}\label{thm8}
Let $M$ be a closed surface minimally
immersed in $S^{n}$ with positive Gauss curvature. If $2K\leq K^{N}\leq5K$, then either\\
$(1)$ $K^{N}=2K$ and it is the Veronese
surface in $S^{4}$, or\\
$(2)$ $K^{N}=5K$ and it is the generalized Veronese surface in
$S^{6}$.
\end{theorem}
\begin{proof}
Since the Gauss curvature $K$ is positive and $2K\leq K^{N}\leq5K$, the
normal curvature is positive everywhere, so the normal bundle is nowhere flat. It
follows from $K+K^{N}=1$ in Theorem \ref{thm4} that $2K\leq
K^{N}\leq5K$ is equivalent to $4/3\leq S\leq 5/3$. By Theorem
\ref{thm3} we get $S=4/3$ and $M$ is the Veronese surface in
$S^{4}$, or $S=5/3$ and $M$ is the generalized Veronese surface in
$S^{6}$.
\end{proof}
Next we
will give some new results concerning the normal curvature.
\begin{theorem}\label{thm5}
Let $M$ be a closed minimal surface immersed in $S^{n}$ with positive Gauss
curvature and flat or nowhere flat normal bundle. If $0\leq K^{N}\leq2/3$, then either\\
$(1)$ $K^{N}=0$ and it is a geodesic sphere, or\\
$(2)$ $K^{N}=2/3$ and it is the Veronese surface in $S^{4}$.
\end{theorem}
\begin{proof}
Since the Gauss curvature $K$ is positive and the normal bundle is
flat or nowhere flat, it follows from Theorem \ref{thm4} that
$K+K^{N}=1$. So $0\leq K^{N}\leq2/3$ is equivalent to
$0\leq S\leq 4/3$. By Theorem
\ref{thm2} we get $S=0$ and $M$ is a geodesic sphere, or $S=4/3$ and
$M$ is the Veronese surface in $S^{4}$.
\end{proof}
\begin{theorem}\label{thm6}
Let $M$ be a closed minimal surface
immersed in $S^{n}$ with positive Gauss curvature. If $2/3\leq
K^{N}\leq 5/6$,
then\\
$(1)$ $K^{N}=2/3$ and $M$ is the Veronese surface in
$S^{4}$; or\\
$(2)$ $K^{N}=5/6$ and $M$ is a generalized Veronese surface in
$S^{6}$.
\end{theorem}
\begin{proof}
We observe that the condition $2/3\leq K^{N}\leq5/6$ implies that
the normal bundle is nowhere flat. It follows from Theorem
\ref{thm4} that $K+K^{N}=1$. So $2/3\leq K^{N}\leq5/6$ is equivalent
to
$4/3\leq S\leq 5/3$. By Theorem
\ref{thm3} we get $S=4/3$ and $M$ is the Veronese surface in
$S^{4}$, or $S=5/3$ and $M$ is the generalized Veronese surface in
$S^{6}$.
\end{proof}
We conclude this paper with the following theorem.
\begin{theorem}\label{thm10}
Let $M$ be a closed minimal surface
immersed in $S^{n}$ with positive Gauss curvature. If $K^{N}$ is a
non-zero constant on $M$, then $K$ is constant and the
immersion is one of the generalized Veronese surfaces.
\end{theorem}
Indeed, since $K^{N}$ is a non-zero constant, the normal bundle is nowhere
flat, so Theorem \ref{thm4} gives $K=1-K^{N}$, which is a positive constant;
the conclusion then follows from Calabi's classification \cite{calabi}.
\section*{Acknowledgments}
The author was partially supported by Chern Institute of
Mathematics. The authors would like to thank the referees for their
professional suggestions about this paper which led to various
improvements.
\bibliographystyle{amsplain}
\section{Introduction}
In this paper, we use explicit methods to study
rational points and divisors on the \textit{Fermat quartic} $F_4 \subset {\mathbb P}^2$
defined by the homogeneous equation
\[ X^4 + Y^4 = Z^4. \]
We study the $4$-torsion points $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$
on the Jacobian variety $\mathop{\mathrm{Jac}}\nolimits(F_4)$.
Then we give applications to the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$
and the rational points on $F_4$.
In the 17th century, Fermat proved that
all of the ${\mathbb Q}$-rational points on $F_4$ satisfy $XYZ = 0$.
(The points on $F_4$ satisfying $XYZ = 0$ are called the \textit{cusps}.)
Since the work of Fermat, the arithmetic of $F_4$ has been studied
by many mathematicians because, together with the Fermat curves of higher degree,
it provides a nice testing ground for more general theories.
In 1960, Faddeev studied the arithmetic of $\mathop{\mathrm{Jac}}\nolimits(F_4)$ using
methods from algebraic geometry \cite{Faddeev}.
He constructed an isogeny of degree $8$ defined over ${\mathbb Q}$
from $\mathop{\mathrm{Jac}}\nolimits(F_4)$ to the product of three elliptic curves.
Since the Mordell-Weil groups of these elliptic curves over ${\mathbb Q}$
are finite of $2$-power order,
it follows that the Mordell-Weil group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q})$
is also finite of $2$-power order.
It is an interesting problem to study the $2$-power torsion points.
But the situation was not clear in the literature.
In \cite{Faddeev}, Faddeev claimed (without proof) that
$\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q})$ has order $32$,
and the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$
over the $8$-th cyclotomic field ${\mathbb Q}(\zeta_8)$ is finite.
(But he did not mention the structure of the Mordell-Weil group.)
He also indicated that he could determine all of the points on $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$;
see \cite[p.1150]{Faddeev}.
As far as the authors of this paper know,
the precise statements of Faddeev's claims (and proofs of them)
have not appeared in the literature.
Nevertheless, Faddeev's results and claims have been cited several times.
Kenku used them to study $2$-power torsion points
on elliptic curves over quadratic fields \cite{Kenku}.
Klassen revisited Faddeev's results in his thesis \cite{Klassen:Thesis}.
Schaefer-Klassen used the finiteness of $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$
to study the torsion packet on $F_4$ \cite[Section 6]{KlassenSchaefer}.
In this paper,
we use explicit methods to study the $4$-torsion points
$\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ on $\mathop{\mathrm{Jac}}\nolimits(F_4)$.
We give an explicit description of
the action of the absolute Galois group
$\mathop{\mathrm{Gal}}\nolimits(\overline{{\mathbb Q}}/{\mathbb Q})$ on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
As applications of our results,
we give precise statements of the claims alluded to by Faddeev, and prove them.
Thus we fill the gap in the literature.
Here is a summary of the results we obtain in this paper:
\begin{enumerate}
\item (Theorem \ref{MainTheorem1})\ We give $6$ divisors of degree $0$ on $F_4$
which give a basis of $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ as a ${\mathbb Z}/4{\mathbb Z}$-module.
Five of them are supported on the cusps, but one of them is not.
\item (Theorem \ref{MainTheorem2})\ We calculate
the mod $4$ Galois representation:
\[ \rho_4 \colon \mathop{\mathrm{Gal}}\nolimits(\overline{{\mathbb Q}}/{\mathbb Q}) \to \mathop{\mathrm{Aut}}\nolimits(\mathop{\mathrm{Jac}}\nolimits(F_4)[4]) \cong \mathop{\mathrm{GL}}\nolimits_6({\mathbb Z}/4{\mathbb Z}). \]
The image of $\rho_4$ is isomorphic to
the dihedral group of order $8$,
and the kernel of $\rho_4$ corresponds to the number field ${\mathbb Q}(2^{1/4},\zeta_8)$,
which is a quadratic extension of ${\mathbb Q}(\zeta_8)$.
(We also calculate the Weil pairing.
We determine the image of $\rho_4$
inside the symplectic similitude group $\mathrm{GSp}_6({\mathbb Z}/4{\mathbb Z})$.
See Theorem \ref{Theorem:WeilPairing} and Corollary \ref{Corollary:WeilPairingGSp(6)}.)
\item (Theorem \ref{MainTheorem3})\ We calculate
the Mordell-Weil group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$.
We show $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is isomorphic to
$({\mathbb Z}/4{\mathbb Z})^{\oplus 5} \oplus {\mathbb Z}/2{\mathbb Z}$ generated by
divisors supported on the cusps.
(We also calculate the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$
and its generators over each subfield of ${\mathbb Q}(\zeta_8)$
(i.e., over ${\mathbb Q}$, ${\mathbb Q}(\sqrt{-1})$, ${\mathbb Q}(\sqrt{2})$, and ${\mathbb Q}(\sqrt{-2})$).
See Appendix \ref{Appendix:MordellWeilGroupSubfields}.)
\item (Theorem \ref{MainTheorem4})\
We determine all of the points on $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$.
We list all of them.
It turns out that $F_4$ has $188$ such points in total.
Except for the $12$ cusps, none of them is defined over ${\mathbb Q}(\zeta_8)$.
There exist $48$ points defined over ${\mathbb Q}(2^{1/4},\zeta_8)$,
$32$ points defined over ${\mathbb Q}(\zeta_3, \zeta_8)$,
and $96$ points defined over ${\mathbb Q}(\sqrt{-7}, \zeta_8)$.
\end{enumerate}
Note that we take advantage of using results and techniques
which were not available in the 1960s.
Especially, our proof depends on Rohrlich's results on the subgroup of
the Mordell-Weil group generated by
divisors supported on the cusps \cite{Rohrlich}.
We obtained and confirmed key computational results with
the aid of computer algebra systems Maxima, Sage, and Singular
\cite{Maxima}, \cite{Sage}, \cite{Singular}, \cite{Singular:DivisorsLib}.
The organization of this paper is as follows.
In Section \ref{Section:Notation},
we introduce necessary notation on the Galois group and
divisors on the Fermat quartic $F_4$.
In Section \ref{Section:Rohrlich},
we recall Rohrlich's results on the cusps.
In Section \ref{Section:Mod4},
we give a basis of the $4$-torsion points $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
In Section \ref{Section:GaloisAction}, we calculate the Galois action.
Using these results, the calculation of
the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$ over ${\mathbb Q}(\zeta_8)$
becomes an exercise in linear algebra.
We calculate it in Section \ref{Section:MordellWeilGroupQZeta8}.
In Section \ref{Section:QuadraticPoints},
we determine all of the points on $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$.
This paper has $5$ appendices.
In Appendix \ref{Appendix:MordellWeilGroupSubfields},
we calculate the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$
over each subfield of ${\mathbb Q}(\zeta_8)$.
In Appendix \ref{Appendix:WeilPairing},
we calculate the Weil pairing on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
In Appendix \ref{Appendix:Automorphisms},
we calculate the action of the automorphism group of $F_4$
on the $4$-torsion points $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
In Appendix \ref{Appendix:Experimental},
we briefly explain the authors' experimental methods to find a $4$-torsion point
which does not belong to the subgroup generated by divisors
supported on the cusps.
Finally, in Appendix \ref{Appendix:MethodsCalculation},
we give several remarks on the methods of calculation.
The initial motivation of this work was to
understand Faddeev's results and claims in \cite{Faddeev},
and to apply them to study linear and symmetric determinantal representations of
the Fermat quartic $F_4$ over ${\mathbb Q}$.
Such applications will appear elsewhere;
see \cite{IshitsukaItoOhshita:KleinFermatQuartic}
for the summary of our results.
\section{Notation}
\label{Section:Notation}
In this section, we shall introduce necessary notation on
the Galois group and divisors which will be used in the rest of
this paper.
We fix an embedding of an algebraic closure $\overline{{\mathbb Q}}$ of ${\mathbb Q}$
into the field ${\mathbb C}$ of complex numbers.
We also put
$\zeta_n := \exp(2 \pi i/n)$.
Hence we have $(\zeta_{mn})^m = \zeta_n$ for any $n,m \geq 1$.
For a positive integer $a \geq 1$, we put $\sqrt{-a} := \sqrt{a} \zeta_4$.
The element $2^{1/4}$ denotes
the unique positive real number whose fourth power is $2$.
The following relation holds:
$\zeta_8 + \zeta^7_8 = (2^{1/4})^2 = \sqrt{2}$.
The field ${\mathbb Q}(2^{1/4},\zeta_8)$
is a Galois extension of ${\mathbb Q}$ of degree $8$.
The Galois group $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q})$
is generated by the following two automorphisms $\sigma,\tau$:
$\sigma$ is an element of order $4$ satisfying
$\sigma(\zeta_8) = \zeta_8^5 = -\zeta_8$ and
$\sigma(2^{1/4}) = 2^{1/4} \zeta_4$,
and $\tau$ is an element of order $2$ satisfying
$\tau(\zeta_8) = \zeta_8^7$ and
$\tau(2^{1/4}) = 2^{1/4}$.
These elements satisfy $\tau \sigma \tau = \sigma^{3}$.
We see that $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q})$ is isomorphic to
the dihedral group with $8$ elements.
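For instance, this dihedral structure can be confirmed by a short Sage computation; the following is a minimal sketch (the variable names are ours, and $L$ is the Galois closure of ${\mathbb Q}(2^{1/4})$, which equals ${\mathbb Q}(2^{1/4},\zeta_8)$):
\begin{verbatim}
# A minimal Sage sketch: Gal(Q(2^(1/4), zeta_8)/Q) is dihedral of order 8.
K.<a> = NumberField(x^4 - 2)       # Q(2^(1/4))
L = K.galois_closure('b')          # = Q(2^(1/4), zeta_8)
print(L.degree(), L.galois_group().is_isomorphic(DihedralGroup(4)))
# expected: 8 True
\end{verbatim}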
The \textit{Fermat quartic} is a smooth projective curve over ${\mathbb Q}$ defined by
\[ F_4 := \{\, [X:Y:Z] \in {\mathbb P}^2 \mid X^4 + Y^4 = Z^4 \,\}. \]
Its Jacobian variety $\mathop{\mathrm{Jac}}\nolimits(F_4)$ is an abelian variety of dimension $3$ over ${\mathbb Q}$.
The group of $4$-torsion points
\[ \mathop{\mathrm{Jac}}\nolimits(F_4)[4] := \{\, \alpha \in \mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}}) \mid [4] \alpha = 0 \,\} \]
is a free ${\mathbb Z}/4{\mathbb Z}$-module of rank $6$
with an action of $\mathop{\mathrm{Gal}}\nolimits(\overline{{\mathbb Q}}/{\mathbb Q})$,
where $[4]$ denotes the multiplication-by-4 isogeny on $\mathop{\mathrm{Jac}}\nolimits(F_4)$.
Since $F_4$ has a ${\mathbb Q}$-rational point (such as $[1 : 0 : 1]$),
for any extension $k/{\mathbb Q}$,
the degree $0$ part of the Picard group of $F_4 \otimes_{{\mathbb Q}} k$
is canonically identified with the group of $k$-rational points on $\mathop{\mathrm{Jac}}\nolimits(F_4)$:
\[ \mathop{\mathrm{Pic}}\nolimits^0(F_4 \otimes_{{\mathbb Q}} k) \cong \mathop{\mathrm{Jac}}\nolimits(F_4)(k). \]
(See \cite[Chapter 8, Proposition 4]{BoschLuetkebohmertRaynaud},
\cite[Section 5.7.1]{Poonen:RationalPoints}.)
The Fermat quartic $F_4$ has $12$ points satisfying $XYZ = 0$:
\begin{align*}
A_i &:= [0 : \zeta_4^i : 1], &
B_i &:= [\zeta_4^i : 0 : 1], &
C_i &:= [\zeta_8 \zeta_4^i : 1 : 0]
\end{align*}
for $0 \leq i \leq 3$.
These points are called \textit{cusps}.
All of the cusps on $F_4$ are defined over ${\mathbb Q}(\zeta_8)$.
We introduce the following points defined over ${\mathbb Q}(2^{1/4},\zeta_8)$:
\begin{align*}
P_1 &:= [ 2^{1/4} \zeta_4 : \zeta_8 : 1 ], &
P_2 &:= [ \zeta_8 : 2^{1/4} \zeta_4 : 1 ], &
P_3 &:= [ 2^{-1/4} : 2^{-1/4} : 1 ].
\end{align*}
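One checks directly that $P_1, P_2, P_3$ lie on $F_4$; for instance, the following minimal Sage sketch (with our own variable names) verifies this with exact algebraic numbers:
\begin{verbatim}
# A minimal Sage sketch: P1, P2, P3 satisfy X^4 + Y^4 = Z^4.
z8 = QQbar.zeta(8); z4 = z8^2
r4 = QQbar(2).nth_root(4)               # the positive real 2^(1/4)
F = lambda X, Y, Z: X^4 + Y^4 - Z^4
print(F(r4*z4, z8, 1), F(z8, r4*z4, 1), F(1/r4, 1/r4, 1))
# expected: 0 0 0
\end{verbatim}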
We embed $F_4$ into $\mathop{\mathrm{Jac}}\nolimits(F_4)$ by taking $B_0$ as a base point.
For each $0 \leq i \leq 3$, we define
$\alpha_i, \beta_i, \gamma_i \in \mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}})$ by
\begin{align*}
\alpha_i &:= [A_i - B_0], &
\beta_i &:= [B_i - B_0], &
\gamma_i &:= [C_i - B_0],
\end{align*}
where the linear equivalence class of a divisor $D$ is denoted by $[D]$.
We define
$e_1, e_2, e_3, e_4, e_5 \in \mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}})$ by
\begin{align*}
e_1 &:= \alpha_1, &
e_2 &:= \alpha_2, &
e_3 &:= \beta_1, &
e_4 &:= \beta_2, &
e_5 &:= \gamma_1.
\end{align*}
We also define $e_6, e'_6 \in \mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}})$ by
\begin{align*}
e_6 &:= [A_1 + A_2 + B_1 + B_2 + C_1 + C_2 - 6 B_0], \\
e'_6 &:= [P_1 + P_2 + P_3 - 3 B_0].
\end{align*}
It turns out that the divisor $P_1 + P_2 + P_3$,
defined over ${\mathbb Q}(2^{1/4},\zeta_8)$, plays an important role
in the arithmetic of the Fermat quartic $F_4$.
This divisor does not seem to have received special attention before.
\section{Rohrlich's results on divisor classes of the cusps}
\label{Section:Rohrlich}
Let
$\mathscr{C} \subset \mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}})$
be the subgroup generated by
$\alpha_i, \beta_i, \gamma_i \ (0 \leq i \leq 3)$.
(The group $\mathscr{C}$ is denoted by $\mathscr{D}^{\infty}/\mathscr{F}^{\infty}$
in \cite{Rohrlich}.)
Rohrlich determined all of the relations between $\alpha_i, \beta_i, \gamma_i$,
and calculated the group $\mathscr{C}$.
Here is a summary of Rohrlich's results:
\begin{prop}[Rohrlich {\cite[p.117, Corollary 1]{Rohrlich}}]
\label{Proposition:Rohrlich}
The group $\mathscr{C}$ is isomorphic to the abelian group
generated by $\alpha_i, \beta_i, \gamma_i$ ($0 \leq i \leq 3$)
with the following relations:
\begin{align*}
0 &= 4\alpha_i = 4\beta_i = 4\gamma_i, \\
0 &= \alpha_0 + \alpha_1 + \alpha_2 + \alpha_3
= \beta_1 + \beta_2 + \beta_3
= \gamma_0 + \gamma_1 + \gamma_2 + \gamma_3, \\
0 &= \alpha_1 + \beta_1 + 2(\alpha_2 + \beta_2) +3( \alpha_3 + \beta_3), \\
0 &= \beta_1 + \gamma_1 + 2(\beta_2 + \gamma_2) + 3(\beta_3 + \gamma_3), \\
0 &= 2 (\alpha_1 + \beta_1 + \gamma_1 + \alpha_2 + \beta_2 + \gamma_2).
\end{align*}
\end{prop}
\begin{cor}
\label{Corollary:Rohrlich}
\begin{enumerate}
\item $\mathscr{C}$ is a finite abelian group killed by $4$.
\item $e_6$ is killed by $2$.
\item The following homomorphism is an isomorphism:
\begin{align*}
({\mathbb Z}/4{\mathbb Z})^{\oplus 5} \oplus ({\mathbb Z}/2{\mathbb Z}) &\overset{\cong}{\longrightarrow} \mathscr{C}, \\
(c_1,c_2,c_3,c_4,c_5,c_6) &\mapsto \sum_{i=1}^{5} c_i e_i + c_6 e_6.
\end{align*}
\end{enumerate}
\end{cor}
For later use, we note that the following equalities are satisfied:
\begin{align*}
\alpha_0 &= 2e_1 + e_2 + 2e_3 + e_4, &
\alpha_3 &= e_1 + 2e_2 + 2e_3 + 3 e_4, \\
\beta_0 &= 0, &
\beta_3 &= 3 e_3 + 3 e_4, \\
\gamma_0 &= 3 e_1 + 3 e_2 + e_3 + e_5 + e_6, &
\gamma_2 &= 3 e_1 + 3 e_2 + 3 e_3 + 3 e_4 + 3 e_5 + e_6, \\
\gamma_3 &= 2 e_1 + 2 e_2 + e_4 + 3 e_5.
\end{align*}
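The relations of Proposition \ref{Proposition:Rohrlich}, together with the trivial relation $\beta_0=[B_0-B_0]=0$, can be manipulated conveniently with a computer algebra system. The following minimal Sage sketch (with our own indexing of the generators $\alpha_0,\dots,\alpha_3,\beta_0,\dots,\beta_3,\gamma_0,\dots,\gamma_3$) recovers the group structure of Corollary \ref{Corollary:Rohrlich} and, for example, the first equality above:
\begin{verbatim}
# A minimal Sage sketch: the cuspidal group C as a quotient of ZZ^12.
V = ZZ^12                        # generators a0..a3, b0..b3, c0..c3
def rel(*pairs):                 # helper: vector with prescribed coefficients
    w = [0]*12
    for i, c in pairs:
        w[i] += c
    return V(w)
rels  = [4*V.gen(i) for i in range(12)]
rels += [rel((0,1),(1,1),(2,1),(3,1)),              # a0+a1+a2+a3 = 0
         rel((4,1)),                                # b0 = [B_0 - B_0] = 0
         rel((5,1),(6,1),(7,1)),                    # b1+b2+b3 = 0
         rel((8,1),(9,1),(10,1),(11,1)),            # c0+c1+c2+c3 = 0
         rel((1,1),(5,1),(2,2),(6,2),(3,3),(7,3)),  # a1+b1+2(a2+b2)+3(a3+b3)
         rel((5,1),(9,1),(6,2),(10,2),(7,3),(11,3)),
         2*rel((1,1),(5,1),(9,1),(2,1),(6,1),(10,1))]
C = V / V.span(rels)
print(C.invariants())            # expected: (2, 4, 4, 4, 4, 4)
# alpha_0 = 2 e_1 + e_2 + 2 e_3 + e_4, i.e. a0 = 2 a1 + a2 + 2 b1 + b2 :
print(C(V.gen(0)) == C(2*V.gen(1) + V.gen(2) + 2*V.gen(5) + V.gen(6)))
\end{verbatim}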
\section{Explicit determination of the $4$-torsion points}
\label{Section:Mod4}
By Rohrlich's results,
the subgroup $\mathscr{C} \subset \mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ has index $2$;
see Corollary \ref{Corollary:Rohrlich}.
Hence we need only to find a $4$-torsion point on $\mathop{\mathrm{Jac}}\nolimits(F_4)$
which does not belong to $\mathscr{C}$.
Currently, there seems to be no practical algorithm to calculate such a point.
Fortunately, after a good deal of trial and error,
the authors found that $e'_6$ does the job.
(See also Appendix \ref{Appendix:Experimental}.)
\begin{prop}
\label{Proposition:KeyDivisor}
The element $e'_6$ satisfies the following equality in $\mathop{\mathrm{Jac}}\nolimits(F_4)(\overline{{\mathbb Q}})$:
\[ 2 e'_6 = 2e_2 + 2e_4 + e_6. \]
\end{prop}
\begin{proof}
It is enough to show that
\[
2 P_1 + 2 P_2 + 2 P_3
- A_1 - 3 A_2 + 4 B_0 - B_1 - 3 B_2 - C_1 - C_2
\]
is linearly equivalent to $0$.
We shall give a rational function $f$ whose divisor coincides
with the above divisor.
The field ${\mathbb Q}(2^{1/4}, \zeta_8)$
is generated by an element $\delta$ such that its minimal polynomial over ${\mathbb Q}$ is
\[ X^8 - 4X^6 + 8X^4 - 4X^2 + 1, \]
and the following equalities are satisfied:
\begin{align*}
2 \cdot \delta^2 &= (2 - \sqrt{2})(1 + \zeta_4), \\
3 \cdot \zeta_8 &= 2 \delta^6 - 7 \delta^4 + 11 \delta^2 - 1, \\
3 \cdot 2^{1/4} &= \delta^7 - 5 \delta^5 + 10 \delta^3 - 8 \delta.
\end{align*}
We define the elements $c_1,c_2,c_3,c_4,c_5$ by
\begin{align*}
c_1 &:= \delta^6 - 2 \delta^4 + \delta^2 + 7, \\
c_2 &:= 2 \delta^6 - 10 \delta^4 + 20 \delta^2 - 13, \\
c_3 &:= 22 \delta^7 + 9 \delta^6 - 86 \delta^5 - 36 \delta^4 + 166 \delta^3 + 63 \delta^2 - 86 \delta - 24, \\
c_4 &:= 10 \delta^7 - 2 \delta^6 - 26 \delta^5 + 7 \delta^4 + 22 \delta^3 - 20 \delta^2 + 46 \delta - 14, \\
c_5 &:= 22 \delta^6 - 77 \delta^4 + 154 \delta^2 - 44.
\end{align*}
Then, it can be checked (with the aid of computer algebra systems)
that the following rational function $f$ satisfies the required conditions:
\begin{align*}
f &:= \frac{g_2}{g_1}, \\
g_1 &:= 3 (X^{3} + Y^{3} + Z^{3}) + c_1 (X^{2}Y + X^{2}Z + XY^{2} + XYZ + Y^{2}Z - Z^3) \\
&\quad -c_2(XYZ + XZ^{2} + YZ^{2} + Z^{3}), \\
g_2 &:= 33 (X^3 + XY^2 + XYZ - XZ^2 - Y^2 Z - YZ^2) \\
&\quad + c_3 (-X^2 Y + XYZ) + c_4 (X^2Z + XYZ - XZ^2 - YZ^2) \\
&\quad + c_5 (XZ^2 - Z^3).
\end{align*}
(See also Appendix \ref{Appendix:MethodsCalculation}.)
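For instance, the field relations for $\delta$ above can be confirmed by a short Sage computation such as the following minimal sketch (the variable names are ours):
\begin{verbatim}
# A minimal Sage sketch: the relations satisfied by delta.
R.<x> = QQ[]
K.<d> = NumberField(x^8 - 4*x^6 + 8*x^4 - 4*x^2 + 1)
z8 = (2*d^6 - 7*d^4 + 11*d^2 - 1)/3     # zeta_8
r4 = (d^7 - 5*d^5 + 10*d^3 - 8*d)/3     # 2^(1/4)
z4, sqrt2 = z8^2, r4^2
print(z8^4 == -1, r4^4 == 2, 2*d^2 == (2 - sqrt2)*(1 + z4))
# expected: True True True
\end{verbatim}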
\end{proof}
By Corollary \ref{Corollary:Rohrlich},
the element $2e_2 + 2e_4 + e_6$ is killed by $2$,
but it is not divisible by $2$ inside $\mathscr{C}$.
From these calculations,
it is straightforward to give a basis of $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
\begin{thm}
\label{MainTheorem1}
The group of $4$-torsion points $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$
is a free ${\mathbb Z}/4{\mathbb Z}$-module of rank $6$
with basis $e_1$, $e_2$, $e_3$, $e_4$, $e_5$, $e'_6$.
In other words, the following homomorphism is an isomorphism:
\begin{align*}
({\mathbb Z}/4{\mathbb Z})^{\oplus 6} &\overset{\cong}{\longrightarrow} \mathop{\mathrm{Jac}}\nolimits(F_4)[4], \\
(c_1,c_2,c_3,c_4,c_5,c'_6) &\mapsto \sum_{i=1}^{5} c_i e_i + c'_6 e'_6.
\end{align*}
In particular, all of the $4$-torsion points on $\mathop{\mathrm{Jac}}\nolimits(F_4)$
are defined over ${\mathbb Q}(2^{1/4}, \zeta_8)$.
\end{thm}
\begin{rem}
\label{Remark:ModularCurve}
The element $e'_6$ was found experimentally by the authors;
see Appendix \ref{Appendix:Experimental}.
The authors were unaware of any theoretical meaning of this element.
It would be interesting to look for a ``natural'' or ``moduli theoretic''
proof of Proposition \ref{Proposition:KeyDivisor}.
(Note that the Fermat quartic $F_4$ is isomorphic to
the modular curve $X_0(64)$ over ${\mathbb Q}$;
see \cite[Proposition 2]{Kenku}, \cite[p.454]{MShimura}, \cite[p.107]{TuYang}.)
\end{rem}
\section{Explicit calculation of the Galois action}
\label{Section:GaloisAction}
By Theorem \ref{MainTheorem1},
we see that the action of $\mathop{\mathrm{Gal}}\nolimits(\overline{{\mathbb Q}}/{\mathbb Q})$
on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ factors through $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q})$.
Hence we have the mod $4$ Galois representation
\[ \rho_4 \colon \mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q}) \to \mathop{\mathrm{Aut}}\nolimits(\mathop{\mathrm{Jac}}\nolimits(F_4)[4]) \cong \mathop{\mathrm{GL}}\nolimits_6({\mathbb Z}/4{\mathbb Z}) \]
with respect to the basis $e_1,e_2,e_3,e_4,e_5,e'_6$.
Our task is to calculate the matrices $\rho_4(\sigma), \rho_4(\tau)$ explicitly.
\begin{thm}
\label{MainTheorem2}
With respect to the basis $e_1,e_2,e_3,e_4,e_5,e'_6 \in \mathop{\mathrm{Jac}}\nolimits(F_4)[4]$,
the actions of $\sigma,\tau$ on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$
are represented by the following matrices:
\begin{align*}
\rho_4(\sigma) &= \begin{pmatrix}
1 & 0 & 0 & 0 & 2 & 1 \\
0 & 1 & 0 & 0 & 2 & 3 \\
0 & 0 & 1 & 0 & 0 & 3 \\
0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 3 & 2 \\
0 & 0 & 0 & 0 & 0 & 3
\end{pmatrix}, &
\rho_4(\tau) &= \begin{pmatrix}
1 & 0 & 0 & 0 & 3 & 0 \\
2 & 1 & 0 & 0 & 1 & 3 \\
2 & 0 & 3 & 0 & 3 & 0 \\
3 & 0 & 3 & 1 & 1 & 3 \\
0 & 0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 2 & 3
\end{pmatrix}.
\end{align*}
In particular, the mod $4$ Galois representation $\rho_4$ is injective,
and the image of $\rho_4$ is isomorphic to the dihedral group of order $8$.
\end{thm}
\begin{proof}
It is a straightforward exercise (with the aid of computer algebra systems).
We briefly give a summary of our calculations.
The actions of $\sigma,\tau$
on the cusps $A_i,B_i,C_i$ $(0 \leq i \leq 3)$
are calculated as follows:
\[
\begin{array}{|c||c|c|c|c||c|c|c|c||c|c|c|c|}
\hline
\quad & A_0 & A_1 & A_2 & A_3 & B_0 & B_1 & B_2 & B_3 & C_0 & C_1 & C_2 & C_3 \\
\hline
\sigma & A_0 & A_1 & A_2 & A_3 & B_0 & B_1 & B_2 & B_3 & C_2 & C_3 & C_0 & C_1 \\
\hline
\tau & A_0 & A_3 & A_2 & A_1 & B_0 & B_3 & B_2 & B_1 & C_3 & C_2 & C_1 & C_0 \\
\hline
\end{array}
\]
The actions of $\sigma,\tau$ on $P_1,P_2,P_3$ are calculated
as follows:
\[
\begin{array}{|c||c|c|c|}
\hline
\quad & P_1 & P_2 & P_3 \\
\hline
\sigma & [ -2^{1/4} : -\zeta_8 : 1 ] & [ -\zeta_8 : -2^{1/4} : 1 ] &
[ -2^{-1/4} \zeta_4 : -2^{-1/4} \zeta_4 : 1 ] \\
\hline
\tau & [ -2^{1/4} \zeta_4 : \zeta_8^7 : 1 ] & [ \zeta_8^7 : -2^{1/4} \zeta_4 : 1 ] & P_3 \\
\hline
\end{array}
\]
By Theorem \ref{MainTheorem1},
$\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ is a free ${\mathbb Z}/4{\mathbb Z}$-module with basis
$e_1$, $e_2$, $e_3$, $e_4$, $e_5$, $e'_6$.
Hence we need to calculate the actions of $\sigma,\tau$ on these elements.
\[
\begin{array}{|c||c|c|c|c|}
\hline
\quad & e_1 & \qquad e_2 \qquad & e_3 & \qquad e_4 \qquad \\
\hline
\sigma & e_1 & e_2 & e_3 & e_4 \\
\hline
\tau &
\begin{array}{c}
[A_3 - B_0] = \\ e_1 + 2 e_2 + 2 e_3 + 3 e_4
\end{array}
& e_2 &
\begin{array}{c}
[B_3 - B_0] = \\ 3 e_3 + 3 e_4
\end{array}
& e_4 \\
\hline
\end{array}
\]
\[
\begin{array}{|c||c|c|}
\hline
\quad & e_5 & e'_6 \\
\hline
\sigma &
\begin{array}{c}
[C_3 - B_0] = \\ 2 e_1 + 2 e_2 + e_4 + 3 e_5
\end{array}
&
e_1 + 3 e_2 + 3 e_3 + e_4 + 2 e_5 + 3 e'_6 \\
\hline
\tau &
\begin{array}{c}
[C_2 - B_0] = \\
3 e_1 + e_2 + 3 e_3 + e_4 + 3 e_5 + 2e'_6
\end{array} &
3 e_2 + 3 e_4 + 3 e'_6 \\
\hline
\end{array}
\]
From these calculations,
we see that the images of $\sigma,\tau$ under the homomorphism $\rho_4$
are given by the above matrices.
\end{proof}
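The dihedral relations satisfied by the two matrices above can also be checked directly; a minimal Sage sketch (with our own variable names) is as follows:
\begin{verbatim}
# A minimal Sage sketch: rho_4(sigma), rho_4(tau) generate a dihedral
# group of order 8 inside GL_6(Z/4Z).
M = MatrixSpace(Zmod(4), 6)
S = M([[1,0,0,0,2,1],[0,1,0,0,2,3],[0,0,1,0,0,3],
       [0,0,0,1,1,1],[0,0,0,0,3,2],[0,0,0,0,0,3]])
T = M([[1,0,0,0,3,0],[2,1,0,0,1,3],[2,0,3,0,3,0],
       [3,0,3,1,1,3],[0,0,0,0,3,0],[0,0,0,0,2,3]])
print(S^4 == M.one(), S^2 != M.one(), T^2 == M.one(), T*S*T == S^3)
G, frontier = set(), [M.one()]
while frontier:                     # naive closure: all words in S and T
    g = frontier.pop()
    g.set_immutable()
    if g not in G:
        G.add(g)
        frontier += [g*S, g*T]
print(len(G))                       # expected: 8
\end{verbatim}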
\begin{rem}
The dihedral extension ${\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q}$ appeared in
a recent work of Kiming-Rustom \cite{KimingRustom}.
They constructed a $2$-dimensional mod $4$ Galois representation
which factors through $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q})$,
and showed that it is associated with a cusp form $f$
of weight $36$ and level $1$.
They also showed that it is realized as the Galois action
on $4$-torsion points on the elliptic curve
$E \colon Y^2 = X^3 + X^2 + X + 1$,
which is a modular elliptic curve of conductor $128$
associated with a cusp form $g$ of weight $2$ and level $128$;
the cusp forms $f,g$ are congruent mod $4$.
(See \cite[Theorem 1]{KimingRustom}.)
Since $F_4 \cong X_0(64)$,
it is interesting to study the relation between
the Galois action on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ and
the $2$-dimensional mod $4$ Galois representation
constructed by Kiming-Rustom.
\end{rem}
\section{Calculation of the Mordell-Weil group over ${\mathbb Q}(\zeta_8)$}
\label{Section:MordellWeilGroupQZeta8}
In this section, as an application of our results
(Theorem \ref{MainTheorem1}, Theorem \ref{MainTheorem2}),
we calculate the Mordell-Weil group of $\mathop{\mathrm{Jac}}\nolimits(F_4)$ over ${\mathbb Q}(\zeta_8)$.
We briefly recall the isogeny constructed by Faddeev \cite{Faddeev}.
Let $E_1$ (resp.\ $E_3$)
be the smooth projective curve over ${\mathbb Q}$
birational to the affine curve $Y^2 = 1 - X^4$ (resp.\ $Y^2 = 1 + X^4$).
Then, $E_1$ (resp.\ $E_3$) is isomorphic to the elliptic curve
whose Weierstrass equation is $Y^2 = X^3 + 4X$ (resp.\ $Y^2 = X^3 - 4X$).
For any $\lambda$,
the affine curve $C_{\lambda} \colon Y^2 = 1 - \lambda X^4$ is
birational to the affine curve $C'_{\lambda} \colon Y^2 = X^3 + 4 \lambda X$
via the rational map
\[ C_{\lambda} \dashrightarrow C'_{\lambda},\quad (a,b) \mapsto (u,v) := (2 \lambda a^2/(1-b),\,4 \lambda a/(1-b)); \]
the inverse map is given by
\[ C'_{\lambda} \dashrightarrow C_{\lambda},\quad (u,v) \mapsto (2u/v,\,1 - 8 \lambda u/v^2). \]
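The fact that this map carries $C_{\lambda}$ into $C'_{\lambda}$ can be verified symbolically; the following minimal Sage sketch (with our own variable names, \texttt{lam} standing for $\lambda$) checks it:
\begin{verbatim}
# A minimal Sage sketch: (a,b) -> (u,v) maps C_lambda into C'_lambda.
R.<a, b, lam> = QQ[]
u = 2*lam*a^2/(1 - b);  v = 4*lam*a/(1 - b)
num = (v^2 - u^3 - 4*lam*u).numerator()
print(num in R.ideal(b^2 - (1 - lam*a^4)))     # expected: True
\end{verbatim}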
We put $E_2 := E_1$.
Then we have morphisms
$f_i \colon F_4 \to E_i$ $(1 \leq i \leq 3)$
of degree $2$ by
\begin{align*}
f_1([X : Y : Z]) &= ((Y/Z),\,(X/Z)^2), \\
f_2([X : Y : Z]) &= ((X/Z),\,(Y/Z)^2), \\
f_3([X : Y : Z]) &= ((X/Y),\,(Z/Y)^2).
\end{align*}
These morphisms induce a homomorphism of abelian varieties
\begin{equation}
\label{Jacobian:Isogeny}
\xymatrix{
\mathop{\mathrm{Jac}}\nolimits(F_4) \ar^-{}[r] & E_1 \times E_2 \times E_3
\qquad
}
\end{equation}
defined over ${\mathbb Q}$. It is an isogeny of degree $8$.
The following result is presumably well-known.
\begin{lem}
\label{Lemma:MordellWeilGroup1}
The Mordell-Weil group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is a finite abelian group
whose order is a power of $2$.
\end{lem}
\begin{proof}
Since the degree of the isogeny (\ref{Jacobian:Isogeny}) is a power of $2$,
it is enough to show the same assertion for each
$E_i$ ($1 \leq i \leq 3$).
Over ${\mathbb Q}(\zeta_8)$, the elliptic curves $E_1,E_2,E_3$ are isomorphic to each other.
Hence it is enough to show the assertion for $E_1$.
It should be possible to calculate $E_1({\mathbb Q}(\zeta_8))$ directly.
But the calculation of the Mordell-Weil group is not an easy problem over a number field of large degree.
Here is a simple proof of this lemma avoiding the calculation of the Mordell-Weil group
over a number field other than ${\mathbb Q}$.
The Mordell-Weil group of $E_1$ over ${\mathbb Q}(\zeta_8)$
is identified with the Mordell-Weil group of the Weil restriction
$\mathrm{Res}_{{\mathbb Q}(\zeta_8)/{\mathbb Q}}(E_1)$,
which is an abelian variety of dimension $4$ over ${\mathbb Q}$.
Since ${\mathbb Q}(\zeta_8)$ is a biquadratic field equal to
the composite of ${\mathbb Q}(\sqrt{-1})$ and ${\mathbb Q}(\sqrt{2})$,
there is an isogeny of $2$-power degree from
$\mathrm{Res}_{{\mathbb Q}(\zeta_8)/{\mathbb Q}}(E_1)$
to the product of the following $4$ elliptic curves over ${\mathbb Q}$:
\begin{align*}
\pm Y^2 &= X^3 + 4 X, &
\pm 2Y^2 &= X^3 + 4 X.
\end{align*}
It is an easy exercise to check that the Mordell-Weil groups of
these elliptic curves over ${\mathbb Q}$ are finite abelian groups of $2$-power order.
\end{proof}
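The last step of the proof can be carried out, for example, with the following minimal Sage sketch (the variable names are ours):
\begin{verbatim}
# A minimal Sage sketch: Mordell-Weil groups over Q of y^2 = x^3 + 4x
# and its quadratic twists by -1, 2, -2.
E = EllipticCurve(QQ, [4, 0])                  # y^2 = x^3 + 4x
for d in [1, -1, 2, -2]:
    Ed = E.quadratic_twist(d)
    print(d, Ed.rank(), Ed.torsion_subgroup().invariants())
# expected: rank 0 and 2-power torsion in every case
\end{verbatim}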
Let ${\mathbb F}_3$ (resp.\ ${\mathbb F}_9$) be the finite field with $3$ (resp.\ $9$) elements.
\begin{lem}
\label{Lemma:MordellWeilGroup2}
Let $\widetilde{F}_4$ be the smooth quartic over ${\mathbb F}_3$ defined by $X^4 + Y^4 = Z^4$.
Let $\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)$ be the Jacobian variety of $\widetilde{F}_4$.
\begin{enumerate}
\item For any prime number $\ell \neq 3$,
every eigenvalue of the Frobenius morphism over ${\mathbb F}_3$
on the $\ell$-adic Tate module
\[ V_{\ell} \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4) :=
\big( \varprojlim_{n} \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)[\ell^n] \big) \otimes_{{\mathbb Z}_{\ell}} {\mathbb Q}_{\ell}
\]
is a square root of $-3$.
\item
For any prime number $\ell \neq 3$,
the Frobenius morphism over ${\mathbb F}_9$ acts on
$V_{\ell} \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)$
via the scalar multiplication by $-3$.
\item
$\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)({\mathbb F}_9)$ is isomorphic to $({\mathbb Z}/4{\mathbb Z})^{\oplus 6}$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) \ $\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)$ is isogenous to
the product of three elliptic curves over ${\mathbb F}_3$
because the isogeny (\ref{Jacobian:Isogeny}) is also defined over ${\mathbb F}_3$.
Each of them is isomorphic to either
$Y^2 = X^3 + 4 X$ or $Y^2 = X^3 - 4 X$.
The number of ${\mathbb F}_3$-rational points (including the point at infinity)
of each of these elliptic curves is equal to $4$.
Hence every eigenvalue of the Frobenius morphism over ${\mathbb F}_3$
on the $\ell$-adic Tate module of these elliptic curves
is a square root of $-3$.
Since the Frobenius eigenvalues are invariant under isogeny,
the same assertion holds for $V_{\ell} \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)$.
(2) \ This assertion follows from (1) and the semisimplicity of
the action of the Frobenius morphism on the $\ell$-adic Tate module.
(3) \ By (2), the order of $\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)({\mathbb F}_9)$
is equal to $(1 - (-3))^6 = 4096$;
see \cite[Chapter IV, Section 21, Theorem 4]{Mumford:AbelianVarieties}.
For any $\ell \neq 3$,
a torsion point $x \in \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)[\ell^{\infty}]$
of $\ell$-power order is defined over ${\mathbb F}_9$
if and only if $[-3]x = x$.
Hence we have
$\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)({\mathbb F}_9) = \mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)[4]$.
It is isomorphic to $({\mathbb Z}/4{\mathbb Z})^{\oplus 6}$.
\end{proof}
\begin{rem}
For an abelian variety $A$ of dimension $g$ over
the finite field ${\mathbb F}_q$ with $q$ elements,
the Riemann Hypothesis (also called the Weil conjecture, proved by Weil himself)
for $A$ implies that the number of ${\mathbb F}_q$-rational points on $A$
is less than or equal to $(1 + q^{1/2})^{2g}$;
see \cite[Chapter IV, Section 21, Theorem 4]{Mumford:AbelianVarieties}.
Lemma \ref{Lemma:MordellWeilGroup2} (3)
shows that the maximal number of elements is realized by
$\mathop{\mathrm{Jac}}\nolimits(\widetilde{F}_4)$ over ${\mathbb F}_9$.
\end{rem}
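The point counts behind Lemma \ref{Lemma:MordellWeilGroup2} are also easy to confirm by brute force: the eigenvalues computed above predict $\#\widetilde{F}_4({\mathbb F}_3) = 3 + 1 - 0 = 4$ and $\#\widetilde{F}_4({\mathbb F}_9) = 9 + 1 + 18 = 28$. The following short Python script, included here only as a sanity check (it realizes ${\mathbb F}_9$ as ${\mathbb F}_3[i]$ with $i^2=-1$), reproduces these counts.
\begin{verbatim}
# Brute-force count of projective points on X^4 + Y^4 = Z^4 over F_3 and F_9.
# F_9 is realized as F_3[i] with i^2 = -1; an element a + b*i is the pair (a, b).

def mul(u, v):                                   # multiplication in F_9
    a, b = u
    c, d = v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def pow4(u):                                     # fourth power in F_9
    w = mul(u, u)
    return mul(w, w)

def count_projective_points(field):
    zero = (0, 0)
    affine = 0
    for x in field:
        for y in field:
            for z in field:
                if x == zero and y == zero and z == zero:
                    continue
                px, py, pz = pow4(x), pow4(y), pow4(z)
                if ((px[0] + py[0] - pz[0]) % 3,
                    (px[1] + py[1] - pz[1]) % 3) == zero:
                    affine += 1
    return affine // (len(field) - 1)            # quotient by F^* gives P^2-points

F3 = [(a, 0) for a in range(3)]
F9 = [(a, b) for a in range(3) for b in range(3)]
print(count_projective_points(F3))               # expected: 3 + 1 - 0 = 4
print(count_projective_points(F9))               # expected: 9 + 1 + 18 = 28
\end{verbatim}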
We shall show the Mordell-Weil group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is killed by $4$.
This result is presumably well-known.
One possible approach is to use the isogeny (\ref{Jacobian:Isogeny}).
But it is a non-trivial task to eliminate the possibility that
$\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ might have a non-trivial $8$-torsion point.
Instead of using (\ref{Jacobian:Isogeny}), in the following,
we shall give a proof using the reduction modulo $3$ of $\mathop{\mathrm{Jac}}\nolimits(F_4)$.
\begin{prop}
\label{Proposition:KilledBy4}
$\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is killed by $4$.
\end{prop}
\begin{proof}
(This proof was inspired by
a previous work of the second author \cite{Ito:ManinMumford}.)
Let $v$ be a finite place of ${\mathbb Q}(\zeta_8)$ above $3$.
The residue field $\kappa(v)$ at $v$ is ${\mathbb F}_9$.
The Fermat quartic $F_4$ naturally extends to
a smooth projective curve over the ring of integers of
the completion of ${\mathbb Q}(\zeta_8)$ at $v$.
Its special fiber is the quartic $\widetilde{F}_4$ in
Lemma \ref{Lemma:MordellWeilGroup2}.
By Lemma \ref{Lemma:MordellWeilGroup1},
every element of $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is of finite order.
Its order is a power of $2$. (In particular, its order is prime to $3$.)
Hence the reduction modulo $v$ homomorphism
\[
\xymatrix{
\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8)) \ar^-{}[r] & \mathop{\mathrm{Jac}}\nolimits(F_{4,v})({\mathbb F}_9)
}
\]
is injective.
The group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is killed by $4$
by Lemma \ref{Lemma:MordellWeilGroup2} (3).
\end{proof}
By Proposition \ref{Proposition:KilledBy4},
we have only to calculate the subgroup of $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$
fixed by $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q}(\zeta_8))$.
Since Theorem \ref{MainTheorem2} describes the Galois action on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$ explicitly,
this is a routine calculation in linear algebra.
\begin{thm}
\label{MainTheorem3}
The Mordell-Weil group $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ coincides
with the group $\mathscr{C}$ generated by divisors supported on the cusps.
In particular, the following homomorphism is an isomorphism:
\begin{align*}
({\mathbb Z}/4{\mathbb Z})^{\oplus 5} \oplus {\mathbb Z}/2{\mathbb Z} &\overset{\cong}{\longrightarrow} \mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8)), \\
(c_1,c_2,c_3,c_4,c_5,c_6) &\mapsto \sum_{i=1}^{5} c_i e_i + c_6 e_6.
\end{align*}
\end{thm}
\begin{proof}
By Proposition \ref{Proposition:KilledBy4},
we have $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8)) \subset \mathop{\mathrm{Jac}}\nolimits(F_4)[4]$.
Since we know the Galois action on $\mathop{\mathrm{Jac}}\nolimits(F_4)[4]$,
the calculation of $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is easy.
Since $\mathop{\mathrm{Gal}}\nolimits({\mathbb Q}(2^{1/4},\zeta_8)/{\mathbb Q}(\zeta_8))$ is generated by $\sigma^2$,
we have
\begin{align*}
\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))
&\cong \{\, v \in ({\mathbb Z}/4{\mathbb Z})^{\oplus 6} \mid \rho_4(\sigma)^2 v = v \, \}.
\end{align*}
By Theorem \ref{MainTheorem2}, the matrix $\rho_4(\sigma)^2$ is
calculated as
\[
\rho_4(\sigma)^2 =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 2 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
Hence
$\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$ is generated by $e_1$, $e_2$, $e_3$, $e_4$, $e_5$, $e_6$.
It coincides with the group $\mathscr{C}$;
see Corollary \ref{Corollary:Rohrlich}.
\end{proof}
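For the reader's convenience, the last step can also be checked mechanically; the following few lines of Python count the vectors of $({\mathbb Z}/4{\mathbb Z})^{\oplus 6}$ fixed by the displayed matrix and recover the order $2048 = 4^5\cdot 2$ of the group in the statement.
\begin{verbatim}
# Count the vectors of (Z/4Z)^6 fixed by the matrix rho_4(sigma)^2 displayed above.
from itertools import product

M = [[1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 2],
     [0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 1]]

fixed = 0
for v in product(range(4), repeat=6):
    Mv = tuple(sum(M[i][j] * v[j] for j in range(6)) % 4 for i in range(6))
    if Mv == v:
        fixed += 1
print(fixed)   # 2048 = 4^5 * 2, the order of (Z/4Z)^{+5} x Z/2Z
\end{verbatim}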
\section{Points over quadratic extensions of ${\mathbb Q}(\zeta_8)$}
\label{Section:QuadraticPoints}
In this section,
we shall determine all of the points on the Fermat quartic $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$.
We shall use the following notation:
a pair of $\overline{{\mathbb Q}}$-rational points $P,Q \in C(\overline{{\mathbb Q}})$
on a curve $C$ over a number field $K$
is called a \textit{conjugate pair over $K$}
if neither $P$ nor $Q$ is defined over $K$,
and there exists a quadratic extension $L/K$
such that both $P$ and $Q$ are defined over $L$
and $P,Q$ are interchanged by the action of $\mathop{\mathrm{Gal}}\nolimits(L/K)$.
If a pair $P,Q$ is a conjugate pair over $K$,
then the divisor $P+Q$ is defined over $K$.
The following result is essentially due to Faddeev.
\begin{lem}[Faddeev {\cite[p.1150, Section 3]{Faddeev}}]
\label{Lemma:QuadraticPoints}
Let $K$ be a number field.
Let $C \subset {\mathbb P}^2$ be a smooth plane quartic defined over $K$.
Assume that the following conditions are satisfied:
\begin{enumerate}
\item $C$ has at least $N_1$ points $P_1,\ldots,P_{N_1}$ defined over $K$,
\item $C$ has at least $N_2$ conjugate pairs
$Q_1,\overline{Q}_1,\ldots,Q_{N_2},\overline{Q}_{N_2}$ over $K$
(i.e.\ for each $1 \leq i \leq N_2$,
the pair $Q_i,\overline{Q}_i$ is a conjugate pair over $K$), and
\item the number of linear equivalence classes of effective divisors of
degree $2$ on $C$ defined over $K$ is equal to $M$.
\end{enumerate}
Then the inequality
$N_1(N_1 + 1)/2 + N_2 \leq M$
is satisfied.
Moreover, if the equality
$N_1(N_1 + 1)/2 + N_2 = M$
is satisfied, then the points
$P_1,\ldots,P_{N_1}$, $Q_1$, $\overline{Q}_1$, $\ldots$, $Q_{N_2}$, $\overline{Q}_{N_2}$
are the only points on $C$ defined over quadratic extensions of $K$.
\end{lem}
\begin{proof}
We first observe that
if $D = P + Q$ is an effective divisor of degree $2$ defined over $K$,
then either both $P$ and $Q$ are $K$-rational points,
or $P,Q$ form a conjugate pair over $K$.
In the setting of this lemma, $C$ has the following effective divisors
of degree $2$ defined over $K$:
\begin{align*}
P_i + P_j \quad (1 \leq i \leq j \leq N_1), & &
Q_k + \overline{Q}_k \quad (1 \leq k \leq N_2).
\end{align*}
In total, we have $N_1 (N_1 + 1)/2 + N_2$ effective divisors of degree $2$
defined over $K$.
Since $C$ is non-hyperelliptic, no two of them are linearly equivalent
to each other.
Hence the inequality $N_1 (N_1 + 1)/2 + N_2 \leq M$ is satisfied.
Moreover, if the equality $N_1 (N_1 + 1)/2 + N_2 = M$ is satisfied,
the above divisors represent all of the linear equivalence classes of
effective divisors of degree $2$ on $C$ defined over $K$.
Therefore, there does not exist any $K$-rational point
other than $P_1,\ldots,P_{N_1}$,
and there does not exist any conjugate pair over $K$
other than $Q_1,\overline{Q}_1,\ldots,Q_{N_2},\overline{Q}_{N_2}$.
\end{proof}
Here are some points on $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$.
\begin{enumerate}
\item There are $12$ cusps $A_i, B_i, C_i$ $(0 \leq i \leq 3)$.
All of them are defined over ${\mathbb Q}(\zeta_8)$.
\item For any $i,j$ with $0 \leq i,j \leq 3$,
the following points
\begin{align*}
[ 2^{1/4} \zeta^i_4 : \zeta^{1+2j}_8 : 1 ], & &
[ \zeta^{1+2j}_8 : 2^{1/4} \zeta^i_4 : 1 ], & &
[ 2^{-1/4} \zeta^i_4 : 2^{-1/4} \zeta^j_4 : 1 ]
\end{align*}
are defined over ${\mathbb Q}(2^{1/4},\zeta_8)$.
In total, we have $48$ points defined over ${\mathbb Q}(2^{1/4},\zeta_8)$.
Since none of them is defined over ${\mathbb Q}(\zeta_8)$,
we have $24$ conjugate pairs over ${\mathbb Q}(\zeta_8)$.
\item For any $i,j$ with $0 \leq i,j \leq 3$,
the following points
\begin{align*}
[ \zeta_3 \zeta^{1+2i}_8 : \zeta^2_3 \zeta_8^{1+2j} : 1 ], & &
[ \zeta^2_3 \zeta^{1+2i}_8 : \zeta_3 \zeta_8^{1+2j} : 1 ]
\end{align*}
are defined over ${\mathbb Q}(\zeta_3, \zeta_8)$.
In total, we have $32$ points defined over ${\mathbb Q}(\zeta_3, \zeta_8)$.
Since none of them is defined over ${\mathbb Q}(\zeta_8)$,
we have $16$ conjugate pairs over ${\mathbb Q}(\zeta_8)$.
\item We put
$\alpha := (1+\sqrt{-7})/2$ and $\overline{\alpha} := (1-\sqrt{-7})/2$.
For any $i,j$ with $0 \leq i,j \leq 3$, the following points
\begin{align*}
[ \alpha \zeta^i_4 : \overline{\alpha} \zeta^j_4 : 1 ], & &
[ \overline{\alpha} \zeta^i_4 : \alpha \zeta^j_4 : 1 ], \\
[ \overline{\alpha} \zeta^{7+2j}_8 : \zeta_4^3 : \alpha \zeta^i_4 ], & &
[ \alpha \zeta^{7+2j}_8 : \zeta_4^3 : \overline{\alpha} \zeta^i_4 ], \\
[ 1 : \alpha \zeta^{1+2i}_8 : \overline{\alpha} \zeta^{2+2j}_8 ], & &
[ 1 : \overline{\alpha} \zeta^{1+2i}_8 : \alpha \zeta^{2+2j}_8 ]
\end{align*}
are defined over ${\mathbb Q}(\sqrt{-7}, \zeta_8)$.
Note that the points in the second (resp.\ the third) row are
obtained from the points in the first row
by applying the automorphism $\theta_3$ (resp.\ $\theta^2_3$);
see Appendix \ref{Appendix:Automorphisms}.
In total, we have $96$ points defined over ${\mathbb Q}(\sqrt{-7}, \zeta_8)$.
Since none of them is defined over ${\mathbb Q}(\zeta_8)$,
we have $48$ conjugate pairs over ${\mathbb Q}(\zeta_8)$.
\end{enumerate}
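All of the points listed above lie on $F_4$ by a direct computation (for the last family this amounts to $\alpha^4+\overline{\alpha}^4=1$). The following Python script, included only as a numerical sanity check, verifies each family (taking the cusps to be the $12$ points with a vanishing coordinate) and the total count $12+48+32+96=188$.
\begin{verbatim}
# Numerical check that the listed points lie on X^4 + Y^4 = Z^4.
import cmath

z8 = cmath.exp(2j * cmath.pi / 8)        # zeta_8
z4 = z8 ** 2                              # zeta_4
z3 = cmath.exp(2j * cmath.pi / 3)        # zeta_3
r = 2 ** 0.25                             # 2^{1/4}
al = (1 + cmath.sqrt(-7)) / 2             # alpha
ab = (1 - cmath.sqrt(-7)) / 2             # alpha bar

def on_curve(P):
    x, y, z = P
    return abs(x ** 4 + y ** 4 - z ** 4) < 1e-9

R4 = range(4)
cusps = [(0, z4 ** i, 1) for i in R4] + [(z4 ** i, 0, 1) for i in R4] \
      + [(z8 ** (1 + 2 * i), 1, 0) for i in R4]
fam2 = [(r * z4 ** i, z8 ** (1 + 2 * j), 1) for i in R4 for j in R4] \
     + [(z8 ** (1 + 2 * j), r * z4 ** i, 1) for i in R4 for j in R4] \
     + [(z4 ** i / r, z4 ** j / r, 1) for i in R4 for j in R4]
fam3 = [(z3 * z8 ** (1 + 2 * i), z3 ** 2 * z8 ** (1 + 2 * j), 1)
        for i in R4 for j in R4] \
     + [(z3 ** 2 * z8 ** (1 + 2 * i), z3 * z8 ** (1 + 2 * j), 1)
        for i in R4 for j in R4]
fam4 = [(al * z4 ** i, ab * z4 ** j, 1) for i in R4 for j in R4] \
     + [(ab * z4 ** i, al * z4 ** j, 1) for i in R4 for j in R4] \
     + [(ab * z8 ** (7 + 2 * j), z4 ** 3, al * z4 ** i) for i in R4 for j in R4] \
     + [(al * z8 ** (7 + 2 * j), z4 ** 3, ab * z4 ** i) for i in R4 for j in R4] \
     + [(1, al * z8 ** (1 + 2 * i), ab * z8 ** (2 + 2 * j)) for i in R4 for j in R4] \
     + [(1, ab * z8 ** (1 + 2 * i), al * z8 ** (2 + 2 * j)) for i in R4 for j in R4]

for fam in (cusps, fam2, fam3, fam4):
    assert all(on_curve(P) for P in fam)
print(len(cusps), len(fam2), len(fam3), len(fam4),
      len(cusps) + len(fam2) + len(fam3) + len(fam4))   # 12 48 32 96 188
\end{verbatim}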
\begin{rem}
The 4 points
$[ \pm \alpha : \pm \overline{\alpha} : 1 ]$,
$[ \pm \overline{\alpha} : \pm \alpha : 1 ]$
are defined over ${\mathbb Q}(\sqrt{-7})$.
These points were given by Faddeev in \cite[p.1150, Section 3]{Faddeev}.
Note that these points were already found by Aigner in 1934 \cite{Aigner}.
(See also Mordell's 1968 paper \cite{Mordell}.)
\end{rem}
\begin{thm}
\label{MainTheorem4}
The above points on the Fermat quartic $F_4$
are the only points defined over
quadratic extensions of ${\mathbb Q}(\zeta_8)$.
In particular, the number of points on $F_4$
defined over quadratic extensions of ${\mathbb Q}(\zeta_8)$ is
\[ 12 + 48 + 32 + 96 = 188. \]
\end{thm}
\begin{proof}
We have already seen that $F_4$ has
at least $12$ points defined over ${\mathbb Q}(\zeta_8)$.
Also, $F_4$ has at least
$88 = (48 + 32 + 96)/2$
conjugate pairs over ${\mathbb Q}(\zeta_8)$.
Since $12(12+1)/2 + 88 = 166$,
it is enough to show that the number of linear equivalence classes of
effective divisors of degree $2$ on $F_4$ defined over ${\mathbb Q}(\zeta_8)$
is equal to $166$.
This can be shown as follows.
Since $F_4$ has a ${\mathbb Q}(\zeta_8)$-rational point (such as $B_0 = [1 : 0 : 1]$),
the following map is bijective:
\[
\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8)) \overset{\cong}{\longrightarrow} \mathop{\mathrm{Pic}}\nolimits^2(F_4 \otimes_{{\mathbb Q}} {\mathbb Q}(\zeta_8)),
\qquad
[D] \mapsto [D + 2 B_0].
\]
By Theorem \ref{MainTheorem3},
we know that $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$
is an abelian group of order $2048$.
We also know generators of this group explicitly.
Therefore, we can make a list of divisors representing
all of the $2048$ elements of $\mathop{\mathrm{Pic}}\nolimits^2(F_4 \otimes_{{\mathbb Q}} {\mathbb Q}(\zeta_8))$.
For each divisor of degree $2$,
there is an algorithm, based on Gr\"obner bases,
which calculates the dimension of the space of global sections of the line bundle
associated to it.
Such an algorithm is implemented in standard computer algebra systems.
Hence we can count the number of linear equivalence classes of
effective divisors of degree $2$ on $F_4$ defined over ${\mathbb Q}(\zeta_8)$.
It turns out that this number is equal to $166$.
(The authors calculated it using Singular.
See Appendix \ref{Appendix:MethodsCalculation} for the methods of calculation.)
\end{proof}
The following result may be
regarded as an analogue of Fermat's Last Theorem for
the quartic equation $X^4 + Y^4 = Z^4$ over
quadratic extensions of ${\mathbb Q}(\zeta_8)$.
\begin{cor}
\label{Corollary:FermatQuadraticQZeta8}
Let $K$ be a quadratic extension of ${\mathbb Q}(\zeta_8)$
which does not contain any of $2^{1/4}$, $\zeta_3$, $\sqrt{-7}$.
Then there are no $K$-rational points
on the Fermat quartic $F_4$
other than the $12$ cusps $A_i, B_i, C_i$ $(0 \leq i \leq 3)$.
\end{cor}
\begin{rem}
The non-existence of non-cuspidal ${\mathbb Q}(\zeta_8)$-rational points on
the Fermat quartic $F_4$ was proved by
Klassen-Schaefer \cite[Proposition 6.1]{KlassenSchaefer}.
They used the finiteness of $\mathop{\mathrm{Jac}}\nolimits(F_4)({\mathbb Q}(\zeta_8))$
and Coleman's theory of $p$-adic abelian integrals (for $p=5$).
Their proof is completely different from ours.
\end{rem}
\section{\label{sec:level1}Introduction}
Laser tweezing is a powerful noninvasive tool to control and measure the movement of Brownian colloidal particles for applications in biology, physics, and chemistry \cite{Ashkin:86,ashkin1987optical,Ashkin1517,moffitt2008recent}. By tuning the light field distribution in the illuminated region, it is possible to generate a wide variety of optical potential energy landscapes. A thermally-driven Brownian particle trapped in such a landscape represents an ideal model system for studying a wide range of fundamental phenomena. Examples to date include studies of chemical reactions \cite{kramers1940brownian,RevModPhys.62.251}, protein folding \cite{neupane2016direct}, thermodynamic relations \cite{wang2002experimental}, information flow \cite{berut2012experimental}, entropy production \cite{tietz2006measurement} and Kramers-type dynamics \cite{rondin2017direct}. By modulating optical potentials, it has even been possible to realize optical Brownian ratchets \cite{wu2016near} and single-particle microscopic Carnot engines \cite{martinez2016brownian}. Interested readers can refer to \cite{C6SM00923A} to find more examples of using optically trapped colloidal particles to build stochastic heat engines.
The tilted periodic ``washboard'' potential is a particularly important potential distribution realizable using optical tweezers technology because it can be used as an archetypal nonequilibrium model of statistical physics in a variety of systems, such as the damped pendulum, ring-laser gyroscope, Josephson junctions, superionic conduction, phase-locked loops, and charge-density-wave condensation \cite{rice2013advances, Risken1996, fulde1975problem, stratonovich1967topics}. Despite the comprehensive theoretical discussion of Brownian dynamics in a tilted washboard-type potential \cite{Risken1996,coffey2004langevin}, it is still of great interest to achieve convenient and precise shape control of such optical potentials. Moreover, it is of generic interest to be able to scale down the physical dimension of the probe particle from the micron range to the nanoscale in order to approach length scales relevant to molecular interactions and dynamics. In addition, nanoscale Brownian particles have much shorter characteristic diffusion time than their micrometer counterparts and thus allow for faster and more efficient sampling of thermodynamic transitions between different states.
In this article, we employed elliptically polarized laser tweezers to create a tilted washboard rotational potential for trapping of colloidal gold nanorods \cite{chen2013gold}. The optical anisotropy and enhanced light-matter interaction of such particles, caused by plasmon resonances \cite{lehmuskero2015laser,shao2018light}, result in extremely efficient optical confinement and rotation performance \cite{shao2018light,friese1998optical}. By adjusting the depth and tilt of the potential by control of polarization ellipticity, we successfully managed to switch the rotational movement of a nanorod from ultrafast continuous spinning to discrete stochastic rotational jumps. Using both experiments and simulations, we further investigated the jump dynamics at critical trapping polarizations and found that it quantitatively agrees with that predicted from Kramers theory \cite{kramers1940brownian,RevModPhys.62.251}. The full control of Brownian rotation of plasmonic nanorods demonstrated here provides an additional freedom of nanomotor movement manipulation and holds great potential for future investigations of fundamental questions in non-equilibrium thermodynamics. Our experimental configuration might also be useful for studies of molecular motors, optical Brownian ratchets, and optical torque wrenches for high-sensitivity biological experiments \cite{pedaci2011excitable}.
\section{Results}
\subsection{\label{sec:level2}Construction of a rotational tilted washboard potential}
An elliptically polarized plane wave with electric field $\bm{E}=E_0[\cos(\omega t)\widehat{x}+\cos(\omega t+\Delta \phi)\widehat{y}]$ can be decomposed into one linearly polarized and one circularly polarized component, $E_\text{L}$ and $E_\text{C}$, respectively (Fig.~\ref{figure1}a, a detailed analysis is provided in the Supplementary Material \footnote{See Supplemental Material at http://link.aps.org/supplemental/ for more details and discussions related to this letter.}). Once a gold nanorod is optically trapped (Fig.~\ref{figure1}b), the linear component provides a restoring torque $M_\text{L}$ that tends to align it along the corresponding polarization direction while the circular component induces a torque $M_\text{C}$ that tends to spin the particle around the direction of incidence to overcome the potential barrier formed by $M_\text{L}$ (Fig.~\ref{figure1}c) \cite{ruijgrok2011brownian}. Both torques are determined by angular momentum transfer due to light absorption as well as scattering \cite{shao2018light,shao2015gold}. The response of the gold nanorod is approximately that of a dipole with induced moment $\bm{p}=\bm{\alpha}\cdot\bm{E}$. For an incident wavelength close to the longitudinal plasmon resonance of the particle, the polarizability tensor $\bm{\alpha}$ is dominated by the long axis component \cite{chen2013gold}, which we denote $\alpha$. The optical potential experienced by the nanorod then has a “tilted washboard” shape (Fig.~\ref{figure1}d) according to:
\begin{equation}
U(\varphi)=-\int_{0}^{\varphi}(-M_\text{L}+M_\text{C})d\varphi=-A\varphi+B\sin^2\varphi
\label{eq:one}
\end{equation}
where $A=1/2\cdot\text{Re}(\alpha) E_\text{C}^2$, $B=1/2\cdot\text{Re}(\alpha) E_\text{L} ^2$ and $\varphi$ is the angle between the nanorod long axis and the linear polarization component $E_\text{L}$ (Fig.~\ref{figure1}d inset). It is easily shown that each well in $U(\varphi)$ is surrounded by highly asymmetric barriers when $\Delta\phi$, the phase difference between the $\widehat{x}$ and $\widehat{y}$ field-components, is below 45$^\circ$. For example, we find that the plasmonic nanorod has to overcome a barrier height $\Delta U=0.8 k_\text{B}T$ to rotate towards the preferred direction while the barrier in the opposite direction is more than an order of magnitude higher for parameter settings mimicking our experimental conditions. The probability of rotational jumps in the ``wrong'' direction is thus very low. The barrier height $\Delta U$ can be varied by changing the degree of polarization ellipticity \textit{via} a change in $\Delta\phi$.
\begin{figure}
\includegraphics{fig_1}
\caption{\label{figure1} Tilted washboard potential from elliptically polarized light. (a) An elliptically polarized plane wave can be decomposed into one linear and one circular polarization component. (b) Schematic of a gold nanorod in an optical trap. The nanorod is oriented with its long axis perpendicular to the direction of incidence. (c) Nanorod potential energy distribution induced by the linear polarization component of an elliptically polarized light field. The circular polarization component tends to continuously rotate the rod with a torque of $M_\text{C}$. (d) The nanorod thus experiences a rotational tilted washboard potential energy landscape as a function of orientation angle $\varphi$. The potential barrier height in the preferred rotation direction can be calculated to $\Delta U=0.8k_\text{B}T$ for parameters mimicking experimental data (nanorod with dimensions 65 nm$\times$147 nm, $\lambda_\text{Laser}$=830 nm, $P_\text{Laser}$=6 mW, $T$=326 K) for the case when the elliptically polarized wave has $\Delta\phi=40^\circ$. The barrier height in the opposite rotation direction is 9.4 $k_\text{B}T$.}
\end{figure}
\subsection{\label{sec:level2}Transition from continuous rotation to stochastic jumps of an individual gold nanorod}
We studied the rotational dynamics of gold nanorods optically trapped in 2D against a cover glass in an optical tweezers setup based on an 830 nm laser beam with tunable polarization ellipticity (see Methods in the Supplemental Material \footnotemark[\value{footnote}]). The nanorods had an average size of $(147\pm10\ \text{nm})\times(65\pm5\ \text{nm})$ (Fig.~\ref{figure2}a) and were prepared by a seed-mediated growth method \cite{chen2013gold,shao2015gold}. The exemplary dark-field scattering spectrum of an individual trapped nanorod (Fig.~\ref{figure2}b) shows one weak surface plasmon resonance at around 550 nm and a strong mode at around 740 nm overlapping the 830 nm trapping laser wavelength. The relative strengths of the resonance peaks correspond to a polarizability along the long-axis that is more than an order-of-magnitude higher than along the short axis at a wavelength of 830 nm, thus confirming the assumption of an essentially 1D polarizability tensor. The attractive and plasmon-enhanced optical gradient force keeps the particle trapped and aligned in the laser focus $xy$-plane while the Coulomb repulsion from the cover glass and the radiation pressure prevent the particle from escaping the trap along the $z$-axis \cite{shao2015gold}.
\begin{figure*}
\includegraphics{fig_2}
\caption{\label{figure2} Transition from continuous rotation to discrete jumps of gold nanorods. (a) Scanning electron microscopy image of the nanorods (scale bar 200 nm). (b) Scattering spectrum of a trapped nanorod. The strong peak at $\sim$740 nm is caused by the long-axis surface plasmon resonance of the nanorod. (c$-$h) Measured (black) and simulated (red) rotational dynamics of the rod undergoing continuous rotation (c$-$e) in the presence of an almost perfectly circularly polarized laser field ($\Delta\phi=88^\circ$) and discrete rotational jumps (f$-$h) due to an elliptically polarized field ($\Delta\phi=37^\circ$). Figure (e,h) shows the nanorods orientation angle $\varphi$ versus time while (c,d) and (f,g) shows the corresponding scattering intensity time traces and intensity autocorrelation functions (ACFs), respectively. The data is based on measurements and simulations of cross-polarized backscattering from the nanorod. The blue stars in (f) and (h) mark out individual jumps. (i) Polarization-dependent rotation of a nanorod. We showed both measured (black) and calculated (red) rotational frequency of a nanorod as the laser polarization continuously varies from almost circular to linear. The blue curve indicates the barrier height at varying $\Delta\phi$. In the highlighted area ($30^\circ<\Delta\phi<45^\circ$), where the potential barrier $\Delta U\in(0, 4k_\text{B}T)$, we observe the nanorod undergoing a transition from continuous rotation to discrete jumps. The nanorod stops continuous rotation when $\Delta\phi$ decreases to around $40^\circ$ where an effective rotational potential barrier $\Delta U\approx k_\text{B}T$.}
\end{figure*}
We first tracked the rotational dynamics of a trapped nanorod by analyzing the back-scattered laser light $I_\text{sca}^\text{P}$ from the particle using polarization selective detection in which a polarizer oriented perpendicular to the linear polarization component $E_\text{L}$ of the trapping beam has been placed in front of the detector. The nanorod angle variations are thus converted to fluctuations in $I_\text{sca}^\text{P}$ because of the nanorods' highly polarized scattering properties \cite{shao2015gold}. $I_\text{sca}^\text{P}(t)$ shows a periodic oscillation with superimposed fluctuations due to rotational Brownian motion for the case of an almost circularly polarized trapping field ($\Delta\phi=88^\circ$, Fig.~\ref{figure2}c, black trace). The corresponding autocorrelation function $C(\tau)$ of $I_\text{sca}^\text{P}(t)$ (Fig.~\ref{figure2}d, black trace) can be analyzed using $C(\tau)=I_0^2+0.5I_1^2\exp(-\tau/\tau_0)\cos(4\pi f\tau)$ \cite{shao2015gold}, which yields the nanorod average rotation frequency as $f=2460\pm20$ Hz and the autocorrelation decay time as $\tau_0=103\pm2$ $\mu\text{s}$. The nanorod ceased to rotate when we switched the laser polarization to elliptical ($\Delta\phi=37^\circ$), as is evident from the lack of a well-defined periodicity in the measured $I_\text{sca}^\text{P}(t)$ and $C(\tau)$ (Fig.~\ref{figure2}f and g, black traces). However, the recorded intensity trace exhibits distinct occasional bursts. We interpret these features as due to well-defined but stochastic thermal jumps in nanorod orientation.
Next, stochastic simulations were performed to gain further insight into the rotation process. The Brownian dynamics of a nanorod trapped in the tilted washboard potential $U(\varphi)$ can be simulated using the equation-of-motion \cite{Risken1996,shao2015gold}:
\begin{equation}
J\ddot{\varphi}=-\gamma_\text{r}\dot{\varphi}-\cos\varphi\sin\varphi\cdot\text{Re}(\alpha)E_\text{L}^2+\frac{1}{2}\text{Re}(\alpha)E_\text{C}^2+\xi(t).
\label{eq:two}
\end{equation}
Here, $J$ is the nanorod moment of inertia and the first three terms on the right-hand side represent, respectively, a viscous damping torque, characterized by a rotational friction coefficient $\gamma_\text{r}$, the restoring torque due to the linear polarization component $E_\text{L}$ and the driving torque due to the circular polarization component $E_\text{C}$. The last term represents a stationary Gaussian noise torque with zero mean and autocorrelation function $\langle\xi(t)\xi(0)\rangle=2\gamma_\text{r}k_\text{B}T_\text{r}\delta(t)$, where $T_\text{r}$ is the effective temperature for rotational Brownian motion \cite{hajizadeh2017brownian}. The temporal variation in nanorod orientation $\varphi(t)$ obtained from Eq.~(\ref{eq:two}) can in turn be used to calculate $I_\text{sca}^\text{P}(t)$ and $C(\tau)$ for comparison with experiments.
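The structure of such a Brownian dynamics simulation is illustrated by the following minimal Python sketch of Eq.~(\ref{eq:two}) in the overdamped limit (inertia neglected, as is appropriate for a nanorod in water). The torque parameters $A$ and $B$ are illustrative placeholders, while the rotational diffusion constant is set to the prolate-ellipsoid estimate $D_\text{r}=4.6\times10^{3}~\text{s}^{-1}$ quoted later in the text:
\begin{verbatim}
# Overdamped Euler-Maruyama sketch of Eq. (2):
#   gamma_r dphi/dt = A - B sin(2 phi) + xi(t),  <xi(t) xi(0)> = 2 gamma_r kT delta(t).
# Torques are in units of k_B*T_r, so torque/gamma_r = D_r * torque.
# A, B are illustrative placeholders; D_r is the ellipsoid estimate from the text.
import math
import numpy as np

rng = np.random.default_rng(1)
D_r = 4.6e3                  # rotational diffusion constant [1/s]
A, B = 2.74, 4.2             # tilt and corrugation in units of k_B*T_r (placeholders)
dt, n_steps = 1e-7, 1_000_000           # 0.1 s of simulated time

noise = rng.standard_normal(n_steps) * math.sqrt(2.0 * D_r * dt)
phi = 0.0
for k in range(n_steps):
    phi += D_r * (A - B * math.sin(2.0 * phi)) * dt + noise[k]

print("net number of pi-jumps in 0.1 s:", int(phi // math.pi))
\end{verbatim}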
Fig.~\ref{figure2}c$-$h (red traces) shows simulation results for almost circular ($\Delta\phi=88^\circ$) and elliptical ($\Delta\phi=37^\circ$) polarization using simulation parameters selected to match experimental conditions, including a fixed $T_\text{r}=320$ K estimated \cite{shao2015gold,hajizadeh2017brownian} from the experimental $\tau_0$ (see the Supplementary Material for details \footnotemark[\value{footnote}]). For the circular polarization case, the simulated $\varphi(t)$ evolves continuously (Fig.~\ref{figure2}e), corresponding to continuous rotation, and the calculated $C(\tau)$ yields $f=2643\pm7$ Hz and $\tau_0=100\pm1$ $\mu$s in excellent agreement with the experimental results. For the elliptical polarization case, $\varphi(t)$ instead exhibits a staircase behavior (Fig.~\ref{figure2}h) corresponding to discrete and random $\pi$ jumps in one direction given by the driving torque, separated by periods of almost fixed alignment along the linear polarization direction. The resulting intensity trace and autocorrelation function are again in good agreement with the experimental observations (Fig.~\ref{figure2}f$-$g). Thus, the simulations confirm that thermal agitation occasionally forces the nanorod to jump out of the local minima of the washboard potential to the next, lower, potential well, resulting in discrete intensity bursts in $I_\text{sca}^\text{P}$ that can be tracked experimentally.
We further varied the polarization state of the trapping laser continuously. Both experiments and simulations showed that the gold nanorod rotation becomes increasingly slow as the polarization becomes increasingly elliptical. When $\Delta\phi$ decreases to $\sim40^\circ$, the oscillating feature in the intensity autocorrelation function disappears (Fig.~\ref{figure2}i; more details can be found in the Supplementary Material \footnotemark[\value{footnote}]), suggesting that a barrier exists and stops the nanorod's continuous rotation. However, the nanorod undergoes stochastic rotational jumps from time to time and we can still calculate an effective rotation frequency by counting the number of discrete jumps. When the polarization becomes even more elliptical, the barrier in the rotation potential becomes high enough to keep the rod aligned with the major axis of the polarization ellipse. The nanorod thus exhibits a `rotation' frequency asymptotic towards zero. From simulation, we further observed that the effective rotational diffusion of the nanorod varies as the laser polarization changes, similar to the reported result of translational diffusion of a Brownian particle on tilted washboard potentials \cite{reimann2001giant}. Interested readers can find more discussion in the Supplementary Material \footnotemark[\value{footnote}].
\subsection{\label{sec:level2}Stochastic jump dynamics of gold nanorods trapped by elliptical polarization}
To test quantitatively the physical attributes of nanorod stochastic jumps, including their rate and transit time, we further examined the measured scattering signals and the simulation results to discern further details. Statistics on both the experiment and simulation results revealed that the number of nanorod flips in a certain time interval $X$ follows a Poisson probability distribution $P(X=S)=e^{-\lambda} \lambda^S/S!$ (Fig.~\ref{figure3}a,b), with the Poisson mean $\lambda$ determined by the potential barrier height relative to $k_\text{B}T_\text{r}$. When the time interval is set at 5 ms, the fitting-obtained $\lambda$ are $4.3\pm0.3$ for experiment and $4.7\pm0.1$ for simulation (laser power 6 mW, $\Delta\phi=37^\circ$), respectively.
\begin{figure}
\includegraphics{fig_3}
\caption{\label{figure3} Stochastic rotational jump dynamics of a gold nanorod. The gold nanorod was trapped using an elliptically polarized laser beam with $\Delta\phi=37^\circ$ and a power of 6 mW. (a$-$d) Experimentally measured and simulation-calculated probability distributions of nanorod flips in a fixed time interval of 5 ms (a,b) and transit times of such jump (c,d). The nanorod rotational effective Brownian temperature $T_\text{r}$ was set at 360 K in simulation to achieve good agreement with experiments. The laser trap forms a tilted washboard rotation potential with a barrier $\Delta U=1.4\ k_\text{B}T_\text{r}$, and near a local maximum the potential landscape can be modeled with an inverted harmonic potential (inset in a). The rod was agitated thermally to overcome this barrier to jump by an angle of $\pi$, and each jump process was recorded by one scattering intensity peak shown in Fig.~\ref{figure2}f. The numbers of jumps follow Poisson distribution (blue curves in a and b). The full distributions of transit times are well fitted by formula derived from Kramers theory (blue curves in c and d), with the coefficients of determination $R^2=$ 0.979 and 0.994, respectively.}
\end{figure}
Additional information can be obtained through studying the duration of each individual stochastic jump, as the average value and the variability in transit times reflect kinetics and the fundamentally statistical nature of the stochastic jump process. The transit time $\tau_\text{T}$ is much shorter than the first-passage time defined as how long it takes for the nanorod to rotate by $\pi$. This is very different from the case when the nanorod undergoes continuous rotation (more details can be found in the Supplementary Material \footnotemark[\value{footnote}]). $\tau_\text{T}$ was found to vary widely, from less than 80 $\mu$s to over 600 $\mu$s (Fig.~\ref{figure3}c,d), with average values $\langle\tau_\text{T}\rangle=158\pm114$ $\mu$s (experiment) and $117\pm47$ $\mu$s (simulation). Additionally, both experimental and simulation results revealed that the broadly distributed $\tau_\text{T}$ has a peak at around 120 $\mu$s and a long exponential tail (Fig.~\ref{figure3}c,d). This behavior is similar to that expected for transit across harmonic barriers in the high-barrier limit ($\Delta U>>k_\text{B}T$) in the Kramers regime \cite{kramers1940brownian,RevModPhys.62.251,chaudhury2010harmonic}. Specifically, in our case, the potential landscape near a local maximum can be modeled with an inverted harmonic potential with a ``spring constant'' $\kappa_\text{b}$, $V(\varphi)\approx V_\text{max}-\kappa_\text{b}(\varphi-\varphi_\text{max})^2/2$ (schematic in Fig.~\ref{figure3}a inset). When the transition region is from $(\varphi_\text{max}-\varphi_0)$ to $(\varphi_\text{max}+\varphi_0)$, the barrier height $\Delta U=\kappa_\text{b}\varphi_0^2/2$ and the transit time $\tau_\text{T}$ has a distribution $P(\tau_\text{T})$. For the one-dimensional diffusion model determined by $J\ddot{\varphi}=-\gamma_\text{r}\dot{\varphi}-V'(\varphi)+\xi(t)$, $P(\tau_\text{T})$ is predicted to have the form \cite{chaudhury2010harmonic,doi:10.1063/1.2434966}:
\begin{equation}
P(\tau_\text{T})=\frac{\omega_\text{K}\sqrt{\Delta U/(k_\text{B}T)}}{1-\text{erf}[\sqrt{\Delta U/(k_\text{B}T)}]}\frac{\exp[-\Delta U\coth(\omega_\text{K}\tau_\text{T}/2)/(k_\text{B}T)]}{\sinh(\omega_\text{K}\tau_\text{T}/2)\sqrt{2\pi\sinh(\omega_\text{K}\tau_\text{T})}}
\label{eq:three}
\end{equation}
The distribution in Eq.~(\ref{eq:three}) decays exponentially for large $\tau_\text{T}$ as $P(\tau_\text{T})\approx2\omega_\text{K}[\Delta U/(k_\text{B}T)]\text{exp}(-\omega_\text{K}\tau_\text{T})$. The parameter $\omega_\text{K}$ sets the time scale for decay away from states near the top of the barrier, $\omega_\text{K}=\kappa_\text{b}/\gamma_\text{r}$.
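This limiting behavior can be confirmed directly. The short Python snippet below evaluates Eq.~(\ref{eq:three}) for parameter values close to the fits reported below ($\Delta U=1.36\,k_\text{B}T$, $\omega_\text{K}=1.02\times10^{4}\,\text{s}^{-1}$) and extracts the decay rate of the tail, which indeed approaches $\omega_\text{K}$:
\begin{verbatim}
# Evaluate Eq. (3) and check that the exponential tail decays at rate omega_K.
import math

dU = 1.36          # barrier height in units of k_B*T (simulation fit)
wK = 1.02e4        # omega_K in 1/s (simulation fit)

def P(tau):
    u = wK * tau / 2.0
    pref = wK * math.sqrt(dU) / (1.0 - math.erf(math.sqrt(dU)))
    return pref * math.exp(-dU / math.tanh(u)) / (
        math.sinh(u) * math.sqrt(2.0 * math.pi * math.sinh(2.0 * u)))

t1, t2 = 1.0e-3, 1.5e-3                    # well into the tail (omega_K*tau >> 1)
rate = (math.log(P(t1)) - math.log(P(t2))) / (t2 - t1)
print(rate, wK)                            # the extracted rate is close to omega_K
\end{verbatim}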
In both experiment and simulation, $P(\tau_\text{T})$ are well fitted by Eq.~(\ref{eq:three}) (Fig.~\ref{figure3}c,d). The barrier heights $\Delta U$ returned by the fit are $1.10\pm0.30\ k_\text{B}T$ (experiment) and $1.36\pm0.08\ k_\text{B}T$ (simulation), both in good agreement with the value calculated from the laser polarization according to Eq.~(\ref{eq:one}): $\Delta U=1.4\ k_\text{B}T_\text{r}$. The values of fitting-obtained $\omega_\text{K}$ are $1.13\pm0.17\times10^4\ \text{s}^{-1}$ (experiment) and $1.02\pm0.03\times10^4\ \text{s}^{-1}$ (simulation). Given that the rotational diffusion constant $D_\text{r}$ is determined by $D_\text{r}=k_\text{B}T/\gamma_\text{r}$, we can write $\omega_\text{K}$ as well as the average transit time in terms of this quantity through
\begin{subequations}
\label{eq:whole1}
\begin{equation}
\omega_\text{K}=\kappa_\text{b}/\gamma_\text{r}=D_\text{r}\kappa_\text{b}/(k_\text{B}T),
\label{subeq:1}
\end{equation}
\begin{equation}
\langle\tau_\text{T}\rangle=\ln\big(2e^\gamma\Delta U/(k_\text{B}T)\big)/\omega_\text{K},
\label{subeq:2}
\end{equation}
\end{subequations}
where $\gamma$ is Euler's constant \cite{chaudhury2010harmonic}. Given that $\kappa_\text{b}$ can be determined by fitting the energy landscape to be $\kappa_\text{b}=3.4\pm0.1\ k_\text{B}T/\text{rad}^2$ at $\Delta\phi=37^\circ$, we can calculate the rotational diffusion coefficient $D_\text{r}$ from $\omega_\text{K}$ and $\langle\tau_\text{T}\rangle$ according to Eqs.~(\ref{eq:whole1}). For the simulation result, $D_\text{r}$ calculated from $\omega_\text{K}$ is $3.0\pm0.1\times10^3\ \text{s}^{-1}$, close to $D_\text{r}=4.0\pm1.6\times10^3\ \text{s}^{-1}$ calculated from $\langle\tau_\text{T}\rangle$. In experiment, $D_\text{r}$ is calculated from $\omega_\text{K}$ and $\langle\tau_\text{T}\rangle$ to be $3.3\pm0.5\times10^3\ \text{s}^{-1}$ and $3.0\pm2.2\times10^3\ \text{s}^{-1}$, respectively, which are also in good agreement with each other. The values of $D_\text{r}$ calculated from measured $\omega_\text{K}$ and $\langle\tau_\text{T}\rangle$ are close to the result directly calculated by modeling the nanorod as a prolate ellipsoid in water ($D_\text{r}=4.6\times10^3\ \text{s}^{-1}$), validating Kramers description of the nanorod rotational jump transition.
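These numbers can be reproduced by direct arithmetic from Eqs.~(\ref{eq:whole1}); the following few lines of Python recover the two simulation values of $D_\text{r}$ quoted above from $\omega_\text{K}$ and from $\langle\tau_\text{T}\rangle$, respectively:
\begin{verbatim}
# Recover D_r from Eq. (4a) and Eq. (4b) using the simulation values quoted
# in the text (energies in units of k_B*T, so k_B*T = 1 below).
import math

kappa_b = 3.4        # k_B*T / rad^2
dU      = 1.36       # barrier height in k_B*T
omega_K = 1.02e4     # 1/s, from the fit of Eq. (3)
tau_T   = 117e-6     # s, average transit time from simulation
gamma_E = 0.5772156649015329          # Euler's constant

D_r_from_omega = omega_K / kappa_b                                        # Eq. (4a)
D_r_from_tau = math.log(2.0 * math.exp(gamma_E) * dU) / (tau_T * kappa_b)  # Eqs. (4b)+(4a)
print(D_r_from_omega, D_r_from_tau)   # ~3.0e3 and ~4.0e3 s^-1, as in the text
\end{verbatim}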
Furthermore, the rates and transit times of the nanorod stochastic jumps are highly dependent on the temperature and viscosity of the local nanoenvironment. If we artificially increase $T_\text{r}$ in simulation, the calculated Poisson distribution means $\lambda$ and the average transit time $\langle\tau_\text{T}\rangle$ exhibit rapid increase and decrease (Fig.~\ref{figure4}a,b), respectively. $\lambda$ indicating the transition rate follows an exponential (Boltzmann) dependence on the energy barrier height $\Delta U$ (Fig.~\ref{figure4}a). $\langle\tau_\text{T}\rangle$ decreases exponentially with $T_\text{r}$ (Fig.~\ref{figure4}b). $\lambda$ and $\langle\tau_\text{T}\rangle$ are also very sensitive to the medium viscosity $\eta$. As $\eta$ increases, $\lambda$ decreases exponentially and $\langle\tau_\text{T}\rangle$ increases linearly (Fig.~\ref{figure4}c,d), according to our simulation results. The sensitive temperature and viscosity dependence of the nanorod stochastic jump dynamics implies that the gold nanorod manipulated by an elliptical polarization can work as a sensing element to probe the local temperature and viscosity in solution.
\begin{figure}
\includegraphics{fig_4}
\caption{\label{figure4} Temperature- and medium viscosity-dependent nanorod stochastic jump dynamics. (a,b) Rotational effective Brownian temperature $T_\text{r}$-dependent average number of flips in a time interval of 5 ms (a) and transit time (b) calculated from simulation (red dots). The blue curve in (a) and the red curve in (b) are fitting results, with the coefficients of determination $R^2=$ 0.975 and 0.998. (c,d) Medium viscosity $\eta$-dependent average number of flips (c) and transit time (d) calculated from simulation (red dots). The blue curve in (c) and the red curve in (d) are exponential and linear fitting results, with the coefficients of determination $R^2=$ 0.999 and 0.968.}
\end{figure}
\section{Discussion}
We have shown the construction of tilted washboard rotational potentials for optically trapped Brownian plasmonic gold nanorod motors, by rather simple means, utilizing elliptical polarizations. The gold nanorods were modulated from continuous rotation to discrete jumps by simply adjusting the polarization state of the trapping laser. In addition, we studied individual stochastic jump processes of the nanorod at critical laser polarization, finding that the jump dynamics is in good agreement with that predicted by Kramers theory. Our measurement results can be directly used to help understand mechanisms in molecular motors \cite{michl2009molecular} and rotating dipoles in external fields \cite{praestgaard1981model}.
The plasmonic nanorod trapped by elliptical polarization is a simple optical analogy to many other physical systems. It provides a powerful tool for investigating fundamental questions where the problem of Brownian motion in tilted periodic potentials arises \cite{rice2013advances,Risken1996,fulde1975problem,stratonovich1967topics}. The colloidal nanoparticle trapped in solution works in an overdamped regime; one can extend this to an underdamped regime by trapping plasmonic nanoparticles in air or in vacuum \cite{rondin2017direct,jauffred2015optical}. The plasmonic nanorod optically trapped with elliptical polarization thus generates a universal model system for the nonequilibrium thermodynamics problem of “Brownian diffusion over periodic barriers”. The small size ($\sim$100 nm) and short characteristic time scale ($\sim100\ \mu$s) of the Brownian nanorod allow for fast statistical investigations. Experimental parameters can be controlled and varied \textit{in situ} easily in an optical way, significantly reducing the complexity, incompatibility, and inadaptability of other physical systems conventionally employed. As we have shown, a very simple and well known theoretical model can be utilized to capture the system dynamics \cite{Risken1996,coffey2004langevin}. This, in turn, means that it is also straight-forward and robust to extract parameters by model fitting. As a result, we believe that our study facilitates the use of Brownian plasmonic nanoparticles as new probes to study fundamental issues with broad interest, such as giant acceleration of particle diffusion \cite{reimann2001giant}, connection between statistical physics and information theory \cite{berut2012experimental,parrondo2015thermodynamics}, and hydrodynamic synchronization \cite{koumakis2013stochastic}.
In addition, we have realized the full rotational control of the light-driven gold nanorod motors. The elliptical polarization trapped gold nanorod can also work as a ratchet that harvests overdamped Brownian noise and rectifies the Brownian motion at thermal non-equilibrium \cite{wu2016near}. Moreover, by combining the structure Brownian dynamics analysis, which can probe the local viscosity and temperature, and the plasmonic molecular analysis techniques such as refractometric sensing \cite{zijlstra2012optical} and surface-enhanced Raman scattering \cite{xu1999spectroscopy}, the optical-potential-controlled gold nanorod further becomes a multifunctional sensing platform to probe different characteristics of local nanoenvironment \cite{andren2017probing,vsipova2018photothermal}.
\begin{acknowledgments}
The authors thank Nils Odebo L{\"a}nk for help with the FDTD simulations. Support and advice by Prof. Giovanni Volpe and Prof. Andreas Isacsson is gratefully acknowledged. This work was supported by the Knut and Alice Wallenberg Foundation.
\end{acknowledgments}
\nocite{*}
\section{Introduction}
The modular Andr\'e-Oort Conjecture is a statement about the arithmetic and algebraic properties of the classical modular function
\[j:\mathcal{H}\to\mathbb{C}.\]
The statement is the following.
\begin{thrm}[Pila, Modular Andr\'e-Oort]\label{thrm:ModularAO}
Let $V\subseteq\mathbb{C}^n$ be an algebraic variety. Then $V$ contains only finitely many maximal special subvarieties.
\end{thrm}
While various partial \cite{Andre1998} and conditional \cite{Klingler2014} results were known, this was first proven unconditionally and in full generality by Pila, in his 2011 paper \cite{Pila2011}, using what is now a fairly standard strategy employing ideas of o-minimality and point-counting. The connection with the $j$ function is obscured behind the definition of ``special subvariety'', which goes as follows. It is well known that there are modular polynomials $\Phi_N\in\mathbb{Z}[X,Y]$ with the property that
\[\Phi_N(j(g\tau),j(\tau))=0\]
whenever $g\in\operatorname{GL}_2^+(\mathbb{Q})$ is a primitive integer matrix of determinant $N$. So, although $j$ is a transcendental function, it is very well-behaved under the action of $\operatorname{GL}_2^+(\mathbb{Q})$. A fairly direct consequence of the existence of the polynomials $\Phi_N$ is the (also well known) fact that $j(\tau)$ is an algebraic integer whenever $\tau\in\mathcal{H}$ is quadratic.
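For instance, $j(i)=1728$, $j(i\sqrt{2})=8000$ and $j(2i)=287496=66^3$. The following Python snippet, included purely as a numerical illustration, recovers these algebraic values from the first few coefficients of the familiar $q$-expansion $j=q^{-1}+744+196884q+\cdots$:
\begin{verbatim}
# Numerical evaluation of j at some quadratic points via its q-expansion.
import cmath

coeffs = [196884, 21493760, 864299970, 20245856256]   # c(1)..c(4)

def j(tau):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 / q + 744 + sum(c * q ** (n + 1) for n, c in enumerate(coeffs))

for tau, expected in [(1j, 1728), (1j * 2 ** 0.5, 8000), (2j, 287496)]:
    print(j(tau).real, expected)   # tiny discrepancies: q-series truncation only
\end{verbatim}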
Loosely, a special subvariety of $\mathbb{C}^n$ is a variety induced by these relations. To be more precise, we have the following definition.
\begin{defn}\label{defn:geodesics}
Let $n\in\mathbb{N}$.
Let $S_0\cup S_1\cup\dots\cup S_k$ be a partition of $\{1,\dots,n\}$, where $k\geq 0$ and $S_i\ne\emptyset$ for $i>0$.
For each $s\in S_0$, choose any point $q_s\in\mathcal{H}$. For each $i>0$, let $s_i$ be the least element of $S_i$ and for each $s_i\ne s\in S_i$ choose a geodesic matrix $g_{i,s}\in\operatorname{GL}_2^+(\mathbb{Q})$. A \emph{weakly $\mathcal{H}$-special} subvariety of $\mathcal{H}^n$ is a set of the form
\[\{(\tau_1,\dots,\tau_n)\in\mathcal{H}^n:\tau_s=q_s\text{ for }s\in S_0, \tau_{s}=g_{i,s}\tau_{s_i} \text{ for }s\in S_i, s\ne s_i,i=1,\dots,k\},\]
for some given data $S_i$, $q_s$, $g_{i,s}$.
A weakly $\mathcal{H}$-special subvariety is \emph{$\mathcal{H}$-special} if the constant factors $q_s$ are imaginary quadratic numbers for all $s\in S_0$.
\end{defn}
One can define special (henceforth to be known as $j$-special) subvarieties in a similar way, as varieties in $\mathbb{C}^n$ cut out by the modular polynomials $\Phi_N$. For our purposes, however, there is a simpler definition that suits better. By abuse of notation, we will write $j$ for the function
\[(\tau_1,\dots,\tau_n)\mapsto (j(\tau_1),\dots,j(\tau_n)).\]
A $j$-special variety in $\mathbb{C}^n$ is then the image under $j$ of an $\mathcal{H}$-special variety; similarly a weakly $j$-special variety is the image under $j$ of a weakly $\mathcal{H}$-special variety. \\
In this paper, I will be investigating what happens when we consider not only $j$ and its corresponding special varieties, but also the derivatives of $j$. In this setup, the situation is rather more complicated. To begin with, recall that $j$ satisfies a certain 3rd order differential equation. Hence it is enough to consider only $j$ and its first two derivatives. With this in mind, let us define a function which will be our central object of study for the paper:
\[J=(j,j',j''):\mathcal{H}\to\mathbb{C}^3,\]
\[J(\tau)=(j(\tau),j'(\tau),j''(\tau)).\]
Again we will abuse notation and use $J$ also to refer to the obvious map from $\mathcal{H}^n$ to $\mathbb{C}^{3n}$ defined as $J$ on each coordinate.
Much of the setup here is due to Pila; in unpublished notes, he compiled various properties of $J$ and gave definitions of what a $J$-special variety should be. Pila also made a conjecture analogous to \ref{thrm:ModularAO}, with respect to $J$. For the rest of this section, I will give some of the setup covered by Pila in his notes. Towards the end of the section I state Pila's conjecture and a weakened version of the conjecture which is the central theorem of the paper.
\subsection{Properties of $J$}
The functions $j'$ and $j''$ are not fully modular functions. Instead, they satisfy
\[j'(\gamma\tau)=(c\tau+d)^2j'(\tau)\]
and
\[j''(\gamma\tau)=(c\tau+d)^4j''(\tau)+2c(c\tau+d)^3j'(\tau),\]
where $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\operatorname{SL}_2(\mathbb{Z})$. This says that $j'$ is a meromorphic modular form of weight 2, while $j''$ is a so-called \emph{quasimodular} form of weight 4 and depth 1. The behaviour of $j'$ and $j''$ at quadratic points is also worse than that of $j$; while $j(\tau)$ is algebraic at quadratic $\tau$, $j'(\tau)$ and $j''(\tau)$ almost never are. In fact, such points are always transcendental unless $\tau$ is in the $\operatorname{SL}_2(\mathbb{Z})$-orbit of $i$ or $\rho=e^{\frac{2\pi i}{3}}$ (see \cite{Diaz2000}). However, thanks to a result of Masser, $j''(\tau)$ is always algebraic over $j'(\tau)$. To be precise, we have a rational function $p_c$, in 3 variables (with $c$ a positive real), defined as follows:
\[p_c(W,X,Z)=\dfrac{1}{6}\dfrac{Z^2(X-7W+6912)}{W(W-1728)}+\dfrac{iZ}{c}.\]
A brief calculation shows that
\[p_{\operatorname{Im}\tau}(j(\tau),\chi^*(\tau),j'(\tau))=j''(\tau)\]
for all $\tau$. Here and throughout, $\chi^*$ is an almost holomorphic modular (AHM) function defined by
\[\chi^*=1728\cdot\dfrac{E_2^*E_4E_6}{E_4^3-E_6^2},\]
where the $E_k$ are standard Eisenstein series and $E_2^*$ is the weight 2 almost holomorphic modular form defined by
\[E_2^*(\tau)=E_2(\tau)-\dfrac{3}{\pi\operatorname{Im}\tau}.\]
We can decompose $\chi^*$ as
\[\chi^*=\chi-\dfrac{3}{\pi y}\cdot f,\]
where $\chi$ and $f$ are, here and throughout, holomorphic functions defined by
\[\chi=1728\cdot\dfrac{E_2E_4E_6}{E_4^3-E_6^2},\qquad f=1728\cdot\dfrac{E_4E_6}{E_4^3-E_6^2}.\]
The function $\chi^*$ and related AHM functions have been studied in several places, by Masser \cite{Masser1975}, Mertens and Rolen \cite{Mertens2015} and Zagier \cite{Zagier2008} among others. I have studied $\chi^*$ in the context of an Andr\'e-Oort result \cite{Spence2016}; we will be making use of some results from that paper occasionally.
For the present, the most relevant fact is that $\chi^*(\tau)$ is algebraic whenever $\tau$ is quadratic\footnote{This was essentially proven by Masser in the Appendix of \cite{Masser1975}, which seems to have been the first investigation of the algebraic properties of AHM functions.}. As a consequence, whenever $\tau$ is quadratic (so that $\operatorname{Im}\tau$, $j(\tau)$ and $\chi^*(\tau)$ are all algebraic), the one-variable rational function $w\mapsto p_{\operatorname{Im}\tau}(j(\tau),\chi^*(\tau),w)$ has coefficients in $\overline{\mathbb{Q}}$. Hence, as claimed, $\text{tr.deg.}_\mathbb{Q}(j(\tau),j'(\tau),j''(\tau))=1$ for quadratic $\tau\not\in \operatorname{SL}_2(\mathbb{Z})\cdot\{i,\rho\}$.
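The identity above can also be checked numerically at any particular point. The following Python snippet, included purely as an illustration (it uses only the standard $q$-expansions of $E_2$, $E_4$, $E_6$ and of $j$), compares $p_{\operatorname{Im}\tau}(j(\tau),\chi^*(\tau),j'(\tau))$ with $j''(\tau)$ at $\tau=2i$:
\begin{verbatim}
# Numerical check of j'' = p_{Im tau}(j, chi^*, j') at tau = 2i, using
# standard q-expansions (truncated; |q| ~ 3.5e-6 here, so the error is tiny).
import cmath

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def eisenstein(weight, const, factor, q, terms=15):
    return const + factor * sum(sigma(weight - 1, n) * q ** n
                                for n in range(1, terms + 1))

tau = 2j
y = tau.imag
q = cmath.exp(2j * cmath.pi * tau)

E2 = eisenstein(2, 1, -24, q)
E4 = eisenstein(4, 1, 240, q)
E6 = eisenstein(6, 1, -504, q)
chi_star = 1728 * (E2 - 3 / (cmath.pi * y)) * E4 * E6 / (E4 ** 3 - E6 ** 2)

cj = [196884, 21493760, 864299970, 20245856256]        # q-expansion of j
jj  = 1 / q + 744 + sum(c * q ** (n + 1) for n, c in enumerate(cj))
jp  = 2j * cmath.pi * (-1 / q + sum((n + 1) * c * q ** (n + 1)
                                    for n, c in enumerate(cj)))
jpp = (2j * cmath.pi) ** 2 * (1 / q + sum((n + 1) ** 2 * c * q ** (n + 1)
                                          for n, c in enumerate(cj)))

p = jp ** 2 * (7 * jj - chi_star - 6912) / (6 * jj * (jj - 1728)) + 1j * jp / y
print(abs(p - jpp) / abs(jpp))     # relative difference is negligibly small
\end{verbatim}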
By differentiating the modular polynomials, we see that $J$ also has nice behaviour with respect to $g\in\operatorname{GL}_2^+(\mathbb{Q})$. We will briefly work out the details of this in the case of $j'$. Let $g\in\operatorname{GL}_2^+(\mathbb{Q})$ be a primitive integer matrix of determinant $N$. Since
\[\Phi_N(j(\tau),j(g\tau))=0,\]
we see that
\[\partial_X(\Phi_N)(j(\tau),j(g\tau))j'(\tau)+\partial_Y(\Phi_N)(j(\tau),j(g\tau))j'(g\tau)\dfrac{d(g\tau)}{d\tau}=0\]
\[\implies j'(g\tau)=-\lambda_N(j(\tau),j(g\tau))j'(\tau)m_g(\tau),\]
where \[\lambda_N=\dfrac{\partial_X(\Phi_N)}{\partial_Y(\Phi_N)}\]
and
\[m_g(\tau)=\dfrac{(c\tau+d)^2}{\det g}=\left(\dfrac{d(g\tau)}{d\tau}\right)^{-1}.\]
For $j''$, we have a similar relation, though a bit more complex. In both cases, the nature of the relation differs depending on whether $g$ is upper triangular. If $g$ fails to be upper triangular (ie. $c\ne 0$) then the relation between the functions $j'(g\tau)$ and $j'(\tau)$ (respectively for $j''$) only exists over $\mathbb{C}(\tau)$. Otherwise the relationship is over $\mathbb{C}$. Hence
\[\text{tr.deg.}_\mathbb{C}(J(\tau),J(g\tau))=3\]
(considering $\tau$ as a variable in $\mathcal{H}$) if $g$ is upper triangular and
\[\text{tr.deg.}_\mathbb{C}(J(\tau),J(g\tau))=4\]
otherwise.
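As a trivial symbolic check of the computation above (purely illustrative), one can verify the identity $d(g\tau)/d\tau = \det g/(c\tau+d)^2$ used in it:
\begin{verbatim}
# Symbolic check (with sympy) that d(g tau)/d tau = det(g)/(c*tau + d)^2.
from sympy import symbols, simplify, diff

a, b, c, d, tau = symbols('a b c d tau')
g_tau = (a * tau + b) / (c * tau + d)
print(simplify(diff(g_tau, tau) - (a * d - b * c) / (c * tau + d) ** 2))  # 0
\end{verbatim}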
Since the behaviour of $J$ is affected by whether or not a matrix is upper triangular, it makes sense to make the following definition.
\begin{defn}
A weakly $\mathcal{H}$-special variety $G$ is called a geodesic upper-triangular (GUT) variety if all of the $g_{i,s}\in\operatorname{GL}_2^+(\mathbb{Q})$ arising in its definition (see \ref{defn:geodesics}) are upper triangular matrices.
\end{defn}
We will be making use of GUT varieties quite often, later on in this paper.
\subsection{Special sets for $J$}
In the classical case, the special sets were just the images, under $j$, of $\mathcal{H}$-special sets. An important feature is that they are bi-algebraic; for $\mathcal{H}$-special sets $G$, both $G$ and $j(G)$ are algebraic sets defined over $\overline{\mathbb{Q}}$. In the $J$ case, as noted earlier, $J(G)$ is not necessarily an algebraic set, let alone defined over $\overline{\mathbb{Q}}$. The solution to this is fairly simple; just take Zariski closures over $\overline{\mathbb{Q}}$. We are still following Pila, and will use the following notation, also of his design.
\begin{notn}
For any subset $S\subseteq\mathcal{H}^n$, define $\JClose{S}$ to be the $\overline{\mathbb{Q}}$-Zariski closure of $J(S)$. That is, $\JClose{S}$ is the smallest algebraic variety, defined over $\overline{\mathbb{Q}}$, which contains $J(S)$.
\end{notn}
\begin{defn}
A $J$-special subvariety of $\mathbb{C}^{3n}$ is an irreducible component of any set of the form $\JClose{G}$, where $G$ is an $\mathcal{H}$-special set.
\end{defn}
With this definition made, we might conjecture a direct analogue of \ref{thrm:ModularAO}, with $j$-special varieties replaced by $J$-special ones. This fails for a fairly obvious reason. Consider the variety $V\subseteq\mathbb{C}^3$ defined just by
\[X_1=j(\tau),\]
for some fixed quadratic $\tau\in\mathcal{H}\setminus \operatorname{SL}_2(\mathbb{Z})\cdot\{i,\rho\}$. Then by modularity of $j$, $V$ contains all the points $J(\gamma\tau)$, $\gamma\in\operatorname{SL}_2(\mathbb{Z})$. In particular, taking Zariski closures, we have
\[\JClose{\gamma\tau}\subseteq V\]
for all $\gamma\in\operatorname{SL}_2(\mathbb{Z})$. Since
\[\JClose{\gamma\tau}=\{(j(\tau),w,p_{\operatorname{Im}\gamma\tau}(j(\tau),\chi^*(\tau),w)):w\in\mathbb{C}\},\]
one sees that the various $\JClose{\gamma\tau}$ are in fact distinct. So $V$ contains infinitely many distinct $J$-special sets. They are maximal since $\JClose{\mathcal{H}}=\mathbb{C}^3\not\subseteq V$.
With this in mind, we need a version of \ref{thrm:ModularAO} which takes into account the action of $\operatorname{SL}_2(\mathbb{Z})$. The aforementioned conjecture of Pila is one such version. To state it, we will need a definition.
\begin{defn}
Let $\mathcal{S}$ be a collection of subsets of $\mathcal{H}^n$. We say $\mathcal{S}$ is $\operatorname{SL}_2(\mathbb{Z})$-finite if there is some finite subcollection $\mathcal{T}\subseteq \mathcal{S}$, such that every $S\in\mathcal{S}$ takes the form
\[S=\gamma\cdot T,\]
for some $\gamma\in\operatorname{SL}_2(\mathbb{Z})^n$, $T\in\mathcal{T}$. Otherwise, $\mathcal{S}$ is $\operatorname{SL}_2(\mathbb{Z})$-infinite.
Abusing notation slightly, we also use this terminology to apply to collections of points, equating points $\tau$ with singleton sets $\{\tau\}$ in the obvious way.
\end{defn}
Now we can state Pila's conjecture.
\begin{conj}[Pila, ``Modular Andr\'e-Oort with Derivatives'']\label{conj:PartialAOwDerivs}\hfill\\
Let $V\subseteq\mathbb{C}^{3n}$ be a proper algebraic variety defined over $\overline{\mathbb{Q}}$. There exists an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$, consisting of \emph{proper} $\mathcal{H}$-special varieties of $\mathcal{H}^n$, with the following property. Every $J$-special subvariety of $V$ is contained in $\JClose{G}$ for some $G\in\sigma(V)$.
\end{conj}
We will be approaching this conjecture using a variant of the Pila-Zannier strategy and o-minimality. With a seemingly novel adaptation of the usual strategy, we are able to make some good progress. The o-minimal methods are sufficient to give us good control over quadratic points of the form
\[(g_1\sigma,\dots,g_{n}\sigma).\]
In this case, the methods yield a bound on the size of the determinants of the $g_i\in\operatorname{GL}_2^+(\mathbb{Q})$ and on the discriminant of $\sigma$. This is a good step towards Conjecture \ref{conj:PartialAOwDerivs}.
There is significant difficulty, however, in dealing with points having a more complex $\operatorname{GL}_2^+(\mathbb{Q})$-structure. The difficulty lies in the possibility that, for two quadratic points $\tau,\sigma\in\mathcal{H}$, it might happen that $j'(\tau)$ and $j'(\sigma)$ are algebraically dependent even when $\tau$ and $\sigma$ lie in distinct $\operatorname{GL}_2^+(\mathbb{Q})$-orbits. As we will discuss shortly, we do not expect this to happen, but it does not seem to be possible to exclude the possibility using o-minimal methods. So we need the following definition.
\begin{defn}
Let $\tau=(\tau_1,\dots,\tau_n)\in\mathcal{H}^n$ be a quadratic point. Then $\tau$ may be written as
\[\tau=(\sigma_1,g_{1,1}\sigma_1,\dots,g_{1,r_1}\sigma_1,\dots,\sigma_k,g_{k,1}\sigma_k,\dots,g_{k,r_k}\sigma_k),\]
with $g_{i,j}\in\operatorname{GL}_2^+(\mathbb{Q})$ and the $\sigma_i$ lying in distinct $\operatorname{GL}_2^+(\mathbb{Q})$-orbits. We say $\tau$ is $j'$-generic if the numbers $j'(\sigma_1),\dots,j'(\sigma_k)$ are algebraically independent over $\overline{\mathbb{Q}}$.
\end{defn}
Now we can state the central theorem of the paper.
\begin{thrm}\label{thrm:PartialAOwDerivs}
Let $V\subseteq\mathbb{C}^{3n}$ be a proper algebraic variety defined over $\overline{\mathbb{Q}}$. There exists an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$, consisting of \emph{proper} $\mathcal{H}$-special varieties of $\mathcal{H}^n$, with the following property. Every $j'$-generic $\mathcal{H}$-special point in $J^{-1}(V)$ is contained in some $G\in\sigma(V)$.
\end{thrm}
Note in particular that any special point of the form $(g_1\sigma_1,\dots,g_{n}\sigma_1)$, with $g_i\in\operatorname{GL}_2^+(\mathbb{Q})$, is automatically $j'$-generic, unless $g_1\sigma_1$ is in the $\operatorname{SL}_2(\mathbb{Z})$-orbit of $i$ or $\rho$. Hence we have the following easy corollaries.
\begin{cor}
Let $V\subseteq\mathbb{C}^{3n}$ be a proper algebraic variety defined over $\overline{\mathbb{Q}}$. There exists an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$, consisting of \emph{proper} $\mathcal{H}$-special varieties of $\mathcal{H}^n$, with the following property. Every $\mathcal{H}$-special point of the form $(g_1\sigma,\dots,g_{n}\sigma)\in J^{-1}(V)$, $g_i\in\operatorname{GL}_2^+(\mathbb{Q})$, is contained in some $G\in\sigma(V)$.
\begin{proof}
This follows directly from \ref{thrm:PartialAOwDerivs}, except for the possible existence of points with a coordinate lying in $\operatorname{SL}_2(\mathbb{Z})\cdot\{i,\rho\}$. But any such points are automatically contained within $\operatorname{SL}_2(\mathbb{Z})$-finitely many proper $\mathcal{H}$-special varieties.
\end{proof}
\end{cor}
\begin{cor}
Let $V\subseteq\mathbb{C}^3$ be a proper algebraic variety defined over $\overline{\mathbb{Q}}$. Then $J^{-1}(V)$ contains only $\operatorname{SL}_2(\mathbb{Z})$-finitely many $\mathcal{H}$-special points.
\end{cor}
This last, of course, is simply Conjecture \ref{conj:PartialAOwDerivs} for $n=1$. To get \ref{conj:PartialAOwDerivs} in full generality, we would need something like the following.
\begin{conj}\label{conj:AlgIndep}
Let $\tau_1,\dots,\tau_n\in\mathcal{H}$ be quadratic points, lying in distinct $\operatorname{GL}_2^+(\mathbb{Q})$-orbits, none of which lies in the $\operatorname{SL}_2(\mathbb{Z})$-orbit of $i$ or $\rho$. Then $j'(\tau_1),\dots,j'(\tau_n)$ are algebraically independent over $\overline{\mathbb{Q}}$.
\end{conj}
Otherwise put: ``All quadratic points in $\mathcal{H}^n$ are $j'$-generic, except those with a coordinate in the $\operatorname{SL}_2(\mathbb{Z})$-orbit of $i$ or $\rho$.'' In this form, it is easy to see that the conjecture, together with Theorem \ref{thrm:PartialAOwDerivs}, implies \ref{conj:PartialAOwDerivs}.
\begin{thrm}
Assume Conjecture \ref{conj:AlgIndep}. Then Conjecture \ref{conj:PartialAOwDerivs} holds.
\begin{proof}
Immediate from Theorem \ref{thrm:PartialAOwDerivs}.
\end{proof}
\end{thrm}
Should we believe Conjecture \ref{conj:AlgIndep}? It is quite strong, having a similar flavour to existing modular Schanuel statements; but it does fit into the existing body of conjectures. See, for instance, Bertolin's elliptico-toric conjecture (CET) \cite{Bertolin2002204}, from which Conjecture \ref{conj:AlgIndep} follows immediately. In turn, CET is a special case of the Grothendieck-Andr\'e period conjecture. So \ref{conj:AlgIndep} fits well with what we might expect. It is also much stronger than we need; to get Conjecture \ref{conj:PartialAOwDerivs}, various weaker transcendence statements would suffice. The weaker statements are much less clean to state and fit less obviously into the existing literature, so we stick with Conjecture \ref{conj:AlgIndep} for this paper.\\
The paper is broken down as follows. Section 2 is dedicated to proving some Ax-Lindemann type results. In this section lies the primary novelty of the paper; the Ax-Lindemann results we prove here are necessarily of a very new and unusual shape, in order to account for some problems that arise with point-counting. This section is completely independent of Conjecture \ref{conj:AlgIndep} and does not involve considerations of $j'$-genericity. It is in Section 3, where we discuss the point-counting aspects, that the issue of $j'$-genericity arises. Section 4 brings everything together to conclude the proof of \ref{thrm:PartialAOwDerivs}. The penultimate section of the paper, Section 5, is dedicated to proving a more precise version of Conjecture \ref{conj:PartialAOwDerivs}, under the assumption of Conjecture \ref{conj:AlgIndep}. In the final section we apply this more precise result, together with a slight adaptation of work of Scanlon \cite{Scanlon2004}, to produce versions of our results which are uniform in algebraic families.\\
\\
\textbf{Acknowledgements.} I would like to take the opportunity to thank Jonathan Pila for his invaluable guidance and supervision, as well as for many excellent suggestions on the subject of this paper. I would also like to thank Sebastian Eterovi\'c for several useful conversations on these topics. Last but certainly not least, thanks go to my father Derek for our many discussions and his keen proof-reading eyes!
\section{Ax-Lindemann}
\subsection{Technicalities}
We begin with a few crucial definitions.
\begin{defn}
A subset of $\mathcal{H}^n$ is a \emph{linear variety} if, up to reordering of coordinates, it takes the form
\begin{equation}\label{eqn:BasicLinearSub}G=\{(g_{1,1}\tau_1,\dots,g_{1,r_1}\tau_1,\dots,g_{k,1}\tau_k,\dots,g_{k,r_k}\tau_k,t_{0,1},\dots,t_{0,r_0}):\tau_i\in\mathcal{H}\},\end{equation}
where $t_{0,j}\in\mathcal{H}$ and $g_{i,j}\in\operatorname{SL}_2(\mathbb{R})$.
A linear variety is called \emph{basic} if $r_0=0$. So a basic linear variety is defined by finitely many matrices $g_{i,j}\in\operatorname{SL}_2(\mathbb{R})$, together with a permutation of the coordinates. We will often suppress the permutation of coordinates, assuming for simplicity that the variety takes exactly the form in (\ref{eqn:BasicLinearSub}).
Any linear variety $G\subseteq\mathcal{H}^n$ has an underlying basic variety attached to it, namely the $B\subseteq \mathcal{H}^k$ attained by ignoring the constant coordinates $t_{0,j}$. We say that $G$ is a \emph{translate} of $B$ (by the $t_{0,j}$), or that $B$ is the basic variety underlying $G$.
\end{defn}
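For example (purely to fix the notation): for a single $g\in\operatorname{SL}_2(\mathbb{R})$ and a fixed $t_0\in\mathcal{H}$, the set
\[G=\{(\tau_1,g\tau_1,t_0):\tau_1\in\mathcal{H}\}\subseteq\mathcal{H}^3\]
is a linear variety with $k=1$, $r_1=2$ (taking $g_{1,1}$ to be the identity and $g_{1,2}=g$) and $r_0=1$; its underlying basic variety is $B=\{(\tau_1,g\tau_1):\tau_1\in\mathcal{H}\}\subseteq\mathcal{H}^2$, and $G$ is the translate of $B$ by $t_0$.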
As we will see in later sections, the usual counting methods don't work out perfectly when applied to the derivatives problem. One would like to be able to count linear subvarieties contained in $J^{-1}(V)$, but the methods are slightly too coarse for this. One can, however, count those varieties which are, in some sense, \emph{approximately} in $J^{-1}(V)$. This motivates the following.
\begin{defn}
Let $V\subseteq\mathbb{C}^{3n}$ be an algebraic variety and $B\subseteq\mathcal{H}^k$ a basic linear variety, given by data $g_{i,j}$ as in (\ref{eqn:BasicLinearSub}). For each $g_{i,j}$, take a complex number $z_{i,j}$ and a real $c_{i,j}>0$. Also take $n-k$ triples of complex numbers $(w_i,x_i,y_i)$.
We say $B$ is \emph{adjacent to} $V$ \emph{via} $z_{i,j},c_{i,j},w_i,x_i,y_i$ if for all $\tau_i$, we have, up to permutation of coordinates,
\[\Biggl[\dots,j(g_{i,j}\tau_i),\dfrac{j'(g_{i,j}\tau_i)z_{i,j}}{m_{g_{i,j}}(\tau_i)},p_{c_{i,j}}\left(j(g_{i,j}\tau_i),\chi^*(g_{i,j}\tau_i),\dfrac{j'(g_{i,j}\tau_i)z_{i,j}}{m_{g_{i,j}}(\tau_i)}\right),\dots,w_i,x_i,y_i,\dots\Biggr]\in V.\]
If this holds for some choice of the possible data $z_{i,j}$, $c_{i,j}$, $w_i$, $x_i$, $y_i$, we simply say $B$ is adjacent to $V$, and write
\[B\hookrightarrow V.\]
Finally, if $G$ is a translate of a basic linear variety $B$ by $(\sigma_1,\dots,\sigma_d)$, we say $G$ is adjacent to $V$ if $B$ is adjacent to $V$ via any $z_{i,j}$, any $c_{i,j}$ and $(w_i,x_i,y_i)=J(\sigma_i)$. In this case we again write
\[G\hookrightarrow V.\]
\end{defn}
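In the simplest case the definition unwinds as follows: take $n=k=1$ and $B=\mathcal{H}$, given by the single matrix $g_{1,1}=I$ (so that $m_{g_{1,1}}\equiv 1$). Then $\mathcal{H}\hookrightarrow V$ says precisely that there exist $z\in\mathbb{C}$ and $c>0$ with
\[\bigl(j(\tau),j'(\tau)z,p_c(j(\tau),\chi^*(\tau),j'(\tau)z)\bigr)\in V\quad\text{for all }\tau\in\mathcal{H};\]
this is exactly the shape in which adjacency is used in the base case of the proof of Theorem \ref{thrm:PartialAOwDerivs}.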
This rather intricate definition turns out to be crucial in carrying out a suitable variant of the usual o-minimal strategy used for Andr\'e-Oort problems. This is the main new idea of the paper: we follow the typical Pila-Zannier strategy for diophantine problems of this type, but rather than counting those $\mathcal{H}$-special varieties which are contained in $J^{-1}(V)$ directly, we instead count the $\mathcal{H}$-special varieties which are adjacent to $V$. The notion of adjacency is constructed so as to be definable, invariant under the action of $\operatorname{SL}_2(\mathbb{Z})$ (on the $g_{i,j}$) and invariant under Galois action (on $j$ and $\chi^*$); those three conditions are precisely what we need to make the strategy work.\\
The first step of the strategy, and the primary goal of this section, is to prove an ``Ax-Lindemann-style'' result pertaining to the notion of adjacency. The idea is that the only linear varieties (or more generally, real algebraic arcs) that can be adjacent to some variety $V$ should be accounted for by weakly $\mathcal{H}$-special varieties. Given our overall goal, namely to count those linear varieties which are adjacent to $V$, this problem is directly analogous to the usual Ax-Lindemann theorems needed in the classical case. Unsurprisingly, we will need to use some existing Ax-Lindemann results.
\begin{thrm}\label{thrm:AxLindwDerivs}
Let $S\subseteq\mathcal{H}^n$ be an arc of a real algebraic curve and let $G$ be the smallest weakly $\mathcal{H}$-special variety containing $S$. Suppose that $G$ is a GUT variety. (Recall: this simply means that the matrices defining $G$ are upper triangular.)
Let $V\subseteq\mathbb{C}^{3n+1}$ be an algebraic variety such that
\[(\tau_1,J(\tau_1),\dots,J(\tau_n))\in V\]
for all $(\tau_1,\dots,\tau_n)\in S$. Then in fact this holds for all $(\tau_1,\dots,\tau_n)\in G$.
\end{thrm}
\begin{thrm}\label{thrm:NonholomorphicAxLind}
Let $S\subseteq\mathcal{H}^n$ be an arc of a real algebraic curve and let $G$ be the smallest weakly $\mathcal{H}$-special variety containing $S$.
Let $V\subseteq\mathbb{C}^{2n}$ be an algebraic variety such that
\[(\dots,j(\tau_j),\chi^*(\tau_j),\dots)\in V\]
for all $(\tau_1,\dots,\tau_n)\in S$. Then in fact this holds for all $(\tau_1,\dots,\tau_n)\in G$.
\end{thrm}
Both of the above were proven in \cite{Spence2016}, though the majority of the work towards \ref{thrm:AxLindwDerivs} was done by Pila in \cite{Pila2013}. Before we can apply these to prove our central Ax-Lindemann result, we will need some technical lemmas, the first of which is simply a strengthening of \ref{thrm:AxLindwDerivs}.
\begin{thrm}\label{thrm:AxLindAllFourHolomorphics}
Let $S\subseteq\mathcal{H}^n$ be an arc of a real algebraic curve and let $G$ be the smallest weakly $\mathcal{H}$-special variety containing $S$. Suppose that $G$ is a GUT variety.
Let $V\subseteq\mathbb{C}^{4n+1}$ be an algebraic variety such that
\[(\tau_1,j(\tau_1),j'(\tau_1),\chi(\tau_1),f(\tau_1),\dots,j(\tau_n),j'(\tau_n),\chi(\tau_n),f(\tau_n))\in V\]
for all $(\tau_1,\dots,\tau_n)\in S$. Then in fact this holds for all $(\tau_1,\dots,\tau_n)\in G$.
\begin{proof}
Let $F$ be a defining polynomial of $V$. We will represent the various coordinates as follows.
\begin{itemize}
\item The $\tau_1$ coordinate will be represented by a variable $T$.
\item The $j$-coordinates (i.e. the 2nd, 6th, 10th coordinates, and so on) will be represented by variables $J_1,\dots,J_n$.
\item The $j'$-coordinates will be represented by variables $K_1,\dots,K_n$.
\item The $\chi$-coordinates will be represented by variables $X_1,\dots,X_n$.
\item The $f$-coordinates will be represented by variables $F_1,\dots,F_n$.
\end{itemize}
Since $j,j',\chi$ and $f$ are algebraically dependent, there is an irreducible polynomial $p$ with the property that
\[p(j(\tau),j'(\tau),\chi(\tau),f(\tau))=0\]
for all $\tau$.
Consider the variety $W\subseteq\mathbb{C}^{4n+1}$ defined by
\[p(J_{i},K_{i},X_{i},F_{i})=0\]
for each $1\leq i\leq n$. Clearly $\dim W=3n+1$.
If we further impose the condition
\[F(T,J_1,K_1,X_1,F_1,\dots,J_n,K_n,X_n,F_n)=0,\]
there are two possibilities. Either the resulting variety $W_F$ still has dimension $3n+1$ or it has dimension $3n$.
If $\dim W_F=3n+1$, it is automatically the case that
\[F(\tau_1,\dots,j(\tau_k),j'(\tau_k),\chi(\tau_k),f(\tau_k),\dots)=0\]
for all $(\tau_1,\dots,\tau_n)\in\mathcal{H}^n$.
On the other hand, if $\dim W_F=3n$, then $W_F$ amounts to the imposition of a relation between 3 of the 4 functions. That is, there are distinct $A,B,C\in \{J,K,X,F\}$ and a polynomial $H$, in $3n+1$ variables, such that $W_F$ is defined by:
\[(T,J_1,\dots,F_{n})\in W\]
and
\[H(T,A_1,B_1,C_1,\dots, A_n,B_n,C_n)=0.\]
We then have, for the corresponding $f_A,f_B,f_C\in\{j,j',\chi,f\}$ that
\[H(\tau_1,\dots,f_A(\tau_i),f_B(\tau_i),f_C(\tau_i),\dots)=0\]
for all $(\tau_1,\dots,\tau_n)\in S$. By Theorem \ref{thrm:AxLindwDerivs}, this must then hold for all $(\tau_1,\dots,\tau_n)\in G$. In particular,
\[F(\tau_1,\dots,j(\tau_i),j'(\tau_i),\chi(\tau_i),f(\tau_i),\dots)=0\]
for all $(\tau_1,\dots,\tau_n)\in G$.
This holds for each defining polynomial $F$ of $V$, so we're done.
\end{proof}
\end{thrm}
\begin{cor}\label{cor:ContinuationInAlgFunctions}
Let $S\subseteq\mathcal{H}^n$ be an arc of a real algebraic curve and let $\phi$ be an algebraic function in $4n+1$ variables. Let $G$ be the smallest weakly $\mathcal{H}$-special variety containing $S$, and suppose that $G$ is a GUT variety.
Writing
\[\tilde\pi(\tau_1,\dots,\tau_n)=(j(\tau_1),j'(\tau_1),\chi(\tau_1),f(\tau_1),\dots,j(\tau_n),j'(\tau_n),\chi(\tau_n),f(\tau_n)),\]
we suppose that, on some branch of $\phi$,
\[\phi(\tau_1,\tilde\pi(\tau))=0\]
for all $\tau=(\tau_1,\dots,\tau_n)\in S$. Then this holds for all $\tau\in G$, excluding perhaps some exceptional set corresponding to branch points of $\phi$.
\begin{proof}
There exists an irreducible polynomial $p$ such that
\[p(\phi(\mathbf{X}),\mathbf{X})=0\]
for all $\mathbf{X}$. Then in particular we have
\[p(0,\tau_1,\tilde\pi(\tau))=0\]
for all $\tau=(\tau_1,\dots,\tau_n)\in S$. By Theorem \ref{thrm:AxLindAllFourHolomorphics}, we have this relation for all $\tau\in G$.
We can pick a point $\mathbf{q}=(\tau_1,\dots,\tau_n)\in S$, a $G$-open neighbourhood $U$ of $\mathbf{q}$ and a $\mathbb{C}$-open neighbourhood $V$ of 0 with the following property. Whenever $\tau\in U$, the only root of
\begin{equation}\label{eqn:pEquation}p(X,\tau_1,\tilde\pi(\tau))\end{equation}
lying in $V$ is root 0 (which is a root by the earlier discussion). Now, for all $\tau\in\mathcal{H}^n$, we have
\[p(\phi(\tau_1,\tilde\pi(\tau)),\tau_1,\tilde\pi(\tau))=0\]
by definition of $p$. In other words, $\phi(\tau_1,\tilde\pi(\tau))$ is a root of (\ref{eqn:pEquation}). However, as $\tau\in U$ gets arbitrarily close to $\mathbf{q}$, the value of $\phi(\tau_1,\tilde\pi(\tau))$ gets arbitrarily close to 0. Hence it eventually lies in $V$. The only root of (\ref{eqn:pEquation}) within $V$ is 0, whence for some $G$-open neighbourhood of $\mathbf{q}$, we have
\[\phi(\tau_1,\tilde\pi(\tau))=0.\]
By analytic continuation, this holds for all $\tau\in G$, excluding some exceptional set corresponding to the branch points and branch cuts of $\phi$.
\end{proof}
\end{cor}
We conclude this section with one more technical lemma. While it may appear entirely unmotivated, hopefully the need for such a lemma will become clear during the proof of Theorem \ref{propn:AxLindOverQ}.
\begin{lma}\label{lma:yIsAlgebraic}
Let $S\subseteq\mathcal{H}^n$ be an arc of a real algebraic curve and let $\phi$ be an algebraic function in $4n$ variables. Suppose that
\[\operatorname{Im}\tau_1=\phi(\dots,j(\tau_k),j'(\tau_k),\chi(\tau_k),f(\tau_k),\dots)\]
for all $(\tau_1,\dots,\tau_n)\in S$. Let $G$ be the smallest weakly $\mathcal{H}$-special variety containing $S$, and suppose that $G$ is a GUT variety. Then $\operatorname{Im}\tau_1$ is constant on $S$.
\begin{proof}
Write $y=\operatorname{Im}\tau_1$ and suppose that $y$ is nonconstant. Let us retain the abbreviation
\[\tilde\pi(\tau)=(j(\tau_1),j'(\tau_1),\chi(\tau_1),f(\tau_1),\dots,j(\tau_n),j'(\tau_n),\chi(\tau_n),f(\tau_n)).\]
Then $S$ can be parametrised as
\[S=\{(x_1(y)+iy,x_2(y)+iy_2(y),\dots,x_n(y)+iy_n(y)):y\in U\}\]
for some interval $U\subseteq\mathbb{R}$ and algebraic functions $x_i$, $y_i$.
Now, take one of the polynomials $p(x_1,y_1,\dots,x_n,y_n)$ defining $S$. We can write
\begin{equation}\label{eqn:pT1}p\bigl[\tau_1-i\phi(\tilde\pi(\tau)),\phi(\tilde\pi(\tau)),x_2(\phi(\tilde\pi(\tau))),y_2(\phi(\tilde\pi(\tau))),\dots,x_n(\phi(\tilde\pi(\tau))),y_n(\phi(\tilde\pi(\tau)))\bigr]=0\end{equation}
for all $\tau=(\tau_1,\dots,\tau_n)\in S$. Otherwise put, we have an algebraic function $\psi$ such that
\[\psi(\tau_1,\tilde\pi(\tau))=0\]
for all $\tau\in S$. By Corollary \ref{cor:ContinuationInAlgFunctions}, this holds for all $\tau\in G$, whence (\ref{eqn:pT1}) holds for all $\tau\in G$.\\
By assumption, $G$ is a GUT variety. Since $y$ is assumed to be nonconstant on $S$, the variable $\tau_1$ cannot be constant on $G$. So up to permutation of coordinates $G$ looks like
\[\{(\tau_1,g_2\tau_1,\dots,g_k\tau_1):\tau_1\in\mathcal{H}\}\times H\]
for some upper triangular matrices $g_i\in\operatorname{GL}_2^+(\mathbb{Q})$ and some GUT variety $H$.
For any $\tau_1\in \mathcal{H}$, $\tau'\in H$ and any $t\in\mathbb{Z}$, we then have
\[\tau_t:=(\tau_1+t,g_2(\tau_1+t),\dots,g_k(\tau_1+t),\tau')\in G.\]
Since the $g_i$ are upper triangular, we can find an integer $N$ with the following property. For every $t\in\mathbb{Z}$ and every $i$, there exists $k_i\in\mathbb{Z}$ with
\[g_i(\tau_1+tN)=k_i+g_i(\tau_1).\]
(For instance, writing each $g_i=\begin{pmatrix}a_i&b_i\\0&d_i\end{pmatrix}$ with integer entries, we have $g_i(\tau_1+tN)=g_i\tau_1+\frac{a_itN}{d_i}$, so any $N$ divisible by each $d_i$ works.) By the periodicity of $j$, $j'$, $\chi$ and $f$, it follows that
\[\tilde\pi(\tau_{tN})=\tilde\pi(\tau_0)\]
for all $t\in\mathbb{Z}$. So for all $\tau=(\tau_1,\dots,\tau_n)\in G$ and all $t\in\mathbb{Z}$ we have
\[p(\tau_1+tN-i\phi(\tilde\pi(\tau)),\phi(\tilde\pi(\tau)),x_2(\phi(\tilde\pi(\tau))),y_2(\phi(\tilde\pi(\tau))),\dots,x_n(\phi(\tilde\pi(\tau))),y_n(\phi(\tilde\pi(\tau))))=0.\]
In particular, whenever $\tau=(x_1(y)+iy,x_2(y)+iy_2(y),\dots,x_n(y)+iy_n(y))\in S$, we have
\[p(x_1(y)+tN,y,x_2(y),y_2(y),\dots,x_n(y),y_n(y))=0.\]
This holds for \emph{every} polynomial $p$ defining $S$. Since $S$ has only one real dimension, it must therefore be a horizontal line in the $\tau_1$ coordinate. That is, $y$ is constant. Contradiction!
\end{proof}
\end{lma}
\subsection{Ax-Lindemann for Adjacency}
We can now prove our main Ax-Lindemann theorem. The idea is to show that a real algebraic arc in $\mathcal{H}^n$ which is `adjacent' to a variety $V$ (in a suitable sense) must be contained in a weakly $\mathcal{H}$-special variety which is itself adjacent to $V$.
\begin{thrm}\label{propn:AxLindOverQ}
Let $V\subseteq\mathbb{C}^{3n}$ be an algebraic variety. Let $S$ be an arc of a real algebraic curve lying in $(\mathcal{H}\times \operatorname{SL}_2(\mathbb{R}))^n$. Define
\[\widehat{S}=\{(g_1\tau_1,\dots,g_n\tau_n):(\tau_1,g_1,\dots,\tau_n,g_n)\in S\},\]
and suppose that $\widehat{S}$ is positive-dimensional; that is, not all of the $g_j\tau_j$ are constant on $S$.
Further suppose that, for some $c_i\in\mathbb{R}$,
\[\left[\dots,j(g_i\tau_i),\dfrac{j'(g_i\tau_i)}{m_{g_i}(\tau_i)},p_{c_i}\left(j(g_i\tau_i),\chi^*(g_i\tau_i),\dfrac{j'(g_i\tau_i)}{m_{g_i}(\tau_i)}\right),\dots\right]\in V\]
for all $(\tau_1,g_1,\dots,\tau_n,g_n)\in S$.
Then there exists a weakly $\mathcal{H}$-special variety $G$ with
\[\widehat{S}\subseteq G\hookrightarrow V.\]
\end{thrm}
\textbf{Note.} The functions $j(g_j\tau_j)$, $\chi^*(g_j\tau_j)$ and $j'(g_j\tau_j)/m_{g_j}(\tau_j)$ are unaffected if we replace $g_j$ by $\gamma g_j$ for any $\gamma \in \operatorname{SL}_2(\mathbb{Z})$. Hence we may assume that $G$, the smallest weakly $\mathcal{H}$-special variety containing $\widehat{S}$, is a GUT variety. This will be useful several times throughout.\\
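A sketch of the verification behind this (using only the $\operatorname{SL}_2(\mathbb{Z})$-invariance of $j$ and $\chi^*$, the transformation law $j'(\gamma\sigma)=m_\gamma(\sigma)j'(\sigma)$ recalled in Section 5, and the standard cocycle identity for automorphy factors, which gives $m_{\gamma g}(\tau)=m_\gamma(g\tau)\,m_g(\tau)$): for $\gamma\in\operatorname{SL}_2(\mathbb{Z})$,
\[\dfrac{j'(\gamma g_j\tau_j)}{m_{\gamma g_j}(\tau_j)}=\dfrac{m_\gamma(g_j\tau_j)\,j'(g_j\tau_j)}{m_\gamma(g_j\tau_j)\,m_{g_j}(\tau_j)}=\dfrac{j'(g_j\tau_j)}{m_{g_j}(\tau_j)},\]
while $j(\gamma g_j\tau_j)=j(g_j\tau_j)$ and $\chi^*(\gamma g_j\tau_j)=\chi^*(g_j\tau_j)$ by modularity.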
\textbf{Idea of Proof.} First, we attempt to parametrise the relevant algebraic arcs in terms of the imaginary part $y_j$ of one of the variables. With suitable manipulations, we reach one of two outcomes: either a particular complex analytic relation involving just $j$, $\chi$, $f$ and $j'$ holds, or $y_j$ is equal to an algebraic function of $j$, $\chi$, $f$ and $j'$. In the first case, the result comes fairly easily. In the second case, we apply Lemma \ref{lma:yIsAlgebraic} to see that $y_j$ is constant; this situation can be dealt with easily.
\begin{proof}[Proof of Theorem \ref{propn:AxLindOverQ}]
Given $(\dots,\tau_j,g_j,\dots)\in S$, let us write
\[g_j\tau_j=\sigma_j=x_j+iy_j\] and
\[m_{g_j}(\tau_j)=\rho_j=u_j+iv_j.\]
We wish to parametrise the real algebraic arc
\[\tilde{S}=\{(\dots,g_j\tau_j,m_{g_j}(\tau_j),\dots):(\dots,\tau_j,g_j,\dots)\in S\}\subseteq (\mathcal{H}\times\mathbb{C})^n\]
in terms of one of the $y_j$. Thus we first have to deal with the possibility that all of the $y_j$ are in fact constant on $\tilde S$.
In this situation, let us first assume that the $\rho_j$ are also constant. Then we have
\begin{equation}\label{eqn:ConstantCase}\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi y_j},\dfrac{j'(\sigma_j)}{\rho_j}\right),\dots\right]\in V\end{equation}
for some constants $y_j$, $\rho_j$ and all $\sigma=(\sigma_1,\dots,\sigma_n)\in\widehat{S}$. Recall that $G$, the weakly $\mathcal{H}$-special closure of $\widehat{S}$, can be assumed to be a GUT variety. So we can apply Theorem \ref{thrm:AxLindAllFourHolomorphics} to see that (\ref{eqn:ConstantCase}) holds for all $\sigma\in G$.
Taking Zariski closures (over $\mathbb{C}$) in (\ref{eqn:ConstantCase}), we get
\[\left[\dots,j(\sigma_j),w_j,p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi y_j},w_j\right),\dots\right]\in V\]
for all $\mathbf{q}\in G$ and $(w_1,\dots,w_n)\in V_{j(\mathbf{q})}$. Here $V_{j(\mathbf{q})}$ is a variety depending only on the $j(\sigma_j)$, which contains the point
\[\left(\dfrac{j'(\sigma_1)}{\rho_1},\dots,\dfrac{j'(\sigma_n)}{\rho_n}\right).\]
As in \cite[Lemma 4.10]{Spence2016}, it is easy to find a sequence of matrices $\gamma_t\in\operatorname{SL}_2(\mathbb{Z})$, $t\in\mathbb{N}$, such that
\[(\chi(\gamma_t\sigma_1),\dots,\chi(\gamma_t\sigma_n))\to (\chi(\sigma_1),\dots,\chi(\sigma_n))\]
and
\[(f(\gamma_t\sigma_1),\dots,f(\gamma_t\sigma_n))\to 0\]
as $t\to\infty$. So by continuity (and the invariance of $j$), we get
\[\left[\dots,j(\sigma_j),w_j,p_{c_j}\left(j(\sigma_j),\chi(\sigma_j),w_j\right),\dots\right]\in V\]
for all $\mathbf{q}\in G$ and $\mathbf{w}\in V_{j(\mathbf{q})}$. By an isomorphism theorem from \cite{Spence2016}, we get
\[\left[\dots,j(\sigma_j),w_j,p_{c_j}\left(j(\sigma_j),\chi^*(\sigma_j),w_j\right),\dots\right]\in V\]
and hence
\[\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j},p_{c_j}\left(j(\sigma_j),\chi^*(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j}\right),\dots\right]\in V,\]
for all $\sigma\in G$. This says precisely that $G\hookrightarrow V$.
Next we deal with the situation where one of the $y_j$ is nonconstant on $\tilde S$. Without loss of generality, suppose it is $y_1$ and write $y=y_1$. We can parametrise
\[\tilde{S}=\{(x_1(y)+iy,u_1(y)+iv_1(y),\dots,x_n(y)+iy_n(y),u_n(y)+iv_n(y)):y\in I\},\]
for some interval $I\subseteq\mathbb{R}$ and algebraic functions $x_i,y_i,u_i,v_i$. Letting $F$ be a defining polynomial of $V$, we have
\[F\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{u_j(y)+iv_j(y)},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi y_j(y)},\dfrac{j'(\sigma_j)}{u_j(y)+iv_j(y)}\right),\dots\right]=0,\]
where we set $y_1(y)=y$. We can rewrite this; there is an algebraic function $s$ such that the above holds if and only if
\[s(y,\tilde\pi(\sigma))=0\]
for all $\sigma=(\sigma_1,\dots,\sigma_n)\in\widehat{S}$. Here we are writing $y=\operatorname{Im}\sigma_1$ and, as before:
\[\tilde\pi(\sigma_1,\dots,\sigma_n)=(j(\sigma_1),j'(\sigma_1),\chi(\sigma_1),f(\sigma_1),\dots,j(\sigma_n),j'(\sigma_n),\chi(\sigma_n),f(\sigma_n)).\]
Since $s$ is an algebraic function, there is a nontrivial irreducible polynomial $p_s$ such that
\[p_s(s(\mathbf{X}),\mathbf{X})=0\]
for all $\mathbf{X}$. In particular,
\[p_s(0,y,\tilde\pi(\sigma))=0\]
for all $\sigma\in\widehat{S}$ and $y=\operatorname{Im}\sigma_1$. Since $p_s$ is irreducible and nontrivial, we get a nontrivial $q_s(\mathbf{X})=p_s(0,\mathbf{X})$. (It is clear that $p_s(t,\mathbf{X})\ne t$.)
Now we apply the following iterative procedure to $q_s$.
\begin{enumerate}
\item Inspect separately each coefficient $r_k$ of $T^k$ in $q(T,\dots)$. If
\[r_k(\tilde\pi(\sigma))=0\]
for all $\sigma\in\widehat{S}$ and all $k$, then terminate. Otherwise, let $q'$ be the polynomial produced by removing from $q$ all coefficients $r_k$ which have the above property.
\item If $q'$ is irreducible, terminate. Otherwise, there is a factor $q''$ of $q'$ with
\[q''(y,\tilde\pi(\sigma))=0\]
for all $\sigma\in\widehat{S}$ and $y=\operatorname{Im}\sigma_1$.
\item We have a polynomial $q''$, which retains the property that
\[q''(y,\tilde\pi(\sigma))=0\]
for all $\sigma\in\widehat{S}$ and $y=\operatorname{Im}\sigma_1$. Repeat from step 1, with $q''$ instead of $q$.
\end{enumerate}
This must eventually terminate, since step 2 will always reduce the degree of the polynomial in question. So we have two possibilities.\\
If we terminated at step 1, then working backwards we see that every coefficient $r_k$ of $T^k$ in $q_s$ has the property that
\[r_k(\tilde\pi(\sigma))=0\]
for all $\sigma\in\widehat{S}$. Using the fact that $G$ is a GUT variety, we can apply Theorem \ref{thrm:AxLindAllFourHolomorphics} to see that this holds for all $\sigma\in G$. In particular,
\[q_s(y,\tilde\pi(\sigma))=0\]
for all $y\in\mathbb{C}$ and all $\sigma\in G$. Otherwise put, 0 is a root of
\begin{equation}\label{eqn:pSEquation}p_s(X,y,\tilde\pi(\sigma))\end{equation}
for all $y\in\mathbb{C}$ and all $\sigma\in G$.
Now we proceed much as we did in the proof of Corollary \ref{cor:ContinuationInAlgFunctions}. Choose a point
\[\mathbf{a}=(a_1,\dots,a_n)\in\widehat{S},\]
a $G$-open neighbourhood $U$ of $\mathbf{a}$ and a $\mathbb{C}$-open neighbourhood $W$ of 0 with the following property. Whenever $\sigma=(\sigma_1,\dots,\sigma_n)\in U$, the only root of (\ref{eqn:pSEquation}) lying in $W$ is 0 itself.
Now recall that
\[s(y,\tilde\pi(\sigma))\]
is a root of (\ref{eqn:pSEquation}) for all $y$ and all $\sigma$. This is just the definition of $p_s$. Since $s$ vanishes at $\mathbf{a}\in\widehat{S}$, there is a $G$-open neighbourhood
\[\mathbf{a}\in U'\subseteq U\]
such that
\[s(\operatorname{Im} a_1,\tilde\pi(\sigma))\in W\]
for all $\sigma\in U'$. But the only root of (\ref{eqn:pSEquation}) lying in $W$ is the root 0. So it must be the case that
\[s(\operatorname{Im} a_1,\tilde\pi(\sigma))=0\]
for all $\sigma\in U'$ and hence for all $\sigma\in G$.
Recalling the definition of $S$, we see that
\[F\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{u_j(y)+iv_j(y)},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi y_j(y)},\dfrac{j'(\sigma_j)}{u_j(y)+iv_j(y)}\right),\dots\right]=0,\]
for all $\sigma=(\sigma_1,\dots,\sigma_n)\in G$ and for \emph{constants} $y_j=y_j(\operatorname{Im} a_1)$ and $\rho_j=u_j(\operatorname{Im} a_1)+iv_j(\operatorname{Im} a_1)$.
If we repeat this whole procedure for each defining polynomial of $V$, we get
\[\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi y_j},\dfrac{j'(\sigma_j)}{\rho_j}\right),\dots\right]\in V\]
for all $\sigma\in G$ and \emph{constants} $y_j=y_j(\operatorname{Im} a_1)$ and $\rho_j=u_j(\operatorname{Im} a_1)+iv_j(\operatorname{Im} a_1)$. Now we are in exactly the same position as we were for (\ref{eqn:ConstantCase}), so we conclude as we did earlier.\\
If we terminated at step 2, then we have an irreducible polynomial $q$ such that
\[q(y,\tilde\pi(\sigma))=0\]
for all $\sigma=(\sigma_1,\dots,\sigma_n)\in \widehat{S}$. Moreover, for every $k$, the coefficient $r_k$ of $T^k$ in $q(T,\dots)$ has the property that
\[r_k(\tilde\pi(\sigma))\]
does \emph{not} vanish identically for $\sigma\in\widehat{S}$. Thus we can extract an algebraic function $\phi$ such that
\[y=\phi(\tilde\pi(\sigma))\]
for all $\sigma\in \widehat{S}$ and $y=\operatorname{Im}\sigma_1$. Now we may apply Lemma \ref{lma:yIsAlgebraic} (once again using the fact that the weakly $\mathcal{H}$-special closure of $\widehat{S}$ is a GUT variety) and see that $y$ is constant on $\widehat{S}$, a contradiction.\\
Finally we deal with the case where the $y_j$ are all constant on $\tilde S$, but perhaps the $\rho_j$ vary. Say $y_j=a_j$. Note, by hypothesis, that at least one of the $\sigma_j$ is nonconstant on $\tilde S$. Without loss of generality, let us say it is $\sigma_1$.
We have the relation
\begin{equation}\label{eqn:constantYs}
\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi a_j},\dfrac{j'(\sigma_j)}{\rho_j}\right),\dots\right]\in V,
\end{equation}
holding for all $(\dots,\sigma_j,\rho_j,\dots)\in \tilde S$. This is a complex analytic relation.
Consider the intersection of an irreducible algebraic variety $W\subseteq\mathbb{C}^{2n}$ with $(\mathcal{H}\times\mathbb{C})^n$. A connected component of this intersection is called a complex algebraic component. Let $A$ be the smallest complex algebraic component containing $\tilde S$. Then by analytic continuation, (\ref{eqn:constantYs}) holds for all $(\dots,\sigma_j,\rho_j,\dots)\in A$.
Since the weakly $\mathcal{H}$-special closure of $\widehat{S}$ is a GUT variety $G$, the projection of $A$ onto the $\sigma_j$ coordinates also has $G$ as its weakly $\mathcal{H}$-special closure. Hence we can find a real algebraic arc
\[T\subseteq (\mathcal{H}\times\mathbb{C})^n\]
with the following properties:
\begin{itemize}
\item On $T$, the imaginary part of $\sigma_1$ is nonconstant.
\item Whenever $(\dots,\sigma_j,\rho_j,\dots)\in T$, we have
\[
\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi a_j},\dfrac{j'(\sigma_j)}{\rho_j}\right),\dots\right]\in V.
\]
\item The projection of $T$ onto the $\sigma_j$ coordinates has $G$ as its weakly $\mathcal{H}$-special closure.
\end{itemize}
We can then parametrise $T$ in terms of $y=\operatorname{Im} \sigma_1$. Exactly as before, we then rewrite the polynomial relation as an algebraic function $s$, yielding
\[s(y,\tilde\pi(\sigma_1,\dots,\sigma_n))=0\]
whenever $(\dots,\sigma_j,\rho_j,\dots)\in T$ and $y=\operatorname{Im} \sigma_1$. By the same analysis as earlier, we end up with a constant $a$ such that
\[s(a,\tilde\pi(\sigma_1,\dots,\sigma_n))=0\]
for all $(\sigma_1,\dots,\sigma_n)\in G$. This in turn yields
\[\left[\dots,j(\sigma_j),\dfrac{j'(\sigma_j)}{\rho_j(a)},p_{c_j}\left(j(\sigma_j),\chi(\sigma_j)-\dfrac{3f(\sigma_j)}{\pi a_j},\dfrac{j'(\sigma_j)}{\rho_j(a)}\right),\dots\right]\in V\]
for all $(\sigma_1,\dots,\sigma_n)\in G$. Since $\rho_j(a)$ and $a_j$ are constants, we can then conclude exactly as we did for (\ref{eqn:ConstantCase}).
\end{proof}
The above theorem will be used in our point-counting arguments in the next section, in order to take a real algebraic arc and produce from it a weakly $\mathcal{H}$-special variety with certain adjacency properties. We will also need the following corollary, which we will eventually use to ensure that there are only finitely many basic linear varieties adjacent to our fixed variety $V$.
\begin{cor}\label{cor:BasicsAreSpecial}
Let $V\subseteq\mathbb{C}^{3k}$ be a variety and let $B$ be a basic linear variety adjacent to $V$. Suppose $B$ is maximal with this property. Then $B$ is $\mathcal{H}$-special, i.e. all of the $g_{i,j}$ defining $B$ lie in $\operatorname{GL}_2^+(\mathbb{Q})$.
\begin{proof}
Immediate from \ref{propn:AxLindOverQ}.
\end{proof}
\end{cor}
\section{Point-Counting}
In this section, we will discuss the necessary o-minimality and point-counting considerations. For readers familiar with the area, the results here should fit well with expectations, though they are necessarily rather less neat than their equivalents in more classical settings. We are assuming some basic familiarity with o-minimality and the Pila-Wilkie theorems; see \cite{Pila2006}, \cite{Dries1998} and \cite{Dries1994}.
The first crucial fact we need is that all of the relevant functions are definable in an appropriate sense. Namely, the restrictions of $j$, $j'$, $j''$, $\chi^*$, $\chi$ and $f$ to any standard fundamental domain for the action of $\operatorname{SL}_2(\mathbb{Z})$ on $\mathcal{H}$ are definable in (the o-minimal structure) $\mathbb{R}_\text{an,exp}$. We will use the ``most standard'' fundamental domain
\[\mathbb{F}=\{\tau\in\mathcal{H}:-1/2\leq\operatorname{Re}\tau\leq 1/2, |\tau|\geq 1\}.\]
The definability can be seen in various ways; either as a consequence of the theory of elliptic curves and a result of Peterzil and Starchenko on definability of the Weierstrass $\wp$-function \cite{Peterzil2004}, or via $q$-expansions. Given this fact, the idea is a fairly standard one:
First assume that a variety contains an infinite set of special points. Then the preimage of that variety (under $J$ for instance), intersected with $\mathbb{F}^n$, is a definable set in $\mathbb{R}_\text{an,exp}$. By taking Galois conjugates, one can force this definable set to contain many quadratic points of bounded height, in the sense of the Pila-Wilkie theorem. So it will contain a real algebraic arc, allowing us to apply the Ax-Lindemann results of the previous section.
The missing ingredient so far is the Galois aspect. Fortunately, much of this is already done for us; the points about which we need Galois information are just the $j$-special and $\chi^*$-special points. For control over these, we have the following.
\begin{propn}\label{propn:GaloisOrbits}
Let $\tau\in\mathcal{H}$ be a quadratic point and consider the algebraic numbers $j(\tau)$ and $\chi^*(\tau)$. Let $\sigma$ be a Galois conjugation acting on $\mathbb{Q}(j(\tau))\supseteq\mathbb{Q}(\chi^*(\tau))$. Let $\tau'$ be a quadratic point such that $j(\tau')=\sigma(j(\tau))$. Then $\chi^*(\tau')=\sigma(\chi^*(\tau))$.
\end{propn}
For a proof, see \cite[Proposition 5.2]{Spence2016}. This tells us, essentially, that to keep track of the Galois conjugates of $\chi^*(\tau)$, we need only keep track of the Galois conjugates of $j(\tau)$. We already have sufficiently good control of the Galois conjugates of $j(\tau)$; it is a consequence of the Siegel bound for class numbers of quadratic fields \cite{Siegel1935} that
\begin{equation}\label{eqn:SiegelBound}[\mathbb{Q}(j(\tau)):\mathbb{Q}]\gg D^{\frac{1}{4}},\end{equation}
where $D$ is the discriminant of $\tau$. (In fact we can do much better, but this is sufficient for our purposes.) Hence in particular, there will be $\gg D^{\frac{1}{4}}$ Galois conjugates of a point $(j(\tau),\chi^*(\tau))$, over any fixed number field. This fact is central to our main ``point-counting theorem''. In this theorem we use the assumption of $j'$-genericity for the first, and only, time in the paper.
\begin{thrm}\label{thrm:MainPointCountingThrm}
Let $V\subseteq\mathbb{C}^{3n}$ be an algebraic variety defined over $\overline{\mathbb{Q}}$. Then there is a number $D=D(V)$ with the following property. Let $\tau\in J^{-1}(V)$ be a $j'$-generic quadratic point with discriminant greater than $D$, and suppose none of the coordinates of $\tau$ lies in $\operatorname{SL}_2(\mathbb{Z})\cdot\{i,\rho\}$. Then there is an $\mathcal{H}$-special variety $G$ with
\[\tau\in G \hookrightarrow V.\]
\begin{proof}
Let $K$ be a number field containing a field of definition for $V$.
Suppose we have a partition of $\{1,\dots, n\}$,
\[S_1\cup\dots\cup S_k,\]
with each $S_i\ne\emptyset$. For each $i$, let $s_i=\min S_i$ and $r_i=\#S_i-1$. Given
\[\sigma=(\sigma_1,\dots,\sigma_k)\in\mathcal{H}^k\]
and
\[g=(g_{1,1},\dots,g_{1,r_1},\dots,g_{k,1},\dots,g_{k,r_k})\in\operatorname{GL}_2^+(\mathbb{R})^{n-k},\]
define the following set:
\begin{multline*}
Z_{\sigma,g}=\Bigg\{(\tau,h)\in\mathcal{H}^k\times\operatorname{GL}_2^+(\mathbb{R})^{n-k}: \det h_{i,j}=\det g_{i,j},\\
\bigg[\dots, j(\tau_i),j'(\tau_i),p_{\operatorname{Im}\sigma_i}(j(\tau_i),\chi^*(\tau_i),j'(\tau_i)),\dots\\
\dots, j(h_{i,j}\tau_i),j'(h_{i,j}\tau_i)\dfrac{m_{g_{i,j}}(\sigma_i)}{m_{h_{i,j}}(\tau_i)},p_{\operatorname{Im} g_{i,j}\sigma_i}\left(j(h_{i,j}\tau_i),\chi^*(h_{i,j}\tau_i),j'(h_{i,j}\tau_i)\dfrac{m_{g_{i,j}}(\sigma_i)}{m_{h_{i,j}}(\tau_i)}\right),\dots\bigg]\in V\Bigg\}
\end{multline*}
Consider this as a family of sets, fibred over $\mathcal{H}^k\times\operatorname{GL}_2^+(\mathbb{R})^{n-k}$. There is one such family for each of the finitely many partitions of $\{1,\dots, n\}$, and we consider them all together. They are certainly \emph{not} definable families. However, for a given partition, the family
\[\mathcal{Z}_{\sigma,g}=\{(\tau,h)\in Z_{\sigma,g}:\tau_i, h_{i,j}\tau_i\in \mathbb{F}, i\leq k, j\leq r_i\}\]
is definable in $\mathbb{R}_\text{an,exp}$.\\
Now let us consider a $j'$-generic special point $\tau\in J^{-1}(V)$, of large discriminant $D(\tau)$. Up to permutation of coordinates, $\tau$ looks like
\[\tau=(\sigma_1,g_{1,1}\sigma_1,\dots,g_{1,r_1}\sigma_1,\dots,\sigma_k,g_{k,1}\sigma_k,\dots,g_{k,r_k}\sigma_k),\]
with the $\sigma_i$ lying in distinct $\operatorname{GL}_2^+(\mathbb{Q})$-orbits, and $g_{i,j}\in\operatorname{GL}_2^+(\mathbb{Q})$. So $\tau$ corresponds to a partition of $\{1,\dots, n\}$ in the obvious way. Writing $g_{i,j}$ as primitive integer matrices, let $N_{i,j}=\det g_{i,j}$. Recall that the $j'$-genericity of $\tau$ means that
\[j'(\sigma_1),\dots,j'(\sigma_k)\]
are algebraically independent over $\overline{\mathbb{Q}}$. So we see that the $\overline{\mathbb{Q}}$-Zariski closure $\JClose{\tau}$ of $J(\tau)$ is the set of points of the form
\begin{multline*}\Biggl[\dots,j(\sigma_i),w_i,p_{\operatorname{Im}\sigma_i}(j(\sigma_i),\chi^*(\sigma_i),w_i),\dots,j(g_{i,j}\sigma_i),-w_i\lambda_{N_{i,j}}\bigl(j(\sigma_i),j(g_{i,j}\sigma_i)\bigr)m_{g_{i,j}}(\sigma_i),\\p_{\operatorname{Im} g_{i,j}\sigma_i}\biggl(j(g_{i,j}\sigma_i),\chi^*(g_{i,j}\sigma_i),-w_i\lambda_{N_{i,j}}\bigl(j(\sigma_i),j(g_{i,j}\sigma_i)\bigr)m_{g_{i,j}}(\sigma_i)\biggr),\dots\Biggr],\end{multline*}
for some $w_1,\dots,w_k\in\mathbb{C}$.
We will show that the existence of this $\tau$ implies that $\mathcal{Z}_{\sigma,g}$ contains $\gg D(\tau)^{\frac{1}{4}}$ quadratic points of bounded height. As is typical in the Pila-Zannier strategy, these new points will arise from Galois conjugates of $j(\tau)$. To begin, we need to define a variety which keeps track of $\tau$ and its Galois conjugates. Said variety will be a subvariety of $\mathbb{C}^{2n}$; we will write a general element of $\mathbb{C}^{2n}$ as
\[(\dots,X_i,Y_i,\dots,X_{i,j},Y_{i,j},\dots),\]
with $i\leq k$ and $j\leq r_i$, matching the structure of the underlying partition of $\{1,\dots,n\}$.
Let
\begin{multline*}V_{\sigma,g}=\Biggl\{(\mathbf{X},\mathbf{Y})\in\mathbb{C}^{2n}:\forall w_1,\dots,w_k\in\mathbb{C},\\ \Bigl[\dots,X_i,w_i,p_{\operatorname{Im}\sigma_i}(X_i,Y_i,w_i),\dots,X_{i,j},-w_i\lambda_{N_{i,j}}(X_i,X_{i,j})m_{g_{i,j}}(\sigma_i),\\p_{\operatorname{Im} g_{i,j}\sigma_i}\bigl(X_{i,j},Y_{i,j},-w_i\lambda_{N_{i,j}}(X_i,X_{i,j})m_{g_{i,j}}(\sigma_i)\bigr),\dots\Bigr]\in V\Biggr\}.\end{multline*}
Then $V_{\sigma,g}$ is a subvariety of $\mathbb{C}^{2n}$, defined over $K(\sigma,\operatorname{Im}\sigma)$. This definition is set up to mirror the shape of $\JClose{\tau}$. Thus, since $\JClose{\tau}\subseteq V$, we see that $V_{\sigma,g}$ must contain the point
\[(j,\chi^*)(\tau).\]
Hence $V_{\sigma,g}$ also contains every Galois conjugate (over $K(\sigma,\operatorname{Im}\sigma)$) of $(j,\chi^*)(\tau)$. By Proposition \ref{propn:GaloisOrbits}, such a Galois conjugate must take the form $(j,\chi^*)(\tau')$, for some quadratic $\tau'$ with $D(\tau')=D(\tau)$. Moreover, by the existence of the modular polynomial $\Phi_N$, $\tau'$ must have the same $\operatorname{GL}_2^+(\mathbb{Q})$ structure as $\tau$. That is:
\[\tau'=(\sigma_1',g_{1,1}'\sigma_1',\dots,g_{1,r_1}'\sigma_1',\dots,\sigma_k',g_{k,1}'\sigma_k',\dots,g_{k,r_k}'\sigma_k'),\]
where the $\sigma_i'$ are quadratic points and $g_{i,j}'= g_{i,j}\gamma_{i,j}$, for some $\gamma_{i,j}\in\operatorname{SL}_2(\mathbb{Z})$. Further, by the modularity of $j$ and $\chi^*$, we can ensure that $\sigma_i'\in\mathbb{F}$.
For each $\tau'$ arising this way, let us take $w_i=j'(\sigma_i')$ in the definition of $V_{\sigma,g}$. Noting that
\[-j'(\sigma_i')\lambda_{N_{i,j}}(j(\sigma_i'),j(g_{i,j}'\sigma_i'))m_{g_{i,j}}(\sigma_i)=j'(g_{i,j}'\sigma_i')\dfrac{m_{g_{i,j}}(\sigma_i)}{m_{g_{i,j}'}(\sigma_i')},\]
we see that $\left(\sigma',g'\right)\in Z_{\sigma,g}$. Further, there is $\gamma_{i,j}'\in\operatorname{SL}_2(\mathbb{Z})$ such that $\gamma_{i,j}'g_{i,j}'\sigma_i'\in\mathbb{F}$. This yields $(\sigma',\gamma'g')\in\mathcal{Z}_{\sigma,g}$.
By (\ref{eqn:SiegelBound}), there are $\gg D(\tau)^{1/4}$ Galois conjugates of $(j,\chi^*)(\tau)$ over $\mathbb{Q}$. Since $K(\sigma,\operatorname{Im}\sigma)$ is an extension of $K$ of degree at most $4^n$, we have
\[[K(\sigma,\operatorname{Im}\sigma)(j(\tau)):K(\sigma,\operatorname{Im}\sigma)]\geq[\mathbb{Q}(j(\tau)):\mathbb{Q}]/c,\]
where $c$ is a constant depending only on $K$ and $n$. Hence there are $\gg D(\tau)^{\frac{1}{4}}$ points $(\sigma',\gamma'g')$ lying in $\mathcal{Z}_{\sigma,g}$.
Moreover, it is a consequence of Proposition 5.2 in \cite{Pila2011} that the corresponding $\gamma_{i,j},\gamma_{i,j}'$ can be chosen to have height polynomial in $D$, whence $(\sigma',\gamma' g')$ has height polynomial in $D$. So the existence of $\tau\in J^{-1}(V)$, with discriminant $D(\tau)$, ensures that $\mathcal{Z}_{\sigma,g}$ contains $\gg D(\tau)^{1/4}$ points of bounded height (and degree at most 2).
At this point, we can apply the uniform Pila-Wilkie Theorem. Playing the upper bound from uniform Pila-Wilkie against the lower bound found above, we find a number $D$ such that whenever $\tau\in J^{-1}(V)$ has discriminant greater than $D$, the corresponding $\mathcal{Z}_{\sigma,g}$ contains an arc $T$ of a real algebraic curve. Further, we can ensure that $T$ contains the $(\sigma',\gamma'g')$ corresponding to one of the $\tau'$ arising from the Galois conjugates of $(j,\chi^*)(\tau)$.
We would like to apply Theorem \ref{propn:AxLindOverQ} to some algebraic arc constructed from $T$. Indeed, let
\[S=\left\{\left(\dots,\tau_i,\begin{pmatrix}1&0\\0&1\end{pmatrix},\dots,\tau_i,\overline{h_{i,j}},\dots\right):(\tau,h)\in T\right\},\]
where $\overline{h}$ is the element of $\operatorname{SL}_2(\mathbb{R})$ corresponding to the image of $h$ as an element of $\operatorname{PGL}_2(\mathbb{R})$. Also let
\[\widehat{S}=\{(\dots,\tau_i,\dots,h_{i,j}\tau_i,\dots):(\tau,h)\in T\}.\]
Before we can apply Theorem \ref{propn:AxLindOverQ}, it only remains to check that $\widehat{S}$ is indeed an arc, rather than just a point. This is easy to see; if $\tau_i$ and $h_{i,j}\tau_i$ are all constant on $\widehat{S}$, then $h$ must be constant, up to determinant, on $T$. Since the determinant of $h_{i,j}$ is fixed in the definition of $Z_{\sigma,g}$, it follows that $\tau_i$ and $h_{i,j}$ are both constant on $T$, whence $T$ itself is just a point, which is a contradiction.
So \ref{propn:AxLindOverQ} yields an $\mathcal{H}$-special set $H$ with
\[\widehat{S}\subseteq H \hookrightarrow V.\]
Since $(\sigma',\gamma'g')\in T$, we have $\gamma\tau'\in\widehat{S}$ for some $\gamma\in\operatorname{SL}_2(\mathbb{Z})^n$. Hence, some $\operatorname{SL}_2(\mathbb{Z})$-translate $H'$ of $H$ contains $\tau'$, and remains adjacent to $V$. Suppose that
\[H'=B\times \{\tau_{k+1}',\dots,\tau_n'\}\]
for some basic $\mathcal{H}$-special variety $B$. Since $(j,\chi^*)(\tau')$ was a Galois conjugate of $(j,\chi^*)(\tau)$, we can now apply the inverse Galois conjugation to see that
\[B\times \{\tau_{k+1},\dots,\tau_n\}\hookrightarrow V.\]
For suitable $\gamma$, the $\mathcal{H}$-special variety
\[G=\gamma B\times \{\tau_{k+1},\dots,\tau_n\}\]
will contain $\tau$ and is still adjacent to $V$.
\end{proof}
\end{thrm}
The above result is one of two crucial pieces of ``counting'' we need in order to prove \ref{thrm:PartialAOwDerivs}. The other half is the following proposition. The idea is that \ref{thrm:MainPointCountingThrm} will be used to count the number of isolated points that can arise in $J^{-1}(V)$ (the ``zero-dimensional pieces''), and this next proposition will count the ``positive-dimensional pieces'', namely the basic $\mathcal{H}$-special varieties. Together, \ref{thrm:MainPointCountingThrm} and \ref{propn:BasicSpecialCount} will act as the engine driving the inductive argument at the heart of the proof of \ref{thrm:PartialAOwDerivs}.
\begin{propn}\label{propn:BasicSpecialCount}
Let $V\subseteq\mathbb{C}^{3n}$ be a variety. Consider the definable subset consisting of those \emph{proper} basic linear varieties $B\subseteq \mathcal{H}^k$ (for any $k$) such that:
\begin{itemize}
\item $B$ meets $\mathbb{F}^k$ in its full dimension.
\item $B$ is adjacent to $V$.
\item $B$ is maximal with the above properties.
\end{itemize}
Then there are only finitely many such $B$, and each is $\mathcal{H}$-special.
\begin{proof}
By Corollary \ref{cor:BasicsAreSpecial}, the only basic linear varieties $B$ which are maximally adjacent to $V$ are necessarily $\mathcal{H}$-special. For such $B$, all the defining $g_{i,j}$ are in $\operatorname{GL}_2^+(\mathbb{Q})$, so the collection of such $B$ is parametrised by a countable set.
The conditions specified are definable (compare with, for instance, \cite[Proposition 10.2]{Pila2011}), so we have a countable definable set, which is therefore finite.
\end{proof}
\end{propn}
\section{Bringing the Proof Together}
As we bring everything together to prove our central theorem, let us recall the statement.
\begin{thrmREP}
Let $V\subseteq\mathbb{C}^{3n}$ be a proper algebraic variety defined over $\overline{\mathbb{Q}}$. There exists an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$, consisting of \emph{proper} $\mathcal{H}$-special varieties of $\mathcal{H}^n$, with the following property. Every $j'$-generic $\mathcal{H}$-special point in $J^{-1}(V)$ is contained in some $G\in\sigma(V)$.
\begin{proof}
First, let us note: we can safely ignore any $\tau\in J^{-1}(V)$ which have a coordinate lying in $\operatorname{SL}_2(\mathbb{Z})\cdot\{i,\rho\}$, since these all clearly lie in $\operatorname{SL}_2(\mathbb{Z})$-finitely many proper $\mathcal{H}$-special subvarieties.\\
We work by induction on $n$. For $n=1$, we argue as follows.
Suppose that $V\subseteq \mathbb{C}^3$ is an algebraic variety defined over $\overline{\mathbb{Q}}$ and let $D=D(V)$ be the number given to us by \ref{thrm:MainPointCountingThrm}. If $J^{-1}(V)$ contains $\operatorname{SL}_2(\mathbb{Z})$-infinitely many quadratic points, then in particular it contains one of discriminant greater than $D$. Theorem \ref{thrm:MainPointCountingThrm} then tells us that $\mathcal{H}$ is adjacent to $V$. This implies that
\[\forall\tau\in\mathcal{H}, (j(\tau),j'(\tau)z,p_c(j(\tau),\chi^*(\tau),j'(\tau)z))\in V\]
for some $c>0$ and $z\in\mathbb{C}$. Taking Zariski closures (over $\mathbb{C}$) this says that
\[\forall w\in\mathbb{C},\tau\in\mathcal{H}, (j(\tau),w,p_c(j(\tau),\chi^*(\tau),w))\in V.\]
In particular, for any $\tau$ with $\operatorname{Im}\tau=c$, we have
\[(j(\tau),j'(\tau),p_c(j(\tau),\chi^*(\tau),j'(\tau)))\in V.\]
For such $\tau$, $p_c(j(\tau),\chi^*(\tau),j'(\tau))=j''(\tau)$ by definition. Hence
\[J(\tau)\in V\]
for all $\tau$ with $\operatorname{Im}\tau=c$. By analytic continuation, this says that $J(\mathcal{H})\subseteq V$, whence $V=\mathbb{C}^3$. This contradicts the assumption that $V$ is proper, so $J^{-1}(V)$ contains only $\operatorname{SL}_2(\mathbb{Z})$-finitely many quadratic points, which settles the case $n=1$.\\
So by induction we may assume the result holds for all $V\subseteq\mathbb{C}^{3k}$, $k<n$.
The first stage is to construct a variety $V^*$ which is designed to account for all possible positive-dimensional special subvarieties of $V$.
Let $\mathcal{G}$ be the finite collection of proper basic $\mathcal{H}$-special subvarieties (of some $\mathcal{H}^k$) afforded by applying Proposition \ref{propn:BasicSpecialCount} to $V$. Then let
\[\mathcal{G}_1^*=\Bigl\{\omega\bigl(\gamma\cdot(B\times\mathcal{H}^{n-k})\bigr):B\in \mathcal{G}, B\subseteq\mathcal{H}^k, \gamma\in\operatorname{SL}_2(\mathbb{Z})^n,\omega\text{ a permutation of the coordinates}\Bigr\}.\]
Since $\mathcal{G}$ was finite, $\mathcal{G}_1^*$ is $\operatorname{SL}_2(\mathbb{Z})$-finite.
Next, consider the variety $V_k\subseteq\mathbb{C}^{3(n-k)}$, defined over $\overline{\mathbb{Q}}$ by
\begin{multline*}V_k=\{\mathbf{X}\in\mathbb{C}^{3(n-k)}:\text{The translate of }\mathbb{C}^{3k}\text{ by }\mathbf{X}\\\text{ (for some choice of ordering of coordinates) is contained in }V\}.\end{multline*}
Clearly $V_k$ is a proper subvariety of $\mathbb{C}^{3(n-k)}$. By our inductive assumption, there is some $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\mathcal{F}_k$ of proper $\mathcal{H}$-special subvarieties of $\mathcal{H}^{n-k}$ such that every $j'$-generic $\mathcal{H}$-special point in $J^{-1}(V_k)$ is contained in some $F\in\mathcal{F}_k$. Let
\[\mathcal{G}_2^*=\{\omega(\mathcal{H}^k\times F):F\in\mathcal{F}_k,1\leq k< n,\omega\text{ a permutation of coordinates}\}.\]
Then $\mathcal{G}_2^*$ is $\operatorname{SL}_2(\mathbb{Z})$-finite.
Let
\[\mathcal{G}^*=\mathcal{G}_1^*\cup\mathcal{G}_2^*,\]
and let
\[V^*=\JClose{\bigcup \mathcal{G}^*}.\]
Since $\mathcal{G}^*$ consists of $\operatorname{SL}_2(\mathbb{Z})$-finitely many proper $\mathcal{H}$-special subvarieties, $V^*$ is a proper subvariety of $\mathbb{C}^{3n}$.
Suppose now that $J^{-1}(V\setminus V^*)$ contains $\operatorname{SL}_2(\mathbb{Z})$-infinitely many $j'$-generic quadratic points. In particular, there is some $j'$-generic quadratic $\tau\in J^{-1}(V\setminus V^*)$ with $D(\tau)>D=D(V)$. By \ref{thrm:MainPointCountingThrm}, there is some $\mathcal{H}$-special set $H$ with
\[\tau\in H \hookrightarrow V.\]
Now, $H$ is a translate of some basic $\mathcal{H}$-special variety $B\subseteq\mathcal{H}^k$. If $B$ is a proper subvariety of $\mathcal{H}^k$, then $H$ should have been accounted for by $\mathcal{G}_1^*$, whence $J(H)\subseteq V^*$, which contradicts $J(\tau)\notin V^*$. So we must have $B=\mathcal{H}^k$.
So, up to permutation of coordinates, we have
\[H=\mathcal{H}^k\times\{\tau_{k+1},\dots,\tau_n\}.\]
As in the case $n=1$ above, the fact that $H \hookrightarrow V$ (via some data including some positive real numbers $c_i$) tells us that
\[J(\sigma_1,\dots,\sigma_k,\tau_{k+1},\dots,\tau_n)\in V\]
whenever $\operatorname{Im}\sigma_i=c_i$. By analytic continuation we then see that
\[J(\mathcal{H}^k\times\{\tau_{k+1},\dots,\tau_n\})\subseteq V,\]
whence
\[\mathbb{C}^{3k}\times\{J(\tau_{k+1},\dots,\tau_n)\}\subseteq V.\]
So $(\tau_{k+1},\dots,\tau_n)$ is an $\mathcal{H}$-special point of $J^{-1}(V_k)$. Moreover, it is $j'$-generic since $\tau$ was. Hence $(\tau_{k+1},\dots,\tau_n)$ should have been accounted for by $\mathcal{G}_2^*$, whence $J(\tau)\in V^*$, which is a contradiction.
So $J^{-1}(V\setminus V^*)$ can only contain $\operatorname{SL}_2(\mathbb{Z})$-finitely many $j'$-generic quadratic points, whence the $j'$-generic quadratic points in $J^{-1}(V)$ are accounted for by the $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\mathcal{G}^*$, together with $\operatorname{SL}_2(\mathbb{Z})$-finitely many additional points.
\end{proof}
\end{thrmREP}
In the next section, we will state a more precise version of Conjecture \ref{conj:PartialAOwDerivs}, and prove it under the assumption of \ref{conj:AlgIndep}. Before we can do so, we will take the time now to note the following. Proposition \ref{propn:BasicSpecialCount} clearly holds uniformly, and by using the uniform Pila-Wilkie Theorem, we can also get a uniform version of Theorem \ref{thrm:MainPointCountingThrm}. Using this uniformity, it is easy to get the following uniform version of \ref{thrm:PartialAOwDerivs}.
\begin{thrm}\label{thrm:UniformPartialAO}
Let $V\subseteq\mathbb{C}^{3n+k}$ be an algebraic variety defined over $\overline{\mathbb{Q}}$, considered as an algebraic family of varieties, \[V_{\mathbf{a}}\subseteq\mathbb{C}^{3n}, \quad\mathbf{a}\in\mathbb{C}^k.\]
For each positive integer $r$, there is an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma_r(V)$, consisting of proper $\mathcal{H}$-special subvarieties of $\mathcal{H}^n$, with the following property.
Whenever $\mathbf{a}\in\overline{\mathbb{Q}}^k$ satisfies $\max [\mathbb{Q}(a_i):\mathbb{Q}]\leq r$ and $V_{\mathbf{a}}$ is a \emph{proper} subvariety of $\mathbb{C}^{3n}$, every $j'$-generic quadratic point in $J^{-1}(V_{\mathbf{a}})$ is contained in some $G\in\sigma_r(V)$.
\end{thrm}
\section{A More Precise Statement}
Readers familiar with the normal shape of Andr\'e-Oort statements may have noticed that \ref{conj:PartialAOwDerivs} is rather weaker than one might expect. Even taking into account the action of $\operatorname{SL}_2(\mathbb{Z})$, a more natural analogue of the classical case might look like the following.
\begin{statement}\label{conj:FalseAO}
Let $V\subseteq\mathbb{C}^{3n}$ be an algebraic variety defined over $\overline{\mathbb{Q}}$. Then the collection of maximal $J$-special subvarieties of $V$ is $\operatorname{SL}_2(\mathbb{Z})$-finite.
\end{statement}
This turns out to be false. The reason for its failure is fairly simple; the modular relations that relate $j'(g\tau)$ (for some $g\in\operatorname{GL}_2^+(\mathbb{Q})$) with $j'(\tau)$ include not just $j(\tau)$, $j'(\tau)$, $j(g\tau)$ and $j'(g\tau)$, but also include instances of $m_g(\tau)=(c\tau+d)^2/N$. With the right polynomial, one can therefore enforce arbitrary relations between $m_g(\tau)$ and other $m_h(\sigma)$ arising in other coordinates.
Similarly, the modular relation for $j''$ introduces new variables to the equation. Indeed, by differentiating the modular polynomials one can find a rational function $\mu_N$ (in 7 variables) such that
\[j''(g\tau)=\mu_N(j(\tau),j(g\tau),j'(\tau),j'(g\tau),j''(\tau),c,(c\tau+d)),\]
where $g=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ is a primitive integer matrix of determinant $N$. Moreover, $\mu_N$ is linear in the $c$-coordinate. Hence we are able to enforce relations on the $c$ that can arise, as well as the $m_g(\tau)$.
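In the simplest case $N=1$, $g=\gamma\in\operatorname{SL}_2(\mathbb{Z})$, these relations can be written out explicitly (this is just the chain-rule computation behind the second example below): differentiating the invariance $j(\gamma\tau)=j(\tau)$ gives
\[j'(\gamma\tau)=(c\tau+d)^2 j'(\tau),\]
and differentiating once more gives
\[j''(\gamma\tau)=2c(c\tau+d)^3 j'(\tau)+(c\tau+d)^4 j''(\tau).\]
In particular both $(c\tau+d)^2$ and $c$ are determined by the values of $j'$ and $j''$ at $\tau$ and $\gamma\tau$, which is exactly what the second example exploits.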
\begin{eg}Let us see some examples to illustrate this issue.
\begin{enumerate}
\item Let $W\subseteq\mathbb{C}^2$ be an algebraic variety defined over $\mathbb{Q}$. Suppose that $W$ has at least one solution $(x,y)$ where $x$ and $y$ are both squares of quadratic points in $\mathcal{H}$. Fix two positive integers $M$ and $N$.
Writing a general element of $\mathbb{C}^{12}$ as $(X_1,Y_1,Z_1,\dots,X_4,Y_4,Z_4)$, consider the variety $V\subseteq\mathbb{C}^{12}$ defined (over $\overline{\mathbb{Q}}$) by
\[\Phi_M(X_1,X_2)=0, \quad \Phi_N(X_3,X_4)=0,\]
\[\left(\dfrac{-Y_2}{Y_1\lambda_M(X_1,X_2)},\dfrac{-Y_4}{Y_3\lambda_N(X_3,X_4)}\right)\in W.\]
Then the special points of $J^{-1}(V)$ are precisely the points $(\tau,g\tau,\sigma,h\sigma)$, where $g$ and $h$ are (arbitrary) primitive integer matrices with determinant $M$ and $N$ respectively, and $\tau$, $\sigma$ are quadratic points satisfying
\[(m_g(\tau),m_h(\sigma))\in W.\]
Since $W$ has at least one solution whose coordinates are squares of quadratic points, we can certainly find $\tau$ and $\sigma$ to solve this equation. Indeed, by modifying $\tau$ and $\sigma$ we can solve this equation for any $g$ and $h$ of the right determinant. The resulting collection of special points is certainly $\operatorname{SL}_2(\mathbb{Z})$-infinite, but no positive-dimensional $\mathcal{H}$-special variety is contained in $J^{-1}(V)$.
This example therefore serves as a counterexample to the hypothetical statement \ref{conj:FalseAO}. Since the points all lie within the $\operatorname{SL}_2(\mathbb{Z})$-translates of the $\mathcal{H}$-special set
\[\{(\tau_1,g\tau_1,\tau_2,h\tau_2):\tau_i\in\mathcal{H}\},\]
our main theorem \ref{thrm:PartialAOwDerivs} is still fine! Let us see one more example.
\item Fix a quadratic point $\sigma\in\mathcal{H}$ and $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\operatorname{SL}_2(\mathbb{Z})$. Then
\[(c\sigma+d)^2=m_\gamma(\sigma)=\dfrac{j'(\gamma\sigma)}{j'(\sigma)}\]
and
\[c=\dfrac{j''(\gamma\sigma)-j''(\sigma)(c\sigma+d)^4}{2(c\sigma+d)^3j'(\sigma)},\]
whence
\begin{align*}c^2&=\dfrac{\big(j''(\gamma\sigma)-j''(\sigma)(c\sigma+d)^4\big)^2}{4(c\sigma+d)^6j'(\sigma)^2}\\
&=\dfrac{\left(j''(\gamma\sigma)-j''(\sigma)\left(\dfrac{j'(\gamma\sigma)}{j'(\sigma)}\right)^2\right)^2}{4\left(\dfrac{j'(\gamma\sigma)}{j'(\sigma)}\right)^3j'(\sigma)^2}.\end{align*}
So for the appropriate rational function $q$ we have
\[c^2=q(j'(\sigma),j'(\gamma\sigma), j''(\sigma),j''(\gamma\sigma)).\]
Given a variety $W\subseteq\mathbb{C}^2$, defined over $\overline{\mathbb{Q}}$, we can then define $V\subseteq\mathbb{C}^9$ by
\[\Phi_N(X_1,X_2)=0, \quad X_3=j(\sigma),\]
\[\forall w\in\mathbb{C}, \left(\dfrac{-Y_2}{Y_1\lambda_N(X_1,X_2)},q\big[w,Y_3, p_{\operatorname{Im}\sigma}(j(\sigma),\chi^*(\sigma),w),Z_3\big]\right)\in W.\]
Then the $\mathcal{H}$-special points of $J^{-1}(V)$ are exactly those points
\[(\tau,g\tau,\gamma\sigma),\]
where $g=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ is any primitive integer matrix of determinant $N$, $\gamma=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in\operatorname{SL}_2(\mathbb{Z})$ and
\[((c\tau+d)^2,C^2)\in W.\]
Once again, if $W$ is suitable then this is an $\operatorname{SL}_2(\mathbb{Z})$-infinite collection, but no positive dimensional $\mathcal{H}$-special set lies in $J^{-1}(V)$.
\end{enumerate}
\end{eg}
With variants of the examples above, one can produce varieties whose special points satisfy almost any arbitrary relation, provided the relation is written in terms of variables $c$, $(c\tau+d)$ corresponding to matrices $g\in\operatorname{GL}_2^+(\mathbb{Q})$, and $C,D$ corresponding to some $\operatorname{SL}_2(\mathbb{Z})$-translate of a fixed $\sigma$.
The idea of our stronger result is that these relations should be the only obstruction to a result like \ref{conj:FalseAO}. In order to state this precisely, we will need to go through some technicalities.\\
Given a proper $\mathcal{H}$-special set $G\subseteq\mathcal{H}^n$, there is an underlying partition of $\{1,\dots,n\}$ which can be written as
\[S_0\cup S_1\cup\dots\cup S_h\cup T_1\cup\dots\cup T_k,\]
with only $S_0$ allowed to be empty and only the $T_i$ allowed to be singletons. (The condition that $G$ is proper is equivalent to requiring that $k < n$.) For $i>0$, let $r_i=\# S_i-1$ and let $s_i$ be the smallest element of $S_i$. Also associated to $G$ are some matrices $g_{i,1},\dots,g_{i,r_i}\in\operatorname{GL}_2^+(\mathbb{Q})$, so that each coordinate in $S_i$ (except the $s_i$ coordinate) is defined by $\tau=g_{i,j}\tau_{s_i}$.
Given such a $G$ and given a tuple of matrices
\begin{equation}\label{eqn:SLZComp}\gamma=(\gamma_1,\dots,\gamma_{\#S_0},\gamma_{1,1},\dots,\gamma_{1,r_1},\dots,\gamma_{h,1},\dots,\gamma_{h,r_h})\in\operatorname{SL}_2(\mathbb{Z})^{\#S_0+\sum r_i},\end{equation}
let $c_i,d_i$ be the bottom row of $\gamma_i$, and $c_{i,j},d_{i,j}$ be the bottom row of the matrix $\gamma_{i,j}g_{i,j}$.\\
This is all building towards the following definitions.
\begin{defn}
A variety $W\subseteq \mathbb{C}^{2\# S_0+2\sum r_i}$, defined over $\overline{\mathbb{Q}}$, is called a $G$-variety. For a $G$-variety $W$ and a given $\gamma\in\operatorname{SL}_2(\mathbb{Z})^{\#S_0+\sum r_i}$ (as above), we define $W^\gamma\subseteq\mathcal{H}^{\{s_1,\dots,s_h\}}$ to be the set of $(\tau_{s_1},\dots,\tau_{s_h})$ such that
\[(\dots,c_i,d_i,\dots,c_{i,j}, c_{i,j}\tau_{s_i}+d_{i,j},\dots)\in W.\]
\end{defn}
If $\gamma$ is an element of the full group $\operatorname{SL}_2(\mathbb{Z})^n$, then it consists of $\gamma'\in\operatorname{SL}_2(\mathbb{Z})^{\#S_0+\sum r_i}$, as above, together with some more matrices
\[\alpha_{s_1},\dots,\alpha_{s_h}\in\operatorname{SL}_2(\mathbb{Z}),\]
corresponding to the $s_i$-coordinates, and
\[\beta_1,\dots,\beta_k\in\operatorname{SL}_2(\mathbb{Z})\]
corresponding to the singleton coordinates in the $T_i$. For such a $\gamma\in\operatorname{SL}_2(\mathbb{Z})^n$, we will abuse notation and write
\[W^\gamma=(\alpha_{s_1},\dots,\alpha_{s_h})\cdot W^{\gamma'}.\]
(The $\beta_i$ have no meaningful effect.)
\begin{notn}
We will write
\[\operatorname{Sp}(\gamma,W)=\{(\tau_1,\dots,\tau_n)\in\mathcal{H}^n: (\tau_{s_1},\dots,\tau_{s_h})\in W^\gamma\text{ and every }\tau_{i}\text{ is quadratic}\}.\]
\end{notn}
In the case where $h=0$, so that we have no $\tau$-coordinates to work with, the variety $W^\gamma$ only enforces conditions on the $\gamma$ corresponding to the $S_0$-coordinates. In this case, we will use the convention that
\[\operatorname{Sp}(\gamma,W)=\mathcal{H}^n\]
if $(\dots,c_i,d_i,\dots)\in W$, and
\[\operatorname{Sp}(\gamma,W)=\emptyset\]
otherwise.
Before we can state our more precise version of \ref{conj:PartialAOwDerivs}, we need one more definition.
\begin{defn}
A pair $(G,W)$, with $G$ a proper $\mathcal{H}$-special set and $W$ a $G$-variety is said to be \emph{geodesically minimal} if
\[\bigcup_{\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}\operatorname{Sp}(\gamma,W)\]
is \emph{not} contained in any $\operatorname{SL}_2(\mathbb{Z})$-finite collection of proper $\mathcal{H}$-special varieties.\\
\end{defn}
\begin{thrm}[Precise Modular Andr\'e-Oort with Derivatives]\label{thrm:PrecAOwDerivs}
Assume Conjecture \ref{conj:AlgIndep}. Let $V\subseteq\mathbb{C}^{3n}$ be an algebraic variety defined over $\overline{\mathbb{Q}}$. Then there is a finite collection $\sigma(V)$ of $\mathcal{H}$-special subvarieties of $\mathcal{H}^n$, and for each $G\in\sigma(V)$ an associated $G$-variety $W_G$, with the following properties.
\begin{itemize}
\item For every $G\in\sigma(V)$, $(G,W_G)$ is geodesically minimal.
\item The set of quadratic points in $J^{-1}(V)$ is precisely
\[\bigcup_{\substack{G\in\sigma(V)\\\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}}\gamma\cdot G\cap \operatorname{Sp}(\gamma,W_G).\]
\end{itemize}
\end{thrm}
\begin{proof}
Under the assumption of Conjecture \ref{conj:AlgIndep}, Theorem \ref{thrm:PartialAOwDerivs} yields a finite collection $\sigma(V)$ of proper $\mathcal{H}$-special subvarieties, such that the special points of $J^{-1}(V)$ are contained in
\[\bigcup_{\substack{G\in\sigma(V)\\\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}}\gamma\cdot G.\]
Let us look first at a single $G\in\sigma(V)$, and associate some data to it, as in the definitions above. Associated to $G$ is a partition of $\{1,\dots,n\}$,
\[S_0\cup S_1\cup\dots\cup S_h\cup T_1\cup\dots\cup T_k,\]
with the $T_i$ singletons and $\#S_i>1$ for $i>0$. As above, we have some associated data
\[s_i=\min S_i,\qquad r_i=\# S_i-1 \text{ for }i>0,\]
\[\sigma_1,\dots,\sigma_{\#S_0}\in\mathcal{H}\text{ quadratic}\]
and
\[g_{i,j}\in\operatorname{GL}_2^+(\mathbb{Q})\text{, a primitive integer matrix with determinant }N_{i,j},\text{ for }1\leq i \leq h, 1\leq j\leq r_i.\]
For ease of notation, we will assume that the coordinates are ordered nicely, with the first few coordinates in $S_0$, the next few in $S_1$, and so on.
Recall: for each $N$, there is a rational function $\mu_N$ with the property that
\[j''(g\tau)=\mu_N(j(\tau),j(g\tau),j'(\tau),j'(g\tau),j''(\tau),c,(c\tau+d))\]
whenever $g=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ is a primitive integer matrix of determinant $N$. Moreover, it will be useful later to know that
\begin{multline}\label{eqn:muNCalc}\mu_N\left(j(\tau),j(g\tau),j'(\tau),j'(g\tau)\dfrac{(c_0\tau_0+d_0)^2}{(c\tau+d)^2},j''(\tau),c_0,(c_0\tau_0+d_0)\right)\\=j''(g\tau)\dfrac{(c_0\tau_0+d_0)^4}{(c\tau+d)^4}+j'(g\tau)\dfrac{2(c_0\tau_0+d_0)^3(c_0d-cd_0)}{(c\tau+d)^3}.\end{multline}
This follows by a straightforward, if tedious, calculation.
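For the reader's convenience, here is a sketch of the transformation laws underlying such calculations; they follow by differentiating the $\operatorname{SL}_2(\mathbb{Z})$-invariance $j(\gamma z)=j(z)$ once and twice, using $\frac{\mathrm{d}}{\mathrm{d}z}(\gamma z)=(cz+d)^{-2}$. For $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\operatorname{SL}_2(\mathbb{Z})$,
\[j'(\gamma z)=(cz+d)^{2}j'(z),\qquad j''(\gamma z)=(cz+d)^{4}j''(z)+2c(cz+d)^{3}j'(z).\]
These are the identities used, for instance, in the computation of $c$ and $c^2$ in the example above, and they reappear in the calculations involving the $h_{i,j}$ towards the end of this proof.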
Given $\gamma\in\operatorname{SL}_2(\mathbb{R})^{n}$, and a point $\tau=(\tau_1,\dots,\tau_{h})\in\mathcal{H}^{h}$, define a variety
\[V_{\gamma,\tau}\subseteq \mathbb{C}^{3(h+k)}\]
as follows. First, write $\gamma$ as in (\ref{eqn:SLZComp}). That is, $\gamma$ consists of
\[\gamma'=(\gamma_1,\dots,\gamma_{\#S_0},\gamma_{1,1},\dots,\gamma_{1,r_1},\dots,\gamma_{h,1},\dots,\gamma_{h,r_h})\in\operatorname{SL}_2(\mathbb{R})^{\# S_0+\sum r_i},\]
\[\alpha_{s_1},\dots,\alpha_{s_h}\in\operatorname{SL}_2(\mathbb{Z})\] corresponding to the $s_i$ coordinates, and
\[\beta_1,\dots,\beta_k\in\operatorname{SL}_2(\mathbb{Z})\]
corresponding to the singleton coordinates in the $T_i$. Let $c_i$, $d_i$ be the bottom row of $\gamma_i$ and $c_{i,j},d_{i,j}$ the bottom row of $\gamma_{i,j}g_{i,j}$.
Define $V_{\gamma,\tau}'$ by
\[(X_1,Y_1,Z_1,\dots,X_{h+k},Y_{h+k},Z_{h+k})\in V_{\gamma,\tau}'\]\[\Longleftrightarrow\]
\begin{multline*}\forall w_{i,j}\in\mathbb{C}\text{ with }\Phi_{N_{i,j}}(X_i,w_{i,j}), i\leq h,j\leq r_i,\\ \Bigl[\dots,j(\sigma_i),j'(\gamma_i\sigma_i),j''(\gamma_i\sigma_i),\dots,X_i,Y_i,Z_i,\dots,w_{i,j},-Y_i(c_{i,j}\tau_{s_i}+d_{i,j})^2\lambda_{N_{i,j}}(X_i,w_{i,j}),\\\mu_{N_{i,j}}\bigl(X_i,w_{i,j},Y_i,-Y_i(c_{i,j}\tau_{s_i}+d_{i,j})^2\lambda_{N_{i,j}}(X_i,w_{i,j}),Z_i,c_{i,j},c_{i,j}\tau_{s_i}+d_{i,j}\bigr),\dots\\\dots,X_{h+i},Y_{h+i},Z_{h+i},\dots\Bigr]\in V.\end{multline*}
Taking $\overline{\mathbb{Q}}$-Zariski closures replaces the $j'(\gamma_i\sigma_i)$ and $j''(\gamma_i\sigma_i)$ by suitable rational functions involving $\sigma_i$, $\operatorname{Im}\sigma_i$, $\chi^*(\sigma_i)$, $c_i$, $d_i$, and some complex numbers $w$ which are allowed to be arbitrary. Making these replacements we get a variety $V_{\gamma,\tau}$, defined over $\overline{\mathbb{Q}}$, depending polynomially on $c_i,d_i,c_{i,j},(c_{i,j}\tau_{i}+d_{i,j})$ (and nothing else). Thus each $V_{\gamma,\tau}$ is a fibre of an algebraic family of varieties $\widehat{V}$, defined over $\overline{\mathbb{Q}}$.
We now apply Theorem \ref{thrm:UniformPartialAO} to $\widehat{V}$. We get an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma_2\bigl(\widehat{V}\bigr)$, consisting of $\mathcal{H}$-special subvarieties of $\mathcal{H}^{(h+k)}$, such that for all $\gamma\in\operatorname{SL}_2(\mathbb{Z})^n$ and all quadratic $\tau\in\mathcal{H}^h$, either \[V_{\gamma,\tau}=\mathbb{C}^{3(h+k)}\] or the $J$-special subvarieties of $V_{\gamma,\tau}$ are accounted for by $\sigma_2\bigl(\widehat{V}\bigr)$. The $H\in\sigma_2\bigl(\widehat{V}\bigr)$ correspond in the obvious way to an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\mathcal{G}$ of proper $\mathcal{H}$-special subvarieties of $G$. We add all these $H\in\mathcal{G}$ to the overarching collection $\sigma(V)$.\\
Now, a quadratic point lying in \[\bigcup_{\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}\gamma\cdot G\cap J^{-1}(V)\] corresponds to a quadratic point $\tau=(\tau_1,\dots,\tau_n)\in G$ together with $\gamma\in\operatorname{SL}_2(\mathbb{Z})^n$ such that $\gamma\tau\in J^{-1}(V)$. Such a pair $(\tau, \gamma)$ necessarily satisfies
\[J(\tau')\in V_{\gamma,\tau},\]
where
\[\tau'=(\alpha_{s_1}\tau_{s_1},\dots,\alpha_{s_h}\tau_{s_h},\beta_1\tau_{n-k+1},\dots,\beta_k\tau_n).\]
By the properties of $\sigma_2\bigl(\widehat{V}\bigr)$, either $\tau'\in H$, for some $H\in\sigma_2\bigl(\widehat{V}\bigr)$, or $V_{\gamma,\tau}=\mathbb{C}^{3(h+k)}$. In the first case, we have $\tau\in H$, for some $H\in\mathcal{G}$.
Define a set
\[R=\left\{(\tau,\gamma)\in\mathcal{H}^n\times\operatorname{SL}_2(\mathbb{Z})^n:\begin{matrix}\tau\text{ is quadratic}, \tau\in G, J(\gamma\tau)\in V, \\\\\forall H\in\mathcal{G},\tau\not\in H\end{matrix}\right\}.\]
If $R$ is empty, we can stop here, removing $G$ from $\sigma(V)$ entirely; it contributes no special points other than those already accounted for by $\mathcal{G}$.
If $R\ne\emptyset$, we continue. By the properties of $\mathcal{G}$, every $(\tau,\gamma)\in R$ must satisfy
\[V_{\gamma,\tau}=\mathbb{C}^{3(h+k)}.\]
Hence, in the definition of $V_{\gamma,\tau}$, we can replace $(X_1,\dots,Z_{h+k})$ with $J(z_1,\dots,z_{h+k})$, for \emph{arbitrary} $z_i\in\mathcal{H}$. By (\ref{eqn:muNCalc}) and an easy calculation involving $\lambda_N$, we see that
\begin{multline*}\label{eqn:DefnOfWz}\Bigl[\dots,J(\gamma_i\sigma_i),\dots,J(z_i),\dots\\\dots,j(h_{i,j}z_i),j'(h_{i,j}z_i)\dfrac{(c_{i,j}\tau_{s_i}+d_{i,j})^2}{\delta_{i,j}^2},j''(h_{i,j}z_i)\dfrac{(c_{i,j}\tau_{s_i}+d_{i,j})^4}{\delta_{i,j}^4}+2j'(h_{i,j}z_i)\dfrac{c_{i,j}(c_{i,j}\tau_{s_i}+d_{i,j})^3}{\delta_{i,j}^2},\dots\\\dots,J(z_{h+i}),\dots\Bigr]\in V\end{multline*}
for all $(z_1,\dots,z_{h+k})\in\mathcal{H}^{h+k}$. Here $h_{i,j}$ is an upper triangular matrix in the $\operatorname{SL}_2(\mathbb{Z})$-orbit of $g_{i,j}$ and $\delta_{i,j}$ is its lower-right entry.
As before, by taking $\overline{\mathbb{Q}}$-Zariski closures we can replace the $J(\gamma_i\sigma_i)$ by some $\overline{\mathbb{Q}}$-rational functions of $c_i$ and $d_i$. Thus the above equation, for each choice of $\mathbf{z}=(z_1,\dots,z_{h+k})$, defines a $G$-variety which we will call $W_{\mathbf{z}}$. The intersection
\[W=\bigcap_{\mathbf{z}\in\mathcal{H}^{h+k}}W_{\mathbf{z}}\]
is still a $G$-variety. (It is nonempty since $R$ is nonempty.)
We have seen that every $(\tau,\gamma)\in R$ satisfies
\[(\tau_{s_1},\dots,\tau_{s_h})\in W^\gamma.\]
Conversely, any quadratic $\mathbf{z}=(z_1,\dots,z_h)$, no matter its height, which is a solution of some $W^\gamma$, must come from a member of $\gamma\cdot G\cap J^{-1}(V)$. Indeed, in this situation we have \begin{multline*}\Bigl[\dots,J(\gamma_i\sigma_i),\dots,J(z_i),\dots\\\dots,j(h_{i,j}z_i),j'(h_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^2}{\delta_{i,j}^2},j''(h_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^4}{\delta_{i,j}^4}+2j'(h_{i,j}z_i)\dfrac{c_{i,j}(c_{i,j}z_i+d_{i,j})^3}{\delta_{i,j}^2},\dots\\\dots,J(z_{h+i}),\dots\Bigr]\in V,\end{multline*}
for all $(z_{h+1},\dots,z_{h+k})\in\mathcal{H}^k$.
Brief calculations involving the transformation laws for $j'$ and $j''$ show that
\[j'(h_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^2}{\delta_{i,j}^2}=j'(\gamma_{i,j}g_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^2}{(c_{i,j}z_i+d_{i,j})^2}=j'(\gamma_{i,j}g_{i,j}z_i)\]
and that
\[j''(h_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^4}{\delta_{i,j}^4}+2j'(h_{i,j}z_i)\dfrac{c_{i,j}(c_{i,j}z_i+d_{i,j})^3}{\delta_{i,j}^2}=j''(\gamma_{i,j}g_{i,j}z_i)\dfrac{(c_{i,j}z_i+d_{i,j})^4}{(c_{i,j}z_i+d_{i,j})^4}=j''(\gamma_{i,j}g_{i,j}z_i).\]
Thus we get
\begin{equation*}\Bigl[\dots,J(\gamma_i\sigma_i),\dots,J(z_i),\dots,j(\gamma_{i,j}g_{i,j}z_i),j'(\gamma_{i,j}g_{i,j}z_i),j''(\gamma_{i,j}g_{i,j}z_i),\dots,J(z_{h+i}),\dots\Bigr]\in V,\end{equation*}
for every $(z_{h+1},\dots,z_{h+k})\in\mathcal{H}^k$. Hence $\mathbf{z}$ corresponds to the point
\[(\dots,\gamma_i\sigma_i,\dots,z_i,\dots,\gamma_{i,j}g_{i,j}z_i,\dots,z_{h+i},\dots)\in \gamma\cdot G\cap J^{-1}(V),\]
for any choice of $z_{h+i}$.\\
To sum up, we have seen that the quadratic points in $\gamma\cdot G\cap J^{-1}(V)$ consist precisely of:
\begin{itemize}
\item Those quadratic points that lie in some $H\in\mathcal{G}$.
\item Those points that correspond to quadratic solutions of $W^\gamma$, that is
\[\gamma\cdot G\cap \operatorname{Sp}(\gamma, W).\]
\end{itemize}
We are not claiming that these possibilities are mutually exclusive!
Before we are done with $G$, we must check whether $(G,W)$ is geodesically minimal. If it does happen to be geodesically minimal, we are done. Otherwise, by definition,
\[\bigcup_{\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}\operatorname{Sp}(\gamma, W)\]
is contained in some $\operatorname{SL}_2(\mathbb{Z})$-finite collection of proper $\mathcal{H}$-special subvarieties of $\mathcal{H}^{h+k}$. This is not a problem; we simply remove $G$ from $\sigma(V)$ entirely and replace it by the appropriate finite collection of proper subvarieties of $G$.\\
This is as much as we can do with a given $G\in\sigma(V)$. It may be helpful to have a brief summary here. For the given $G$ we have done two things:
\begin{enumerate}
\item Either removed $G$ from $\sigma(V)$ entirely or associated to $G$ a geodesically minimal $G$-variety $W_G$.
\item Added to $\sigma(V)$ some finite collection $\mathcal{G}$ of proper subvarieties of $G$.
\end{enumerate}
Moreover, the union of the $\mathcal{H}$-special subvarieties of $\gamma\cdot G\cap J^{-1}(V)$ is precisely
\[\gamma\cdot G\cap \operatorname{Sp}(\gamma,W_G),\]
together with the $\mathcal{H}$-special subvarieties of
\[\bigcup_{H\in\mathcal{G}}\gamma\cdot H\cap J^{-1}(V).\]
This is enough to conclude the theorem. Simply perform the above process to each $G\in\sigma(V)$ in turn, taking the $G$ in \emph{descending order of dimension}. Since each $G$ can add to $\sigma(V)$ only finitely many varieties of strictly smaller dimension, the process will eventually terminate.
\end{proof}
\section{Uniformity}
In this final section, I will discuss uniform versions of the two main results of the document, Theorems \ref{thrm:PartialAOwDerivs} and \ref{thrm:PrecAOwDerivs}. For this, we are closely following work of Scanlon, who gives in \cite{Scanlon2004} a very general approach to uniformising results of this type. Unfortunately, our setting does not fit perfectly into Scanlon's framework; the full strength of his result is therefore not available to us. For our purposes it is enough that the central ideas in Scanlon's work \emph{do} apply.
There are two main lemmas of Scanlon's that we will use. The first is Lemma 3.1 from \cite{Scanlon2004}, which applies directly. We write it out here for completeness.
\begin{lma}\label{lma:PassingUpFields}
Let $k$ be a field and $K$ an algebraically closed field extension of $k$. Let $X$ be a variety over $k$ and $X_K$ its base change to $K$. Let $A\subseteq X(K)$. Suppose that $Y\subseteq X_K$ is constructible. Then there is a natural number $n$, some constructible set $Z\subseteq X\times X^n$, defined over $k$, and some $a\in A^n$ such that $Z_a(K)\cap A=Y(K)\cap A$.
\begin{proof}
See Lemma 3.1 of \cite{Scanlon2004}.
\end{proof}
\end{lma}
The next lemma is the analogue of Lemma 3.2 from \cite{Scanlon2004}. The statement needs some very slight modification before it applies in our setting, but the proof of the modified lemma is essentially identical to the original.
\begin{lma}\label{lma:UniformConstructible}
Let $k\subseteq K$ be algebraically closed fields, $X=\mathbb{A}^s$, $B=\mathbb{A}^t$ and $Y\subseteq X\times B$ a constructible subset. Let $A\subseteq X(K)$. Suppose that there exist $\alpha,\beta\in A$ with the following property:
For $m\in \mathbb{N}$ and $i\leq m$, let $p^{(m)}_i$ be the $m$-tuple
\[(\alpha,\dots,\alpha,\beta,\alpha,\dots,\alpha),\]
with the $\beta$ arising in the $i$th place, and let $P^{(m)}_i$ be the $k$-Zariski closure of $\left\{p^{(m)}_i\right\}$. We require that
\[p^{(m)}_i\not\in P^{(m)}_j\]
whenever $i\ne j$.
Then there is a natural number $n$ and a $k$-constructible set $Z\subseteq X\times X^n$ such that for any $b\in B(K)$, there is $a\in A^n$ for which $Y_b(K)\cap A=Z_a\cap A$.
\begin{proof}
Throughout this proof we will suppress notation, writing $W=W(K)$ whenever $W$ is a constructible set over $K$.
Consider $K$ as a structure in the language $\mathcal{L}=(+,\cdot,P_A,\{a\}_{a\in K})$, where each constant symbol $a$ is to be interpreted as the corresponding element $a\in K$ and $P_A$ is an $s$-ary predicate to be interpreted as the set $A$. In this language, we can express the condition ``$x\in A\cap W$'' for any affine variety $W$ over $K$.
Let $T$ be the $\mathcal{L}$-theory of $K$, and then let $\mathcal{L}'$ be $\mathcal{L}$ together with some new constant symbols $b_1,\dots, b_t$. Write $b=(b_1,\dots,b_t)$.
Let $\mathcal{C}(k)$ be the set of $k$-constructible subsets of $X\times X^N$ (for some $N$) and consider the set
\[\Gamma = T\cup \{\forall c\in A^N\quad \exists x\in A\quad (x\in Y_b\setminus Z_c\vee x\in Z_c\setminus Y_b): Z\in \mathcal{C}(k)\}.\]
Suppose that $\Gamma$ is \emph{not} finitely satisfiable. Let $\Gamma_0$ be a finite subset witnessing this. Since $T$ is the theory of $K$, $\Gamma_0$ cannot be contained in $T$, so it mentions some finitely many $k$-constructible sets $Z_1,\dots,Z_l$, with $Z_i\subseteq X\times X^{N_i}$. Since $\Gamma_0$ is not satisfiable, we have:
\[\forall b\in B \quad\exists i\leq l\quad \exists c\in A^{N_i}\quad \forall x\in A\quad( x\not\in Y_b\setminus (Z_i)_c\wedge x\not\in (Z_i)_c\setminus Y_b).\]
In other words, for every $b\in B$, there is some $Z_i$ and some $c\in A^{N_i}$ such that
\[A\cap (Z_i)_c=A\cap Y_b.\]
Now let $N=\max\{N_i:i\leq l\}$ and let $n=N+l$. Define
\[Z=\bigcup_{i=1}^{l}Z_i\times X^{N-N_i}\times P^{(l)}_i,\]
with $P^{(l)}_i$ as in the hypotheses of the lemma. This $Z$ is a constructible set defined over $k$, and satisfies the conclusion of the lemma: if $b\in B$, then for some $i$, $c$, we have $A\cap (Z_i)_c=A\cap Y_b$. Letting $c'=(c,\alpha,\dots,\alpha, p^{(l)}_i)\in A^{N+l}$, we get
\[A\cap Z_{c'}=A\cap (Z_i)_c=A\cap Y_b.\]
(Here the hypothesis on the points $p^{(l)}_i$ is exactly what guarantees that $c'$ meets only the $i$th piece of the union defining $Z$: if $(x,c')$ lay in $Z_{i'}\times X^{N-N_{i'}}\times P^{(l)}_{i'}$ for some $i'\ne i$, we would have $p^{(l)}_i\in P^{(l)}_{i'}$, which is excluded.)
So suppose on the other hand that $\Gamma$ is finitely satisfiable. By the Compactness Theorem, it is satisfiable. So we have an algebraically closed extension $L\supseteq K$ of $K$, a point $b\in B(L)$ and a set $A^*\subseteq X(L)$ such that, for every $k$-constructible $Z\subseteq X\times X^N$ and $c\in (A^*)^N$, we have
\[Y_b(L)\cap A^*\ne Z_c(L)\cap A^*.\]
This contradicts Lemma \ref{lma:PassingUpFields} (applied to $k$, $L$, $A^*$, $X$ and $Y_b$).
\end{proof}
\end{lma}
Using this, we would like to get a uniform version of \ref{thrm:PrecAOwDerivs}. This can indeed be done, though perhaps not in exactly the manner we might like.
Given an algebraic family $V\subseteq \mathbb{C}^{3n+k}$, we can apply \ref{lma:UniformConstructible}, with
\[A=\{J(\tau):\tau\in\mathcal{H}^n \text{ quadratic}\}.\]
For the resulting constructible set $Z$, we can write
\[Z=\bigcup_{i=1}^{r}X_i\setminus Y_i\]
for some varieties $X_i, Y_i$ defined over $\overline{\mathbb{Q}}$. If we apply \ref{thrm:PrecAOwDerivs} (under the assumption of Conjecture \ref{conj:AlgIndep}) to each of the $X_i$ and $Y_i$, we get a finite collection $\sigma(V)$, consisting of \emph{pairs} $(G,H)$ of $\mathcal{H}$-special varieties, with a corresponding $G$-variety $W_G$ and $H$-variety $W_H$, such that the union of the $\mathcal{H}$-special subvarieties of $J^{-1}(V_b)$, for any fibre $b$, is precisely the fibre of the set
\[\bigcup_{(G,H)\in\sigma(V)}\left[\bigcup_{\gamma\in\operatorname{SL}_2(\mathbb{Z})^n} \gamma\cdot G\cap \operatorname{Sp}(\gamma,W_G)\mathbin{\Bigg\backslash}\bigcup_{\gamma\in\operatorname{SL}_2(\mathbb{Z})^n}\gamma\cdot H\cap\operatorname{Sp}(\gamma,W_H)\right]\]
at some quadratic $\tau=\tau(b)$.
This is not quite as good as we might like; one would prefer not to have the $\mathcal{H}$-special sets $H$ in the picture. These arise thanks to the fact that $Z$ is only constructible, rather than Zariski closed. It does not seem possible to get around this; the reason is essentially the same as the reason why the full strategy outlined in Scanlon's paper \cite{Scanlon2004} does not work.
Let $G$ and $H$ be $\mathcal{H}$-special varieties, with a corresponding $G$-variety $W_G$ and $H$-variety $W_H$. If $(G\cap H, W_G\cap W_H)$ were geodesically minimal whenever $(G,W_G)$ and $(H,W_H)$ are, then the rest of Scanlon's argument would apply; since this need not hold, it does not. Hence the above seems likely to be the best possible uniform version of \ref{thrm:PrecAOwDerivs}.
Theorem \ref{thrm:PartialAOwDerivs}, however, can be uniformised more cleanly, since it is less precise.
\begin{propn}
Assume Conjecture \ref{conj:AlgIndep}. Let $V\subseteq\mathbb{C}^{3n+k}$ be an algebraic variety (with arbitrary field of definition), considered as an algebraic family of fibres $V_b\subseteq\mathbb{C}^{3n}$. There is a natural number $N$, and an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$ of $\mathcal{H}$-special subvarieties of $\mathcal{H}^{n+N}$, with the following property.
For every $b\in\mathbb{C}^k$ with $V_b\ne \mathbb{C}^{3n}$, the $\mathcal{H}$-special points of $J^{-1}(V_b)$ are contained in
\[\bigcup_{G\in\sigma(V)} G_\tau,\]
where $G_\tau$ is the fibre of $G$ at some fixed quadratic $\tau=\tau(b)\in\mathcal{H}^N$.
Moreover, all of the $G_\tau$ are \emph{proper} subvarieties of $\mathcal{H}^n$.
\begin{proof}
Let $V\subseteq\mathbb{C}^{3n+k}$ be a variety, considered as an algebraic family of fibres $V_b\subseteq\mathbb{C}^{3n}$. We will apply Lemma \ref{lma:UniformConstructible}, with $X=\mathbb{C}^{3n}$, $Y=V$, $K=\mathbb{C}$, $k=\overline{\mathbb{Q}}$ and
\[A=\{J(\tau):\tau\in\mathcal{H}^n \text{ quadratic}\}.\]
To apply the lemma, we need to find two suitable points $\alpha,\beta\in A$. This is easy; we only need the $j$-coordinates of $\alpha$ and $\beta$ to be distinct. So Lemma \ref{lma:UniformConstructible} does apply; we get a natural number $d$ and a $\overline{\mathbb{Q}}$-constructible set $Z\subseteq \mathbb{C}^{3(n+dn)}$ such that, for every $b\in\mathbb{C}^k$, there is some $a\in A^{d}$ such that
\[Z_a\cap A = V_b\cap A.\]
Write
\[Z=\bigcup_{i=1}^{r}X_i\setminus Y_i,\]
for some $\overline{\mathbb{Q}}$-varieties $X_i$ and $Y_i$. We will apply Theorem \ref{thrm:PrecAOwDerivs} to each $X_i$ and $Y_i$ separately. We get some finite sets $\sigma(X_i)$ and $\sigma(Y_i)$ of $\mathcal{H}$-special varieties, with associated $G$-varieties, exactly describing the quadratic points of $J^{-1}(X_i)$ and $J^{-1}(Y_i)$ in the manner described in the statement of Theorem \ref{thrm:PrecAOwDerivs}.\\
Now, given $b\in\mathbb{C}^{k}$, let $a\in A^d$ be such that $Z_a\cap A=V_b\cap A$. Let $\tau\in\mathcal{H}^{N}$, where $N=nd$, be a preimage of $a$ under $J$.
First suppose that no $\sigma(X_i)$ contains any $G$ such that, for some $\gamma\in\operatorname{SL}_2(\mathbb{Z})^N$, $G_{\gamma\tau}=\mathcal{H}^n$. Then we are done; the special points of $J^{-1}(V_b)$ are contained in the $\operatorname{SL}_2(\mathbb{Z})$-finite collection of proper $\mathcal{H}$-special varieties
\[\{\gamma'\cdot G_{\gamma\tau}:G\in\sigma(X_i), i\leq r,\gamma\in\operatorname{SL}_2(\mathbb{Z})^N,\gamma'\in\operatorname{SL}_2(\mathbb{Z})^n\}.\]
On the other hand, suppose that some $\sigma(X_i)$ contains $G$ such that, for some $\gamma\in\operatorname{SL}_2(\mathbb{Z})^N$, $G_{\gamma\tau}=\mathcal{H}^n$. Then the $G$-variety associated to $G$ cannot impose any condition on the coordinates corresponding to $\mathcal{H}^n$. Hence by the properties laid out in \ref{thrm:PrecAOwDerivs}, we must have $(X_i)_a=\mathbb{C}^{3n}$.
Now apply the same argument to $Y_i$. There are two possibilities. Either:
\begin{itemize}
\item The special points of $(Y_i)_a$ are contained in an $\operatorname{SL}_2(\mathbb{Z})$-finite collection
\[\{\gamma'\cdot G_{\gamma\tau}:G\in\sigma(Y_i),i\leq r,\gamma\in\operatorname{SL}_2(\mathbb{Z})^N,\gamma'\in\operatorname{SL}_2(\mathbb{Z})^n\},\]
with each $G_{\gamma\tau}$ being a \emph{proper} $\mathcal{H}$-special subvariety of $\mathcal{H}^n$, or
\item $(Y_i)_a=\mathbb{C}^{3n}$.
\end{itemize}
In the first case, since the special points of $(Y_i)_a$ are contained in a lower-dimensional set, it follows that the special points of $Z_a$ are Zariski dense, whence $V_b=\mathbb{C}^{3n}$. In the second case, $(X_i)_a\setminus (Y_i)_a$ contributes no new special points, so we can ignore this $i$ and move on.
Finally, note that if $G_\tau=\mathcal{H}^n$ for some $\tau$, then it must be the case for every $\tau'$ that either $G_{\tau'}=\mathcal{H}^n$ or $G_{\tau'}=\emptyset$. Thus, if some $G\in\sigma(X_i)$ has the property that (for some $\tau$), $G_\tau=\mathcal{H}^n$, we can safely remove it. By the previous arguments, all the special points will still be covered by the rest of the $G\in\bigcup\sigma(X_i)$, except in the case where $V_b=\mathbb{C}^{3n}$.
Hence the ($\operatorname{SL}_2(\mathbb{Z})$-finite) collection
\[\sigma(V)=\{\gamma\cdot G: \gamma\in\operatorname{SL}_2(\mathbb{Z})^n, G\in\sigma(X_i)\text{ for some }i,\text{ and for every }\tau\in\mathcal{H}^N, G_\tau\ne\mathcal{H}^n\}\]
satisfies the conclusion of the proposition.
\end{proof}
\end{propn}
To conclude, we will state one final corollary of the above result, which simply says that Theorem \ref{thrm:PartialAOwDerivs} holds for arbitrary varieties, rather than just those defined over $\overline{\mathbb{Q}}$.
\begin{cor}
Assume Conjecture \ref{conj:AlgIndep}. Let $V\subseteq\mathbb{C}^{3n}$ be a proper algebraic variety (with arbitrary field of definition). There exists an $\operatorname{SL}_2(\mathbb{Z})$-finite collection $\sigma(V)$, consisting of \emph{proper} $\mathcal{H}$-special varieties of $\mathcal{H}^n$, such that every $\mathcal{H}$-special point in $J^{-1}(V)$ is contained in some $G\in\sigma(V)$.
\begin{proof}
Immediate.
\end{proof}
\end{cor}
\bibliographystyle{../../bib/scabbrv}
\section{Background normalisation and modelling}
\label{sec:Backgrounds}
The largest background contributions to $t$-channel single top-quark production arise from
\ttbar\ and $W$+jets production. The former is difficult to distinguish from the signal
since \ttbar\ events contain real top quarks in the final state. The $W$+jets production
contributes to the background if there is a $b$-quark in the final state or if a jet
containing another quark flavour is mistagged. Multijet production via the strong
interaction can contribute as well if, in addition to two reconstructed jets, an extra
jet is misidentified as an isolated lepton, or if a non-prompt lepton appears to be
isolated (both referred to as fake leptons). Other minor backgrounds originate from $Wt$,
$s$-channel single top-quark, $Z$+jets and diboson production.
For all processes, except multijet production, the normalisation is initially estimated by
using the Monte Carlo simulation scaled to the theoretical cross-section predictions, and the
event distribution modelling is taken from simulation.
The \ttbar\ production cross-section is calculated at NNLO in QCD including resummation
of next-to-next-to-leading-logarithm (NNLL) soft gluon terms with
Top{\scriptsize ++}2.0~\cite{Cacciari:2011,Baernreuther:2012,Czakon:2012a,Czakon:2012b,Czakon:2013,Czakon:2011}.
Its predicted value is $253^{+13}_{-15}$~pb~\cite{Cacciari:2011}. The quoted uncertainties include
the PDF and $\alpha_\mathrm{s}$ uncertainties calculated according to the PDF4LHC
prescription~\cite{PDF4LHC} with the MSTW2008 NNLO~\cite{PDF_Martin,PDF_Martin_2}, CT10 NNLO~\cite{PDF_Lai,PDF_Gao}
and NNPDF2.3 5f FFN~\cite{PDF_Ball} PDF sets, and the QCD scale uncertainty. The $t$-channel,
$Wt$ and $s$-channel single-top-quark production cross-sections are calculated
at NLO precision in QCD, including the resummation of NNLL soft-gluon terms, leading to $87.7^{+3.4}_{-1.9}$~pb~\cite{Kidonakis_tchan},
$22.4\pm1.5$~pb~\cite{Kidonakis_Wt} and $5.6\pm0.2$~pb~\cite{Kidonakis_schan}, respectively.
The calculations assume a top-quark mass of 172.5~GeV and use the MSTW2008 NNLO~\cite{PDF_Martin}
PDFs. The quoted uncertainties include those due to the QCD scale uncertainty and the correlated
PDF--$\alpha_{\mathrm{s}}$ uncertainty.
The cross-sections for inclusive $W$- and $Z$-boson production are estimated with NNLO precision
using the FEWZ program~\cite{FEWZ_1,FEWZ_2} and the MSTW2008 NNLO PDFs. The diboson samples are
normalised to the NLO cross-section predictions calculated with MCFM~\cite{MCFM}. A normalisation
uncertainty of 20\% is assigned to the $W$+jets background. This uncertainty is estimated from
parameter variations of the \textsc{Sherpa} generator covering the measured $W$+jets cross-sections~\cite{ATLAS_Wjets}.
A normalisation uncertainty of 20\% is also assumed for the $Z$+jets and diboson processes.
The normalisation as well as the event modelling of the multijet background is estimated from
data using the matrix method~\cite{ATLAS_topreco,ATLAS_ttbar}. This method allows the derivation
of the true composition of the data sample in terms of prompt (real) and fake leptons from its
observed composition in terms of tight (signal selection) and loose leptons. Alternative
normalisation and modelling estimates, based on the mixed data--simulation jet-electron method~\cite{ATLAS_tchan_letter,ATLAS_tchan}
and on the purely data-driven anti-muon selection~\cite{ATLAS_topreco}, are used to estimate the
systematic uncertainties. From the comparison an overall normalisation uncertainty of 70\% is
assigned to the multijet contribution.
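Schematically (in our own notation; the detailed implementation follows Refs.~\cite{ATLAS_topreco,ATLAS_ttbar}), the matrix method inverts the relations between the observed tight and loose event counts and the underlying real- and fake-lepton yields,
\[N^{\mathrm{tight}} = \varepsilon_{\mathrm{real}}\,N_{\mathrm{real}}^{\mathrm{loose}} + \varepsilon_{\mathrm{fake}}\,N_{\mathrm{fake}}^{\mathrm{loose}},
\qquad
N^{\mathrm{loose}} = N_{\mathrm{real}}^{\mathrm{loose}} + N_{\mathrm{fake}}^{\mathrm{loose}},\]
where $\varepsilon_{\mathrm{real}}$ ($\varepsilon_{\mathrm{fake}}$) denotes the efficiency for a real (fake) lepton in the loose sample to also satisfy the tight selection, so that the multijet yield in the tight sample is estimated as
\[\varepsilon_{\mathrm{fake}}\,N_{\mathrm{fake}}^{\mathrm{loose}} = \frac{\varepsilon_{\mathrm{fake}}}{\varepsilon_{\mathrm{real}}-\varepsilon_{\mathrm{fake}}}\left(\varepsilon_{\mathrm{real}}\,N^{\mathrm{loose}} - N^{\mathrm{tight}}\right).\]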
To check the modelling of the \ttbar\ and $W$+jets background contributions, the simulated
events are compared to the data in two dedicated background-dominated regions. Samples enriched
in \ttbar\ events (\ttbar\ control region) are defined by considering events preselected as
explained in \Section{sec:Selections}, but containing two additional jets that are required to
be untagged. This control region is also used in the normalisation fit described in \Section{sec:Yields}.
Samples enriched in $W$+jets events ($W$+jets control region) are selected by applying a
relaxed $b$-tagging requirement corresponding to an efficiency of 80\%. In addition, all events
satisfying the signal $b$-tagging requirement are excluded. For these two control regions
the dilepton rejection and the four final selection cuts are not applied. An additional category
of events is defined by selecting all events not passing the four signal selection cuts
(anti-signal region). This region is only used in the normalisation fit, in combination with
the \ttbar\ control region. It is preferred to the $W$+jets control region to constrain the
$W$+jets normalisation because it has a flavour composition more similar to that of the signal
region. The predicted fraction of heavy-flavour events in the $W$+jets contribution is around
95\% for both the signal and anti-signal selections, whereas it is 55\% in the $W$+jets control
region.
Good overall data--prediction agreement is found in the \ttbar, $W$+jets and anti-signal control
regions for the relevant kinematic observables, as well as for the various angular observables used
in the measurements. Figure~\ref{fig:ttbar_selections} shows the distributions in the \ttbar\
control region of the four variables used to define the final selections. The distributions
obtained in the $W$+jets control region are displayed in Figure~\ref{fig:Wjets_selections}.
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_abs_ujet_eta_rebin_tchan_1tag_noVeto_noFiducial_4jets_Leptons}
\label{fig:ttbar_abs_ujet_eta}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_dEta_bjet_ujet_rebin_tchan_1tag_noVeto_noFiducial_4jets_Leptons}
\label{fig:ttbar_dEta_bjet_ujet}}
\hfill
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_top_m_tchan_1tag_noVeto_noFiducial_4jets_Leptons}
\label{fig:ttbar_top_m}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_Ht_event_tchan_1tag_noVeto_noFiducial_4jets_Leptons}
\label{fig:ttbar_Ht_event}}
\caption
{Distributions of the selection variables in the \ttbar\ control region:
\protect\subref{fig:ttbar_abs_ujet_eta} $|\eta|$ of the untagged jet,
\protect\subref{fig:ttbar_dEta_bjet_ujet} separation in $\eta$ between the untagged and $b$-tagged jets,
\protect\subref{fig:ttbar_top_m} reconstructed top-quark mass, and
\protect\subref{fig:ttbar_Ht_event} scalar sum of the \pT\ of the lepton, the \pT\ of the jets and \MET.
The observed distributions are compared to the predicted signal and background distributions, normalised
to the results of the maximum-likelihood fit. The labels $tq$ and $t\bar{b}$ refer to the $t$-channel
and $s$-channel single-top-quark processes, respectively, and $VV$ to diboson production. The uncertainty
bands include the statistical post-fit uncertainty, the uncertainty due to the limited size of the simulation
samples and the uncertainty in the normalisation of the multijet background, added in quadrature. The last bin
of the histograms includes overflows. The lower panels show the ratio of data to prediction.}
\label{fig:ttbar_selections}
\end{figure}
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_abs_ujet_eta_rebin_tchan_1tag_control_noVeto_noFiducial_2jets_Leptons}
\label{fig:Wjets_abs_ujet_eta}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_dEta_bjet_ujet_rebin_tchan_1tag_control_noVeto_noFiducial_2jets_Leptons}
\label{fig:Wjets_dEta_bjet_ujet}}
\hfill
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_top_m_tchan_1tag_control_noVeto_noFiducial_2jets_Leptons}
\label{fig:Wjets_top_m}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_Ht_event_tchan_1tag_control_noVeto_noFiducial_2jets_Leptons}
\label{fig:Wjets_Ht_event}}
\caption
{Distributions of the selection variables in the $W$+jets control region:
\protect\subref{fig:Wjets_abs_ujet_eta} $|\eta|$ of the untagged jet,
\protect\subref{fig:Wjets_dEta_bjet_ujet} separation in $\eta$ between the untagged and $b$-tagged jets,
\protect\subref{fig:Wjets_top_m} reconstructed top-quark mass, and
\protect\subref{fig:Wjets_Ht_event} scalar sum of the \pT\ of the lepton, the \pT\ of the jets and \MET.
The observed distributions are compared to the predicted signal and background distributions.
The $W$+jets distributions are normalised to match the observed number of events. The labels
$tq$ and $t\bar{b}$ refer to the $t$-channel and $s$-channel single-top-quark processes, respectively,
and $VV$ to diboson production. The uncertainty bands include the uncertainty due to the limited size
of the simulation samples and the uncertainty in the normalisation of the multijet background, added in
quadrature. The last bin of the histograms includes overflows. The lower panels show the ratio of data
to prediction.}
\label{fig:Wjets_selections}
\end{figure}
\section{Conclusion}
\label{sec:Conclusion}
Measurements of the top-quark and $W$-boson polarisation observables in $t$-channel single
top-quark production at $\sqrt{s}=8$~TeV with 20.2~fb$^{-1}$ of $pp$ collision data recorded
with the ATLAS detector at the LHC are presented. The selected events contain one isolated
electron or muon, large missing transverse momentum and exactly two jets, of which one is
tagged as a $b$-jet. A cut-based analysis is used to discriminate
the signal events from background, and the electron and muon channels are combined. The
polarisation observables are measured from asymmetries in various angular distributions
unfolded to the parton level. Unfolding corrections based on a Standard Model simulation of the
$t$-channel process are used, as well as model-independent corrections derived through an
interpolation method. The measured asymmetries and the measured polarisation observables are
in agreement with the predictions of the Standard Model. Limits on the imaginary part of
the anomalous coupling \ensuremath{g_{\mathrm{R}}}\ are also set, giving Im\,\ensuremath{g_{\mathrm{R}}}~$\in$~[$-$0.18, 0.06] at the 95\%
confidence level. The extracted values improve on the most recently published limits for this
coupling.
\section{The ATLAS detector}
\label{sec:Detector}
\newcommand{\AtlasCoordFootnote}{ATLAS uses a right-handed coordinate system with its origin
at the nominal interaction point in the centre of the detector and the $z$-axis along the beam
pipe. The $x$-axis points from the interaction point to the centre of the LHC ring, and the
$y$-axis points upwards. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane,
$\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms
of the polar angle $\theta$ as $\eta = -\ln \tan(\theta/2)$.}
The ATLAS detector~\cite{ATLAS_detector} is a multi-purpose particle detector with a forward-backward
symmetric, cylindrical geometry and a near $4\pi$ coverage in solid angle around the collision
point.\footnote{\AtlasCoordFootnote} It consists of an inner tracking detector surrounded by a thin
superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer. The
inner detector is immersed in a \SI{2}{\tesla} axial magnetic field, and provides charged-particle
tracking in the pseudorapidity range $|\eta|<2.5$. It contains a high-granularity silicon pixel detector,
a silicon microstrip tracker, and a straw-tube transition radiation tracker. Lead/liquid-argon
sampling calorimeters provide electromagnetic energy measurements with high granularity in the
pseudorapidity ranges $|\eta|<1.5$ (barrel region) and $1.4<|\eta|<3.2$ (endcap region). Hadronic
energy measurements are provided by steel/scintillator-tile calorimeters in the central pseudorapidity
range $|\eta|<1.7$ and by copper/liquid-argon calorimeters in the endcap region $1.5<|\eta|<3.2$.
The forward region is instrumented with liquid-argon calorimeters for electromagnetic and
hadronic energy measurements, extending the coverage to $|\eta|=4.9$. The muon spectrometer surrounds
the calorimeters and incorporates three large air-core toroid superconducting magnets with eight
coils each. It includes separate trigger detectors and high-precision tracking chambers, providing
muon momentum measurement for $|\eta|<2.7$ and muon triggering up to $|\eta|=2.4$.
A three-level trigger system is used to select interesting events~\cite{ATLAS_trigger}. The first-level
trigger is hardware-based and uses a subset of the detector information to reduce the accepted event
rate to less than \SI{75}{\kHz}. The second and third levels are software-based and together reduce
the event rate to about \SI{400}{\Hz}.
\section{Angular distributions}
\label{sec:Distributions}
The distributions observed at reconstruction level for the angular observables used to measure
the various asymmetries are shown in Figures~\ref{fig:Distributions_1} and \ref{fig:Distributions_2}.
They are compared to the predicted signal and background distributions, normalised to the results of
the maximum-likelihood fit. To minimise the unfolding corrections that are applied after background
subtraction, two bins are chosen for the angular distributions from which forward-backward asymmetries
are extracted, while four bins are used for the angular distribution from which the $\ensuremath{A_{\mathrm{EC}}}$ asymmetry
is determined.
Depending on the angular observable, as described in Section~\ref{sec:Observables}, the charged-lepton
four-momentum is computed in the rest frame of the reconstructed top quark or in the rest frame
of the reconstructed $W$ boson. The angular observables related to the top-quark polarisation are
defined by taking the momentum of the untagged jet as the spectator-quark direction, whereas those
related to the $W$-boson spin observables are defined by considering the reverse momentum of the
$b$-tagged jet as the $W$-boson direction.
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_tW_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_tW_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\hfill
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_4}
\label{fig:Polar_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_4}}
\caption
{Distributions in the signal region of the angular observables used to measure the various asymmetries:
\protect\subref{fig:Polar_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}$ for $\ensuremath{A_{\mathrm{FB}}}^{\ell}$,
\protect\subref{fig:Polar_cos_lepton_tW_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_W \cos\theta_{\ell}^*$ for $\ensuremath{A_{\mathrm{FB}}}^{tW}$,
\protect\subref{fig:Polar_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^*$ with two bins for $\ensuremath{A_{\mathrm{FB}}}$, and
\protect\subref{fig:Polar_cos_lepton_bjet_tchan_1tag_cuts_noFiducial_2jets_Leptons_4} $\cos\theta_{\ell}^*$ with four bins for $\ensuremath{A_{\mathrm{EC}}}$.
The observed distributions are compared to the predicted signal and background distributions,
normalised to the results of the maximum-likelihood fit. The template $t$-channel distributions
are taken from the baseline \textsc{Powheg-Box} sample. The labels $tq$ and $t\bar{b}$ refer to
the $t$-channel and $s$-channel single-top-quark processes, respectively, and $VV$ to diboson
production. The uncertainty bands include the statistical post-fit uncertainty, the uncertainty
due to the limited size of the simulation samples and the uncertainty in the normalisation of the
multijet background, added in quadrature. The lower panels show the ratio of data to prediction.}
\label{fig:Distributions_1}
\end{figure}
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_trans_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_trans_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\hfill
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_bjet_n_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_bjet_n_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Stack_cos_lepton_bjet_t_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}
\label{fig:Polar_cos_lepton_bjet_t_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\caption
{Distributions in the signal region of the angular observables used to measure the various asymmetries:
\protect\subref{fig:Polar_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^{N}$ for $\ensuremath{A_{\mathrm{FB}}}^{N}$,
\protect\subref{fig:Polar_cos_lepton_trans_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^{T}$ for $\ensuremath{A_{\mathrm{FB}}}^{T}$,
\protect\subref{fig:Polar_cos_lepton_bjet_n_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^* \cos\phi_{N}^*$ for $\ensuremath{A_{\mathrm{FB}}}^{N,\phi}$, and
\protect\subref{fig:Polar_cos_lepton_bjet_t_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^* \cos\phi_{T}^*$ for $\ensuremath{A_{\mathrm{FB}}}^{T,\phi}$.
The observed distributions are compared to the predicted signal and background distributions,
normalised to the results of the maximum-likelihood fit. The template $t$-channel distributions
are taken from the baseline \textsc{Powheg-Box} sample. The labels $tq$ and $t\bar{b}$ refer to
the $t$-channel and $s$-channel single-top-quark processes, respectively, and $VV$ to diboson
production. The uncertainty bands include the statistical post-fit uncertainty, the uncertainty due
to the limited size of the simulation samples and the uncertainty in the normalisation of the multijet
background, added in quadrature. The lower panels show the ratio of data to prediction.}
\label{fig:Distributions_2}
\end{figure}
\section{Introduction}
\label{sec:Introduction}
At hadron colliders, top quarks are predominantly produced in pairs (\ttbar)
via the flavour-conserving strong interaction, but single top-quark production
can occur via charged-current electroweak processes involving a $Wtb$ vertex. At
leading order in QCD perturbation theory, three sub-processes contribute to single
top-quark production: an exchange of a virtual $W$ boson either in the $t$-channel
or in the $s$-channel, or the associated production of a top quark with an
on-shell $W$ boson ($Wt$). The $t$-channel and $s$-channel processes do not
interfere at next-to-leading order in QCD and are thus well defined
to that precision~\cite{Theory_sgtop}.
In proton--proton ($pp$) collisions, the $t$-channel exchange, depicted in
Figure~\ref{fig:tchannel}, is the dominant production process of single top quarks.
The exchange of a space-like $W$ boson due to the interaction of a light quark with
a $b$-quark produces a top quark and a forward light-quark (called the spectator quark)
in the final state. Furthermore, as a consequence of the vector minus axial-vector (V--A)
form of the $Wtb$ vertex in the Standard Model, the produced top quarks are
highly polarised, in particular along the direction of the spectator-quark
momentum~\cite{Mahlon, Schwienhorst}.
Within the Standard Model the top quark decays through the electroweak interaction
into an on-shell $W$ boson and a $b$-quark, with a lifetime much shorter than
the time scale necessary to depolarise the spin. The information on the top-quark
spin can thus be obtained from its decay products. The produced real $W$ boson also
possesses a polarisation (or helicity state), which can be extracted from angular
distributions of its decay products through the measurement of spin-dependent
observables~\cite{Theory_Wspin}.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.30\textwidth]{singletop_t-ch_2to3}
\caption{Leading-order Feynman diagram for $t$-channel production of
single top quarks in $pp$ collisions. In the depicted four-flavour scheme (2$\rightarrow$3 process)
the initial $b$-quark arises from a gluon splitting into a $b\overline{b}$ pair.}
\label{fig:tchannel}
\end{figure}
Measuring the top-quark polarisation and the $W$-boson spin observables in $t$-channel
single top-quark production provides a powerful probe for studying the $Wtb$ vertex
in both top-quark production and decay. New physics effects resulting in corrections
to the $Wtb$ vertex would affect the top-quark and $W$-boson polarisations. In the
effective operator formalism the most general $Wtb$ Lagrangian can be written as~\cite{Theory_Wpolarization}:
\begin{equation}
{\cal L}_{Wtb} = - \frac{g}{\sqrt{2}}\,{\overline{b}}\gamma^\mu \left (\ensuremath{V_{\mathrm{L}}} P_{\text{L}} + \ensuremath{V_{\mathrm{R}}} P_{\text{R}} \right )tW^-_\mu
- \frac{g}{\sqrt{2}}\,{\overline{b}}\,\frac{i\sigma^{\mu\nu}q_{\nu}}{\mW} \left (\ensuremath{g_{\mathrm{L}}} P_{\text{L}} + \ensuremath{g_{\mathrm{R}}} P_{\text{R}} \right)tW^-_\mu + \text{h.c.}
\label{eq:lagrangian}
\end{equation}
\noindent
In this expression $g$ is the weak coupling constant, $\mW$ and
$q_{\nu}$ are the mass and the four-momentum of the $W$ boson,
respectively, $P_{\text{{L,R}}} \equiv (1\mp \gamma^5)/2$ are the left- and
right-handed projection operators, and $\sigma^{\mu\nu} =[\gamma^{\mu}, \gamma^{\nu}]/2$.
The constants $V_{\text{L,R}}$ and $\Glr$ are the left- and right-handed
vector and tensor couplings, respectively. In the Standard Model at
tree level the coupling $\ensuremath{V_{\mathrm{L}}}$ is the $V_{tb}$ element of the
quark-mixing Cabibbo--Kobayashi--Maskawa (CKM) matrix that is close
to one, while the anomalous couplings $\ensuremath{V_{\mathrm{R}}}$ and $\Glr$ are all
zero. Deviations from these values would provide hints of physics
beyond the Standard Model, and complex values would imply that the
top-quark decay has a CP-violating component~\cite{Theory_Wpolarization}.
The imaginary part of $\ensuremath{g_{\mathrm{R}}}$ (Im\,\ensuremath{g_{\mathrm{R}}}) can be probed with the best precision
in the $t$-channel production of single top quarks through the measurement
of polarisation observables~\cite{Theory_Wpolarization}. Limits on Im\,\ensuremath{g_{\mathrm{R}}}\
have been set at the LHC by the ATLAS Collaboration at a centre-of-mass energy
of 7~TeV from the analysis of the double-differential angular decay rates
of the produced $t$-channel single-top-quark events~\cite{ATLAS_Wtb_limits}.
The top-quark polarisation and the $W$-boson spin observables can
be extracted in an alternative way from the measurement of asymmetries
in various angular distributions of the top-quark decay products~\cite{Theory_Wspin,Theory_Wpolarization}.
Firstly, this article reports a determination of the top-quark polarisation
as well as the $W$-boson spin observables extracted from the measured
angular asymmetries. Such measurements serve as a consistency check
with the Standard Model predictions. Secondly, limits on Im\,\ensuremath{g_{\mathrm{R}}}\ are
presented from the measurement of the so-called normal forward-backward
asymmetry, which is predicted to have the highest sensitivity to
Im\,\ensuremath{g_{\mathrm{R}}}~\cite{Theory_Wpolarization}, and the asymmetry related to the
top-quark polarisation. Here Standard Model values are assumed for all
other couplings.
The measurements reported in this article use 20.2~fb$^{-1}$ of data collected
at a centre-of-mass energy of 8~TeV with the ATLAS detector at the LHC.
Stringent selection requirements are applied in order to separate signal from
background. The $W$ boson from the top-quark decay is identified through
its decay modes leading to a final state with an electron or a muon, and
missing transverse momentum for the neutrino. The measurement at parton
level of the asymmetries is performed by unfolding the
observed angular distributions from detector and physics effects
after subtracting the background contributions. For all reported results
the electron and muon channels are merged, and the analysis is carried out
independently of the lepton charge, in order to measure the polarisation
observables associated with the combined production and decay of top
quarks and top antiquarks.
\section{Polarisation observables and asymmetries}
\label{sec:Observables}
The top-quark polarisation is determined from angular distributions of the decay products
reconstructed in the top-quark rest frame, while the $W$-boson spin observables are
determined from angular distributions of the charged lepton reconstructed in the $W$-boson
rest frame.
In the top-quark rest frame, the angular distribution of any decay product $X$ of the top
quark is given by
\begin{equation}
\frac{1}{\Gamma}\frac{\,\mathrm{d}\Gamma}{\,\mathrm{d}(\cos\theta_X)} = \frac{1}{2} \left(1 + \alpha_X P\cos\theta_X \right)\, ,
\label{eq:TopDecay}
\end{equation}
\noindent
where $\theta_X$ is the angle between the top-quark spin axis and the direction of motion of
the chosen decay product in the top-quark rest frame, $\Gamma$ is the total decay width of the
top quark, $\alpha_X$ is the spin analysing power associated with $X$, and $P$ is the top-quark
degree of polarisation. The charged lepton is the most sensitive spin analyser; at
next-to-leading-order (NLO) precision in QCD its spin analysing power is
$\alpha_{\ell^{\pm}}$\,$=$\,$\pm0.998$~\cite{Brandenburg}. In the $t$-channel, single top quarks
are produced with a large degree of polarisation in the direction of motion of the spectator
quark~\cite{Mahlon_3,Schwienhorst}. This direction is used to define the top-quark spin axis
in this measurement. The corresponding degrees of polarisation calculated at NLO in QCD are
$0.91$ and $-0.86$ for top-quark and top-antiquark production, respectively~\cite{Schwienhorst}.
In the framework of a general formalism developed in Ref.~\cite{Theory_Wspin}, the spin-density
matrix elements for the $W$-boson helicity components 0, $\pm$1, resulting from the
decay of polarised top-quarks, can be parameterised in terms of expectation values of six
independent spin observables: $\langle S_{1,2,3}\rangle$, $\langle T_{0}\rangle$ and
$\langle A_{1,2}\rangle$. With ($\theta_{\ell}^*,\phi_{\ell}^*$) denoting the polar and azimuthal
angles of the charged-lepton momentum in the $W$-boson rest frame, the fully differential decay
width of a $W$ boson can be written as
\begin{eqnarray}
\frac{1}{\Gamma}\frac{\,\mathrm{d}\Gamma}{\,\mathrm{d}(\cos\theta_{\ell}^*)\mathrm{d}\phi_{\ell}^*}
&=& \frac{3}{8 \pi} \Bigg\{ \frac{2}{3} + \frac{1}{\sqrt{6}} \langle T_{0}\rangle \left(3 \cos^2\theta_{\ell}^* - 1 \right) + \langle S_{3}\rangle \cos\theta_{\ell}^* \nonumber \\
&+& \langle S_{1}\rangle \cos\phi_{\ell}^* \sin\theta_{\ell}^*\ + \langle S_{2}\rangle \sin\phi_{\ell}^* \sin\theta_{\ell}^* \nonumber \\
&-& \langle A_{1}\rangle \cos\phi_{\ell}^* \sin2\theta_{\ell}^*\ - \langle A_{2}\rangle \sin\phi_{\ell}^* \sin2\theta_{\ell}^* \Bigg\}\, .
\label{eq:WDecay}
\end{eqnarray}
\noindent
In this formalism the $W$-boson spin axis is taken along the direction of the $W$-boson
momentum in the top-quark rest frame, or equivalently along the direction opposite to the
$b$-quark momentum in the $W$-boson rest frame. The coordinate system used and the various
angles defined for the charged lepton in the $W$-boson rest frame are depicted in \Figure{fig:CoordinateSystem}.
\begin{figure}[!h!tpb]
\centering
\includegraphics[width=0.70\textwidth]{coord_system_pol}
\caption{Coordinate system and angles used to define the $W$-boson spin observables
and their related angular asymmetries in the decay of polarised top quarks. The $W$-boson
momentum ${\vec q}$ in the top-quark rest frame defines the $\hat{z}$-axis; the
top-quark spin direction ${\hat{s}}_t$, taken along the spectator-quark momentum
in the top-quark rest frame, defines the $\hat{x}$--$\hat{z}$ plane. The polar
and azimuthal angles of the charged-lepton momentum ${\vec p}_{\ell}$ in the $W$-boson
rest frame are labelled $\theta_{\ell}^*$ and $\phi_{\ell}^*$, respectively. The normal
and transverse axes are defined relatively to ${\vec q}$ and ${\hat{s}}_t$ according
to ${\vec N}={\hat{s}}_t\times {\vec q}$ and ${\vec T}={\vec q} \times {\vec N}$; they
are along the $-\hat{y}$ and $\hat{x}$ axes of the coordinate system, respectively. The
azimuthal angles $\phi_{N}^*$ and $\phi_{T}^*$ of the charged lepton in the $W$-boson
rest frame are defined relatively to the ${\vec N}$ and ${\vec T}$ axes, respectively
($\phi_{T}^*\equiv\phi_{\ell}^*$), while $\theta_{\ell}^N$ and $\theta_{\ell}^T$ (not
shown in the figure) are the relative angles between ${\vec p}_{\ell}$ and the ${\vec N}$
and ${\vec T}$ axes, respectively.}
\label{fig:CoordinateSystem}
\end{figure}
The angular distribution expressed in Equation~\eqref{eq:WDecay} implies an integration over all
the possible directions of the top-quark spin relative to the $W$-boson spin axis.
The top-quark polarisation is propagated to the spin observables $\langle S_{1,2}\rangle$ and
$\langle A_{1,2}\rangle$, which depend in a proportional way on the value of $P$. The spin
observables $\langle S_{3}\rangle$ and $\langle T_{0}\rangle$ do not depend on $P$, and are
related to the $W$-boson helicity fractions \ensuremath{F_{\mathrm{R}}}, \ensuremath{F_{\mathrm{L}}}\ and \ensuremath{F_{0}}~\cite{Theory_Wspin}.
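Explicitly, integrating Equation~\eqref{eq:WDecay} over $\phi_{\ell}^*$ and comparing the result with the familiar helicity-fraction form of the polar-angle distribution,
\[\frac{1}{\Gamma}\frac{\mathrm{d}\Gamma}{\mathrm{d}(\cos\theta_{\ell}^*)}
= \frac{3}{4}\sin^2\theta_{\ell}^*\,\ensuremath{F_{0}}
+ \frac{3}{8}\left(1+\cos\theta_{\ell}^*\right)^2\ensuremath{F_{\mathrm{R}}}
+ \frac{3}{8}\left(1-\cos\theta_{\ell}^*\right)^2\ensuremath{F_{\mathrm{L}}}\, ,\]
gives the relations (a short consistency check in our own rewriting; the general expressions are given in Ref.~\cite{Theory_Wspin})
\[\langle S_{3}\rangle = \ensuremath{F_{\mathrm{R}}}-\ensuremath{F_{\mathrm{L}}}\, , \qquad
\langle T_{0}\rangle = \frac{1}{\sqrt{6}}\left(1-3\ensuremath{F_{0}}\right) .\]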
From the values of the helicity fractions predicted by the Standard Model at next-to-next-to-leading
order (NNLO) in QCD assuming a top-quark mass of 172.5~GeV and a $b$-quark mass of 4.8~GeV~\cite{Theory_helicity},
one obtains $\langle S_{3}\rangle$\,$=-0.31$ and $\langle T_{0}\rangle$\,$=-0.43$. The uncertainties
in these predictions due to the theoretical uncertainties in the helicity fractions are lower than
0.01 for both $\langle S_{3}\rangle$ and $\langle T_{0}\rangle$. Combining the predicted degrees of
polarisation $P_{t}=0.91$ and $P_{\bar t}=-0.86$ with the $t$-channel single-top cross-sections
$\sigma_{t}=54.9$~pb and $\sigma_{\bar t}=29.7$~pb calculated at NLO in QCD for top-quark and
top-antiquark production~\cite{Campbell_tchan}, the Standard Model predictions for $\langle S_{1,2}\rangle$
and $\langle A_{1,2}\rangle$ are: $\langle S_{1}\rangle$\,$=0.46$, $\langle A_{1}\rangle$\,$=0.23$
and $\langle S_{2}\rangle$\,$=$\,$\langle A_{2}\rangle$\,$=0$. These values are calculated at
leading order (LO) in QCD from the expressions of the spin-density matrix elements given in Refs.~\cite{Theory_Wpolarization,Theory_Wspin}.
The uncertainties in these predictions resulting from the uncertainties in the top-quark, $b$-quark
and $W$-boson masses, and from higher-order effects~\cite{Theory_TopDecay}, are all smaller than 0.01.
Measured values not equal to zero for the $\langle S_{2}\rangle$
and $\langle A_{2}\rangle$ spin observables would signal the presence of an imaginary coupling
in the $Wtb$ vertex, since $\langle S_{2}\rangle$ and $\langle A_{2}\rangle$ are only sensitive
to Im\,$\ensuremath{g_{\mathrm{R}}}$~\cite{Theory_Wspin}.\footnote{Including one-loop QCD and electroweak corrections the
prediction for \ensuremath{g_{\mathrm{R}}}\ in the Standard Model is $(-7.17-1.23\mathrm{i})\times 10^{-3}$~\cite{ImGrPrediction},
leading to values of the order of $10^{-3}$ for the $\langle S_{2}\rangle$ and $\langle A_{2}\rangle$
spin observables.} However, $\langle S_{2}\rangle$ is twice as sensitive as $\langle A_{2}\rangle$ to Im\,$\ensuremath{g_{\mathrm{R}}}$,
making this observable more suitable for determining this coupling. The other four $W$-boson spin
observables are mainly sensitive to Re\,$\ensuremath{g_{\mathrm{R}}}$, with a poor sensitivity to Im\,$\ensuremath{g_{\mathrm{R}}}$~\cite{Theory_Wpolarization,Theory_Wspin}.
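For illustration, the values of $\langle S_{3}\rangle$ and $\langle T_{0}\rangle$ quoted
above can be reproduced from the $W$-boson helicity fractions through the relations
implied by \Table{tab:Asymmetries}. In the short Python sketch below the
helicity-fraction values are approximate NNLO numbers and are given as an example only.
\begin{verbatim}
# Illustrative computation of <S3> and <T0> from the W-boson helicity fractions,
# using <S3> = F_R - F_L and <T0> = (1 - 3*F_0)/sqrt(6). The helicity fractions
# below are approximate NNLO values.
import math

F0, FL, FR = 0.687, 0.311, 0.0017

S3 = FR - FL                              # about -0.31
T0 = (1.0 - 3.0 * F0) / math.sqrt(6.0)    # about -0.43
print(f"<S3> = {S3:.2f}, <T0> = {T0:.2f}")
\end{verbatim}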
\newcommand{\AsymmetriesFootnote}{The asymmetries used in this article and in Ref.~\cite{Theory_Wpolarization}
are related to the ones defined in Refs.~\cite{Theory_Wspin,Theory_Zspin} through the equations
$\ensuremath{A_{\mathrm{FB}}}^{T}=\ensuremath{A_{\mathrm{FB}}}^{x}$, $\ensuremath{A_{\mathrm{FB}}}^{N}=-\ensuremath{A_{\mathrm{FB}}}^{y}$, $\ensuremath{A_{\mathrm{FB}}}^{T,\phi}=\ensuremath{A_{\mathrm{FB}}}^{1}$, $\ensuremath{A_{\mathrm{FB}}}^{N,\phi}=-\ensuremath{A_{\mathrm{FB}}}^{2}$, $\ensuremath{A_{\mathrm{FB}}}=\ensuremath{A_{\mathrm{FB}}}^{z}$\,.}
The top-quark polarisation and the $W$-boson spin observables can be extracted from asymmetries
derived by integrating the angular distributions expressed in Equations~\eqref{eq:TopDecay} and
\eqref{eq:WDecay}. These asymmetries are based on single or combined angular observables. They
are listed in \Table{tab:Asymmetries}, together with their associated angular observables and
their relation to the polarisation observables.\footnote{\AsymmetriesFootnote} The asymmetry values
predicted by the Standard Model are also reported in the table.
Most of the polarisation observables are based on a forward-backward asymmetry, which is
generically defined as a function of a given angular observable $\cos\theta$ according to
\begin{equation}
\ensuremath{A_{\mathrm{FB}}} = \frac{N(\cos\theta>0)-N(\cos\theta<0)}{N(\cos\theta>0)+N(\cos\theta<0)}\, ,
\label{eq:AFB}
\end{equation}
\noindent
where $N$ is the number of events. One of the $W$-boson spin observables is determined
from an asymmetry called edge-central and defined as follows
\begin{equation}
\ensuremath{A_{\mathrm{EC}}} = \frac{N(|\cos\theta|>\frac{1}{2})-N(|\cos\theta|<\frac{1}{2})}{N(|\cos\theta|>\frac{1}{2})+N(|\cos\theta|<\frac{1}{2})}\, .
\label{eq:AEC}
\end{equation}
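As a simple illustration of Equations~\eqref{eq:AFB} and \eqref{eq:AEC}, the Python
sketch below evaluates both asymmetries from an array of $\cos\theta$ values by direct
counting; the isotropic toy sample is used only to show that both asymmetries then
vanish within the statistical precision.
\begin{verbatim}
# Counting-based evaluation of the forward-backward and edge-central asymmetries
# from an array of cos(theta) values (illustration only).
import numpy as np

def a_fb(cos_theta):
    n_fwd = np.count_nonzero(cos_theta > 0.0)
    n_bwd = np.count_nonzero(cos_theta < 0.0)
    return (n_fwd - n_bwd) / (n_fwd + n_bwd)

def a_ec(cos_theta):
    n_edge = np.count_nonzero(np.abs(cos_theta) > 0.5)
    n_cent = np.count_nonzero(np.abs(cos_theta) < 0.5)
    return (n_edge - n_cent) / (n_edge + n_cent)

rng = np.random.default_rng(1)
cos_theta = rng.uniform(-1.0, 1.0, size=100_000)   # isotropic toy sample
print(a_fb(cos_theta), a_ec(cos_theta))            # both ~ 0
\end{verbatim}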
\begin{table}[h]
\begin{center}
\begin{tabular}{lccc}
\toprule
Asymmetry & Angular observable & Polarisation observable & SM prediction \\
\midrule
$\ensuremath{A_{\mathrm{FB}}}^{\ell}$ & $\cos\theta_{\ell}$ & $\frac{1}{2}\alpha_{\ell} P$ & 0.45 \\[0.14cm]
$\ensuremath{A_{\mathrm{FB}}}^{tW}$ & $\cos\theta_W \cos\theta_{\ell}^*$ & $\frac{3}{8} P \left( \ensuremath{F_{\mathrm{R}}} + \ensuremath{F_{\mathrm{L}}} \right)$ & 0.10 \\[0.18cm]
$\ensuremath{A_{\mathrm{FB}}}$ & $\cos\theta_{\ell}^*$ & $\frac{3}{4} \langle S_{3}\rangle = \frac{3}{4} \left( \ensuremath{F_{\mathrm{R}}} - \ensuremath{F_{\mathrm{L}}} \right)$ & $-$0.23 \\[0.10cm]
$\ensuremath{A_{\mathrm{EC}}}$ & $\cos\theta_{\ell}^*$ & $\frac{3}{8} \sqrt{\frac{3}{2}} \langle T_{0}\rangle = \frac{3}{16}\left(1 - 3\ensuremath{F_{0}} \right)$ & $-$0.20 \\[0.14cm]
$\ensuremath{A_{\mathrm{FB}}}^{T}$ & $\cos\theta_{\ell}^T$ & $\frac{3}{4} \langle S_{1}\rangle$ & 0.34 \\[0.14cm]
$\ensuremath{A_{\mathrm{FB}}}^{N}$ & $\cos\theta_{\ell}^N$ & $-\frac{3}{4} \langle S_{2}\rangle$ & 0 \\[0.14cm]
$\ensuremath{A_{\mathrm{FB}}}^{T,\phi}$ & $\cos\theta_{\ell}^* \cos\phi_{T}^*$ & $-\frac{2}{\pi} \langle A_{1}\rangle$ & $-$0.14 \\[0.14cm]
$\ensuremath{A_{\mathrm{FB}}}^{N,\phi}$ & $\cos\theta_{\ell}^* \cos\phi_{N}^*$ & $\frac{2}{\pi} \langle A_{2}\rangle$ & 0 \\[0.14cm]
\bottomrule
\end{tabular}
\caption{Asymmetries with their associated angular observables and their relation to the top-quark
polarisation and $W$-boson spin observables. The values predicted by the Standard Model are
also given. They are calculated using the predictions at NLO in QCD for $P$ and $\alpha_{\ell}$,
the predictions at NNLO for the helicity fractions, and the predictions at LO for
$\langle S_{1,2}\rangle$ and $\langle A_{1,2}\rangle$. The uncertainties in these values
are all lower than 0.01. They are estimated from the uncertainties in the top-quark, $b$-quark
and $W$-boson masses, added in quadrature, including the uncertainty in $\alpha_\mathrm{s}$ and an
estimate of the higher-order effects for the asymmetries related to the $W$-boson spin observables.}
\label{tab:Asymmetries}
\end{center}
\end{table}
The product $\alpha_{\ell} P$ is extracted from the forward-backward asymmetry $\ensuremath{A_{\mathrm{FB}}}^{\ell}$
of the $\cos\theta_{\ell}$ angular distribution, where $\theta_{\ell}$ is the angle between
the lepton momentum in the top-quark rest frame and the top-quark spin axis. The measurement
of $P$ can also be performed from the forward-backward asymmetry $\ensuremath{A_{\mathrm{FB}}}^{tW}$ defined with
respect to the combined angular observable $\cos\theta_W \cos\theta_{\ell}^*$~\cite{Theory_TopSpin_1},
where $\theta_W$ is the angle between the $W$-boson momentum in the top-quark rest frame
and the top-quark spin axis. This asymmetry is proportional to the product of $P$ and the sum
of two $W$-boson helicity fractions, as reported in \Table{tab:Asymmetries}. The $W$-boson
spin observables $\langle S_{3}\rangle$ and $\langle T_{0}\rangle$ are derived from the
forward-backward asymmetry $\ensuremath{A_{\mathrm{FB}}}$ and from the edge-central asymmetry $\ensuremath{A_{\mathrm{EC}}}$ of the
$\cos\theta_{\ell}^*$ angular distribution, respectively. Using the definition~\cite{Theory_Wpolarization}
of the normal axis ${\vec N}={\vec s}_t \times {\vec q}$ and transverse axis ${\vec T}={\vec q} \times {\vec N}$,
as illustrated in \Figure{fig:CoordinateSystem}, $\langle S_{1}\rangle$ and $\langle S_{2}\rangle$
are determined from the forward-backward asymmetries $\ensuremath{A_{\mathrm{FB}}}^{T}$ and $\ensuremath{A_{\mathrm{FB}}}^{N}$ in the angular
observables $\cos\theta_{\ell}^T$ and $\cos\theta_{\ell}^N$, respectively. The $\langle A_{1}\rangle$
and $\langle A_{2}\rangle$ spin observables are determined from the forward-backward
asymmetries $\ensuremath{A_{\mathrm{FB}}}^{T,\phi}$ and $\ensuremath{A_{\mathrm{FB}}}^{N,\phi}$ based on the combination of $\cos\theta_{\ell}^*$
with the cosine of the azimuthal angles $\phi_{T}^*$ and $\phi_{N}^*$ defined relatively
to ${\vec T}$ and ${\vec N}$, respectively.
Limits on Im\,\ensuremath{g_{\mathrm{R}}}\ can be extracted from the measurement of the $\ensuremath{A_{\mathrm{FB}}}^{N}$ asymmetry, which
has the highest sensitivity to this coupling. For small Im\,\ensuremath{g_{\mathrm{R}}}\ values, taking $\ensuremath{V_{\mathrm{L}}}=1$ and
$\ensuremath{V_{\mathrm{R}}}=\ensuremath{g_{\mathrm{L}}}=0$, a linear dependence on Im\,\ensuremath{g_{\mathrm{R}}}\ is obtained for this asymmetry:
$\ensuremath{A_{\mathrm{FB}}}^{N}=0.64\,P\,\mathrm{Im}\,\ensuremath{g_{\mathrm{R}}}$~\cite{Theory_Wpolarization}. In this relation the
weak dependence of $P$ on Im\,\ensuremath{g_{\mathrm{R}}}, which is of quadratic form, is not included.
As $\ensuremath{A_{\mathrm{FB}}}^{N}$ depends on $P$, the measured value of the $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ asymmetry is required
to constrain $P$ for the limit computation. The quadratic variation of $P$ and $\alpha_{\ell}$
as a function of Im\,\ensuremath{g_{\mathrm{R}}}~\cite{Theory_Wpolarization,Theory_TopSpin_2} is taken into account
when setting the limits through the procedure explained in \Section{sec:Results}. The $\ensuremath{A_{\mathrm{FB}}}^{\ell}$
asymmetry is chosen to constrain $P$ because it is measured independently of Im\,\ensuremath{g_{\mathrm{R}}}; this
is discussed in \Section{sec:Unfolding}.
\section{Results}
\label{sec:Results}
The values of the asymmetries related to the top-quark polarisation and to the
$W$-boson spin observables, measured using the Standard Model $Wtb$ couplings
for the signal unfolding corrections and for the top-quark background modelling,
are
\vspace{-0.5cm}
\begin{align*}
\nonumber
\ensuremath{A_{\mathrm{FB}}}^{\ell} &= 0.49 \pm 0.03\stat \pm 0.05\syst = 0.49 \pm 0.06\, ,\\
\ensuremath{A_{\mathrm{FB}}}^{tW} &= 0.10 \pm 0.03\stat \pm 0.05\syst = 0.10 \pm 0.06\, ,\\
\ensuremath{A_{\mathrm{FB}}} &= -0.26 \pm 0.02\stat \pm 0.07\syst = -0.26 \pm 0.08\, ,\\
\ensuremath{A_{\mathrm{EC}}} &= -0.25 \pm 0.03\stat \pm 0.05\syst = -0.25 \pm 0.06\, ,\\
\ensuremath{A_{\mathrm{FB}}}^{T} &= 0.39 \pm 0.03\stat \pm 0.09\syst = 0.39 \pm 0.09\, ,\\
\ensuremath{A_{\mathrm{FB}}}^{N,\phi} &= -0.03 \pm 0.03\stat \pm 0.05\syst = -0.03 \pm 0.06\, ,\\
\ensuremath{A_{\mathrm{FB}}}^{T,\phi} &= -0.17 \pm 0.05\stat ^{+0.11}_{-0.10}\syst = -0.17 ^{+0.12}_{-0.11}\, .
\end{align*}
The values for the top-quark polarisation combined with the charged-lepton spin analysing power
and with the sum of the $W$-boson helicity fractions, derived from the measured $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ and
$\ensuremath{A_{\mathrm{FB}}}^{tW}$ asymmetries using the relations given in \Table{tab:Asymmetries}, are
\vspace{-0.5cm}
\begin{align*}
\nonumber
\alpha_{\ell} P &= 0.97 \pm 0.05\stat \pm 0.11\syst = 0.97 \pm 0.12\, ,\\
P (\ensuremath{F_{\mathrm{R}}}+\ensuremath{F_{\mathrm{L}}}) &= 0.25 \pm 0.08\stat \pm 0.14\syst = 0.25 \pm 0.16\, .
\end{align*}
The values of the $W$-boson spin observables derived from the measured $\ensuremath{A_{\mathrm{FB}}}$, $\ensuremath{A_{\mathrm{EC}}}$, $\ensuremath{A_{\mathrm{FB}}}^{T}$,
$\ensuremath{A_{\mathrm{FB}}}^{N,\phi}$ and $\ensuremath{A_{\mathrm{FB}}}^{T,\phi}$ asymmetries through the relations given in \Table{tab:Asymmetries}
are
\vspace{-0.5cm}
\begin{align*}
\nonumber
\langle S_3\rangle &= -0.35 \pm 0.03\stat \pm 0.10\syst = -0.35 \pm 0.10\, ,\\
\langle T_0\rangle &= -0.55 \pm 0.06\stat \pm 0.12\syst = -0.55 \pm 0.13\, ,\\
\langle S_1\rangle &= 0.52 \pm 0.04\stat \pm 0.12\syst = 0.52 \pm 0.12\, ,\\
\langle A_2\rangle &= -0.05 \pm 0.05\stat \pm 0.09\syst = -0.05 \pm 0.10\, ,\\
\langle A_1\rangle &= 0.27 \pm 0.07\stat ^{+0.16}_{-0.17}\syst = 0.27 ^{+0.17}_{-0.19}\, .
\end{align*}
The results for the $\ensuremath{A_{\mathrm{FB}}}^{N}$ asymmetry, which has the highest sensitivity to the anomalous
$Wtb$ coupling Im\,\ensuremath{g_{\mathrm{R}}}, and for its associated $W$-boson spin observable are
\vspace{-0.5cm}
\begin{align*}
\nonumber
\ensuremath{A_{\mathrm{FB}}}^{N} &= -0.04 \pm 0.02\stat \pm 0.03\syst = -0.04 \pm 0.04\, ,\\
\langle S_2\rangle &= 0.06 \pm 0.03\stat \pm 0.04\syst = 0.06 \pm 0.05\, .
\end{align*}
\noindent
These observables are measured using the signal corrections interpolated with respect to
Im\,\ensuremath{g_{\mathrm{R}}}\ as explained in \Section{sec:Unfolding}, and using the Standard Model couplings
for the top-quark background modelling.
\Figure{fig:Results_asym} shows the measured and predicted values of all asymmetries,
while \Figure{fig:Results_spin} compares the derived values for the six $W$-boson spin
observables. Compatibility between the measurements and Standard Model predictions is
observed.
\begin{figure}[t]
\centering
\includegraphics[width=0.50\textwidth]{PolarizationSummary_asym_obs}
\caption
{Summary of the measured asymmetries and comparison with the Standard Model predictions.}
\label{fig:Results_asym}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.50\textwidth]{PolarizationSummary_spin_obs}
\caption
{Summary of the measured values of the $W$-boson spin observables and comparison with the Standard
Model predictions.}
\label{fig:Results_spin}
\end{figure}
The overall compatibility of the measurements with the Standard Model predictions is evaluated
through the construction of a $\chi^2$ test statistic taking into account all measured quantities
with their correlations. The theoretical uncertainties, which are negligible compared to the
measurement uncertainties, are not taken into account in the $\chi^2$ calculation. The overall
covariance matrix is computed from the sum of the covariance matrices associated with the various
sources of statistical and systematic uncertainty. To calculate the covariance matrices associated
with the detector-related and $W$+jets flavour composition uncertainties, the positive and
negative uncertainties are symmetrised by taking the larger value. The overall $p$-value for the
eight asymmetries is found to be 0.94, and it is 0.83 for the six $W$-boson spin observables.
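A schematic version of this compatibility test is sketched below; the measured values,
predictions and covariance matrix refer to a reduced set of three observables and are
placeholders, whereas the actual calculation uses all measured quantities with their
correlations.
\begin{verbatim}
# Schematic chi^2 compatibility test (placeholder inputs for three observables).
import numpy as np
from scipy import stats

measured  = np.array([0.49, 0.10, -0.26])
predicted = np.array([0.45, 0.10, -0.23])
cov = np.diag([0.06, 0.06, 0.08]) ** 2        # placeholder, uncorrelated

d = measured - predicted
chi2 = d @ np.linalg.inv(cov) @ d
p_value = stats.chi2.sf(chi2, df=len(d))
print(chi2, p_value)
\end{verbatim}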
Limits on the anomalous coupling Im\,\ensuremath{g_{\mathrm{R}}}\ are extracted from the $\ensuremath{A_{\mathrm{FB}}}^{N}$ and $\ensuremath{A_{\mathrm{FB}}}^{\ell}$
asymmetries, which, as discussed in \Section{sec:Unfolding}, are measured independently of any
assumption about Im\,\ensuremath{g_{\mathrm{R}}}\ in the unfolding procedure, but assuming the Standard Model couplings
for the subtracted top-quark backgrounds. However, for the main \ttbar\ background a negligible
dependence on Im\,\ensuremath{g_{\mathrm{R}}}\ is expected.
The limit extraction is based on the \textsc{TopFit} code~\cite{Theory_Wpolarization,Comb_Wtb_limits}.
By taking into account the analytic expressions and parameterisations introduced in
Refs.~\cite{Theory_Wpolarization,Theory_Wspin,Theory_TopSpin_2} for the $Wtb$ coupling dependence
of $\langle S_2\rangle$, $\alpha_{\ell}$ and $P$, it is possible to determine the allowed region
for Im\,\ensuremath{g_{\mathrm{R}}}\ from the measured values of $\ensuremath{A_{\mathrm{FB}}}^{N}$ and $\ensuremath{A_{\mathrm{FB}}}^{\ell}$. The limit setting is based
on the computation of the $\chi^2$ test statistic using the covariance matrix associated with the
$\ensuremath{A_{\mathrm{FB}}}^{N}$ and $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ measurements. An overall correlation coefficient of $-$0.05 is found.
Assuming $\ensuremath{V_{\mathrm{L}}}=1$ and that all anomalous couplings other than Im\,\ensuremath{g_{\mathrm{R}}}\ vanish ($\ensuremath{V_{\mathrm{R}}}=\ensuremath{g_{\mathrm{L}}}=0$ and
Re\,$\ensuremath{g_{\mathrm{R}}}=0$), the limits set at the 95\% confidence level are Im\,\ensuremath{g_{\mathrm{R}}}~$\in$~[$-$0.18, 0.06].
The measured interval of allowed values slightly improves on the limits set at 7~TeV by the ATLAS
Collaboration from the measurement of double-differential angular decay rates~\cite{ATLAS_Wtb_limits}.
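For orientation only, the order of magnitude of these limits can be recovered by
inverting the linear relation $\ensuremath{A_{\mathrm{FB}}}^{N}=0.64\,P\,\mathrm{Im}\,\ensuremath{g_{\mathrm{R}}}$ for a fixed polarisation
and assuming a Gaussian uncertainty, as in the sketch below. This simplified estimate
neglects the constraint from $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ and the quadratic dependence of $P$ on Im\,\ensuremath{g_{\mathrm{R}}},
and therefore only approximates the quoted interval.
\begin{verbatim}
# Simplified, order-of-magnitude estimate of the Im(gR) interval from AFB^N alone,
# assuming a fixed polarisation and a Gaussian uncertainty. The full analysis uses
# a chi^2 with both AFB^N and AFB^l and the TopFit parameterisations.
afbn, sigma = -0.04, 0.04          # measured AFB^N and its total uncertainty
P = 0.90                           # assumed top-quark polarisation

central = afbn / (0.64 * P)
half_width = 1.96 * sigma / (0.64 * P)      # 95% CL, Gaussian approximation
print(central - half_width, central + half_width)   # roughly [-0.21, 0.07]
\end{verbatim}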
\section{Data and simulation samples}
\label{sec:Samples}
The analysis is performed using $pp$ collision data collected in 2012 by the ATLAS detector
at a centre-of-mass energy of $\SI{8}{\TeV}$. The events are required to pass single-electron
or single-muon triggers~\cite{ATLAS_trigger,ATLAS_trigger_muon}, resulting, after detector and
data-quality requirements, in a data sample corresponding to an integrated luminosity of
$\SI{20.2}{\per\fb}$. The electron and muon triggers impose a threshold of $\SI{24}{\GeV}$ on
the transverse momentum (\pT), along with isolation requirements. To recover efficiency for
higher-\pT\ leptons, the isolated lepton triggers are complemented by triggers without isolation
requirements, but with a threshold raised to $\SI{60}{\GeV}$ for electrons and to $\SI{36}{\GeV}$
for muons.
Samples of signal and background events are simulated using various Monte Carlo generators.
The generated events are passed through a simulation of the ATLAS detector~\cite{ATLAS_FullSim}
based on the \textsc{Geant4} framework~\cite{GEANT4}. For some samples a faster simulation
(ATLFAST-II~\cite{ATLAS_FastSim}), making use of a parameterised response of the electromagnetic
and hadronic calorimeters, is performed instead. Minimum-bias events simulated with
the \textsc{Pythia}~(8.1)~\cite{Pythia8} generator are overlaid to model the pile-up effects
from additional $pp$ collisions in the same and nearby bunch crossings. All simulated events are
then processed using the same reconstruction and analysis chain as for data events.
Signal $t$-channel single-top-quark events are generated with the NLO \textsc{Powheg-Box}~(r2556)~\cite{Powheg_1,Powheg_2,Powheg_3}
generator, which uses the four-flavour scheme (Figure~\ref{fig:tchannel}) for the matrix-element calculations~\cite{Frederix:2012}.
Events are generated with the CT10f4~\cite{PDF_Lai} parton distribution functions (PDFs), and the
renormalisation and factorisation scales are set to $\mu_{\text{R}}^2=\mu_{\text{F}}^2=16\left(m^2_b+p^2_{\text{T},b}\right)$,
where $m_{b}$ is the mass of the $b$-quark and $p_{\text{T},b}$ is the transverse momentum of the $b$-quark
from the initial gluon splitting (called the spectator $b$-quark)~\cite{Frederix:2012}. Additional
$t$-channel samples are produced with the LO \textsc{Protos}~(2.2)~\cite{Protos}
generator using the CTEQ6L1 PDFs~\cite{PDF_CTEQ6L1}. \textsc{Protos} events are generated using
the four-flavour scheme, as well, and anomalous couplings are enabled in both the production and the decay
vertices, varying Re\,$\ensuremath{V_{\mathrm{L}}}$ and Im\,$\ensuremath{g_{\mathrm{R}}}$ simultaneously to keep the top-quark width invariant.
The factorisation scale is set to $\mu_{\text{F}}^2=-p^2_W$ for the light quark, where $p_W$ is
the four-momentum of the exchanged $W$ boson, and to $\mu_{\text{F}}^2=m^2_b+p^2_{\text{T},b}$
for the gluon. Eight \textsc{Protos} samples generated with Im\,$\ensuremath{g_{\mathrm{R}}}$ in the range [$-$0.144, 0.144]
and Re\,$\ensuremath{V_{\mathrm{L}}}$ in the range [0.982, 1] are used, including the Standard Model configuration
Im\,$\ensuremath{g_{\mathrm{R}}}=0$ and Re\,$\ensuremath{V_{\mathrm{L}}}=1$. These \textsc{Protos} samples are used to compute the parton-level
unfolding corrections and to check the reliability of the unfolding method, while the
\textsc{Powheg-Box} sample is used to determine the expected event yields and template
distributions.
Samples of \ttbar~\cite{Powheg_ttbar}, $s$-channel single-top-quark and $Wt$~\cite{Powheg_Wt}
background events are produced using the \textsc{Powheg-Box}~(r2819, r3026) generator with the
CT10 PDFs. To generate the \ttbar\ sample, the model parameter $h_{\text{damp}}$, which effectively
regulates the high-\pT\ gluon radiation, is set to the top-quark mass $m_{t}$~\cite{ATLAS_hdamp}.
For the above samples, parton showering, hadronisation and the underlying event are simulated
with \textsc{Pythia}~(6.426)~\cite{Pythia} using parameter values set to the Perugia 2011C
tune~\cite{PerugiaTune}, and the CTEQ6L1 PDFs.
To study the modelling uncertainties of all processes involving top quarks, either alternative
generators or parameter variations in the \textsc{Powheg-Box} and \textsc{Pythia} settings are
used. For the estimation of the uncertainty in the $t$-channel matrix-element calculation, a
sample is produced using the \textsc{MadGraph5}\_a\textsc{MC@NLO}~(2.0)~\cite{aMCatNLO}
generator, interfaced to \textsc{Herwig}~(6.52)~\cite{Herwig_1,Herwig_2} for parton showering
and to \textsc{Jimmy}~(4.31)~\cite{Jimmy} for the underlying-event modelling with the ATLAS AUET2
tuned parameter settings~\cite{AUETTune} and the CT10f4 PDFs. The events are generated using
the four-flavour scheme. For the \ttbar, $s$-channel and $Wt$ processes, alternative samples
are produced using the \textsc{MC@NLO}~(4.03)~\cite{MCatNLO_1,MCatNLO_2,MCatNLO_3,MCatNLO_4}
generator interfaced to \textsc{Herwig}~(6.52) for parton showering and \textsc{Jimmy}~(4.31)
for the underlying-event modelling with the ATLAS AUET2 tune and the CT10 PDFs. To specifically
study the impact of the parton-shower modelling, a $t$-channel sample and a $Wt$ sample both
generated with \textsc{Powheg-Box} and coupled to \textsc{Herwig}~(6.52) and \textsc{Jimmy}~(4.31)
with the AUET2 tune are used. For the \ttbar\ process, samples generated using \textsc{Powheg-Box}
with the CT10 PDFs, interfaced to \textsc{Herwig}~(6.52) with the AUET2 tune or to \textsc{Pythia}~(6.426)
with the AUET2B tune, are used. Effects of varying the amount of radiation are studied by changing
the hard-process and parton-shower scales simultaneously in the \textsc{Powheg-Box} and \textsc{Pythia}~(6.426, 6.427)
simulations. In the single-top-quark samples the factorisation and renormalisation scales are increased
or decreased by a factor of two or one-half, respectively, in combination with the Perugia 2012 radLo
and radHi tunes~\cite{PerugiaTune}. In the \ttbar\ samples, $h_{\text{damp}}$ is set to $m_{t}$
or $2m_{t}$ in combination with the radLo and radHi parameterisations, respectively.
All top-quark processes are simulated assuming a top-quark mass of 172.5~GeV, and the top-quark decay is
assumed to proceed exclusively through \textit{t}$\to$\textit{Wb}. The baseline \textsc{Powheg-Box} samples
are passed through the fully \textsc{Geant4}-based simulation of the ATLAS detector, while the \textsc{Protos} samples
and all samples used in studies of modelling uncertainties are processed through the ATLFAST-II simulation.
Vector-boson production in association with jets is simulated using the multileg LO \textsc{Sherpa}~(1.4.1)~\cite{Sherpa}
generator with its own parameter tune and the CT10 PDFs. \textsc{Sherpa} is used not only to generate the hard
process, but also for the parton shower and the modelling of the underlying event. $W$+jets and $Z$+jets events
with up to four additional partons are generated. The CKKW method~\cite{Hoeche:2009} is used to remove overlaps
between the partonic configurations generated by the matrix element and by the parton showering. Diboson samples
of $WW$, $WZ$ and $ZZ$ events are also produced, using the \textsc{Sherpa}~(1.4.1) generator with the CT10 PDFs.
All the generated \textsc{Sherpa} single-boson and diboson events are passed through the ATLFAST-II simulation
of the detector.
\section{Event reconstruction and selection}
\label{sec:Selections}
The analysis considers only $W$-boson decay modes to an electron or a muon. Events in
which the $W$ boson decays to a $\tau$ lepton are thus included if the $\tau$ lepton
subsequently decays to an electron or a muon.
The signal event candidates are selected by requiring a single isolated electron or muon,
significant missing transverse momentum, and exactly two jets with one of them identified
as likely to contain a $b$-hadron ($b$-tagged jet). The presence of a third jet is
not required in the event selection: the additional jet resulting from the
spectator $b$-quark originating from the gluon splitting, as shown in \Figure{fig:tchannel},
is expected to have a softer \pT\ spectrum and a broader $|\eta|$ distribution than
the $b$-tagged jet produced in the top-quark decay, and is therefore in general not detected.
Electron candidates are reconstructed from isolated energy deposits in the electromagnetic
calorimeter which are associated with inner-detector tracks fulfilling strict quality
requirements~\cite{ATLAS_electrons}. They are required to satisfy $\pT>\SI{25}{\GeV}$ and
$|\eta|<2.47$, excluding the barrel--endcap transition region, corresponding to $1.37<|\eta|<1.52$.
Muon candidates are reconstructed using combined tracking information from the inner detector
and the muon spectrometer~\cite{ATLAS_muons}. They are required to have $\pT>\SI{25}{\GeV}$
and $|\eta|<2.5$. The electron and muon candidates must fulfil additional isolation
requirements, as described in Ref.~\cite{ATLAS_topreco}, in order to reduce contributions
from misidentified jets, non-prompt leptons from the decay of heavy-flavour quarks and
electrons from photon conversions.
\newcommand{\VertexFootnote}{A primary-vertex candidate is defined as a reconstructed
vertex with at least five associated tracks with $\pT>400$~MeV. The primary vertex
associated with the hard-scattering collision is the candidate with the largest sum
of the squared \pT of the associated tracks.}
Jets are reconstructed using the anti-$k_{t}$ algorithm~\cite{Antikt} with a radius
parameter of 0.4, from topological clusters~\cite{ATLAS_clusters}, calibrated with
a local cluster weighting method~\cite{ATLAS_jets_1}. Jets are calibrated using an
energy- and $\eta$-dependent simulation-based scheme, with in situ corrections based
on data. The jet energy is further corrected for the effect of multiple $pp$ interactions.
To reject jets from pile-up events, a so-called jet-vertex-fraction criterion~\cite{ATLAS_jvf}
is applied to the jets with $\pT<\SI{50}{\GeV}$ and $|\eta|<2.4$: at least 50\% of the
scalar sum of the \pT\ of the tracks associated with a
jet is required to be from tracks compatible with the primary vertex.\footnote{\VertexFootnote}
Only events containing two reconstructed jets with $\pT>\SI{30}{\GeV}$ are selected.
In addition, one of them must be $b$-tagged with $|\eta|<2.5$, while the
second jet is required to be untagged and to have $|\eta|<4.5$. The $b$-tagging is
performed using a neural network which combines three different algorithms exploiting
the properties of a $b$-hadron decay in a jet~\cite{ATLAS_btag_1}. The $b$-tagging
algorithm is optimised to improve the rejection of $c$-quark jets, since $W$-boson
production in association with $c$-quarks is a major background for the selected final
state. The requirement applied to the neural-network discriminant corresponds to a
$b$-tagging efficiency of 50\%, and mistagging rates of 3.9\% and 0.07\%
for $c$-quark jets and light-flavour jets, respectively, as predicted in simulated
\ttbar events~\cite{ATLAS_btag_2,ATLAS_btag_3}.
The missing transverse momentum, with magnitude \MET, is reconstructed from the vector
sum of energy deposits in the calorimeter projected onto the transverse plane~\cite{ATLAS_MET}.
All cluster energies are corrected using the local cluster weighting method. Clusters
associated with high-\pT jets and electrons are further calibrated using their respective
energy corrections. Contributions from the \pT of the selected muons are also included
in the calculation.
\newcommand{\MassFootnote}{The transverse mass of the lepton--\MET\ system is defined as
$m_{\mathrm{T}}(\ell, \MET) = \sqrt{2 \pT(\ell) \MET \left(1-\cos \Delta \phi (\ell,\MET) \right)}$,
where $\Delta \phi(\ell,\MET)$ is the difference in azimuthal angle between the lepton
transverse momentum and the missing transverse momentum.}
Events are required to contain at least one good primary-vertex candidate, and no jets failing
to satisfy reconstruction quality criteria. The magnitude of the missing transverse momentum
is required to be larger than $\SI{30}{\GeV}$. In addition, the transverse mass of the
lepton--\MET system must be greater than $\SI{50}{\GeV}$ in order to reduce the multijet
background contribution.\footnote{\MassFootnote} Further reduction of this background is achieved
by imposing an additional requirement on events where the lepton and the leading jet in \pT have
opposite directions in the transverse plane~\cite{ATLAS_tchan}. To reduce the \ttbar\ dilepton
background, events containing an additional lepton, identified with less stringent
criteria (referred to as a loose lepton) and with a \pT\ threshold lowered to $10$~GeV,
are rejected.
The lepton and neutrino four-momenta are used to reconstruct the $W$ boson. Since the neutrino
escapes undetected, the $x$- and $y$-components of the missing transverse momentum are assumed
to correspond to the transverse momentum of the neutrino. The unmeasured longitudinal component
of the neutrino momentum is computed by imposing a $W$-boson mass constraint on the lepton--neutrino
system. If there are two real solutions, the solution giving the smallest magnitude of the
longitudinal neutrino momentum is taken. If there are complex solutions, the magnitude of the
measured missing transverse momentum is rescaled in order to obtain a physical solution~\cite{ATLAS_Wtb_limits}.
The top-quark candidate is reconstructed by combining the four-momenta of the reconstructed $W$
boson and the $b$-tagged jet.
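A minimal sketch of this $W$-boson mass constraint is given below. The $W$-boson mass
value, the function name and the treatment of complex solutions (only the real part is
kept, instead of the \MET\ rescaling used in the analysis) are illustrative assumptions.
\begin{verbatim}
# Longitudinal neutrino-momentum reconstruction from the W-boson mass constraint
# (illustrative sketch; complex solutions are handled by keeping the real part).
import math

M_W = 80.4   # GeV, assumed W-boson mass

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    pt_lep2 = lep_px**2 + lep_py**2
    a = 0.5 * M_W**2 + lep_px * met_x + lep_py * met_y
    disc = a**2 - pt_lep2 * (met_x**2 + met_y**2)
    if disc < 0.0:                                  # complex solutions
        return a * lep_pz / pt_lep2
    root = lep_e * math.sqrt(disc) / pt_lep2
    sol1 = a * lep_pz / pt_lep2 + root
    sol2 = a * lep_pz / pt_lep2 - root
    return sol1 if abs(sol1) < abs(sol2) else sol2  # smallest |pz| solution
\end{verbatim}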
Additional requirements, defining the signal region, are finally applied to the preselected events:
\begin{itemize}
\item
The pseudorapidity of the untagged jet must satisfy $|\eta|>2.0$, since the spectator quark tends
to be produced in the forward direction in the $t$-channel process.
\item
The separation in $\eta$ between the untagged jet and the $b$-tagged jet must be larger than 1.5,
to reduce the contribution from \ttbar\ background events.
\item
The mass of the reconstructed top quark is required to be between 130~GeV and 200~GeV, to reject
background events from processes not involving top quarks.
\item
The scalar sum ($H_\mathrm{T}$) of the \pT\ of the lepton, the \pT\ of the jets and \MET must be larger than 195~GeV, to
further reduce the number of background events, in particular the $W$+jets contribution.
\end{itemize}
Figure~\ref{fig:selections} shows the distributions of the four variables relevant for these
requirements, comparing data to the predicted signal and background distributions normalised
to the results of the maximum-likelihood fit described in \Section{sec:Yields}. The cuts
that define the signal region are indicated for each of the variables. The multijet background
estimate shown in the figure is discussed in \Section{sec:Backgrounds}.
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{StackCut_abs_ujet_eta_rebin_tchan_1tag_noFiducial_2jets_Leptons}
\label{fig:preselection_abs_ujet_eta}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{StackCut_dEta_bjet_ujet_rebin_tchan_1tag_noFiducial_2jets_Leptons}
\label{fig:preselection_dEta_bjet_ujet}}
\hfill
\subfloat[]
{\includegraphics[width=0.48\textwidth]{StackCut_top_m_tchan_1tag_noFiducial_2jets_Leptons}
\label{fig:preselection_top_m}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{StackCut_Ht_event_tchan_1tag_noFiducial_2jets_Leptons}
\label{fig:preselection_Ht_event}}
\caption
{Distributions of the selection variables in the preselected signal region:
\protect\subref{fig:preselection_abs_ujet_eta} $|\eta|$ of the untagged jet,
\protect\subref{fig:preselection_dEta_bjet_ujet} separation in $\eta$ between the untagged and $b$-tagged jets,
\protect\subref{fig:preselection_top_m} reconstructed top-quark mass, and
\protect\subref{fig:preselection_Ht_event} scalar sum of the \pT\ of the lepton, the \pT\ of the jets and \MET.
The observed distributions are compared to the predicted signal and background distributions,
normalised to the results of the maximum-likelihood fit. The labels $tq$ and $t\bar{b}$ refer
to the $t$-channel and $s$-channel single-top-quark processes, respectively, and $VV$ to diboson
production. The vertical lines and the arrows define the signal region. The uncertainty bands
include the statistical post-fit uncertainty, the uncertainty due to the limited size of the simulation
samples and the uncertainty in the normalisation of the multijet background, added in quadrature. The last
bin of the histograms includes overflows. The lower panels show the ratio of data to prediction.}
\label{fig:selections}
\end{figure}
\section*{Acknowledgements}
\input{Acknowledgements}
\clearpage
\printbibliography
\newpage
\input{atlas_authlist}
\end{document}
\section{Systematic uncertainties}
\label{sec:Uncertainties}
Several sources of systematic uncertainty affect the asymmetry measurements, modifying
the signal and background event yields and angular distributions. To evaluate the impact
of each source, the template distributions are varied to reflect that source of uncertainty
and the asymmetries are re-extracted by unfolding the varied distributions. In each case a new background
estimation is performed before subtraction, using the fitting procedure described in \Section{sec:Yields}.
For all sources of systematic uncertainty other than those associated with the limited
size of the simulation samples, the nominal unfolding corrections are considered. The
systematic uncertainty is evaluated as the difference between the nominal asymmetry
value and the one measured using the varied normalisations and shapes.
The sources of systematic uncertainty are split into the following categories:
\textbf{Background normalisation:}
The uncertainties in the normalisation of the top-quark and $W$+jets background processes
are determined from the maximum-likelihood fit. For the merged $Z$+jets and diboson processes
the normalisation uncertainty of 20\% introduced in \Section{sec:Backgrounds} is applied to
the predictions. For the data-driven normalisation of the multijet background the uncertainty
of 70\% estimated from the comparison of the matrix-method estimates with those given by the
jet-electron and anti-muon methods is used.
The uncertainty in the integrated luminosity is 1.9\%~\cite{ATLAS_lumi}. It is propagated
to the asymmetry measurements through the normalisation of the simulated backgrounds.
\textbf{Detector modelling:}
Systematic uncertainties in the reconstruction and energy calibration of jets, electrons
and muons are propagated in the analysis through variations in the modelling of the detector
response. For the jets, the main source of uncertainty is the energy scale, evaluated using
a combination of in situ techniques~\cite{ATLAS_jets_1}. Other jet-related uncertainty sources
are the modelling of the energy resolution~\cite{ATLAS_jets_2} and reconstruction
efficiency~\cite{ATLAS_jets_1} (both referred to as jet reconstruction uncertainties), and
the modelling of the tagging efficiencies of $b$-quark jets, $c$-quark jets and light-flavour
jets~\cite{ATLAS_btag_2,ATLAS_btag_3}. Uncertainties related to leptons come from trigger,
identification and isolation efficiencies, as well as from the energy scale and
resolution~\cite{ATLAS_electrons,ATLAS_muons} (all referred to as lepton reconstruction
uncertainties). The uncertainties in the energy scale and resolution corrections applied to
leptons and jets are propagated to the computation of the missing transverse momentum. The
scale and resolution uncertainties due to soft jets and to contributions of calorimeter
energy deposits not associated with any reconstructed objects are also considered and
evaluated independently (they are labelled \MET reconstruction uncertainties). For all
detector modelling uncertainties, positive and negative uncertainties are estimated
separately from the corresponding shifts.
\textbf{Signal and background modelling:}
Systematic uncertainties associated with the signal and background modelling are estimated
by comparing event samples from different generators and by varying parameters in the event generation.
The uncertainty in the matrix-element calculation in the simulation of the $t$-channel
single-top-quark process is estimated by comparing \textsc{MadGraph5}\_a\textsc{MC@NLO}+\textsc{Herwig}
with \textsc{Powheg-Box}+\textsc{Herwig}. For the \ttbar\ and $Wt$ processes, \textsc{MC@NLO} is
compared with \textsc{Powheg-Box}, both generators interfaced to \textsc{Herwig}. The uncertainty
in the parton shower is estimated by comparing \textsc{Powheg-Box} interfaced with \textsc{Pythia}
and \textsc{Herwig} for the $t$-channel, \ttbar\ and $Wt$ processes. For the $s$-channel
single-top-quark contribution the uncertainty due to the choice of generator and parton shower is
estimated in a combined way by comparing \textsc{MC@NLO}+\textsc{Herwig} with
\textsc{Powheg-Box}+\textsc{Pythia}.
An additional modelling uncertainty is considered for the signal process by comparing the
NLO \textsc{Powheg-Box} sample to the LO \textsc{Protos} sample implementing the Standard Model
parameterisation of the $Wtb$ couplings. To estimate this uncertainty, only the shapes of the
distributions are varied in order to assess the impact of using a LO generator to determine
the unfolding corrections.
The uncertainty in the amount of QCD radiation is evaluated for all top-quark processes by
comparing the \textsc{Powheg-Box} samples generated with the varied hard-process and parton-shower
scales presented in \Section{sec:Samples}. The largest shift in the measured asymmetries is
taken as uncertainty.
The dependence of the measured asymmetries on the top-quark mass is estimated using \textsc{Powheg-Box}
samples generated with different top-quark masses. Variations lower than 0.01 per GeV
are found for the measured asymmetry values. Therefore, these variations are not included
in the total systematic uncertainty.
The impact of the flavour composition on the modelling of the $W$+jets distributions is
determined by propagating an uncertainty of 50\% in the ratio of $W$+$b\bar{b}$ to
$W$+$c\bar{c}$ events. As reported in \Section{sec:Yields}, $W$+light-flavour jets events
give a small contribution in the signal region and no associated modelling uncertainty
is taken into account. An additional shape-modelling uncertainty is considered for the
$W$+jets distributions. Indeed, in the $W$+jets control region a few kinematic variables
are slightly mismodelled, and the impact of this mismodelling is evaluated by reweighting
the $W$+jets angular distributions in the signal region. The applied event weights are
derived by matching the mismodelled kinematic variables in the $W$+jets control region
to data, after subtraction of all processes other than $W$+jets. This
procedure leads to a conservative estimate since it also accounts for mismodelling of the
$W$+light-flavour jets events, which have a much more important contribution in the $W$+jets
control region than in the signal region.
The systematic uncertainty associated with the data-driven shape modelling of the multijet events
is estimated by comparing the shapes provided by the baseline matrix method and the alternative
modelling given by the jet-electron and anti-muon methods.
All the signal and background modelling uncertainties, except that associated with the
$W$+jets flavour composition, are symmetrised by taking the difference between the nominal
and varied measurements as positive and negative uncertainties.
Systematic uncertainties related to the parton distribution functions are estimated for all
processes, except for the multijet contribution. The uncertainty is estimated, following the
PDF4LHC prescription~\cite{PDF4LHC}, by calculating the envelope of the uncertainties at 68\%
confidence level of the CT10~\cite{PDF_Lai}, MSTW2008NLO~\cite{PDF_Martin} and
NNPDF2.3~\cite{PDF_Ball} sets.
\textbf{Limited size of simulation samples:}
The uncertainty due to the limited size of the Monte Carlo samples is evaluated by
varying the background normalisation and shape, as well as the unfolding corrections,
through Gaussian fluctuations. The standard deviation of the distribution of the
measured asymmetry provided by an ensemble test of pseudo-experiments built from
these variations is taken as a systematic uncertainty.
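The sketch below illustrates such an ensemble test for a single asymmetry; the two-bin
histogram, its uncertainties and the counting-based asymmetry are toy inputs chosen only
to show the mechanics of the procedure.
\begin{verbatim}
# Ensemble test for the simulation-statistics uncertainty: bin contents are
# fluctuated within their Gaussian errors and the asymmetry is re-extracted;
# the spread of the results is taken as the uncertainty (toy inputs).
import numpy as np

def mc_stat_uncertainty(bin_contents, bin_errors, extract_asymmetry, n_toys=1000):
    rng = np.random.default_rng(42)
    results = [extract_asymmetry(rng.normal(bin_contents, bin_errors))
               for _ in range(n_toys)]
    return np.std(results)

asym = lambda h: (h[1] - h[0]) / (h[1] + h[0])     # simple two-bin asymmetry
print(mc_stat_uncertainty(np.array([4800.0, 5200.0]),
                          np.array([70.0, 72.0]), asym))
\end{verbatim}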
Tables~\ref{tab:Breakdowns_1} and \ref{tab:Breakdowns_2} show the contribution of each
source of systematic uncertainty to the asymmetry measurements. The total uncertainties
are obtained from the sum in quadrature of all contributions. Tables~\ref{tab:Breakdowns_1}
and \ref{tab:Breakdowns_2} also include the statistical uncertainty from the data sample.
It is evaluated using a procedure similar to that used for the uncertainty associated
with the size of the simulation samples, but varying the observed numbers of events and
the shape of the angular distributions through Poisson fluctuations.
The asymmetry measurements are dominated by the systematic uncertainties. The largest
contributions are from the uncertainties in the modelling of the $t$-channel and
\ttbar\ processes, and in the jet reconstruction and energy scale. Significant contributions
also come from the uncertainty in the modelling of the multijet or $W$+jets events,
depending on the measured asymmetry, and from the limited size of the simulation samples.
The statistical uncertainty of the data sample, although lower than the systematic
uncertainty, also has a sizeable impact on the measurement precision.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcccc}
\toprule
Uncertainty source & $\Delta\ensuremath{A_{\mathrm{FB}}}^{\ell}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{FB}}}^{tW}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{FB}}}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{EC}}}\times10^2$ \\
\midrule
Statistical uncertainty & $\pm$2.6 & $\pm$3.1 & $\pm$2.3 & $\pm$2.8 \\
\midrule
Simulation statistics & $\pm$1.7 & $\pm$1.9 & $\pm$1.4 & $\pm$1.7 \\
Luminosity & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 \\
Background normalisation & $\pm$0.5 & $\pm$0.5 & $\pm$0.9 & $\pm$0.7 \\[0.14cm]
\MET reconstruction & $^{+0.9}_{-0.1}$ & $^{+0.4}_{-0.7}$ & $^{+1.1}_{-0.7}$ & $^{+0.8}_{-0.2}$ \\[0.14cm]
Lepton reconstruction & $^{+1.0}_{-0.4}$ & $^{+0.1}_{-1.3}$ & $\pm$1.4 & $^{+0.6}_{-0.3}$ \\[0.14cm]
Jet reconstruction & $\pm$2.1 & $\pm$2.5 & $\pm$1.2 & $\pm$1.8 \\[0.14cm]
Jet energy scale & $^{+1.3}_{-1.2}$ & $^{+2.0}_{-1.6}$ & $^{+3.4}_{-2.7}$ & $^{+2.0}_{-0.7}$ \\[0.14cm]
Jet flavour tagging & $\pm$0.9 & $\pm$0.3 & $\pm$0.6 & $\pm$0.4 \\
PDF & $\pm$0.2 & $<$0.1 & $<$0.1 & $\pm$0.2 \\
$t\bar{t}$ generator & $\pm$2.3 & $\pm$1.0 & $\pm$0.2 & $\pm$1.2 \\
$t\bar{t}$ parton shower & $\pm$0.6 & $\pm$0.5 & $\pm$2.7 & $\pm$0.3 \\
$t\bar{t}$ scales & $\pm$0.2 & $\pm$0.4 & $\pm$1.2 & $\pm$0.3 \\
$Wt$, $s$-channel generator & $\pm$1.0 & $\pm$1.1 & $\pm$0.4 & $\pm$0.3 \\
$Wt$, $s$-channel scales & $\pm$0.9 & $\pm$0.3 & $\pm$0.3 & $\pm$0.3 \\
$t$-channel NLO generator & $\pm$1.4 & $\pm$0.6 & $\pm$0.6 & $\pm$2.7 \\
$t$-channel LO--NLO generator & $\pm$1.5 & $\pm$2.0 & $\pm$2.6 & $\pm$1.8 \\
$t$-channel parton shower & $\pm$0.5 & $\pm$1.0 & $\pm$3.5 & $\pm$0.2 \\
$t$-channel scales & $\pm$1.1 & $\pm$2.0 & $\pm$0.6 & $\pm$1.6 \\[0.14cm]
$W$+jets, multijet modelling & $^{+1.9}_{-2.4}$ & $^{+0.9}_{-1.0}$ & $^{+2.2}_{-2.1}$ & $^{+1.3}_{-1.2}$ \\[0.14cm]
\midrule
Total systematic uncertainty & $^{+5.4}_{-5.4}$ & $^{+5.2}_{-5.3}$ & $^{+7.3}_{-6.9}$ & $^{+5.3}_{-4.8}$ \\[0.14cm]
\bottomrule
\end{tabular}
\caption{Uncertainties contributing to the measurements of the $\ensuremath{A_{\mathrm{FB}}}^{\ell}$, $\ensuremath{A_{\mathrm{FB}}}^{tW}$, $\ensuremath{A_{\mathrm{FB}}}$ and
$\ensuremath{A_{\mathrm{EC}}}$ asymmetries. For better readability the uncertainties are multiplied by 10$^2$.}
\label{tab:Breakdowns_1}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcccc}
\toprule
Uncertainty source & $\Delta\ensuremath{A_{\mathrm{FB}}}^{N}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{FB}}}^{T}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{FB}}}^{N,\phi}\times10^2$ & $\Delta\ensuremath{A_{\mathrm{FB}}}^{T,\phi}\times10^2$ \\
\midrule
Statistical uncertainty & $\pm$2.2 & $\pm$3.1 & $\pm$3.0 & $\pm$4.6 \\
\midrule
Simulation statistics & $\pm$1.3 & $\pm$2.0 & $\pm$1.8 & $\pm$2.9 \\
Luminosity & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 \\
Background normalisation & $\pm$0.4 & $\pm$1.1 & $\pm$0.6 & $\pm$1.1 \\[0.14cm]
\MET reconstruction & $^{+0.3}_{-0.4}$ & $^{+0.5}_{-0.3}$ & $^{+0.5}_{-0.8}$ & $^{+0.4}_{-1.3}$ \\[0.14cm]
Lepton reconstruction & $^{+0.1}_{-0.2}$ & $^{+1.3}_{-1.5}$ & $^{+0.6}_{-0.5}$ & $^{+1.6}_{-0.6}$ \\[0.14cm]
Jet reconstruction & $\pm$0.8 & $\pm$0.5 & $\pm$1.6 & $\pm$1.3 \\[0.14cm]
Jet energy scale & $^{+0.9}_{-0.8}$ & $^{+3.9}_{-4.6}$ & $^{+0.6}_{-2.5}$ & $^{+4.5}_{-2.5}$ \\[0.14cm]
Jet flavour tagging & $\pm$0.2 & $\pm$0.6 & $\pm$0.3 & $\pm$0.6 \\
PDF & $\pm$0.1 & $\pm$0.1 & $\pm$0.1 & $\pm$0.4 \\
$t\bar{t}$ generator & $\pm$0.2 & $\pm$3.5 & $\pm$1.7 & $\pm$1.3 \\
$t\bar{t}$ parton shower & $\pm$1.5 & $\pm$1.0 & $\pm$0.9 & $\pm$1.6 \\
$t\bar{t}$ scales & $\pm$0.3 & $\pm$0.8 & $\pm$0.3 & $\pm$1.3 \\
$Wt$, $s$-channel generator & $\pm$0.2 & $\pm$0.8 & $\pm$0.3 & $\pm$1.4 \\
$Wt$, $s$-channel scales & $\pm$0.6 & $\pm$0.5 & $\pm$0.4 & $\pm$0.9 \\
$t$-channel NLO generator & $\pm$0.3 & $\pm$4.5 & $\pm$2.6 & $\pm$7.2 \\
$t$-channel LO--NLO generator & $\pm$0.5 & $\pm$1.9 & $\pm$1.3 & $\pm$3.2 \\
$t$-channel parton shower & $\pm$0.7 & $\pm$0.9 & $<$0.1 & $\pm$1.1 \\
$t$-channel scales & $\pm$0.9 & $\pm$2.2 & $\pm$1.4 & $\pm$2.6 \\[0.14cm]
$W$+jets, multijet modelling & $^{+0.7}_{-0.6}$ & $^{+1.3}_{-1.7}$ & $\pm$0.6 & $^{+2.3}_{-1.7}$ \\[0.14cm]
\midrule
Total systematic uncertainty & $^{+2.9}_{-2.9}$ & $^{+8.3}_{-8.8}$ & $^{+4.8}_{-5.4}$ & $^{+10.9}_{-10.1}$ \\[0.14cm]
\bottomrule
\end{tabular}
\caption{Uncertainties contributing to the measurements of the $\ensuremath{A_{\mathrm{FB}}}^{N}$, $\ensuremath{A_{\mathrm{FB}}}^{T}$, $\ensuremath{A_{\mathrm{FB}}}^{N,\phi}$
and $\ensuremath{A_{\mathrm{FB}}}^{T,\phi}$ asymmetries. For better readability the uncertainties are multiplied by 10$^2$.}
\label{tab:Breakdowns_2}
\end{center}
\end{table}
\section{Unfolding}
\label{sec:Unfolding}
The measured angular distributions are unfolded to the parton level,\footnote{Partons
are defined from the matrix-element hard process and immediate decays.} so that
the asymmetries extracted from the corrected angular distributions can be directly
compared to theoretical calculations. The unfolding corrections account for
distortions due to detector resolution, selection efficiencies, and reconstruction
of the $W$ boson and top quark. They also include the effects due to hadronisation
and parton showering.
The unfolding procedure is applied to the angular distributions after subtracting the
background contributions, and is based on a matrix inversion combined with an efficiency
correction. The number of unfolded signal events $N^{\text{unfolded}}_{j}$ in each bin $j$
of the parton-level distribution is obtained from the background-subtracted yields
$N^{\text{measured}}_{i}$ measured in all bins $i$ of the reconstructed distribution,
according to
\begin{equation}
N^{\text{unfolded}}_{j} = \frac{\sum_{i} M^{-1}_{ji}N^{\text{measured}}_{i}}{\epsilon_{j}}\, ,
\label{eq:asymmetry}
\end{equation}
\noindent
where $M_{ji}$ is the migration matrix which relates the parton-level and reconstructed
values of the considered angular variable, and $\epsilon_{j}$ is the event selection
efficiency. Both the migration matrix and the selection efficiency are computed using samples
of $t$-channel events simulated with the \textsc{Protos} generator, as described below.
For the chosen numbers of bins, the fractions of simulated events belonging to the diagonal
elements of the migration matrices are found to be between 68\% and 90\%, depending on the angular
observable. The selection efficiencies are between 0.6\% and 1.6\%, depending on the angular
observable and on the bin range. The matrix inversion is performed by using the iterative
Bayesian method~\cite{BayesMethod} as implemented in the RooUnfold framework~\cite{RooUnfold}.
The number of iterations is chosen such that the absolute change in the extracted asymmetry
between two successive steps becomes lower than 0.0005. The unfolding procedure has been
validated through convergence and closure tests performed by using template distributions
constructed from the $t$-channel \textsc{Powheg-Box} and \textsc{Protos} samples presented
in Section~\ref{sec:Samples}. The closure tests showed that the residual bias induced by
the unfolding method is negligible for all of the measured asymmetries.
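A minimal illustration of Equation~\eqref{eq:asymmetry} with a direct matrix inversion
is given below; the two-bin migration matrix, efficiencies and yields are toy numbers,
and the analysis itself relies on the iterative Bayesian method rather than on this
direct inversion.
\begin{verbatim}
# Toy illustration of the unfolding equation: invert the migration matrix and
# correct for the selection efficiency (the analysis uses iterative Bayesian
# unfolding instead of a direct inversion).
import numpy as np

def unfold(n_measured, migration, efficiency):
    # migration[j, i]: probability for a parton-level event in bin j to be
    # reconstructed in bin i (rows normalised to unity)
    n_parton = np.linalg.solve(migration.T, n_measured)
    return n_parton / efficiency

M = np.array([[0.85, 0.15],
              [0.20, 0.80]])         # toy migration matrix
eff = np.array([0.010, 0.012])       # toy selection efficiencies
n_meas = np.array([380.0, 420.0])    # toy background-subtracted yields
print(unfold(n_meas, M, eff))
\end{verbatim}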
With the aim of testing their compatibility with the Standard Model predictions, all asymmetries
described in \Section{sec:Observables}, except $\ensuremath{A_{\mathrm{FB}}}^{N}$, are extracted using the \textsc{Protos}
simulation generated with the Standard Model values of the $Wtb$ couplings to determine the
migration matrix and the selection efficiency. For all the asymmetry measurements, the Standard
Model $Wtb$ couplings, as implemented in the \textsc{Powheg-Box} generator, are considered for the
subtracted top-quark backgrounds.
To constrain Im\,\ensuremath{g_{\mathrm{R}}}\ using the method explained in \Section{sec:Observables}, the $\ensuremath{A_{\mathrm{FB}}}^{N}$
and $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ asymmetries must be measured without any assumption about Im\,\ensuremath{g_{\mathrm{R}}}.
It is observed that the presence of anomalous couplings in general modifies the kinematics
in such a way that the efficiency corrections are dependent on the $Wtb$ couplings. While
the measurement of $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ is found to be independent of the value of Im\,\ensuremath{g_{\mathrm{R}}}\ assumed
in the unfolding corrections, the measurement of $\ensuremath{A_{\mathrm{FB}}}^{N}$ is found to depend on the unfolding
corrections used. By applying an interpolation technique it is possible to unfold the
$\cos\theta_{\ell}^N$ angular distribution independently of any assumption about Im\,\ensuremath{g_{\mathrm{R}}},
so that the extracted $\ensuremath{A_{\mathrm{FB}}}^{N}$ asymmetry, combined with $\ensuremath{A_{\mathrm{FB}}}^{\ell}$, can be used
to constrain this coupling.
The interpolation method is based on determining the unfolding corrections using a linear
combination of the migration and efficiency corrections provided by five \textsc{Protos}
samples in which Im\,\ensuremath{g_{\mathrm{R}}}\ is varied (Im\,\ensuremath{g_{\mathrm{R}}}\ $=0, \pm0.094, \pm0.23$). An iterative
procedure is applied to determine the coefficients of the linear combination until convergence
is reached in the extracted $\ensuremath{A_{\mathrm{FB}}}^{N}$ asymmetry. The method proceeds as follows. An initial
value of $\ensuremath{A_{\mathrm{FB}}}^{N}$ is first extracted using the standard \textsc{Protos} unfolding corrections.
This value is then used to determine, via a Lagrange interpolation, the weights to be
applied to the five predicted corrections. A new value of $\ensuremath{A_{\mathrm{FB}}}^{N}$ is obtained after
unfolding the $\cos\theta_{\ell}^N$ angular distribution with these corrections using
the Bayesian method. The chosen convergence criterion for the interpolation procedure
requires that the difference between the extracted $\ensuremath{A_{\mathrm{FB}}}^{N}$ from two successive steps
is smaller than 0.0005. By using template distributions given by \textsc{Protos} samples
not used in the linear combination of the unfolding corrections (Im\,\ensuremath{g_{\mathrm{R}}}\ $=\pm0.043, \pm0.144$),
it has been checked that this method recovers the generated asymmetries at parton level.
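The sketch below illustrates the mechanics of this iterative interpolation. The node
asymmetry values, the scalar stand-ins for the unfolding corrections and the toy
unfolding function are illustrative assumptions; in the analysis the corrections are
the full migration matrices and efficiencies and the unfolding is the Bayesian
procedure described above.
\begin{verbatim}
# Iterative Lagrange-interpolation of unfolding corrections (schematic):
# corrections from reference samples are combined with weights evaluated at the
# current asymmetry estimate, and the extraction is repeated until it is stable.
import numpy as np

def lagrange_weights(x, nodes):
    w = np.ones(len(nodes))
    for k, xk in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if j != k:
                w[k] *= (x - xj) / (xk - xj)
    return w

def interpolated_afbn(data, corrections, nodes, unfold_with, tol=5e-4):
    afbn = unfold_with(data, corrections[len(nodes) // 2])   # start from SM sample
    while True:
        weights = lagrange_weights(afbn, nodes)
        combined = sum(wk * ck for wk, ck in zip(weights, corrections))
        new_afbn = unfold_with(data, combined)
        if abs(new_afbn - afbn) < tol:
            return new_afbn
        afbn = new_afbn

# toy usage: scalar corrections and a trivial "unfolding" (illustration only)
nodes = [-0.10, -0.055, 0.0, 0.055, 0.10]       # stand-ins for the sample nodes
corrections = [0.95, 0.97, 1.00, 1.03, 1.05]    # scalar stand-ins for corrections
print(interpolated_afbn(-0.03, corrections, nodes, lambda d, c: d * c))
\end{verbatim}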
The sensitivity to Im\,\ensuremath{g_{\mathrm{R}}}\ of the $\cos\theta_{\ell}$ and $\cos\theta_{\ell}^{N}$
distributions, which are used to set limits on this coupling, is illustrated in
\Figure{fig:Distributions_Protos}. In this figure the observed distributions are
compared to the signal-plus-background predictions built by adding the signal
templates given by the \textsc{Protos} samples generated with Im\,\ensuremath{g_{\mathrm{R}}}\ $=0$ (Standard
Model parameterisation) and Im\,\ensuremath{g_{\mathrm{R}}}\ $=\pm0.23$, the latter corresponding to the
maximum values considered in the interpolation method described above.
\begin{figure}[t]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Protos_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2.pdf}
\label{fig:Protos_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\subfloat[]
{\includegraphics[width=0.48\textwidth]{Protos_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2.pdf}
\label{fig:Protos_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2}}
\caption
{Comparison of the distributions observed in the signal region with the distributions predicted
as a function of Im\,\ensuremath{g_{\mathrm{R}}}\ for the angular observables from which the asymmetries used to set
limits on this coupling are measured:
\protect\subref{fig:Protos_cos_lepton_ujet_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}$ for $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ and
\protect\subref{fig:Protos_cos_lepton_normal_tchan_1tag_cuts_noFiducial_2jets_Leptons_2} $\cos\theta_{\ell}^{N}$ for $\ensuremath{A_{\mathrm{FB}}}^{N}$.
The predicted distributions are determined by adding the signal and background contributions
normalised to the results of the maximum-likelihood fit. The template signal distributions
are taken from the \textsc{Protos} samples generated with Im\,\ensuremath{g_{\mathrm{R}}}\ $=0$ (Standard Model
parameterisation) and Im\,\ensuremath{g_{\mathrm{R}}}\ $=\pm0.23$. The corresponding parton-level values for the
$\ensuremath{A_{\mathrm{FB}}}^{N}$ asymmetry are 0 and $\pm$0.10, respectively. For $\ensuremath{A_{\mathrm{FB}}}^{\ell}$ the predicted
values are 0.45 for Im\,\ensuremath{g_{\mathrm{R}}}\ $=0$ and 0.34 for Im\,\ensuremath{g_{\mathrm{R}}}\ $=\pm0.23$. The uncertainty bands
include the statistical post-fit uncertainty, the uncertainty due to the limited size of the
simulation samples and the uncertainty in the normalisation of the multijet background, added
in quadrature.}
\label{fig:Distributions_Protos}
\end{figure}
\section{Signal and background event yields}
\label{sec:Yields}
The signal and background event yields are estimated through a simultaneous
maximum-likelihood fit to the numbers of data events observed in the signal
and anti-signal regions, and in the \ttbar\ control region.
The likelihood function~\cite{ATLAS_tchan} is given by the product of Poisson
probability terms associated with the fitted regions, combined with the product of
Gaussian priors to constrain the background rates to their predictions within the
associated uncertainties. In the fit the $t$-channel single-top-quark contribution
is treated as unconstrained. The top-quark background contributions (\ttbar, $Wt$
and $s$-channel single top-quark production) are merged with their relative fractions
taken from simulation, and the applied constraint is derived from the combination
of their cross-section uncertainties presented in \Section{sec:Backgrounds}. The
flavour composition of the $W$+jets contribution is taken from
simulation. In all fitted regions the production of a $W$ boson in association with
heavy-flavour jets is the dominant contribution to the $W$+jets background, predicted
to be around 95\% in the three regions. The $Z$+jets and diboson contributions, which
are very low in the signal region (2\% of the total expectation), are merged and
fixed to the predictions. The multijet contribution is kept fixed to its data-driven
estimate.
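A schematic version of such a fit is sketched below. The predicted and observed yields,
the constraint widths and the use of a general-purpose minimiser are placeholders chosen
only to illustrate the structure of the likelihood: Poisson terms for the fitted regions
multiplied by Gaussian constraints on the background scale factors, with the signal
scale factor left unconstrained.
\begin{verbatim}
# Schematic yield fit: Poisson terms per region times Gaussian constraints on the
# background scale factors (all numbers are placeholders).
import numpy as np
from scipy.optimize import minimize

# predicted yields per region: [signal, top backgrounds, W+jets, fixed backgrounds]
pred = {"signal":   np.array([6000.0, 2200.0, 1860.0, 610.0]),
        "antisig":  np.array([1500.0, 5200.0, 9500.0, 900.0]),
        "ttbar_cr": np.array([ 300.0, 9800.0,  700.0, 200.0])}
observed = {"signal": 10527.0, "antisig": 17000.0, "ttbar_cr": 11000.0}
constraints = [(1.0, 0.06), (1.0, 0.21)]    # (mean, width) for top and W+jets

def nll(theta):
    beta = np.array([theta[0], theta[1], theta[2], 1.0])   # fixed backgrounds at 1
    val = 0.0
    for region, p in pred.items():
        mu = np.dot(beta, p)
        val += mu - observed[region] * np.log(mu)          # -log Poisson (no const.)
    for b, (m, s) in zip(theta[1:], constraints):
        val += 0.5 * ((b - m) / s) ** 2                    # Gaussian constraints
    return val

result = minimize(nll, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(result.x)   # fitted scale factors: signal, top backgrounds, W+jets
\end{verbatim}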
The results of the maximum-likelihood fit together with the associated statistical
uncertainties (referred to as statistical post-fit uncertainties) are shown in
Table~\ref{tab:Scales}. They are presented as scale factors to be applied to the
predicted event yields. The results are found to be stable when the constraints
imposed on the top-quark and $W$+jets backgrounds are significantly relaxed. Table~\ref{tab:Yields}
provides the signal and background event yields in the signal region after scaling
to the results of the fit to the data. The signal-to-background ratio is 1.2, with
$t$-channel single-top-quark production representing 54\% of the total expectation.
The two main background contributions come from $W$+jets (19\%) and \ttbar\ production
(18\%).
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lr@{$\,\pm\,$}l}
\toprule
Process & \multicolumn{2}{c}{Scale factor} \\
\midrule
$t$-channel & 0.95 & 0.02 \\
$t\bar{t}$, $Wt$, $s$-channel & 1.01 & 0.01 \\
$W$+jets & 1.10 & 0.01 \\
\bottomrule
\end{tabular}
\caption{Scale factors and uncertainties extracted for the signal and background processes
from the simultaneous maximum-likelihood fit of the event yields in the signal, anti-signal
and \ttbar\ regions. The quoted uncertainties are statistical only.}
\label{tab:Scales}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lr@{$\,\pm\,$}l}
\toprule
Process & \multicolumn{2}{c}{Event yield} \\
\midrule
$t$-channel & 5700 & 110 \\
$Wt$, $s$-channel & 265 & 12 \\
$t\bar{t}$ & 1914 & 15 \\
$W$+jets & 2044 & 57 \\
$Z$+jets, diboson & 188 & 9 \\
Multijet & 420 & 290 \\
\midrule
Total expectation & 10530 & 320 \\
Data & \multicolumn{2}{l}{10527} \\
\bottomrule
\end{tabular}
\caption{Signal and background event yields in the signal region after scaling to the
results of the maximum-likelihood fit. The quoted uncertainties add in quadrature the
post-fit uncertainties and the uncertainties due to the limited size of the simulation
samples, except for the data-driven multijet contribution to which the normalisation
uncertainty of 70\% is applied. The total expectation is compared to the observed number
of events.}
\label{tab:Yields}
\end{center}
\end{table}
\section{Introduction}
Suppose $\Gamma$ is a finite group and $\Omega$ is a generating set of $\Gamma$ such that the identity $1\notin\Omega$ and $\omega^{-1}\in\Omega$ whenever $\omega\in\Omega$. The \emph{Cayley graph} $\textrm{Cay}(\Gamma,\Omega)$ is the graph having the vertex set $\Gamma$ and the arc set $\Gamma\times\Omega$, where for $\eta\in \Gamma, \omega\in\Omega$, the arc from $\eta$ to $\eta\omega$ is denoted as $(\eta,\omega)$.
A cyclic permutation $\rho$ on $\Omega$ canonically induces a permutation $\hat{\rho}$ on the arc set via $(\eta,\omega)\mapsto(\eta,\rho(\omega))$, and this equips each vertex $\eta$ with a ``cyclic order'', which means a cyclic permutation on the set of arcs emanating from $\eta$.
This determines an embedding of the Cayley graph ${\rm Cay}(\Gamma,\Omega)$ into a unique closed oriented surface, which is characterized by the property that each connected component of the complement of the graph is a disk. This embedding is called a {\it Cayley map} and denoted by $\mathcal{CM}(\Gamma,\Omega,\rho)$.
An \emph{isomorphism} between Cayley maps $\mathcal{CM}(\Gamma,\Omega,\rho)\to\mathcal{CM}(\Gamma',\Omega',\rho')$ is an isomorphism of the underlying graphs $\alpha:\textrm{Cay}(\Gamma,\Omega)\to\textrm{Cay}(\Gamma',\Omega')$ which can be extended to an orientation-preserving homeomorphism between their embedding surfaces.
A Cayley map $\mathcal{CM}(\Gamma,\Omega,\rho)$ is said to be \emph{regular} if its automorphism group acts regularly on the arc set, i.e., for any two arcs, there exists an automorphism sending one arc to the other. It was shown in \cite{JS02} that a Cayley map $\mathcal{CM}(\Gamma,\Omega,\rho)$ is regular if and only if there exist a {\it skew-morphism}, i.e. a bijective function $\varphi:\Gamma\to\Gamma$, and a {\it power function} $\pi:\Gamma\to\{1,\ldots,|\Omega|\}$, such that $\varphi|_{\Omega}=\rho$, $\varphi(1)=1$ and
\begin{align}
\varphi(\eta\mu)=\varphi(\eta)\varphi^{\pi(\eta)}(\mu) \qquad \text{for\ all} \qquad \eta,\mu\in\Gamma.
\end{align}
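
The defining identity above can be checked mechanically for a finite group given by its multiplication table. The following Python sketch is only an illustration of the condition in the simplest situation, namely an automorphism of $\mathbb{Z}_7$ (written additively) with power function identically $1$; the function names and the example group are our own choices.
\begin{verbatim}
# Check the identity phi(a*b) = phi(a) * phi^{pi(a)}(b) for a finite group given
# by a multiplication rule.  Illustrative example: an automorphism of Z_7
# (written additively) with power function identically 1.
def power(phi, k, x):
    for _ in range(k):
        x = phi[x]
    return x

def is_skew_morphism(elements, mult, phi, pi, identity):
    if phi[identity] != identity:
        return False
    return all(phi[mult(a, b)] == mult(phi[a], power(phi, pi[a], b))
               for a in elements for b in elements)

n = 7
elements = list(range(n))
mult = lambda a, b: (a + b) % n
phi = {x: (3 * x) % n for x in elements}   # x -> 3x is an automorphism of Z_7
pi = {x: 1 for x in elements}              # power function identically 1
print(is_skew_morphism(elements, mult, phi, pi, identity=0))   # True
\end{verbatim}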
Let $t$ be an integer with $t^2\equiv 1\pmod{|\Omega|}$. A regular Cayley map $\mathcal{CM}(\Gamma,\Omega,\rho)$ is said to be \emph{$t$-balanced} if
\begin{align}
\rho(\omega^{-1})=(\rho^{t}(\omega))^{-1} \qquad \text{for\ all} \qquad \omega\in\Omega; \label{eq:t-balance}
\end{align}
in particular, it is called {\it balanced} if $t\equiv 1\pmod{|\Omega|}$ and {\it anti-balanced} if $t\equiv-1\pmod{|\Omega|}$. Note that it is the residue of $t$ modulo $|\Omega|$, not $t$ itself, that is of importance.
From now on we assume $t>0$, and abbreviate ``regular $t$-balanced Cayley map" to ``RBCM$_{t}$".
We recall some facts on RBCM$_t$'s from Proposition 1.2 of \cite{Ch17}.
\begin{prop} \label{prop:RBCMt}
{\rm(a)} A Cayley map $\mathcal{CM}(\Gamma,\Omega,\rho)$ is a RBCM$_{1}$ if and only if $\rho$ can be extended to an automorphism of $\Gamma$.
{\rm(b)} Suppose $t>1$. A Cayley map $\mathcal{CM}(\Gamma,\Omega,\rho)$ is a RBCM$_{t}$ if and only if $\pi(\omega)=t$ for all $\omega\in\Omega$ and $\pi(\eta)\in\{1,t\}$ for all $\eta\in\Gamma$.
{\rm(c)} When the conditions in {\rm(b)} are satisfied,
$\ker\pi:=\{\eta\in \Gamma\colon \pi(\eta)=1\}$ is a subgroup of index 2,
consisting of elements which are products of an even number of generators,
$\varphi(\ker\pi)=\ker\pi$, and $\varphi|_{\ker\pi}$ is an automorphism.
\end{prop}
Let $d=|\Omega|$. By (\ref{eq:t-balance}), there is an involution $\iota$ on $\{1,\ldots,d\}$ with $\omega_i^{-1}=\omega_{\iota(i)}$ and $\iota(i+1)=\iota(i)+t$.
Let $\ell=\iota(d)$, then $\iota(i)\equiv\ell+ti\pmod{d}$ for all $i$,
and $\iota^2={\rm id}$ is equivalent to $(t+1)\ell\equiv 0\pmod{d}$, which together with $t^2\equiv 1\pmod{d}$ implies $(t-1,d)\mid 2\ell$.
We say that the RBCM$_t$ has {\it type I} or {\it type II} if $(t-1,d)\nmid\ell$ or $(t-1,d)\mid\ell$, respectively.
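For concreteness, the involution $\iota$ and the type of a parameter triple $(d,t,\ell)$ can be computed as in the following sketch; the parameter values used below are illustrative assumptions and are not taken from any particular Cayley map.
\begin{verbatim}
# Compute the involution iota(i) = ell + t*i (mod d) on {1,...,d} and the type.
# The parameter values below are illustrative only.
from math import gcd

def classify(d, t, ell):
    assert (t * t) % d == 1 % d and ((t + 1) * ell) % d == 0
    iota = {i: ((ell + t * i - 1) % d) + 1 for i in range(1, d + 1)}
    assert all(iota[iota[i]] == i for i in iota)      # iota is an involution
    return "type II" if ell % gcd(t - 1, d) == 0 else "type I"

print(classify(d=8, t=7, ell=1))   # (t-1,d) = 2 does not divide ell = 1: type I
print(classify(d=8, t=7, ell=2))   # (t-1,d) = 2 divides ell = 2:        type II
\end{verbatim}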
\begin{rmk} \label{rmk:ell-iso}
\rm Observe that $(t-1,d)\mid\ell$ if and only if $\Omega$ contains an element of order 2, so RBCM$_t$'s of different type cannot be isomorphic. On the other hand, according to Lemma 2.4 of \cite{KKF06}, two RBCM$_t$'s of the same type $\mathcal{CM}(\Gamma_j,\Omega_j,\rho_j), j=1,2$ are isomorphic if and only if there exists an isomorphism $\sigma:\Gamma_1\to\Gamma_2$ such that $\sigma(\Omega_1)=\Omega_2$ and $\sigma\circ\rho_1=\rho_2\circ\sigma$.
By re-indexing the $\omega_i$'s if necessary, we may assume $\ell=(t-1,d)/2$ or $\ell=(t-1,d)$ when the RBCM is of type I or II, respectively.
\end{rmk}
So far, people have completely classified RBCM$_{t}$'s for the following classes of groups: dihedral groups (Kwak, Kwon and Feng \cite{KKF06}, 2006), dicyclic groups (Kwak and Oh \cite{KO08}, 2008), semi-dihedral groups (Oh \cite{Oh09}, 2009), cyclic groups (Kwon \cite{Kw13}, 2013).
Recently, the author \cite{Ch17} gave a classification for RBCM$_t$'s on abelian groups, which is complete to some degree.
To study RBCM$_t$'s on more complicated groups, we propose a ``reduction method'', through which the problem can be reduced to one about simpler groups.
The following observation is a key ingredient:
\begin{lem} \label{lem:quotient}
Let $\mathcal{CM}(\Gamma,\Omega,\rho)$ be a $d$-valent RBCM$_{t}$, with skew-morphism $\varphi$.
Suppose $\Xi$ is a subgroup of $\Gamma^+$ which is normal in $\Gamma$ and invariant under $\varphi^+$.
Let $\overline{\Gamma}=\Gamma/\Xi$, let $\overline{\Omega}$ denote the image of $\Omega$ under the quotient $\Gamma\to\overline{\Gamma}$ and let $\overline{\rho}$ be the induced permutation on $\overline{\Omega}$, then $\mathcal{CM}(\overline{\Gamma},\overline{\Omega},\overline{\rho})$ is a RBCM$_{t}$ whose valency divides $d$. If $\mathcal{CM}(\Gamma,\Omega,\rho)$ has type II, then so does $\mathcal{CM}(\overline{\Gamma},\overline{\Omega},\overline{\rho})$.
\end{lem}
\begin{proof}
The function
$\overline{\varphi}:\overline{\Gamma}\to\overline{\Gamma}$, $\overline{\eta}\mapsto \overline{\varphi(\eta)}$
is well-defined, since for any $\xi\in\Xi$, we have $\varphi(\xi\eta)=\varphi(\xi)\varphi(\eta)$. Let $\pi$ be the power function of $\mathcal{CM}(\Gamma,\Omega,\rho)$. It induces a function $\overline{\pi}:\overline{\Gamma}\twoheadrightarrow\{1,t\}$ in an obvious way. Then
$$\overline{\varphi}(\overline{\eta}\overline{\mu})=\overline{\varphi(\eta\mu)}=\overline{\varphi(\eta)\varphi^{\pi(\eta)}(\mu)}=
\overline{\varphi}(\overline{\eta})\overline{\varphi}^{\overline{\pi}(\overline{\eta})}(\overline{\mu}) \qquad \text{for\ all\ \ } \eta,\mu,$$
hence $\mathcal{CM}(\overline{\Gamma},\overline{\Omega},\overline{\rho})$ is a RBCM$_t$. Clearly its valency is a divisor of $d$.
The assertion concerning type follows from Remark \ref{rmk:ell-iso}.
\end{proof}
The idea is as follows: to understand a RBCM$_t$ $\mathcal{M}$ on $\Gamma$, we find a suitable subgroup $\Xi$ and consider the quotient RBCM$_t$ $\overline{\mathcal{M}}$ on $\Gamma/\Xi$, using known results on $\overline{\mathcal{M}}$ to deduce information about $\mathcal{M}$.
This method is expected to be applicable in a wider range of situations. In a forthcoming paper, we are going to look into RBCM$_t$'s on general metabelian groups, using the reduction method.
As a first application, in the present paper we consider the classification problem for a class of split metacyclic 2-groups.
A general {\it split metacyclic group} can be presented as
\begin{align}
\Lambda(n,m;r)=\langle \alpha,\beta\mid \alpha^{n}=\beta^{m}=1, \ \beta\alpha\beta^{-1}=\alpha^{r}\rangle, \label{eq:presentation}
\end{align}
for some positive integers $n,m,r$ with $r^m\equiv 1\pmod{n}$.
We focus on
\begin{align}
&\Delta=\Lambda(2^a,2^b;1+2^c), \\
\text{with} \qquad &\max\{2,a-b\}\le c\le a-3 \quad \text{and} \quad b\ne c.
\end{align}
This seemingly strange restriction is imposed just for technical reasons; otherwise there would be many subtleties, making the paper too long.
\begin{nota}
\rm For any positive integers $s$ and $u$, let $[u]_{s}=1+s+\cdots+s^{u-1}$.
For $u\ne 0$, let $\deg_2(u)$ denote the largest $k$ with $2^k\mid u$; set $\deg_2(0)=+\infty$.
The cyclic group $\mathbb{Z}/n\mathbb{Z}$ is abbreviated to $\mathbb{Z}_n$.
When $\Gamma$ is an abelian 2-group, ${\rm rk}(\Gamma)$ denotes its rank.
For $\eta\in\Gamma$ and a normal subgroup $\Xi\unlhd\Gamma$, the image of $\eta$ under the quotient $\Gamma\twoheadrightarrow\Gamma/\Xi$ is usually denoted by $\overline{\eta}$. But for $u\in\mathbb{Z}$, its image under $\mathbb{Z}\twoheadrightarrow\mathbb{Z}_n$ is still denoted by $u$.
For a RBCM$_t$ on $\Gamma$ with power function $\pi$, let $\Gamma^+=\ker\pi$, let ${\rm Aut}^+(\Gamma)=\{\tau\in{\rm Aut}(\Gamma)\colon \tau(\Gamma^+)=\Gamma^+\}$, and for each $\tau\in{\rm Aut}^+(\Gamma)$, denote $\tau|_{\Gamma^+}$ by $\tau^+$.
\end{nota}
\section{Preliminary on metacyclic groups}
A general element of $\Lambda=\Lambda(n,m;r)$ can be written as $\alpha^{x}\beta^{y}$.
By (\ref{eq:presentation}) we have
\begin{align}
\beta^{y}\alpha^{x}&=\alpha^{xr^{y}}\beta^{y}, \label{eq:identity1} \\
(\alpha^{x_{1}}\beta^{y_{1}})(\alpha^{x_{2}}\beta^{y_{2}})&=\alpha^{x_{1}+x_{2}r^{y_{1}}}\beta^{y_{1}+y_{2}},\label{eq:identity2} \\
(\alpha^{x}\beta^{y})^{u}&=\alpha^{x[u]_{r^{y}}}\beta^{yu}, \label{eq:identity3} \\
[\alpha^{x_{1}}\beta^{y_{1}},\alpha^{x_{2}}\beta^{y_{2}}]&=\alpha^{x_{1}(1-r^{y_{2}})-x_{2}(1-r^{y_{1}})}, \label{eq:identity4}
\end{align}
where $[\eta,\mu]=\eta\mu\eta^{-1}\mu^{-1}$. As a consequence, the commutator subgroup is $\langle\alpha^{r-1}\rangle$, hence the abelianization
\begin{align}
\Lambda^{{\rm ab}}:=\Lambda/[\Lambda,\Lambda]\cong\mathbb{Z}_{(r-1,n)}\times\mathbb{Z}_{m}.
\end{align}
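
These identities are easily checked numerically. The following sketch (with illustrative parameters $n=16$, $m=4$, $r=5$, and sample elements of our own choosing) represents $\alpha^{x}\beta^{y}$ as the pair $(x,y)$, multiplies via (\ref{eq:identity2}), and verifies (\ref{eq:identity3}) and (\ref{eq:identity4}).
\begin{verbatim}
# Elements of Lambda(n,m;r) as pairs (x, y) standing for alpha^x beta^y, with
# multiplication given by the product rule above; the power and commutator
# identities are checked numerically for small illustrative parameters.
n, m, r = 16, 4, 5                       # 5^4 = 625 == 1 (mod 16)

def mul(g, h):
    (x1, y1), (x2, y2) = g, h
    return ((x1 + x2 * pow(r, y1, n)) % n, (y1 + y2) % m)

def pw(g, u):                            # g^u by repeated multiplication
    acc = (0, 0)
    for _ in range(u):
        acc = mul(acc, g)
    return acc

def inv(g):
    x, y = g
    return ((-x * pow(r, (-y) % m, n)) % n, (-y) % m)

def bracket(u, s):                       # [u]_s = 1 + s + ... + s^(u-1)
    return sum(s ** i for i in range(u))

g, h, u = (3, 1), (7, 2), 5
assert pw(g, u) == ((g[0] * bracket(u, pow(r, g[1], n))) % n, (g[1] * u) % m)
comm = mul(mul(g, h), mul(inv(g), inv(h)))
assert comm == ((g[0] * (1 - pow(r, h[1], n)) - h[0] * (1 - pow(r, g[1], n))) % n, 0)
print("power and commutator identities hold for the sample elements")
\end{verbatim}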
\begin{lem} \label{lem:index-2}
The group $\Lambda(n,m;r)$ has three subgroups of index 2, namely, $\langle \alpha^{2},\beta\rangle$, $\langle \alpha,\beta^{2}\rangle$ and $\langle \alpha^{2},\alpha\beta\rangle$.
\end{lem}
\begin{proof}
Each homomorphism $\Lambda\to\mathbb{Z}_2$ factors through $\Lambda^{{\rm ab}}$, and there are exactly three epimorphisms $\varpi_j:\Lambda^{{\rm ab}}\cong\mathbb{Z}_{(r-1,n)}\times\mathbb{Z}_{m}\twoheadrightarrow\mathbb{Z}_2, j=1,2,3$, which are given by
\begin{align*}
\varpi_1(u,v)=u, \quad \varpi_2(u,v)=v, \qquad \varpi_3(u,v)=u+v.
\end{align*}
Let $\widetilde{\varpi}_j$ denote the composition of $\Lambda\to\Lambda^{{\rm ab}}$ followed by $\varpi_j$. It is easy to see that $\ker\widetilde{\varpi}_1=\langle\alpha^2,\beta\rangle$, $\ker\widetilde{\varpi}_2=\langle\alpha,\beta^2\rangle$, $\ker\widetilde{\varpi}_3=\langle\alpha^2,\alpha\beta\rangle$.
\end{proof}
The following is a specialization of Theorem 2.9 of \cite{CXZ15}:
\begin{lem} \label{lem:auto}
Let $\tilde{c}=\deg_2(r-1)\ge 2$. Each automorphism of $\Lambda(2^{\tilde{a}},2^{\tilde{b}};r)$ is given by
\begin{align}
\sigma_{x_{1},y_{1};x_{2},y_{2}}:\
\alpha^{u}\beta^{v}\mapsto \alpha^{x_{1}[u]_{r^{y_{1}}}+r^{y_{1}u}x_{2}[v]_{r^{y_{2}}}}\beta^{y_{1}u+y_{2}v}, \ \ \ u,v\geq 0,
\end{align}
for some integers $x_{1},x_{2},y_{1},y_{2}$ with
\begin{align*}
2&\nmid x_{1}y_{2}-x_{2}y_{1}, \qquad \deg_2(y_{1})\ge \tilde{b}-\tilde{c}, \qquad \deg_2(x_{2})\ge \tilde{a}-\tilde{b}, \\
y_{2}&\equiv \begin{cases} 1+2^{\tilde{a}-\tilde{c}-1}, &\text{if}\ \tilde{b}=\tilde{a}-\tilde{c}=\deg_{2}(y_{1})+\tilde{c}, \\ 1,&\text{otherwise} \end{cases}\pmod{2^{\tilde{a}-\tilde{c}}}.
\end{align*}
\end{lem}
Given $\sigma_{x_1,y_1;x_2,y_2}$ and $\sigma_{x'_1,y'_1;x'_2,y'_2}$, the composite $\sigma_{x'_1,y'_1;x'_2,y'_2}\circ\sigma_{x_1,y_1;x_2,y_2}$ sends $\alpha$ to $\alpha^{h_1}\beta^{y'_1x_1+y'_2y_1}$ and sends $\beta$ to $\alpha^{h_2}\beta^{y'_1x_2+y'_2y_2}$, with
\begin{align}
h_j=x'_1[x_j]_{r^{y'_1}}+r^{y'_1x_j}x'_2[y_j]_{r^{y'_2}}, \qquad j=1,2.
\end{align}
Since $2(\deg_2(y'_1)+\tilde{c})\ge \tilde{a}$, we have $r^{y'_1u}\equiv 1+(r-1)y'_1u\pmod{2^{\tilde{a}}}$, so that, writing $r'=(r-1)/2$,
\begin{align*}
[x_j]_{r^{y'_1}}&=\sum\limits_{i=0}^{x_j-1}r^{iy'_1}\equiv x_j+r'y'_1x_j(x_j-1)\pmod{2^{\tilde{a}}}, \\
r^{y'_1x_j}x'_2&\equiv x'_2+(r-1)y'_1x_jx'_2\equiv x'_2\pmod{2^{\tilde{a}}},
\end{align*}
due to $r^{y'_2}\equiv r^{y_2}\equiv r\pmod{2^{\tilde{a}}}$.
Note that if $\tilde{c}\ge \tilde{b}$, then
\begin{align}
h_1&\equiv x'_1(x_1+r'y'_1x_1(x_1-1))+x'_2y_1\pmod{2^{\tilde{a}}}, \label{eq:auto1} \\
h_2&\equiv x'_1x_2+x'_2y_2 \pmod{2^{\tilde{a}}}. \label{eq:auto2}
\end{align}
\section{Classification for a class of metacyclic 2-groups}
Suppose $\mathcal{CM}(\Delta,\Omega,\rho)$ is a $d$-valent RBCM$_t$ with skew-morphism $\varphi$, and
\begin{align}
\Omega=\{\omega_1,\ldots,\omega_d\}; \qquad \rho(\omega_i)=\omega_{i+1}, \quad i=1,\ldots,d.
\end{align}
We further assume $\ell\in\{(t-1,d),(t-1,d)/2\}$ as in Remark \ref{rmk:ell-iso}, so that
\begin{align}
\omega_{\ell+ti}=\omega_i^{-1} \qquad \text{for\ all} \quad i. \label{eq:inverse}
\end{align}
Let
\begin{align}
\eta_j=\omega_j\omega_{j-1}^{-1}=\omega_j\omega_{\ell+t(j-1)}.
\end{align}
Then
\begin{align}
\omega_i\omega_d^{-1}&=\eta_i\cdots\eta_1, \qquad i=1,\ldots,d; \\
\text{in\ particular,} \qquad \omega_d^{-2}&=\omega_{\ell}\omega_d^{-1}=\eta_\ell\cdots\eta_1; \label{eq:eta-ell-1} \\
\Delta^+&=\langle\eta_1,\ldots,\eta_d\rangle. \label{eq:Delta+}
\end{align}
Moreover,
\begin{align}
\varphi(\eta_j)=\varphi(\omega_j\omega_{\ell+t(j-1)})=\omega_{j+1}\varphi^{t}(\omega_{\ell+t(j-1)})=
\omega_{j+1}\omega_{\ell+tj}=\eta_{j+1}.
\end{align}
\subsection{Constraints}
\begin{lem} \label{lem:RBCMt-ab}
If $\mathcal{M}=\mathcal{CM}(\Gamma,\{\mu_1,\ldots,\mu_{\tilde{d}}\},\rho)$ is a $\tilde{d}$-valent RBCM$_t$ on an abelian 2-group $\Gamma$ such that ${\rm rk}(\Gamma)={\rm rk}(\Gamma^+)=2$, then
\begin{enumerate}
\item[\rm(i)] there is an isomorphism $\Gamma^+\cong\mathbb{Z}_{2^{k'}}\times\mathbb{Z}_{2^k}$ with $k'\ge k$, sending
$\theta_1$ to $(1,0)$ and $\theta_2$ to $(-1,1)$, where $\theta_j=\mu_j-\mu_{j-1}$;
\item[\rm(ii)] $\mathcal{M}$ is of type I, and $\tilde{d}=2^{k+1}\mid t+1$;
\item[\rm(iii)] $\psi$ being the skew-morphism of $\mathcal{M}$, $(\psi^+)^2={\rm id}$.
\end{enumerate}
\end{lem}
\begin{proof}
We cite the results of \cite{Ch17}; see Section 4.2, Corollary 4.3 and Corollary 4.7. Clearly ${\rm rk}(\Gamma)={\rm rk}(\Gamma^+)=2$ only
occurs in the last case of Section 4.2, and the conditions (i)--(iii) can be verified.
\end{proof}
\begin{rmk} \label{rmk:RBCMt-ab}
\rm From (i) we see that $|\theta_1|=2^{k'}$, $|\theta_1+\theta_2|=2^{k}$, and $|\theta_1-\theta_2|=2^{\max\{k'-1,k\}}$.
\end{rmk}
\begin{lem} \label{lem:restriction}
For the RBCM$_t$ $\mathcal{CM}(\Delta,\Omega,\rho)$, we have
\begin{itemize}
\item it is of type I, with $\Delta^+=\langle\alpha^2,\beta\rangle$;
\item $c>b$, $\deg_2(t+1)\ge b+1$;
\item and $\varphi^+=\sigma_{x_1,x_2;y_1,y_2}$ for some $x_1,x_2,y_1,y_2$ with $2\nmid y_1$ and
$$x_1^2+x_2y_1\equiv 1\pmod{2^{c-1}}, \qquad x_1+y_2\equiv y_2^2+x_2y_1-1\equiv 0\pmod{2^{b}}.$$
\end{itemize}
\end{lem}
\begin{rmk}
\rm Here $\varphi^+=\sigma_{x_1,x_2;y_1,y_2}$ means that it sends $\alpha^2$ to $\alpha^{2x_1}\beta^{y_1}$ and sends $\beta$ to $\alpha^{2x_2}\beta^{y_2}$. Similarly for other situations.
\end{rmk}
\begin{proof}
The proof is divided into three parts.
\begin{enumerate}
\item Assume $\Delta^+=\langle\alpha^2,\alpha\beta\rangle$, $\eta_j=\alpha^{u_j}\beta^{v_j}$ ($j=1,\ldots,d$), and
$$\varphi^+(\alpha^2)=(\alpha^2)^{x_1}(\alpha\beta)^{y_1}, \qquad \varphi^+(\alpha\beta)=(\alpha^2)^{x_2}(\alpha\beta)^{y_2}.$$
Since $|(\alpha^2)^{x_2}(\alpha\beta)^{y_2}|=|\varphi^+(\alpha\beta)|=|\alpha\beta|,$ we have $2\nmid y_2$;
since $$1=\varphi^+((\alpha^2)^{2^{a-1}})=((\alpha^2)^{x_1}(\alpha\beta)^{y_1})^{2^{a-1}}=(\alpha^{2x_1+[y_1]_r}\beta^{y_1})^{2^{a-1}},$$
we have $2\mid y_1$.
Hence $2\nmid v_j$ for each $j$. By (\ref{eq:eta-ell-1}), $\ell$ is even.
On the other hand, one can verify that the subgroup $\Xi=\langle\alpha^{2^c},\beta^{2^{c}}\rangle=\langle\alpha^{2^c},(\alpha\beta)^{2^c}\rangle$ is normal in $\Delta$ and invariant under $\varphi^+$. By Lemma \ref{lem:quotient} there is an induced RBCM$_t$ on $\Delta/\Xi$.
Clearly ${\rm rk}(\Delta/\Xi)={\rm rk}(\Delta^+/\Xi)=2$, hence by Lemma \ref{lem:RBCMt-ab},
$4\mid t+1$ and $\ell=(t-1,d)/2$ so that $\ell$ is odd. This is a contradiction.
\item Assume $\Delta^+=\langle\alpha,\beta^2\rangle$, $\eta_j=\alpha^{u_j}\beta^{2v_j}$ ($j=1,\ldots,d$)
and $\varphi^+=\sigma_{x_1,x_2;y_1,y_2}$.
Since $\Delta^+\cong\Lambda(2^a,2^{b-1};(1+2^c)^2)$, by Lemma \ref{lem:auto} we have
\begin{align}
&\deg_2(x_2)\ge a-b+1, \qquad \deg_2(y_1)\ge b-c-2, \nonumber \\
&\deg_2(y_2-1)\ge a-c-1. \label{ineq:deg-x2-y1-y2}
\end{align}
The subgroup $\Xi'=\langle\alpha^{2^c},\beta^{2^{b-1}}\rangle$ is normal in $\Delta$ and invariant under $\varphi^+$, with $\Delta^+/\Xi'\cong\mathbb{Z}_{2^c}\times\mathbb{Z}_{2^{b-2}}$. By Lemma \ref{lem:quotient} and Lemma \ref{lem:RBCMt-ab}, $4\mid t+1$ and $\ell=(t-1,d)/2$ so that $2\nmid\ell$.
If $2\mid x_2$, then $u_j\equiv u_1\pmod{2}$; by (\ref{eq:Delta+}), $2\nmid u_1$, hence $\eta_{\ell}\cdots\eta_1=\alpha^{u'}\beta^{v'}$ for some odd $u'$, but this contradicts (\ref{eq:eta-ell-1}). Hence $2\nmid x_2$, and consequently $b-1\ge a$.
Assume $\deg_2(y_1)=b-c-2$.
We have $\overline{\eta_1}=u_1\overline{\alpha}+v_1\overline{\beta^2}$ and
\begin{align*}
\overline{\eta_1}\pm\overline{\eta_2}=((x_1\pm 1)u_1+x_2v_1)\overline{\alpha}+(y_1u_1+(y_2\pm 1)v_1)\overline{\beta^2}.
\end{align*}
By Remark \ref{rmk:RBCMt-ab}, $|\overline{\eta_1}|=2^{b-2}$ and $|\overline{\eta_1}+\overline{\eta_2}|=2^c$, hence
\begin{align*}
\deg_2(y_1u_1+(y_2+1)v_1)\ge b-2-c.
\end{align*}
It follows from $(\overline{\varphi^+})^2={\rm id}$ that $x_2y_1+y_2^2\equiv 1\pmod{2^{b-2}}$ and then
$\deg_2(y_1)\ge\max\{\deg_2(y_2+1)+1,b-2\}$, so
$$b-2\le c+\deg_2(y_1u_1+(y_2+1)v_1)=c+\deg_2(y_2+1)\le c+\deg_2(y_1)-1,$$
contradicting the assumption.
Thus $c+\deg_2(y_1)\ge b-1$, and actually $\Xi=\langle \alpha^{2^c}\rangle$ is invariant under $\varphi^+$,
with $\Delta^+/\Xi\cong\mathbb{Z}_{2^c}\times\mathbb{Z}_{2^{b-1}}$.
An argument similar as above leads to $b-1\le c+\deg_2(y_2+1)$.
Remark \ref{rmk:RBCMt-ab} also implies $\deg_2(|\overline{\eta_1}-\overline{\eta_2}|)=\max\{b-2,c\}=b-2$, hence $\deg_2(y_2-1)=1$.
Then by (\ref{ineq:deg-x2-y1-y2}), $a\le c+2$, contradicting our hypothesis.
\item Thus $\Delta^+=\langle\alpha^2,\beta\rangle\cong\Lambda(2^{a-1},2^b;1+2^c)$. Suppose $\eta_j=\alpha^{2u_j}\beta^{v_j}$,
and $\varphi^+=\sigma_{x_1,x_2;y_1,y_2}$. Similarly as in part 2, we can first show $2\nmid\ell$.
If $2\mid y_1$, then $v_j\equiv v_1\pmod{2}$ for all $j$;
by (\ref{eq:Delta+}), $2\nmid v_1$, then, since $2\nmid \ell$, we have
$\eta_{\ell}\cdots\eta_1=\alpha^{2u'}\beta^{v'}$ for some odd $v'$, contradicting (\ref{eq:eta-ell-1}).
Hence $2\nmid y_1$, and then by Lemma \ref{lem:auto}, $b\le c+\deg_2(y_1)=c$.
The subgroup $\Xi=\langle\alpha^{2^c}\rangle$ is normal in $\Delta$ and invariant under $\varphi^+$. Applying Lemma \ref{lem:RBCMt-ab} to the quotient RBCM$_t$ on $\Delta/\Xi\cong\mathbb{Z}_{2^c}\times\mathbb{Z}_{2^b}$, we obtain $\deg_2(t+1)\ge b+1$.
The congruence relations follow from Lemma \ref{lem:RBCMt-ab} (iii) and the expression for $(\overline{\varphi^+})^2$:
\begin{align*}
\overline{\alpha^2}&\mapsto (x_1^2+x_2y_1)\overline{\alpha^2}+y_1(x_1+y_2)\overline{\beta}, \\
\overline{\beta}&\mapsto x_2(x_1+y_2)\overline{\alpha^2}+(x_2y_1+y_2^2)\overline{\beta}.
\end{align*}
\end{enumerate}
\end{proof}
\subsection{Normalization}
Note that if $\tau=\sigma_{p_1,p_2;q_1,q_2}\in{\rm Aut}^+(\Delta)$, then
\begin{align*}
\tau^+(\alpha^2)=\alpha^{p_1(1+r^{q_1})}\beta^{2p_1}, \qquad \tau^+(\beta)=(\alpha^2)^{p'}\beta^{q_2},
\end{align*}
with $2p'\equiv p_2\pmod{2^a}$, so $2\mid p_2$. We can write
\begin{align}
\tau^+=\sigma_{p_1(1+r^{q_1})/2,p_2/2;2q_1,q_2}.
\end{align}
\begin{rmk} \label{rmk:auto}
\rm Applying Lemma \ref{lem:auto} to both $\Delta$ and $\Delta^+$, we see that, given $\phi=\sigma_{z_1,z_2;w_1,w_2}\in{\rm Aut}(\Delta^+)$, there exists $\tau\in{\rm Aut}^+(\Delta)$ with $\tau^+=\phi$ if and only if $2\mid w_1$ and $\deg_2(w_2-1)\ge a-c$.
\end{rmk}
\begin{lem} \label{lem:quadratic}
Suppose $s^2\equiv h\pmod{2^e}$ with $e\ge 3$. There exists a sequence $\{\tilde{s}_k\}_{k=2}^{\infty}$ such that $\tilde{s}_k^2\equiv h\pmod{2^{k(e-1)}}$ and $\tilde{s}_{k+1}\equiv\tilde{s}_k\pmod{2^{k(e-1)-1}}$ for each $k$.
Consequently, for each $\tilde{e}>e$, there exists $\tilde{s}\in\mathbb{Z}$ such that $\tilde{s}^2\equiv h\pmod{2^{\tilde{e}}}$ and $\tilde{s}\equiv s\pmod{2^{e-1}}$.
\end{lem}
\begin{proof}
We construct $\tilde{s}_k$ inductively. Write $h=s^2+2^eu_1$ and set $\tilde{s}_2=s+2^{e-1}u_1$; clearly $\tilde{s}_2^2\equiv h\pmod{2^{2(e-1)}}$ and $\tilde{s}_2\equiv s\pmod{2^{e-1}}$.
In each step, write $h=\tilde{s}_k^2+2^{k(e-1)}u_k$ and set
$$\tilde{s}_{k+1}=\tilde{s}_k+2^{k(e-1)-1}u_k,$$
then $\tilde{s}_{k+1}\equiv \tilde{s}_k\pmod{2^{k(e-1)-1}}$ and $\tilde{s}_{k+1}^2\equiv h\pmod{2^{(k+1)(e-1)}}$, due to $2k(e-1)-2\ge (k+1)(e-1)$.
\end{proof}
\begin{lem} \label{lem:normal-auto}
There exists $\tau_1\in{\rm Aut}^+(\Delta)$ such that $(\tau_1\varphi\tau_1^{-1})^+=\sigma_{z,0;1,w}$ with $z\equiv -1\pmod{2^{c-2}}$, $w\equiv 1\pmod{2^{b-1}}$ and $z+w\equiv 0\pmod{2^b}$.
\end{lem}
\begin{proof}
We are going to find $x'_1,y'_1,x'_2,y'_2, z,w$ satisfying the following:
\begin{align}
x'_1[x_1]_{r^{y'_1}}+x'_2y_1 &\equiv z[x'_1]_r \pmod{2^{a-1}}, \label{eq:I2-x-1} \\
x'_1x_2+x'_2y_2&\equiv zx'_2\pmod{2^{a-1}}, \label{eq:I2-x-2} \\
y'_1x_1+y'_2y_1&\equiv x'_1+wy'_1\pmod{2^{b}}, \label{eq:I2-y-1} \\
y'_1x_2+y'_2y_2&\equiv x'_2+wy'_2\pmod{2^{b}}. \label{eq:I2-y-2}
\end{align}
In view of (\ref{eq:auto1}), (\ref{eq:auto2}), these will ensure $$\sigma_{x'_1,x'_2;y'_1,y'_2}\circ\varphi^+=\sigma_{z,0;1,w}\circ\sigma_{x'_1,x'_2;y'_1,y'_2}.$$
Let $e=\min\{c+1,\deg_2(x_2)\}\ge 3$; let
$$f(x)=(x-y_2)(x-x_1)-x_2y_1-2^{c-1}(x-y_2)x_1(1-y_1).$$
For any $\tilde{e}>0$, the congruence equation $f(x)\equiv 0\pmod{2^{\tilde{e}}}$ is equivalent to one of the form
$(x-g)^2\equiv h\pmod{2^{\tilde{e}}}$.
The equation $f(x)\equiv 0\pmod{2^{e}}$ has a tautological solution $x=x_1$, hence by Lemma \ref{lem:quadratic}, there exists $z$ with
$f(z)\equiv 0\pmod{2^a}$ and $z\equiv x_1\pmod{2^{e-1}}$. Note that $\deg_2(z-y_2)=\deg_2(x_1-y_2)=1$.
Take $x'_2$ with $(z-y_2)x'_2\equiv x_2y_1\pmod{2^{a-1}}$, then $\deg_2(x'_2)\ge (a-1-b)-1\ge a-c-1$; put $w=y_2-x'_2$.
Take $y'_1=0, y'_2=1$, $x'_1=y_1$. It can be verified that (\ref{eq:I2-x-1})--(\ref{eq:I2-y-2}) all hold.
Now that $\deg_2(x'_2)=\deg_2(x_2)-1\ge a-c-1$, by Remark \ref{rmk:auto}, $\sigma_{x'_1,y'_1;x'_2,y'_2}=\tau_1^+$ for some $\tau_1\in{\rm Aut}^+(\Delta)$.
Let $\psi$ denote the automorphism of $\Delta/\langle\alpha^{2^c}\rangle$ induced by $\tau_1\varphi\tau_1^{-1}$, then $(\psi^+)^2={\rm id}$, hence similarly as in the proof of Lemma \ref{lem:restriction}, we have $z^2\equiv 0\pmod{2^{c-1}}$ and $z+w\equiv w^2-1\equiv 0\pmod{2^b}$,
which together with $w\equiv y_2\equiv 1\pmod{4}$ implies $z\equiv -1\pmod{2^{c-2}}$ and $w\equiv 1\pmod{2^{b-1}}$.
\end{proof}
\begin{lem} \label{lem:conjugate}
Suppose $\tau\in{\rm Aut}^+(\Delta)$ with $\tau^+=\sigma_{p_1,p_2;q_1,q_2}$, and suppose $z'\equiv z\equiv -1\pmod{4}$. Then $\tau^+\sigma_{z,0;1,w}(\tau^+)^{-1}=\sigma_{z',0;1,w'}$ if and only if
\begin{align}
\deg_2(p_2)\ge a-2, \qquad z'\equiv z+p_2\pmod{2^{a-1}}, \label{eq:conjugate-1} \\
p_1-q_2\equiv (z-w)q_1\pmod{2^b}, \qquad w'\equiv w\pmod{2^b}. \label{eq:conjugate-2}
\end{align}
In particular, $\tau^+\sigma_{z,0;1,w}(\tau^+)^{-1}=\sigma_{z,0;1,w}$ if and only if
$$p_2\equiv 0\pmod{2^{a-1}}\qquad \text{and} \qquad p_1-q_2\equiv (z-w)q_1\pmod{2^b}.$$
\end{lem}
\begin{proof}
By Remark \ref{rmk:auto}, $2\mid q_1$ and $\deg_2(q_2-1)\ge a-c$. The condition $\tau^+\sigma_{z,0;1,w}=\sigma_{z',0;1,w'}\tau^+$ is equivalent to
\begin{align}
p_1[z]_{r^{q_1}}+p_2&\equiv z'[p_1]_r\pmod{2^{a-1}}, \label{eq:conj-1} \\
p_2w&\equiv z'p_2\pmod{2^{a-1}}, \label{eq:conj-2} \\
q_1z+q_2&\equiv p_1+w'q_1\pmod{2^b}, \label{eq:conj-3} \\
q_2w&\equiv p_2+w'q_2\pmod{2^b}. \label{eq:conj-4}
\end{align}
Clearly (\ref{eq:conj-2}) implies $\deg_2(p_2)\ge a-2$, then (\ref{eq:conj-4}) implies $w'\equiv w\pmod{2^b}$ and also $p_1-q_2\equiv (z-w)q_1\pmod{2^b}$. Finally, it follows from (\ref{eq:conj-1}) that
\begin{align*}
&p_1z(1+2^{c-1}(z-1)q_1)+p_2 \equiv z'p_1(1+2^{c-1}(p_1-1)) \\
\equiv &z'p_1(1+2^{c-1}(q_2-1+(z-w)q_1)) \\
\equiv &z'p_1(1+2^{c-1}(z-1)q_1),
\end{align*}
using $\deg_2((w-1)q_1),\deg_2(q_2-1)\ge a-c$, hence $z'\equiv z+p_2$.
Conversely, it can be easily verified that (\ref{eq:conjugate-1}), (\ref{eq:conjugate-2}) imply (\ref{eq:conj-1})--(\ref{eq:conj-4}).
\end{proof}
\begin{lem} \label{lem:normal-u}
There exists a unique $\tau_2\in{\rm Aut}^+(\Delta)$ such that $\tau_2^+\sigma_{z,0;1,w}=\sigma_{z,0;1,w}\tau_2^+$ and $\tau_2\tau_1(\omega_d)=\alpha^{\tilde{u}}\beta$ with $0<\tilde{u}<2^{a-c}$.
\end{lem}
\begin{proof}
Suppose $\tau\in{\rm Aut}^+(\Delta)$ and $\tau^+=\sigma_{x_1,x_2;y_1,y_2}$. By Remark \ref{rmk:auto}, $2\mid y_1$, $\deg_2(y_2-1)\ge a-c$ and $\tau=\sigma_{x_1,x_2;y_1/2,y_2}$.
By Lemma \ref{lem:conjugate}, $\tau^+\sigma_{z,0;1,w}=\sigma_{z,0;1,w}\tau^+$ if and only if
\begin{align}
x_2\equiv 0\pmod{2^{a-1}} \qquad \text{and} \qquad (z-w)y_1\equiv x_1-y_2\pmod{2^{b}}. \label{eq:commute}
\end{align}
Suppose $\tau_1(\omega_d)=\alpha^{u_0}\beta^{v_0}$, then $\tau\tau_1(\omega_d)=\alpha^{\tilde{u}}\beta$ if and only if
\begin{align}
x_1[u_0]_{r^{y_1/2}}&\equiv \tilde{u}\pmod{2^a}, \label{eq:u0} \\
\frac{y_1}{2}u_0+y_2v_0&\equiv 1\pmod{2^b}; \label{eq:v0}
\end{align}
the second congruence implies
$$[u_0]_{r^{y_1/2}}\equiv u_0+2^{c-1}u_0(u_0-1)\frac{y_1}{2}\equiv u_0+2^{c-1}(u_0-1)(1-v_0).$$
Note that (\ref{eq:commute}), (\ref{eq:v0}) imply
\begin{align}
\tilde{u}\equiv x_1u_0\equiv ((z-w)y_1+1)u_0\equiv 2(z-w)(1-v_0)+u_0\pmod{2^{a-c}}. \label{eq:u-tilde}
\end{align}
Take the unique $\tilde{u}$ satisfying $0<\tilde{u}<2^{a-c}$ and (\ref{eq:u-tilde}), and take $x_1$ with
$$x_1(u_0+2^{c-1}(u_0-1)(1-v_0))\equiv\tilde{u}\pmod{2^a}$$
so that (\ref{eq:u0}) holds, then $x_2,y_1,y_2$ are determined by (\ref{eq:commute}) and (\ref{eq:v0}).
\end{proof}
\subsection{Solving congruence equations}
\begin{nota}
\rm To simplify the writing, we abbreviate $A\equiv B\pmod{2^{a-1}}$ to $A\equiv B$ whenever there is no danger of confusion. Let $\ell'=(\ell-1)/2$, $t'=(t+1)/2$ and $r=1+2^c$.
\end{nota}
Suppose $\varphi^+=\sigma_{z,0;1,w}$ with
\begin{align}
z^2\equiv 1\pmod{2^{c-1}}, \qquad z+w\equiv w^2-1\equiv 0\pmod{2^b}; \label{eq:normal-zw}
\end{align}
suppose $\eta_i=\alpha^{2u_i}\beta^{v_i}$ and $\omega_d=\alpha^{\tilde{u}}\beta$, with $0<\tilde{u}<2^{a-c}$.
Then $\omega_i\omega_d^{-1}=\eta_i\cdots\eta_1=\alpha^{2f_i}\beta^{g_i}$,
so that
\begin{align}
\omega_i&=\alpha^{2f_i+r^{g_i}\tilde{u}}\beta^{g_i+1}, \\
\text{with}\qquad
f_i&=u_i+r^{v_i}u_{i-1}+\cdots+r^{v_i+\cdots+v_2}u_1, \label{eq:f} \\
g_i&=v_i+\cdots+v_1, \label{eq:g}
\end{align}
The condition (\ref{eq:inverse}) is equivalent to
\begin{align}
f_{\ell+ti}+r^{-(g_i+1)}f_i+\frac{1}{2}(r^{g_{\ell+ti}}+r^{-1})\tilde{u}&\equiv 0, \label{eq:inverse-1} \\
g_{\ell+ti}+g_i+2&\equiv 0 \pmod{2^b}. \label{eq:inverse-2}
\end{align}
Due to (\ref{eq:normal-zw}), we have $\varphi^2(\beta)=\beta$, $\varphi^2(\alpha^2)=\alpha^{2s}$ with
\begin{align}
s=z[z]_{r}\equiv z^2+2^{c-1}(z-1),
\end{align}
hence
$v_{i+2}\equiv v_i\pmod{2^{b}}$ and $u_{i+2}\equiv su_i\pmod{2^{a-1}}$.
Note that
\begin{align}
u_2=z[u_1]_r\equiv (z-2^{c-1}(u_1-1))u_1, \qquad v_2=u_1+wv_1,
\end{align}
hence
\begin{align}
u':&=u_2+r^{u_1+v_1}u_1\equiv (z+1+2^{c-1}(u_1+2v_1+1))u_1, \label{eq:u'} \\
(s-1)u'&\equiv (z+1)^2(z-1)u_1\equiv -2(z+1)^2, \\
v':&=v_2+v_1=u_1+(w+1)v_1. \label{eq:v'}
\end{align}
Now (\ref{eq:f}), (\ref{eq:g}) imply
\begin{align}
f_{2k}&\equiv(s^{k-1}+r^{u_1+2v_1}s^{k-2}+\cdots+r^{(k-1)(u_1+2v_1)})u' \equiv[k]_s\cdot u' \nonumber \\
&\equiv ku'-k(k-1)(z+1)^2, \\
f_{2k+1}&\equiv s^ku_1+r^{v_1}f_{2k}\equiv s^ku_1+[k]_s\cdot u' \nonumber \\
&\equiv s^ku_1+ku'-k(k-1)(z+1)^2, \\
g_{2k}&\equiv kv'\pmod{2^{b}}, \\
g_{2k+1}&\equiv kv'+v_1\pmod{2^{b}}.
\end{align}
\begin{lem}
The conditions {\rm(\ref{eq:inverse-1})}, {\rm(\ref{eq:inverse-2})} hold if and only if
\begin{align}
\ell'v'+v_1+2&\equiv 0\pmod{2^b} \label{eq:condition1}, \\
u_1+\ell'u'-\ell'(\ell'-1)(z+1)^2+(1+2^{c-1}(v_1-1))\tilde{u}&\equiv 0\pmod{2^{a-1}}, \label{eq:condition2} \\
(s-1)(u'+1)-2^{c-1}v'&\equiv 0\pmod{2^{a-1}}, \label{eq:condition3} \\
\deg_2(t+1)&\ge a-c+2. \label{eq:condition4}
\end{align}
\end{lem}
\begin{proof}
When $i=2k$, the condition (\ref{eq:inverse-2}) is
\begin{align*}
v_1+(\ell'+kt)v'+kv'+2\equiv 0\pmod{2^{b}},
\end{align*}
which is, due to $\deg_2(t+1)\ge b+1$, equivalent to (\ref{eq:condition1}).
Conversely, if (\ref{eq:condition1}) is satisfied, then one can verify that (\ref{eq:inverse-2}) also holds for $i=2k+1$.
When $i=2k$, the condition (\ref{eq:inverse-1}) becomes
\begin{align}
s^{\ell'+kt}u_1+(\ell'+kt+k)u'-((\ell'+kt)(\ell'+kt-1)+k(k-1))(z+1)^2 \nonumber \\
+(1+2^{c-1}((\ell'+kt)v'+v-1))\tilde{u}\equiv 0; \label{eq:deduce-f1}
\end{align}
using $s^g\equiv 1+g(s-1)$, $\deg_2(t+1)\ge b+1$ and $4(z+1)^2\equiv 0$, we can compute the difference (meaning the result of subtracting (\ref{eq:deduce-f1}) itself from the equation obtained by replacing $k$ by $k+1$ in (\ref{eq:deduce-f1})) to be
\begin{align}
-(s-1)u_1+(2\ell'-2)(z+1)^2-2^{c-1}v'\tilde{u}\equiv 0. \label{eq:deduce-f2}
\end{align}
Setting $k=0$ in (\ref{eq:deduce-f1}), we obtain
\begin{align}
s^{\ell'}u_1+\ell'u'-\ell'(\ell'-1)(z+1)^2+(1+2^{c-1}(\ell'v'+v_1-1))\tilde{u}\equiv 0. \label{eq:deduce-f3}
\end{align}
When $i=2k+1$, the condition (\ref{eq:inverse-1}) is
\begin{align}
(\ell'+t'+kt+k)u'-((\ell'+t'+kt)(\ell'+t'+kt-1)+k(k-1))(z+1)^2 \nonumber \\
+s^ku_1-2^c(kv'+v_1+1)u_1+(1+2^{c-1}((\ell'+t'+kt)v'-1))\tilde{u}\equiv 0, \label{eq:deduce-f4}
\end{align}
whose difference is
\begin{align}
(s-1-2^cv')u_1+2(\ell'+t'-2)(z+1)^2-2^{c-1}v'\tilde{u}\equiv 0,
\end{align}
which is equivalent to (\ref{eq:deduce-f2}), as $\deg_2(t')\ge b$.
Setting $k=0$ in (\ref{eq:deduce-f4}), we obtain
\begin{align*}
(\ell'+t')u'-\ell'(\ell'-1)(z+1)^2+(1-2^c(v_1+1))u_1+(1+2^{c-1}(\ell'v'-1))\tilde{u}\equiv 0,
\end{align*}
which, via multiplying $1+2^c(v_1+1)$ and using (\ref{eq:condition1}), is equivalent to
\begin{align}
u_1+(\ell'+t')u'-\ell'(\ell'-1)(z+1)^2+(1+2^{c-1}(v_1-1))\tilde{u}\equiv 0. \label{eq:deduce-f6}
\end{align}
Substituting $u$ into (\ref{eq:deduce-f2}), (\ref{eq:deduce-f3}), we obtain, respectively,
\begin{align*}
(s-1)u'&\equiv (2^{c-1}v'-(s-1))\tilde{u} , \\
((\ell')^2(s-1)+t')u'&\equiv \ell'(2^{c-1}v'-(s-1))\tilde{u}.
\end{align*}
The first is equivalent to (\ref{eq:condition3}),
which in particular implies $\deg_2(s-1)=c-1$ so that $\deg_2(u')=\deg_2(z+1)=c-2$, and then the second
is equivalent to (\ref{eq:condition4}). This also reduces (\ref{eq:deduce-f6}) to (\ref{eq:condition2}).
\end{proof}
\subsection{The result}
Now that $\deg_2(z+1)=c-2$, we have $z\equiv -1+2^{c-2}\pmod{2^{c-1}}$, hence $w\equiv 1-2^{c-2}\pmod{2^b}$; just put $w=1-2^{c-2}$.
Referring to (\ref{eq:u'})--(\ref{eq:v'}), the condition (\ref{eq:condition1}) is equivalent to
\begin{align}
v_1\equiv -\frac{2+\ell'u_1}{1+\ell'(w+1)} \pmod{2^b}, \label{eq:v1}
\end{align}
then (\ref{eq:condition3}) becomes
\begin{align*}
s-1-2(z+1)^2\equiv\frac{2^{c-1}(u_1-4)}{1+\ell'(w+1)},
\end{align*}
implying
\begin{align}
u_1\equiv\ell\frac{z^2-1}{2^{c-1}}-\ell'(2^{c-2}+4)\pmod{2^{a-c}},
\end{align}
and then (\ref{eq:condition2}) implies
\begin{align}
\tilde{u}\equiv \ell(2-\frac{z^2-1}{2^{c-1}})-4;
\end{align}
this together with the assumption that $0<\tilde{u}<2^{a-c}$ determines $\tilde{u}$, which via (\ref{eq:condition2}) in turn determines $u_1$, and finally via (\ref{eq:v1}) determines $v_1$.
Therefore, everything is determined by $z=-1+2^{c-2}+2^{c-1}z_1$, with $0\le z_1<2^{a-c-1}$, so there are $2^{a-c-1}$ isomorphism classes.
\section{Introduction}
Continual learning refers to the ability of a model to learn from a stream of incoming data sequentially over time, while retaining knowledge acquired from previous data. Continual learning is a vital component of machine learning. It enables a model to generalise in situations where the stream of data may be non-stationary, with unavailable data during training, or when new information is incrementally made available to the model over time~\citep{kirkpatrick2017overcoming}.
A phenomenon called catastrophic forgetting hinders continual learning in many artificial neural networks (ANNs)~\citep{howard2018universal, schak2019study}. Catastrophic forgetting refers to the model losing knowledge of previous datasets or tasks as the model is trained sequentially on information relevant to a new dataset or task~\citep{kirkpatrick2017overcoming}. Catastrophic forgetting is also called catastrophic interference in older works~\citep{MCCLOSKEY1989109}.
If a model, e.g. an ANN, has very robust memory that is not susceptible to catastrophic forgetting, then such a model may be susceptible to overfitting (i.e. memorisation of the training set). Overfitting leads to poor generalisation. Humans, however, have both reasonable (albeit imperfect) memory retention and good generalisation, so in principle it should be possible to build a model with similar desirable properties.
Splines are piece-wise defined functions. The application of splines to mitigate catastrophic forgetting was absent in most of the reviewed literature. Due to the piece-wise definition, each spline parameter only affects the function on some small region while keeping the rest of the function unchanged, thus making it a good candidate for continual learning. Cubic B-splines are considered in this paper.
Catastrophic forgetting is typically mitigated with two broad strategies: carefully designed and parameterised models, or through augmented training and regularisation techniques ~\citep{robins1995catastrophic,shin2017continual}. This paper attempts to address catastrophic forgetting through the following novel contributions:
\begin{itemize}
\item A novel Spline Additive Model (SAM) with guaranteed robustness to catastrophic forgetting is proposed, which is useful for many applications. However, it is not a universal function approximator.
\item The Kolmogorov-Arnold Spline Additive Model (KASAM) is proposed, which is a novel architecture that combines SAMs with the Kolmogorov-Arnold representation theorem to create a universal function approximator.
\end{itemize}
These goals demand reviewing the fundamentals of function approximation in one variable, with B-spline functions that are resistant to catastrophic forgetting. The paper proceeds to build multi-variable function approximators with single-variable B-spline functions and the Kolmogorov-Arnold representation theorem.
The rest of the paper is structured as follows: Section~\ref{sec:previous} provides an overview of the related literature. Section~\ref{sec:catastrophic} illustrates the concept of catastrophic forgetting. Section~\ref{sec:singleVar} lists the properties of splines in the context of single-variable function approximation. Section~\ref{sec:sam} discusses SAMs as function approximators. Section~\ref{sec:KASAM} introduces the KASAM architecture. Section~\ref{sec:pseudo} describes the pseudo-rehearsal technique employed in this study. Section~\ref{sec:method} details the methodology. Section~\ref{sec:exp} presents the empirical results. Section~\ref{sec:conclusions} concludes the paper, and Section~\ref{sec:opportunities} proposes some directions for future research.
\section{Relevant Studies}\label{sec:previous}
It has been hypothesized that overlapping, dense and distributed representations in ANNs lead to catastrophic forgetting~\citep{kaushik2021understanding}. Catastrophic forgetting occurs when many parameter estimates that store knowledge for one task change during sequential learning to meet the objectives of another task~\citep{kirkpatrick2017overcoming, mcrae1993catastrophic}. If the same parameter is shared or overlaps with many inputs, then it is more susceptible to catastrophic forgetting. Any gradient-based updates would affect the same shared parameter and thus, the parameter value would be more likely to change between tasks. Catastrophic forgetting can be ameliorated with models that are parameterised in such a way that weight sharing or overlap is minimised over all inputs.
Training techniques to counteract catastrophic forgetting include identifying and protecting key parameters for different tasks. Parameter regularisation to penalise adjusting parameters from their initial values is one approach. Retraining over all training data for all tasks can also be done, although this scales poorly as the amount of training data increases.
Data augmentation techniques can also be used to counteract catastrophic forgetting, some of which are referred to as rehearsal techniques. There are many suggested rehearsal and pseudo-rehearsal techniques~\citep{robins1995catastrophic}. Pseudo-rehearsal works quite well for low dimensional problems, but experimental evidence and practical use cases suggest that pseudo-rehearsal performs worse on high dimensional problems. It is hypothesised that the concentration of measure in high dimensional probability distributions requires more complexity to be modelled for acceptable results. As a consequence, some researchers considered estimating the data's distribution using a Generative Adversarial Network (GAN) for higher dimensional problems.
However, training GANs requires a lot of computational resources, and might not be a scalable solution for all problems and use-cases.
Using splines in ANNs has been studied to some extent. \citet{scardapane2017learning} studied learnable activation functions parameterised by splines. \citet{scardapane2017learning} introduced vectorised gradient descent based methods to train the parameters of their architecture. \citet{scardapane2017learning} tested the function approximation ability of the architecture. \citet{douzette2017b} made use of spline networks, allowing trainable and non-uniform sub-intervals, and developed algorithms for evaluating splines and their derivatives with respect to all their parameters. The research concluded that splines which allowed non-uniform partitions that vary during training achieved state-of-the-art results for spline-based architectures. The accuracy of non-uniform splines compared well against conventional neural networks. However, allowing the partitions of sub-intervals to vary had counter-productive effects. Intermediate or hidden layers could take on values outside the support interval of the splines. Since splines are zero outside their sub-intervals, this could lead to increased training times~\citep{douzette2017b}. To the best of the authors' knowledge, catastrophic forgetting was not considered in the context of splines.
A lot of research has taken place on the neural network approximations of the Kolmogorov-Arnold representation theorem~\citep{funahashi1989approximate,scarselli1998universal,igelnik2003kolmogorov,braun2009constructive,guliyev2018approximation,sannai2019universal, schmidt2021kolmogorov,shen2021neural}. \citet{shen2021neural} gives a good overview of the direction of research, and has shown that the focus has been on reducing the number and width of the hidden layers required to approximate the theorem with a neural network, at the trade-off of increasing the bounded approximation error. The difference between the well-known universal function approximation theorems for arbitrarily deep or arbitrarily wide neural networks and the Kolmogorov-Arnold representation theorem is striking.
ANN architectures using splines and the Kolmogorov-Arnold representation theorem have also been explored~\citep{igelnik2003kolmogorov,lane1991multi}. \citet{igelnik2003kolmogorov} put forth a Kolmogorov Spline Network architecture, which was also based on the Kolmogorov-Arnold representation theorem. However, this differed from our work through the manner in which the weights are applied~\citep{igelnik2003kolmogorov}. Furthermore, no prototype was constructed, and no experimental work was performed.
The NeurIPS (originally NIPS) paper \citet{lane1991multi} explored a Kolmogorov based multi-layer perceptron architecture which employed B-splines in the nodes. \citet{lane1991multi} trained the architecture with gradient-descent based techniques and tested out the function approximation ability. \citet{lane1991multi} states that the architecture was able to approximate functions, and commented on the fast rate of convergence of the network’s parameters. Their work is most relevant to this paper. KASAM has a skip-connection to the output and can more easily represent functions that are sum-decomposable into a sum of single-variable functions. This paper focuses on memory retention and catastrophic forgetting, whereas the paper \citet{lane1991multi} does not consider catastrophic forgetting.
The research by Numenta and Jeffrey Hawkins is focused on neuroscience and reverse-engineering the computational mechanisms employed in the brain~\citep{htm_neocortex}. Their research is based on discrete and stochastic models like hierarchical temporal memory with update rules that appear otherworldly compared to the familiar differentiable models that can be trained with gradient descent algorithms. One aspect that is missing is a universal function approximation theorem proving the expressive power of their models. One component of their technology is Sparse Distributed Representations (SDRs) that encode real numbers with sparse vectors \citep{Hinton1990DistributedR}. SDRs are similar to zeroth order B-splines, although the connection has not been explicitly shown in the reviewed literature.
The potential use of B-spline functions to mitigate catastrophic forgetting was not thoroughly investigated in any of the reviewed literature. The convergence rate and numerical stability of B-splines require more thorough analysis and empirical study.
\section{Susceptibility to Catastrophic Forgetting}\label{sec:catastrophic}
Model parameters that are shared over the entire input-domain of a function approximator make models such as ANNs or linear functions susceptible to catastrophic forgetting. A parameter that affects a model over all inputs is a globally shared parameter, and not a localised parameter. Parameters that are localised only affect a model's output over a small region of the input-domain. Catastrophic forgetting is easily demonstrated with a simple linear function approximator for a single-variable scalar function.
A linear model is trained on the data sampled from the distribution for the first task in Figure~\ref{fig:fig_label_linear_functions}. The model is thereafter trained on the second task. Without any additional guidance, such as revision of the first task or weight regularisation to prevent catastrophic forgetting, the model promptly unlearns the first task.
\begin{figure}[h!]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/Linear_functions_catastrophic_interference.png}
\caption{\small A linear model with only two parameters is trained on data for the first task. The same model is then trained on data from a second task, and will adapt to the new task. The model abruptly forgets about the first task without revision. Trainable parameters that are globally shared across the input-domain make the model susceptible to catastrophic forgetting.}
\label{fig:fig_label_linear_functions}
\end{figure}
If the linear model had been trained on both tasks simultaneously, then it would have found a reasonable solution for both tasks. The individual tasks have different data distributions, which affects the out-of-sample performance and the potential for continual or incremental learning. The joint distribution for both tasks has a non-linear optimal function (which is piece-wise linear for this specific example).
The seemingly disparate ideas of catastrophic forgetting, distribution shift, out-of-sample performance, continual learning, and the non-linearity of a target function are related. The relationship is not a simple analytical rule, but it is worth investigating.
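
The effect shown in Figure~\ref{fig:fig_label_linear_functions} can be reproduced in a few lines of code. The sketch below is only illustrative: the synthetic task definitions, learning rate and epoch count are our own assumptions, not the experimental setup used later in this paper. It fits a two-parameter linear model by gradient descent on the first task and then on the second, after which the error on the first task grows sharply.
\begin{verbatim}
# Sketch of catastrophic forgetting in a linear model (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

def make_task(x_lo, x_hi, slope, intercept, n=100):
    x = rng.uniform(x_lo, x_hi, n)
    return x, slope * x + intercept + 0.05 * rng.standard_normal(n)

def fit(w, b, x, y, lr=0.1, epochs=2000):
    for _ in range(epochs):
        err = w * x + b - y
        w, b = w - lr * np.mean(err * x), b - lr * np.mean(err)
    return w, b

task1 = make_task(0.0, 0.5, slope=2.0, intercept=0.0)
task2 = make_task(0.5, 1.0, slope=-2.0, intercept=2.0)
mse = lambda w, b, t: float(np.mean((w * t[0] + b - t[1]) ** 2))

w, b = fit(0.0, 0.0, *task1)
print("after task 1:", mse(w, b, task1), mse(w, b, task2))
w, b = fit(w, b, *task2)
print("after task 2:", mse(w, b, task1), mse(w, b, task2))  # task-1 error grows
\end{verbatim}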
\section{Splines for Single Variable Function Approximation}\label{sec:singleVar}
Single-variable function approximation on the unit interval is typically done with the use of single-variable basis functions. Examples include the polynomial basis (Legendre polynomials), or the Fourier basis. Using a large density (i.e. number) of basis functions can lead to high-variance models with extreme oscillations between the finitely many training data points. It is an open question if there are regularisation methods that attenuate such oscillations for any choice of basis functions. The specific basis functions considered in this paper are uniform cubic B-splines, illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_basis_functions}.
\begin{figure}[h]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/uniform_cubic_b_spline_basis_functions.png}
\caption{\small Uniform cubic B-spline basis functions with a density of 8. Note that the basis functions have been plotted for input values outside the unit interval to clearly see all 8 basis functions. Each basis function is zero almost everywhere except on a small sub-interval. Each basis function is a scaled and translated version of the same function such that $ S_{i}(x) = S(w_{i}x + b_{i})$. }
\label{fig:fig_label_uniform_cubic_spline_basis_functions}
\end{figure}
The use of B-splines is a flexible and expressive method of function approximation. The analysis of B-splines and characterising their general behaviour is simple compared to other basis functions. Note that the number of basis functions is fixed and chosen independently of the training data. The model considered in this paper does not necessarily interpolate between consecutive data points. Exact interpolation of training data is ill-advised for tasks with noisy training data, since the resulting model would have a lot of variance and oscillate wildly. The order of a B-spline (e.g. zeroth or cubic) is different to the number of basis functions. The number or density of basis functions is sometimes referred to as sub-intervals or knots in the literature on splines and B-splines.
Each basis function for a uniform cubic B-spline $S_{i}$ can be obtained by scaling and translating the input of the following activation function:
$$ S(x) =\begin{cases}
\frac{1}{6} x^{3} & 0 \leq x < 1\\
\frac{1}{6} \left[-3(x-1)^{3} +3(x-1)^{2} +3(x-1) + 1 \right] & 1 \leq x < 2\\
\frac{1}{6} \left[3(x-2)^{3} -6(x-2)^{2} + 4 \right] & 2 \leq x < 3\\
\frac{1}{6} ( 4-x ) ^{3} & 3 \leq x < 4\\
0 & otherwise
\end{cases}
$$
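
A direct, vectorised implementation of this activation is sketched below; the function name is our own choice.
\begin{verbatim}
# Vectorised sketch of the activation S(x) defined above.
import numpy as np

def cubic_bspline_activation(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = (x >= 0) & (x < 1); out[m] = x[m] ** 3 / 6
    m = (x >= 1) & (x < 2); z = x[m] - 1
    out[m] = (-3 * z ** 3 + 3 * z ** 2 + 3 * z + 1) / 6
    m = (x >= 2) & (x < 3); z = x[m] - 2
    out[m] = (3 * z ** 3 - 6 * z ** 2 + 4) / 6
    m = (x >= 3) & (x < 4); out[m] = (4 - x[m]) ** 3 / 6
    return out

# the pieces join continuously and the maximum value 2/3 is attained at x = 2
print(cubic_bspline_activation([0.0, 1.0, 2.0, 3.0, 4.0]))
\end{verbatim}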
The initial impetus for considering B-splines followed from the revelation that Numenta's Sparse Distributed Representations (SDRs) \citep{Hinton1990DistributedR} are mathematically similar to zeroth-order B-splines. Zeroth order splines incidentally resemble lookup tables, and are not differentiable over their entire domain. It is also known that lookup tables are very robust to catastrophic forgetting \citep{look_up_tables_are_robust}. Replacing zeroth-order B-splines with cubic B-splines yields a differentiable model that is smooth and amicable to gradient descent-based learning methods. If the unit interval is uniformly partitioned by B-splines as in Figure~\ref{fig:fig_label_uniform_cubic_spline_basis_functions}, then the resulting set of basis functions all have the same shape, as illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_activation}.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.8\textwidth]{KASAM_Theory/uniform_cubic_b_spline_activation_function.png}
\caption{\small The activation function used to implement uniformly spaced cubic B-splines. In the uniform case all basis functions are the same shape with only the input or argument being scaled and translated with a fixed linear function for each basis function.}
\label{fig:fig_label_uniform_cubic_spline_activation}
\end{figure}
Each cubic B-spline basis function is simply a re-scaled or translated version of any other cubic B-spline basis function, as seen in Figures~\ref{fig:fig_label_uniform_cubic_spline_basis_functions} and~\ref{fig:fig_label_uniform_cubic_spline_activation}. This symmetry can promote reuse. In a neural network context, uniform cubic B-splines can be modelled using a two-layer neural network with an activation function corresponding to the shape of each basis function, and a pre-defined set of untrainable weights and biases. A final trainable linear layer multiplies each basis function with its coefficient parameter and sums together the results. A single variable function approximator for uniform B-splines is given by:
$$
f(x)
= \sum_{i=1}^{K} \theta_{i} S_{i}(x)
= \sum_{i=1}^{K} \theta_{i} S(w_{i}x + b_{i})
$$
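
A minimal sketch of this single-variable approximator is given below. It reuses the activation from the previous snippet; the class name and the particular fixed weights and biases (which place $K$ uniformly spaced translates of $S$ across the unit interval) are our own illustrative choices.
\begin{verbatim}
# Sketch of f(x) = sum_i theta_i S(w_i x + b_i) with K uniformly spaced basis
# functions on the unit interval; reuses cubic_bspline_activation from above.
import numpy as np

class UniformCubicBSpline1D:
    def __init__(self, density):                 # density = number of basis functions K
        self.theta = np.zeros(density)            # trainable coefficients
        self.w = np.full(density, density - 3.0)  # fixed, untrainable scalings
        self.b = 3.0 - np.arange(density)         # fixed, untrainable translations

    def basis(self, x):                           # shape (len(x), K)
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return cubic_bspline_activation(np.outer(x, self.w) + self.b)

    def __call__(self, x):
        return self.basis(x) @ self.theta

model = UniformCubicBSpline1D(density=8)
print(model.basis(0.5).round(3))                  # at most four entries are non-zero
\end{verbatim}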
This construction presented above is similar to other basis functions. The Fourier series is composed of basis functions that are orthogonal to each other. There are many analytical and practical reasons for considering basis functions that are orthogonal with respect to each other. The Fourier basis for functions on the unit interval is:
$$
f(x)
= c + \sum_{n=1}^{K} a_{n} \sin(2 \pi n x) + b_{n} \cos(2 \pi n x)
$$
Polynomial functions can also constitute a set of basis functions. The relatively simple monomial basis is non-orthogonal and of the form:
$$
f(x)
= c + \sum_{n=1}^{K} a_{n} x^{n}
$$
In many applications it is preferable to use orthogonal basis functions. Legendre polynomials are orthogonal polynomial basis functions. A function approximator in terms of Legendre basis functions $P_{n}(x)$:
$$
f(x)
= \sum_{n=0}^{K} a_{n} P_{n}(x)
$$
The choice of B-splines seems arbitrary, but it is a critical design choice. The problem with trigonometric and polynomial functions is that they are non-zero almost everywhere. There are only finitely many points where each basis function is zero. This means that each parameter could affect a model's output almost everywhere and lead to catastrophic forgetting. B-splines are zero almost everywhere and don't suffer such off-target effects - each parameter affects only a small region. Cubic B-splines have three properties that are atypical of most function approximators, namely sparsity, bounded gradients, and orthogonal distal gradients. These properties hold for arbitrarily many basis functions. The listed properties are specified below for cubic B-splines, but similar properties hold for any fixed order B-spline functions.
\subsection*{Property 1: Sparsity of the Gradient Vector}
The maximum number of non-zero cubic B-spline basis functions for any input is four, regardless of the number of basis functions $K$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(x)}_{0}
:= \sum_{i=1}^{K} d_{Hamming} \left(\frac{\partial f}{\partial \theta_{i}}(x),0 \right)
\leq 4
\; \forall x \in D(f)
$$
Sparsity is related to the robustness of a model to catastrophic forgetting. Uniform cubic B-splines require a minimum of four basis functions over the unit interval, so $K\geq 4$. For very large models with a large density of basis functions the gradient vector is zero for nearly all trainable parameter as shown in Figure~\ref{fig:Proof_properties_1_2}.
\subsection*{Property 2: Gradient Flow Attenuation}
The gradient vector has a bounded L1 norm, for any number of B-spline basis functions:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(x)}_{1}
= \sum_{i=1}^{K} \left| \frac{\partial f}{\partial \theta_{i}} (x) \right|
< 4U
\; \forall x \in D(f)
$$
Here $U$ denotes an upper bound on the value of a single basis function (the activation above attains a maximum value of $2/3$). Gradient flow attenuation affects the stability of training a model with many trainable parameters. If the gradient vector is bounded, then the flow of gradient updates during training is attenuated and numerically stable. The visual proof of the boundedness of the gradient vector is demonstrated in Figure~\ref{fig:Proof_properties_1_2}.
\begin{figure}[!ht]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/SAM_properties_1_2_Proof.png}
\caption{\small Plot showing which basis functions are active for a single input. At most four basis functions are non-zero for any given input to a uniform cubic B-spline function, so one has the property of sparsity. Each of the four active basis functions that are active are bounded, so the sum of their absolute values is also bounded. Thus the $L_{1}$ norm of the gradient vector with respect to the trainable parameters is bounded.}
\label{fig:Proof_properties_1_2}
\end{figure}
\subsection*{Property 3: Distal Orthogonality}
For any uniform cubic B-spline function approximator of one variable there exists a $\delta>0$ such that:
$$
|x - y| > \delta
\implies \langle \grad_{\vec{\mathbf{\theta}}} f(x),\grad_{\vec{\mathbf{\theta}}} f(y) \rangle = 0
\; \forall x,y \in D(f)
$$
If two input points are sufficiently far apart, then the gradient vectors with respect to the trainable parameters at each input are orthogonal to each other. Thus, only points within the same neighbourhood have non-orthogonal gradient vectors that can influence one another. A visual proof of distal orthogonality can be done with a diagram given in Figure~\ref{fig:Proof_properties_3}.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/SAM_properties_3_Proof.png}
\caption{\small Plot showing which basis functions are active for two distant input-points.
If two input-points are sufficiently distant from each other,
then there are no overlapping basis functions.
The gradients with respect to trainable parameters are zero almost everywhere,
except for the active basis functions. Thus, the inner-product of such gradient vectors must be zero. This property is best described as distal orthogonality.}
\label{fig:Proof_properties_3}
\end{figure}
Distal orthogonality guarantees memory retention for single-variable function approximators based on cubic B-splines. Gradient flow attenuation guarantees bounded gradient vectors, which mean cubic B-splines are numerically stable during training, making optimisation much easier and well-posed. Sparsity of the gradient vector and sparse activation mean it is possible to implement very efficient models and training procedures that only compute the non-zero values. The above properties lead to a single-variable function approximator that is efficient, easily trained, and robust to catastrophic forgetting with intrinsic memory retention - in theory.
To the authors' knowledge, there are not many such guarantees given to other function approximators.
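
These properties can also be checked numerically. For the single-variable model sketched earlier, the gradient of $f$ with respect to $\vec{\mathbf{\theta}}$ at an input $x$ is simply the vector of basis values $S_{i}(x)$; the snippet below (reusing the class from the previous sketch, with an illustrative density and input pair) confirms sparsity, the bounded $L_{1}$ norm, and distal orthogonality.
\begin{verbatim}
# Numeric check of Properties 1-3: for the model above, the gradient of f with
# respect to theta at input x is the vector of basis values S_i(x).
import numpy as np

model = UniformCubicBSpline1D(density=64)     # reuses the class sketched earlier
gx = model.basis(0.10)[0]                     # gradient vector at x = 0.10
gy = model.basis(0.90)[0]                     # gradient vector at y = 0.90
print("non-zero entries:", np.count_nonzero(gx), np.count_nonzero(gy))  # <= 4 each
print("L1 norms:", gx.sum(), gy.sum())        # bounded independently of the density
print("inner product:", float(gx @ gy))       # 0.0 for sufficiently distant inputs
\end{verbatim}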
\subsection*{Stratification}
Uniform cubic B-splines with a large density of basis functions can exhibit what is best described as stratification, illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification}. Stratification can be considered to be a special form of overfitting, where the data points are learned exactly, and the model is not adjusted for any regions not explicitly represented by the training data. The models in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification} are initialised to zero. Regions with no training data are never modified and retain their initial values. The gaps between data-points can thus lead to stratification, since only the regions with training data are updated. Therefore, over-parameterised B-spline models can easily memorise all training data.
\begin{figure}[h]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/uniform_cubic_b_spline_stratification.png}
\caption{\small Visualising stratification. The training data was sampled from the sine function $\sin(2 \pi x)$ to illustrate the effect of increasing the number of basis functions (denoted $K$). The uniform cubic B-spline models were initialised to be zero everywhere prior to training.}
\label{fig:fig_label_uniform_cubic_spline_stratification}
\end{figure}
There are a few qualitative differences between stratification and overfitting. Regions with no training data have predictable values and do not exhibit oscillations as seen in the Runge phenomenon with other kinds of basis functions~\cite{RungeBOYD199257,RungeFORNBERG2007379,RungeDEMARCHI2020112347}. Over-fitting is task-specific: if the target function were known to be zero almost everywhere except at finitely many points, then an over-parametrised model would perform well. Anomaly detection is an ideal application of such over-parametrised B-spline models. On the other hand, over-parametrised B-spline models would perform poorly on regression problems like estimating a sine function from few training data-points, as seen in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification}.
Stratification could be detrimental or advantageous, depending on the application. Manifold Mixup regularisation and data augmentation could be ideal strategies for counteracting stratification when necessary~\cite{mixupBeyondEmpiricalRiskMinimization, ManifoldMixupBetterRepresentations, MixUpasLocallyLinearOut-Of-ManifoldRegularization}.
\section{Spline Additive Model (SAM)}\label{sec:sam}
Extending single-variable function approximators to multi-variable function approximation is non-trivial. One possibility is to map a higher dimensional input to the unit interval using space-filling curves (e.g. fractals like Hilbert curves), or some other projection technique based on path integrals. In this paper, the choice was made to create a function approximator that is a sum of single variable functions of each input variable, called the Spline Additive Model (SAM) and illustrated in Figure~\ref{fig:SAM_implementation}.
Consider any target function $y$ that can be expressed as the sum of continuous single variable functions $y_{j}$ in each of the input variables $x_{j}$ given by:
$$
y(\vec{\mathbf{x}})
= y(x_{1},...,x_{n})
= \sum^{n}_{j=1} y_{j}( x_{j} )
$$
SAM uses a uniform cubic B-spline function approximator $f_{j}$ with $K$ trainable parameters to approximate each single-variable function $y_{j}$. There are $K$ basis functions for each $f_{j}$, and there are $n$ input variables. The total number of trainable parameters is $nK$ for the entire model $f(\vec{\mathbf{x}})$. SAM inherits some of the properties discussed in Section~\ref{sec:singleVar}. SAM is given by the sum of $n$ single-variable B-spline functions:
$$
f(\vec{\mathbf{x}})
= \sum^{n}_{j=1} f_{j}( x_{j} )
= \sum^{n}_{j=1} \sum_{i=1}^{K} \theta_{i,j} S_{i,j}( x_{j} )
= \sum^{n}_{j=1} \sum_{i=1}^{K} \theta_{i,j} S(w_{i,j}x_{j} + b_{i,j})
$$
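A minimal NumPy sketch of SAM follows, using the same cardinal cubic B-spline kernel and the same assumed uniform placement of basis-function centres as the single-variable sketch above; only the output-layer weights $\theta_{i,j}$ are trainable. This is an illustration of the definition, not the released implementation.
\begin{verbatim}
import numpy as np

def cubic_bspline(t):
    """Cardinal uniform cubic B-spline kernel; non-zero only for |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    mid, edge = t < 1.0, (t >= 1.0) & (t < 2.0)
    out[mid] = 2.0 / 3.0 - t[mid] ** 2 + 0.5 * t[mid] ** 3
    out[edge] = (2.0 - t[edge]) ** 3 / 6.0
    return out

class SAM:
    """Sum of n single-variable uniform cubic B-spline expansions (sketch)."""

    def __init__(self, n_vars, K=32):
        h = 1.0 / (K - 3)
        self.centres = (np.arange(K) - 1.5) * h  # assumed placement on [0, 1]
        self.scale = 1.0 / h                     # S_ij(x) = S(scale * (x - c_i))
        self.theta = np.zeros((K, n_vars))       # trainable parameters

    def basis(self, x):
        # x: (batch, n_vars) -> (batch, K, n_vars); sparse along the K axis
        return cubic_bspline(self.scale * (x[:, None, :] - self.centres[None, :, None]))

    def __call__(self, x):
        # f(x) = sum_j sum_i theta_ij * S_ij(x_j)
        x = np.atleast_2d(x)
        return np.einsum('bkn,kn->b', self.basis(x), self.theta)

sam = SAM(n_vars=2)
print(sam(np.array([[0.3, 0.7]])))   # zero-initialised model outputs [0.]
\end{verbatim}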
\begin{figure} [!h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Models/graph_SAM_fully_connected.txt.png}
\caption{SAM with all weights}
\label{fig:SAM_dense}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Models/graph_SAM.txt.png}
\caption{Non-zero or trainable weights}
\label{fig:sam_dense}
\end{subfigure}
\caption{\small
Structure of SAM. The weights and biases for the first layer are constant and specified with model creation to correctly implement B-splines. The nodes in the black rectangle apply a non-linear activation function given by the cubic B-spline activation function. The output layer is a trainable linear layer with no bias term. The resulting model is a sum of single-variable functions, one for each input variable.}
\label{fig:SAM_implementation}
\end{figure}
\subsection*{Property 1: Sparsity of the gradient vector}
For a fixed number of variables $n$, the gradient vector has a maximum of $4n$ non-zero entries for any number of basis functions $K \geq 4$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{x}} )}_{0}
= \sum_{i=1}^{Kn} d_{Hamming} \left(\frac{\partial f}{\partial \theta_{i}} (\vec{\mathbf{x}}),0 \right)
\leq 4 n
\; \forall \; \vec{\mathbf{x}} \in D(f) \subset R^{n}
$$
The sparsity of the multi-variable SAM model follows from the sparsity of each single-variable function used. Since each gradient update can change only a few parameters, interference with previously learned regions is limited, which makes SAM robust to catastrophic forgetting.
\subsection*{Property 2: Gradient flow attenuation}
For a fixed number of variables $n$, the gradient vector has bounded L1 norm for any number of basis functions $K \geq 4$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f( \vec{\mathbf{x}} )}_{1}
= \sum_{i=1}^{Kn} \left| \frac{\partial f}{\partial \theta_{i}} ( \vec{\mathbf{x}} ) \right|
< 4Un
\; \forall \; \vec{\mathbf{x}} \in D(f)
$$
The bounded norm for SAM follows from the single-variable case. SAM is numerically stable during training and well-behaved in the limit of arbitrarily many basis functions.
\subsection*{Property 3: Distal orthogonality}
For any spline additive model, there exists a $\delta>0$ such that:
$$
\min_{j=1, \dots , n }
\{ |x_{j} - y_{j}| \} > \delta
\implies
\langle
\grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{x}})
,
\grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{y}})
\rangle
= 0 \;
\forall \; \vec{\mathbf{x}},\vec{\mathbf{y}} \in D(f) \subset R^{n}
$$
The distal orthogonality follows from the single-variable case. Two points that sufficiently differ in each input variable have orthogonal parameter gradients. It is worth noting that the set of points which can interfere with a given input resembles a cross-shaped region in two variables, and intersecting axis-aligned slabs in higher dimensions. Distal orthogonality means SAM is reasonably robust to catastrophic forgetting.
\subsection*{General Overview of SAM}
SAMs also exhibit stratification and inherent memory retention that is robust, but not perfect, since the overlapping regions are not as localised as in the single-variable case. SAMs can be implemented as neural networks, as illustrated in Figure~\ref{fig:SAM_implementation}. SAMs are not a universal function approximation scheme: there are continuous multi-variable functions that cannot be expressed as a sum of single-variable functions. However, for problems where the manifold hypothesis is true and data lies on a very low-dimensional manifold, SAMs can be sufficient. Additionally, SAMs could be well-suited for use with kernel techniques, or the Fourier transform of the input. Recurrent neural networks and reservoir computers may also benefit from the robust nature of SAMs.
\section{KASAM: Universal Function Approximation}\label{sec:KASAM}
The Kolmogorov-Arnold representation theorem shows how any continuous multi-variable function can be represented with single-variable functions~\cite{kolmogorov1957representation,kolmogorovRevisited}. The theorem states that any continuous multi-variable function $y$ on the unit hyper-cube with $n$ input-variables $x_{p}$ can be exactly represented with continuous single-variable functions $\Phi_{q}$, $\phi_{q,p}$ of the form:
$$
y(\vec{\mathbf{x}})
= y(x_{1},...,x_{n})
= \sum^{2n}_{q=0} \Phi_{q} \left( \sum^{n}_{p=1} \phi_{q,p}( x_{p} ) \right)
$$
The representation theorem is not an approximation - it is exact~\cite{kolmogorov1957representation,kolmogorovRevisited}. Furthermore, there is no mention of the computability or learnability of the representation in the theorem. It is also notable that the summation in the representation is finite, and not arbitrarily large or infinite as with Taylor series. It is also unlike the Universal Function Approximation theorems for arbitrarily wide and arbitrarily deep neural networks, which also correspond to summing together arbitrarily many terms. The core building block for the Kolmogorov-Arnold representation theorem is continuous single-variable functions.
The approximation scheme for KASAM stems from using an arbitrary density of basis functions to approximate each single-variable function. A sum of single-variable functions $g_{p}$ is added to the expression. Inspiration was taken from ResNet architectures to make optimisation easier~\citep{resnetpaper}. The Kolmogorov-Arnold representation theorem can be extended with a ResNet-style residual skip-connection for some constant $\lambda$ to obtain the definition for KASAM. The function approximator $f$ replaces the summation over $2n$ with a summation over some $N$. The functions $h_{q,p}$ and $g_{j}$ are uniform B-spline functions. The outer functions are given by $H_{q}(z) = s_{q}(\sigma(z))$, where $\sigma$ is the sigmoid function that maps any real number to the unit interval and the functions $s_{q}$ are uniform B-spline functions. The overall expression for the Kolmogorov-Arnold Spline Additive Model (KASAM) is given by:
$$
f(\vec{\mathbf{x}})
= \sum^{N}_{q=1} H_{q} \left( \sum^{n}_{p=1} h_{q,p}( x_{p} ) \right)
+ \lambda \sum^{N}_{q=1}
\left( \sum^{n}_{p=1} h_{q,p}( x_{p} ) \right)
+ \sum^{n}_{j=1} g_{j}( x_{j} )
$$
There is a one-to-one correspondence between the exact representation and the structure of KASAM (assuming $N = 2n+1$, one term for each $q$ in the representation). Keep in mind that for $\lambda \neq 0$ one can choose the functions $g_{p}$ so that they cancel the residual term, yielding the original representation theorem.
If an arbitrary density of basis functions is used, then the target function can be approximated to arbitrary accuracy. It is hoped that using cubic B-splines and SAM gives rise to a model that is easy to implement and train. Unfortunately, the analytical guarantees that SAM possesses do not hold for KASAM in general.
A more compact vector notation can be given, replacing $\lambda$ with a constant matrix or linear projection $T$:
$$
f(\vec{\mathbf{x}})
= H \left( \vec{\mathbf{h}}(\vec{\mathbf{x}}) \right)
+ T \vec{\mathbf{h}}(\vec{\mathbf{x}})
+ g(\vec{\mathbf{x}})
$$
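A compositional sketch of this compact form is given below. It reuses the SAM class sketched in Section~\ref{sec:sam}, and the choices $N=3$, $\lambda=0.1$ and $T=\lambda\mathbf{1}$ are illustrative assumptions rather than values prescribed by the definition.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class KASAM:
    """Sketch of f(x) = H(h(x)) + T h(x) + g(x), built from SAM modules.
    Assumes the SAM class from the earlier sketch; N, lam and T are
    illustrative defaults, not prescribed values."""

    def __init__(self, n_vars, N=3, K=32, lam=0.1):
        self.inner = [SAM(n_vars, K) for _ in range(N)]  # h_q : R^n -> R
        self.outer = [SAM(1, K) for _ in range(N)]       # s_q on the unit interval
        self.g = SAM(n_vars, K)                          # residual spline-additive term
        self.T = np.full(N, lam)                         # constant projection, lam * ones

    def __call__(self, x):
        x = np.atleast_2d(x)
        h = np.stack([h_q(x) for h_q in self.inner])     # shape (N, batch)
        # Outer functions H_q(z) = s_q(sigmoid(z)), applied per component and summed.
        outer = sum(s_q(sigmoid(h_q)[:, None]) for s_q, h_q in zip(self.outer, h))
        return outer + self.T @ h + self.g(x)
\end{verbatim}
With every SAM module initialised to zero, the sketch outputs zero everywhere, which makes the effect of subsequent training easy to inspect.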
KASAM uses SAM modules throughout to approximate all sums of single-variable functions, as shown in Figure~\ref{fig:fig_graph_KASAM_stucture}. The TensorFlow implementation is a proof-of-concept prototype and can be found in the linked \href{https://github.com/hpdeventer/KASAM}{GitHub repository}\footnote{\href {https://github.com/hpdeventer/KASAM}{https://github.com/hpdeventer/KASAM}}. The focus was on developing a working prototype rather than computational efficiency. Future research into more efficient implementations would be ideal.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/graph_KASAM.png}
\caption{\small Structure of KASAM. The computational graph represents layer(s) as nodes. The SAM module is used extensively in KASAM. One branch is passed through a sigmoid function, and is then passed to another SAM layer (implementing the exterior functions). There is a residual skip-connection implemented with a constant linear layer between the interior SAM module and the output of the model.}
\label{fig:fig_graph_KASAM_stucture}
\end{figure}
\subsection*{General Overview of KASAM}
KASAM is a universal function approximator. KASAM can be implemented as a special type of artificial neural network with specifically chosen constant weights and activation functions, together with trainable weights that can be optimised with gradient descent algorithms. The KASAM neural network constructed from SAM modules might inherit some memory retention, although it is not as predictably and reliably robust to catastrophic forgetting as SAM. Stratification could hinder KASAM's generalisation on certain tasks, and it is not obvious how to manage this potential weakness. To reduce catastrophic forgetting, a pseudo-rehearsal training technique can be used. Further implementation details can be found in Section~\ref{sec:method} and the GitHub repository.
\section{Pseudo-Rehearsal Training Techniques}\label{sec:pseudo}
Numerous review, rehearsal and pseudo-rehearsal techniques exist~\citep{robins1995catastrophic}. Some use generative models like GANs~\citep{shin2017continual}. The expected risk has the familiar form:
$$ R(f) = \int \ell (f(x),y) \mathrm{d}P(x,y)$$
Pseudo-rehearsal explicitly refers back to the model's previous state, and is similar to the ideas related to generative replay~\citep{shin2017continual}. The distribution $\mathcal{P}(z)=\mathcal{P}_{D(f)}$ of the input data with $z \sim \mathcal{P}_{D(f)}$ could be given by a generative model or a uniform distribution $z \sim U_{D(f)}$ over the domain of $f$ denoted $D(f)$. Using a uniform distribution is less computationally demanding than training a generative model. The second integral in the functional below enforces memory retention by penalising deviations from the model's previous outputs. The mixing coefficient $\rho \in \left[0,1 \right]$ controls the trade-off between learning the new data and retaining memory. The technique is given by a discrete-time functional of the form:
$$
R(f_{t+1}) =
\rho \int \ell (f_{t+1}(x),y) \mathrm{d}P(x,y) + (1-\rho)\int \ell (f_{t+1}(z),f_{t}(z))\mathrm{d}\mathcal{P}(z)
$$
The key idea is to minimise the loss on new data, subject to the constraint that the model retains its input-output mapping over the rest of its domain. The same functional can be seen in the paper~\citep{shin2017continual}. This could mitigate catastrophic forgetting and is an iterative process that may or may not converge depending on the choice of loss function $\ell$, model $f$, and the target values. A simple version of pseudo-rehearsal is demonstrated in Figure~\ref{fig:fig_label_reveries}.
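A minimal sketch of the corresponding data-augmentation step is given below, assuming a uniform choice of $\mathcal{P}(z)$ over the unit hyper-cube and a model that can be called on a batch of inputs; the function name and defaults are illustrative.
\begin{verbatim}
import numpy as np

def pseudo_rehearsal_batch(model, x_new, y_new, rho=0.5):
    """Mix new-task data with self-labelled rehearsal data (sketch).

    model : current model f_t, callable on a (batch, d) array of inputs
    rho   : probability that a sample in the mixed set comes from the new task
    """
    m, d = x_new.shape
    z = np.random.uniform(0.0, 1.0, size=(m, d))  # rehearsal inputs, uniform over D(f)
    y_z = model(z)                                # the model's own outputs are the targets
    pick_new = np.random.rand(m) < rho            # per-sample choice of data source
    xs = np.where(pick_new[:, None], x_new, z)
    ys = np.where(pick_new, y_new, y_z)
    return xs, ys
\end{verbatim}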
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/pseudo_rehearsal.png}
\caption{\small Pseudo-rehearsal is demonstrated. The training data is augmented with data sampled from the model's initial input-output values. Pseudo-rehearsal exploits the model's memory of the previous tasks to retain memory and change only in regions where the new data is present.}
\label{fig:fig_label_reveries}
\end{figure}
\section{Methodology}\label{sec:method}
Four models were considered for experimental evaluation. The first was a SAM-based model with no additional regularisation, called \textit{SAM}. The second model was a feed-forward ANN with the same structure and activation functions as KASAM, but all the model parameters were trainable and randomly initialised instead of set to predefined constants, further referred to as simply \textit{ANN}. The third was KASAM, with some parameters being trainable and others being fixed to correctly implement cubic B-splines, named \textit{KASAM}. (Note: the ANN is capable of learning an identical clone of KASAM, but such an eventuality is unlikely given the random initialisation of the ANN). The fourth model was KASAM in combination with the pseudo-rehearsal (PR) data-augmentation technique, referred to as \textit{KASAM+PR}.
\subsubsection*{SAM}
The SAM model is a scalar-valued function that maps a two-dimensional input to a one-dimensional output. A density of 32 basis functions was chosen for each input variable. The chosen SAM model implements:
$$
f(\vec{\mathbf{x}})
= \sum^{2}_{j=1} f_{j}( x_{j} )
= \sum^{2}_{j=1} \sum_{i=1}^{32} \theta_{i,j} S_{i,j}( x_{j} )
= \sum^{2}_{j=1} \sum_{i=1}^{32} \theta_{i,j} S(w_{i,j}x_{j} + b_{i,j})
$$
SAM has a reasonably simple structure, shown in Figure~\ref{fig:fig_SAM_model_specifics}. The inputs were two-dimensional and the output was one-dimensional. The density of basis functions for each input variable is 32, so there are 64 neural units in the hidden layer. The activation function and weights were chosen to correctly implement B-spline basis functions. The final linear layer is trainable: each basis function is multiplied by its trainable parameter, and the results are summed to give the output of SAM.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/SAM_model_specifics.png}
\caption{\small Structure of SAM.}
\label{fig:fig_SAM_model_specifics}
\end{figure}
\subsubsection*{KASAM}
The KASAM model is a scalar-valued function that maps a two-dimensional input to a one-dimensional output. It was decided to use three exterior functions such that:
$$
f(\vec{\mathbf{x}})
= \sum^{3}_{q=1} H_{q} \left( \sum^{2}_{p=1} h_{q,p}( x_{p} ) \right)
+ \lambda \sum^{3}_{q=1}
\left( \sum^{2}_{p=1} h_{q,p}( x_{p} ) \right)
+ \sum^{2}_{j=1} g_{j}( x_{j} )
$$
The structure of KASAM is a generalisation of SAM. One branch is the same as SAM. For technical and practical reasons, a mixture of different basis-function densities was used to make optimisation easier. There are densities of 4, 8, 16 and 32 basis functions for each input variable. The maximum density of 32 matches the expressive power of the SAM model described above, which also uses a density of 32 basis functions. Adding together all the densities of basis functions in KASAM gives a total of 60 basis functions for each variable. The two input variables have 120 basis functions or neural units in total. The three hidden variables have 180 basis functions or neural units in total; this implements three exterior functions, each with its own input.
Most of the connections in Figure~\ref{fig:fig_SAM_Two_var} represent fully connected dense layers. The only exception is the activation layer, which only applies an element-wise sigmoid to each input value and has no trainable parameters. The layers with trainable parameters are indicated with solid black arrows; constant parameters or weights are drawn with arrows that are not solid black.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/KASAM_model_specifics.png}
\caption{\small Structure of KASAM.}
\label{fig:fig_SAM_Two_var}
\end{figure}
KASAM is fully capable of representing the SAM model exactly, if the appropriate parameters are chosen or zeroed.
\subsubsection*{KASAM+PR}
KASAM+PR has the same structure as KASAM. The only difference is that a pseudo-rehearsal data augmentation is utilised in training. The training data for the second task is mixed with input-output values of the model after it was trained on Task 1 (but before being trained on Task 2). The rehearsal dataset is constructed from 10000 two-dimensional points sampled uniformly from the unit square. The target values for rehearsal are predicted by the model itself, using the memory of the previous data stored in the model. The augmented dataset consists of 10000 points, each drawn with $50\%$ probability from either the rehearsal dataset or the Task 2 training data.
\subsubsection*{ANN}
The ANN model has the same structure as KASAM. The ANN model follows the more commonly used approach of random initialisation: it has randomly initialised and trainable parameters, whereas KASAM has many specifically chosen parameters that are not trainable. It is possible for the ANN to implement KASAM, but it is unlikely. The ANN does not neatly decompose into single-variable functions because some of the parameters in the dense layers are not zero. All that remains is the structure as seen in Figure~\ref{fig:fig_ANN_model_specifics}, but all parameters are trainable and randomly initialised with the default values for TensorFlow.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/ANN_model_specifics.png}
\caption{\small Structure of the ANN model.}
\label{fig:fig_ANN_model_specifics}
\end{figure}
The ANN model was chosen to show how randomly initialised parameters compare to the specifically chosen parameters of KASAM. Given appropriate random initialisation, hyper-parameters, optimizer, and sufficient data, it is possible to get the same performance from the ANN as with KASAM. Most ANNs are very sensitive to many externally chosen values, and require a lot of fine-tuning for acceptable performance.
\subsection{Experiments}
The target functions for Experiments A, B and C were chosen to demonstrate the expressive power and limitations of all four models. The target function for Experiment A is a sum of single-variable functions which is easily expressed (in theory) by the SAM, KASAM and KASAM+PR models. The target function for Experiment B is a modified version of a difficult-to-learn two-variable function that is the sum of single-variable and multi-variable functions~\citep{malan_cleghorn}. The target function for Experiment C is a product of periodic functions which is impossible for SAM to represent.
All models and experiments were implemented with Python and TensorFlow. The loss function chosen for training and evaluation in all presented experiments is the mean absolute error (MAE). The training set and test set in all experiments had $10\,000$ data points each. Gaussian noise of variance $0.05$ was added to all training and test data target values. The test set was also used as a validation set to quantify the test error during training. All models were trained with the Adam optimizer with a learning rate of $0.001$. All models and experiments used batch sizes of 100 during training.
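As a sketch of the training configuration just described (MAE loss, Adam with learning rate $0.001$, batch size 100), the Keras calls would take roughly the following form; the \texttt{model} and data arrays are supplied by the task-specific code described below.
\begin{verbatim}
import tensorflow as tf

def compile_and_train(model, x_train, y_train, x_test, y_test, epochs):
    """Shared training configuration (sketch): MAE loss, Adam(0.001), batch size 100.
    `epochs` is 200 for Task 1 and 20 for Task 2, as described below."""
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='mae')
    return model.fit(x_train, y_train, batch_size=100, epochs=epochs,
                     validation_data=(x_test, y_test))
\end{verbatim}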
\subsubsection*{Task 1}
The training and test sets were sampled uniformly from the Task 1 target functions over the domain $\left[0.,1. \right]^{2}$, with Gaussian noise added to the target values. All models were trained for $200$ epochs. The Task 1 target functions for experiments A, B, and C are given by:
\begin{equation*} \label{eq1}
\begin{split}
Y_{A}(x_{1},x_{2}) &= \cos{(4\pi x_{1})}\exp(-(2x_{1}-1)^{2})+\sin{(\pi x_{2})} \\
Y_{B}(x_{1},x_{2}) &= 2\exp(-\sum_{i=1}^{2} (10 x_{i}-5)^{2}) + \sum_{i=1}^{2}\sin^{2}(10x_{i}-5) \\
Y_{C}(x_{1},x_{2}) &= 1 + \cos(20x_{1}-10) \cos(20x_{2}-10)
\end{split}
\end{equation*}
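For reference, the Task 1 target functions and the sampling of noisy training data translate directly into NumPy; the noise standard deviation $\sqrt{0.05}$ follows from the stated variance.
\begin{verbatim}
import numpy as np

def Y_A(x1, x2):
    return np.cos(4 * np.pi * x1) * np.exp(-(2 * x1 - 1) ** 2) + np.sin(np.pi * x2)

def Y_B(x1, x2):
    return (2 * np.exp(-((10 * x1 - 5) ** 2 + (10 * x2 - 5) ** 2))
            + np.sin(10 * x1 - 5) ** 2 + np.sin(10 * x2 - 5) ** 2)

def Y_C(x1, x2):
    return 1 + np.cos(20 * x1 - 10) * np.cos(20 * x2 - 10)

# Task 1 data: 10 000 points sampled uniformly over the unit square,
# with Gaussian noise of variance 0.05 added to the target values.
rng = np.random.default_rng()
x_train = rng.uniform(0.0, 1.0, size=(10_000, 2))
y_train = Y_A(x_train[:, 0], x_train[:, 1]) + rng.normal(0.0, np.sqrt(0.05), size=10_000)
\end{verbatim}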
\subsubsection*{Task 2}
The test sets were sampled uniformly from the Task 2 target functions over the domain $\left[0.,1. \right]^{2}$, with Gaussian noise added to the target values. The training sets were sampled uniformly over the domain $\left[0.45,0.55 \right]^{2}$, with target values of zero plus added Gaussian noise. All models were trained for $20$ epochs. The Task 2 target functions for experiments A, B, and C are given by:
$$ Y'_{A}(x_{1},x_{2}) =\begin{cases}
0 & 0.45 < x_{1} < 0.55, \text{ and } 0.45 < x_{2} < 0.55 \\
Y_{A}(x_{1},x_{2}) & \text{otherwise.}
\end{cases}
$$
$$ Y'_{B}(x_{1},x_{2}) =\begin{cases}
0 & 0.45 < x_{1} < 0.55, \text{ and } 0.45 < x_{2} < 0.55 \\
Y_{B}(x_{1},x_{2}) & \text{otherwise.}
\end{cases}
$$
$$ Y'_{C}(x_{1},x_{2}) =\begin{cases}
0 & 0.45 < x_{1} < 0.55, \text{ and } 0.45 < x_{2} < 0.55 \\
Y_{C}(x_{1},x_{2}) & \text{otherwise.}
\end{cases}
$$
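The Task 2 target functions and training data follow the same pattern: the Task 1 target is zeroed out on the central square, and the training inputs are drawn from that square only. The sketch below reuses the Task 1 functions defined above; for example, \texttt{make\_task2\_target(Y\_A)} yields $Y'_{A}$.
\begin{verbatim}
import numpy as np

def make_task2_target(Y):
    """Zero out a Task 1 target on the central square [0.45, 0.55]^2."""
    def Y_prime(x1, x2):
        inside = (x1 > 0.45) & (x1 < 0.55) & (x2 > 0.45) & (x2 < 0.55)
        return np.where(inside, 0.0, Y(x1, x2))
    return Y_prime

# Task 2 training data: inputs uniform on the central square, targets zero plus noise.
rng = np.random.default_rng()
x_train2 = rng.uniform(0.45, 0.55, size=(10_000, 2))
y_train2 = rng.normal(0.0, np.sqrt(0.05), size=10_000)
\end{verbatim}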
\section{Empirical Results}\label{sec:exp}
\subsection{Experiment A}
The mean and standard deviation of the test MAE over thirty independent trials for each model are shown in Table~\ref{table:A_results_averaged}. The null hypothesis for each pair-wise comparison between models is that they have indistinguishable test errors (threshold is $p<0.0001$). The p-values were calculated from raw data.
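The paper does not state which statistical test produced the p-values; purely as an illustration of such a pair-wise comparison, a two-sided Welch t-test on the thirty per-trial test MAEs could be computed as follows.
\begin{verbatim}
from scipy import stats

def pairwise_comparison(mae_a, mae_b, threshold=0.0001):
    """Illustrative pair-wise comparison of per-trial test MAEs (Welch t-test assumed)."""
    _, p = stats.ttest_ind(mae_a, mae_b, equal_var=False)
    return p, p < threshold   # True means the test errors are distinguishable
\end{verbatim}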
Task 1 indicated that KASAM and KASAM+PR have indistinguishable test errors, and the null hypothesis was not rejected ($p=0.1822$). All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). Rounding the test MAE in Task 1 to a few decimal places shows that SAM, KASAM, and KASAM+PR have practically the same test error, as shown in Table~\ref{table:A_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:A_task_1_training_validation_plot}. The test sets were used for validation as well. The SAM, KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The ANN model struggled to learn the Task 1 target function for experiment A, as seen in Figure~\ref{fig:A_task_1_training_validation_plot}.
Task 2 indicated that all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:A_results_averaged}. The SAM model had the second best performance on Task 2, with some memory retention. The KASAM model alone had the third best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:A_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:A_task_2_training_validation_plot}. The test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:A_task_2_training_validation_plot}. The validation loss curves displayed interesting dynamics during training (Figure~\ref{fig:A_task_2_training_validation_plot}). KASAM+PR performed the best of all models; pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE. The ANN validation loss curves exhibited oscillatory behaviour, possibly due to the use of Adam as an optimizer, as shown in Figure~\ref{fig:A_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
& Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & \bf{0.040 (0.000)} & 0.334 (0.004) \\
ANN & 0.169 (0.029) & 1.279 (0.107) \\
KASAM & 0.042 (0.001) & 0.977 (0.055) \\
KASAM+PR & 0.042 (0.001) & \bf{0.051 (0.001)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment A: final test mean absolute error (MAE) for Task 1 and Task 2 averaged over 30 trials, rounded to three decimal places.}
\label{table:A_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:A_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:A_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment A: Task 1 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:A_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:A_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:A_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment A: Task 2 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:A_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions were visualised in Figure~\ref{fig:visualisation_experiment_A} with grid-sampled points. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1 all versions of SAM and KASAM could easily fit the target function as visualised in Figure~\ref{fig:visualisation_experiment_A}. The ANN model struggled to fit to the first target function as visualised in Figure~\ref{fig:visualisation_experiment_A}.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_A}, consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region, which fit the training data very well but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_A}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and boasted nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_A}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:three sin x}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:y equals x}
\end{subfigure}
\hfill
\caption{\small
Experiment A: outputs of the models and the target functions. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_A}
\end{figure}
\subsection{Experiment B}
The mean and standard deviation of the test MAE over thirty independent trials for each model is shown in Table~\ref{table:B_results_averaged}. The null hypothesis for each pair-wise comparison between models is that they have indistinguishable test errors (threshold is $p<0.0001$). The p-values were calculated from raw data.
Task 1 indicated that KASAM and KASAM+PR have indistinguishable test errors, and the null hypothesis was not rejected ($p=0.0598$). All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). Rounding the test MAE in Task 1 to a few decimal places shows that KASAM and KASAM+PR have the same test error, as shown in Table~\ref{table:B_results_averaged}. SAM had slightly worse performance compared to KASAM. The ANN model was the worst-performing. The training and validation MAE during training are shown in Figure~\ref{fig:B_task_1_training_validation_plot}. The test sets were used for validation as well. The KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The SAM model could not represent the Task 1 target function. The ANN model struggled to learn the Task 1 target function for experiment B, as seen in Figure~\ref{fig:B_task_1_training_validation_plot}.
Task 2 indicated that all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:B_results_averaged}. The SAM model had the second best performance on Task 2, with some memory retention. The KASAM model alone had the third best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:B_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:B_task_2_training_validation_plot}. The test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:B_task_2_training_validation_plot}. The validation loss curves displayed peculiar dynamics during training (Figure~\ref{fig:B_task_2_training_validation_plot}). KASAM+PR performed the best of all models; pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE, as shown in Figure~\ref{fig:B_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
& Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & 0.092 (0.002) & 0.144 (0.003) \\
ANN & 0.311 (0.005) & 1.465 (0.080) \\
KASAM & \bf{0.042 (0.001)} & 0.875 (0.376) \\
KASAM+PR & \bf{0.045 (0.007)} & \bf{0.061 (0.008)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment B: final test mean absolute error (MAE) for Task 1 and Task 2 averaged over 30 trials, rounded to three decimal places.}
\label{table:B_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:B_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:B_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment B: Task 1 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:B_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:B_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:B_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment B: Task 2 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:B_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions were visualised in Figure~\ref{fig:visualisation_experiment_B} with grid-sampled points. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1, KASAM could easily fit the target function, as visualised in Figure~\ref{fig:visualisation_experiment_B}. The SAM and ANN models struggled to fit the first target function, as visualised in Figure~\ref{fig:visualisation_experiment_B}.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_B}, consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region, which fit the training data very well but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_B}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and boasted nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_B}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:three sin x}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:y equals x}
\end{subfigure}
\hfill
\caption{\small
Experiment B: outputs of the models and the target functions. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_B}
\end{figure}
\subsection{Experiment C}
The mean and standard deviation of the test MAE over thirty independent trials for each model is shown in Table~\ref{table:C_results_averaged}. The null hypothesis for each pair-wise comparison between models is that they have indistinguishable test errors (threshold is $p<0.0001$). The p-values were calculated from raw data.
Task 1 test MAE indicated that KASAM and KASAM+PR have indistinguishable test errors, and the null hypothesis was not rejected ($p=0.6018$). KASAM and KASAM+PR had the best performance. The SAM and ANN models also had statistically indistinguishable test errors ($p=0.0007$), and the worst performance. All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). The test MAE in Task 1 is rounded to a few decimal places and presented in Table~\ref{table:C_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:C_task_1_training_validation_plot}. The test sets were used for validation as well. The KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The SAM and ANN models struggled to learn the Task 1 target function for experiment C, as seen in Figure~\ref{fig:C_task_1_training_validation_plot}.
Task 2 indicated that all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:C_results_averaged}. The SAM model had the second best performance on Task 2, with some memory retention. The KASAM model alone had the third best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:C_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:C_task_2_training_validation_plot}. The test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:C_task_2_training_validation_plot}. The validation loss curves displayed interesting dynamics during training (Figure~\ref{fig:C_task_2_training_validation_plot}). KASAM+PR performed the best of all models; pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE, as shown in Figure~\ref{fig:C_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
& Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & 0.430 (0.003) & 0.467 (0.004) \\
ANN & 0.427 (0.003) & 1.004 (0.021) \\
KASAM & \bf{0.042 (0.001)} & 0.748 (0.137) \\
KASAM+PR & \bf{0.042 (0.001)} & \bf{0.063 (0.008)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment C: final test mean absolute error (MAE) for Task 1 and Task 2 averaged over 30 trials, rounded to three decimal places.}
\label{table:C_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:C_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:C_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment C: Task 1 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:C_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:C_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:C_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment C: Task 2 training and validation loss during training. All models were trained on $10000$ training data-points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:C_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions were visualised in Figure~\ref{fig:visualisation_experiment_C} with grid-sampled points. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1, all versions of KASAM could easily fit the target function, as visualised in Figure~\ref{fig:visualisation_experiment_C}. The SAM and ANN models struggled to fit the first target function, as visualised in Figure~\ref{fig:visualisation_experiment_C}.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_C}, consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region, which fit the training data very well but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_C}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and boasted nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_C}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:three sin x}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:y equals x}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:y equals x}
\end{subfigure}
\hfill
\caption{\small
Experiment C: outputs of the models and the target functions. Each row corresponds to a different model from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model from left to right are: Task 1 target function; model output after training on Task 1; Task 2 target function; model output after training on Task 2; The absolute difference between the models' first and second output. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_C}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
This paper contributes in three ways: theoretically, technically, and empirically. Catastrophic forgetting was analysed and theoretical models, namely SAM and KASAM, were developed to combat catastrophic forgetting. The developed models were implemented in TensorFlow, and released for public use. The models were analysed on a simple problem to empirically demonstrate their effectiveness.
The paper introduced and implemented Spline Additive Models (SAMs) to demonstrate their robustness to catastrophic forgetting, and that SAM itself is not a universal function approximator, but is still useful for many potential applications. The Kolmogorov-Arnold Spline Additive Model (KASAM) was introduced and implemented. KASAM is shown to be a universal function approximator that is expressive, but more susceptible to catastrophic forgetting than SAM. It is unknown if tractable models exist with better guaranteed bounds for catastrophic forgetting.
The empirical scope of the paper was limited to target functions of two variables, mainly for demonstration purposes. The statistical analysis and detailed inspection of the results show that SAM exhibits intrinsic memory retention that is robust to catastrophic forgetting. The memory retention of SAM is not perfect: SAM exhibits cross-shaped regions of overlapping interference in higher dimensional models. SAM is also shown not to be a universal function approximator. The extension of SAM to a universal function approximator leads to a more expressive model: KASAM. KASAM has limited intrinsic memory retention, but is a universal function approximator.
Conventional or typical neural networks based on affine or linear transformations can be susceptible to catastrophic forgetting. A more typical artificial neural network with the exact same structure and activation functions as KASAM, with randomly initialised and trainable parameters, performed significantly worse than KASAM. The feed-forward artificial neural network (ANN) had the capacity to implement KASAM and spline functions, but it did not exploit this potential in any appreciable way during training and evaluation. KASAM exhibited superior performance with some weights being chosen constants, as compared to having all of its parameters being randomly initialised and trainable.
KASAM in combination with other regularisation, data-augmentation and training techniques can mitigate catastrophic forgetting. KASAM in combination with pseudo-rehearsal had the best performance of all models considered for the chosen regression problem. Pseudo-rehearsal works well for memory retention in low-dimensional problems.
\section{Opportunities for Further Research}\label{sec:opportunities}
This paper explored a few models and ideas to combat catastrophic forgetting. It is an open question if there are other approaches to combat catastrophic forgetting.
Future research can explore: More efficient implementation; Letting the sub-intervals vary during training instead of using uniform partitions; experimenting with different target functions; experimenting with higher dimensional problems; increasing the density of basis functions during training; the incorporation of B-spline functions into other models such as recurrent neural networks, LSTMs, GRUs or reservoir computers; evaluating the bias-variance decomposition for over-parameterised B-spline functions; fitness landscape analysis of B-spline functions; or different initialisations of parameters for KASAM.
\section{Introduction}
Forward-backward stochastic differential equations (FBSDEs)
have numerous applications in stochastic control theory and mathematical finance
(see, for instance, \cite{Ci}, \cite{Karo}, \cite{Par}, and \cite{Yong}).
Several recent papers \cite{aboura, antonelli, mastrolia} studied the existence of densities and density estimates for the laws of solutions of one-dimensional backward SDEs (BSDEs). To the best of the authors' knowledge, the aforementioned problem has never been addressed in connection with the laws of solutions to fully coupled FBSDEs.
In this paper, we are concerned with the fully coupled one-dimensional FBSDE
\eq{
\label{fbsde}
X_t = x+ \int_0^t f(s,X_s, Y_s,Z_s) ds + \int_0^t \sigma(s,X_s,Y_s) dB_s,\\
Y_t = h(X_T) + \int_t^T g(s,X_s,Y_s,Z_s) ds - \int_t^T Z_s dB_s,
}
where $B_t$ is a one-dimensional standard Brownian motion, and $f$, $\sigma$, $g$, and $h$ are functions
defined on appropriate spaces and taking values in ${\mathbb R}$. It
is known (see, e.g., \cite{ma}, \cite{Par}, \cite{Yong}, and also references therein) that there exists
a unique $\mathcal F_t$-adapted solution $(X_t,Y_t, Z_t)$ of system (\ref{fbsde}) under some appropriate smoothness and boundedness
conditions on its coefficients, where $\mathcal F_t$ is the augmented filtration generated by the Brownian motion $B_t$.
Our goal is to provide conditions that guarantee the existence of the densities for the laws of $X_t$, $Y_t$, and $Z_t$,
and that allow Gaussian estimates of these densities. Additionally,
we obtain estimates of the tail probabilities of the laws of the solution components.
Our approach works for a large class of the FBSDE coefficients.
In particular, the BSDE generator $g$
is not assumed to depend just on some of the spatial variables (unlike \cite{antonelli, mastrolia}),
or to be linear in $Z_s$ (unlike \cite{aboura}). To add even more generality, we obtain our density estimates
in the situation when the generator $g$ has quadratic growth in the last variable, and hence,
it is not necessarily Lipschitz in $Z_s$.
Our method relies on the analysis of the quasilinear parabolic PDE associated to
FBSDE \eqref{fbsde}:
\eq{
\label{final}
\frac12 \sigma^2(t,x,u) \partial^2_{xx} u + f(t,x,u, \sigma(t,x,u)\partial_x u)\partial_x u + g(t,x,u, \sigma(t,x,u)\partial_x u) \\ \;\; + \partial_t u = 0,
\hspace{2cm} u(T,x) = h(x),
}
where $u$, $\partial_xu$, and $\partial^2_{xx}u$ are everywhere evaluated at $(t,x)$.
It is well known (see, e.g., \cite{ma}) that if $u$ is the ${\rm C}^{1,2}_b$-solution to final value problem \eqref{final},
then it is related to the solution of FBSDE \eqref{fbsde} by the formulas
\aaa{
\label{link1}
Y_t = u(t,X_t), \quad Z_t = \partial_xu(t,X_t) \sigma(t,X_t,u(t,X_t)),
}
where $X_t$ is the unique $\mathcal F_t$-adapted solution to the SDE
\aaa{
\label{sde}
& X_t = x + \int_0^t \tilde f(s,X_s)\, ds + \int_0^t \tilde \sigma(s,X_s)\, dB_s \qquad \text{with}\\
\label{coeff-sde}
&\tilde f(t,x) = f(t,x,u(t,x), \partial_xu(t,x)\sigma(t,x,u(t,x))), \quad \tilde\sigma(t,x) = \sigma(t,x,u(t,x)).
}
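For the reader's convenience, we recall the standard verification behind \eqref{link1} (cf. \cite{ma}): if $u$ is the ${\rm C}^{1,2}_b$-solution to \eqref{final} and $X_t$ solves \eqref{sde}, then It\^o's formula applied to $Y_t = u(t,X_t)$ gives
\aa{
dY_t &= \Big(\partial_t u + \tilde f\, \partial_x u + \frac12\, \tilde\sigma^2\, \partial^2_{xx} u\Big)(t,X_t)\, dt
+ \partial_x u(t,X_t)\, \tilde\sigma(t,X_t)\, dB_t \\
&= - g\big(t,X_t,Y_t,Z_t\big)\, dt + Z_t\, dB_t,
}
where the second identity uses \eqref{final} together with \eqref{coeff-sde}, and $Z_t = \partial_x u(t,X_t)\,\tilde\sigma(t,X_t)$. Since $Y_T = u(T,X_T) = h(X_T)$, the pair $(Y_t,Z_t)$ indeed satisfies the backward equation in \eqref{fbsde}.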
Since the Malliavin differentiability and the existence of bounds for $D_rX_t$ are well-known facts
(see, e.g., \cite{nualart}, \cite{detemple}),
the Malliavin differentiability of $Y_t$ and $Z_t$ follows immediately, provided that the coefficients of
PDE \eqref{final} are sufficiently smooth; moreover, the existence
of bounds for $D_rY_t$ and $D_rZ_t$ is reduced to the existence of positive lower bounds for
$\partial_x u$ and $\partial_x\big(\partial_x u\, \sigma\big)$. The latter can be obtained by the classical comparison theorem
for PDEs (see, e.g., \cite{friedman}).
Let us remark that our assumptions allow the BSDE generator $g(t,x,u,p)$ to have quadratic growth in $p$.
This is because the four-step scheme, developed in \cite{ma},
also works for a quadratic BSDE, provided that it is one-dimensional.
This follows from the version of the existence and uniqueness theorem, obtained in \cite{lady},
for the associated one-dimensional
PDE \eqref{final}. Remark that density (tail probability) estimates
for the law of the $Z_s$-component of quadratic BSDEs are important for some numerical schemes,
as it was mentioned in \cite{mastrolia}.
In comparison to our approach, papers \cite{aboura, antonelli, mastrolia} mainly use manipulations of the BSDE itself,
such as consideration of the BSDEs for the second-order Malliavin derivatives of the solution processes,
Girsanov's transformation,
It\^o's formula for various functions of the solution, etc., to arrive at the existence of
estimates for the Malliavin derivatives $D_rY_t$ and $D_rZ_t$.
Compared to the previous works, our analysis is simpler, and many of the assumptions are dropped or
formulated more simply (cf. \cite{mastrolia}), while the FBSDE itself is more general (in particular, fully coupled)
and the density estimates hold on the entire real line.
\section{Preliminaries}
For simplicity, all PDEs considered in this section are one-dimensional and with respect to one space variable,
although all the results are valid for PDEs of several space variables.
\subsection{Useful function spaces}
We start by defining some function spaces used in this paper.
The H\"older space ${\rm C}^{2+\beta}_b({\mathbb R})$,
$\beta\in (0,1)$,
is understood as the (Banach) space with the norm
\aa{
\|\phi\|_{{\rm C}^{2 +\beta}_b({\mathbb R})} = \|\phi\|_{{\rm C}^2_b({\mathbb R})} + [\phi'']_\beta,
\quad \text{where} \quad
[\tilde \phi]_\beta = \sup_{x,y\in{\mathbb R}, \, 0<|x-y|<1}\frac{|\tilde\phi(x)- \tilde\phi(y)|}{|x-y|^\beta},
}
and ${\rm C}^2_b({\mathbb R})$ denotes the space of twice continuously differentiable functions on ${\mathbb R}$ with bounded derivatives up to the second order.
For a function $\phi(x,\xi)$ of more than one variable, the H\"older constant
with respect to $x$ is defined as
\aa{
[\phi]^x_{\beta} = \sup_{x,x'\in{\mathbb R}, \, 0<|x-x'|<1} \frac{|\phi(x,\xi) - \phi(x',\xi)|}{|x-x'|^\beta},
}
i.e., it is understood as a function of $\xi$.
The H\"older spaces ${\rm C}^{1+\frac{\beta}2,2+\beta}_b([0,T]\times {\mathbb R})$, ${\rm C}^{\frac{\beta}2,\beta}_b([0,T]\times{\mathbb R})$,
${\rm C}^{\frac{\beta}2,1+\beta}_b([0,T]\times{\mathbb R})$, and ${\rm C}^{0,\beta}_b([0,T]\times{\mathbb R})$
($\beta\in (0,1)$) are
defined, respectively, as Banach spaces of functions $\phi(t,x)$ possessing the finite norms
\aa{
&\|\phi\|_{{\rm C}_b^{1+\frac{\beta}2,2+\beta}([0,T]\times {\mathbb R})} =
\|\phi\|_{{\rm C}_b^{1,2}([0,T]\times {\mathbb R})} +
\sup_{t\in [0,T]}[\partial_t \phi]_{\beta}^x + \sup_{t\in [0,T]}[\partial^2_{xx} \phi]_{\beta}^x \\
& \hspace{4.2cm}+ \sup_{x\in {\mathbb R}}[\partial_t \phi]_{\frac{\beta}2}^t + \sup_{x\in {\mathbb R}}[\partial_x \phi]_{\frac{1+\beta}2}^t
+ \sup_{x\in {\mathbb R}}[\partial^2_{xx} \phi]_{\frac{\beta}2}^t; \\
&\|\phi\|_{{\rm C}_b^{\frac{\beta}2,\beta}([0,T]\times {\mathbb R})} =
\|\phi\|_{{\rm C}_b([0,T]\times {\mathbb R})} + \sup_{t\in [0,T]}[\phi]_{\beta}^x + \sup_{x\in {\mathbb R}}[\phi]_{\frac{\beta}2}^t;\\
&\|\phi\|_{{\rm C}_b^{\frac{\beta}2,1+\beta}([0,T]\times {\mathbb R})} =
\|\phi\|_{{\rm C}_b([0,T]\times {\mathbb R})} + \|\partial_x \phi\|_{{\rm C}_b^{\frac{\beta}2,\beta}([0,T]\times {\mathbb R})}; \\
&\|\phi\|_{{\rm C}_b^{0,\beta}([0,T]\times {\mathbb R})} = \|\phi\|_{{\rm C}_b([0,T]\times {\mathbb R})}
+ \sup_{t\in [0,T]}[\phi]_{\beta}^x,
}
where
${\rm C}_b^{1,2}([0,T]\times {\mathbb R})$ is the space of bounded continuous functions
whose derivatives up to the first order in $t\in [0,T]$ and the second order in $x\in{\mathbb R}$
are bounded and continuous on $[0,T]\times {\mathbb R}$, and ${\rm C}_b([0,T]\times {\mathbb R})$ is the space
of bounded continuous functions.
\subsection{Some results on quasilinear parabolic PDEs}
Here we formulate some results on linear and quasilinear parabolic PDEs which will be useful in the next section.
Consider the Cauchy problem for a one-dimensional PDE of one space variable
\eq{
\label{pde}
a(t,x,u) \partial^2_{xx} u + f(t,x,u,\partial_x u) \partial_x u + g(t,x,u, \partial_x u) - \partial_t u = 0, \\
u(0,x) = h(x),
}
where $u$, $\partial_x u$, $\partial_t u$, and $\partial^2_{xx} u$ are everywhere evaluated at $(t,x)$.
In what follows, $(t,x,u,p)$ denotes the element of $[0,T]\times {\mathbb R} \times {\mathbb R} \times {\mathbb R}$,
$f$ and $g$ are functions of $(t,x,u,p)$, and $a$ is a function of $(t,x,u)$. Further,
$\partial_t$, $\partial_x$, $\partial_u$, and $\partial_p$ denote the partial derivatives w.r.t. $t$, $x$, $u$, and $p$,
respectively.
The theorem below, proved in \cite{lady} (Theorem 8.1, Section V, p. 495), provides
the existence and uniqueness of a solution to problem \eqref{pde}.
\begin{thm}
\label{lady}
Assume conditions (i)--(vii) below:
\begin{itemize}
\item[(i)] for all $(t,x,u) \in [0,T]\times {\mathbb R} \times {\mathbb R}$, $\nu(|u|) \leqslant a(t,x,u) \leqslant \mu(|u|)$,
where $\nu$ and $\mu$ are non-increasing and, respectively, non-decreasing positive functions;
\item[(ii)] for all $(t,x,u,p) \in [0,T]\times {\mathbb R} \times {\mathbb R} \times {\mathbb R}$, $g(t,x,u,p) u \leqslant c_1 + c_2 |u|^2$,
where $c_1$ and $c_2$ are positive constants;
\item[(iii)] the function $h$ is of class ${\rm C}^{2+\beta}_b({\mathbb R})$, $\beta\in (0,1)$.
\item[(iv)] $\partial_x a$ and $\partial_u a$ exist and
$|a| + |\partial_x a| + |\partial_u a| \leqslant \alpha$, where $\alpha>0$ is a constant;
\item[(v)] there exists a positive non-decreasing function $\tilde \mu$ such that
$|f| \leqslant \tilde\mu(|u|)(1+|p|)$ and $|g| \leqslant \tilde\mu(|u|)(1+|p|^2)$
everywhere on $[0,T]\times{\mathbb R} \times{\mathbb R} \times{\mathbb R}$;
\item[(vi)] the functions $a$, $\partial_x a$, $\partial_u a$, $f$, and $g$
are H\"older continuous in $t$, $x$, $u$, and $p$ with exponents $\frac{\beta}2$, $\beta$,
$\beta$, and $\beta$, respectively, and globally bounded H\"older constants;
\item[(vii)] the derivatives $\partial_u f$, $\partial_u g$, $\partial_p f$, $\partial_p g$ exist and
$
\sup_{\substack{
(t,x) \in [0,T]\times {\mathbb R}\\
|u|+|p| \leqslant N }}
\big(|\partial_u f| + |\partial_u g| + |\partial_p f| + |\partial_p g|\big) \leqslant \gamma(N),
$
where $\gamma(N)$ is a positive constant depending on $N$.
\end{itemize}
Then, there exists a unique ${\rm C}^{1+\frac{\beta}2, 2+\beta}_b([0,T]\times{\mathbb R})$-solution to problem \eqref{pde}.
\end{thm}
Now consider a Cauchy problem for a linear PDE:
\eq{
\label{pde-lin}
a(t,x) \partial^2_{xx} u + b(t,x) \partial_x u + c(t,x) u + g(t,x) - \partial_t u = 0, \\
u(0,x) = \varphi(x).
}
We have the following result, proved in \cite{friedman} (Theorem 12, p. 25 and Theorem 10, p. 44),
on the solvability of problem \eqref{pde-lin} and the representation of its solution via
the fundamental solution $\Gamma(t,x,s,z)$ to PDE \eqref{pde-lin}.
\begin{thm}
\label{fundamental}
Let PDE \eqref{pde-lin} be uniformly parabolic, and let
the coefficient $a$ of \eqref{pde-lin} be of class ${\rm C}^{\frac{\beta}2,\beta}_b([0,T]\times {\mathbb R})$, $\beta\in (0,1)$.
Further let the coefficients $b$, $c$, and the function $g$ be of class
${\rm C}^{0,\beta}_b([0,T]\times {\mathbb R})$, and the initial condition $\varphi$
be of class ${\rm C}_b({\mathbb R})$. Then, there exists a unique ${\rm C}^{1,2}_b([0,T]\times {\mathbb R})$-solution to problem \eqref{pde-lin}.
Moreover, this solution takes the form
\aa{
u(t,x) = \int_{{\mathbb R}}\Gamma(t,x,0,z) \varphi(z) dz - \int_0^t \int_{{\mathbb R}} \Gamma(t,x,s,z) g(s,z) ds dz.
}
\end{thm}
Introduce the linear differential operator
\aa{
L u = a(t,x) \partial^2_{xx} u + b(t,x) \partial_x u + c(t,x) u - \partial_t u.
}
Theorem \ref{hold-unique} below provides conditions under which the solution $u$ to \eqref{pde-lin}
belongs to the class ${\rm C}^{1+\frac{\beta}2, 2+\beta}_b([0,T]\times{\mathbb R})$. The theorem was
proved in \cite{lady} (Theorem 5.1, p. 320).
\begin{thm}
\label{hold-unique}
Let PDE \eqref{pde-lin} be uniformly parabolic, and the coefficients of the operator $L$ and
the function $g$ belong to class ${\rm C}^{\frac{\beta}2,\beta}_b([0,T]\times {\mathbb R})$. Further let
the initial condition $\varphi$ belong to class ${\rm C}^{2+\beta}_b({\mathbb R})$. Then, problem \eqref{pde-lin}
has a unique ${\rm C}^{1+\frac{\beta}2, 2+\beta}_b([0,T]\times{\mathbb R})$-solution $u(t,x)$.
\end{thm}
The following comparison theorem, proved in \cite{friedman} (Theorem 9, p. 43), will be an important tool
in the next section.
\begin{thm}
\label{comparison}
Let the coefficients of $L$ be bounded and continuous on $[0,T]\times{\mathbb R}$.
Assume that $u$ is bounded and $Lu\leqslant 0$ on $(0,T] \times {\mathbb R}$.
If $u(0,x) = \varphi(x) \geqslant 0$ on ${\mathbb R}$, then $u(t,x) \geqslant 0$ on $[0,T]\times {\mathbb R}$.
\end{thm}
\subsection{A link between FBSDEs and quasilinear parabolic PDEs}
It is well known that there is a link between FBSDE \eqref{fbsde} and a quasilinear parabolic PDE of form \eqref{pde}
(see, e.g., \cite{ma}). Specifically, the final value problem
for the PDE associated to FBSDE \eqref{fbsde} takes form \eqref{final}.
By introducing the time-changed function $\theta(t,x) = u(T-t,x)$, we transform \eqref{final} to the Cauchy
problem
\mmm{
\label{cauchy}
\frac12 \sigma^2(T-t,x,\theta) \partial^2_{xx} \theta + f(T-t,x,\theta, \sigma(T-t,x,\theta)\partial_x \theta)\partial_x \theta \\ + g(T-t,x,\theta, \sigma(T-t,x,\theta)\partial_x \theta) - \partial_t \theta = 0,\quad
\theta(0,x) = h(x).
}
Remark that under assumptions (i)--(vii) of Theorem \ref{lady}, the existence
and uniqueness of a ${\rm C}^{1+\frac{\beta}2,2+\beta}_b([0,T]\times {\mathbb R})$-solution to \eqref{cauchy}
is established, which is equivalent to the existence and uniqueness of a ${\rm C}^{1+\frac{\beta}2,2+\beta}_b([0,T]\times {\mathbb R})$-solution $u$ to the final value problem \eqref{final}.
The theorem below provides an explicit solution to FBSDE \eqref{fbsde} via the solution $u$.
\begin{thm}
\label{link} Let the functions $f$, $g$, and $h$ satisfy assumptions (ii), (iii), (v)--(vii) of Theorem \ref{lady}.
Further let the function $\sigma$ satisfy assumptions (i), (iv), and (vi) of the same theorem in the place of the function $a$.
Then, there exists a unique $\mathcal F_t$-adapted solution $(X_t,Y_t,Z_t)$
to FBSDE \eqref{fbsde}. Moreover, this solution takes form \eqref{link1} with $u$ being
the unique ${\rm C}^{1+\frac{\beta}2,2+\beta}_b([0,T]\times {\mathbb R})$-solution, $\beta\in (0,1)$, to
problem \eqref{final}.
\end{thm}
\begin{rem}
\rm The solution to FBSDE \eqref{fbsde} is understood as in \cite{ma}.
\end{rem}
The proof of Theorem \ref{link} is exactly the same as the proof of Theorem 4.1 in \cite{ma},
where the latter result is known as the four-step scheme. It relies solely
on the existence of the unique ${\rm C}^{1,2}_b$-solution to the Cauchy problem \eqref{cauchy}.
This implies that the assumptions of Theorem \ref{lady} guarantee the existence of a unique
solution to FBSDE \eqref{fbsde}. These assumptions turn out to be more general than in \cite{ma},
but they are restricted to the case of just one PDE.
Remark that the Cauchy problem for systems of PDEs was not actually solved in \cite{lady},
so the authors of \cite{ma} had to fill this gap by imposing their own assumptions. However, for the case
of just one PDE, the Cauchy problem is solved in \cite{lady}, and the result is represented
by Theorem 8.1 in Section V (p. 495), so we make use of its more general assumptions.
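For the reader's convenience, we also indicate how the representation \eqref{link1} is typically used in computations. The Python sketch below is purely illustrative and is not used in the proofs: it assumes that the reduced coefficients $\tilde f$, $\tilde\sigma$ of \eqref{coeff-sde} and a (e.g., numerical) solution $u$ of \eqref{final}, together with its gradient $\partial_x u$, are available as the callables \texttt{f\_tilde}, \texttt{sigma\_tilde}, \texttt{u}, \texttt{ux}; the forward component is simulated by the Euler--Maruyama method, and the backward components are recovered as $Y_t = u(t,X_t)$, $Z_t = \partial_x u(t,X_t)\,\tilde\sigma(t,X_t)$.
\begin{verbatim}
import numpy as np

def simulate_fbsde_path(u, ux, f_tilde, sigma_tilde, x0, T, n_steps, rng):
    """Euler-Maruyama for the forward SDE with reduced coefficients
    f_tilde(t, x), sigma_tilde(t, x); then Y_t = u(t, X_t) and
    Z_t = ux(t, X_t) * sigma_tilde(t, X_t), as in the four-step scheme."""
    dt = T / n_steps
    times = np.linspace(0.0, T, n_steps + 1)
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        t = times[k]
        dB = np.sqrt(dt) * rng.standard_normal()
        X[k + 1] = X[k] + f_tilde(t, X[k]) * dt + sigma_tilde(t, X[k]) * dB
    Y = np.array([u(t, x) for t, x in zip(times, X)])
    Z = np.array([ux(t, x) * sigma_tilde(t, x) for t, x in zip(times, X)])
    return times, X, Y, Z
\end{verbatim}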
\subsection{The Malliavin derivative}
Here we describe the elements from the Malliavin calculus that we need in the paper.
We refer the reader to \cite{nualart} for a more complete exposition.
Let ${\mathcal{H}}$ be a real separable Hilbert space and $(B (\varphi), \varphi\in{\mathcal{H}})$ an isonormal Gaussian process on a probability space $(\Omega, A, P)$, i.e., a centered Gaussian family of random variables such that
$\mathbb{E}\left( B(\varphi) B(\psi) \right) = \langle\varphi, \psi\rangle_{{\mathcal{H}}}$.
We denote by $D$ the Malliavin derivative operator that acts on smooth functions of the form $F=g(B(\varphi _{1}), \ldots , B(\varphi_{n}))$ ($g$ is a smooth function with compact support and $\varphi_{i} \in {\mathcal{H}}, i=1,...,n$):
\begin{equation*}
DF=\sum_{i=1}^{n}\frac{\partial g}{\partial x_{i}}(B(\varphi _{1}), \ldots , B(\varphi_{n}))\varphi_{i}.
\end{equation*}
It can be checked that the operator $D$ is closable from $\mathcal{S}$ (the space of smooth functionals as above) into $ L^{2}(\Omega; \mathcal{H})$ and it can be extended to the space $\mathbb{D}^{1,p}$ which is the closure of $\mathcal{S}$ with respect to the norm
\begin{equation*}
\Vert F\Vert _{1,p}^{p} = \mathbb{E} |F|^{p} + \mathbb{E} \Vert DF\Vert _{\mathcal{H}} ^{p}.
\end{equation*}
In our paper, $\mathcal{H} = L_2([0,T])$ and $B(\varphi) = \int_0^T \varphi(t) dB_t$.
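In this setting one has, for example, $B_t = B(\mathbf 1_{[0,t]})$ and, for a smooth function $g$,
\begin{equation*}
D_s\, g(B_t) = g'(B_t)\, \mathbf 1_{[0,t]}(s), \qquad s\in[0,T],
\end{equation*}
which is the prototype of the expressions for $D_rX_t$, $D_rY_t$, and $D_rZ_t$ obtained below.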
\subsection{Gaussian density estimates}
Theorem \ref{privault-dung} below is an important tool that we will use to obtain the existence of densities
and density estimates. It was proved in \cite{privault} (Theorem 2.4).
\begin{thm}
\label{privault-dung}
Let $F \in \mathbb{D}^{1,2}$ be a random variable such that
\aaa{
\label{condition}
0 < l \leqslant \int_0^\infty D_sF \, \mathbb E[D_s F |\mathcal F_s]ds \leqslant L \quad \text{a.s.},
}
where $l$ and $L$ are constants. Then,
$F$ possesses a density $p_F$ with respect to the Lebesgue measure. Moreover,
for almost all $x\in{\mathbb R}$, the density $p_F$ satisfies
\aa{
\frac{\mathbb E |F-\mathbb E[F]|}{2L} \exp\Big(-\frac{\big(x-\mathbb E[F]\big)^2}{2l}\Big) \leqslant
p_F(x) \leqslant \frac{\mathbb E |F-\mathbb E[F]|}{2l} \exp\Big(-\frac{\big(x-\mathbb E[F]\big)^2}{2L}\Big).
}
Furthermore, for all $x>0$, the tail probabilities satisfy
\aa{
\mathbb P(F \geqslant x) \leqslant \exp\Big(-\frac{\big(x-\mathbb E[F]\big)^2}{2L}\Big)
\quad \text{and} \quad \mathbb P(F \leqslant -x) \leqslant \exp\Big(-\frac{\big(x+\mathbb E[F]\big)^2}{2L}\Big).
}
\end{thm}
\begin{rem}
\rm Theorem \ref{privault-dung} was, in fact, obtained in \cite{privault} for centered random variables $F$.
However, since $p_F(x) = p_{F-\mathbb E[F]}(x-\mathbb E[F])$, where $p_{F-\mathbb E[F]}$
is the density function for $F-\mathbb E[F]$, and condition \eqref{condition} does not change if we replace $F$ with $F-\mathbb E[F]$,
the statement of Theorem \ref{privault-dung} follows immediately.
\end{rem}
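\begin{rem}
\rm As a simple illustration of Theorem \ref{privault-dung} (not needed in the sequel), take $F = B_T$. Then $D_sF = \mathbf 1_{[0,T]}(s)$, so $\int_0^\infty D_sF\, \mathbb E[D_sF|\mathcal F_s]\, ds = T$ and condition \eqref{condition} holds with $l = L = T$. Since $\mathbb E|B_T| = \sqrt{2T/\pi}$, both bounds of the theorem collapse to $\frac1{\sqrt{2\pi T}}\exp\big(-\frac{x^2}{2T}\big)$, i.e., to the exact Gaussian density of $B_T$.
\end{rem}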
\subsection{Malliavin derivatives of solutions to SDEs}
Consider SDE \eqref{sde}, where the coefficients are given by \eqref{coeff-sde} and $u(t,x)$ is the unique
${\rm C}^{1+\frac{\beta}2,2+\beta}_b([0,T]\times {\mathbb R})$-solution to problem \eqref{final}. It is known that
(see, e.g., \cite{nualart}) if the coefficients of an SDE are differentiable with
bounded derivatives, its solution is Malliavin differentiable.
It is also known that if, additionally, $\tilde\sigma$
is bounded away from zero, then,
by means of Lamperti's transform $\eta(t,x) = \int_0^x \frac1{\tilde\sigma(t,\xi)} d\xi$ (\cite{lamperti}),
the Malliavin derivative of $X_t$ can be explicitly computed.
The algorithm is well known (see, e.g., \cite{detemple}), so we skip the computation, and write the final result:
\aaa{
\label{dx}
D_r X_t = \tilde\sigma(t,X_t) e^{\int_r^t \psi(s,X_s) ds},
}
where $\psi(s,x) = \frac{2\tilde f(s,x)\partial_x\tilde\sigma(s,x)}{\tilde\sigma^2(s,x)}-\frac{\partial_x\tilde f(s,x) +
\tilde f(s,x) \partial^2_{xx}\tilde \sigma(s,x)+ \partial_s\tilde\sigma(s,x)}{\tilde\sigma(s,x)} - \frac12\partial^2_{xx}\tilde \sigma(s,x)\tilde\sigma(s,x)$.
\section{Results}
In this section, we prove that the laws of $X_t$, $Y_t$, and $Z_t$ possess densities
with respect to the Lebesgue measure, and obtain
Gaussian estimates for the densities and tail probabilities of these laws.
In what follows, we will make use of assumptions (A1)--(A9) below.
Assumptions (A1)--(A3) are required to obtain density estimates for the law of $X_t$.
\begin{itemize}
\item[\bf (A1)] For all $(t,x,u) \in [0,T]\times {\mathbb R} \times {\mathbb R}$, $\nu(|u|) \leqslant \sigma(t,x,u) \leqslant \mu(|u|)$,
where $\nu$ and $\mu$ are non-increasing and, respectively, non-decreasing positive functions;
\item[\bf (A2)] the functions $f$, $g$, and $h$ satisfy conditions
(ii), (iii), and (v)--(vii) of Theorem \ref{lady};
\item[\bf (A3)]
the derivatives $\partial_x \sigma$ and $\partial_u \sigma$
exist and are
H\"older continuous in $t$, $x$, $u$ with exponents $\frac{\beta}2$,
$\beta$, $\beta$, respectively, and globally bounded H\"older constants; further, $\partial_s\sigma$ exists, and
$|\sigma| + |\partial_s \sigma| + |\partial_x \sigma| + |\partial_u \sigma| \leqslant \alpha$ for some constant $\alpha>0$.
\end{itemize}
Assumptions (A4) and (A5) below should be added to (A1)--(A3) to obtain density estimates for the law of $Y_t$.
Remark that under (A1)--(A3), the solution $u$ to problem \eqref{final} possesses a bound for $|\partial_xu|$.
This bound will be denoted by $M_1$. Also, we recall that the bound for $|u|$ is denoted by $M$.
\begin{itemize}
\item[\bf (A4)]
In the region $[0,T] \times \mathcal R$, where $\mathcal R ={\mathbb R} \times \{|u|\leqslant M\}\times \{|p| \leqslant M_1\}$,
$\partial_x f$, $\partial_x g$, $\partial_u f$, $\partial_u g$, $\partial_p f$, $\partial_p g$ exist, are bounded
and H\"older continuous in $t$, $x$, $u$, $p$ with exponents $\frac{\beta}2$, $\beta$,
$\beta$, $\beta$, respectively, and bounded H\"older constants;
\item[\bf (A5)]
either (a) or (b) holds:
\aa{
&\text{(a)\,} \quad
h'\geqslant 0 \quad \text{and} \; \inf_{(x,u,p)\in \mathcal R} \partial_x g(t,x,u,p) > 0 \; \text{for all} \; t\in (0,T];\\
&\text{(b)\,} \quad
h' \leqslant 0\quad \text{and} \; \sup_{(x,u,p)\in \mathcal R} \partial_x g(t,x,u,p) < 0 \; \text{for all} \; t\in (0,T].
}
\end{itemize}
Finally, to estimate the density of the law of $Z_t$,
assumption (A5) should be replaced with assumption (A5') below, and, additionally, (A6)--(A9) should be in force.
\begin{itemize}
\item[\textbf{(A5}\textrm{'}\textbf{)}]
For all $(t,x,u,p) \in (0,T] \times\mathcal R$, $\partial_x g(t,x,u,p) \geqslant 0$; moreover, $h'\geqslant 0$ on ${\mathbb R}$.
\end{itemize}
Further, (A6)--(A9) read:
\begin{itemize}
\item[\bf (A6)] $\partial_x \sigma\geqslant 0$, $\partial_u \sigma\geqslant 0$ on $[0,T]\times{\mathbb R} \times \{|u|\leqslant M\}$;
\item[\bf (A7)] $\partial^2_{px} f$, $\partial^2_{pu}f$, $\partial^2_{pp} f$, $\partial^2_{xx}f$,
$\partial^2_{xu} f$, $\partial^2_{uu} f$, $\partial^2_{px} g$, $\partial^2_{pu}g$, $\partial^2_{pp} g$, $\partial^2_{xx}g$,
$\partial^2_{xu} g$, $\partial^2_{uu} g$ exist on $[0,T]\times \mathcal R$, are bounded and H\"older continuous in $t$, $x$, $u$, $p$ with
exponents $\frac{\beta}2$, $\beta$, $\beta$, $\beta$, respectively, and bounded H\"older constants;
\item[\bf (A8)]
for all $t\in (0,T]$,
$\inf_{(x,u,p)\in \mathcal R}\partial^2_{xx} g >0$ and $h''\geqslant 0$;
\item[\bf (A9)] the following inequalities hold on
$[0,T]\times\mathcal R$
\eqq{
\partial^2_{xx} f + 2\partial^2_{xu} g + \partial_p g \partial^2_{xx} \sigma + \partial^2_{px} g \, \partial_x \sigma \geqslant 0, \\
\partial^2_{uu} g + 2\partial^2_{ux} f + 2\partial^2_{px}g \, \partial_u \sigma + 2\partial^2_{pu}g\, \partial_x \sigma + 2\partial^2_{px} f \, \partial_x \sigma
+ 2\partial_p g \partial^2_{xu} \sigma +\partial^2_{pp} g(\partial_x\sigma)^2\geqslant 0,\\
\partial^2_{uu} f + 2\partial^2_{pu} g\, \partial_u\sigma + 2\partial^2_{pu}f \partial_x \sigma + 2\partial^2_{px} f \partial_u \sigma
+ 2\partial^2_{pp} g\, \partial_x\sigma\partial_u\sigma + 2\partial_p f \partial^2_{xu}\sigma + \partial_p g \partial^2_{uu}\sigma \\
\hspace{10cm} + \partial^2_{pp} f (\partial_x\sigma)^2 \geqslant 0,\\
2\partial^2_{up} f \partial_u\sigma + 2\partial^2_{pp}f \partial_x\sigma \partial_u\sigma + \partial_p f \partial^2_{uu}\sigma + \partial^2_{pp} g (\partial_u\sigma)^2 \geqslant 0, \\
\partial^2_{pp} f \geqslant 0.
}
\end{itemize}
\subsection{Density estimates for the law of $X_t$}
\begin{thm}
Let (A1)--(A3) hold. Then, the law of $X_t$ has a density $p_{X_t}$ with respect to the Lebesgue measure.
Moreover, for almost all $x\in {\mathbb R}$, $p_{X_t}$ satisfies the estimate
\aaa{
\label{est-x}
\frac{\mathbb E |X_t-\mathbb E[X_t]|}{2\Xi(t)} \exp\Big(-\frac{\big(x-\mathbb E[X_t]\big)^2}{2\xi(t)}\Big) \leqslant
p_{X_t}(x) \leqslant \frac{\mathbb E |X_t-\mathbb E[X_t]|}{2\xi(t)} \exp\Big(-\frac{\big(x-\mathbb E[X_t]\big)^2}{2\Xi(t)}\Big),
}
where $\xi(t)$ and $\Xi(t)$ are positive functions that can be computed explicitly.
Further, for all $x>0$, the tail probabilities of $X_t$ satisfy
\aaa{
\label{tail-x}
\mathbb P(X_t > x) \leqslant \exp\Big(-\frac{\big(x-\mathbb E[X_t]\big)^2}{2\Xi(t)}\Big) \quad \text{and} \quad
\mathbb P(X_t< -x) \leqslant \exp\Big(-\frac{\big(x + \mathbb E[X_t]\big)^2}{2\Xi(t)}\Big).
}
\end{thm}
\begin{proof}
Note that, under (A1)--(A3), the solution $u$ to problem \eqref{final} and its derivatives $\partial_x u$, $\partial_s u$, and $\partial^2_{xx} u$
are bounded. Hence,
$\tilde f$, $\tilde\sigma$, $\partial_x\tilde f$, $\partial_s\tilde\sigma$, $\partial_x \tilde\sigma$, and $\partial^2_{xx} \tilde\sigma$ are bounded as well
(the functions $\tilde f$ and $\tilde \sigma$ are defined by \eqref{coeff-sde}).
Further, by (A3), on $[0,T]\times {\mathbb R}$, $\tilde\sigma(t,x)\geqslant\nu(M)$, where $M$ is the bound for $|u|$.
Therefore, the function $\psi$ in
\eqref{dx} is bounded. Let $M_\psi$ be its bound. Formula \eqref{dx} allows us to estimate $D_rX_t$ as follows
\aaa{
\label{dx-est}
\nu(M) e^{-M_\psi t}\leqslant D_rX_t \leqslant \mu(M) e^{M_\psi t} \quad \text{a.s.}
}
This implies that
\aa{
t \nu(M)^2 e^{-2M_\psi t} \leqslant \int_0^t D_r X_t \mathbb E[D_r X_t | \mathcal F_r] dr \leqslant t\mu(M)^2 e^{2M_\psi t}.
}
Remark that $D_r X_t = 0$ if $r>t$. By Theorem \ref{privault-dung}, the law of $X_t$ has a density with respect to
the Lebesgue measure and estimate \eqref{est-x} holds with $\xi(t) = t \nu(M)^2 e^{-2M_\psi t}$
and $\Xi(t) = t\mu(M)^2 e^{2M_\psi t}$. Moreover, the tail probabilities of $X_t$ satisfy \eqref{tail-x}.
\end{proof}
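Since $\xi(t)$ and $\Xi(t)$ are explicit, the envelopes in \eqref{est-x} and the tail bounds \eqref{tail-x} are straightforward to evaluate numerically. A minimal Python sketch is given below; the values of \texttt{nu\_M}, \texttt{mu\_M}, and \texttt{M\_psi} are placeholders that in a concrete example must be computed from the bounds on $\tilde\sigma$ and $\psi$ discussed in the proof, while $\mathbb E[X_t]$ and $\mathbb E|X_t-\mathbb E[X_t]|$ can be estimated, e.g., by Monte Carlo simulation of \eqref{sde}.
\begin{verbatim}
import numpy as np

# Placeholder constants: in a concrete example nu_M, mu_M, M_psi must be
# extracted from the bounds on sigma-tilde and psi discussed in the proof.
nu_M, mu_M, M_psi = 0.5, 2.0, 1.0

def xi(t):    # xi(t) = t * nu(M)^2 * exp(-2 M_psi t)
    return t * nu_M**2 * np.exp(-2.0 * M_psi * t)

def Xi(t):    # Xi(t) = t * mu(M)^2 * exp(+2 M_psi t)
    return t * mu_M**2 * np.exp(2.0 * M_psi * t)

def density_envelopes(x, t, mean_X, mean_abs_dev):
    """Lower and upper Gaussian envelopes for p_{X_t}(x) from (est-x);
    mean_X ~ E[X_t], mean_abs_dev ~ E|X_t - E[X_t]|."""
    lower = mean_abs_dev / (2.0 * Xi(t)) * np.exp(-(x - mean_X)**2 / (2.0 * xi(t)))
    upper = mean_abs_dev / (2.0 * xi(t)) * np.exp(-(x - mean_X)**2 / (2.0 * Xi(t)))
    return lower, upper

def upper_tail_bound(x, t, mean_X):
    """Bound on P(X_t > x) for x > 0, from (tail-x)."""
    return np.exp(-(x - mean_X)**2 / (2.0 * Xi(t)))
\end{verbatim}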
\subsection{Density estimates for the law of $Y_t$}
To estimate the density for $Y_t$, we will use the formula $Y_t = u(t,X_t)$, where $u$
is the unique ${\rm C}^{1+\frac{\beta}2,2+\beta}_b$-solution to problem \eqref{final}. This formula immediately
implies that $Y_t$ is Malliavin differentiable and $D_rY_t = \partial_x u(t,X_t) D_r X_t$.
Below, we prove that under (A1)--(A5), there exists a positive function $m(t)$, $t\in [0,T]$,
such that
\eqn{
\label{D-bound}
\text{either} \;\; &\sup_{x\in{\mathbb R}} \partial_x u(t,x) \leqslant -m(t) \; &&\text{for all} \; t\in [0,T] \\
\text{or} \quad
&\inf_{x\in{\mathbb R}} \partial_xu(t,x) \geqslant m(t) \; &&\text{for all} \; t\in [0,T].
}
To this end, we obtain a PDE for the function $v= \partial_x u$.
We start by considering linear PDE \eqref{pde-lin} and prove that we can differentiate it with respect to $x$.
The following result can be viewed as a corollary of Theorem \ref{fundamental}.
\begin{pro}
\label{pr1}
Assume PDE \eqref{pde-lin} is uniformly parabolic.
Let the coefficients of $L$ and the function $g$ be of class ${\rm C}^{\frac{\beta}2,1+\beta}_b([0,T]\times {\mathbb R})$,
$\beta\in (0,1)$.
Further, let the initial condition $\varphi$ be of class ${\rm C}^{2+\beta}_b({\mathbb R})$.
Then, the solution $u(t,x)$ of \eqref{pde-lin}, whose existence was established by Theorem \ref{fundamental},
belongs to class ${\rm C}^{1, 3}_b([0,T]\times{\mathbb R})$, and its derivative $v(t,x) = \partial_x u(t,x)$
is the unique solution to
\eq{
\label{d1}
Lv + \partial_x a \, \partial^2_{xx} u + \partial_x b\, \partial_x u + \partial_x c \, u + \partial_x g = 0,\\
v(0,x) = \varphi'(x).
}
In particular, we can differentiate PDE \eqref{pde-lin} w.r.t. $x$.
\end{pro}
\begin{proof}
Introduce the function $u_\Delta(t,x) = \frac{u(t, x+\Delta x) - u(t,x)}{\Delta x}$. Since $u$ is a solution
to \eqref{pde-lin}, the linear PDE for $u_\Delta$ takes the form
\mmm{
\label{pde-dl}
(Lu_\Delta)(t,x) =-\big( \tilde\partial_x a(t,x) \partial^2_{xx} u(t,x+\Delta x) + \tilde\partial_x b(t,x) \partial_x u(t,x+\Delta x) \\
+ \tilde\partial_x c(t,x) u(t,x+\Delta x) + \tilde\partial_x g(t,x)\big),
}
where $\tilde\partial_x$ is defined as follows: $\tilde\partial_x \phi(x) = \int_0^1 \partial_x\phi(x+\lambda\Delta x) d\lambda$.
Remark that, by Theorem \ref{hold-unique}, $u$ is of class ${\rm C}^{1+\frac{\beta}2, 2+\beta}_b$,
and, therefore, by assumptions, the right-hand side of \eqref{pde-dl} is of class ${\rm C}^{0,\beta}_b$.
By Theorem \ref{fundamental}, the Cauchy problem consisting of PDE \eqref{pde-dl}
and the initial condition $u_\Delta(0,x) = \frac{\varphi(x+\Delta x) - \varphi(x)}{\Delta x}$ has a unique solution which takes the form
\mm{
u_\Delta(t,x) = \int_{{\mathbb R}}\Gamma(t,x,0,z) u_\Delta(0,z) dz -
\int_0^t \int_{{\mathbb R}} \Gamma(t,x,s,z) \big(\tilde\partial_x a(s,z) \partial^2_{xx} u(s,z+\Delta x) \\
+ \tilde\partial_x b(s,z) \partial_x u(s,z+\Delta x) + \tilde\partial_x c(s,z) u(s,z+\Delta x) + \tilde\partial_x g(s,z)\big) ds dz.
}
On the other hand, consider problem \eqref{d1} w.r.t. $v$.
By Theorem \ref{fundamental}, \eqref{d1} has a unique solution $v(t,x)$ which takes the form
\mm{
v(t,x) = \int_{{\mathbb R}}\Gamma(t,x,0,z) \varphi'(z) dz -
\int_0^t \int_{{\mathbb R}} \Gamma(t,x,s,z) \big(\partial_x a(s,z) \partial^2_{xx} u(s,z) \\
+ \partial_x b(s,z) \partial_x u(s,z) + \partial_x c(s,z) u(s,z) + \partial_x g(s,z)\big) ds dz.
}
Recalling that the fundamental solution $\Gamma(t,x,s,z)$ possesses bounds by Gaussian densities \cite{friedman}, we conclude
that as $\Delta x\to 0$, $u_\Delta(t,x) \to v(t,x)$. This means that
$v=\partial_x u$. In particular, it means that the derivatives $\partial^3_{xxx} u$ and $\partial^2_{xt} u$ exist,
and we can differentiate PDE \eqref{pde-lin} w.r.t. $x$.
\end{proof}
\begin{lem}
\label{ux-bound}
Let (A1)--(A5) hold, and let $u$ be the solution to problem \eqref{final}
(whose existence, together with the existence of the bound $M_1$ for its gradient $\partial_x u$,
was established under (A1)--(A3)).
Then, there exists a positive function $m(t)$,
such that one of the alternatives in \eqref{D-bound} is fulfilled.
\end{lem}
\begin{proof}
Problem \eqref{final} can be rewritten as a linear problem as follows
\eq{
\label{eq-main}
\frac12 \tilde\sigma^2(t,x) \partial^2_{xx} u + \tilde f(t,x)\partial_x u + \tilde g(t,x) + \partial_t u = 0,\\
u(T,x) = h(x),
}
where
$\tilde g(t,x) = g(t,x,u(t,x), \partial_xu(t,x)\sigma(t,x,u(t,x)))$, and $\tilde\sigma$ and $\tilde f$ are defined by \eqref{coeff-sde}.
By Proposition \ref{pr1}, we can differentiate PDE
\eqref{eq-main} w.r.t. $x$. By doing so, we obtain the following PDE for $v(t,x) = \partial_x \theta(t,x) = \partial_x u(T-t,x)$
\aaa{
\label{eq-d}
a(t,x,\theta) \, \partial^2_{xx}v + b(t,x,\theta,\partial_x \theta) \partial_x v + c(t,x,\theta,\partial_x \theta) v - \partial_t v = - \partial_x g(t,x,\theta,\partial_x \theta),
}
where $\theta(t,x) = u(T-t,x)$, and the functions $v$, $\theta$, $\partial_x v$, and $\partial_x \theta$
are everywhere evaluated at $(t,x)$. Furthermore, $a$, $b$, and $c$ are defined as follows
\eq{
\label{abc}
a(t,\ldots) = \frac12\sigma^2(T-t,\ldots); \qquad
b(t,\ldots) = \big(\partial_x a + \partial_p f\sigma \partial_x u + \partial_p g\, \sigma + f\big)(T-t,\ldots); \\
c(t,\ldots) = \big(\partial_x f + \partial_u g + \partial_p g \partial_x\tilde\sigma + \partial_u f \partial_x u + \partial_p f \partial_x \tilde\sigma \partial_x u\big)(T-t,\ldots),
}
where the dots are used to simplify notation and are to be substituted with $x,\theta(t,x), \partial_x\theta(t,x)$.
Let $\mathcal L$ be the partial
differential operator defined by the left-hand side of \eqref{eq-d}, i.e.,
\aa{
\mathcal L v = a\, \partial^2_{xx} v + b\, \partial_x v + c \,v - \partial_t v.
}
If (A5)-(a) is in force, define the function
$\tilde v(t,x) = v(t,x) - m(t)$, where $m(t) = \int_0^t \tilde m(s) ds$ and $\tilde m(s)$ is a positive sufficiently
small function whose choice is explained below. Then,
\aa{
\mathcal L \tilde v = - \partial_x g - c\, m(t) + \tilde m(t) \leqslant -\inf_{(x,u,p)\in \mathcal R} \partial_x g(t,x,u,p) - c\, m(t) + \tilde m(t).
}
Remark that by (A4), $c$ is bounded.
Therefore, if $\tilde m(t)$ and $m(t)$ are sufficiently small, then $\mathcal L \tilde v \leqslant 0$.
Further, since $m(0) = 0$ and $h'\geqslant 0$ by (A5)-(a), we have $\tilde v(0,x) = h'(x) \geqslant 0$.
By Theorem \ref{comparison}, $\tilde v(t,x) \geqslant 0$, and, therefore $v(t,x) \geqslant m(t)$ on $[0,T]\times {\mathbb R}$.
If (A5)-(b) is in force, then, defining the function $\tilde v(t,x) = v(t,x) + m(t)$, we obtain
that $\mathcal L \tilde v = - \partial_x g + c m(t) - \tilde m(t)$. By a similar argument, we conclude that $v(t,x) \leqslant -m(t)$ on
$[0,T]\times {\mathbb R}$. The lemma is proved.
\end{proof}
As a corollary of Theorem \ref{privault-dung} and Lemma \ref{ux-bound}, we obtain Gaussian
estimates for the density of the law of $Y_t$.
\begin{thm}
\label{y-est}
Let (A1)--(A5) hold. Then, the distribution of $Y_t$ has a density $p_{Y_t}$ with respect to the
Lebesgue measure. Moreover, for almost all $x\in {\mathbb R}$, this density satisfies the estimate
\aaa{
\label{est1}
\frac{\mathbb E |Y_t-\mathbb E[Y_t]|}{2\Lambda(t)} \exp\Big(-\frac{\big(x-\mathbb E[Y_t]\big)^2}{2\lambda(t)}\Big) \leqslant
p_{Y_t}(x) \leqslant \frac{\mathbb E |Y_t-\mathbb E[Y_t]|}{2\lambda(t)} \exp\Big(-\frac{\big(x-\mathbb E[Y_t]\big)^2}{2\Lambda(t)}\Big),
}
where $\lambda(t)$ and $\Lambda(t)$ are positive functions that can be computed explicitly.
Further, for all $x>0$, the tail probabilities of $Y_t$ satisfy
\aaa{
\label{tail}
\mathbb P(Y_t > x) \leqslant \exp\Big(-\frac{\big(x-\mathbb E[Y_t]\big)^2}{2\Lambda(t)}\Big) \quad \text{and} \quad
\mathbb P(Y_t<-x) \leqslant \exp\Big(-\frac{\big(x+\mathbb E[Y_t]\big)^2}{2\Lambda(t)}\Big).
}
\end{thm}
\begin{proof}
Since $D_rY_t = \partial_x u(t,X_t)D_rX_t$, by \eqref{dx-est} and Lemma \ref{ux-bound},
\eq{
\label{Y-bounds}
\text{either \; } &m(t)\nu(M) e^{-M_\psi t} \leqslant D_r Y_t \leqslant M_1 \mu(M) e^{M_\psi t} \; \text{\; a.s.} \\
\text{or \; } &m(t)\nu(M) e^{-M_\psi t} \leqslant - D_r Y_t \leqslant M_1 \mu(M) e^{M_\psi t} \; \text{\; a.s.,}
}
where $M_1$ is the bound for $\partial_x u$.
Taking into account that $D_rX_t = 0$ if $r>t$, we obtain
\aa{
\lambda(t) = t\big( m(t)\nu(M) e^{-M_\psi t} \big)^2 \leqslant \int_0^t D_r Y_t \,\mathbb E[D_rY_t|\mathcal F_r] dr < t\big(M_1\mu(M) e^{M_\psi t}\big)^2 = \Lambda(t).
}
By Theorem \ref{privault-dung}, $Y_t$ has a density with respect to the Lebesgue measure,
and estimate \eqref{est1} holds. Also, we have estimates for the tail probabilities of $Y_t$,
given by \eqref{tail}.
\end{proof}
\subsection{Density estimates for the law of $Z_t$}
To estimate the density for $Z_t$, we recall that $Z_t = \partial_x u(t,X_t) \sigma(t,X_t,u(t,X_t))$.
This immediately implies that $Z_t$ is Malliavin differentiable, and
\aaa{
\label{dz}
D_r Z_t = \big(\partial_x u(t,X_t) \partial_x \tilde\sigma(t,X_t)
+ \partial^2_{xx}u(t,X_t) \tilde \sigma(t,X_t)\big)D_rX_t,
}
where $\tilde\sigma(t,x) = \sigma(t,x,u(t,x))$.
Lemma \ref{lem2} below provides a lower bound for the derivative $\partial^2_{xx} u$.
\begin{lem}
\label{lem2}
Let (A1)--(A4), (A5'), and (A7)--(A9) hold, and let $u$ be the solution to problem \eqref{final}.
Then there exists a positive function $\rho(t)$ such that
$\partial^2_{xx} u \geqslant \rho(t)$ for all $(t,x) \in [0,T]\times{\mathbb R}$.
\end{lem}
\begin{proof}
Remark that linear PDE \eqref{eq-d} takes form \eqref{pde-lin} with $a$, $b$, and $c$ given by \eqref{abc}.
Since $\theta(t,x) = u(T-t,x)$
is of class ${\rm C}^{1+\frac{\beta}2, 2+\beta}_b([0,T]\times{\mathbb R})$, then, by (A7), the coefficients
$a(t,x,\theta(t,x), \partial_x\theta(t,x))$, $b(t,x,\theta(t,x), \partial_x\theta(t,x))$, and $c(t,x,\theta(t,x), \partial_x\theta(t,x))$ of PDE \eqref{eq-d}
and its right-hand side $-\partial_x g(t,x,\theta(t,x),\partial_x \theta(t,x))$
are of class ${\rm C}^{\frac\beta2,1+\beta}_b$ as functions of $(t,x)$. By Proposition \ref{pr1}, the solution $v=\partial_x u$
to \eqref{eq-d} is of class ${\rm C}^{1,3}_b$ (and, therefore, $u$ is of class ${\rm C}^{1,4}_b$),
and we can differentiate PDE \eqref{eq-d} w.r.t. $x$.
Defining $w=\partial^2_{xx} u$ and replacing $\partial^2_{xx} u$, $\partial^3_{xxx} u$, and $\partial^4_{xxxx}u$
by $w$, $\partial_x w$, and $\partial^2_{xx} w$, respectively, everywhere where it is possible, we obtain the following
PDE w.r.t. $w$
\mmm{
\label{last2}
a \, \partial^2_{xx} w + b\, \partial_x w + \mathcal P w -\partial_t w \\
= - \partial^2_{xx} g - \Psi_1 \partial_x u - \Psi_2 (\partial_x u)^2 - \Psi_3 (\partial_x u)^3
- \Psi_4 (\partial_x u)^4 - \Psi_5 (\partial_x u)^5,
}
where $\mathcal P$ is a polynomial of $\sigma$, $f$, $g$, all their first
and second order derivatives w.r.t. $x$, $u$, $p$, and, additionally, of $\partial_x u$.
Further, the functions
$\Psi_i$, $i=1,2,3,4,5$, are the left-hand sides of the first, second, third, fourth, and fifth
inequalities, respectively, in assumption (A9). Let
$\mathcal L_1$ denote the partial differential operator defined by the left-hand side of \eqref{last2}.
We proceed with the same argument as in
Lemma \ref{ux-bound}, that is, define the function
$\tilde w(t,x) = w(t,x) - \rho(t)$, where $\rho(t) = \int_0^t \tilde \rho(s) ds$ and $\tilde \rho(s)$ is a sufficiently small positive function. Then,
\aa{
\mathcal L_1 \tilde w = - \partial^2_{xx} g - \sum_{n=1}^5 \Psi_n (\partial_x u)^n - \mathcal P\, \rho(t) + \tilde \rho(t).
}
Remark that under (A5'), $\partial_x u\geqslant 0$ on $[0,T]\times {\mathbb R}$. Indeed, this follows from the proof of Lemma \ref{ux-bound},
where we have to apply Theorem \ref{comparison} with $m(t) = 0$.
Hence, by (A9), $\sum_{n=1}^5 \Psi_n (\partial_x u)^n\geqslant 0$. Further, by (A2)--(A4), $\mathcal P$ is bounded.
Therefore, (A8) implies that if $\tilde \rho(t)$ and $\rho(t)$ are sufficiently small, then $\mathcal L_1 \tilde w \leqslant 0$. Since $h''\geqslant 0$,
by Theorem \ref{comparison}, we obtain that $w(t,x) \geqslant \rho(t)$ for all $(t,x)\in [0,T]\times{\mathbb R}$.
\end{proof}
\begin{thm}
Let (A1)--(A4), (A5'), and (A6)--(A9) hold. Then, the distribution of $Z_t$ has a density $p_{Z_t}$ with respect to the
Lebesgue measure. Moreover, for almost all $x\in {\mathbb R}$, this density satisfies
\aaa{
\label{est2}
\frac{\mathbb E |Z_t-\mathbb E[Z_t]|}{2\Sigma(t)} \exp\Big(-\frac{\big(x-\mathbb E[Z_t]\big)^2}{2\varsigma(t)}\Big) \leqslant
p_{Z_t}(x) \leqslant \frac{\mathbb E|Z_t-\mathbb E[Z_t]|}{2\varsigma(t)} \exp\Big(-\frac{\big(x-\mathbb E[Z_t]\big)^2}{2\Sigma(t)}\Big),
}
where $\varsigma(t)$ and $\Sigma(t)$ are positive functions that can be computed explicitly.
Further, for all $x>0$, the tail probabilities of $Z_t$ satisfy
\aaa{
\label{tail2}
\mathbb P(Z_t > x) \leqslant \exp\Big(-\frac{\big(x-\mathbb E[Z_t]\big)^2}{2\Sigma(t)}\Big)
\quad \text{and} \quad
\mathbb P(Z_t<-x) \leqslant \exp\Big(-\frac{\big(x+\mathbb E[Z_t]\big)^2}{2\Sigma(t)}\Big).
}
\end{thm}
\begin{proof}
Assumptions (A5') and (A6)--(A9) provide the lower bound for the function
\aaa{
\label{func}
\partial_x u \, \partial_x \sigma + (\partial_x u)^2\partial_u \sigma + \partial^2_{xx}u\, \sigma
}
on the right-hand side of \eqref{dz}. Indeed, $\partial_x u \, \partial_x \sigma + (\partial_x u)^2\partial_u \sigma \geqslant 0$ by (A5') and (A6). Finally, from (A1) and (A7)--(A9), by virtue of Lemma \ref{lem2}, it follows that
$\partial^2_{xx}u\, \sigma \geqslant \rho(t) \nu(M)$, where $\rho(t)$ is the positive function defined in Lemma \ref{lem2}
and $\nu(\,\cdot\,)$ is the function from (A1). Now taking into account that $D_rX_t$ possesses
upper and lower positive bounds, provided by
\eqref{dx-est}, we obtain that the Malliavin derivative
$D_r Z_t$ satisfies
\aa{
\nu(M)^2 \rho(t) e^{-M_\psi t}\leqslant D_rZ_t \leqslant \mu(M) \gamma \,e^{M_\psi t} \quad \text{a.s.},
}
where $\gamma$ is an upper bound for \eqref{func}. This bound, indeed, exists by (A3) and since
the solution $u$ has bounded derivatives. Now by the same argument as in Theorem \ref{y-est},
we obtain that the law of $Z_t$ possesses a density $p_{Z_t}$ w.r.t. the Lebesgue measure, and
\eqref{est2} holds with $\Sigma(t) = t \mu(M)^2 \gamma^2 e^{2 M_\psi t}$ and
$\varsigma(t) = t \nu(M)^4 \rho(t)^2 e^{-2 M_\psi t}$. Moreover, we obtain estimates \eqref{tail2} for the
tail probabilities of $Z_t$.
\end{proof}
\section*{Acknowledgements}
Christian Olivera is partially supported by FAPESP by the grants 2017/17670-0 and 2015/07278-0.
\section*{Introduction}
\label{sec:intro}
Nonlinear photonic crystals, characterized by a two-dimensional periodic modulation of the nonlinear response \cite{Berger1998,Arie2007,Arie2009},
offer a high degree of flexibility for engineering the properties of optical parametric processes, because of the multiplicity of vectors of the nonlinear lattice providing quasi phase matching. When considering the generation of twin photons or twin beams by parametric down-conversion (PDC), these photonic crystals have shown interesting potentialities as monolithic sources of path-entangled photonic states \cite{Gong2012, Jin2013, Megidish2013}, and may provide novel compact schemes for continuous-variable quantum technologies \cite{Gong2016}.
In this work we analyse the quantum state of twin photons and twin beams generated in a hexagonally poled nonlinear photonic crystal (HexNPC) with a quadratic nonlinearity (see \cite{Gallo2011, Levenius2012, Jin2013}).
The HexNPC is characterized by the coexistence of two nonlinear processes, sustained by the two fundamental vectors of the reciprocal lattice of the nonlinearity (Fig.1).
In the spectral-angular domain of the down-converted light there are special points where phase matching occurs simultaneously for both processes, and where twin photons may originate from either process. In the high-gain regime the two possibilities add coherently and stimulate each other, giving rise to unusual isolated hot spots in the parametric emission \cite{Liu2008,Chen2014,Conforti2014}.
\par
In the quantum domain, we describe a general scenario of tripartite entanglement holding among specific triplets of hot-spots. We show that here
the action of the photonic crystal is equivalent to a single nonlinear process generating a pair of entangled twin beams, followed by a 50:50 beam-splitter dividing one of the twin beams into two separated paths. The occurrence of such a situation was
already suggested in \cite{Jin2013}. In contrast to the analysis performed in \cite{Jin2013}, which was limited to the two-photon state generated in the spontaneous regime, our study is valid for any photon number, encompassing both the photonic and the continuous-variable entanglement, and emphasizes the role of conditional measurements in generating path entanglement.
\begin{figure}[thb]
\includegraphics[scale=0.5]{Fig1_scheme_bis.pdf}
\caption{(a) Scheme of parametric down-conversion in a hexagonally poled nonlinear photonic crystal. The pump beam is allowed to be slightly tilted with respect to the symmetry axis $z$ of the nonlinear pattern.
(b) Vectors of the reciprocal lattice contributing to quasi phase-matching.
}
\label{fig1}
\end{figure}
\par
In a related experimental work \cite{Gallo2011}, we observed that by properly tuning the angle of incidence of the pump laser, it was possible to reach a particular condition, referred to as {\em superresonance}. In this condition,
two triplets of hot spots coalesce into four coupled modes, with a sudden enhancement of the brightness of the hot-spots. Indeed, we demonstrated that the rate of growth of parametric light along the crystal increases in these modes by the famous {\em Golden Ratio}, $ \phi= \frac{1 + \sqrt{5}}{2}$. In the present work we show that this condition, corresponding to a transverse spatial resonance between the pump and the nonlinear lattice, gives rise to a quadripartite entangled state, and enables the maximal coherence between the two concurrent nonlinear processes characterizing the hexagonal photonic crystal. In this condition we demonstrate that the action of the HexNPC is equivalent to i) {\em two} independent parametric processes, with different gains $ g_0 \phi$ and $ \frac{-g_0}{\phi}$, generating two pairs of entangled twin beams, ii) followed by an unbalanced beam-splitter that mixes the two processes according to the Golden Ratio.
\par An original aspect of our analysis is that it fully takes into account the 3D character of the parametric emission (transverse spatial coordinates plus temporal frequency), by means of extended 3D+1 numerical simulations of the device, complemented by approximate analytical models. This allows us to show that both the 3-mode and 4-mode entanglement can be realized at different frequencies over a broad bandwidth, opening the possibility of a widely tunable implementation of novel quantum states of light.
\section{The model}
\label{sec:model}
We consider the geometry depicted in Fig.\ref{fig1}, where a laser pump beam propagates in a $\chi^{(2)}$ photonic crystal, with a hexagonal pattern of the nonlinear response \cite{Gallo2011, Levenius2012, Jin2013,Stensson2014,Conforti2014}, generating signal and idler waves at lower energies. Light is assumed to propagate mainly
along the symmetry axis of the pattern ($z$-axis of our frame of reference), but we allow the input pump to be slightly tilted in the pattern plane ($(x,z)$ plane). Although our analysis may apply to different tuning conditions,
we focus
on the type 0 process, where all the three waves have the same extraordinary polarization, and on non-degenerate parametric emission, in a configuration similar to the experiment in \cite{Gallo2018}.
\par
The model is formulated in terms of coupled propagation equations for the pump, signal, and idler field operators, describing three wave-packets centered around the frequencies $\omega_p$, $\omega_s$, and $\omega_i= \omega_p-\omega_s$, respectively.
The grating of the nonlinearity is described by keeping only the leading-order terms in the Fourier expansion of the nonlinear susceptibility \cite{Berger1998,Levenius2013}
\begin{equation}
d(x,z) \simeq e^{-iG_z z}\left[ d_{01} e^{-i G_x x} + d_{10} e^{i G_x x} \right] = 2 d_{01} e^{-iG_z z} \cos{(G_x x)} ,
\end{equation}
where only the contribution of the two fundamental vectors of the reciprocal lattice (Fig.\ref{fig1}b),
$\Gone \equiv \vec{G}_{01}= -G_x \vec{e}_x -G_z \vec{e}_z$ and $\Gtwo \equiv \vec{G}_{10}= +G_x \vec{e}_x -G_z \vec{e}_z$, which allow quasi-phase matching (QPM), have been retained. For the hexagonally poled Mg-doped Lithium Tantalate (LiTaO$_3$) crystal used in \cite{Gallo2018,Jin2013}, $d_{10}=d_{01} \simeq 0.29 d_{33}$ ($d_{33}=17\,$pm/V), $G_z=\frac{2\pi}{\Lambda}$ and $G_x=\frac{2\pi}{\sqrt{3}\Lambda}$, where $\Lambda$ is the poling period.
\par
The propagation equations are best written in the Fourier domain spanned by the 3D vector $\w=(\q,\Omega)$, where $\q=q_x \vec{e}_x + q_y \vec{e}_y $ is the wave-vector in the plane transverse to the mean direction of propagation $z$, and $\Om$ is the frequency shift with respect to the carrier frequencies. The conjugate spatio-temporal domain is in turn described by the vector $\vxi= (\rr,t)$, $\rr=x \vec{e}_x + y \vec{e}_y$, with the convention for the inner product $ \w\cdot \vxi := \q \cdot \rr -\Om t$ . We consider the three slowly varying envelope operators
$\A_j (\w,z) \propto e^{-i k_{jz} (\w) z} \int \frac{d^3 \xi} {(2\pi)^{\frac{3}{2} }} e^{-i \xi \cdot \w} \hat E_j^{(+)} (\xi,z) e^{i \omega_j t}
$ for the signal (j=s), the idler (j=i) and the pump (j=p) fields, where $ \hat E_j^{ (+)}$ are the positive-frequency parts of the respective electric field operators, and
$k_{jz} (\q,\Om) = \sqrt{k_j ^2(\q, \Om) -q^2}$ are the z-components of the wave-vectors, with the wave number
$k_j (\q,\Om) = n_j(\q, \Om) \frac{\omega_j +\Om}{c} $ being determined by the linear dispersion relation of the j-th wave in the medium. Dimensions are such that $\hat A_j^\dagger (\w) \hat A_j (\w)$ are photon numbers per unit frequency and wavevector squared.
The field operators $\A_j$ are slowly varying along $z$, because all the effects of the linear part of the interaction with the medium, contained in $k_{jz}$, have been subtracted.
Their evolution along the nonlinear photonic crystal is described by the following equations (see \cite{Gatti2003,Brambilla2014} for derivations of similar equations):
\begin{align}
\frac{\partial}{\partial z} \hat{A}_s (\w_s, z ) &= \chi \int
\frac{d^3 \w_p } {(2\pi)^{\frac{3}{2}} } \hat{A}_p(\w_p,z)
\left[ \hat{A}_i^\dagger(\w_p -\w_s -\Gx, z) e^{-i \DD_1 (\w_s, \w_p ) z } +
\hat{A}_i^\dagger(\w_p -\w_s +\Gx, z) e^{-i \DD_2 (\w_s, \w_p ) z } \right]
\label{NLs}\\
\frac{\partial}{\partial z} \hat{A}_i (\w_i, z ) &= \chi \int
\frac{d^3 \w_s } {(2\pi)^{\frac{3}{2}} } \hat{A}_p (\w_p,z)
\left[ \hat{A}_s ^\dagger(\w_p -\w_i -\Gx, z) e^{-i \DD_1 (\w_p -\w_i -\Gx, \w_p) z } + \right. \nonumber \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \left. \hat{A}_s^\dagger(\w_p -\w_i +\Gx, z) e^{-i \DD_2 (\w_p -\w_i +\Gx, \w_p) z }
\right]
\label{NLi} \\
\frac{\partial}{\partial z} \hat{A}_p (\w_p, z ) &= - \chi \int
\frac{d^3 \w_s } {(2\pi)^{\frac{3}{2}} } \hat{A}_s (\w_s,z)
\left[ \hat{A}_i ( \w_p -\w_s-\Gx, z) e^{i \DD_1 (\w_s, \w_p ) z } +
\hat{A}_i ( \w_p -\w_s +\Gx, z) e^{i \DD_2 (\w_s, \w_p ) z } \right]
\label{NLp}
\end{align}
where $\Gx= (G_x, 0,0)$ is a short-hand notation for the x-component of the reciprocal lattice vector in the 3D Fourier space, and
$
\chi
\simeq d_{01}
\sqrt{\frac{ \hbar \omega_s \omega_i \omega_p }{ 8\epsilon_0 c^3 n_i n_s n_p }}
$.
The first and second terms on the r.h.s. of these equations describe all the possible three-photon interactions $\w_p \longleftrightarrow \w_s \, \w_i$
that satisfy the generalized energy-momentum conservation by means of the lattice vectors $\Gone$ and $\Gtwo$, respectively:
\begin{subequations}
\label{phase-matching}
\begin{align}
&\Om_s + \Om_i = \Om_p \quad &\text{energy conservation} \\
&\q_s + \q_i =\q_p \mp G_x \vec{e}_x \quad &\text{transverse momentum conservation} \\
&k_{sz} + k_{iz} = k_{pz} - G_z \quad &\text{longitudinal momentum conservation}
\end{align}
\end{subequations}
The last rule, rigorously obeyed only for an infinite propagation length, is accounted for by the QPM functions:
\begin{align}
\DD_{1}(\w_s,\w_p)=k_{sz}(\w_s)+k_{iz}(\w_p - \w_s -\Gx)-k_{pz}(\w_p)+G_z \nonumber\\
\DD_{2}(\w_s,\w_p)=k_{sz}(\w_s)+k_{iz}(\w_p - \w_s + \Gx)-k_{pz}(\w_p)+G_z
\label{DD12}
\end{align}
which contain the effects of temporal dispersion and diffraction at any order.
\par
These equations are in general too complicated to be solved without approximations, but stochastic simulations can be performed in the medium-high gain regime of PDC in the framework of the Wigner representation, where the field operators are replaced by c-number fields \cite{Gatti1997}. In this context, the vacuum fluctuations at the crystal entrance facet are simulated by Gaussian white noise. The propagation equations are then integrated with a pseudo-spectral method, splitting linear propagation, solved in Fourier space, from nonlinear propagation solved in direct space. The linear part of the evolution is evaluated by using empirical Sellmeier formulas, found in \cite{Lim2013} for the LiTaO$_3$ crystal. Quantum expectation values (in symmetric ordering) can be obtained by averaging over the initial conditions. A single stochastic realization can be taken as a semiclassical simulation of the system. Indeed,
despite the unavoidable limitations of the size of the numerical grid (typically $512 \times 256 \times 512$ points in the $x$, $y$, and $t$ directions), such simulations were able to closely reproduce, also quantitatively, the classical features of optical parametric generation in a HexNPC observed in \cite{Gallo2018}.
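To make the numerical scheme concrete, the sketch below shows one possible realization of the split-step idea in a strongly simplified setting: one transverse coordinate, a single frequency, and a classical undepleted plane-wave pump, so that the nonlinear step reduces to the local coupling of the signal and idler envelopes through the poling profile $\propto\cos(G_x x)$. The functions \texttt{kz\_s}, \texttt{kz\_i} (linear propagation phases) and \texttt{chi\_eff} (effective gain profile) are placeholders for the actual dispersion relations and nonlinear coefficients; the full 3D+1 code additionally propagates the pump and seeds the input fields with complex Gaussian noise (half a photon per mode) representing vacuum fluctuations in the Wigner representation.
\begin{verbatim}
import numpy as np

def split_step(A_s, A_i, x, q, kz_s, kz_i, chi_eff, dz, n_steps):
    """Symmetric split-step propagation of the signal/idler envelopes:
    linear half-steps in Fourier space (diffraction/dispersion phases
    kz_s(q), kz_i(q)), nonlinear full step in direct space (chi^(2)
    coupling modulated by the poling profile contained in chi_eff(x))."""
    lin_s = np.exp(1j * kz_s(q) * dz / 2.0)
    lin_i = np.exp(1j * kz_i(q) * dz / 2.0)
    for _ in range(n_steps):
        A_s = np.fft.ifft(lin_s * np.fft.fft(A_s))   # linear half step
        A_i = np.fft.ifft(lin_i * np.fft.fft(A_i))
        dA_s = dz * chi_eff(x) * np.conj(A_i)        # nonlinear (Euler) step
        dA_i = dz * chi_eff(x) * np.conj(A_s)
        A_s, A_i = A_s + dA_s, A_i + dA_i
        A_s = np.fft.ifft(lin_s * np.fft.fft(A_s))   # linear half step
        A_i = np.fft.ifft(lin_i * np.fft.fft(A_i))
    return A_s, A_i

# Example transverse grid: q = 2*pi*fftfreq(N, dx) is the wave-vector axis.
N, dx = 512, 1.0e-6
x = (np.arange(N) - N // 2) * dx
q = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
\end{verbatim}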
\section{Parametric limit and shared modes}
Analytic results can be derived in the {\em parametric limit} where the pump beam, undepleted by the down-conversion process, is approximated as a classical plane-wave of constant amplitude along the sample.
In this limit, by assuming that the input pump propagates in the $(x,z)$ plane at an angle $\theta_p \simeq \frac{q_p}{k_p} \ll 1 $ with the $z-$axis, we can set $\hat A_p (x,y,t,z) \to \alpha_p e^{i q_p x} $ where $\alpha_p$ is the classical field amplitude. Linear evolution equations for the signal and idler operators
are then obtained from Eqs.(\ref{NLs},\ref{NLi}), by letting
$\hat A_p (\w_p,z) \to (2\pi)^{3/2} \alpha_p \delta(\w_p -\w_{0p})$, where $\w_{0p} = (q_p, 0,0)$. Let us focus on a signal mode $\w_s$, for which
\begin{align}
\label{pareqs}
\frac{\partial}{\partial z} \hat{A}_s (\w_s, z ) &= g_0
\left[ \hat{A}_i^\dagger(\w_{0p} -\w_s -\Gx, z) e^{-i D_1 (\w_s ) z } +
\hat{A}_i^\dagger(\w_{0p} -\w_s +\Gx, z) e^{-i D_2 (\w_s ) z } \right]
\label{Ls}
\end{align}
where $g_0=\chi \alpha_p $ is taken real for definiteness, and
\begin{equation}
D_{1,2} (\w_s) \equiv \DD_{1,2} (\w_s, \w_p= \w_{0p}) = k_{sz}(q_{sx}, q_{sy},\Omega_s)+k_{iz}(q_p - q_{sx} \mp G_x, -q_{sy}, -\Omega_s )-k_{pz}(q_p,0,0)+G_z
\label{DPW}
\end{equation}
are the QPM functions in the parametric limit. This equation needs to be coupled to the evolution of the two idler modes
\begin{align}
\frac{\partial}{\partial z} \hat{A}_i (\w_{0p} -\w_s -\Gx, z ) &= g_0
\left[ \hat{A}_s^\dagger( \w_s ,z) e^{-i D_1 (\w_s ) z } +
\hat{A}_s^\dagger(\w_s +2\Gx, z) e^{-i D_2 (\w_s +2\Gx) z } \right]
\label{Li1} \\
\frac{\partial}{\partial z} \hat{A}_i (\w_{0p} -\w_s +\Gx, z ) &= g_0
\left[ \hat{A}_s^\dagger( \w_s -2\Gx, z) e^{-i D_1 (\w_s -2\Gx ) z } +
\hat{A}_s^\dagger(\w_s , z) e^{-i D_2 (\w_s) z } \right]
\label{Li2}
\end{align}
forming in principle an infinite chain of coupled equations. However, in most cases, only one of the two nonlinear processes is effective, because for a given signal mode either $\D_1 (\w_s)=0$ or $\D_2(\w_s)=0$. Then the usual pair of parametric equations coupling two signal-idler conjugate modes is obtained. Noticeably, there exist special points which are shared by both processes, satisfying
\begin{equation}
\D_1(\w_s)=\D_2(\w_s)=0 .
\label{SS}
\end{equation}
As can be easily verified from Eq.\eqref{DPW}, the first equality requires that $q_{sx}=q_p$, i.e. all the shared modes are characterized by the same x-component of the wave-vector as the pump. The y-component of the wave-vector then depends on the frequency, $q_{sy}= q_{sy} (\Om_s)$, as determined by QPM (the second equality in Eq. \eqref{SS}). Each shared signal mode is then coupled to two idler modes at
$q_{ix}= \mp G_x$, $q_{iy} =-q_{sy}$, $\Om_i= -\Om_s$.
\par
The dual situation occurs for a shared idler mode at $q_{ix}= q_p$, coupled to two signal modes at $q_{sx}=\mp G_x$. Notice that unless the pump satisfies the {\em resonance} condition $q_p = \pm G_x$, the shared signal and shared idler configurations are strictly incompatible, and the evolution of each shared mode with its coupled modes forms a closed set of three parametric equations that will be examined in the next Sec.\ref{Sec:3mode}. Conversely, when the pump is tilted at $q_p = \pm G_x$, two triplets of 3-modes, initially uncoupled, merge into a system of 4 coupled modes, whose equations will be studied in Sec. \ref{Sec:4mode}.
\begin{figure}[h]
\includegraphics[scale=0.55]{Fig2_QPM_03.pdf}
\caption{ QPM surfaces in the Fourier space (see text), calculated via the Sellmeier formulas in \cite{Lim2013}, for $q_p= -0.3 G_x$ (non resonant pump).
For better readability $\Om$ has been mapped to the wavelength $\lambda$.
(a) Projections along $q_y=0$ of the full 3D surfaces in (b). The bullets mark on-axis (a) and off-axis (b) examples of the 3 entangled modes, corresponding to a shared signal + 2 coupled idlers. The stars show the dual configuration with a shared idler+ two coupled signals. Parameters are those of the
non-degenerate HexNPC crystal with $\Lambda= 8.3 \mu$m used in \cite{Gallo2018}.
}
\label{fig2_QPM}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.5]{Fig3_QPM_Gx.pdf}
\caption{Same as Fig.\ref{fig2_QPM} but at superresonance, with the pump tilted at
$q_p= - G_x$. In this condition, all the shared modes superimpose to the lines of modes at $q_x =-G_x$, and two triplets of hot-spots merge into 4 coupled modes, as observed in \cite{Gallo2018}.}
\label{fig3_QPM}
\end{figure}
\par
Figs.\ref{fig2_QPM} and \ref{fig3_QPM} show two examples of the QPM surfaces in the Fourier space, away from superresonance and at superresonance, respectively. In the signal panels, the lower and upper curves show the surfaces $\D_1(\w_s)=0 $ and $\D_2(\w_s)=0$, respectively, with the shared modes lying at their intersections. For the conjugate idler field, the same surfaces are plotted as a function of $\w_i = -\w_s + \w_{0p}
\mp \Gx$.
The shared modes form continuous lines, which sit on the plane $q_x= q_p$; their coupled modes also form continuous lines sitting on the two planes $q_x = \mp G_x$.
When the pump is tilted at $q_p= -G_x$, all the shared modes superimpose to the lower coupled modes, and a sudden increase in the intensity at the location of these modes occurs, as demonstrated in \cite{Gallo2018}.
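The QPM surfaces of Figs.~\ref{fig2_QPM} and \ref{fig3_QPM}, and in particular the lines of shared modes, can be traced numerically from Eq.~\eqref{DPW}. In the sketch below the dispersion functions \texttt{k\_s}, \texttt{k\_i} and the pump wave number \texttt{k\_p} are user-supplied placeholders (built, e.g., from the Sellmeier relations of \cite{Lim2013}), and the wave numbers are assumed to depend on frequency only; on the plane $q_{sx}=q_p$ one has $\D_1=\D_2$, so a single root search in $q_{sy}$ locates the shared mode at each frequency.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def shared_mode_qy(Om, q_p, Gx, Gz, k_s, k_i, k_p, qy_max):
    """Solve D_1(w_s) = 0 of Eq. (DPW) for q_sy > 0 on the shared-mode
    plane q_sx = q_p (where D_1 = D_2).  k_s(Om), k_i(Om) are the signal
    and idler wave numbers n(Om)*(omega_0 + Om)/c, k_p is the pump wave
    number; returns None if no root is bracketed in (0, qy_max)."""
    def kz(k, qx, qy):                       # longitudinal wave vector
        return np.sqrt(k**2 - qx**2 - qy**2)
    k_pz = kz(k_p, q_p, 0.0)
    def D1(qy):
        return (kz(k_s(Om), q_p, qy)
                + kz(k_i(-Om), -Gx, -qy)     # idler coupled through G_01
                - k_pz + Gz)
    try:
        return brentq(D1, 1e-9, qy_max)      # qy_max must keep the sqrt real
    except ValueError:
        return None
\end{verbatim}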
\section{3-mode entanglement}
\label{Sec:3mode}
Let us focus on the shared signal configuration: we will show that away from superresonance this configuration realizes a 3-mode entangled state, which can be pictured as a parametric process of gain $g=g_0 \sqrt{2}$ followed by a beam splitter.
The shared idler configuration can be treated in a completely analogous way.
\par
Let us consider a signal mode $\w_s = (q_p,q_{sy},\Om_s)$ shared by both processes, i.e. such that $\D_1(\w_s)= \D_2(\w_s):=\D(\w_s) \approx 0$. It can be easily shown that sufficiently far from the condition $q_p =\mp G_x$, the modes $\w_s \mp 2\Gx$ appearing in Eqs. \eqref{Li1} and \eqref{Li2} are not phase-matched, so that Eqs. \eqref{pareqs} reduce to a closed set of three equations, coupling the shared signal with the two idlers at $q_x=\mp G_x$. Let us give short names to these modes
\begin{align}
&\azero := \As (q_p, q_{sy},\Om_s) \qquad &\text {shared signal at $q_p$}\\
&\auno := \Ai (-G_x, -q_{sy},-\Om_s ) \qquad &\text {coupled idler at $-G_x$}\\
&\adue := \Ai (+G_x, -q_{sy},-\Om_s ) \qquad &\text {coupled idler at $+G_x$}
\end{align}
Their evolution along the sample is described by
\begin{subequations}
\label{ssprop}
\begin{align}
&\frac{\partial}{\partial z} \azero (z) = g_0
\left[ \auno^\dagger (z)
+ \adue^\dagger (z) \right] e^{-i \D (\w_s)z }
\label{sprop}\\
&\frac{\partial}{\partial z} \auno (z)= g_0
\azero^\dagger (z) e^{-i \D ( \w_s) z }
\label{i1prop} \\
&\frac{\partial}{\partial z} \adue (z) = g_0 \azero^\dagger (z)e^{-i \D( \w_s) z }
\label{i2prop}
\end{align}
\end{subequations}
These equations can be easily solved by introducing the canonical transformation
$
\ah_{i\pm} = \frac{ \auno \pm \adue }{\sqrt{2}} \, $,
leading to:
\begin{subequations}
\label{ssprop2}
\begin{align}
&\frac{\partial}{\partial z} \azero (z ) = \sqrt{2}g_0
\ah_{i+}^\dagger (z)
e^{-i \D (\w_s) z }
\label{sprop1}\\
&\frac{\partial}{\partial z} \ah_{i+} (z) = \sqrt{2}g_0
\azero^\dagger (z) e^{-i \D ( \w_s) z }
\label{i+prop} \\
&\frac{\partial}{\partial z} \ah_{i-} (z) = 0
\label{i-prop}
\end{align}
\end{subequations}
The last equation simply
means that the difference mode $ \ah_{i-} $ is not affected by the parametric process: if at the crystal entrance face the idler field is in the vacuum state, then the difference between any two symmetrical idler modes at $q_{ix} = \mp G_x$ remains in the vacuum. However, each of the two idler output modes has a nonzero intensity, because it has been parametrically amplified: as we shall see, this implies a correlation between the two idlers coupled via the same shared signal.
The first two equations are the usual coupled
equations, describing parametric generation for a pair of signal-idler conjugate modes, with an enhanced parametric gain
\begin{equation}
g = \sqrt 2 \, g_0
\label{gsqrt2}
\end{equation}
As first shown in \cite{Liu2008} and then demonstrated in \cite{Levenius2012, Chen2014, Gallo2018}, in the high-gain regime $g_0 l_c \gg 1$,
this local gain enhancement gives rise to bright hot-spots in the parametric emission at the location of the three modes (see Fig.\ref{figHS} and \ref{figqxqy}) . There,
$ \langle \hat a_j ^\dagger \hat a_j \rangle \propto \sinh^2 (\sqrt 2 g_0 z) \simeq \left(e^{2g_0 z} \right)^{\sqrt 2} $, $(j=s0, i1,i2)$, and the increase of intensity with respect to the background 2-mode fluorescence follows a power law $ I_{3-mode} = (I_{background} )^{\sqrt 2}$ \cite{Gallo2018}.
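The $\sinh^2(\sqrt 2 g_0 z)$ growth and the invariance of the difference mode can be checked directly on the classical (c-number) counterpart of Eqs.~\eqref{ssprop}. The minimal sketch below, with perfect phase matching as the default and arbitrary complex seeds standing in for the input fluctuations, integrates the three coupled equations with a standard Runge--Kutta step.
\begin{verbatim}
import numpy as np

def propagate_triplet(a_s0, a_i1, a_i2, g0, z, n_steps, Delta=0.0):
    """Integrate the classical (c-number) analogue of Eqs. (ssprop)
    with a 4th-order Runge-Kutta step; Delta is the common mismatch."""
    dz = z / n_steps
    y = np.array([a_s0, a_i1, a_i2], dtype=complex)
    def rhs(zz, y):
        ph = np.exp(-1j * Delta * zz)
        a0, a1, a2 = y
        return np.array([g0 * (np.conj(a1) + np.conj(a2)) * ph,
                         g0 * np.conj(a0) * ph,
                         g0 * np.conj(a0) * ph])
    zz = 0.0
    for _ in range(n_steps):
        k1 = rhs(zz, y)
        k2 = rhs(zz + dz / 2, y + dz / 2 * k1)
        k3 = rhs(zz + dz / 2, y + dz / 2 * k2)
        k4 = rhs(zz + dz, y + dz * k3)
        y = y + dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        zz += dz
    return y   # at Delta = 0, a_i1 - a_i2 stays equal to its input value

# Example: vacuum-scale complex seeds (half a photon per mode on average).
rng = np.random.default_rng(0)
seeds = (rng.normal(size=3) + 1j * rng.normal(size=3)) / 2.0
a_out = propagate_triplet(*seeds, g0=1.0, z=5.0, n_steps=2000)
\end{verbatim}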
\begin{figure}[ht]
\includegraphics[scale=0.5]{Fig4_HS_qp0.pdf}
\caption{Numerical simulations of the $(\lambda, q_x)$ distributions of the signal and idler photon-numbers at the output of a HexNPC at $q_y=0$, and for $q_p=0$ (non resonant pump). Two triplets of hot spots corresponding to the shared signal and shared idler configurations are evident. The pump is a 10ps Gaussian pulse centered at $\lambda_p=527.5$nm, with a spatial Gaussian profile, of widths $600\, \mu$m and $200 \, \mu$m in the x and y directions. The crystal length is $l_c= 10$mm, and $g_0 l_c= 5$. }
\label{figHS}
\end{figure}
Notice that in the direct space the sum and difference modes have spatial modulations $\sim \cos (G_x x)$, and $\sim \sin (G_x x)$, respectively, in-phase and out-of-phase with the transverse modulation of the nonlinearity. Thus, the enhanced gain of the $\apiu$ mode can be interpreted as a spatial resonance with the nonlinear lattice.
\par
Coming to the quantum properties of the state, the explicit solution of the two coupled parametric equations \eqref{sprop1} and \eqref{i+prop} can be found within the standard input-output formalism (see e.g. \cite{Gatti2017} for a summary), as a Bogoliubov transformation linking the quantum operators at the crystal exit face $\hat a_j^{\mathrm{out}} = \hat a_j (l_c)$ to those at the input $\hat a_j^{\mathrm{in}} = \hat a_j (0)$, $(j=s0, i+)$:
\begin{subequations}
\label{inout1}
\begin{align}
& \asout = U_\gamma (\w_s) \, \asin + V_\gamma (\w_s) \, \apiu^{\dagger \, in}
\label{inouta}\\
& \apiuout = U_\gamma (\w_s) \, \apiuin + V_\gamma (\w_s) \, \as^{\dagger \, in}
\label{inoutb}
\end{align}
Here $\gamma= \sqrt{2}= g/g_0$ is a parameter giving the local gain enhancement in the hot-spots, and
the explicit expressions of $U_\gamma$ and $V_\gamma$ can be found in Appendix \ref{AppA}.
A well-known consequence of this transformation (see e.g. \cite{Knight2005a}) is that $\ah_{i+}$ is the twin beam of $\azero$, and their joint state is the {\em two-mode squeezed vacuum}. In the continuous-variable domain, the usual picture of twin-beam entanglement holds: the intensity fluctuations of $\ah_{i+}$ and $\azero$ are perfectly correlated, and there is an ``Einstein-Podolsky-Rosen'' \cite{Einstein1935,Reid1989} correlation between their field quadratures. Equations \eqref{inout1} have to be considered together with
\begin{equation}
\amenout =\amenin
\label{inout2}
\end{equation}
and the back-transformation
\begin{align}
&\hat a_{i1,2}= \frac{ \apiuout\pm\amenout }{\sqrt{2}}\, .
\label{back1}
\end{align}
\end{subequations}
Equations \eqref{inout1} are sufficient to calculate all the quantities of interest starting e.g. from a vacuum input. However, most of the properties of the 3-mode entanglement can be understood by noticing that the transformation \eqref{back1} can be formally described by the action of a 50:50 beam-splitter, mixing $\apiuout $, which is the twin beam of the shared signal $\asout$, with
$\amenout$, which trivially coincides with
$\amenin$.
The overall process can be schematically pictured (Fig \ref{fig_solution}) as
\begin{itemize}
\item The action of a PDC device, generating the entangled twin beams $\asout$, $\apiuout$ with parametric gain $g= \sqrt 2 g_0
$.
\item Followed by a 50:50 beam-splitter acting on the idler arm, that mixes $\apiuout $ with an independent mode $\amenin$.
\end{itemize}
All of this is implemented by the nonlinear photonic crystal, which can be viewed as a sort of monolithic 3-mode nonlinear interferometer.
\begin{figure}
{\scalebox{.5}{\includegraphics*{Fig5_solution.pdf}}}
\caption{Unfolding of the 3-mode entanglement generated by the HexNPC away from resonance, where a shared signal mode $\azero$ is coupled to two idlers $\auno$ and $\adue $.
The process is equivalent to a single PDC process generating maximally entangled twin beams, followed by the action of a 50:50 beam-splitter that mixes one of the entangled twins with an independent vacuum field (the difference between the two input idlers). As a result, tripartite entanglement is realized, with the two output idlers being entangled, but not perfectly, with the signal, and being correlated with each other. }
\label{fig_solution}
\end{figure}
\par
Clearly, each of the two split idlers is quantum entangled with the signal, even though the entanglement is not the maximal one of the twin beam, because of the random vacuum fluctuations of $\amenin$ entering the other port of the beam-splitter (assuming the difference mode is in a vacuum or coherent state). In addition, the coupling via the shared mode induces a certain degree of correlation between the split idler beams. Precisely:
\begin{itemize}
\item
If the signal is {\em undetected}, the correlation between $\aunoout$ and $\adueout$ is classical, and is equivalent to the correlation between the two outputs of a beam splitter illuminated by a thermal light beam. This follows straightforwardly from the well-known fact that the marginal statistics of each twin beam, when considered independently from the other, is thermal-like. Then, in the high-gain regime the two idlers may possess a very high degree of mutual coherence, just as the split thermal beams used for thermal ghost imaging \cite{Gatti2004, Gatti2006}, but their correlation is always shot-noise limited.
\item
If the signal is {\em detected}, the statistics of the coupled idlers, conditioned on detection (of some light observable) in the signal arm, can be nonclassical; in particular, they may show anticorrelated intensity fluctuations.
\end{itemize}
To clarify this last point, the problem can be reformulated in terms of the quantum state of the system.
Let us focus on a specific shared-signal mode $\w$, with its coupled idlers (to be precise, one should also perform a discretization of Fourier modes, but this is not essential here). Given the input-output relations \eqref{inout1} for the transformed modes $ \azero, \apiu, \ameno$, their output state can be written as:
\begin{align}
| \psi ^\prime_{out}\rangle& = \hat R_{s+} (\xi) \hat {\text{1}}_- | \psi _{in}\rangle
\end{align}
where the suffixes ``$s,+,-$'' refer to the shared signal, to the sum of the two idlers and to their difference, respectively, $\hat {\text{1}}_-$ is the identity operator, and
$\hat R_{s+} (\xi) = \exp {\left( \xi \azero^\dagger \hat a^\dagger_{i+} -\xi^\star \azero \apiu\right) } $ is the {\em two-mode squeeze operator}, acting on the shared signal and the ``$+$'' mode. The squeeze parameter $\xi $ can be related to the input-output coefficients in Eq.~\eqref{inout1} (Appendix \ref{AppA}), and for phase-matched modes reduces to $ \xi = \sqrt 2 g_0 l_c$.
Provided that the input state is vacuum, then
\begin{align}
| \psi^\prime _{out}\rangle& = \left[ \sum_{N=0}^{\infty} c_N |N\rangle_s \, |N\rangle_+\right] \otimes |0\rangle_-
\end{align}
where $|N\rangle $ denotes the Fock state with N photons, $|0\rangle$ is the vacuum state, and $c_N = \frac{ \left[ U_\gamma(\w) V_\gamma(\w) \right]^N}{ \left| U_\gamma (\w) \right|^{2N+1} }$, ($\gamma = \sqrt 2$). Incidentally, one can readily see that the reduced state in the idler arm is the thermal-like state:
$\hat \rho_{\pm} = {\mathrm Tr}_s \left\{ | \psi^\prime _{out}\rangle\, \langle \psi^\prime _{out} | \right\}
= \left[\sum_{N=0}^{\infty} | c_N |^2 |N\rangle_+ \phantom{i}_+\langle N| \right] \otimes |0\rangle_- \phantom{i}_- \langle 0 | $.\\
For the original modes ($\azero,\auno,\adue$), the state is then obtained as $| \psi _{out}\rangle = \hat B_{+-}\, | \psi^\prime _{out}\rangle$, where $\hat B_{+-} $ is the generator of the beam-splitter transformation $ \hat B_{+-} \hat a_{\pm} \hat B_{+-} ^\dagger =
\frac{ \auno \pm \adue } {\sqrt{2}} $.
By using then standard properties of Fock states,
\begin{align}
| \psi _{out}\rangle
&= \sum_{N=0}^{\infty} c_N |N\rangle_s \otimes \frac{1}{ \sqrt{N!}}
\left( \frac{ \auno^{\dagger} + \adue^{\dagger} } {\sqrt{2}} \right)^N |0\rangle_1 \, |0\rangle_2 \\
&= \sum_{N=0}^{\infty} c_N |N\rangle_s \left[
\sum_{k=0}^{N} \frac{1}{(\sqrt{2})^N} \sqrt{\frac{N!}{k! (N-k)!} }
|k \rangle_1 \, |N-k \rangle_2
\right]
\label{state_fin}
\end{align}
where the binomial expansion has been used to obtain the second line. The result \eqref{state_fin} is a sum of infinitely many terms: each of them is the tensor product of a state with $N$ photons in the signal mode and the superposition of all possible partitions of the $N$ twin idler photons into $k$ photons in mode 1 and $N-k$ photons in mode 2.
Assuming that {\em exactly $N$ photons} are detected in the signal arm, the state of the two idler modes conditioned on this measurement is
\begin{equation}
| \psi _{idler}/_\text{N signal photon}\rangle = \frac{ \phantom{}_s\langle N | \psi_{out} \rangle} { || \phantom{}_s\langle N | \psi_{out} \rangle|| } =
\sum_{k=0}^{N} \frac{1}{(\sqrt{2})^N} \sqrt{ \frac{N!}{k! (N-k)!} }
|k \rangle_1 \, |N-k \rangle_2
\end{equation}
This is the $N$-photon path-entangled state, corresponding to the superposition of all possible partitions of $N$ photons into $k$ photons in arm 1 and $N-k$ photons in arm 2, with probability
$
P_{k,N-k}= \frac{1}{2^N} \frac{N!}{k! (N-k)!}
$
following the binomial distribution
($\frac{1}{2}$ being the probability of taking either path for each photon). Such a state simply describes the random partition of the $N$ photons between the two idler arms, with equal probability of going one way or the other.
It can be easily shown that the state is antibunched, and exhibits the maximum level of anticorrelation in the photon number fluctuations allowed by quantum mechanics. The simplest example is for $N=1$,
\begin{equation}
\left | \psi_{idler} / (\text {1 signal photon} ) \right\rangle = \frac{1}{\sqrt{2}} \left( | 0 \rangle_1 \, | 1\rangle_2 + | 1 \rangle_1 \, | 0\rangle_2 \right)
\end{equation}
which is the path-entangled single-photon state explored in \cite{Jin2013}. Conversely, when the signal is not detected, the state of the two split idlers is non-entangled because it is obtained by a linear transformation acting on the classical thermal state (but can be highly correlated and have mutual phase coherence \cite{Gatti2004b,Gatti2006}).
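The conditional anticorrelation can also be illustrated with a short sketch of the binomial partition statistics implied by Eq.~\eqref{state_fin} (here $N$ is a hypothetical number of detected signal photons):
\begin{verbatim}
from math import comb
import numpy as np

N = 20                                   # detected signal photons (illustrative)
k = np.arange(N + 1)
P = np.array([comb(N, kk) for kk in k]) / 2.0**N   # binomial partition P_{k,N-k}

n1_mean = np.sum(k * P)                             # = N/2
var_n1 = np.sum((k - n1_mean)**2 * P)               # = N/4
# Since n2 = N - n1 exactly, the photon-number fluctuations of the two idler
# arms are perfectly anticorrelated: Cov(n1, n2) = -Var(n1).
cov12 = np.sum((k - n1_mean) * ((N - k) - n1_mean) * P)
print(n1_mean, var_n1, cov12)                       # -> 10.0, 5.0, -5.0
\end{verbatim}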
\begin{figure}[h]
\includegraphics[scale=0.65]{Fig6_qxy_qp0.pdf}
\caption{ (a) and (b) Numerical simulations of the $(q_x, q_y)$ photon-number distributions (integrated over a single pump shot), for $q_p=0$, showing the emergence of hot-spot lines for increasing propagation distance. Shared modes are located at $q_x=q_p=0$, while their coupled modes are at $q_x = \mp G_x$. (c) Sections along $q_y$, highlighting a spatial correlation among the intensity fluctuations of the three modes ($\azero, \auno,\adue$). Notice that the idler distributions have been mirrored ($q_y \to -q_y$). (d) Same as (c) for the shared idler configuration. Other parameters as in Fig. \ref{figHS}. }
\label{figqxqy}
\end{figure}
\par
We recall that, in a full view of the problem, this 3-mode entangled state can be generated for a continuous range of Fourier modes $\w_s$ satisfying the shared signal condition \eqref{SS}, as well as for their shared-idler counterparts. These modes span a broad band of frequencies as the $q_y$ coordinate is varied or, in practice, as the $y$-direction is scanned in the far field (see \cite{Gallo2018}, Supplementary Information).
\section{Golden Ratio entanglement}
\label{Sec:4mode}
In this section we focus on a particular pump propagation direction, corresponding to a transverse spatial resonance between the pump and the nonlinear lattice. We demonstrate in this condition the emergence of a peculiar 4-mode entangled state, dominated by the {\em Golden Ratio} of the segment.
\par
By tilting the pump direction away from the symmetry $z$-axis, the positions in the far-field plane of the shared modes move along $q_x$ jointly with $q_p$, while the positions of the coupled modes remain fixed at $q_x=\pm G_x$. Thus, when $q_p = \mp G_x $, in each signal/idler beam the shared modes come to overlap with a line of unshared modes to which they were originally uncoupled (Fig.~\ref{fig3_QPM}). As a result, the three hot-spot lines of the signal and the idler far-fields degenerate into two lines symmetrically positioned at $q_x=\pm G_x$ (Fig.\ref{figresonance}), with a sudden increase of their intensity \cite{Gallo2018}.
\begin{figure}[h]
\includegraphics[scale=0.65]{Fig7_qxy_Gx.pdf}
\caption{Pump tilted at resonance $q_p=-G_x$. (a) and (b) Photon-number distributions in the $(q_x, q_y)$ plane, at the output of a nondegenerate HexNPC, showing the emergence of 4 lines of hot-spots. Those at $q_x=-G_x$ are shared by both processes (modes $\hat b_s$ and $\hat b_i$), while those at $q_x = + G_x$ (modes $\hat c_s$ and $\hat c_i$) are unshared. The yellow arrows schematically show the coupling among the 4 modes. (c) Sections along $q_y$ of the signal hot spots. (d) Same for the idler, but mirrored ($q_y \to -q_y$) in order to highlight the correlation. Other parameters as in Fig.\ref{figHS}. }
\label{figresonance}
\end{figure}
\\
We can describe this phenomenon as a {\em spatial resonance} between the pump and the lattice, because
the pump has a transverse modulation (in its phase, not in its intensity) with the same spatial periodicity $\Lambda_x= \frac{2\pi}{G_x}$ as the nonlinear grating. Notice that in these conditions the same modulation characterizes all the hot-spots of the down-converted beams.
\\
Let us assume for definiteness $q_p = - G_x$, so that shared and coupled modes merge at $q_x=-G_x$. If we focus on a specific $y$-direction and on a pair of conjugate signal-idler frequencies (Fig.~\ref{figresonance}), two triplets of hot spots which were originally uncoupled coalesce
into 4 coupled modes. Let us give a name to these modes, as in the scheme of Fig. \ref{figinteractions}:
\begin{align}
&\bs := \As (-G_x, q_{sy},\Om_s) & \bi := \Ai (-G_x, -q_{sy},- \Om_s)\qquad &\text {shared modes at $-G_x$}\\
&\cs := \As (+G_x, q_{sy}, \Om_s ) & \ci := \Ai (+G_x, -q_{sy},- \Om_s)\qquad &\text {unshared modes at $+G_x$}
\end{align}
\begin{figure}[t]
\includegraphics[scale=0.45]{Fig8_interactions.pdf}
\caption{Scheme of the coupling between the 4 modes at resonance, leading to Eqs.\eqref{resonance_prop}, and of the 3 microscopic down-conversion processes. The shared modes $\hat b_j$ are populated by two processes out of the three. In the spontaneous regime, modes $\hat b_j$ are twice as intense as the unshared modes $\hat c_j$, while in the stimulated regime the ratio between the two intensities is $\phi^2$. }
\label{figinteractions}
\end{figure}
Denoting by $\w_s = (-G_x, q_{sy}, \Omega_s) $ the coordinate of the shared signal, the equations describing their evolution along the sample take the closed form
\begin{subequations}
\label{resonance_prop}
\begin{align}
&\frac{\partial}{\partial z} \bs (z) = g_0
\left[ \bi^\dagger (z)
+ \ci^\dagger(z) \right] e^{-i \D (\w_s) z }
\label{bsprop}\\
&\frac{\partial}{\partial z} \cs (z) = g_0\bi^\dagger(z) e^{-i \D (\w_s) z } \label{csprop} \\
&\frac{\partial}{\partial z} \bi (z ) = g_0
\left[ \bs^\dagger(z)
+ \cs^\dagger(z) \right] e^{-i \D (\w_s) z }
\label{biprop}\\
&\frac{\partial}{\partial z} \ci (z ) = g_0\bs^\dagger(z) e^{-i \D (\w_s) z }
\label{ciprop}
\end{align}
\end{subequations}
It is not difficult to find their solutions in the spontaneous regime, which corresponds to the limit $g_0 l_c \ll 1$. As we shall see in the following, the mean photon numbers in each of the 4 modes, at leading order in $g_0 z$, read
\begin{align}
\langle \bs^\dagger (z) \bs ( z)\rangle &= \langle \bi^\dagger ( z) \bi (z)\rangle =2 (g_0 z) ^2 {\rm sinc} ^2 \left( \frac{\D (\w_s) z}{2}\right) \delta (0)
\\
\langle \cs^\dagger (z) \cs (z)\rangle &= \langle \ci^\dagger ( z) \ci ( z)\rangle
=
(g_0 z)^2 {\rm sinc}^2 \left( \frac{\D (\w_s) z}{2}\right) \delta (0)
\label{int_low}
\end{align}
where the $\delta (0) $ is an artificial divergence coming from the plane-wave pump approximation, which can be easily
removed\footnote{The $\delta (0) $ will then be replaced by a term $ \propto \frac {1}{\Delta \Omega_p \, \Delta q_p^2} $, where $\Delta \Omega_p, \Delta q_p$ are the temporal and spatial frequency bandwidths of the pump.}. Thus, we see that in the spontaneous regime modes $\hat b_j$ are twice as populated as modes $\hat c_j$. This is natural because, out of the three microscopic processes allowed by energy-momentum conservation, two of them contribute to the population of e.g.\ the shared signal mode $\hat b_s$, and only one process contributes to the population of the unshared mode $\cs$, as shown in Fig. \ref{figinteractions}b.
\\
The situation changes in the stimulated regime of PDC. Indeed, the linear system \eqref{resonance_prop} can be solved for any value of $g_0$ by introducing
the transformation
\begin{align}
\begin{pmatrix}
\hat \delta_j \\[0.8em]
\hat \sigma_j
\end{pmatrix}
=
\begin{pmatrix}
\frac{ 1}{\sqrt{1+\phi^2}} \; & \frac{ -\phi}{\sqrt{1+\phi^2}} \\[0.8em]
\frac{ \phi}{\sqrt{1+\phi^2}} \; & \frac{ 1}{\sqrt{1+\phi^2}}
\end{pmatrix}
\begin{pmatrix}
\hat b_j \\[0.8em]
\hat c_j
\end{pmatrix}
\qquad (j=s,i)
\label{trphi}
\end{align}
where $\phi$ is the so-called {\em Golden Ratio} of the segment
\begin{equation}
\phi= \frac{1+ \sqrt{5} }{2} = 1.618\ldots
\end{equation}
This sort of magic irrational number \cite{Dunlap2017} is the solution to the problem of partitioning a segment into two parts $b$ and $c$, whose ratio is equal to the ratio between the total length of the segment and the longer part:
$
\frac{b}{c}= \frac{b+c}{b}= \phi
$.
The Golden Ratio can also be put in relation with the Fibonacci sequence (see e.g.\ \cite{Koshy2011ch2}, \cite{Dunlap2017}), and is the asymptotic value to which the ratio between two consecutive numbers of the sequence converges.
With a few steps that make use of the properties of the Golden Ratio, the original 4-mode equations can be decoupled into two independent systems
\begin{align}
\label{sigmaprop}
\begin{cases}
\frac{\partial}{\partial z} \hat \sigma_s (z ) = g_0 \phi
\hat \sigma_i^\dagger( z) e^{-i \D (\w_s)z }
\\[0.7em]
\frac{\partial}{\partial z} \hat \sigma_i ( z ) = g_0 \phi
\hat \sigma_s^\dagger (z) e^{-i \D(\w_s) z }
\end{cases}
\end{align}
and
\begin{align}
\label{deltaprop}
\begin{cases}
\frac{\partial}{\partial z} \hat \delta_s (z ) = -\frac{ g_0}{\phi }
\hat \delta_i^\dagger(z) e^{-i \D(\w_s) z }
\\[0.9em]
\frac{\partial}{\partial z} \hat \delta_i (z ) = -\frac{ g_0}{\phi }
\hat \delta_s^\dagger ( z) e^{-i \D (\w_s) z }
\end{cases}
\end{align}
Notice that the transformation \eqref{trphi} is canonical, i.e. such that $\hat \sigma_j$ and $\hat \delta_j$ are independent photon annihilation operators, and that the two linear systems \eqref{sigmaprop} and \eqref{deltaprop} are the standard equations describing parametric amplification in a pair of conjugate signal-idler modes. Thus, each of them generates a pair of independent twin beams, where
modes $\hat \sigma_s, \hat \sigma_i $ are characterized by an enhanced gain/squeeze parameter:
\begin{equation}
g_0 \to g_0 \times \phi = g_0 \times 1.618...,
\end{equation}
whereas the gain/squeeze parameter is reduced for modes $ \hat \delta_s, \hat \delta_i$:
\begin{equation}
g_0 \to - \frac{g_0 }{\phi}= - g_0 \times 0.618...
\end{equation}
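The decoupling can be verified with a few lines of linear algebra: the rotation of Eq.~\eqref{trphi} diagonalizes the coupling matrix of the phase-matched amplitude equations (the same matrix that reappears in the quadrature model of Eq.~\eqref{BC} below). A minimal sketch:
\begin{verbatim}
import numpy as np

phi = (1 + np.sqrt(5)) / 2                 # Golden Ratio
M = np.array([[1.0, 1.0],                  # coupling of the (b, c) amplitudes
              [1.0, 0.0]])                 # (phase matched, in units of g0)
R = np.array([[1.0, -phi],                 # "delta" row of Eq. (trphi)
              [phi,  1.0]]) / np.sqrt(1 + phi**2)   # "sigma" row

print(R @ M @ R.T)            # -> diag(-1/phi, phi): the two decoupled gains
print(np.linalg.eigvals(M))   # -> phi and -1/phi
\end{verbatim}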
Their explicit solutions can be written within the input-output formalism (Appendix \ref{AppA}) as:
\begin{subequations}
\label{inout2}
\begin{align}
& \sigmas^{\mathrm out} = U_{\phi} (\w) \, \sigmas^{\mathrm in} + V_{\phi} (\w) \, \sigmai^{\dagger \, in}
\label{inoutsigma}\\
& \deltas^{\mathrm out} = U_{-\frac{1}{\phi}} (\w) \, \deltas^{\mathrm in} + V_{-\frac{1}{\phi}} (\w) \, \delta_i^{\dagger \, in}
\label{inoutdelta}
\end{align}
and analogous transformations for the output idlers, obtained by exchanging the indices $ s \leftrightarrow i$.
These solutions have to be considered together with the inverse transformation
\begin{align}
\begin{pmatrix}
\hat b_j \\[0.8em]
\hat c_j
\end{pmatrix}
=
\begin{pmatrix}
\frac{ 1}{\sqrt{1+\phi^2}} \; & \frac{ \phi}{\sqrt{1+\phi^2}} \\[0.8em]
\frac{ -\phi}{\sqrt{1+\phi^2}} \; & \frac{ 1}{\sqrt{1+\phi^2}}
\end{pmatrix}
\begin{pmatrix}
\hat \delta_j \\[0.8em]
\hat \sigma_j
\end{pmatrix}
\qquad (j=s,i)
\label{trphi2}
\end{align}
\end{subequations}
We notice that the original modes $\hat b_j $, $\hat c_j$ can be obtained from modes $\hat \sigma_j$ and $\hat \delta_j$ (and vice versa) via the action of an unbalanced beam splitter with transmission and reflection coefficients
$
t= \frac{1}{\sqrt{1+ \phi^2}} $, $r = \frac{\phi}{\sqrt{1+ \phi^2}}$, as
$
\hat b_j = t \hat \delta_j + r \hat \sigma _j $,
$\hat c_j = - r \hat \delta_j + t\hat \sigma_j $.
Therefore, in the resonant case, the 4-mode entanglement can be considered equivalent to (see Fig. \ref{figgolden}):
\begin{itemize}
\item Two independent parametric processes, with different gain parameters $g_0 \phi$ and $-{g_0} /{\phi} $, generating two independent pairs of entangled twin beams in modes $\sigma_j$ and $\delta_j$;
\item Followed by an unbalanced beam-splitter that performs a ``golden partition'' of the two twins into the original modes.
\end{itemize}
We thus see that a resonant pump provides the maximal coherence between the two nonlinear processes that coexist in the HexNPC.
Indeed, away from this resonance, the 3-mode entanglement generated by the HexNPC is equivalent to a single nonlinear process (as depicted in Fig.\ref{fig_solution}) generating bipartite entanglement, followed by a beam-splitter acting on one of the two parties. Conversely, when the pump resonates with the lattice, the output of the device is equivalent to {\em two} independent nonlinear processes generating two bipartite entangled states, followed by a linear device that mixes them. \\
\begin{figure}[ht]
\includegraphics[scale=0.5]{Fig9_golden.pdf}
\caption{Unfolding of the 4-mode entanglement generated in the hexagonal nonlinear photonic crystal at pump resonance. This can be considered equivalent to two independent parametric processes with gains $g_0 \phi$ and $-g_0/\phi$, mixed on a beam-splitter which makes a golden partition of the two pairs of twin beams into 4 entangled modes.
}
\label{figgolden}
\end{figure}
This maximal coherence is also reflected into an enhancement of the intensity of hot-spots at resonance, experimentally observed in \cite{Gallo2018}. The mean photon numbers can be calculated from the solutions \eqref{inout2} as
\begin{align}
\langle \hat b_j^{\dagger \, out} \hat b_j^{ out} \rangle & = \frac{\phi^2}{1+\phi^2} \langle \hat \sigma_j^{\dagger \, out} \hat \sigma_j^{ out} \rangle + \frac{1}{1+\phi^2} \langle \hat \delta_j^{\dagger \, out} \hat \delta_j^{ out} \rangle \nonumber \\
&=\delta (0) \, \left[ \frac{\phi^2}{1+\phi^2} \left| V_\phi (\w_s) \right|^2 + \frac{1}{1+\phi^2} \left| V_{-\frac{1}{\phi}} (\w_s) \right|^2 \right] \label{intb} \\
& \to \delta (0) \frac{\phi^2}{1+\phi^2} \sinh^2 \left( g_0 \phi l_c\right) \qquad \qquad \text{for} \quad g_0 l_c \gg 1
\end{align}
\begin{align}
\langle \hat c_j^{\dagger \, out} \hat c_j^{ out} \rangle & = \frac{1}{1+\phi^2} \langle \hat \sigma_j^{\dagger \, out} \hat \sigma_j^{ out} \rangle + \frac{\phi^2}{1+\phi^2} \langle \hat \delta_j^{\dagger \, out} \hat \delta_j^{ out} \rangle \nonumber \\
&=\delta (0) \, \left[ \frac{1}{1+\phi^2} \left| V_\phi (\w_s) \right|^2 + \frac{\phi^2}{1+\phi^2} \left| V_{-\frac{1}{\phi}} (\w_s) \right|^2 \right] \label{intc} \\
& \to \delta (0) \frac{1}{1+\phi^2} \sinh^2 \left( g_0 \phi l_c \right) \qquad \qquad \text{for} \quad g_0 l_c \gg 1
\end{align}
where the last lines hold for phase-matched modes $\D=0$ and in the regime of stimulated PDC, i.e.\ for $g_0 l_c \gg 1$, where the mode intensities grow exponentially with $g_0 z$. In such conditions, the contribution of modes $\delta $ with smaller gain becomes negligible, and, as observed in \cite{Gallo2018}, the local enhancement of intensity in the hot-spots is ruled by the Golden Ratio: $I_{4-modes} \simeq e^{2g_0 \phi l_c} = ( I_{\rm background})^\phi$. The Golden Ratio also rules the ratio between the amplitudes of the shared and unshared modes because
$ \langle \hat b_j^{\dagger \, out} \hat b_j^{ out} \rangle/ \langle \hat c_j^{\dagger \, out} \hat c_j^{ out} \rangle \to \phi^2$. \\
Conversely, in the spontaneous PDC limit, by using the asymptotic behaviour of the functions $V_\gamma$ provided in Appendix \ref{AppA}, the expressions \eqref{intb}, \eqref{intc} reduce to Eqs.~\eqref{int_low}, where the Golden Ratio does not appear and the ratio of the intensities of the shared and unshared modes is just 2.
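The crossover of the shared-to-unshared intensity ratio from 2 to $\phi^2$ can be reproduced numerically; the sketch below assumes the standard phase-matched form $|V_\gamma|^2 = \sinh^2(\gamma g_0 l_c)$ for the Bogoliubov coefficients:
\begin{verbatim}
import numpy as np

phi = (1 + np.sqrt(5)) / 2

def intensity_ratio(g):
    """<b^dag b>/<c^dag c> of Eqs. (intb)-(intc) for phase-matched modes,
    assuming |V_gamma|^2 = sinh^2(gamma * g) with g = g0*lc."""
    Vp2 = np.sinh(phi * g) ** 2        # |V_phi|^2
    Vm2 = np.sinh(g / phi) ** 2        # |V_{-1/phi}|^2
    nb = (phi**2 * Vp2 + Vm2) / (1 + phi**2)
    nc = (Vp2 + phi**2 * Vm2) / (1 + phi**2)
    return nb / nc

print(intensity_ratio(1e-3))   # -> ~2      (spontaneous regime)
print(intensity_ratio(8.0))    # -> ~phi^2  (stimulated regime, ~2.618)
\end{verbatim}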
In this connection, it is also interesting to consider the quantum state generated by the HexNPC in the same limit. This can be written in general as the state generated from the vacuum by the action of the two-mode squeeze operators acting on modes $\sigma_j$ and $\delta_j$, followed by the action of the beam-splitter. As this is a rather cumbersome formula, we limit ourselves to its expression in the spontaneous regime ($ g_0 l_c\ll 1$):
\begin{align}
| \psi_{out}\rangle& = \hat B_{\delta \sigma} \hat R_{\delta_s \delta_i} \left(\xi_{-1/\phi} \right)
\hat R_{\sigma_s \sigma_i} \left(\xi_{\phi} \right)| \psi _{in}\rangle \nonumber \\
& \stackrel{ g_0 l_c \ll 1} {\longrightarrow} \,
|0\rangle + g_0 l_c\, {\rm sinc} \left(\frac{\D(\w_s) l_c}{2} \right)\, e^{-i \frac{\D (\w_s) l_c}{2} }
\left[ \hat b^\dagger_s \hat c^\dagger_i
+ \hat b^\dagger_s \hat b^\dagger_i
+ \hat c^\dagger_s \hat b^\dagger_i \right] \;
|0\rangle
\end{align}
In this equation the contributions of the three microscopic processes allowed by energy-momentum conservation depicted in Fig.\ref{figinteractions} are evident. Since they take place with the same probability, in the spontaneous regime
where the elementary down-conversion processes occur independently, the ratio of the two intensities is 2, and the Golden Ratio does not appear. This number appears only asymptotically, as the chain of stimulated processes become longer and longer.
\par
One can recognize here an analogy with the Fibonacci sequence, defined by the recurrence relation $F_{n+1}= F_{n} + F_{n-1}$, with initial values $F_1=F_2=1$. As is well known \cite{Dunlap2017}, the ratio between two consecutive numbers $\frac{F_{n+1}}{ F_{n}} $ goes asymptotically to the Golden Ratio $\phi$ for $n\to \infty$. Indeed, by considering the $\D=0$ case, introducing proper Hermitian quadrature operators and the normalized distance $\bar z= g_0 z$, the dynamical evolution of the amplitudes of modes $b$ and $c$ in Eqs. \eqref{resonance_prop}
can be reformulated as the simpler model
\footnote{Precisely, these equations describe the evolution of the most amplified field quadratures in the resonant case
$\hat B = \frac{1} {\sqrt 2} \left (\hat b_s + \hat b_i + \hat b_s^\dagger + \hat b_i^\dagger\right) $, $\hat C= \frac{1} {\sqrt 2} \left (\hat c_s + \hat c_i + \hat c_s^\dagger + \hat c_i^\dagger\right) $.}
\begin{equation}
\label{BC}
\begin{array} {rl}
\frac{d B }{d\bar z} &= B+C \\[0.3em]
\frac{d C}{d\bar z} &= B
\end{array}
\end{equation}
whose eigenvalues $\phi$ and $-\frac{1}{\phi}$ are the solutions of the quadratic characteristic equation $\lambda^2=\lambda + 1$. It is possible to write a discrete version of the continuous evolution in Eq.~\eqref{BC}, introducing
$B_n := B(n \Delta z)$, $C_n := C (n \Delta z)$, where $\Delta z = z/n$ is a discrete step. It can then be shown that at the $n$-th layer the amplitudes obey the Fibonacci-like recursive relation
$B_{n+1} = (2+ \Delta z) B_n + (\Delta z^2 -1- \Delta z) B_{n-1}$.
A similarity then holds with the famous description of the evolution of the population of rabbits introduced in the 13th century by L. Fibonacci \cite{Koshy2011ch2}
\begin{equation}
\label{FN}
\begin{array} {rl}
F_ {n+1} &= F_{n} + N_{n} \\
N_{n+1} &= F_{n}
\end{array}
\end{equation}
where $F_n$ and $N_n$ are the number of adult and newly born rabbits at month $n$, respectively.
With initial conditions $N_1=0, F_1=1$, the solution is the Fibonacci sequence $F_n= \frac{\phi^n -
\left(-{1}/ {\phi} \right)^n }{\sqrt 5}$. Clearly the analogy is limited, also because the quadratures in Eqs.~\eqref{BC} are not integer-valued, since they are initiated by vacuum fluctuations, but the two models share the same eigenvalues (the Golden Ratio and $-1/\phi$), exhibiting an exponential and a geometrical growth rate in the case of Eq.~\eqref{BC} and Eq.~\eqref{FN}, respectively.
Interestingly, the asymptotic behaviour of the ratio between the two variables is also preserved:
$\lim_{\bar z \to \infty} \frac{B(\bar z) }{C(\bar z)} = \lim_{ n \to \infty} \frac{F_n}{N_n}= \phi$.
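This limit can be checked by integrating Eq.~\eqref{BC} numerically and running the rabbit-population recursion side by side (a minimal sketch; the seed values are arbitrary, since the physical dynamics is seeded by vacuum fluctuations):
\begin{verbatim}
import numpy as np

# Euler integration of Eq. (BC): dB/dz = B + C, dC/dz = B  (arbitrary seed)
dz, steps = 1e-3, 20000                  # z_max = 20 in units of 1/g0
B, C = 1.0, 0.0
for _ in range(steps):
    B, C = B + dz * (B + C), C + dz * B
print(B / C)                             # -> phi ~ 1.618

# Fibonacci counterpart, Eq. (FN): F_{n+1} = F_n + N_n, N_{n+1} = F_n
F, Nr = 1, 0
for _ in range(30):
    F, Nr = F + Nr, F
print(F / Nr)                            # -> phi as well
\end{verbatim}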
\begin{figure}[ht]
\includegraphics[scale=0.6]{Fig10_qylambda.pdf}
\caption{Spectra of the hot-spots at superresonance $(q_p=-G_x)$ as the $q_y$ coordinate is varied. Numerical simulations of
the photon-number distributions in the $(q_y, \lambda)$ plane at $q_x= +G_x$ (a,b) and $q_x=-G_x$ (b,c). Other parameters as in Fig.\ref{figHS}.
}
\label{figspectraqy}
\end{figure}
\par
As already remarked for the 3-mode case, the Golden Ratio entanglement is also widely tunable. Indeed, the 4-mode entangled state is generated for a continuous range of shared signal modes $\w_s$, spanning a broad band of frequencies as the $q_y$ coordinate is scanned along the hot-spot lines. This is evidenced by Fig. \ref{figspectraqy}, which shows numerical simulations of the spectra of the hot-spots as the $q_y$ coordinate is varied.
\section{Conclusions}
In this work, we studied the quantum state of twin photons and twin beams generated by parametric down-conversion in a hexagonally poled photonic crystal. The latter is characterized by the coexistence of two nonlinear processes, and represents an interesting monolithic source of path-entangled photonic states. In particular, we presented a novel point of view on the tripartite state that can be produced, and we described the peculiar 4-mode entangled state generated by proper angular tuning of the input pump, in a regime we refer to as superresonance.
\par
Away from superresonance, our description generalizes and partially confirms the analysis performed by other authors [5] in the low-gain regime. Indeed, our analysis of the 3-mode entanglement, not limited to the two-photon state, shows that it is necessary to perform a conditional measurement on the shared mode to produce quantum entanglement in the other two coupled modes. In the absence of such a conditional measurement, the state of the coupled modes is a non-entangled state, exhibiting the classical correlation/coherence properties of split thermal beams. Therefore, in spite of the interesting compactness of the device, we can see in this case a somewhat trivial analogy with the output state from a beam-splitter. \\
In contrast, we have shown that the angular degrees of freedom of the pump can be used for further engineering of the output state. In particular, at superresonance, the 4-mode state thereby generated is definitely non-trivial, as the spatial resonance of the pump with the lattice establishes a strong coherence between the two nonlinear processes that coexist in the hexagonal photonic crystal. The output of the device can be seen as two bipartite entangled states followed by a linear device that performs a golden partition. The curious appearance of the Golden Ratio in this physical system can be explained by recognizing the existence of an analogy between the dynamical evolution of the mode amplitudes and the Fibonacci sequence.
\par
Finally, the original aspect of our theoretical model has allowed us to highlight the spectral and spatial tunability of the HexNPC, as the entanglement described is generated over a large frequency bandwidth and over a continuous range of Fourier modes, rendering this compact monolithic device appealing for different integrated quantum optics and optical parametric generation experiments that may require versatility in the generation or detection schemes.
\section{Introduction}
Multi-armed bandit (MAB) algorithms have received considerable attention and have been studied quite intensely in machine learning in the recent past. The great interest in this topic is hardly surprising, given that the MAB setting is not only theoretically challenging but also practically useful, as can be seen from its use in a wide range of applications. For example, MAB algorithms turned out to offer effective solutions for problems in medical treatment design \citep{LaRo85,KuPr14}, online advertisement \citep{ChKuRaUp08}, and recommendation systems \citep{KoSaSt13}, just to mention a few.
The multi-armed bandit problem, or bandit problem for short, is one of the simplest instances of the sequential decision making problem, in which a \emph{learner} (also called decision maker or agent) needs to select \emph{options} from a given set of alternatives repeatedly in an online manner---referring to the metaphor of the eponymous gambling machine in casinos, these options are also associated with ``arms'' that can be ``pulled''. More specifically, the agent selects one option at a time and observes a numerical (and typically noisy) \emph{reward} signal providing information on the quality of that option. The goal of the learner is to optimize an evaluation criterion such as the \emph{error rate} (the expected percentage of playing a suboptimal arm) or the \emph{cumulative regret} (the expected difference between the sum of the rewards actually obtained and the sum of rewards that could have been obtained by playing the best arm in each round). To achieve the desired goal, the online learner has to cope with the famous exploration/exploitation dilemma \citep{AuCeFi02,CeLu06,LaRo85}: It has to find a reasonable compromise between playing the arms that produced high rewards in the past (exploitation) and trying other, possibly even better arms the (expected) reward of which is not precisely known so far (exploration).
The assumption of a numerical reward signal is a potential limitation of the MAB setting. In fact, there are many practical applications in which it is hard or even impossible to quantify the quality of an option on a numerical scale. More generally, the lack of precise feedback or exact supervision has been observed in other branches of machine learning, too, and has led to the emergence of fields such as \emph{weakly supervised learning} and \emph{preference learning} \citep{FuHu11}. In the latter, feedback is typically represented in a purely qualitative way, namely in terms of pairwise comparisons or rankings. Feedback of this kind can be useful in online learning, too, as has been shown in online information retrieval \citep{Ho13,RaKuJo08}.
As another example, think of crowd-sourcing services like the Amazon Mechanical Turk, where simple questions such as pairwise comparisons between decision alternatives are asked to a group of annotators. The task is to approximate an underlying target ranking on the basis of these pairwise comparisons, which are possibly noisy and partially noncoherent \citep{ChBeCoHo13}. Another application worth mentioning is the ranking of XBox gamers based on their pairwise online duels; the ranking system of XBox is called TrueSkill$\texttrademark$ \citep{GuSaGrBu12}.
Extending the multi-armed bandit setting to the case of preference-based feedback, i.e., the case in which the online learner is allowed to compare arms in a qualitative way, is therefore a promising idea. And indeed, extensions of that kind have received increasing attention in the recent years. The aim of this paper is to provide a survey of the state of the art in the field of preference-based multi-armed bandits (PB-MAB). After recalling the basic setting of the problem in Section 2, we provide an overview of methods that have been proposed to tackle PB-MAB problems in Sections 3 and 4. Our taxonomy is mainly based on the assumptions made by these methods about the data-generating process or, more specifically, the properties of the pairwise comparisons between arms. Our survey is focused on the \emph{stochastic} MAB setup, in which feedback is generated according to an underlying (unknown but stationary) probabilistic process; we do not cover the case of an \emph{adversarial} data-generating processes, except briefly in Section \ref{sec:extensions}, although this setting has recently received a lot of attention, too \citep{AiHaTa14,CeLu12,CeLu06}.
\section{The Preference-based Multi-Armed Bandit Problem}
The stochastic MAB problem with pairwise comparisons as actions has been studied under the notion of ``dueling bandits'' in several papers \citep{YuJo09,YuBrKlJo12}. Although this term has been introduced for a concrete setting with specific modeling assumptions \citep{SuZoHoYu18}, it is meanwhile used more broadly for variants of that setting, too. Throughout this paper, we shall use the terms ``dueling bandits'' and ``preference-based bandits'' synonymously.
Consider a fixed set of arms (options) ${\cal A} = \{a_{1}, \dots , a_{K} \}$. As actions, the learning algorithm (or simply the learner or agent) can perform a comparison between any pair of arms $a_i$ and $a_j$, i.e., the action space can be identified with the set of index pairs $(i,j)$ such that $1 \leq i \leq j \leq K$. We assume the feedback observable by the learner to be generated by an underlying (unknown) probabilistic process characterized by a \emph{preference relation}
$$
{\bf Q} = \left[ q_{i,j} \right]_{1\le i,j \le K} \in [0,1]^{K \times K} \enspace .
$$
More specifically, for each pair of actions $(a_{i}, a_{j})$, this relation specifies the probability
\begin{align}\label{eq:pairwisex}
\prob \left( a_{i} \succ a_{j} \right) = q_{i,j}
\end{align}
of observing a preference for $a_{i}$ in a direct comparison with $a_{j}$. Thus, each $q_{i,j}$ specifies a Bernoulli distribution. These distributions are assumed to be stationary and independent, both across actions and iterations. Thus, whenever the learner takes action $(i,j)$, the outcome is distributed according to (\ref{eq:pairwisex}), regardless of the outcomes in previous iterations.
The relation ${\bf Q}$ is reciprocal in the sense that $q_{i,j} = 1 - q_{j,i}$ for all $i , j \in [K] = \{1, \ldots , K \}$. We note that, instead of only observing strict preferences, one may also allow a comparison to result in a \emph{tie} or an \emph{indifference}. In that case, the outcome is a trinomial instead of a binomial event. Since this generalization makes the problem technically more complicated, though without changing it conceptually, we shall not consider it further. \citet{BuSzWeChHu13,BuSzHu14} handle indifference by giving ``half a point'' to both arms, which, in expectation, is equivalent to deciding the winner by tossing a coin. Thus, the problem is essentially reduced to the case of binomial outcomes.
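To make the feedback model concrete, a single comparison can be simulated as a Bernoulli draw from the preference relation; the following sketch uses a hypothetical $3 \times 3$ matrix ${\bf Q}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reciprocal preference matrix for K = 3 arms: Q[i, j] = P(a_i beats a_j)
Q = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])

def duel(i, j):
    """Simulate one pairwise comparison; returns True if a_i wins against a_j."""
    return rng.random() < Q[i, j]

wins = sum(duel(0, 1) for _ in range(1000))
print(wins / 1000)   # -> approximately 0.7 = Q[0, 1], i.e. q_{1,2} in 1-based notation
\end{verbatim}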
We say arm $a_{i}$ beats arm $a_{j}$ if $q_{i,j}>1/2$, i.e., if the probability of winning in a pairwise comparison is larger for $a_{i}$ than it is for $a_{j}$. Clearly, the closer $q_{i,j}$ is to $1/2$, the harder it becomes to distinguish the arms $a_{i}$ and $a_{j}$ based on a finite sample set from $\prob \left( a_{i} \succ a_{j} \right)$. In the worst case, when $q_{i,j} = 1/2$, one cannot decide which arm is better based on a finite number of pairwise comparisons. Therefore,
$$
\Delta_{i,j} = q_{i,j} - \frac{1}{2}
$$
appears to be a reasonable quantity to characterize the hardness of a PB-MAB task (whatever goal the learner wants to achieve). Note that $\Delta_{i,j}$ can also be negative (unlike the value-based setting, in which the quantity used for characterizing the complexity of a multi-armed bandit task is always positive and depends on the gap between the means of the best arm and the suboptimal arms).
\subsection{Pairwise probability estimation}\label{sec:ppe}
The decision making process iterates in discrete steps, either through a finite time horizon $\mathbb{T} = [T]=\{1, \ldots , T \}$ or an infinite horizon $\mathbb{T} = \mathbb{N}$.
As mentioned above, the learner is allowed to compare two actions in each iteration $t \in \mathbb{T}$. Thus, in each iteration $t$, it selects an index pair $1 \leq i(t) \leq j(t) \leq K$ and observes
$$
\left\{
\begin{array}{cl}
a_{i(t)} \succ a_{j(t)} & \text{ with probability } q_{i(t),j(t)} \\[2mm]
a_{j(t)} \succ a_{i(t)} & \text{ with probability } q_{j(t),i(t)}
\end{array}
\right.
$$
The pairwise probabilities $q_{i,j}$ can be estimated on the basis of finite sample sets. Consider the set of time steps among the first $t$ iterations, in which the learner decides to compare arms $a_{i}$ and $a_{j}$, and denote the size of this set by $n_{i,j}^{t}$. Moreover, denoting by $w_{i,j}^{t}$ and $w_{j,i}^{t}$ the frequency of ``wins'' of $a_i$ and $a_j$, respectively,
the proportion of wins of $a_i$ against $a_j$ up to iteration $t$ is then given by
\[
\widehat{q}^{\, t}_{i,j}= \frac{w_{i,j}^{t}}{n^{t}_{i,j}} = \frac{w_{i,j}^{t}}{w_{i,j}^{t} + w_{j,i}^{t}}
\enspace .
\]
Since our samples are assumed to be independent and identically distributed (i.i.d.), $\widehat{q}^{\, t}_{i,j}$ is a plausible estimate of the pairwise probability (\ref{eq:pairwisex}). Yet, this estimate might be biased, since $n_{i,j}^{t}$ depends on the choice of the learner, which in turn depends on the data; therefore, $n_{i,j}^{t}$ itself is a random quantity. A high probability confidence interval for $q_{i,j}$ can be obtained based on the Hoeffding bound \citep{Ho63}, which is commonly used in the bandit literature. Although the specific computation of the confidence intervals may differ from case to case, they are generally of the form $[\widehat{q}^{\, t}_{i,j} \pm c_{i,j}^{t}]$. Accordingly, if $\widehat{q}^{\, t}_{i,j} - c_{i,j}^{t} > 1/2$, arm $a_{i}$ beats arm $a_{j}$ with high probability; analogously, $a_{i}$ is beaten by arm $a_{j}$ with high probability, if $\widehat{q}^{\, t}_{i,j} + c_{i,j}^{t} < 1/2$.
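A minimal sketch of this estimation step is given below; the confidence radius $\sqrt{\log(2/\delta)/(2n)}$ is one common Hoeffding-type choice, while concrete algorithms use variants of it (e.g., time-dependent radii):
\begin{verbatim}
import numpy as np

def estimate_with_ci(wins_ij, wins_ji, delta=0.05):
    """Empirical estimate of q_ij with a Hoeffding-type confidence radius."""
    n = wins_ij + wins_ji
    q_hat = wins_ij / n
    c = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return q_hat, c

q_hat, c = estimate_with_ci(wins_ij=70, wins_ji=30)
if q_hat - c > 0.5:
    print("a_i beats a_j with high probability")
elif q_hat + c < 0.5:
    print("a_i is beaten by a_j with high probability")
else:
    print("not yet distinguishable")
\end{verbatim}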
\subsection{Evaluation criteria}
The goal of the online learner is usually stated as minimizing some kind of cumulative regret. Alternatively, in the ``pure exploration'' scenario, the goal is to identify the best arm (or the top-$k$ arms, or a ranking of all arms) both quickly and reliably. As an important difference between these two types of targets, note that the regret of a comparison of arms depends on the concrete arms being chosen, whereas the sample complexity penalizes each comparison equally.
It is also worth mentioning that the notion of optimality of an arm is far less obvious in the preference-based setting than it is in the value-based (numerical) setting. In the latter, the optimal arm is simply the one with the highest expected reward---more generally, the expected reward induces a natural total order on the set of actions ${\cal A}$. In the preference-based case, the connection between the pairwise preferences ${\bf Q}$ and the order induced by this relation on ${\cal A}$ is less trivial; in particular, the latter may contain preferential cycles. We shall postpone a more detailed discussion of these issues to subsequent sections, and for the time being simply assume the existence of an arm $a_{i^{*}}$ that is considered optimal.
\subsection{Cumulative regret}
In a preference-based setting, defining a reasonable notion of regret is not as straightforward as in the value-based setting, where the sub-optimality of an action can be expressed easily on a numerical scale. In particular, since the learner selects two arms to be compared in an iteration, the sub-optimality of both of these arms should be taken into account. A commonly used definition of regret is the following \citep{YuJo09,YuJo11,UrClFeNa13,ZoWhMuDe14}: Suppose the learner selects arms $a_{i(t)}$ and $a_{j(t)}$ in time step $t$. Then, the \emph{cumulative regret} incurred by the learner $A$ up to time $T$ is
\begin{equation} \label{eq:regret}
R^{T}_{A} = \sum_{t=1}^{T} r^{t} = \sum_{t=1}^{T} \frac{ \Delta_{i^{*},i(t)}+ \Delta_{i^{*},j(t)} }{2}\enspace .
\end{equation}
This regret takes into account the optimality of both arms, meaning that the learner has to select two nearly optimal arms to incur small regret. Note that this regret is zero if the optimal arm $a_{i^{*}}$ is compared to itself, i.e., if the learner effectively abstains from gathering further information and instead fully commits to the arm $a_{i^{*}}$.
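For illustration, the cumulative regret of Eq.~(\ref{eq:regret}) can be computed from a history of chosen pairs as follows (the preference matrix and the history below are hypothetical):
\begin{verbatim}
Q = [[0.5, 0.7, 0.8],      # illustrative preferences; arm 0 is the best arm
     [0.3, 0.5, 0.6],
     [0.2, 0.4, 0.5]]

def cumulative_regret(Q, i_star, history):
    """Cumulative dueling-bandit regret of Eq. (eq:regret);
    history is the list of index pairs (i_t, j_t) chosen by the learner."""
    regret = 0.0
    for i, j in history:
        delta_i = Q[i_star][i] - 0.5
        delta_j = Q[i_star][j] - 0.5
        regret += 0.5 * (delta_i + delta_j)
    return regret

print(cumulative_regret(Q, 0, [(1, 2), (0, 1), (0, 0)]))
# -> 0.35; the last pair (best arm vs. itself) contributes zero regret
\end{verbatim}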
\subsection{Regret bounds}
In a theoretical analysis of a MAB algorithm, one is typically interested in providing a bound on the (cumulative) regret produced by that algorithm.
We are going to distinguish two types of regret bound. The first one is the \emph{expected regret bound}, which is of the form
\begin{equation}\label{eq:expreg}
\mathbf{E} \left[ R^{T} \right] \le B( {\bf Q}, K, T ) \enspace ,
\end{equation}
where $\mathbf{E} \left[ \cdot \right]$ is the expected value operator, $R^{T}$ is the regret accumulated till time step $T$, and $B( \cdot )$ is a positive real-valued function with the following arguments: the pairwise probabilities ${\bf Q}$, the number of arms $K$, and the iteration number $T$. This function may additionally depend on parameters of the learner, however, we neglect this dependence here. The expectation is taken with respect to the stochastic nature of the data-generating process and the (possible) internal randomization of the online learner. The regret bound (\ref{eq:expreg}) is technically akin to the expected regret bound of value-based multi-armed bandit algorithms like the one that is calculated for UCB \citep{AuCeFi02}, although the parameters used for characterizing the complexity of the learning task are different.
The bound in (\ref{eq:expreg}) does not inform about how the regret achieved by the learner is concentrated around its expectation. Therefore, we consider a second type of regret bound, namely one that holds with high probability. This bound can be written in the form
$$
\prob \Big( \, R^{T} < B( {\bf Q}, K, T, \delta ) \, \Big) \ge 1-\delta \enspace .
$$
For simplicity, we also say that the regret achieved by the online learner is ${\cal O} (B( {\bf Q}, K, T, \delta ))$ with high probability.
\subsection{Sample complexity}
The sample complexity analysis is considered in a ``pure exploration'' setup where the learner, in each iteration, must either select a pair of arms to be compared or terminate and return its recommendation. The \emph{sample complexity of the learner} is then the number of pairwise comparisons it queries prior to termination, and the corresponding bound is denoted $B( {\bf Q}, K, \delta )$. Here, $1-\delta$ specifies a lower bound on the probability that the learner terminates and returns the correct solution\footnote{Here, we consider the pure exploration setup with fixed confidence. Alternatively, one can fix the horizon and control the error of the recommendation \citep{AuBuMu10,BuMuSt11,BuWaVi13}.}.
Note that only the number of the pairwise comparisons is taken into account, which means that pairwise comparisons are equally penalized, independently of the suboptimality of the arms chosen.
The recommendation of the learner depends on the task to be solved. In the simplest case, it consists of the best arm. However, as will be discussed in Section~\ref{sec:noass}, more complex predictions are conceivable, such as a complete ranking of all arms.
The above sample complexity bound is valid most of the time (more than $1-\delta$ of the runs). However, in case an error occurs and the correct recommendation is not found by the algorithm, the bound does not guarantee anything. Therefore, it cannot be directly linked to the expected sample complexity. In order to define the expected sample complexity, the learning algorithm needs to terminate in a finite number of steps with probability $1$. Under this condition, running a learning algorithm on the same bandit instance results in a finite sample complexity, which is a random number distributed according to an unknown law $\prob : \N \longrightarrow [0,1]$. The distribution $\prob$ has finite support, since the algorithm terminates in a finite number of steps in every case. By definition, the \emph{expected sample complexity} of the learning algorithm is the finite mean of the distribution $\prob$. Moreover, the \emph{worst case sample complexity} is the upper bound of the support of $\prob$.
\subsection{PAC algorithms}
In many applications, one is willing to gain efficiency at the cost of optimality: The algorithm is allowed to return a solution that is only approximately optimal, though it is supposed to do so more quickly. For standard bandit problems, for example, this could mean returning an arm the expected reward of which deviates by at most some $\epsilon$ from the expected reward of the optimal arm.
In the preference-based setup, approximation errors are less straightforward to define.
Nevertheless, the sample complexity can also be analyzed in a PAC framework as originally introduced by \citet{EvMaMa02} for value-based MABs. A preference-based MAB algorithm is called an $(\epsilon, \delta)$-PAC preference-based MAB algorithm with \emph{sample complexity} $B( {\bf Q}, K, \epsilon, \delta )$ if it terminates and returns an $\epsilon$-optimal arm with probability at least $1-\delta$, and the number of comparisons taken by the algorithm is at most $B( {\bf Q}, K, \epsilon, \delta )$. If the problem is to select a single arm, $\epsilon$-optimality could mean, for example, that $\Delta_{i^{*},j}<\epsilon$, although other notions of approximation can be used as well.
\subsection{Explore-then-exploit algorithms}
Most PB-MAB algorithms for optimizing regret are based on the idea of decoupling the exploration and exploitation phases: First, the algorithm tries to identify the best arm with high probability, and then fully commits to the arm found to be best for the rest of the time (i.e., repeatedly compares this arm to itself). Algorithms implementing this principle are called ``explore-then-exploit'' algorithms.
Such algorithms need to know the time horizon $T$ in advance, since being aware of the horizon, the learning algorithm is able to control the regret incurred in case it fails to identify the best arm. More specifically, assume a so-called exploratory algorithm $A$ to be given, which is able to identify the best arm $a_{i^{*}}$ with probability at least $1-\delta$. By setting $\delta$ to $1/T$, algorithm $A$ guarantees that $\prob \big( \, \widehat{i}^{*} = i^{*} \, \big) > 1-1/T$ if it terminates before iteration step $T$, where $\widehat{i}^{*}$ is the arm index returned by $A$. Thus, if $A$ terminates and commits a mistake, i.e., $\widehat{i}^{*} \neq i^{*}$, then the expected regret incurred in the exploitation phase is $1/T \cdot {\cal O} ( T ) = {\cal O}(1)$, since the per-round regret is upper-bounded by 1 and the exploitation phase consists of at most $T$ steps. Consequently, the expected regret of an explore-then-exploit algorithm is
\[
\mathbf{E} [ R^{T} ] \le (1-1/T) \, \mathbf{E} [ R^{T}_{A} ] + (1/T) \, {\cal O}( T ) = {\cal O} \left( \mathbf{E} [ R^{T}_{A} ] + 1 \right) \enspace .
\]
Note that the inequality is trivially valid if $A$ does not terminate before $T$.
The same argument as given above for the case of expected regret also holds for high probability regret bounds in the explore-then-exploit framework. In summary, the performance of an explore-then-exploit algorithm is bounded by the performance of the exploration algorithm. More importantly, since the per round regret is at most 1, the sample complexity of the exploration algorithm readily upper-bounds the expected regret; this fact was pointed out by \citet{YuJo11} and \citet{YuBrKlJo12}. Therefore, like in the case of value-based MABs, explore-then-exploit algorithms somehow blur the distinction between the ``pure exploration'' and regret optimization setting.
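As a schematic illustration, the explore-then-exploit principle can be expressed as a simple wrapper around a generic exploration routine (the interface of \texttt{explorer} below is hypothetical):
\begin{verbatim}
def explore_then_exploit(explorer, T, duel):
    """Schematic explore-then-exploit wrapper.
    `explorer(delta, duel)` is assumed to return the index of the best arm
    with probability at least 1 - delta, together with the number of
    comparisons it used; `duel(i, j)` performs one comparison."""
    best, n_used = explorer(delta=1.0 / T, duel=duel)
    for _ in range(max(0, T - n_used)):
        duel(best, best)        # exploitation: compare the best arm to itself
    return best
\end{verbatim}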
However, in a recent study \citep{ZoWhMuDe14}, a novel preference-based MAB algorithm is proposed that optimizes the cumulative regret without decoupling the exploration from the exploitation phase (for more details see Section \ref{subsec:regaxiom}). Without decoupling, there is no need to know the horizon in advance, which allows one to provide a \emph{horizonless} regret bound that holds for any time step $T$.
The regret defined in (\ref{eq:regret}) reflects the average quality of the decision made by the learner. Obviously, one can define a more strict or less strict regret by taking the maximum or minimum, respectively, instead of the average. Formally, the strong and weak regret in time step $t$ are defined, respectively, as
\begin{align*}
r_{\max}^{t} & = \max
\left\{ \Delta_{i^{*}, i(t)} , \Delta_{i^{*},j(t)} \right\} \enspace , \\
r_{\min}^{t} & = \min
\left\{ \Delta_{i^{*},i(t)}, \Delta_{i^{*},j(t)} \right\} \enspace .
\end{align*}
From a theoretical point of view, when the number of pairwise comparisons is bounded by a known horizon, these regret definitions do not lead to a fundamentally different problem. Roughly speaking, this is because most of the methods designed for optimizing regret seek to identify the best arm with high probability in the exploration phase, based on as few samples as possible.
\section{Learning from Coherent Pairwise Comparisons}\label{sec:assonp}
As explained in Section \ref{sec:ppe}, learning in the PB-MAB setting essentially means estimating the pairwise preference matrix ${\bf Q}$, i.e., the pairwise probabilities $q_{i,j}$. The target of the agent's prediction, however, is not the relation ${\bf Q}$ itself, but the best arm or, more generally, a ranking $\succ$ of all arms ${\cal A}$. Consequently, the least assumption to be made is a connection between ${\bf Q}$ and $\succ$, so that information about the former is indicative of the latter. Or, stated differently, the pairwise probabilities $q_{i,j}$ should be sufficiently coherent, so as to allow the learner to approximate and eventually identify the target (at least in the limit when the sample size grows to infinity). For example, if the target is a ranking $\succ$ on ${\cal A}$, then the $q_{i,j}$ should be somehow coherent with that ranking, e.g., in the sense that $a_i \succ a_j$ implies $q_{i,j} > 1/2$.
While this is only an example of a consistency property that might be required, different consistency or regularity assumptions on the pairwise probabilities ${\bf Q}$ have been proposed in the literature---needless to say, these assumptions have a major impact on how PB-MAB problems are tackled algorithmically. In this section and the next one, we provide an overview of approaches to such problems, categorized according to these assumptions (see Figure \ref{fig:stpbmab}).
\tikzset{
basic/.style = {draw, drop shadow, font=\sffamily, rectangle},
root/.style = {basic, text width=6cm, rounded corners=1.5pt, thin, align=center,
fill=gray!30},
level 2/.style = {basic, rounded corners=4pt, thin,align=center, fill=blue!30,
text width=9em},
level 22/.style = {basic, rounded corners=5pt, thin,align=center, fill=red!30,
text width=11.5em},
level 3/.style = {basic, thin, align=left, fill=white!30, text width=12em}
}
\begin{figure}
\centering
\resizebox {.83\textwidth} {!} {
\begin{tikzpicture}[
level 1/.style={sibling distance=85mm},
edge from parent/.style={->,draw},
>=latex]
\node[root] {\normalsize Preference-based (stochastic) MAB}
child {node[level 2] (c1) {\small Coherent ${\bf Q}$\\ Section \ref{sec:assonp} }}
child {node[level 2] (c2) {\small Arbitrary ${\bf Q}$ \\ Section \ref{sec:noass} }};
\begin{scope}[every node/.style={level 3}]
\node [level 22, below of = c1, xshift=20pt] (c101) {\footnotesize Axiomatic approaches};
\node [below of = c101, xshift=60pt] (c11) {\scriptsize Interleaved filtering \citep{YuBrKlJo12}};
\node [below of = c11, yshift=-5pt] (c12) {\scriptsize Beat-the-mean \citep{YuJo11}};
\node [below of = c12, yshift=-5pt] (c13) {\scriptsize Knockout tournaments \citep{FaOrPiSu17}};
\node [below of = c13, yshift=-5pt] (c14) {\scriptsize Sequential elimination \citep{FaHaOrPiRa17}};
\node [below of = c14, yshift=-5pt] (c15) {\scriptsize Single elimination tournament \citep{MoSuEl17}};
\node [below of = c15, yshift=-5pt] (c16) {\scriptsize Successive elimination \citep{JaKaDeNo15}};
\node [below of = c16] (c17) {\scriptsize RUCB \citep{ZoWhMuDe14}};
\node [below of = c17] (c18) {\scriptsize Relative confidence sampling \citep{ZoWhDeMu14}};
\node [below of = c18] (c19) {\scriptsize MergeRUCB \citep{ZoWhDe15}};
\node [below of = c19] (c110) {\scriptsize Relative minimum empirical divergence \citep{KoHoKaNa15}};
\node [below of = c110, yshift=-5pt] (c111) {\scriptsize Verification based solution \citep{Ka16}};
\node [below of = c111, yshift=-5pt] (c112) {\scriptsize Winner stays \citep{ChFr17}};
\node [level 22, below of = c112, xshift=-40pt] (b101) {\footnotesize Utility functions};
\node [below of = b101, xshift=60pt] (b11) {\scriptsize Gradient descent \citep{YuJo09}};
\node [below of = b11, yshift=-5pt] (b12) {\scriptsize Multiple-Point gradient descent \citep{ZhKi16}};
\node [below of = b12, yshift=-10pt] (b13) {\scriptsize Stochastic mirror descent \citep{Ku17}};
\node [below of = b13, yshift=-10pt] (b14) {\scriptsize Reduction to value-based MAB \citep{AiKaJo14}};
\node [below of = b14, yshift=-5pt] (b15) {\scriptsize Multisort \citep{MaGr17}};
\node [level 22, right of = b101, xshift=180pt, yshift=10pt] (d101) {\footnotesize Statistical models};
\node [below of = d101, xshift=60pt] (d11) {\scriptsize Mallows \citep{BuHuSz14}};
\node [below of = d11, yshift=-10pt] (d12) {\scriptsize Plackett-Luce \citep{SzBuPaHu15}};
\node [below of = c2, xshift=40pt, yshift=-10pt] (c21) {\scriptsize Preference-based racing \citep{BuSzWeChHu13}};
\node [below of = c21, yshift=-10pt] (c22) {\scriptsize PAC rank elicitation \citep{BuSzHu14}};
\node [below of = c22] (c23) {\scriptsize Voting bandits \citep{UrClFeNa13}};
\node [below of = c23] (c24) {\scriptsize Copeland confidence bound \citep{ZoKaWhDe15}};
\node [below of = c24, yshift=-10pt] (c25) {\scriptsize Copeland winners relative minimum empirical divergence \citep{koHoNa16}};
\node [below of = c25, yshift=-10pt] (c26) {\scriptsize Double Thompson sampling \citep{WuLi16}};
\node [below of = c26, yshift=-10pt] (c27) {\scriptsize Sparse Sparring \citep{BaKaScZo16}};
\node [below of = c27, yshift=-10pt] (c28) {\scriptsize Generic tournament solutions \citep{RaRaAg16}};
\node [below of = c28] (c29) {\scriptsize Active ranking \citep{HeShRaWa16}};
\end{scope}
\foreach \value in {1,...,12}
\draw[->] (c101.195) |- (c1\value.west);
\foreach \value in {1,...,5}
\draw[->] (b101.195) |- (b1\value.west);
\foreach \value in {1,2}
\draw[->] (d101.195) |- (d1\value.west);
\foreach \value in {1,...,9}
\draw[->] (c2.195) |- (c2\value.west);
\draw[->] (c1.195) |- (c101.west);
\draw[->] (c1.195) |- (b101.west);
\draw[->] (c1.195) |- (d101.west);
\end{tikzpicture}
}
\caption{A taxonomy of (stochastic) PB-MAB algorithms.}
\label{fig:stpbmab}
\end{figure}
\subsection{Axiomatic approaches}\label{subsec:regaxiom}
We begin this section by collecting various assumptions on pairwise preferences that can be found in the literature. As will be seen later on, by exploiting the (preference) structure imposed by these assumptions, the development of efficient algorithms will become possible.
\begin{enumerate}
\item[--] \emph{Total order over arms}: There is a total order $\succ$ on ${\cal A}$, such that $a_{i} \succ a_{j}$ implies $\Delta_{i,j}>0$.
\item[--] \emph{Strong stochastic transitivity}: For any triplet of arms such that $a_{i} \succ a_{j} \succ a_{k}$, the pairwise probabilities satisfy $\Delta_{i,k} \ge \max \left( \Delta_{i,j} , \Delta_{j,k} \right) $.
\item[--] \emph{Relaxed stochastic transitivity}: There is a $\gamma \ge 1$ such that, for any triplet of arms such that $a_{i} \succ a_{j} \succ a_{k}$, the pairwise probabilities satisfy $ \gamma \, \Delta_{i,k} \ge \max \left\{ \Delta_{i,j}, \Delta_{j,k} \right\} $.
\item[--] \emph{Stochastic triangle inequality}: For any triplet of arms such that $a_{i} \succ a_{j} \succ a_{k}$, the pairwise probabilities satisfy $\Delta_{i,k} \le \Delta_{i,j} + \Delta_{j,k} $.
\item[--] \emph{Existence of a Condorcet winner}: An arm $a_{i}$ is considered a Condorcet winner if $\Delta_{i,j} > 0 $ for all $j\in [K]$, i.e., if it beats all other arms in a pairwise comparison.
\item[--] \emph{Specific structural constraints on the preference matrix:} We will see an example of such constraint in subsection \ref{subsubsec:se}.
\end{enumerate}
Note that the first assumption of a total order with arms separated by positive margins ensures the existence of a unique best arm, which in this case coincides with the Condorcet winner. Also note that strong stochastic transitivity is recovered from the relaxed stochastic transitivity for $\gamma = 1$.
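To make these assumptions concrete, the following is a small Python sketch (our own illustration, not taken from any of the cited papers) that checks a given preference matrix for a Condorcet winner and for the transitivity properties listed above; inducing the reference total order from the row averages of ${\bf Q}$ is an assumption made only for this example.
\begin{verbatim}
import numpy as np

def condorcet_winner(Q):
    """Index of the Condorcet winner of preference matrix Q, or None.

    Q[i, j] = q_{i,j} is the probability that a_i beats a_j; a Condorcet
    winner must satisfy q_{i,j} > 1/2 for all j != i.
    """
    K = Q.shape[0]
    for i in range(K):
        if all(Q[i, j] > 0.5 for j in range(K) if j != i):
            return i
    return None

def check_transitivity(Q, gamma=1.0):
    """Check gamma-relaxed stochastic transitivity and the triangle inequality.

    The reference total order is induced here by the row averages of Q
    (an assumption of this sketch); Delta[i, j] = q_{i,j} - 1/2.
    Returns the pair (relaxed_transitivity_holds, triangle_inequality_holds).
    """
    K = Q.shape[0]
    Delta = Q - 0.5
    order = np.argsort(-Q.mean(axis=1))          # strongest arm first
    sst, tri = True, True
    for x in range(K):
        for y in range(x + 1, K):
            for z in range(y + 1, K):
                i, j, k = order[x], order[y], order[z]   # a_i > a_j > a_k
                if gamma * Delta[i, k] < max(Delta[i, j], Delta[j, k]):
                    sst = False
                if Delta[i, k] > Delta[i, j] + Delta[j, k]:
                    tri = False
    return sst, tri
\end{verbatim}
With $\gamma = 1$, the first check corresponds to strong stochastic transitivity.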
Prior to describing the methods, we summarize in Table \ref{tab:regaxiom} the assumptions, targets, and goals they consider.
\begin{table}
\begin{center}
\begin{tabular}{|p{3cm}|p{4cm}|p{3.25cm}|p{3.25cm}|}
\hline
Algorithm & Assumptions & Target & Goal of learner \\
\hline
Interleaved filtering \citep{YuBrKlJo12} & Total order over arms, strong stochastic transitivity and stochastic triangle inequality & Best arm & Expected regret minimization \\
\hline
Beat the mean \citep{YuJo11} & Total order over arms, relaxed stochastic transitivity and stochastic triangle inequality both relative to the best arm & Best arm & High probability regret and sample complexity minimization in the PAC setting \\
\hline
Knockout tournaments \citep{FaOrPiSu17} & Strong stochastic transitivity and stochastic triangle inequality & Best arm and best ranking in the PAC setting & Sample complexity minimization \\
\hline
Sequential elimination \citep{FaHaOrPiRa17} & Strong stochastic transitivity & Best arm and best ranking in the PAC setting & Sample complexity minimization \\
\hline
Single elimination tournament \citep{MoSuEl17} & Total order over arms & Top-$k$ ranking & Sample complexity minimization \\
\hline
Successive elimination \citep{JaKaDeNo15} & A specific type of structural constraints on the preference matrix & Borda winner with high probability & Sample complexity minimization \\
\hline
Preference-based UCB \citep{ZoWhMuDe14} & Existence of a Condorcet winner & Condorcet winner & Expected and high probability regret minimization \\
\hline
Relative confidence sampling \citep{ZoWhDeMu14} & Existence of a Condorcet winner & Condorcet winner & Regret minimization \\
\hline
MergeRUCB \citep{ZoWhDe15} & Existence of a Condorcet winner & Condorcet winner & High probability regret minimization \\
\hline
Relative minimum empirical divergence \citep{KoHoKaNa15} & Existence of a Condorcet winner & Condorcet winner & Regret minimization \\
\hline
Verification based solution \citep{Ka16} & Existence of a Condorcet winner & Condorcet winner & Sample complexity minimization \\
\hline
Winner stays \citep{ChFr17} & Existence of a Condorcet winner or a total order over the arms & Condorcet winner & Expected weak and strong regret minimization \\
\hline
\end{tabular}
\caption{Axiomatic approaches for the dueling bandits problem.}\label{tab:regaxiom}
\end{center}
\end{table}
\subsubsection{Interleaved filtering}
Assuming a total order over arms, strong stochastic transitivity, and the stochastic triangle inequality, \citet{YuBrKlJo12} propose an explore-then-exploit algorithm. The exploration step consists of a simple sequential elimination strategy, called \Algo{Interleaved Filtering} (\Algo{IF}), which identifies the best arm with probability at least $1-\delta$. The \Algo{IF} algorithm successively selects an arm which is compared to other arms in a one-versus-all manner. More specifically, the currently selected arm $a_{i}$ is compared to the rest of the active (not yet eliminated) arms. If an arm $a_{j}$ beats $a_{i}$, that is, $\widehat{q}_{i,j} + c_{i,j} < 1/2$, then $a_{i}$ is eliminated, and $a_{j}$ is compared to the rest of the (active) arms, again in a one-versus-all manner. In addition, a simple pruning technique can be applied: if $\widehat{q}_{i,j} - c_{i,j} > 1/2$ for an arm $a_{j}$ at any time, then $a_{j}$ can be eliminated, as it cannot be the best arm anymore (with high probability). After the exploration step, the exploitation step simply takes the best arm $a_{\widehat{i}^{*}}$ found by $\Algo{IF}$ and repeatedly compares $a_{\widehat{i}^{*}}$ to itself.
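The following is a minimal Python sketch (our own, not the authors' pseudocode) of such an elimination loop; the comparison oracle \texttt{duel(i, j)}, the constants in the confidence radii, and all names are assumptions made for illustration.
\begin{verbatim}
import numpy as np

def interleaved_filtering_sketch(duel, K, delta, rng=None):
    """Simplified IF-style exploration: returns a candidate best arm.

    duel(i, j) -> 1 if arm i wins the comparison against arm j, else 0.
    Confidence radii follow a generic sqrt(log(.)/t) form; the constants
    are illustrative, not those of the original analysis.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = set(range(K))
    incumbent = int(rng.choice(list(active)))
    active.discard(incumbent)
    wins = np.zeros(K)      # wins of the incumbent against each arm
    plays = np.zeros(K)     # comparisons of the incumbent against each arm

    while active:
        for j in list(active):
            wins[j] += duel(incumbent, j)
            plays[j] += 1
        q_hat = np.divide(wins, plays, out=np.full(K, 0.5), where=plays > 0)
        c = np.sqrt(np.log(4 * K * np.maximum(plays, 1) / delta)
                    / np.maximum(plays, 1))

        # prune arms that the incumbent beats with high confidence
        for j in list(active):
            if q_hat[j] - c[j] > 0.5:
                active.discard(j)

        # if some arm beats the incumbent with high confidence, it takes over
        for j in list(active):
            if q_hat[j] + c[j] < 0.5:
                incumbent = j
                active.discard(j)
                wins[:] = 0.0
                plays[:] = 0.0
                break
    return incumbent
\end{verbatim}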
The authors analyze the expected regret achieved by \Algo{IF}. Assuming the horizon $T$ to be finite and known in advance, they show that \Algo{IF} incurs an expected regret
\[
\mathbf{E} \left[ R^{T}_{\Algo{IF}} \right] = {\cal O} \left( \frac{K}{\min_{j \neq i^{*}} \Delta_{i^{*},j} } \log T \right) \enspace .
\]
\subsubsection{Beat the mean}
In a subsequent work, \citet{YuJo11} relax the strong stochastic transitivity property and only require relaxed stochastic transitivity for the pairwise probabilities. Further, both the relaxed stochastic transitivity and the stochastic triangle inequality are required to hold only relative to the best arm, i.e., only for triplets $a_{i} \succ a_{j} \succ a_{k}$ in which $a_{i}$ is the best arm $a_{i^{*}}$.
With these relaxed properties, \citet{YuJo11} propose a preference-based online learning algorithm called \Algo{Beat-The-Mean} (\Algo{BTM}), which is an elimination strategy resembling \Algo{IF}. However, while \Algo{IF} compares a single arm to the rest of the (active) arms in a one-versus-all manner, \Algo{BTM} selects an arm with the fewest comparisons so far and pairs it with a randomly chosen arm from the set of active arms (using the uniform distribution). Based on the outcomes of the pairwise comparisons, a score $b_{i}$ is assigned to each active arm $a_{i}$, which is an empirical estimate of the probability that $a_{i}$ is winning in a pairwise comparison (not taking into account which arm it was compared to). The idea is that comparing an arm $a_{i}$ to the ``mean'' arm, which beats half of the arms, is equivalent to comparing $a_{i}$ to an arm randomly selected from the active set. One can deduce a confidence interval for the $b_{i}$ scores, which allows for deciding whether the scores for two arms are significantly different. An arm is then eliminated as soon as there is another arm with a significantly higher score.
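A hedged sketch of this elimination mechanism is given below (again our own simplification with illustrative confidence radii, not the original \Algo{BTM} pseudocode); \texttt{duel(i, j)} is an assumed comparison oracle returning 1 if $a_i$ wins.
\begin{verbatim}
import numpy as np

def beat_the_mean_sketch(duel, K, n_rounds, delta, rng=None):
    """Illustrative BTM-style elimination loop.

    Each active arm keeps a score b_i: the empirical fraction of duels it
    has won against opponents drawn uniformly from the active set.  An arm
    is removed once its upper confidence bound falls below the lower bound
    of the current leader.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = list(range(K))
    wins = np.zeros(K)
    plays = np.zeros(K)
    b = np.full(K, 0.5)

    for _ in range(n_rounds):
        if len(active) == 1:
            break
        i = min(active, key=lambda a: plays[a])      # fewest comparisons so far
        j = int(rng.choice([a for a in active if a != i]))
        wins[i] += duel(i, j)
        plays[i] += 1

        b = np.divide(wins, plays, out=np.full(K, 0.5), where=plays > 0)
        c = np.sqrt(np.log(2 * K * n_rounds / delta) / (2 * np.maximum(plays, 1)))
        leader = max(active, key=lambda a: b[a] - c[a])
        # drop every arm whose upper bound falls below the leader's lower bound
        active = [a for a in active if b[a] + c[a] >= b[leader] - c[leader]]
    return max(active, key=lambda a: b[a])
\end{verbatim}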
In the regret analysis of \Algo{BTM}, a high probability bound is provided for a finite time horizon. More precisely, the regret accumulated by \Algo{BTM} is
$$
{\cal O} \left( \frac{\gamma^{7} K}{\min_{j \neq i^{*}} \Delta_{i^{*},j} } \log T \right)
$$
with high probability. This result is stronger than the one proven for \Algo{IF}, in which only the expected regret is upper bounded. Moreover, this high probability regret bound matches the expected regret bound in the case $\gamma = 1$ (strong stochastic transitivity). The authors also analyze the \Algo{BTM} algorithm in a PAC setting, and find that \Algo{BTM} is an $(\epsilon, \delta)$-PAC preference-based learner (by setting its input parameters appropriately) with a sample complexity of ${\cal O} ( \frac{\gamma^{6} K}{\epsilon^{2}} \log \frac{KN}{\delta} )$, where $N$ is the smallest positive integer satisfying $N=\left\lceil \frac{36 \gamma^{6}}{ \epsilon^{2} } \log \frac{K^{3} N }{\delta} \right\rceil $. One may simplify this bound by noting that $N<N'=\left\lceil \frac{864 \gamma^{6}}{\epsilon^{2}} \log \frac{K}{\delta}\right\rceil$. Therefore, the sample complexity is
$$
{\cal O} \left( \frac{\gamma^{6} K}{\epsilon^{2}} \log \frac{K\gamma \log (K / \delta)}{\delta \epsilon} \right) \enspace .
$$
\subsubsection{Knockout tournaments}
\citet{FaOrPiSu17} assume strong stochastic transitivity and stochastic triangle inequality and consider the goals of finding the best arm as well as the best ranking in the PAC setting. More specifically, for any given $ \epsilon,\delta > 0 $, the algorithm for the best arm must output an arm $ i $ such that, with probability at least $ 1-\delta $, for all $ j \neq i, \Delta_{i,j} \geq -\epsilon $, and the algorithm for the best ranking must output, with probability at least $ 1-\delta $, a ranking ${\bf r}$ such that $ \Delta_{i,j} \geq -\epsilon $ whenever $ r_i > r_j $.
For the best arm problem they propose the \Algo{KNOCKOUT} algorithm, which is based on Knockout tournaments and has a sample complexity of $ {\cal O} \left( \frac{K}{\epsilon^2} (1+\log \frac{1}{\delta}) \right)$. The \Algo{KNOCKOUT} algorithm takes as input the set of all arms and runs in rounds, in which arms are randomly paired. At the end of each round, the size of the input is halved while ensuring that the maximum arm in the output set is comparable to the maximum arm in the input set, i.e., the $ \Delta $-value corresponding to them is no more than $ \epsilon $.
For the best ranking problem, the authors propose the \Algo{Binary-Search-Ranking} algorithm, which uses $ {\cal O} \left( \frac{K\log K(\log \log K)^3}{\epsilon^3} \right) $ comparisons for $ \delta = \frac{1}{K} $. This algorithm comprises three major steps. In the first step, it randomly selects a set of arms of size $ \frac{K}{(\log K)^x} $, called anchors, and ranks them using a procedure called \Algo{Rank-x}, an $ (\epsilon,\delta) $-PAC ranking algorithm which, for any $ x>1 $, uses $ {\cal O} \left( \frac{K}{\epsilon^2}(\log K)^x\log \frac{K}{\delta} \right) $ comparisons; this creates $ \frac{K}{(\log K)^x}-1 $ bins, one between each pair of successive anchors. In the second step, a random walk on a binary search tree is used to assign each arm to a bin. Finally, in the last step, the output ranking is produced: arms that are close to an anchor are ranked close to it, while arms that are distant from two successive anchors are ranked using \Algo{Rank-x}.
\subsubsection{Sequential elimination}
Seeking the same goals as \citet{FaOrPiSu17}, but this time only requiring the property of strong stochastic transitivity, \citet{FaHaOrPiRa17} present the \Algo{Seq-Eliminate} algorithm for the best arm problem, which uses $ {\cal O}(K) $ comparisons. The algorithm adopts a sequential elimination technique to find the best arm. More specifically, it starts by selecting a running arm at random, and keeps comparing it to another random competing arm until the better of the two is determined. It then proceeds to the next competition stage, after setting the winner from the last stage as the new running arm and eliminating the loser. The algorithm stops as soon as only a single arm remains.
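A minimal sketch of this sequential-elimination idea (our own simplification, not the exact \Algo{Seq-Eliminate} procedure) could look as follows; the per-pair stopping rule uses a generic Hoeffding-style confidence radius, and \texttt{duel(i, j)} is an assumed comparison oracle.
\begin{verbatim}
import numpy as np

def seq_eliminate_sketch(duel, K, delta, rng=None):
    """Return a candidate best arm by sequential elimination.

    A running arm is compared against a random competitor until one of the
    two is confidently better; the winner becomes the new running arm.
    """
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(rng.permutation(K))
    running = remaining.pop()
    while remaining:
        challenger = remaining.pop()
        wins, t = 0, 0
        while True:
            t += 1
            wins += duel(running, challenger)
            q_hat = wins / t
            c = np.sqrt(np.log(4 * K * t * t / delta) / (2 * t))
            if q_hat - c > 0.5:          # running arm confidently better
                break
            if q_hat + c < 0.5:          # challenger confidently better
                running = challenger
                break
    return running
\end{verbatim}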
For the best ranking problem, however, the authors show that any algorithm needs $ \Omega(K^2) $ comparisons, by considering a model for which the problem of finding a $ 1/4 $-ranking reduces to finding a coin with bias $ 1 $ among $ \frac{K(K-1)}{2}-1 $ other fair coins, a task that requires quadratically many comparisons.
The authors also consider the Borda-score metric without any assumptions. The Borda score of an arm $ a_i $ is $ s(a_i)=\frac{1}{K}\sum_{j} q_{i,j} $, i.e., its probability of winning a comparison against an opponent selected uniformly at random from the remaining arms. An arm $ a_i $ with $ s(a_i) = \max_{j}s(a_j) $ is called a Borda winner (or Borda maximal), and $ a_i $ is called $ \epsilon $-Borda maximal if $ s(a_i) \geq \max_{j}s(a_j)-\epsilon $. A permutation $ a_1,\ldots,a_K $ such that $ s(a_j)\geq s(a_{j+1}) $ for all $ 1\leq j\leq K-1 $ is called a Borda ranking, and it is called an $ \epsilon $-Borda ranking if $ s(a_j)\geq s(a_{k})-\epsilon $ for all $ 1\leq j\leq k\leq K $.
They show that the problem of finding an $ \epsilon $-Borda maximal arm can be solved using linearly many comparisons: PAC optimal algorithms for the standard MAB setting can be applied via the so-called Borda reduction of the dueling bandits to the standard MAB problem, in which drawing a sample from an arm $ a_i $ is simulated by dueling it with another, randomly selected arm.
For the problem of finding an $ \epsilon $-Borda ranking, they present an algorithm that requires $ {\cal O}(K\log K) $ comparisons. The algorithm first approximates the Borda scores of all arms up to an additive error of $ \epsilon/2 $, and then ranks the arms based on these approximate scores.
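The Borda score and the Borda reduction translate directly into code; the sketch below (names of our own choosing) computes the scores from a preference matrix and simulates a value-based ``pull'' of an arm by dueling it against a uniformly random opponent, so that the expected reward of the pull is the average of the corresponding pairwise probabilities (up to the handling of the excluded self-comparison).
\begin{verbatim}
import numpy as np

def borda_score(Q):
    """Borda scores s(a_i) = (1/K) * sum_j Q[i, j] for all arms at once.

    Q is the K x K preference matrix with Q[i, j] = q_{i,j}; the value
    placed on the diagonal (commonly 1/2) shifts all scores equally.
    """
    return Q.mean(axis=1)

def borda_reduction_pull(duel, i, K, rng):
    """Simulate one value-based 'pull' of arm i by a single duel.

    The opponent is chosen uniformly among the other arms, so the expected
    (0/1) reward equals the average of q_{i,j} over j != i.
    """
    j = int(rng.choice([a for a in range(K) if a != i]))
    return duel(i, j)
\end{verbatim}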
\subsubsection{Single elimination tournament}
Under the assumption of the existence of a total order over the arms, \citet{MoSuEl17} study the top-$k$ ranking problem, in which the goal is to find the top-$k$ out of the $ K $ arms in order, and the top-$k$ partitioning problem, where of interest is only the set of the top-$k$ arms, using the error rate performance metric.
They first characterize an upper bound on the sample size required for both problems, and demonstrate the benefit in sample complexity of active over passive ranking.
Then, they present the \Algo{Select} algorithm for identifying the top arm, which can be seen as a customized Single-elimination tournament, consisting of multiple layers, where in each layer, pairs of arms are randomly built first, and on the basis of pairwise comparisons, one arm is retained and the other one is eliminated. This process is repeated until the top arm is identified. They subsequently show that the algorithm \Algo{Select} finds the top arm with high probability and has sample complexity $ {\cal O}(\frac{K\log \log K}{\Delta_{1,2}^2}) $.
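The tournament structure of \Algo{Select} is easy to sketch; in the illustration below (our own simplification), the number $m$ of duels per pair is a fixed input, whereas the actual algorithm tunes this number per layer to obtain its guarantees.
\begin{verbatim}
import numpy as np

def select_sketch(duel, arms, m, rng=None):
    """Single-elimination tournament over `arms` (a list of arm indices).

    Arms are randomly paired in each layer; each pair is compared m times
    and the majority winner advances, until one arm remains.
    """
    rng = np.random.default_rng() if rng is None else rng
    arms = list(arms)
    while len(arms) > 1:
        rng.shuffle(arms)
        next_layer = []
        if len(arms) % 2 == 1:          # odd arm out gets a bye
            next_layer.append(arms.pop())
        for i, j in zip(arms[0::2], arms[1::2]):
            wins_i = sum(duel(i, j) for _ in range(m))
            next_layer.append(i if wins_i > m / 2 else j)
        arms = next_layer
    return arms[0]
\end{verbatim}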
Lastly, they generalize \Algo{Select} to the \Algo{Top} algorithm, which works for both top-$k$ ranking and partitioning, by first splitting the arms into $k$ sub-groups, then identifying the top arm in each sub-group using \Algo{Select}, and finally forming a short list that includes all winners from the sub-groups. From this list, they build a heap data structure, from which the top-$k$ arms are extracted one after another; whenever a top arm is extracted, the next best arm from its sub-group is identified and inserted into the short list. The \Algo{Top} algorithm achieves the sample complexity $ {\cal O} \left( \frac{ (K+k\log k) \max\{ \log k, \log \log K \} }{\Delta_k} \right) $, where $ \Delta_k = \min_{i \in [k]} \min_{j:j\geq i} \Delta_{i,j}^2 $ in the case of ranking and $ \Delta_k = \Delta_{k,k+1}^2 $ in the case of partitioning.
\subsubsection{Successive elimination}\label{subsubsec:se}
\citet{JaKaDeNo15} focus on the pure exploration problem of finding the best arm according to the Borda criterion and consider a specific type of structural constraint on the preference matrix: a sparsity model in which there is a small set of top candidates that are similar to each other, and a large set of irrelevant candidates, which would always lose in a pairwise comparison against one of the top candidates.
They first show that, in such a situation, the Borda reduction, in which the number of samples required depends only on the Borda scores and not on the individual entries of the preference matrix, may result in very poor performance. Subsequently, they propose the Successive Elimination with Comparison Sparsity (\Algo{SECS}) algorithm, which automatically exploits this kind of structure by determining which of two arms is better on the basis of their performance with respect to a sparse set of comparison arms, leading to significant sample complexity improvements compared to the Borda reduction scheme. Basically, \Algo{SECS} implements the successive elimination strategy of \citet{EvMaMa06} together with the Borda reduction and an additional elimination criterion that exploits sparsity. More specifically, \Algo{SECS} maintains an active set of arms of potential Borda winners, and at each time step, it chooses an arm uniformly at random and compares it with all the arms in the active set. The algorithm terminates when only one arm remains.
\subsubsection{Preference-based UCB}
In a work by \citet{ZoWhMuDe14}, the well-known \Algo{UCB} algorithm \citep{AuCeFi02} is adapted from the value-based to the preference-based MAB setting. One of the main advantages of the proposed algorithm, called \Algo{RUCB} (for Relative UCB), is that only the existence of a Condorcet winner is required. The \Algo{RUCB} algorithm is based on the ``optimism in the face of uncertainty'' principle, which means that the arms to be compared next are selected based on the optimistic estimates of the pairwise probabilities, that is, based on the upper bounds $\widehat{q}_{i,j}+c_{i,j}$ of the confidence intervals. In an iteration step, \Algo{RUCB} selects the set of potential Condorcet winners for which all $\widehat{q}_{i,j}+c_{i,j}$ values are above $1/2$, and then selects an arm $a_{i}$ from this set uniformly at random. Finally, $a_{i}$ is compared to the arm $a_{j}$, $j = \operatornamewithlimits{argmax}_{\ell \neq i} \widehat{q}_{i,\ell}+c_{i,\ell}$, which may lead to the smallest regret, taking into account the optimistic estimates.
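The selection rule of one \Algo{RUCB}-style round can be sketched as follows (an illustration in our own notation, not the original pseudocode); the choice of the challenger as the arm with the largest optimistic probability of beating the candidate, as well as the constant $\alpha$, are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def rucb_round(wins, t, alpha, rng):
    """One illustrative RUCB-style selection step.

    wins[i, j] counts how often arm i has beaten arm j so far; t is the
    current time step and alpha > 1/2 an exploration parameter.
    Returns the pair (i, j) to be compared next.
    """
    K = wins.shape[0]
    plays = wins + wins.T
    q_hat = np.divide(wins, plays, out=np.full((K, K), 0.5), where=plays > 0)
    ucb = q_hat + np.sqrt(alpha * np.log(t) / np.maximum(plays, 1))
    np.fill_diagonal(ucb, 0.5)            # diagonal is irrelevant

    # potential Condorcet winners: all optimistic estimates at least 1/2
    candidates = [i for i in range(K) if np.all(np.delete(ucb[i], i) >= 0.5)]
    if not candidates:                    # fall back to all arms
        candidates = list(range(K))
    i = int(rng.choice(candidates))

    # challenger: the arm with the best optimistic chance of beating i
    ucb_against_i = ucb[:, i].copy()
    ucb_against_i[i] = -np.inf
    j = int(np.argmax(ucb_against_i))
    return i, j
\end{verbatim}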
In the analysis of the \Algo{RUCB} algorithm, horizonless bounds are provided, both for the expected and the high-probability regret. Thus, unlike the bounds for \Algo{IF} and \Algo{BTM}, these bounds are valid for each time step. Both the expected and the high-probability regret bound of \Algo{RUCB} are of order ${\cal O} (K^2 + K\log T)$. However, while the regret bounds of \Algo{IF} and \Algo{BTM} depend only on $\min_{j\neq i^{*}} \Delta_{i^{*},j}$, the constants in the bounds for \Algo{RUCB} are of a different nature, although they are still computed from the $\Delta_{i,j}$ values. Therefore, the regret bounds for \Algo{RUCB} are not directly comparable with those given for \Algo{IF} and \Algo{BTM}. Moreover, the regret bounds for \Algo{IF} and \Algo{BTM} are derived with the explore-then-exploit technique, which requires knowledge of the horizon in advance, whereas the bounds for \Algo{RUCB}, both in high probability and in expectation, are finite-time bounds and hence hold at any time step $T$.
\subsubsection{Relative confidence sampling}
\citet{ZoWhDeMu14} consider the cumulative regret minimization setting assuming the existence of a Condorcet winner.
They propose the relative confidence sampling (\Algo{RCS}) algorithm, whose goal is to reduce cumulative regret by being less conservative than existing methods when eliminating arms from comparison. More specifically, \Algo{RCS} proceeds in two phases. First, the results of the comparisons conducted so far are used to simulate a round-robin tournament among the arms, in which posterior distributions over the expected value of each arm are maintained and sampling is performed from those posteriors to determine a champion, which is then compared in a second phase against a challenger deemed to have the best chance of beating it. As more comparisons are conducted, it becomes more likely that the best arm is selected as both champion and challenger, causing regret to fall over time. The authors present experimental results on learning to rank datasets but no theoretical guarantees for the algorithm.
\subsubsection{MergeRUCB}
\citet{ZoWhDe15} consider the problem of finding the Condorcet winner while minimizing the total regret accumulated, when the number of arms is large.
To reduce the overall number of comparisons carried out, they use the \Algo{MergeRUCB} algorithm, which proceeds in a divide-and-conquer fashion similar to the merge sort algorithm: the arms are first grouped into small batches, which are processed separately before being merged together. Further, because of the stochasticity of the feedback the algorithm receives, the local comparisons between two arms within a batch are run multiple times before losing arms are eliminated based on upper confidence bounds of the preference probabilities. When the procedure encounters similar arms, the best arm in the batch is used to eliminate them, and if a batch contains only similar arms or becomes too small, it is merged with another batch offering more variety. The process ends when only a single arm remains, which is guaranteed, with high probability, to be the Condorcet winner.
Under the assumptions that arms are not repeated unless they are uninformative, i.e., provide no useful information and lose against all others, and that at most a third of the arms are uninformative, they provide theoretical performance guarantees in the form of high-probability bounds on the total regret accumulated by the algorithm. The bound is logarithmic in the number of time steps $ T $ and linear in $ K $, taking the form $ \mathcal{O}(K \log T) $, and thus improves upon the regret bound of \Algo{RUCB} by eliminating the $K^2$ term.
\subsubsection{Relative minimum empirical divergence}\label{subsubsec:rmed}
\citet{KoHoKaNa15} assume that the preference matrix has a Condorcet winner, and propose the Relative Minimum Empirical Divergence (\Algo{RMED}) algorithm, which is inspired by the Deterministic Minimum Empirical Divergence (\Algo{DMED}) algorithm \citep{HoTa10}. \Algo{RMED} is based on the empirical Kullback-Leibler (KL) divergence between Bernoulli distributions whose parameters correspond to the probability of one arm being preferred to another, and it draws arms that are likely to be the Condorcet winner with high probability. Based on this information divergence, they show a regret bound of the form $ \mathcal{O}(K \log T) $ for \Algo{RMED}, which is optimal in the sense that its constant factor matches the asymptotic lower bound under the Condorcet assumption and also under the total order assumption.
They also provide the \Algo{RMED2} algorithm, which shares its main routine with \Algo{RMED} but differs in how it selects the comparison target of the first selected arm. More specifically, \Algo{RMED2} tries to select the arm that is most likely the Condorcet winner for most rounds, and explores from time to time in order to reduce the regret increase incurred when it fails to estimate the true Condorcet winner correctly. For ease of analysis, they further propose the \Algo{RMED2} Fixed Horizon (\Algo{RMED2FH}) algorithm, a static version of \Algo{RMED2}, and show that it is asymptotically optimal under the Condorcet assumption.
\subsubsection{Verification based solution}
\citet{Ka16} considers the problem of finding the best arm in stochastic MABs in the pure exploration setting with the goal of minimizing the sample complexity, focusing on the scenario where the failure probability is very small, and presents the Explore-Verify framework for improving performance in multiple generalizations of the MAB setting, including dueling bandits with the Condorcet assumption. The framework is based on the fact that, in MAB problems with structure, verifying the optimality of a candidate is easier than discovering the best arm. This leads to a design in which the arms are first explored and a candidate best arm is obtained with probability $ 1-\kappa $ for some constant $ \kappa $, and it is then verified, with confidence $ 1-\delta $, whether the found arm is indeed the best. If the exploration procedure was correct, the sample complexity is the sum of that of the exploration algorithm, which is independent of $ \delta $, and that of the easier verification task, which depends on $ \delta $. Thus, for small values of $ \delta $, the savings are large, regardless of whether the sample complexity is dominated by the verification task or by the original task with a constant failure probability.
In concrete terms, for the setting of dueling bandits with the Condorcet assumption in the high confidence regime, the exploration procedure queries all pairs until it finds, for each suboptimal arm $ a_i $, an arm $ a_j $ with $ q_{i,j} < 1/2 $; it outputs the identity of the optimal arm, together with, for each sub-optimal arm $ a_i $, the identity of an arm $ a_j(a_i) $ that beats $ a_i $ by the largest gap. The verification procedure proceeds from this advice by making sure that, for each allegedly sub-optimal $ a_i $, the arm $ a_j(a_i) $ indeed beats it. This explore-verify algorithm leads to a sample complexity that improves on the one from \citep{KoHoKaNa15} by more than $ K^{\epsilon} $ for large $ \delta $ and $ \epsilon \in (0,1) $.
\subsubsection{Winner stays}
\citet{ChFr17} study the dueling bandit problem in the Condorcet winner setting and consider two notions of regret: strong regret, which is zero only if both arms pulled are the Condorcet winner, and weak regret, which is zero if at least one of the pulled arms is the Condorcet winner. They propose the Winner Stays (\Algo{WS}) algorithm with a variation for each kind of regret. \Algo{WS} for weak regret (\Algo{WS-W}) runs in a sequence of rounds, in each of which pairs of arms play each other in a sequence of iterations; the winner of an iteration plays in the next iteration against an arm randomly selected from those that have not yet played in the round, and the winner of a round is considered first in the next round. \Algo{WS} for strong regret (\Algo{WS-S}) uses \Algo{WS-W} as a subroutine; each of its rounds consists of an exploration phase, whose length increases exponentially with the number of phases, followed by an exploitation phase.
The authors prove that \Algo{WS-W} has an expected cumulative weak regret that is constant in time, with a dependence on the number of arms $ K $ of $ \mathcal{O}(K^2) $ in the Condorcet winner setting and $ \mathcal{O}(K \log K) $ in the total order setting, and that \Algo{WS-S} has an expected cumulative strong regret of $ \mathcal{O}(K^2 + K \log T) $ in the Condorcet winner setting and $ \mathcal{O}(K \log K + K \log T) $ in the total order setting, both of which have optimal dependence on $ T $. Further, they consider utility-based extensions of weak and strong regret and show that their bounds also apply there, with a small modification. It is worth mentioning that, even though the regret bounds of these algorithms are not optimal, the analysis is special in that the Gambler's ruin problem is used to upper bound the number of pulls of sub-optimal arms, whereas all other regret minimization algorithms reviewed in this study make use of Chernoff-type bounds in some way.
\subsection{Regularity through latent utility functions}
The representation of preferences in terms of utility functions has a long history in decision theory \citep{Fi70}. The idea is that the absolute preference for each choice alternative can be reflected by a real-valued utility degree. Obviously, such degrees immediately impose a total order on the set of alternatives. Typically, however, the utility degrees are assumed to be latent and not directly observable.
\subsubsection{Gradient descent}
In \citep{YuJo09}, a preference-based stochastic MAB setting is introduced in which the pairwise probabilities are directly derived from the (latent) utilities of the arms. More specifically, the authors assume a space ${\cal S}$ of arms, which is not necessarily finite\footnote{This space corresponds to our set of arms ${\cal A}$. However, as we assume ${\cal A}$ to be finite, we use another notation here.}. The probability of an arm $a\in {\cal S}$ beating arm $a' \in {\cal S}$ is given by
\[
\prob ( a \succ a' ) = \frac{1}{2} + \delta( a, a' ) \enspace
\]
where $\delta: \, {\cal S} \times {\cal S} \rightarrow [-1/2,1/2]$. Obviously, the closer the value of the function $\delta$ is to $0$, the harder it becomes to compare the corresponding pair of arms. The authors furthermore assume the pairwise $\delta$-values to be connected to an underlying (differentiable and strictly concave) utility function $u: {\cal S} \rightarrow \mathbb{R}$:
$$
\frac{1}{2} + \delta ( a, a' ) = \sigma \big( u(a)- u(a') \big) \enspace ,
$$
where $\sigma: \R \rightarrow [0,1]$ is called \emph{link function}, as it establishes a connection between the pairwise probabilities and utilities. This function is assumed to satisfy the following conditions: $\lim_{x\to\infty} \sigma ( x ) = 1$ and $\lim_{x\to -\infty} \sigma ( x ) = 0$, $\sigma( x ) = 1- \sigma (-x )$, $\sigma ( 0 ) = 1/2$.
An example of such a function is the logistic function given by $ \sigma(x) = 1/(1+\exp(-x)) $, which was used by \citet{YuJo09}.
The problem of finding the optimal arm can be viewed as a noisy optimization task \citep{FiBeMe11}. The underlying search space is ${\cal S}$, and the function values cannot be observed directly; instead, only noisy pairwise comparisons of function values (utilities) are available. In this framework, it is hard to obtain a reasonable estimate of the gradient. Therefore, the authors opt for applying an online convex optimization method \citep{FlKaMc05}, which does not require the gradient to be calculated explicitly, and instead optimizes the parameter by estimating an unbiased gradient approximation. This optimization algorithm is an iterative search method that proceeds in discrete time steps $1,\ldots,t,\ldots$. In time step $t$, assume that the current point is $a_{t}\in {\cal S}$. The method draws a random direction $u_{t}$ uniformly from the unit ball and calculates $P_{{\cal S}} (a_{t} + \delta u_{t} )$, where $ P_{{\cal S}}(\cdot) $ denotes the projection onto $ {\cal S} $ and, with a slight abuse of notation, $ \delta $ denotes an exploratory step-size parameter.
In the theoretical analysis of the proposed method, called Dueling Bandit Gradient Descent (\Algo{DBGD}), the regret definition is similar to the one in (\ref{eq:regret}), and can be written as
\[
R^{T} = \sum_{t=1}^{T} \delta ( a_{*}, a_{t} ) + \delta (a_{*}, a^{\prime}_{t} ) \enspace .
\]
Here, however, the reference arm $a_{*}$ is the best one known only in hindsight. In other words, $a_{*}$ is the best arm among those evaluated during the search process.
Under a strong convexity assumption on $\delta$, an expected regret bound for the proposed algorithm is derived. More specifically, assuming the search space ${\cal S}$ to be given by the $d$-dimensional ball of radius $R$, the expected regret is
\[
\mathbf{E}[ R^{T} ] \le 2T^{3/4} \sqrt{10RdL} \enspace ,
\]
where $ L $ is the Lipschitz constant of $ \delta $.
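A compact sketch of a \Algo{DBGD}-style update loop on a Euclidean ball is given below (our own illustration; the oracle \texttt{duel}, the step-size names, and the projection are assumptions made for the example).
\begin{verbatim}
import numpy as np

def dbgd_sketch(duel, w0, radius, delta, gamma, T, rng=None):
    """Illustrative DBGD-style loop on the Euclidean ball of given radius.

    duel(w, w_prime) is an assumed comparison oracle returning 1 if the
    exploratory point w_prime is preferred to the current point w.
    delta is the exploration step, gamma the update step (names are ours).
    """
    rng = np.random.default_rng() if rng is None else rng

    def project(w):
        norm = np.linalg.norm(w)
        return w if norm <= radius else w * (radius / norm)

    w = project(np.asarray(w0, dtype=float))
    for _ in range(T):
        u = rng.normal(size=w.shape)
        u /= np.linalg.norm(u)              # uniform direction on the sphere
        w_explore = project(w + delta * u)
        if duel(w, w_explore):              # exploratory point preferred
            w = project(w + gamma * u)      # move towards the better point
    return w
\end{verbatim}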
\subsubsection{Multiple-point gradient descent}
In an attempt to improve the performance of the \Algo{DBGD} algorithm \citep{YuJo09}, which may suffer from large variance because exploration is based on a single exploratory point, obtained by adding a real multiple of a stochastic uniform vector $u_{t}$ to the current parameter $a_{t}$, \citet{ZhKi16} propose two extensions of \Algo{DBGD}: a Dual-Point Dueling Bandit Gradient Descent (\Algo{DP-DBGD}) method and a Multi-Point Deterministic Gradient Descent (\Algo{MP-DGD}) method, which construct the gradient exploration from multiple directions within one time step.
More specifically, \Algo{DP-DBGD} extends the exploration in \Algo{DBGD} to two exploratory parameters constructed by two opposite stochastic directions $ a_{t} + \delta u_{t} $ and $ a_{t} - \delta u_{t} $, instead of only one exploring parameter, to reduce the variance of the gradient approximation.
\Algo{MP-DGD} constructs a set of deterministic standard unit basis vectors for exploration, and updates the parameter by walking along the combination of exploratory winners from the basis ones, where the winner vectors are the ones that perform better than the current parameter.
\subsubsection{Stochastic mirror descent}
\citet{Ku17} studies the utility-based dueling bandit problem under convexity and smoothness assumptions on the utility function that are stronger than those in \citep{YuJo09} and guarantee the existence of a unique minimizer of the utility function. The assumptions on the link function are weaker than those in \citep{AiKaJo14} and are satisfied by common choices, including the logistic function used in \citep{YuJo09}, the linear function used in \citep{AiKaJo14}, and the Gaussian distribution function.
Motivated by the fact that \citet{YuJo09} use a stochastic gradient descent algorithm for the dueling bandits problem, the author proposes to use a stochastic mirror descent algorithm, which achieves a near-optimal order in convex optimization. The dueling bandit problem is first reduced to a locally-convex optimization problem, and it is then shown that the regret of dueling bandits and of function optimization under noisy comparisons are essentially equivalent.
The proposed algorithm, called Noisy Comparison-based Stochastic Mirror Descent (\Algo{NC-SMD}), achieves a regret bound of $ {\cal O} \left( \sqrt{T \log T} \right) $ in expectation, which is optimal except for a logarithmic factor.
\subsubsection{Reduction to value-based MAB}
\citet{AiKaJo14} propose various methodologies to reduce the utility-based PB-MAB problem to the standard value-based MAB problem. In their setup, the utility of an arm is assumed to be in $[0,1]$. Formally, $u : {\cal S} \rightarrow [0,1]$, and the link function is a linear function $\sigma_{lin} ( x ) =\frac{1}{2} x$. Therefore, the probability of an arm $a\in {\cal S}$ beating another arm $a' \in {\cal S}$ is
\[
\prob ( a \succ a' ) = \frac{1 + u(a) - u(a')}{2} \enspace ,
\]
which is again in $[0,1]$. The regret considered is the one defined in (\ref{eq:regret}), where the reference arm $a_{i^{*}}$ is the globally best arm with maximal utility.
In \citep{AiKaJo14}, two reduction techniques are proposed for a finite and an infinite set of arms. In both techniques, value-based MAB algorithms such as \Algo{UCB} \citep{AuCeFi02} are used as a black box for driving the search in the space of arms. For a finite number of arms, value-based bandit instances are assigned to each arm, and these bandit algorithms are run in parallel. More specifically, assume that an arm $i(t)$ is selected in iteration $t$ (to be explained in more detail shortly). Then, the bandit instance that belongs to arm $i(t)$ suggests another arm $j(t)$. These two arms are then compared in iteration $t$, and the reward, which is 0 or 1, is assigned to the bandit algorithm that belongs to $i(t)$. In iteration $t+1$, the arm $j(t)$ suggested by the bandit algorithm is compared, that is, $i(t+1) = j(t)$. What is nice about this reduction technique is that, under some mild conditions on the performance of the bandit algorithm, the preference-based expected regret defined in (\ref{eq:regret}) is asymptotically identical to the one achieved by the value-based algorithm for the standard value-based MAB task.
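A sketch of this finite-arm reduction with plain \Algo{UCB} instances as the black-box learners might look as follows (all names, and the final recommendation rule based on the most frequently played arm, are our own choices for the illustration).
\begin{verbatim}
import numpy as np

class UCB1:
    """Plain UCB1 instance used as the black-box value-based learner."""
    def __init__(self, K):
        self.sums = np.zeros(K)
        self.counts = np.zeros(K)
        self.t = 0
    def suggest(self):
        self.t += 1
        if np.any(self.counts == 0):                 # play each arm once first
            return int(np.argmin(self.counts))
        ucb = self.sums / self.counts + np.sqrt(2 * np.log(self.t) / self.counts)
        return int(np.argmax(ucb))
    def update(self, arm, reward):
        self.sums[arm] += reward
        self.counts[arm] += 1

def per_arm_bandit_reduction(duel, K, T, rng=None):
    """Illustrative finite-arm reduction: one value-based bandit per arm.

    At each step the instance owned by the current arm i proposes an
    opponent j; the two arms are dueled, the instance of i receives the
    reward 1 if j wins (0 otherwise), and j becomes the next current arm.
    """
    rng = np.random.default_rng() if rng is None else rng
    instances = [UCB1(K) for _ in range(K)]
    i = int(rng.integers(K))
    history = np.zeros(K)                            # how often each arm was used
    for _ in range(T):
        j = instances[i].suggest()
        reward = duel(j, i)                          # 1 if the suggested arm j wins
        instances[i].update(j, reward)
        history[i] += 1
        history[j] += 1
        i = j
    return int(np.argmax(history))                   # most frequently played arm
\end{verbatim}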
For infinitely many arms, the reduction technique can be viewed as a two player game. A run is divided into epochs: the $\ell$-th epoch starts in round $t = 2^\ell$ and ends in round $t = 2^{\ell+1}-1$, and in each epoch the players start a new game. During the $\ell$\/th epoch, the second player plays adaptively according to a strategy provided by the value-based bandit instance, which is able to handle infinitely many arms, such as the ConfidenceBall algorithm by \citet{DaHaKa08}. The first player obeys some stochastic strategy, which is based on the strategy of the second player from the previous epoch. That is, the first player always draws a random arm from the multi-set of arms that contains the arms selected by the second player in the previous epoch. This reduction technique incurs an extra $\log T$ factor to the expected regret of the value-based bandit instance.
\subsubsection{Multisort}
\citet{MaGr17} address the ranking problem when comparison outcomes are generated from the Bradley-Terry (BT) \citep{BrTe52} probabilistic model with parameters $ \theta = (\theta_{1}, \ldots , \theta_{K}) \in \mathbb{R}_+^K$, which represent the utilities of the arms. Using the BT model, the probability that an arm $a_i$ is preferred to $a_j$ is given by
\[
\prob ( a_i \succ a_j ) = \frac{1}{ 1 + \exp[-(\theta_{i}-\theta_{j})]} \enspace .
\]
Thus, they end up with a utility-based MAB instance with the logistic link function as in \citep{YuJo09}.
Under the assumption that the distance between adjacent parameters is stochastically uniform across the ranking, they first show that the output of a single call of the \Algo{QuickSort} algorithm \citep{Ho62} is a good approximation to the ground-truth ranking, where the quality of a ranking estimate is measured by its displacement with respect to the ground truth in terms of Spearman's footrule distance $ F(\sigma,\tau) = \sum_{i=1}^{K} |\sigma(i)-\tau(i)| $, with $ \sigma(i) $ and $ \tau(i) $ the ranks of item $ i $ according to the rankings $ \sigma $ and $ \tau $, respectively. They then show that aggregating $ {\cal O}(\log ^5 K) $ independent runs of \Algo{QuickSort} by Copeland's method \citep{Co51}, in which the arms are sorted in increasing order of their total number of pairwise wins, recovers the ground truth everywhere except on a vanishing fraction of the items, i.e., all but a vanishing fraction of the arms are correctly ranked. Based on this result, they propose an active-learning strategy that consists of repeatedly sorting the items: for a budget of $ c $ pairwise comparisons, \Algo{QuickSort} is run repeatedly until the budget is exhausted, the produced rankings themselves are discarded, and the final ranking estimate is the ML estimate computed from the set of all $ c $ observed pairwise comparison outcomes.
\subsection{Regularity through statistical models}\label{sec:mall}
Since the most general task in the realm of preference-based bandits is to elicit a ranking of the complete set of arms based on noisy (probabilistic) feedback, it is quite natural to establish a connection to statistical models of rank data \citep{Ma95}.
The idea of relating preference-based bandits to rank data models has been put forward by \citet{BuHuSz14}, who assume the underlying data-generating process to be given in the form of a probability distribution $\prob:\, \mathbb{S}_K \rightarrow [0,1]$. Here, $\mathbb{S}_K$ is the set of all permutations of $[K]$ (the symmetric group of order $K$) or, via a natural bijection, the set of all rankings (total orders) of the $K$ arms.
The probabilities for pairwise comparisons are then obtained as marginals of $\prob$. More specifically, with $\prob({\bf r})$ the probability of observing the ranking ${\bf r}$, the probability $q_{i,j}$ that $a_i$ is preferred to $a_j$ is obtained by summing over all rankings ${\bf r}$ in which $a_i$ precedes $a_j$:
\begin{align}
\label{eq:pairwisexy}
q_{i,j} = \prob(a_i \succ a_j) = \sum_{{\bf r} \in {\cal L}(r_{j} > r_{i})} \prob ({\bf r})
\end{align}
where ${\cal L}(r_{j} > r_{i}) = \left\{ {\bf r} \in \mathbb{S}_K \,\vert\, r_{j}>r_{i} \right\}$ denotes the subset of permutations for which the rank $r_j$ of $a_j$ is higher than the rank $r_i$ of $a_i$ (smaller ranks indicate higher preference).
In this setting, the learning problem essentially comes down to making inference about $\prob$ based on samples in the form of pairwise comparisons.
\subsubsection{Mallows}
\citet{BuHuSz14} assume the underlying probability distribution $\prob$ to be a Mallows model \citep{Ma57}, one of the most well-known and widely used statistical models of rank data \citep{Ma95}. The Mallows model or, more specifically, Mallows $\phi$-distribution is a parameterized, distance-based probability distribution that belongs to the family of exponential distributions:
\begin{align}
\label{eq:mallows}
\prob ( {\bf r} \, \vert \, \phi, \widetilde{\br} ) = \frac{1}{ Z(\phi) } \, \phi^{d ( {\bf r}, \widetilde{\br} )}
\end{align}
where $\phi$ and $\widetilde{\br}$ are the parameters of the model: $\widetilde{\br} = (\tilde{r}_1, \ldots , \tilde{r}_K) \in \mathbb{S}_K$ is the location parameter (center ranking) and $\phi \in (0,1]$ the spread parameter. Moreover, $d(\cdot,\cdot)$ is the Kendall distance on rankings, that is, the number of discordant pairs:
\[
d({\bf r},\widetilde{\br})= \sum_{1 \leq i < j \leq K} \IND{ \, (r_{i} - r_{j})(\tilde{r}_{i}-\tilde{r}_{j})<0 \, } \enspace ,
\]
where $\IND{\cdot}$ denotes the indicator function.
The normalization factor in (\ref{eq:mallows}) can be written as
$$
Z(\phi)=\sum_{{\bf r}\in \mathbb{S}_K} \prob ( {\bf r} \, \vert \, \phi, \widetilde{\br} )
= \prod_{i=1}^{K-1}\sum_{j=0}^{i} \phi^{j}
$$
and thus only depends on the spread \citep{FlVe86}. Note that, since $d({\bf r},\widetilde{\br})=0$ is equivalent to ${\bf r} = \widetilde{\br}$, the center ranking $\widetilde{\br}$ is the mode of $\prob( \cdot \, \vert \, \phi, \widetilde{\br} )$, that is, the most probable ranking according to the Mallows model.
In \citep{BuHuSz14}, three different goals of the learner, which are all meant to be achieved with probability at least $1-\delta$, are considered, depending on whether the application calls for the prediction of a single arm, a full ranking of all arms, or the entire probability distribution:
\begin{enumerate}
\item[--] The MPI problem consists of finding the most preferred item $i^{*}$, namely the item whose probability of being top-ranked is maximal:
\begin{align}
i^{*} & = \operatornamewithlimits{argmax}_{1\le i \le K } \, \mathbf{E}_{{\bf r} \sim \prob} \, \IND{ r_{i} = 1 }
= \operatornamewithlimits{argmax}_{1\le i \le K } \, \sum_{{\bf r}\in {\cal L}(r_{i}=1)} \prob ( {\bf r} ) \notag
\end{align}
\item[--] The MPR problem consists of finding the most probable ranking ${\bf r}^{*}$:
\[
{\bf r}^{*} = \operatornamewithlimits{argmax}_{{\bf r} \in \mathbb{S}_K} \, \prob ({\bf r} )
\]
\item[--] The KLD problem calls for producing a good estimate $\widehat{\prob}$ of the distribution $\prob$, that is, an estimate with small KL divergence:
$$
\operatornamewithlimits{KL} \left( \prob, \widehat{\prob} \right) = \sum_{{\bf r} \in \mathbb{S}_K} \prob ({\bf r} ) \log \frac{\prob ({\bf r})}{ \widehat{\prob} ({\bf r}) } < \epsilon
$$
\end{enumerate}
In the case of Mallows, it is easy to see that $\widetilde{r}_{i}<\widetilde{r}_{j}$ implies $q_{i,j}>1/2$ for any pair of items $a_{i}$ and $a_{j}$. That is, the center ranking defines a total order on the set of arms: If an arm $a_{i}$ precedes another arm $a_{j}$ in the (center) ranking, then $a_{i}$ beats $a_{j}$ in a pairwise comparison\footnote{Recall that this property is an axiomatic assumption underlying the \Algo{IF} and \Algo{BTM} algorithms. Interestingly, the stochastic triangle inequality, which is also assumed by \citet{YuBrKlJo12}, is not satisfied for Mallows $\phi$-model \citep{Ma57}.}. Moreover, as shown by \citet{Ma57}, the pairwise probabilities can be calculated analytically as functions of the model parameters $\phi$ and $\widetilde{\br}$ as follows:
Assume the Mallows model with parameters $\phi$ and $\widetilde{\br}$. Then, for any pair of items $i$ and $j$ such that $\widetilde{r}_{i}<\widetilde{r}_{j}$, the pairwise probability is given by $q_{i,j}=g(\widetilde{r}_{i},\widetilde{r}_{j},\phi)$, where
\[
g(i,j,\phi)= h(j-i+1,\phi) - h(j-i,\phi)
\]
with $h(k,\phi) = k/(1-\phi^{k})$.
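This closed form is straightforward to implement; a small sketch (restricted to $\phi \in (0,1)$ to avoid the degenerate endpoints) is given below.
\begin{verbatim}
def mallows_pairwise_probability(r_i, r_j, phi):
    """Pairwise marginal q_{i,j} of a Mallows model with spread phi in (0, 1).

    r_i and r_j are the ranks of a_i and a_j in the center ranking.
    For r_i < r_j this is g(r_i, r_j, phi) = h(r_j - r_i + 1) - h(r_j - r_i)
    with h(k) = k / (1 - phi**k); the symmetric case uses q_{j,i} = 1 - q_{i,j}.
    """
    if r_i == r_j:
        return 0.5
    if r_i > r_j:
        return 1.0 - mallows_pairwise_probability(r_j, r_i, phi)
    def h(k):
        return k / (1.0 - phi ** k)
    d = r_j - r_i
    return h(d + 1) - h(d)
\end{verbatim}
For instance, with $\phi = 1/2$ and adjacent ranks the routine returns $2/3$, which lies exactly on the boundary of the margin $( \frac{\phi}{1+\phi}, \frac{1}{1+\phi} )$ mentioned below.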
Based on this result, one can show that the ``margin''
$$
\min_{i \neq j} |1/2 - q_{i,j}|
$$
around $1/2$ is relatively wide; more specifically, there is no $q_{i,j} \in ( \frac{\phi}{1+\phi}, \frac{1}{1+\phi} )$. Moreover, the result also implies that $q_{i,j}-q_{i,k} = {\cal O}(\ell \phi^\ell)$ for arms $a_i, a_j, a_k$ satisfying $\widetilde{r}_{i}=\widetilde{r}_{j}-\ell=\widetilde{r}_k-\ell-1$ with $1<\ell$, and $q_{i,k}-q_{i,j} = {\cal O}(\ell \phi^\ell)$ for arms $a_i, a_j, a_k$ satisfying $\widetilde{r}_{i}=\widetilde{r}_{j}+\ell=\widetilde{r}_k+\ell+1$ with $1<\ell$. Therefore, deciding whether an arm $a_j$ has higher or lower rank than $a_i$ (with respect to $\widetilde{\br}$) is easier than selecting the preferred option from two candidates $a_j$ and $a_k$ for which $j,k\neq i$.
Based on these observations, one can devise an efficient algorithm for identifying the most preferred arm when the underlying distribution is Mallows. The algorithm proposed in \citep{BuHuSz14} for the MPI problem, called \Algo{MallowsMPI}, is similar to the one used for finding the largest element in an array. However, since a stochastic environment is assumed in which the outcomes of pairwise comparisons are random variables, a single comparison of two arms $a_{i}$ and $a_{j}$ is not enough; instead, they are compared until
\begin{equation}\label{eq:ovl}
1/2 \notin \big[ \, \widehat{q}_{i,j}-c_{i,j},\widehat{q}_{i,j}+c_{i,j} \, \big] \enspace .
\end{equation}
This simple strategy finds the most preferred arm with probability at least $1-\delta$ for a sample complexity that is of the form ${\cal O} \left( \frac{K}{\rho^2} \log \frac{K}{\delta \rho }\right)$, where $\rho = \frac{1-\phi}{1+\phi}$.
For the MPR problem, a sampling strategy called \Algo{MallowsMerge} is proposed, which is based on the merge sort algorithm for selecting the arms to be compared. However, as in the case of MPI, two arms $a_{i}$ and $a_{j}$ are not only compared once but until condition (\ref{eq:ovl}) holds. The \Algo{MallowsMerge} algorithm finds the most probable ranking, which coincides with the center ranking of the Mallows model, with a sample complexity of
$$
{\cal O} \left( \frac{K\log_2 K}{\rho^{2}} \log \frac{K\log_2 K}{\delta \rho}\right) \enspace ,
$$
where $\rho = \frac{1-\phi}{1+\phi}$. The leading factor of the sample complexity of \Algo{MallowsMerge} differs from the one of \Algo{MallowsMPI} by a logarithmic factor. This was to be expected, and simply reflects the difference in the worst case complexity for finding the largest element in an array and sorting an array using the merge sort strategy.
The KLD problem turns out to be very hard for the case of Mallows, and even for small $K$, the sample complexity required for a good approximation of the underlying Mallows model is extremely high with respect to $\epsilon$. In \citep{BuHuSz14}, the existence of a polynomial algorithm for this problem (under the assumption of the Mallows model) was left as an open question.
\subsubsection{Plackett-Luce}
\citet{SzBuPaHu15} assume that the underlying probability distribution is a Plackett-Luce (PL) model \citep{Pl75,Lu59}. The PL model is parametrized by a vector $\theta = (\theta_1, \theta_2, \ldots , \theta_K) \in \mathbb{R}_+^K$. Each $\theta_i$ can be interpreted as the weight or ``strength'' of the arm $a_i$. The probability assigned by the PL model to a ranking represented by a permutation $\pi \in \mathbb{S}_K$ is given by
\begin{equation}\label{eq:pl}
\mathbb{P}_\theta(\pi) = \prod_{i=1}^K \frac{\theta_{\pi^{-1}(i)}}{\theta_{\pi^{-1}(i)} + \theta_{\pi^{-1}(i+1)} + \ldots + \theta_{\pi^{-1}(K)}} \enspace .
\end{equation}
The product on the right-hand side of (\ref{eq:pl}) is the probability of producing the ranking $\pi$ in a \emph{stagewise} process: First, the item on the first position is selected, then the item on the second position, and so forth. In each step, the probability of an item to be chosen next is proportional to its weight. Consequently, items with a higher weight tend to occupy higher positions. In particular, the most probable ranking (i.e., the mode of the PL distribution) is simply obtained by sorting the items in decreasing order of their weight:
\begin{equation}\label{eq:mpr}
\tau = \operatorname*{argmax}_{\pi \in \mathbb{S}_K} \mathbb{P}_\theta (\pi ) = \operatorname*{argsort}_{k \in [K]} \{ \theta_1, \ldots , \theta_K \}
\end{equation}
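The stagewise reading of (\ref{eq:pl}) yields a simple sampling routine; the sketch below (assuming numpy, names ours) draws one ranking from a PL model, and sorting the weights as in (\ref{eq:mpr}) recovers the mode.
\begin{verbatim}
import numpy as np

def sample_plackett_luce(theta, rng=None):
    """Draw one ranking (as a list of arm indices) from a PL model.

    theta is the vector of positive weights; at each stage the next item
    is chosen with probability proportional to its weight among the items
    not yet placed, exactly as in the stagewise reading of the PL formula.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta, dtype=float)
    remaining = list(range(len(theta)))
    ranking = []
    while remaining:
        w = theta[remaining]
        pick = int(rng.choice(len(remaining), p=w / w.sum()))
        ranking.append(remaining.pop(pick))
    return ranking          # ranking[k] is the item placed at position k+1

def pl_mode(theta):
    """Most probable ranking: items sorted by decreasing weight."""
    return list(np.argsort(-np.asarray(theta, dtype=float)))
\end{verbatim}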
The authors consider two different goals of the learner, which are both meant to be achieved with high probability. In the first problem, called PACI (for PAC-item), the goal is to find an item that is almost as good as the Condorcet winner, i.e., an item $ j $ such that $ |\Delta_{i^{\star},j}| < \epsilon $, where $ i^{\star} $ is the Condorcet winner. For this problem, they devise the \Algo{PLPAC} algorithm, which has a sample complexity of $ {\cal O}(K \log K) $.
The second goal, called AMPR (approximate most probable ranking), is to find an approximately most probable ranking $ {\bf r} $, i.e., a ranking for which there is no pair of items $ 1\leq i,j \leq K $ with $ r_i^{\star} < r_j^{\star} $, $ r_i > r_j $, and $ \Delta_{i,j} > \epsilon $, where $ {\bf r}^{\star} = \operatorname*{argmax}_{{\bf r} \in \mathbb{S}_K} \mathbb{P} ({\bf r}) $. For this problem, they propose the \Algo{PLPAC-AMPR} algorithm, whose sample complexity is of order $ {\cal O}(K \log^2 K) $.
Both algorithms are based on a budgeted version of the \Algo{QuickSort} algorithm \citep{Ho62}, which reduces its quadratic worst case complexity to the order $ {\cal O}(K \log K) $, and in which the pairwise stability property is provably preserved (the pairwise marginals obtained from the distribution defined by the \Algo{QuickSort} algorithm coincide with the marginals of the PL distribution).
\section{Learning from Noncoherent Pairwise Comparisons}\label{sec:noass}
The methods presented in the previous section essentially proceed from a given target, for example a ranking $\succ$ of all arms, which is considered as a ``ground truth''. The preference feedback in the form of (stochastic) pairwise comparisons provides information about this target and, consequently, should obey certain consistency or regularity properties. This is perhaps most explicitly expressed in Section \ref{sec:mall}, in which the $q_{i,j}$ are derived as marginals of a probability distribution on the set of all rankings, which can be seen as modeling a noisy observation of the ground truth given in the form of the center ranking.
Another way to look at the problem is to start from the pairwise preferences ${\bf Q}$ themselves, that is to say, to consider the pairwise probabilities $q_{i,j}$ as the ground truth. In tournaments in sports, for example, the $q_{i,j}$ may express the probabilities of one team $a_i$ beating another one $a_j$. In this case, there is no underlying ground truth ranking from which these probabilities are derived. Instead, it is just the other way around: A ranking is derived from the pairwise comparisons. Moreover, there is no reason why the $q_{i,j}$ should be coherent in a specific sense. In particular, preferential cycles and violations of transitivity are commonly observed in many applications.
This is exactly the challenge faced by \emph{ranking procedures}, which have been studied quite intensely in operations research and decision theory \citep{Mo88,ChEnLaMa07}. A ranking procedure ${\cal R}$ turns ${\bf Q}$ into a complete preorder relation $\succ^{{\cal R}}$ of the alternatives under consideration. Thus, another way to pose the preference-based MAB problem is to instantiate $\succ$ with $\succ^{{\cal R}}$ as the target for prediction---the connection between ${\bf Q}$ and $\succ$ is then established by the ranking procedure ${\cal R}$, which of course needs to be given as part of the problem specification.
Formally, a ranking procedure ${\cal R}$ is a map $[0,1]^{K \times K} \rightarrow \mathcal{C}_{K}$, where $\mathcal{C}_{K}$ denotes the set of complete preorders on the set of alternatives. We denote the complete preorder produced by the ranking procedure ${\cal R}$ on the basis of ${\bf Q}$ by $\succ^{{\cal R}}_{{\bf Q}}$, or simply by $\succ^{{\cal R}}$ if ${\bf Q}$ is clear from the context. Below, we present some of the most common instantiations of the ranking procedure ${\cal R}$; a small computational sketch follows the list.
\begin{enumerate}
\item[--] Copeland's ranking ($\BIN$) is defined as follows \citep{Mo88}: $a_i
\succ^{\BIN} a_j$ if and only if $d_i > d_j$, where $d_i = \# \{k \in [K] \, \vert \, 1/2 < q_{i,k} \}$ is the Copeland score of $ a_i $. The interpretation of this relation is very simple: An option $a_i$ is preferred to $a_j$ whenever $a_i$ ``beats'' more options than $a_j$ does.
\item[--] The sum of expectations ($\SE$) (or Borda) ranking is a ``soft'' version of $\BIN$: $a_i \succ^{\SE} a_j$ if and only if
\begin{equation}\label{eq:sscore}
q_i = \frac{1}{K-1}\sum_{k \neq i} q_{i,k} > \frac{1}{K-1}\sum_{k \neq j} q_{j,k} = q_j \enspace .
\end{equation}
\item[--] The idea of the random walk (\Algo{RW}) ranking is to handle the
matrix ${\bf Q}$ as a transition matrix of a Markov chain and order the options
based on its stationary distribution. More precisely, \Algo{RW} first transforms ${\bf Q}$ into the stochastic matrix ${\bf S} = \left[s_{i,j}\right]_{K\times K }$
where $s_{i,j}= q_{i,j}/\sum_{\ell=1}^K q_{i,\ell}$. Then, it determines
the stationary distribution $(v_1, \dots, v_K)$ for this matrix (i.e., the eigenvector corresponding
to the largest eigenvalue 1). Finally, the options are sorted according to these probabilities: $a_i \succ^{\RW} a_j$ iff $v_i > v_j$. The $\RW$ ranking is directly motivated by the PageRank algorithm \citep{BrPa98}, which has been well studied in social choice theory \citep{CoShSi99,BrFi07} and rank aggregation \citep{NeOhSh12}, and which is widely used in many application fields \citep{BrPa98,KoBuPo08}.
\end{enumerate}
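The announced sketch of the three procedures, all operating on a given (estimated) preference matrix ${\bf Q}$, is shown below; the power iteration used for the stationary distribution is just one of several possible implementations, and all function names are ours.
\begin{verbatim}
import numpy as np

def copeland_scores(Q):
    """d_i = number of arms beaten by a_i, i.e., entries q_{i,k} > 1/2, k != i."""
    beats = Q > 0.5
    np.fill_diagonal(beats, False)
    return beats.sum(axis=1)

def sum_of_expectations_scores(Q):
    """q_i = average of q_{i,k} over k != i, as in the SE (Borda) ranking."""
    K = Q.shape[0]
    return (Q.sum(axis=1) - np.diag(Q)) / (K - 1)

def random_walk_scores(Q, n_iter=10_000):
    """Stationary distribution of the row-normalized preference matrix."""
    S = Q / Q.sum(axis=1, keepdims=True)
    v = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(n_iter):
        v = v @ S            # left eigenvector for eigenvalue 1
    return v

def ranking_from_scores(scores):
    """Arms sorted from most to least preferred according to the scores."""
    return list(np.argsort(-np.asarray(scores, dtype=float)))
\end{verbatim}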
In Table \ref{tab:inconsist}, we summarize the assumptions, targets, and goals considered in the approaches for the dueling bandits problem with noncoherent pairwise comparisons, prior to elaborating on the methods.
\begin{table}
\begin{center}
\begin{tabular}{|p{4cm}|p{5cm}|p{5cm}|}
\hline
Algorithm & Target & Goal of learner \\
\hline
Preference-based racing \citep{BuSzWeChHu13} & Top-$k$ arms with respect to the Copeland, Borda, and Random walk ranking procedures with high probability & Sample complexity minimization \\
\hline
PAC rank elicitation \citep{BuSzHu14} & Ranking of all arms with respect to the Copeland and Borda ranking procedures along with the NDP and MRD measures in the PAC setting & Sample complexity minimization \\
\hline
Voting bandits \citep{UrClFeNa13} & Copeland and Borda winners & Sample complexity minimization \\
\hline
Copeland confidence bound \citep{ZoKaWhDe15} & Copeland winner & Regret minimization \\
\hline
Copeland winners relative minimum empirical divergence \citep{koHoNa16} & Copeland winner & Regret minimization \\
\hline
Double Thompson sampling \citep{WuLi16} & Copeland winner & Regret minimization \\
\hline
Sparse sparring \citep{BaKaScZo16} & Von Neumann winner & Regret minimization \\
\hline
General tournament solutions \citep{RaRaAg16} & Copeland set, the top cycle, uncovered set, and Banks set & Regret minimization \\
\hline
Active ranking \citep{HeShRaWa16} & Borda ranking & Sample complexity minimization \\
\hline
\end{tabular}
\caption{Approaches for the dueling bandits problem with noncoherent pairwise comparisons.}\label{tab:inconsist}
\end{center}
\end{table}
\subsection{Preference-based racing}
The learning problem considered by \cite{BuSzWeChHu13} is to find, for some $k < K$, the top-$k$ arms with respect to the $\BIN$, $\SE$, and $\RW$ ranking procedures with high probability. To this end, three different learning algorithms are proposed in the finite horizon case, with the horizon given in advance. In principle, these learning problems are very similar to the value-based racing task \citep{MaMo94,MaMo97}, where the goal is to select the $k$ arms with the highest means. However, in the preference-based case, the ranking over the arms is determined by the ranking procedure instead of the means. Accordingly, the algorithms proposed by \cite{BuSzWeChHu13} consist of a successive selection and rejection strategy. The sample complexity bounds of all algorithms are of the form ${\cal O} (K^{2} \log T )$. Thus, they are not as tight in the number of arms as those considered in Section \ref{sec:assonp}. This is mainly due to the lack of any assumptions on the structure of ${\bf Q}$. Since there are no regularities, and hence no redundancies in ${\bf Q}$ that could be exploited, a sufficiently good estimation of the entire relation is needed to guarantee a good approximation of the target ranking in the worst case.
\subsection{PAC rank elicitation}
In a subsequent work by \citet{BuSzHu14}, an extended version of the top-k selection problem is considered. In the \emph{\Algo{PAC} rank elicitation problem}, the goal is to find a ranking that is ``close'' to the ranking produced by the ranking procedure ${\cal R}$ with high probability. To make this problem feasible, more practical ranking procedures are considered. In fact, the problem of ranking procedures like Copeland is that a minimal change of a value $q_{i,j} \approx \frac{1}{2}$ may strongly influence the induced order relation $\succ^{\BIN}$. Consequently, the number of samples needed to assure (with high probability) a certain approximation quality may become arbitrarily large. A similar problem arises for $\succ^{\SE}$ as a target order if some of the individual scores $q_i$ are very close or equal to each other.
As a practical (yet meaningful) solution to this problem, the relations $\succ^{\BIN}$ and $\succ^{\SE}$ are made a bit more ``partial'' by imposing stronger requirements on the order. To this end, let
$d^{*}_{i} = \#\left\{ k \, \vert \, 1/2 + \epsilon < q_{i,k}, i\neq k
\right\}$ denote the number of options that are beaten by $a_i$ with a margin $\epsilon > 0$, and let $s^{*}_{i} = \#\left\{ k : \vert 1/2 - q_{i,k} \vert \le \epsilon, \, i \neq k \right\}$.
Then, the $\epsilon$-insensitive Copeland relation is defined as follows: $a_i \succ^{\BIN_\epsilon} a_j$ if and only if $d^{*}_{i} > d^{*}_{j} + s^{*}_j$. Likewise, in the case of $\succ^{\SE}$, small differences of the $q_{i}$ are neglected and the $\epsilon$-insensitive sum of expectations relation is defined as follows: $a_i \succ^{\SE_\epsilon} a_j$ if and only if $q_{i} > q_{j} + \epsilon$.
These $\epsilon$-insensitive extensions are interval (and hence partial) orders, that is, they are obtained by characterizing each option $a_i$ by the interval $[d_i^*, d_i^* + s_i^*]$ and sorting intervals according to $[a,b] \succ [a',b']$ iff $a > b'$. It is readily shown that $\succ^{\BIN_\epsilon} \,\subseteq\, \succ^{\BIN_{\epsilon'}} \,\subseteq\, \succ^{\BIN}$ for $\epsilon > \epsilon'$, with equality $\succ^{\BIN_0} \,\equiv\, \succ^{\BIN}$ if $q_{i,j} \neq 1/2$ for all $i \neq j \in [K]$ (and similarly for $\SE$). The parameter $\epsilon$ controls the strictness of the order relations, and thereby the difficulty of the rank elicitation task.
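To make these definitions concrete, the following sketch (Python with NumPy) computes $d^*_i$, $s^*_i$ and the two $\epsilon$-insensitive relations from a known preference matrix; the inputs \texttt{Q} and \texttt{eps} are assumed to be given, and the scores $q_i$ are taken, as an assumption of this illustration, to be the averages of the $q_{i,j}$ over $j \neq i$. It merely illustrates the definitions above and is not part of the algorithms of \citet{BuSzHu14}.
\begin{verbatim}
import numpy as np

def eps_insensitive_relations(Q, eps):
    """Compute the eps-insensitive Copeland and sum-of-expectations
    relations from a pairwise preference matrix Q (Q[i, j] = q_{i,j})."""
    K = Q.shape[0]
    off = ~np.eye(K, dtype=bool)                     # exclude i == k
    d_star = ((Q > 0.5 + eps) & off).sum(axis=1)     # beaten with margin eps
    s_star = ((np.abs(Q - 0.5) <= eps) & off).sum(axis=1)
    q_bar = Q[off].reshape(K, K - 1).mean(axis=1)    # assumed scores q_i
    # a_i > a_j (eps-insensitive Copeland)  iff  d*_i > d*_j + s*_j
    copeland_eps = d_star[:, None] > (d_star + s_star)[None, :]
    # a_i > a_j (eps-insensitive sum of expectations)  iff  q_i > q_j + eps
    se_eps = q_bar[:, None] > (q_bar[None, :] + eps)
    np.fill_diagonal(copeland_eps, False)
    np.fill_diagonal(se_eps, False)
    return copeland_eps, se_eps
\end{verbatim}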
As mentioned above, the task in PAC rank elicitation is to approximate $\succ^{{\cal R}}$ without knowing the $q_{i,j}$. Instead, relevant information can only be obtained through sampling pairwise comparisons from the underlying distribution. Thus, the options can be compared in a pairwise manner, and a single sample essentially informs about a pairwise preference between two options $a_i$ and $a_j$. The goal is to devise a \emph{sampling strategy} that keeps the size of the sample (the sample complexity) as small as possible while producing an estimation $\succ$ that is ``good'' in a PAC sense: $\succ$ is supposed to be sufficiently ``close'' to $\succ^{{\cal R}}$ with high probability. Actually, the algorithms in \citep{BuSzHu14} even produce a total order as a prediction, i.e., $\succ$ is a ranking that can be represented by a permutation $\tau$ of order $K$, where $\tau_{i}$ denotes the rank of option $a_i$ in the order.
To formalize the notion of ``closeness'', appropriate distance measures are applied that compare a (predicted) permutation $\tau$ with a (target) order $\succ$. In \citep{BuSzHu14}, the following two measures are used:
The \emph{number of discordant pairs} (\Algo{NDP}) is closely connected to Kendall's rank correlation \citep{ke55} and can be expressed as follows:
\[
d_{{\cal K}}(\tau, \succ) = \sum_{i=1}^K \sum_{j \neq i } \IND{ \tau_j < \tau_i } \IND{ a_i \succ a_j }
\]
The \emph{maximum rank difference} (\Algo{MRD}) is defined as the maximum difference between the rank of an object $a_i$ according to $\tau$ and $\succ$, respectively. More specifically, since $\succ$ is a partial but not necessarily total order, $\tau$ is compared to the set $\mathcal{L}^\succ$ of its linear extensions\footnote{$\tau \in \mathcal{L}^\succ$ iff $\forall\, i, j \in [K]: \, (a_{i} \succ a_{j}) \Rightarrow (\tau_{i} < \tau_{j})$}:
\[
d_{{\cal M}}(\tau, \succ) = \min_{ \tau' \in \mathcal{L}^\succ } \max_{1\le i\le K} \vert \tau_{i} -\tau'_{i} \vert
\]
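For illustration, both distances can be computed directly from their definitions. The following sketch (Python, ranks encoded as $0,\dots,K-1$, the partial order given as a Boolean matrix) enumerates all linear extensions for the MRD distance by brute force and is therefore only meant for very small $K$; it is not the machinery used in \citep{BuSzHu14}.
\begin{verbatim}
import numpy as np
from itertools import permutations

def ndp(tau, succ):
    """Number of discordant pairs: tau[i] is the rank of a_i,
    succ[i, j] is True iff a_i is preferred to a_j."""
    K = len(tau)
    return sum(1 for i in range(K) for j in range(K)
               if i != j and tau[j] < tau[i] and succ[i, j])

def mrd(tau, succ):
    """Maximum rank difference to the closest linear extension of succ
    (brute-force enumeration of all permutations)."""
    K = len(tau)
    best = None
    for perm in permutations(range(K)):          # perm[i] = rank of a_i
        if all(perm[i] < perm[j]
               for i in range(K) for j in range(K) if succ[i, j]):
            diff = max(abs(tau[i] - perm[i]) for i in range(K))
            best = diff if best is None else min(best, diff)
    return best
\end{verbatim}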
The authors propose four different methods for the two $\epsilon$-insensitive ranking procedures, along with the two distance measures described above. Each algorithm calculates a surrogate ranking based on the empirical estimate of the preference matrix, whose distance to the target order can be upper bounded based on statistics of the empirical preference estimates. The sampling is carried out in a greedy manner in every case, in the sense that those arms are compared which are supposed to result in a maximum decrease of the upper bound calculated for the surrogate ranking.
An expected sample complexity bound is derived for the $\epsilon$-insensitive Copeland ranking procedure along with the MRD distance, in a similar way as in \citep{Ka11,KaTeAuSt12}. The bound is of the form ${\cal O} \left( R_1 \log \left( \frac{R_1}{\delta} \right)\right)$, where $R_{1}$ is a task-dependent constant. More specifically, $R_{1}$ depends on the $\Delta_{i,j}$ values, and on the robustness of the ranking procedure to small changes in the preference matrix (i.e., on how much the ranking produced by the ranking procedure might change in terms of the MRD distance if the preference matrix is slightly altered). Interestingly, an expected sample complexity bound of a similar flavor can also be derived for the $\epsilon$-insensitive sum of expectations ranking procedure along with the MRD distance. The analysis of the NDP distance is more difficult, since small changes in the preference matrix may strongly change the ranking in terms of the NDP distance. The sample complexity analysis for this distance has therefore been left as an open question.
\subsection{Voting bandits}
\citet{UrClFeNa13} consider a setup similar to the one of \cite{BuSzHu14}. Again, a ranking procedure is assumed that produces a ranking over the arms, and the goal of the learner is to find a maximal arm according to this ranking (instead of the top-k). Note that a ranking procedure only defines a complete preorder, which means there can be more than one ``best'' arm. The authors propose an algorithm called \Algo{SAVAGE} as a general solution to this problem, which can be adapted to various ranking procedures. Concretely, two procedures are used in their study: the Copeland procedure, in which case the maximal arm is termed the Copeland winner, and the sum of expectations (or Borda counts), where the best arm is called the Borda winner. Moreover, they also devise a method to find the Condorcet winner, assuming it exists---a problem that is akin to the axiomatic approaches described in Subsection \ref{subsec:regaxiom}.
The sample complexity of the implementations in \citep{UrClFeNa13} are of order $K^{2}$ in general. Just like in \citep{BuSzHu14}, this is the price to pay for a ``model-free'' learning procedure that does not make any assumptions on the structure of the preference matrix. The analysis of the authors is more general, because they also investigate the infinite horizon case, where a time limit is not given in advance.
\subsection{Copeland confidence bound}
\citet{ZoKaWhDe15} present two algorithms for the Copeland dueling bandit problem. Copeland Confidence Bound (\Algo{CCB}), which is designed for small numbers of arms, is inspired by \Algo{RUCB} \citep{ZoWhMuDe14}. \Algo{CCB} is based on the principle of ``optimism followed by pessimism''. More specifically, it maintains optimistic estimates of the preference matrix, which are used to choose an optimistic Copeland winner, and pessimistic estimates, which are used to choose an opponent that is believed likely to discredit the hypothesis that the first chosen arm is indeed a Copeland winner. On the basis of the comparisons thus produced, the algorithm successively removes non-Copeland winners, i.e., alternatives that are highly unlikely to be a Copeland winner.
The second algorithm is Scalable Copeland Bandits (\Algo{SCB}), which works better for large-scale problems, and is based on an arm-identification algorithm that identifies a Copeland winner with high probability. More specifically, a KL-based arm-elimination algorithm is used, which implements an elimination tournament with confidence regions based on the KL-divergence between probability distributions.
Assuming that the number of Copeland winners and the number of losses of each Copeland winner are bounded, and that there are no ties, i.e., $q_{i,j}\neq 0.5$ for all pairs $(a_i,a_j)$, $i \neq j$, the authors provide algorithm-specific bounds on the cumulative regret with respect to a Copeland winner. Here, the regret incurred by the learner when comparing $a_{i}$ and $a_{j}$ is $ 2\text{cp}(a_1)-\text{cp}(a_i)-\text{cp}(a_j)$, where $ a_1 $ is a Copeland winner and $\text{cp}(a_i)=\frac{d_i}{K-1}$ the normalized Copeland score. The regret bound of \Algo{CCB} takes the form $ \mathcal{O}(K^2 + K \log T) $, while \Algo{SCB} achieves an expected regret bound of $ \mathcal{O}(K \log K \log T) $.
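As a small illustration of this regret notion (assuming the full preference matrix \texttt{Q} is known, with $0.5$ on its diagonal and no ties), the normalized Copeland scores and the per-comparison regret can be computed as follows:
\begin{verbatim}
import numpy as np

def copeland_regret(Q, i, j):
    """Per-comparison regret 2 cp(a_1) - cp(a_i) - cp(a_j), where
    cp(a_k) = d_k / (K - 1) and a_1 is a Copeland winner
    (no ties assumed, i.e. q_{k,l} != 0.5 for k != l)."""
    K = Q.shape[0]
    d = (Q > 0.5).sum(axis=1)     # arms beaten by each arm (diagonal is 0.5)
    cp = d / (K - 1)
    return 2 * cp.max() - cp[i] - cp[j]
\end{verbatim}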
\subsection{Copeland winners relative minimum empirical divergence}
\citet{koHoNa16} consider Copeland dueling bandits and propose the Copeland Winners Relative Minimum Empirical Divergence (\Algo{CW-RMED}) algorithm. Like the relative minimum empirical divergence algorithm (cf.\ Section \ref{subsubsec:rmed}), it is inspired by the DMED algorithm \citep{HoTa10} for solving the standard MAB problem. Regret minimization is considered as a cost minimization problem and reduced to a linear optimization problem with an exponential number of constraints, for which the authors consider a suboptimal solution that runs in time almost the same as the time for sorting.
To address the complexity issue of \Algo{CW-RMED}, the authors further devise Efficient \Algo{CW-RMED} (\Algo{ECW-RMED}), another algorithm that essentially differs from \Algo{CW-RMED} in the amount of exploration, and in which an optimal solution of the linear optimization problem can be computed efficiently.
The authors define the regret in the same way as \cite{ZoKaWhDe15}, except for a constant factor, and derive an asymptotic regret lower bound of a strongly consistent algorithm. Besides, under the assumption of uniqueness of the optimal solution, they derive an asymptotically optimal regret bound for \Algo{CW-RMED}, which is based on the minimum amount of exploration for identifying a Copeland winner. Moreover, they show that \Algo{ECW-RMED} has a regret bound that is close to optimal in general, and has exactly the same leading logarithmic constant as \Algo{CW-RMED} when the Copeland winners are not unique.
\subsection{Double Thompson sampling}
\citet{WuLi16} propose the Double Thompson Sampling (\Algo{D-TS}) algorithm as well as an enhanced version of it (\Algo{D-TS$ ^{+} $}) for Copeland dueling bandits, including Condorcet bandits as a special case.
\Algo{D-TS} relies on Thompson sampling \citep{Th33,AgGo13,KaKoMu12,ChLi11} to choose an optimal action that maximizes the expected reward according to the agent's current belief (randomly drawn according to the current prior). Moreover, to avoid engaging in suboptimal comparisons, it utilizes the idea of confidence bounds and eliminates arms that are unlikely to be the winner. More specifically, it maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison according to two sets of samples, independently drawn from the posterior distributions, which are then updated according to the comparison results.
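The selection step can be sketched as follows; this is a deliberately simplified illustration, since the actual \Algo{D-TS} additionally prunes candidate arms with confidence bounds and samples each pair only once, and the Beta-posterior bookkeeping via the matrix \texttt{wins} is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def dts_select(wins, rng):
    """One simplified double Thompson sampling selection step.
    wins[i, j] counts duels in which a_i beat a_j."""
    K = wins.shape[0]
    # First, independent sample of the whole preference matrix:
    # pick a sampled Copeland winner as the first arm.
    theta1 = rng.beta(wins + 1, wins.T + 1)
    np.fill_diagonal(theta1, 0.5)
    first = int(np.argmax((theta1 > 0.5).sum(axis=1)))
    # Second, independent sample of the duels against the first arm:
    # pick the strongest sampled challenger.
    theta2 = rng.beta(wins[:, first] + 1, wins[first, :] + 1)
    theta2[first] = -np.inf
    second = int(np.argmax(theta2))
    return first, second
\end{verbatim}
After the duel, the corresponding entry of \texttt{wins} is incremented for the winning arm, which updates the posteriors used in the next round.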
Under the assumption that all pairwise probabilities $q_{i,j}$, $i \neq j$, are different from $ 1/2 $, the authors show that for Copeland dueling bandits, \Algo{D-TS} achieves expected cumulative regret bounds of $ \mathcal{O}(K^2 \log T) $, with the regret as defined by \cite{ZoKaWhDe15}. Using the fact that the distribution of the samples only depends on the historic comparison results and not on the time step $ t $, and referring to a back substitution argument, they further refine this bound to $ \mathcal{O}(K \log T +K^2 \log \log T) $ for Condorcet and a special class of Copeland dueling bandits.
Since \Algo{D-TS} breaks ties randomly, and thus tends to explore all potential winners, its regret scales with the number of winners. The authors therefore propose an enhanced version of \Algo{D-TS}, referred to as \Algo{D-TS$ ^{+} $}, which achieves the same regret bound, but performs better in practice, especially in the case of multiple winners. This is accomplished by a strategy for carefully breaking ties according to estimated regret.
\subsection{Sparse sparring}
\citet{BaKaScZo16} adopt a game-theoretic view of dueling bandits. They introduce the matrix $ P $, whose entry at position $ (i,j) $ is the expected outcome of a duel between arms $a_i$ and $a_j$, where the outcome is $ +1 $ if $a_i$ wins and $ -1 $ if $a_j$ wins. $P$ thus defined is skew-symmetric and specifies a zero-sum game. Therefore, von Neumann's minimax theorem for zero-sum games implies the existence of a probability vector $ w $, the von Neumann winner, which is a maximin strategy for the game $ P $. That is, an action that is chosen at random according to the distribution defined by $ w $ will win a duel against any other action with probability at least $ 1/2 $. The authors then aim for algorithms whose performance approaches the performance of the von Neumann winner. They define the cumulative regret up to time $ T $ as $ \max_{k\in [K]}\sum_{t=1}^{T} (P_{k,i(t)}+P_{k,j(t)})/2 $.
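Since the von Neumann winner is simply a maximin strategy of the zero-sum game $P$ (whose value is zero for skew-symmetric $P$), it can be computed by linear programming whenever $P$ is known. The following sketch (Python, with \texttt{scipy.optimize.linprog} as an assumed dependency) illustrates the solution concept only; it is not the \Algo{SPAR2} algorithm itself, which has to learn $P$ from duels.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def von_neumann_winner(P):
    """Maximin strategy of the skew-symmetric game P,
    with P[i, j] the expected outcome of a duel between a_i and a_j."""
    K = P.shape[0]
    c = np.zeros(K + 1)
    c[-1] = -1.0                                  # maximize the game value t
    A_ub = np.hstack([-P.T, np.ones((K, 1))])     # t - (P^T w)_j <= 0 for all j
    b_ub = np.zeros(K)
    A_eq = np.zeros((1, K + 1))
    A_eq[0, :K] = 1.0                             # w is a probability vector
    b_eq = np.array([1.0])
    bounds = [(0, None)] * K + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                  b_eq=b_eq, bounds=bounds)
    return res.x[:K]                              # the von Neumann winner w
\end{verbatim}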
Under the assumptions of uniqueness of the von Neumann winner and sparsity (there is only a small set of ``good'' actions, with the rest being strictly inferior), the authors present the Sparse Sparring (\Algo{SPAR2}) algorithm.
They show that \Algo{SPAR2} has regret, relative to a unique von Neumann winner with sparsity $ s $ (the number of nonzero elements of the von Neumann winner), at most\footnote{$ \tilde{\mathcal{O}} $ is a variant of the $ \mathcal{O} $-notation that ignores logarithmic factors.} $ \tilde{\mathcal{O}}( \sqrt{sT} + C(P)) $, where $C(P)$ is an instance-dependent quantity, constant with respect to $ T $, that depends on the underlying probabilities controlling the outcomes of the pairwise duels. \Algo{SPAR2} follows the action elimination principle of \cite{EvMaMa02}, in which actions that cannot belong to the support of the von Neumann winner are eliminated based on confidence bounds on probability estimates. For the remaining actions, the sparring idea of \cite{AiKaJo14} is applied: two independent copies of the \Algo{Exp3} \citep{AuCeFrSc02} algorithm are maintained, the estimate of $ P $ is successively improved, and actions are excluded appropriately.
\subsection{General tournament solutions}
\citet{RaRaAg16} consider general tournament solutions from social choice theory \citep{BrBrHaMo16} as sets of winning arms:
\begin{enumerate}
\item[--] The Copeland set defined as the set of arms in $ [K] $ that beat the maximal number of other arms, i.e., all arms $a_i$ with $ i \in \operatornamewithlimits{argmax}_{k \in [K] } \sum_{j\neq k} \IND{a_k \succ a_j} $ ,
\item[--] the top cycle defined as the smallest $ W \subseteq [K] $ for which $ a_i \succ a_j, \forall i \in W$, $j \notin W $,
\item[--] the uncovered set defined as the set of all arms that are not covered by any other arms, where we say an arm $ a_i $ covers an arm $ a_j $ if $ a_i \succ a_j $ and $ \forall \, k: \, a_j \succ a_k \implies a_i \succ a_k $,
\item[--] and the Banks set defined as the set of maximal elements of all maximal acyclic sub-tournaments, where a tournament associated with a preference matrix $ P $ is $ {\cal T}_P = ([K],E_P) $ with $ E_P=\{(i,j): a_i\succ_P a_j\} $, and a sub-tournament $ {\cal T} = (V,E) $ with $ V \subseteq [K] $ and $ E = E_P|_{V \times V} $ is said to be maximal acyclic if it is acyclic and no other sub-tournament containing it is acyclic.
\end{enumerate}
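For a known and complete tournament (a Boolean dominance matrix without ties), the first three of these sets can be computed directly from their definitions. The following sketch only makes the definitions concrete; the Banks set is omitted, and this is not the selection logic of the \Algo{UCB-TS} algorithms discussed below.
\begin{verbatim}
import numpy as np

def copeland_set(beats):
    """beats[i, j] = True iff a_i beats a_j in the tournament."""
    scores = beats.sum(axis=1)
    return set(np.flatnonzero(scores == scores.max()))

def top_cycle(beats):
    """Smallest dominant set W: every arm in W beats every arm outside W.
    Computed as the set of arms with a directed path to a Copeland winner."""
    K = beats.shape[0]
    W = {int(np.argmax(beats.sum(axis=1)))}
    changed = True
    while changed:
        changed = False
        for j in range(K):
            if j not in W and any(beats[j, i] for i in W):
                W.add(j)
                changed = True
    return W

def uncovered_set(beats):
    """a_i covers a_j if a_i beats a_j and beats everything a_j beats."""
    K = beats.shape[0]
    covered = [any(beats[i, j] and all(not beats[j, k] or beats[i, k]
                                       for k in range(K))
                   for i in range(K) if i != j)
               for j in range(K)]
    return {j for j in range(K) if not covered[j]}
\end{verbatim}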
The authors develop a family of upper confidence bound (UCB) style dueling bandit algorithms for such general tournament solutions, \Algo{UCB-TS}, together with anytime regret bounds of the form $ {\cal O}(K^2\log T) $ in the worst case. Here, the regret is measured relative to the tournament solution of interest, i.e., a suitable notion of individual regret of an arm with respect to a tournament solution is defined and used to define the common pairwise regret. The generic algorithm maintains a matrix of upper confidence bounds on the unknown pairwise probabilities, which are updated on each trial, after the algorithm observes the feedback for a pair of arms it selects based on the current UCB matrix using a selection procedure designed for a specific tournament solution. Each of the selection procedures adopts an ``optimism followed by pessimism approach'' to manage the exploration-exploitation tradeoff. More specifically, the first arm is selected as the potential winning arm based on the UCBs, and the second arm as the one that has the greatest chance of invalidating the first arm as a winning arm.
\subsection{Active ranking}
\citet{HeShRaWa16} consider the Borda ranking problem without requiring any structural properties of the underlying pairwise probability matrix. Concretely, for a given tolerance parameter $ \delta \in (0,1) $, they consider finding an estimation $\hat{{\bf r}}$ of the true ranking ${\bf r} $ such that $ \prob ( \hat{r}_{i}=r_{i} \text{ for all } i \in [K] ) \geq 1-\delta $, while minimizing the number of comparisons queried.
They propose the Active Ranking (\Algo{AR}) algorithm, which adopts a successive elimination strategy \citep{Pa64}. More specifically, the algorithm maintains estimates of the Borda scores of the arms, obtained from comparisons with randomly chosen arms, and assigns arms to ranks once it is sufficiently confident. It terminates when all arms are ranked.
For the proposed algorithm, the authors prove a sample complexity of the order
$$ {\cal O} \left( \sum_{i=1}^{K}\Delta_i^{-2}\log \left(\frac{K}{\delta\Delta_i} \right) \right) \enspace ,
$$
where $ \delta \in (0,0.14) $ and $ \Delta_i $ is the gap between the $i$-th and $(i+1)$-st highest Borda score. They show that the algorithm is optimal up to logarithmic factors and, moreover, that imposing parametric models such as Bradley-Terry-Luce can reduce the sample complexity by at most a logarithmic factor.
\section{Extensions}
\label{sec:extensions}
In this section, we review different generalizations and extensions of the setting of preference-based (dueling) bandits as discussed in the previous sections.
\subsection{Contextual dueling bandits}
\citet{DuHoShSlZo15} extend the dueling bandits framework to incorporate contextual information. More precisely, the learner is supposed to optimize its choice of arms in the course of an iterative learning process of the following kind: In each round, the learner observes a random context, chooses a pair of actions, runs a duel between them, and observes the outcome. The authors consider the solution concept of a von Neumann winner and present three algorithms for online learning and for approximating such a winner from batch-like data, while measuring performance using regret as defined by \cite{BaKaScZo16}.
The authors first present an online algorithm that shares similarities with the sparring approach of \citet{AiKaJo14}. Two separate independent copies of the multi-armed bandit algorithm \Algo{Exp4.P} \citep{BeLaLiReSc11}, which is designed to work with policies in a contextual setting, are run against each other. The two copies use the same actions, contexts, and policies as the original problem. In each round, nature chooses a context and a preference matrix, and only the context is revealed to both copies, which then select two actions. A duel is run between these actions, and the (negated) outcome is passed as feedback to the first (second) copy. This approach leads to a regret that is upper bounded by $ {\cal O}(\sqrt{KT\ln(K/\delta)}) $ with probability at least $1-\delta$, and requires time and space linear in the size of the policy space, which is impractical for large policy spaces.
To deal with the problem of the linear dependence on the policy space size, the authors furthermore propose a general approach for constructing an approximate von Neumann winner. This is done by reducing the problem to be solved to a more tractable form on the basis of a collection of empirical exploration data. More specifically, the authors assume the existence of a classification oracle on the space of policies, which can find the minimum cost policy when given the cost of each action on each of a sequence of contexts, and propose two algorithms: \Algo{SparringFPL}, which is based on the Follow-the-Perturbed-Leader algorithm \citep{KaVe05}, and \Algo{ProjectedGD}, which builds on the projected gradient ascent algorithm \citep{Zi03}. The two algorithms solve a compact game, or equivalently compute an approximate von Neumann winner. While their regret bound is weaker than the one for the first algorithm, they require time and space that depend only logarithmically on the size of the policy space.
Based on the Perceptron algorithm, \citet{CoCr14} develop the \Algo{SHAMPO} (SHared Annotator for Multiple PrOblems) algorithm for online multi-task learning with a shared annotator, in which learning is performed in rounds. In each round, each of $ K $ different learners receives an input and predicts its label. A shared stochastic mechanism then annotates one of the $ K $ inputs, and the learner receiving the feedback updates its prediction rule. The authors show that this algorithm can be used to solve the contextual dueling bandits problem when a decoupling of exploration and exploitation is allowed.
To pick a task to be labeled, \Algo{SHAMPO} performs an exploration-exploitation strategy in which tasks are randomly queried, with a bias towards tasks that involve a high uncertainty about the labeling. To perform an update on the parameter vector representing the model, the algorithm applies the Perceptron update rule to the true label revealed for the task chosen.
\subsection{Dueling bandits on posets}
\citet{AuRa17} extend the dueling bandits problem to partially ordered sets (posets), allowing pairs of arms to be incomparable. They consider the problem of identifying, with a minimal number of pairwise comparisons in a pure exploration setting, the set of maximal arms or Pareto set among all available arms. The main challenge in this framework is the problem of indistinguishability: the agent may be unsure whether two arms are indeed comparable and just very close to each other, or whether they are incomparable, regardless of the number of comparisons that have been performed. Without any additional information, it might then be impossible to recover the exact Pareto set.
The authors first devise the \Algo{UnchainedBandits} algorithm to find a nearly optimal approximation of the Pareto front of any poset. The strategy implemented by the algorithm is based on a peeling approach that offers a way to control the time spent on pulling arms that are indistinguishable. The authors provide a high probability regret bound of $ {\cal O} \left( K \text{\textbf{width}}(S)\log \frac{K}{\delta} \sum_{i \,:\, a_i \notin \mathcal{P}} \frac{1}{\Delta_i} \right) $, where $ S $ is the poset, $ \text{\textbf{width}}(S) $ is its width defined as the maximum size of an antichain (a subset in which every pair is incomparable), $ {\cal P} $ is the Pareto front, $ \Delta_i $ is the regret associated with arm $i$ defined as the maximum difference between arm $i$ and the best arm comparable to $i$, and the regret incurred by comparing two arms $a_i$ and $a_j$ is defined by $ \Delta_i+\Delta_j $.
Further, by making use of the concept of decoys, the authors show that \Algo{UnchainedBandits} can recover the exact set $ \mathcal{P} $, incurring regret that is comparable to the former one---except for an extra term due to the regret incurred by the use of decoys---with a sample complexity that is upper bounded by $ {\cal O} ( K \text{\textbf{width}}(S) \log(NK^2/\delta)/\Delta^2 ) $, where $ N $ is a positive integer related to a weaker form of distinguishability and $ \Delta $ is a parameter of the decoys. The concept of decoys is an idea inspired by works from social psychology, intended to force an agent towards a specific good by presenting her a choice between the targeted good and a degraded version of it.
\subsection{Graphical dueling bandits}
\citet{DiGeMa11} consider the bandit problem over a graph, the structure of which defines the set of possible comparisons. More specifically, they assume that there is an inherent and unknown value per node, and that the graph describes the allowed comparisons: two nodes are connected by an edge if they can be compared by a single query. Such a query returns a random number, the distribution of which depends on the difference between the values of the two nodes. Thus, unlike the traditional dueling bandit setup, the topology is not a complete graph, and non-adjacent nodes can only be compared indirectly by sampling all the edges on a path between them.
The authors consider different topologies and focus on the sample complexity for finding the optimal arm in the PAC setting. They provide algorithms that construct estimates of edge reward differences, and combine these estimates into a node selection procedure, together with their sample complexities, in the case when the edges are bounded. For the linear topology, in which each node is comparable to at most two other nodes, they present an algorithm that samples all edges, computes the empirical mean of each edge, and finds the highest edge based on these statistics. The sample complexity is $ {\cal O}(\frac{K^2}{\max \{ \epsilon, u\}^2} \log(\frac{1}{\delta})) $, where $ u $ is the difference between the values of the node with the highest value and the node with the second highest value.
For the tree topology, that is, a topology in the form of a tree rooted at one node and all edges being directed downwards to the leaves, the authors present an algorithm that considers all possible paths from the root to the leaves and treats each one of them as a line graph. The latter is then processed like in the linear setup, with a sample complexity of $ {\cal O}(\frac{KD}{\max \{ \epsilon, u\}^2} \log(\frac{|L|}{\delta})) $, where $ D $ is the diameter of the tree and $ L $ the set of leaves.
For the network topology, that is, general connected and undirected graphs, the authors present the Network Node Elimination (\Algo{NNE}) algorithm, which is inspired by the action elimination procedure of \citet{EvMaMa06}. This algorithm has a sample complexity upper bounded by $ \frac{KD}{(\max \{ \epsilon, u\}/\log K)^2} \log(\frac{K}{\delta/\log K}) $. Further, they consider an extension to the case where the algorithm receives contextual information in the form of feature vectors before sampling the edges, and show that a version of the \Algo{NNE} algorithm achieves a sample complexity of the form $ {\cal O}(B \log^2 B) $, where $ B = \frac{KD}{(\epsilon/\log K)^2} \log \left( \frac{K}{\delta/\log K} \right)d^2 $, and $ d $ is the dimension of the feature vectors.
\subsection{Multi-dueling bandits}
\citet{BrSeCoLi16} consider the problem of online ranker evaluation and address the question of which rankers to compare in each iteration. To this end, they generalize the $ K $-armed dueling bandit to the \emph{multi-dueling} bandit framework, which is based on simultaneous comparisons of more than two rankers through \emph{multileaving}, as opposed to comparisons of only pairs of rankers through interleaving. They assume the existence of a Condorcet winner and aim at selecting subsets of arms, so that the cumulative regret is minimized, where the regret of a set of arms is the average of the regrets of the individual arms with respect to the Condorcet winner. The authors then propose the multi-dueling bandit algorithm (\Algo{MDB}), which plays arms that, according to optimistic estimates of the pairwise winning probabilities, are most likely to be the Condorcet winner. More specifically, when only a single candidate remains, \Algo{MDB} plays only that candidate, and when there are multiple candidates, \Algo{MDB} explores by comparing all of them, together with additional arms obtained using wider upper confidence bounds on the probabilities to increase parallel exploration.
\citet{ScOoWhDe16} propose the online learning-to-rank algorithm Multileave Gradient Descent (\Algo{MGD}), which extends \Algo{DBGD} \citep{YuJo09} with multileaving, i.e., to learn from comparisons of multiple rankers at once instead of only a pair. More specifically, \Algo{MGD} uses a current best ranker, which is updated based on user feedback by creating $ n $ exploratory candidate rankers along with their corresponding rankings by repeatedly sampling the unit sphere around the current best ranker (with a uniform distribution), and adding the resulting unit vector to the current best ranker. If the current best ranker is among the winners, then no update is performed; otherwise, the current best ranker is updated accordingly, using one of two update methods, thus leading to two variants of \Algo{MGD} that correspond to two different ways of how the preferences are used for updating a current best ranker. In \Algo{MGD} winner takes all (\Algo{MGD-W}), the gradient is estimated using one ranker randomly sampled from the rankers that win a comparison, and in \Algo{MGD} mean winner (\Algo{MGD-M}), the gradient is estimated using the mean of all winners of the comparison.
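One such update step can be sketched as follows; the step size \texttt{delta}, the learning rate \texttt{eta}, and the callable \texttt{multileave\_winners}, which is assumed to return the indices of the winning rankers of a multileaved comparison (index 0 being the current best ranker), are illustrative assumptions rather than the exact interface of \Algo{MGD}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def mgd_step(w, multileave_winners, n=9, delta=1.0, eta=0.01, mode="mean"):
    """One simplified MGD update on the current best ranker w."""
    d = len(w)
    units = rng.normal(size=(n, d))
    units /= np.linalg.norm(units, axis=1, keepdims=True)  # unit sphere samples
    candidates = [w] + [w + delta * u for u in units]
    winners = multileave_winners(candidates)                # user feedback
    if 0 in winners:                    # current best among winners: no update
        return w
    if mode == "winner":                # MGD-W: one randomly sampled winner
        g = units[rng.choice(winners) - 1]
    else:                               # MGD-M: mean of all winning directions
        g = np.mean([units[i - 1] for i in winners], axis=0)
    return w + eta * g
\end{verbatim}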
\citet{OoScDe16} replace the team draft multileaving evaluation method used in \citep{ScOoWhDe16} to infer the preferences, in which multiple but a limited number of candidate rankers per user interaction can be compared. This is done by the probabilistic multileave comparison method \citep{ScBrBuVaGrOOTrVeVaWeWoDe15}, an extension of probabilistic interleave \citep{HoWhDe11}, which selects documents from a distribution, where the probability of being added correlates with the perceived relevance. Thus, allowing for comparisons of a virtually unlimited number of rankers at a time, this leads to the probabilistic multileave gradient descent (\Algo{P-MGD}) algorithm.
\subsection{Dueling bandits with dependent arms}
\citet{ChFr16} study dueling bandits with utility-based weak regret, when arms have a total order, determined by a utility that is a function of observable arm features and unknown latent preference vectors---a structure that induces dependence between preferences over arms.
The authors introduce the Comparing The Best (\Algo{CTB}) algorithm, which is based on the idea of cells that correspond to possible orderings of the arms by utility. The algorithm uses optional prior information to initialize each cell with a score, which can be interpreted as a monotone transformation of the posterior probability that the unknown preference vector is in this cell. It updates these scores based on results from duels, where arms are chosen for a duel by selecting two cells that have different best arms, and are together most likely to contain the unknown preference vector.
The authors provide a general implementation that is appropriate for a small number of arms, and a computationally efficient one for a larger number of arms. It can be used when prior information that can be expressed as an initial score for each pair of arms is available. They further prove that \Algo{CTB} has expected cumulative regret which is constant in $T$. The dependence on $K$ is of the form $2^K$ in the worst case and $K^{2d}$ when the utility function is linear, where $d$ is the dimension of the space of preferences and arm features.
\subsection{Multi-dueling bandits with dependent arms}
\citet{SuZhBuYu17} introduce the setting of multi-dueling bandits with dependent arms, which extends the original dueling bandits setting by simultaneously dueling multiple arms per iteration rather than just two, and modelling low-dimensional dependencies between arms rather than treating each arm independently.
Inspired by the idea of sparring \citep{AiKaJo14}, the authors propose the \Algo{SelfSparring} framework with Thompson sampling as MAB algorithm, in which $m$ MAB algorithms are used to control the choice of each of the $m$ arms to be drawn in each iteration. The input of the algorithm is the set of arms $ {\cal A} $, the number of arms to be dueled in each iteration $ m $, and a learning rate for posterior updates. The algorithm uses a prior distribution to initialize the sampling process over $ {\cal A} $, selects $ m $ arms in each iteration by sampling over the prior distribution from the last iteration, and computes the posterior by combining the preference feedback with the last prior.
The authors distinguish between the independent setting with a finite set of arms and the kernelized setting with a continuous action space of arms.
For the first setting, they propose a specialization of \Algo{SelfSparring}, called \Algo{IndSelfSparring}, which makes use of Beta-Bernoulli Thompson sampling. Under the assumption of approximate linearity---a generalization of the linear utility-based setting of \cite{AiKaJo14}, where for any triplet of arms such that $ a_i\succ a_j\succ a_k $ and some constant $ \gamma>0 $, $\delta(a_i,a_j)-\delta(a_j,a_k)\geq \gamma \delta(a_i,a_j)$---they show that the algorithm converges to the optimal arm with asymptotically optimal no-regret rate of $ {\cal O}(K\ln(T)/\Delta) $, where the regret of a set of arms is the sum of the regrets of the individual arms with respect to the best arm, and $ \Delta $ is the difference between expected rewards of the best two arms.
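A minimal sketch of this Beta--Bernoulli variant is given below; the callable \texttt{duel}, assumed to return the number of pairwise wins of each selected arm within the multi-duel, as well as the parameter names, are assumptions of this illustration and not the exact interface of \citet{SuZhBuYu17}.
\begin{verbatim}
import numpy as np

def ind_self_sparring(duel, K, m, T, eta=1.0, seed=0):
    """Simplified IndSelfSparring: Beta-Bernoulli Thompson sampling,
    one independent posterior draw per arm slot in each iteration."""
    rng = np.random.default_rng(seed)
    alpha, beta = np.ones(K), np.ones(K)
    for _ in range(T):
        chosen = []
        for _ in range(m):                       # one draw per slot
            theta = rng.beta(alpha, beta)
            chosen.append(int(np.argmax(theta)))
        wins = np.asarray(duel(chosen), dtype=float)
        losses = (m - 1) - wins                  # duels lost within the set
        np.add.at(alpha, chosen, eta * wins)     # handles repeated arms
        np.add.at(beta, chosen, eta * losses)
    return alpha, beta
\end{verbatim}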
For the second setting, the authors provide a specialization of \Algo{SelfSparring}, called \Algo{KernelSelfSparring}. This algorithm uses a Gaussian process prior with an appropriate kernel, which is used to specify dependencies. The mean reward estimates for all the arms that share dependencies with the arms selected for comparison are updated by posterior inference. The authors do not provide a regret analysis for the algorithm.
\subsection{Adversarial dueling bandits}
\citet{AiKaJo14} consider the adversarial utility-based dueling bandit problem, in which no stochastic assumption on the utilities of the arms is required, i.e., the expected utility of each arm may change over iterations. The authors suggest applying the reduction algorithm \Algo{SPARRING}, which was originally designed for stochastic settings, with an adversarial bandit algorithm such as the Exponential-weight algorithm for Exploration and Exploitation (\Algo{EXP3}) \citep{AuCeFrSc02} as a black-box MAB. More specifically, the algorithm uses two separate MAB instances, one for choosing each of the two arms of a duel. On receiving the relative feedback about a duel, one instantiation of \Algo{EXP3} only updates its weight for the first arm and the other instantiation only updates for the second arm. For this \Algo{SPARRING} reduction, it is shown that the $ \mathcal{O}(\sqrt{KT \ln K}) $ upper bound of \Algo{EXP3} is preserved.
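The reduction described above is easy to sketch; in the following illustration (Python/NumPy), the callable \texttt{duel} and the fixed exploration rate are assumptions of the sketch, while the \Algo{EXP3} weight and probability updates follow \citep{AuCeFrSc02}.
\begin{verbatim}
import numpy as np

class Exp3:
    """Standard EXP3 with uniform exploration rate gamma."""
    def __init__(self, K, gamma, seed=0):
        self.K, self.gamma = K, gamma
        self.w = np.ones(K)
        self.rng = np.random.default_rng(seed)

    def draw(self):
        self.p = (1 - self.gamma) * self.w / self.w.sum() + self.gamma / self.K
        return int(self.rng.choice(self.K, p=self.p))

    def update(self, arm, reward):
        xhat = reward / self.p[arm]                   # importance weighting
        self.w[arm] *= np.exp(self.gamma * xhat / self.K)

def sparring(duel, K, T, gamma=0.1):
    """SPARRING-style reduction: one EXP3 copy per position of the duel;
    duel(i, j) is assumed to return True iff a_i beats a_j."""
    left, right = Exp3(K, gamma, seed=1), Exp3(K, gamma, seed=2)
    for _ in range(T):
        i, j = left.draw(), right.draw()
        i_wins = duel(i, j)
        left.update(i, 1.0 if i_wins else 0.0)
        right.update(j, 0.0 if i_wins else 1.0)
    return left, right
\end{verbatim}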
\citet{GaUrCl15} also study adversarial utility-based dueling bandits and suggest the Relative Exponential-weight for Exploration and Exploitation (\Algo{REX3}) algorithm, which is an extension of the \Algo{EXP3} algorithm to the dueling bandit setting. The authors notice that the best arm in expectation at a specific time is not only the one that maximizes the absolute reward, but also the one that maximizes the regret of any fixed reference strategy against it (a role which might be played by the algorithms' strategy itself). This observation provides them a means to estimate the individual rewards of two arms involved in a comparison, despite having access to only a relative value, thus allowing them to adapt the \Algo{EXP3} algorithm to the dueling bandits setting. In addition to providing a general lower bound of order $ \Omega(\sqrt{KT}) $ on the regret of any algorithm, using the reduction to the classical MAB problem by \cite{AiKaJo14}, they prove a finite-horizon non-stochastic upper bound on the expected regret of order $ \mathcal{O}(\sqrt{KT \ln K}) $ for \Algo{REX3}, which matches the original bound of the \Algo{EXP3} algorithm for classical adversarial MABs, and the one by \cite{AiKaJo14}.
\subsection{Partial monitoring games}
\citet{GaUr15} study the dueling bandit problem as an instance of a partial monitoring (PM) problem \citep{BaPaSz11,Ba13,BaFoPARASz14}---a generic model for sequential decision-making with incomplete feedback, which is defined by a quintuple $ ({\bf N},{\bf M},{\bf \Sigma},{\cal L},{\cal H}) $, where ${\bf N}$ is the set of actions, ${\bf M}$ is the set of outcomes, and ${\bf \Sigma} $ is the feedback alphabet; the loss function ${\cal L}$ associates a real-valued loss ${\cal L}(I,J)$ with each action $ I \in {\bf N} $ and outcome $ J \in {\bf M} $, and the feedback function $ {\cal H} $ associates a feedback symbol $ {\cal H}(I,J) \in {\bf \Sigma} $ with each such pair. In each round of a PM game, first the opponent chooses an outcome $ J_t $ from $ {\bf M} $, and then the learner chooses an action $ I_t $ from $ {\bf N} $. The learner then suffers the loss $ {\cal L}(I_t,J_t) $ and receives the feedback $ {\cal H}(I_t,J_t) $. The goal of the learner is to control the expected cumulative regret against the best single-action strategy at time $ T $:
\[ R_T = \max_{i} \sum_{t=1}^{T} \left[ {\cal L}(I_t,J_t)-{\cal L}(i,J_t) \right]
\]
The dueling bandits problem can be encoded as a PM problem with the set of actions given by the set of all pairs of arms $ {\bf N} = \{ (i,j): 1\leq i \leq j \leq K \} $, the alphabet $ {\bf \Sigma} = \{0,1\} $, and the set of outcomes given as vectors ${\bf m} \in {\bf M} = \{0,1\}^K$, where $ {\bf m}_i $ denotes the instantaneous gain of the $ i^{th} $ arm. After the environment selects an outcome $ {\bf m} \in {\bf M} $ and the learner selects a duel $ (i,j) \in {\bf N} $, the instantaneous gain is
\[ {\cal G}((i,j),{\bf m}) = \frac{{\bf m}_i+{\bf m}_j}{2} \enspace ,
\]
and the feedback is
\begin{equation*}
X=
\begin{cases}
\text{loss}, & \text{if}\ {\bf m}_i<{\bf m}_j \\
\text{tie}, & \text{if}\ {\bf m}_i={\bf m}_j \\
\text{win}, & \text{if}\ {\bf m}_i>{\bf m}_j
\end{cases} \enspace .
\end{equation*}
Using the PM formalism, the authors prove that the dueling bandits problem is an easy instance according to the hierarchy of PM problems. Further, they survey existing PM algorithms and their optimality to solve dueling bandits problems efficiently, with respect to time $ T $ and number of actions $ K $. Their study reveals that the \Algo{REX3} algorithm \citep{GaUrCl15} for adversarial utility-based dueling bandits with a regret of $ \tilde{{\cal O}}(\sqrt{KT}) $ is the only optimal algorithm with respect to $T$ and $K$, and that the \Algo{SAVAGE} algorithm \citep{UrClFeNa13} for general stochastic dueling bandits with a regret of $ {\cal O}(K^2 \log T) $ is optimal in $T$ but not in $K$.
\section{Applications}
Multi-armed bandits have been used in various fields of application, and the more recent setting of dueling bandits is receiving increasing attention from a practical perspective, too. In the following, we provide a short overview of some recent applications of dueling bandit algorithms.
\citet{ChMa12} consider the problem of learning a model of gesture generation to automatically generate animations for dialogues. To this end, they make use of subjective human judgement of naturalness of gestures. In this regard, pairwise comparisons (one gesture is considered more natural than another one) appear to be much easier than absolute judgements, which are often very noisy. This is why the authors tackle the task as a dueling bandits problem. Concretely, they use the \Algo{DBGD} algorithm \citep{YuJo09} and show empirically that the framework can effectively improve the generated gestures based on the simulated naturalness criterion.
In the context of information retrieval, \citet{HoShWhDe13} investigate whether and how previously collected historical interaction data can be used to speed up online learning to rank. They introduce an approach based on the \Algo{DBGD} algorithm \citep{YuJo09} and a recently developed Probabilistic Interleave (PI) method. The latter is based on a probabilistic interpretation of interleaved comparisons that allows one to infer comparison outcomes using data from arbitrary result lists. They evaluate the performance of their approach in terms of discounted cumulative reward on several learning-to-rank data sets and find that historical data can indeed be used to increase the effectiveness of online learning to rank for information retrieval.
Supporting clinical research that aims at recovering motor function after severe spinal cord injury, \citet{SuBu14} set up a dueling bandit instance to help paralyzed patients regain the ability to stand on their own feet. The feedback consists of a stochastically ranked subset of $ K $ tests, each of which physically corresponds to an electrical stimulation period applied to the spinal cord with a specific stimulus. The goal is to identify the optimal stimulus for a patient, and the ranking is based on a combined scoring of certain standing criteria by the observing clinicians (under noisy conditions). The authors introduce the Rank-Comparison algorithm, a modified version of \Algo{BTM} \citep{YuJo11}, which has an optimal expected total regret.
\citet{SuYuBu17} address the same application. To overcome the issue of very large action spaces, which is due to the huge number of different stimulating configurations and hinders a fast convergence of algorithms attempting to solve this problem, they consider correlation structures on the action space and exploit dependencies among the arms. This allows them to update a set of active arms instead of only a pair in each iteration of the algorithm. The authors propose $ \Algo{CorrDuel} $, an algorithm based on \Algo{BTM}. This algorithm is applied in a synthetic experimental setup and shown to perform better than algorithms that do not exploit correlation information. In a live clinical trial of therapeutic spinal cord stimulation, $ \Algo{CorrDuel} $ performs as well as specialized physicians.
\citet{SoRiUr16} use the Structured Dueling Bandits algorithm, an extension of \Algo{DBGD} \citep{YuJo09}, for response-based structured prediction in Statistical Machine Translation (SMT). In a repeatable generate-and-test procedure, the learner is given partial feedback that consists of assessments of the quality of the predicted translation. This feedback is used by the learner to update model parameters. In a simulation experiment, the authors show that learning from responses alleviates the supervision problem and allows a direct optimization of SMT for different tasks.
\citet{ChZhKi16} consider the problem of the allocation of assessment tasks among peers when grading open-ended assignments in Massive Open Online Courses (MOOCs), and formalize it as a sequential noisy ranking aggregation problem. More specifically, each student ranks a subset of his peers' assignments, and the goal is to aggregate all the partial rankings into a complete ranking of all assignments. The authors assume the existence of a ground-truth ranking, and that the underlying distribution is a Mallows model. Based on these assumptions, they propose \Algo{TustAwareRankingBasedMAB}, an algorithm based on merge sort and MallowsMPR \citep{BuHuSz14}.
\citet{ScKu17} investigate the problem of learning a user's task preferences in Human-Robot Interaction for socially assistive tasks. Concretely, they consider the goal of learning a user's preferred exercise category. They formulate a dueling bandits problem, where arms represent exercises. Preference feedback is given by a user who, given two exercises that are presented to him as text on a display, selects the more preferred one. \Algo{D-TS} \citep{WuLi16} is used as the dueling bandit learning algorithm. Simulation experiments show that the users were satisfied with the suggested preference rankings. Moreover, the results of a comparison of the preference learning approach against a simulated strategy that randomly selects preference rankings show that the preference learning approach leads to a significant reduction of ranking errors.
\section{Summary and Perspectives}
This paper provides a survey of the state of the art in preference-based online learning with bandit algorithms, an emerging research field that we referred to as preference-based multi-armed bandits (PB-MAB). In contrast to standard MAB problems, where bandit information is understood as (stochastic) real-valued rewards produced by the arms the learner decided to explore (or exploit), feedback is assumed to be of a more indirect and qualitative nature in the PB-MAB setting. In particular, the work so far has focused on preference information in the form of comparisons between pairs of arms. We have given an overview of instances of the PB-MAB problem that have been studied in the literature, algorithms for tackling them, and criteria for evaluating such algorithms.
Needless to say, the field is still developing and far from being mature. The contributions so far are highly interesting, and some of them have already found their way into concrete applications. Yet, a complete and coherent theoretical framework is still to be developed. With this survey, we hope to contribute to the further popularization, development, and shaping of the field.
We conclude the paper with a short (and certainly non-exhaustive) list of open problems that we consider particularly interesting for future work:
(\emph{i}) As we have seen, the difficulty of PB-MAB learning strongly depends on the assumptions on properties of the preference relation ${\bf Q}$: The more restrictive these assumptions are, the easier the learning task becomes. An interesting question in this regard concerns the ``weakest'' assumptions one could make while still guaranteeing the existence of an algorithm that scales linearly in the number of arms.
(\emph{ii}) A similar question can be asked for the regret. The \Algo{RUCB} algorithm achieves a high probability regret bound of order $K\log T$ by merely assuming the existence of a Condorcet winner. Yet, this assumption is arguable and certainly not always valid.
(\emph{iii}) For most of the settings discussed in the paper, such as those based on statistical models, a lower bound on the sample complexity is not known. Thus, it is difficult to say whether an algorithm is optimal or not. There are a few exceptions, however. For the regret optimization setup with the assumption of the existence of a total order over arms, it is known that, for any algorithm A, there is a bandit problem such that the regret of A is $\Omega (K \log T)$ (see Theorem 2 in \citep{YuBrKlJo12}). Moreover, in the case of the utility-based bandit setup, the reduction technique of \cite{AiKaJo14} implies that the lower bound of the standard bandit setup~\citep{LaRo85} also applies for the utility-based setup. Obviously, these lower bounds are also lower bounds for all settings starting from even weaker assumptions.
(\emph{iv}) Another important problem concerns the development of (statistical) tests for verifying the assumptions made by the different approaches in a real application. In the case of the statistical approaches based on the Mallows and PL distributions, for example, the problem would be to decide, based on data in the form of pairwise comparisons, whether the underlying distribution could indeed be Mallows or PL. Similarly, one could ask for methods to test the validity of relaxed or strong stochastic transitivity, stochastic triangle inequality, and existence of a Condorcet winner as required by many methods.
\acks{The authors gratefully acknowledge financial support by the German Research Foundation (DFG).}
\vskip 0.2in
\section{Introduction}
\label{sec:section1}
Shape optimization subject to partial differential equations plays an important role in a variety of applications, such as minimum-drag shapes in fluid dynamics, acoustics, material sciences, or geometric inverse problems in non-destructive testing and medical imaging.
In order to propose an efficient design, an initial geometry is described with the help of a finite set of parameters, which are modified such that a given cost function is minimized. Steepest-descent methods iteratively modify the design according to the negative gradient with respect to the chosen parameters, thereby ensuring a successive decrease of the cost function. The use of the adjoint approach~\cite{jameson1988aerodynamic} makes computing the gradient independent of the number of design parameters, which promotes using all mesh node positions as design parameters, i.e., the richest design space possible. A downside of the plethora of design variables is the possibility of high-frequency oscillations in the search direction, as no boundary smoothness is inherent in the parametrization. Consequently, the resulting non-smooth designs can cause an irregular computational mesh, resulting in the failure of the optimization.
A natural choice to overcome these difficulties is using a smoothed gradient~\cite{jameson1988aerodynamic,jameson1990automatic,jameson1994optimum} or, equivalently, a Sobolev gradient descent~\cite{renka2006simple}, which necessitates a manual or automatic parameter study to determine a problem-dependent smoothing parameter \cite{kim2005enhancement}. Picking an adequate smoothing parameter is a crucial task, as a poor manipulation of the search direction can potentially slow down the convergence.
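As a brief illustration of the smoothing step referred to above, a discrete Sobolev smoothing of a raw shape gradient sampled on a closed boundary curve may be sketched as follows (Python/NumPy); the one-dimensional periodic finite-difference discretization and the constant smoothing parameter \texttt{eps} are simplifying assumptions of this sketch and not the method developed in this paper.
\begin{verbatim}
import numpy as np

def sobolev_smooth(g, eps, h=1.0):
    """Solve (I - eps * d^2/ds^2) s = g for the smoothed gradient s,
    using a periodic finite-difference Laplacian on a closed curve."""
    n = len(g)
    lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
           + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h**2
    A = np.eye(n) - eps * lap
    return np.linalg.solve(A, g)
\end{verbatim}
Larger values of \texttt{eps} damp high-frequency components of the raw gradient more strongly; automating this choice, in a spatially dependent way, is precisely what the approach discussed in this paper aims at.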
Additionally, the convergence speed of the steepest-descent method deteriorates if the Hessian of the optimization problem is ill-conditioned~\cite[Chapter~3.3]{wright1999numerical}. An approach to overcome the condition number dependency is Newton's method, which necessitates a computation of the Hessian. As it is computationally not feasible to determine the exact Hessian, a lot of work has gone into its approximation. One strategy is to approximate the symbol of the exact operator \cite{arian1995analysis,arian1999analysis,arian1999preconditioning}. Furthermore, closely related studies based on Fourier analysis have also been used in~\cite{yang2011shape} to study the condition of several Navier--Stokes flow situations, bridging the gap between optimization acceleration and studying the well- or ill-posed nature of flow problems.
An approach to construct a search direction that fulfills both the desired regularity of the design and the inclusion of Hessian information has been derived in \cite{schmidt2009impulse} for energy minimization. The authors use the symbol of the exact Hessian for a half-space geometry to choose a constant parameter for Sobolev smoothing which is based on the spacing of the computational mesh. The choice of a constant smoothing parameter can, however, lead to a limitation of the design space, as non-smooth features can be physically meaningful in certain areas of the design. One can, for example, think of the sharp trailing edge of an airfoil. Furthermore, the limitation to half-space geometries means that the smoothing parameter might not be valid in practical applications.
The aim of this paper is to extend the derivation of the Hessian symbol to body fitted coordinates, allowing the direct usage of the symbol in the construction of an approximate Newton method. The resulting preconditioner will inherit the local smoothing properties of the exact Hessian by picking spatially dependent smoothing parameters automatically.
The paper is structured as follows: After the introduction in section \ref{sec:section1}, we derive the steepest-descent search direction in section \ref{sec:section2}. To accelerate the optimization, we derive the Hessian symbol in section \ref{sec:section3} and discuss why the Hessian has smoothing behavior. This behavior is demonstrated and investigated further in section \ref{sec:section4}, where a numerical approximation of the Hessian symbol for low Reynolds number flows is presented. The considered technique to approximate this symbol gives insight into the smoothing properties of the Hessian. Having derived the analytic symbol, we construct a preconditioner which approximates this symbol in section \ref{sec:section5}. To ensure minimal computational costs, we use differential operators to construct said preconditioner. The coefficients of these operators are determined automatically to match the exact Hessian symbol. Finally, in section \ref{sec:section6}, we compare our novel method to classical Sobolev smoothing with a constant smoothing parameter.
\section{The steepest-descent search direction}
\label{sec:section2}
We start by defining the shape optimization problem for a general cost function $F$ subject to a Stokes flow. This constrained optimization problem takes the form
\begin{subequations}\label{eq:optProblem}
\begin{align}
\min_{\bm{v},p,\Gamma_{o}}F(\bm{v},p,&\Gamma_{o}) \\
\text{s.t. }
-\mu \Delta \bm{v} + \nabla p &= 0,\\
\nabla \cdot \bm{v} &= 0,\\
\bm{v} = 0 &\text{ on } \Gamma_{o},
\end{align}
\end{subequations}
where the design variable $\Gamma_{o}$ is the surface of a flow obstacle with volume $\Omega_o$. In our setting, the flow domain is given by $\Omega = \mathbb{R}^d\setminus \Omega_o$. The velocity $\bm{v}\in\mathbb{R}^d$ and the pressure $p\in\mathbb{R}$ fulfill the Stokes equations with dynamic viscosity $\mu$. Physically, the Stokes equations describe a creeping flow, in which convective forces are negligible compared to viscous forces. We are interested in the minimization of the obstacle's drag, which is given by
\begin{equation}\label{eq:drag}
F_{D} := \int_{\Gamma_{o}}-\mu (\bm{n}\cdot\nabla)\bm{v} \cdot \bm{a}+p\bm{n}\cdot\bm{a}d\Gamma,
\end{equation}
where $\bm{a}$ is defined as
\begin{align*}
\bm{a}:=(\cos(\phi),\sin(\phi))^T.
\end{align*}
The angle of attack $\phi$ will be zero in our case. Making use of gradient-based methods, the design variable $\Gamma_o$ is modified iteratively such that the cost function $F_D$ is minimized. In order to calculate the gradient of the optimization problem \eqref{eq:optProblem}, the shape derivative of the cost function needs to be determined. As proposed in \cite{ZolesioSokolowski,DelfourZolesio2}, the shape derivative can be computed by defining a mapping $T_t[\bm{V}]$ which maps the original domain $\Omega$ to the deformed domain $\Omega_t$, given by
\begin{align*}
T_t[\bm{V}](\bm{x}) = \bm{x}+t\bm{V}(\bm{x}).
\end{align*}
The shape derivative of the cost function $F$ in the direction of the vector field $\bm{V}$ is then given by
\begin{align*}
dF(\Omega)[\bm{V}]:=\left.\frac{d}{dt} \right\vert_{t=0}F(T_t[\bm{V}](\Omega)).
\end{align*}
To efficiently calculate the shape derivative, one often makes use of the adjoint approach, which yields the following theorem:
\begin{theorem}
The shape derivative of problem \eqref{eq:optProblem} with respect to $\bm{V}$ using the drag \eqref{eq:drag} as cost function is
\begin{align*}
dF_{D}(\bm{v},p,\Omega)[\bm{V}] = -\int_{\Gamma_{o}} (\bm{V}\cdot \bm{n})\left[ \sum_{i=1}^d \mu (\bm{n}\cdot\nabla)\lambda_i (\bm{n}\cdot\nabla)v_i \right] d\Gamma
\end{align*}
where the adjoint velocity $\bm{\lambda}\in\mathbb{R}^d$ is given by
\begin{align*}
-\mu \Delta \bm{\lambda}- \nabla \lambda_p &= \bm{0}, \\
\div{\bm{\lambda}} &= 0,\\
\bm{\lambda} = -\bm{a} &\text{ on } \Gamma_{o}
\end{align*}
and $\bm{n}$ is the normal vector of the optimization patch $\Gamma_o$.
\end{theorem}
\begin{proof}
We start by calculating the shape derivative for a general cost function
\begin{align*}
F(\bm{v},p,\Gamma_{o}) = \int_{\Gamma_{o}}f(\bm{v},(\bm{n}\cdot\nabla)\bm{v},p,\bm{n})d\Gamma.
\end{align*}
Following \cite{Schmidt10}, the shape derivative of this cost function is
\begin{align*}
dF(\Omega)[\bm{V}]=&\int_{\Gamma_o}(\bm{V}\cdot\bm{n})\left[ ( \bm{n}\cdot\nabla ) f +\kappa f\right] + \frac{\partial f}{\partial \bm{n}} d\bm{n}[\bm{V}] d\Gamma\nonumber \\
&+\int_{\Gamma_o} \frac{\partial f}{\partial v_i} v_i'[\bm{V}] + \frac{\partial f}{\partial b_i}(\bm{n}\cdot\bm{\nabla})v_i'[\bm{V}]+\frac{\partial f}{\partial p}p'[\bm{V}] d\Gamma
\end{align*}
where $\bm{b}:=(\bm{n}\cdot\nabla)\bm{v}$, the curvature is denoted by $\kappa$ and the tangential divergence of a vector field $\bm{W}$ is given by
\begin{align*}
\tandiv(\bm{W}) = \div(\bm{W}) - (\bm{n}\cdot \nabla)\bm{W} \cdot \bm{n} = \partial_{x_j}W_j-n_k \partial_{x_k}W_jn_j.
\end{align*}
Furthermore, the normal derivative $( \bm{n}\cdot\nabla )$ is only applied to the first three inputs of $f$, namely $\bm{v},\bm{b}$ and $p$. The gradient can be rewritten as
\begin{align}\label{eq:GradientBoundary}
dF(\Omega)[\bm{V}]=&\int_{\Gamma_o}(\bm{V}\cdot\bm{n})\left[ ( \bm{n}\cdot\nabla ) f +\kappa \left(f- \bm{n}\cdot \frac{\partial f}{\partial \bm{n}}\right) + \tandiv{\frac{\partial f}{\partial \bm{n}}}\right]d\Gamma \nonumber \\
&+\int_{\Gamma_o} \frac{\partial f}{\partial v_i} v_i'[\bm{V}] + \frac{\partial f}{\partial b_i}(\bm{n}\cdot\bm{\nabla})v_i'[\bm{V}]+\frac{\partial f}{\partial p}p'[\bm{V}] d\Gamma.
\end{align}
The local shape derivative of the velocity and pressure due to a perturbation $\bm{V}$ is denoted by $\bm{v}'[\bm{V}]$ and $p'[\bm{V}]$. Computing these two functions is numerically expensive, which is why we aim at finding a representation of the shape derivative, independent of these two terms. The functions $\bm{v}'[\bm{V}]$ and $p'[\bm{V}]$ are determined by linearizing the Stokes equations around the primal state $\bm{v}$ and $p$, meaning that we write down the Stokes equation for the states of the perturbed geometry with $\bm{\tilde{v}} = \bm{v}+\bm{v'}$ and $\tilde{p}=p+p'$. This yields
\begin{subequations}\label{eq:linStokes}
\begin{align}
-\mu \Delta \bm{v}'[\bm{V}] + \nabla p'[\bm{V}] &= 0,\\
\nabla \cdot \bm{v}'[\bm{V}] &= 0,\\
\bm{v}'[\bm{V}] = -(\bm{n}\cdot\nabla)\bm{v}(\bm{n}&\cdot\bm{V}) \text{ on } \Gamma_{o}.
\end{align}
\end{subequations}
The derivation of the boundary condition can be performed by a Taylor expansion. For more details see \cite{Schmidt10}. In order to eliminate the velocity and pressure perturbations $\bm{v}'$ and $p'$ in the shape derivative \eqref{eq:GradientBoundary}, we follow the adjoint approach. We start by integrating the scalar product of the linearized Stokes equations \eqref{eq:linStokes} with the adjoint states $(\bm{\lambda},\lambda_p)^T$, where $\bm{\lambda}\in\mathbb{R}^d$ is the adjoint velocity and $\lambda_p$ is the adjoint pressure, leading to
\begin{equation}\label{eq:adjointPart}
0 = \int_{\Omega} -\lambda_k \mu \partial_{x_j x_j} v_k' + \lambda_k \partial_{x_k}p'+\lambda_p \partial_{x_k}v_k' d\Omega.
\end{equation}
Let us look at each term of the adjoint part individually. We start with
\begin{align*}
\int_{\Omega} -\lambda_k \mu \partial_{x_j x_j} v_k' d\Omega =& \int_{\Omega} -\mu \partial_{x_j}(\lambda_k \partial_{x_j} v_k' )+\mu\partial_{x_j}\lambda_k \partial_{x_j}v_k' d\Omega \\
=&\int_{\Gamma_o}-\mu n_j \lambda_k\partial_{x_j}v_k'd\Gamma+\int_{\Omega}\mu \partial_{x_j}\left( v_k'\partial_{x_j}\lambda_k\right)-\mu v_k'\partial_{x_j x_j}\lambda_k d\Omega\\
=&\int_{\Gamma_o}-n_j \mu \lambda_k\partial_{x_j}v_k'd\Gamma+\int_{\Gamma_o} \mu n_j v_k'\partial_{x_j}\lambda_k-\int_{\Omega}\mu v_k'\partial_{x_j x_j} \lambda_k d\Omega.
\end{align*}
Here, we used the product rule in reverse together with Gauss' divergence theorem, i.e., integration by parts. The remaining two terms can be transformed with the same strategy. We obtain
\begin{align*}
\int_{\Omega}\lambda_k \partial_{x_k} p' d\Omega = -\int_{\Omega} p'\partial_{x_k} \lambda_k d\Omega + \int_{\Gamma_o}n_k \lambda_k p' d\Gamma, \\
\int_{\Omega} \lambda_p \partial_{x_k}v_k'd\Omega = - \int_{\Omega}v_k'\partial_{x_k}\lambda_p d\Omega+\int_{\Gamma_o}\lambda_p v_k' n_k d\Gamma .
\end{align*}
Adding the transformed equation \eqref{eq:adjointPart} to the gradient \eqref{eq:GradientBoundary}, we get
\begin{align}\label{eq:GradientBoundaryAdj}
dF(\Omega)[\bm{V}]+0 =&\int_{\Gamma_o}V_l n_l\left[ n_j\partial_{x_j} f +\kappa \left(f- n_j\frac{\partial f}{\partial n_j}\right) + \tandiv{\frac{\partial f}{\partial n_j}}\right] d\Gamma \nonumber \\
&+\int_{\Gamma_o} \frac{\partial f}{\partial v_j} v_j' + \frac{\partial f}{\partial b_j}(n_k\partial_{x_k})v_j'+\frac{\partial f}{\partial p}p' d\Gamma \nonumber \\
&+\int_{\Gamma_o}- \mu \lambda_j n_k\partial_{x_k}v_j'+\mu n_k v_j'\partial_{x_k}\lambda_j+n_j \lambda_j p'+\lambda_p v_j' n_j d\Gamma \nonumber \\
&+\int_{\Omega}-\mu v_k'\partial_{x_j x_j} \lambda_k - p'\partial_{x_k} \lambda_k - v_k'\partial_{x_k}\lambda_p d\Omega.
\end{align}
Remembering that the adjoint states $(\bm{\lambda},\lambda_p)^T$ are still free to choose, those states can be picked to cancel the perturbations of the primal states. The resulting constraint for the adjoint states is called the adjoint equation. By looking at the volume part of the gradient \eqref{eq:GradientBoundaryAdj}, we see that in $\Omega$ we must have
\begin{subequations}
\begin{align*}
-\mu \Delta \bm{\lambda}- \nabla \lambda_p &= \bm{0}, \\
\div{\bm{\lambda}} &= 0.
\end{align*}
\end{subequations}
Now, let us determine the adjoint boundary conditions, i.e., the conditions which the adjoint states must fulfill on the boundaries such that the perturbed primal states drop out of the gradient \eqref{eq:GradientBoundaryAdj}. Assuming that we fulfill the adjoint equations, we can rearrange the gradient to
\begin{align*}
dF(\Omega)[\bm{V}] =&\int_{\Gamma_o}V_l n_l\left[ n_j\partial_{x_j} f +\kappa \left(f- n_j\frac{\partial f}{\partial n_j}\right) + \tandiv{\frac{\partial f}{\partial n_j}}\right] d\Gamma \\
&+\int_{\Gamma_o} v_j'\left[ \frac{\partial f}{\partial v_j} + \mu n_k \partial_{x_k}\lambda_j +\lambda_p n_j \right]
+(n_k\partial_{x_k})v_j' \left[ \frac{\partial f}{\partial b_j} - \mu \lambda_j \right]
+p'\left[\frac{\partial f}{\partial p}+n_j \lambda_j\right]d\Gamma.
\end{align*}
Remember that we have $v_j' = -V_l n_l (n_i \partial_{x_i}) v_j$ on $\Gamma_o$ from the boundary conditions of the linearized Stokes equations \eqref{eq:linStokes}, which is why we do not need to calculate $v_j'$. The remaining perturbed primal states are forced to vanish with the help of the adjoint boundary conditions. Hence, on $\Gamma_o$ the adjoint states must fulfill
\begin{subequations}\label{eq:bcAdjoint}
\begin{align}
\frac{\partial f}{\partial \bm{b}} - \mu \bm{\lambda} = \bm{0}, \\
\frac{\partial f}{\partial p}+\bm{n}\cdot\bm{\lambda} = 0.
\end{align}
\end{subequations}
If the dual states fulfill these conditions, we are left with
\begin{align*}
dF(\Omega)[\bm{V}] =&\int_{\Gamma_o}V_l n_l\left[ n_j\partial_{x_j} f +\kappa \left(f- n_j\frac{\partial f}{\partial n_j}\right) + \tandiv{\frac{\partial f}{\partial n_j}}\right] d\Gamma \\
&-\int_{\Gamma_o} V_l n_l (n_i \partial_{x_i}) v_j \left[ \frac{\partial f}{\partial v_j} + \mu n_k \partial_{x_k}\lambda_j +\lambda_p n_j \right]d\Gamma .
\end{align*}
Let us now simplify the gradient \eqref{eq:GradientBoundaryAdj} as well as the adjoint boundary conditions \eqref{eq:bcAdjoint} for the drag minimization problem by making use of
\begin{align*}
f_D = -\mu \bm{b} \cdot \bm{a}+p\bm{n}\cdot\bm{a}.
\end{align*}
We have
\begin{align*}
\frac{\partial f_D}{ \partial \bm{v}} &= \bm{0}\text{,}\enskip\frac{\partial f_D}{\partial \bm{b}} = -\mu \bm{a}, \\
\frac{\partial f_D}{\partial p} &= \bm{n}\cdot\bm{a},\enskip\frac{\partial f_D}{\partial \bm{n}} = -\mu \nabla \bm{v} \cdot \bm{a} + p\bm{a}.
\end{align*}
Hence, the adjoint boundary conditions on $\Gamma_{o}$ become $\bm{\lambda} = -\bm{a}$. Furthermore, the gradient changes to
\begin{align*}
dF_D(\Omega)[\bm{V}] =\int_{\Gamma_o}V_l n_l\left[ n_j\partial_{x_j} f_D + \tandiv{\frac{\partial f_D}{\partial n_j}} \right]-V_l n_l (n_i \partial_{x_i}) v_j \left[ \mu n_k \partial_{x_k}\lambda_j +\lambda_p n_j \right]d\Gamma,
\end{align*}
because $f_D$ is linear in the $n$ argument and consequently
\begin{align*}
f_D-\bm{n}\cdot\frac{\partial f_D}{\partial\bm{n}} = 0.
\end{align*}
Additionally, the term $V_l n_l (n_i \partial_{x_i}) v_j\lambda_p n_j$ is zero, because we can rewrite the velocity gradient as
\begin{align*}
\partial_{x_j} v_l = n_i\partial_{x_i}v_l n_j + t_k\partial_{x_k}v_l t_j.
\end{align*}
Due to the no-slip boundary condition, the derivative w.r.t. the tangential direction $\bm{t}$ drops out. Inserting the resulting representation of the velocity gradient into the mass conservation equation yields
\begin{align*}
\partial_{x_j} v_j = n_i\partial_{x_i}v_j n_j = 0.
\end{align*}
Plugging in the remaining derivatives of $f_D$, we are left with
\begin{align}\label{eq:dragGrad}
dF_D(\Omega)[\bm{V}] =\int_{\Gamma_o}&V_l n_l\left[ n_j\partial_{x_j} \left( -\mu n_k\partial_{x_k}v_i a_i+p n_k a_k \right) + \tandiv \left( -\mu \partial_{x_j} v_k a_k + p a_j \right) \right] \nonumber \\
-&V_l n_l (n_i \partial_{x_i}) v_j \mu n_k \partial_{x_k}\lambda_jd\Gamma \nonumber \\
= \int_{\Gamma_{o}}&(\bm{V}\cdot \bm{n})\left[ -\mu (\nabla_{\bm{n}})^2 \bm{v} \bm{a} + (\bm{n}\cdot\nabla)p(\bm{n}\cdot \bm{a})+\div_{\Gamma}(-\mu (\nabla \bm{v})^T \bm{a} + p\bm{a}) \right] \nonumber \\
-&(\bm{V}\cdot \bm{n})\left[ \mu (\bm{n}\cdot\nabla)\lambda_i (\bm{n}\cdot\nabla)v_i\right]d\Gamma,
\end{align}
where we have used
\begin{align*}
(\nabla_{\bm{n}})^2 \bm{v} := n_j\partial_{x_j}\left( n_k\partial_{x_k}v_i\right).
\end{align*}
The derived shape derivative can further be simplified to facilitate the derivation of the Hessian: Taking a closer look at the tangential divergence part of \eqref{eq:dragGrad}, one sees that the term inside the tangential divergence becomes
\begin{align*}
\div_{\Gamma}(-\mu \nabla \bm{v}\cdot\bm{a}) &= \div(-\mu \nabla \bm{v}\cdot\bm{a}) - \bm{n}\cdot \nabla(-\mu \nabla \bm{v}\cdot\bm{a}) \cdot\bm{n} \\
&=-\partial_{x_j}(\mu \partial_{x_j} v_k a_k )+\mu n_i\partial_{x_i}(\partial_{x_j} v_k a_k)n_j\\
&=-\mu\partial_{x_j x_j}v_k a_k+\mu n_i\partial_{x_i x_j} v_k a_k n_j.
\end{align*}
For the remaining term, we get
\begin{align*}
\div_{\Gamma}(p\bm{a}) &= \div(p\bm{a}) - (\bm{n}\cdot \nabla)(p\bm{a}) \cdot\bm{n} \\
&=\partial_{x_j} p a_j -n_k\partial_{x_k}p a_j n_j.
\end{align*}
Hence, the tangential divergence term in \eqref{eq:dragGrad} becomes
\begin{align*}
&\int_{\Gamma_{o}} (V_l n_l)\left[ -\mu\partial_{x_j x_j}v_k a_k+\mu n_i\partial_{x_i x_j} v_k a_k n_j+ \partial_{x_j} p a_j -n_k\partial_{x_k}p a_j n_j \right]d\Gamma \nonumber \\
=& \int_{\Gamma_{o}} (\bm{V}\cdot \bm{n})\left[ -\mu\Delta \bm{v} \cdot \bm{a}+\mu (\nabla_{\bm{n}})^2 \bm{v} \bm{a}+ \nabla p \cdot \bm{a} -(\bm{n}\cdot\nabla)p(\bm{n}\cdot \bm{a}) \right] d\Gamma \nonumber \\
=& \int_{\Gamma_{o}} (\bm{V}\cdot \bm{n})\left[ (-\mu\Delta \bm{v} + \nabla p) \cdot \bm{a}+\mu (\nabla_{\bm{n}})^2 \bm{v} \bm{a}-(\bm{n}\cdot\nabla)p(\bm{n}\cdot \bm{a}) \right]d\Gamma \nonumber \\
=& \int_{\Gamma_{o}} (\bm{V}\cdot \bm{n})\left[ \mu (\nabla_{\bm{n}})^2 \bm{v} \bm{a}-(\bm{n}\cdot\nabla)p(\bm{n}\cdot \bm{a}) \right]d\Gamma.
\end{align*}
Note that $-\mu\Delta \bm{v} + \nabla p$ vanishes because the state variables fulfill the Stokes equations. Now most of the terms in \eqref{eq:dragGrad} cancel, meaning that we are left with
\begin{align*}
dF_{D}(\Omega)[\bm{V}] = \int_{\Gamma_{o}} (\bm{V}\cdot \bm{n}) df_D d\Gamma
\end{align*}
where
\begin{align}\label{eq:df}
df_D:=-\sum_{i=1}^d \mu (\bm{n}\cdot\nabla)\lambda_i (\bm{n}\cdot\nabla)v_i.
\end{align}
\qed
\end{proof}
With the help of the adjoint approach, a numerically cheap calculation of the shape derivative can be ensured, as the computational costs no longer depend on the number of design parameters. This motivates a detailed description of the optimization patch $\Gamma_o$, using the nodes of the discretized surface, $\bm{x}_k$ for $k = 1,\ldots,N$, as design parameters. Having derived the shape derivative of the optimization problem \eqref{eq:optProblem}, one can iteratively approach the optimal design with a steepest-descent update. To obtain a search direction for every surface node with the help of the shape derivative, the perturbations
\begin{align}\label{eq:Vk}
\bm{V}_k(\bm{x}) := \bm{n}(\bm{x})\varphi_k(\bm{x}),
\end{align}
for $k = 1,\ldots,N$ are defined, where $\varphi_k:\Gamma_o\to\mathbb{R}$ are piecewise linear basis functions fulfilling $\varphi_k(\bm{x}_l) = 1$ if $\bm{x}_l = \bm{x}_k$ and $\varphi_k(\bm{x}_l) = 0$ otherwise.
The deformation of the $k$-{th} mesh node is now given by
\begin{align*}
\bm{x}_k^{\text{new}} &= \bm{x}_k - \bm{V}_k(\bm{x}_k) dF_{D}[\bm{V}_k(\bm{x}_k)] \\
&= \bm{x}_k - \bm{n}(\bm{x}_k)\varphi_k(\bm{x}_k)dF_{D}[\bm{n}(\bm{x}_k)\varphi_k(\bm{x}_k)] \\
&\approx \bm{x}_k - \frac{1}{2}\bm{n}(\bm{x}_k)df_D(\bm{x}_k)\left( \Vert \bm{x}_k-\bm{x}_{k-1}\Vert + \Vert \bm{x}_{k+1}-\bm{x}_{k}\Vert \right),
\end{align*}
where we used a first order quadrature rule to evaluate the integral in $dF_{D}$. Choosing an adequate step size $\gamma$ yields the steepest-descent update
\begin{align*}
\bm{x}_k^{(l+1)} = \bm{x}_k^{(l)} +\gamma p_k^{(l)} \bm{n}\left(x_k^{(l)}\right),
\end{align*}
where the steepest-descent search direction is given by
\begin{align*}
p_k^{(l)} = -df_D\left(\bm{x}_k^{(l)}\right).
\end{align*}
Alternatively, we can collect all values of the gradient evaluated at the surface points in a vector
\begin{align}\label{eq:GradientVector}
\bm{df}^{(l)} = \left( df_D\left(\bm{x}_1^{(l)}\right), \cdots, df_D\left(\bm{x}_{N}^{(l)}\right) \right)^T,
\end{align}
yielding the steepest-descent search direction
\begin{align*}
\bm{p}^{(l)} = -\bm{df}^{(l)}.
\end{align*}
As already discussed, the convergence of steepest-descent is slow. Additionally, the gradient $df_D$ is of insufficient regularity, leading to rough designs with subsequent problems in getting the flow solver to converge. To overcome this problem, we derive the Hessian of the optimization problem analytically. When given a Hessian matrix $\bm{H}\in\mathbb{R}^{N \times N}$, we can choose the Newton search direction
\begin{align}\label{eq:NewtonDirection}
\bm{p}^{(l)} = -\bm{H}^{-1} \bm{df}^{(l)}.
\end{align}
The derivation of the Hessian will show that the inverse Hessian will have properties of a smoothing method, which is why we can combine the tasks of accelerating the optimization and smoothing the search direction.
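To make the nodal update concrete, the following minimal Python sketch illustrates how the gradient values $df_D$ and a steepest-descent step could be assembled from given surface data; it is only an illustration under stated assumptions and not part of the SU2-based implementation used later. The arrays \texttt{dv\_dn} and \texttt{dl\_dn} (nodal normal derivatives of $\bm{v}$ and $\bm{\lambda}$) are hypothetical inputs that would have to be provided by the flow and adjoint solvers.
\begin{verbatim}
import numpy as np

def steepest_descent_step(x, normals, dv_dn, dl_dn, mu, gamma):
    """One steepest-descent update of the surface nodes (sketch only).

    x       : (N, d) coordinates of the nodes of Gamma_o (closed surface),
    normals : (N, d) unit outward normals at the nodes,
    dv_dn   : (N, d) nodal values of (n . grad) v_i,
    dl_dn   : (N, d) nodal values of (n . grad) lambda_i,
    mu      : dynamic viscosity, gamma : step size.
    """
    # Nodal gradient df_D = -mu * sum_i (n.grad)lambda_i (n.grad)v_i.
    df = -mu * np.sum(dl_dn * dv_dn, axis=1)
    # First-order quadrature weight: half the length of the two adjacent
    # edges (indices wrap around since the surface is closed).
    edge = np.linalg.norm(x - np.roll(x, 1, axis=0), axis=1)
    w = 0.5 * (edge + np.roll(edge, -1))
    # Steepest-descent search direction p = -df, applied in normal direction.
    p = -df
    return x + gamma * (w * p)[:, None] * normals
\end{verbatim}
The weights $w$ correspond to the first-order quadrature rule used above; in the plain update $\bm{x}_k^{(l+1)} = \bm{x}_k^{(l)} +\gamma p_k^{(l)} \bm{n}$ they are absorbed into the step size $\gamma$.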
\section{The analytic Hessian symbol}
\label{sec:section3}
In order to accelerate the optimization process, we wish to make use of Hessian information, or to be more precise, the symbol of the Hessian. This derivation uses the techniques introduced in \cite{arian1995analysis,arian1999analysis,arian1999preconditioning,schmidt2009impulse}. In contrast to previous works, our analysis holds for smooth geometries beyond the typical upper half-plane, allowing the derivation of the Hessian symbol in applications of practical interest. For a given operator $L$, its symbol $\sigma_L$ is the response of $L$ to a wave with a fixed frequency $\omega$. To give a brief understanding of operator symbols, we look at the following example:
\begin{example}
We derive the symbol of the operator
\begin{align*}
Lg := \left(1-\frac{d^2}{dx^2}\right)g.
\end{align*}
To derive the response of $L$ to an input wave, we choose $g = e^{-i\omega x}$, which yields
\begin{align*}
L e^{-i\omega x} = \left(1+\omega^2\right)e^{-i\omega x}.
\end{align*}
The symbol is therefore given by $\sigma_L = 1+\omega^2$. It can be seen that the operator $L$ amplifies frequencies quadratically with respect to the input frequency $\omega$. The fact that $\sigma_L$ is a real number tells us that the operator does not cause a phase shift.
\end{example}
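This example can also be checked numerically. The short Python sketch below (an illustration only, assuming a periodic grid, an integer test frequency and a spectral second derivative) applies $L$ to a sampled wave and recovers the symbol $1+\omega^2$ as the pointwise amplification factor.
\begin{verbatim}
import numpy as np

n, omega = 256, 5.0                      # grid size and (integer) test frequency
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
g = np.exp(-1j * omega * x)              # input wave g = exp(-i omega x)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])    # angular wave numbers
d2g = np.fft.ifft(-(k ** 2) * np.fft.fft(g))           # spectral d^2 g / dx^2
sigma = (g - d2g) / g                    # L g = (1 - d^2/dx^2) g, divided by g
print(np.allclose(sigma, 1.0 + omega ** 2))            # True: sigma_L = 1 + omega^2
\end{verbatim}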
Our aim is to derive the Hessian symbol $\sigma_H$ for the drag minimization problem when using the Stokes equations. For this, the response of the Hessian to a Fourier mode with frequency $\omega$, which is used to perturb the optimization patch $\Gamma_{o}$, is investigated analytically. We assume a two-dimensional geometry, which can be described by body fitted coordinates $\xi_1$ and $\xi_2$. A mapping to the physical coordinates is given by
\begin{align*}
\Phi\left(\xi_1,\xi_2\right) = \bm{x}.
\end{align*}
The physical coordinates of the optimization patch are
\begin{align*}
\Gamma_o(\xi_1) = \Phi\left(\xi_1,0\right),
\end{align*}
meaning that $\xi_1$ is the parameter describing the position on the optimization patch. We choose the parametrization such that
\begin{align*}
\left\Vert \frac{d}{d \xi_1}\Gamma_o(\xi_1)\right\Vert = 1,
\end{align*}
i.e., the tangential vector $\bm{t}$ has unit length. If the remaining parameter $\xi_2$ is used as a parametrization into the normal direction $\bm{n}$, we can write our mapping as
\begin{align*}
\Phi\left(\xi_1,\xi_2\right) = \Gamma_o(\xi_1)+\xi_2\bm{n}( \Gamma_o(\xi_1) ).
\end{align*}
In this setting, we derive the symbol of the Hessian in the following theorem.
\begin{theorem}
The symbol of the Hessian for the Stokes equations is given by
\begin{align}\label{eq:HessianSymbol}
\sigma_H = \beta_1 + \beta_2 \omega
\end{align}
where
\begin{align}\label{eq:beta1}
\beta_1 = \mu (\bm{n}\cdot\nabla) \left(\sum_{k = 1}^2 (\bm{n}\cdot\nabla)v_k (\bm{n}\cdot\nabla)\lambda_k \right)
\end{align}
and
\begin{equation}\label{eq:beta2}
\beta_2 = - 2\mu \sum_{k = 1}^2 (\bm{n}\cdot\nabla) \lambda_k (\bm{n}\cdot\nabla) v_k .
\end{equation}
\end{theorem}
\begin{proof}
The Hessian of our problem is the response of the gradient $df_D$ given in \eqref{eq:df} to a perturbation of the design space, which we call $\alpha$. The response of a function $g$ due to a perturbation $\alpha$ is denoted as
\begin{align*}
g'[\alpha] = \lim_{\epsilon\rightarrow 0} \frac{g(\Gamma_o^\epsilon)- g(\Gamma_o)}{\epsilon},
\end{align*}
where the perturbed surface is given by
\begin{align*}
\Gamma_o^\epsilon := \Gamma_o + \epsilon \alpha\bm{n}.
\end{align*}
The response of the gradient $df_D$ to such a perturbation is now given by
\begin{equation}\label{eq:gradientPert}
df_D'[\alpha] = -\mu n_k\partial_{x_k}\lambda_i'[\alpha] n_l\partial_{x_l}v_i-\mu n_k\partial_{x_k}\lambda_i n_l\partial_{x_l}v_i'[\alpha].
\end{equation}
The change of the state variables as well as the adjoint variables due to a small perturbation $\alpha$ in the normal direction can be computed from the linearized primal and adjoint state equations, which are
\begin{align*}
-\mu \Delta \bm{v}'[\alpha] + \nabla p'[\alpha] &= 0,\\
\nabla \cdot \bm{v}'[\alpha] &= 0,\\
\bm{v}'[\alpha] = -(\bm{n}\cdot\nabla)\bm{v} \alpha \text{ on } &\Gamma_{o},
\end{align*}
and
\begin{align*}
-\mu \Delta \bm{\lambda}'[\alpha] - \nabla \lambda_p'[\alpha] &= 0,\\
\nabla \cdot \bm{\lambda}'[\alpha] &= 0,\\
\bm{\lambda}'[\alpha] = -(\bm{n}\cdot\nabla)\bm{\lambda}\alpha \text{ on } &\Gamma_{o}.
\end{align*}
Transforming these equations into body fitted coordinates $(\xi_1,\xi_2)$ leads to
\begin{subequations}\label{eq:primalPert}
\begin{align}
-\mu \frac{\partial \xi_l}{\partial x_i}\frac{\partial^2 v_j'[\alpha]}{\partial \xi_k \xi_l}\frac{\partial \xi_k}{\partial x_i} -\mu \frac{\partial v_j'}{\partial\xi_k}\frac{\partial^2 \xi_k}{\partial x_i^2} + \frac{\partial p'[\alpha]}{\partial \xi_k}\frac{\partial \xi_k}{\partial x_j} &= 0,\\
\frac{\partial v_i'[\alpha]}{\partial \xi_k}\frac{\partial \xi_k}{\partial x_i} &= 0,\\
v_j'[\alpha] = -n_k \frac{\partial v_j}{\partial \xi_i}\frac{\partial \xi_i}{\partial x_k} \alpha \text{ on } &\Gamma_{o},
\end{align}
\end{subequations}
and
\begin{subequations}\label{eq:adjointPert}
\begin{align}
-\mu \frac{\partial \xi_l}{\partial x_i}\frac{\partial^2 \lambda_j'[\alpha]}{\partial \xi_k \xi_l}\frac{\partial \xi_k}{\partial x_i}-\mu\frac{\partial \lambda_j'}{\partial\xi_k}\frac{\partial^2 \xi_k}{\partial x_i^2} - \frac{\partial \lambda_p'[\alpha]}{\partial \xi_k}\frac{\partial \xi_k}{\partial x_j} &= 0,\\
\frac{\partial \lambda_i'[\alpha]}{\partial \xi_k}\frac{\partial \xi_k}{\partial x_i} &= 0,\\
\lambda_j'[\alpha] = -n_k \frac{\partial \lambda_j}{\partial \xi_i}\frac{\partial \xi_i}{\partial x_k} \alpha \text{ on } &\Gamma_{o}.
\end{align}
\end{subequations}
As our goal is to determine the Hessian response to a Fourier mode, we let $\alpha$ be a mode with frequency $\omega_1$, meaning that we have
\begin{align*}
\alpha = e^{i \omega_1 \xi_1}.
\end{align*}
Furthermore, we make the assumption that the perturbed states have the form
\begin{align}\label{eq:assumptionPerturbed}
\bm{v}'[\alpha] &= \hat{\bm{v}} e^{i \omega_1 \xi_1} e^{i \omega_2^{p} \xi_2},\enskip
p'[\alpha] = \hat{p} e^{i \omega_1 \xi_1} e^{i \omega_2^{p} \xi_2},\nonumber \\
\bm{\lambda}'[\alpha] &= \bm{\hat{\lambda}} e^{i \omega_1 \xi_1} e^{i \omega_2^{a} \xi_2},\enskip
\lambda_p'[\alpha] = \hat{\lambda}_p e^{i \omega_1 \xi_1} e^{i \omega_2^{a} \xi_2}.
\end{align}
It is important to note that the choice of the dependency on $\xi_1$ is straightforward, since we would like to match the boundary conditions of the perturbed state variables for $\xi_2 = 0$. The complex exponential or wave-like dependency in $\xi_2$ direction is a Fourier ansatz. There are two unknowns that need to be determined, namely the amplitudes, which are the $\hat{\bullet}$ variables, as well as the response frequencies $\omega_2^{p,a}$. Our first goal is to determine these response frequencies $\omega_2^{p}$ and $\omega_2^{a}$. For this, we insert our ansatz for the perturbed states into the linearized state equations \eqref{eq:primalPert} and \eqref{eq:adjointPert}. For $\omega_2 = \omega_2^{p}$ we now must fulfill
\begin{align*}
\begin{pmatrix}
\mu \omega_l\omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j} -i\omega_k \mu\frac{\partial^2 \xi_k}{\partial x_j^2}& 0 & i \omega_k \frac{\partial \xi_k}{\partial x_1} \\
0 & \mu \omega_l \omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-i\omega_k\mu \frac{\partial^2 \xi_k}{\partial x_j^2} & i \omega_k \frac{\partial \xi_k}{\partial x_2} \\
i \omega_k \frac{\partial \xi_k}{\partial x_1} & i \omega_k \frac{\partial \xi_k}{\partial x_2} & 0 \\
\end{pmatrix}
\begin{pmatrix}
\hat{v}_1 \\
\hat{v}_2 \\
\hat{p} \\
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0 \\
0 \\
\end{pmatrix}
\end{align*}
as well as for $\omega_2 = \omega_2^{a}$
\begin{align*}
\begin{pmatrix}
\mu \omega_l\omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-i\omega_k \mu\frac{\partial^2 \xi_k}{\partial x_j^2}& 0 & -i \omega_k \frac{\partial \xi_k}{\partial x_1} \\
0 & \mu \omega_l \omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-i\omega_k \mu\frac{\partial^2 \xi_k}{\partial x_j^2} & -i \omega_k \frac{\partial \xi_k}{\partial x_2} \\
i \omega_k \frac{\partial \xi_k}{\partial x_1} & i \omega_k \frac{\partial \xi_k}{\partial x_2} & 0 \\
\end{pmatrix}
\begin{pmatrix}
\hat{\lambda}_1 \\
\hat{\lambda}_2 \\
\hat{\lambda}_p \\
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0 \\
0 \\
\end{pmatrix}.
\end{align*}
These two systems of equations only have a non-trivial solution if the determinant of the two matrices is zero. Note that these two matrices only differ in the sign of the last column, leading to determinants which have the same roots. Therefore, every non-trivial response frequency of the primal system is also a valid response frequency of the adjoint system. Hence, we denote $\omega_2^{p}$ and $\omega_2^{a}$ as $\omega_2$, which leads to the determinant
\begin{align*}
&\left(\sum_{l,k,j}\mu \omega_l\omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-\sum_{k,j}i\omega_k\mu \frac{\partial^2 \xi_k}{\partial x_j^2}\right)\left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_2} \sum_{k} \omega_k \frac{\partial \xi_k}{\partial x_2}\right)\\
&+\left(\sum_k\omega_k \frac{\partial \xi_k}{\partial x_1}\right)\left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_1} \left(\sum_{l,k,j} \mu \omega_l \omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-\sum_{k,j}i\omega_k\mu \frac{\partial^2 \xi_k}{\partial x_j^2}\right) \right) \\
=&\left(\sum_{l,k,j}\mu \omega_l\omega_k\frac{\partial \xi_l}{\partial x_j}\frac{\partial \xi_k}{\partial x_j}-\sum_{k,j}i\omega_k \mu\frac{\partial^2 \xi_k}{\partial x_j^2}\right)\left[ \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_1}\right)^2 + \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_2}\right)^2 \right] \\
=&\mu\left(\left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_1}\right)^2 + \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_2}\right)^2-\sum_{k,j}i\omega_k \frac{\partial^2 \xi_k}{\partial x_j^2}\right)\left[ \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_1}\right)^2 + \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_2}\right)^2 \right]\stackrel{!}{=} 0.
\end{align*}
Here, we no longer use Einstein's sum convention such that we can reuse indices. Let us determine the roots of the polynomial inside the square brackets to obtain a valid response frequency $\omega_2$. The determinant will be zero if $\omega_2$ fulfills
\begin{align*}
&\left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_1}\right)^2 + \left( \sum_{k}\omega_k \frac{\partial \xi_k}{\partial x_2}\right)^2 = 0\\
\Leftrightarrow& \omega_1^2\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+2\omega_1\omega_2\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\omega_2^2\left(\frac{\partial \xi_2}{\partial x_2}\right)^2 +\omega_1^2\left(\frac{\partial \xi_1}{\partial x_1}\right)^2+2\omega_1\omega_2\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}+\omega_2^2\left(\frac{\partial \xi_2}{\partial x_1}\right)^2 = 0 \\
\Leftrightarrow&\omega_2^2\left(\left(\frac{\partial \xi_2}{\partial x_2}\right)^2+\left(\frac{\partial \xi_2}{\partial x_1}\right)^2\right)+\omega_2\left[ 2\omega_1\left(\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}\right)\right] + \omega_1^2\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\omega_1^2\left(\frac{\partial \xi_1}{\partial x_1}\right)^2 =0.
\end{align*}
For simplicity, we define
\begin{align*}
c:=\frac{1}{\left(\frac{\partial \xi_2}{\partial x_2}\right)^2+\left(\frac{\partial \xi_2}{\partial x_1}\right)^2}.
\end{align*}
Applying the $p,q-$formula with
\begin{align*}
p &= 2c\omega_1\left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} \right) \\
q &= c \omega_1^2\left[\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\right]
\end{align*}
yields
\begin{align*}
\omega_2^{1,2} =& -c\omega_1\left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} \right) \pm \omega_1 \sqrt{ c^2 \left(\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} \right)^2- c\left[\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\right] } \\
=& -c\omega_1 \left[ \left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} \right) \mp \sqrt{ \left(\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}\right)^2 - \frac{1}{c}\left[\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\right] }\right].
\end{align*}
Let us take a closer look at the term inside the square root, which is
\begin{align*}
&\left(\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}\right)^2 - \frac{1}{c}\left[\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\right]\\
=&\left(\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}\right)^2+2\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}+\left(\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1}\right)^2 - \left(\left(\frac{\partial \xi_1}{\partial x_2}\right)^2+\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\right)\left(\left(\frac{\partial \xi_2}{\partial x_2}\right)^2+\left(\frac{\partial \xi_2}{\partial x_1}\right)^2\right)\\
=&2\frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} -\left(\frac{\partial \xi_1}{\partial x_2}\right)^2\left(\frac{\partial \xi_2}{\partial x_1}\right)^2-\left(\frac{\partial \xi_1}{\partial x_1}\right)^2\left(\frac{\partial \xi_2}{\partial x_2}\right)^2 \\
=&-\left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_1}-\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_2} \right)^2.
\end{align*}
Since this term is always negative, we know that the square root will result in a complex term, leading to
\begin{equation}\label{eq:omega1234}
\omega_2^{1,2} =-c\omega_1 \left[ \left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_2}+\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_1} \right) \mp i\left( \frac{\partial \xi_1}{\partial x_2}\frac{\partial \xi_2}{\partial x_1}-\frac{\partial \xi_1}{\partial x_1}\frac{\partial \xi_2}{\partial x_2} \right)\right].
\end{equation}
This means the system can be solved for $\omega_2^{1,2}(\omega_1)$. We express the derivatives of the body fitted coordinates as derivatives of the physical coordinates, which have an intuitive geometric meaning on the boundary, since
\begin{align*}
&\left(\frac{\partial x_1}{\partial\xi_1},\frac{\partial x_2}{\partial\xi_1}\right)^T_{\xi_2 = 0} = \frac{d}{d \xi_1}\Gamma_o(\xi_1) = \bm{t}(\xi_1), \\
&\left(\frac{\partial x_1}{\partial\xi_2},\frac{\partial x_2}{\partial\xi_2}\right)^T_{\xi_2 = 0} = \bm{n}\left(\Gamma_o(\xi_1)\right).
\end{align*}
The relation between the derivatives of the two coordinate systems can be determined by noting that
\begin{align*}
\begin{pmatrix}
\frac{\partial }{\partial \xi_1} \\
\frac{\partial }{\partial \xi_2} \\
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x_1}{\partial \xi_1}& \frac{\partial x_2}{\partial \xi_1} \\
\frac{\partial x_1}{\partial \xi_2} & \frac{\partial x_2}{\partial \xi_2} \\
\end{pmatrix}
\begin{pmatrix}
\frac{\partial }{\partial x_1} \\
\frac{\partial }{\partial x_2} \\
\end{pmatrix}
\end{align*}
and
\begin{align*}
\begin{pmatrix}
\frac{\partial }{\partial x_1} \\
\frac{\partial }{\partial x_2} \\
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial \xi_1}{\partial x_1}& \frac{\partial \xi_2}{\partial x_1} \\
\frac{\partial \xi_1}{\partial x_2} & \frac{\partial \xi_2}{\partial x_2} \\
\end{pmatrix}
\begin{pmatrix}
\frac{\partial }{\partial \xi_1} \\
\frac{\partial }{\partial \xi_2} \\
\end{pmatrix}
.
\end{align*}
This means that
\begin{align*}
\begin{pmatrix}
\frac{\partial \xi_1}{\partial x_1}& \frac{\partial \xi_2}{\partial x_1} \\
\frac{\partial \xi_1}{\partial x_2} & \frac{\partial \xi_2}{\partial x_2} \\
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x_1}{\partial \xi_1}& \frac{\partial x_2}{\partial \xi_1} \\
\frac{\partial x_1}{\partial \xi_2} & \frac{\partial x_2}{\partial \xi_2} \\
\end{pmatrix}^{-1}
=
\frac{1}{\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}}
\begin{pmatrix}
\frac{\partial x_2}{\partial \xi_2}& -\frac{\partial x_2}{\partial \xi_1} \\
-\frac{\partial x_1}{\partial \xi_2} & \frac{\partial x_1}{\partial \xi_1} \\
\end{pmatrix}
.
\end{align*}
The response frequency \eqref{eq:omega1234} can now be evaluated on the boundary:
\begin{align*}
\left.\omega_2^{1,2}\right\vert_{\Gamma_o} =&\left.\mp \frac{c\omega_1}{\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}} \left[ -\frac{\partial x_1}{\partial \xi_2}\frac{\partial x_1}{\partial \xi_1}-\frac{\partial x_2}{\partial \xi_2}\frac{\partial x_2}{\partial \xi_1}
\mp i\left( \frac{\partial x_1}{\partial \xi_2}\frac{\partial x_2}{\partial \xi_1}-\frac{\partial x_2}{\partial \xi_2}\frac{\partial x_1}{\partial \xi_1} \right)\right]\right\vert_{\Gamma_o} \\
=&\left.\mp \frac{c\omega_1}{\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}} \left[ -\bm{n}\cdot\bm{t} \mp i\left( \frac{\partial x_1}{\partial \xi_2}\frac{\partial x_2}{\partial \xi_1}-\frac{\partial x_2}{\partial \xi_2}\frac{\partial x_1}{\partial \xi_1} \right)\right]\right\vert_{\Gamma_o} \\
=& \pm i \left.c\right\vert_{\Gamma_o}\omega_1.
\end{align*}
For $c$ we obtain
\begin{align*}
c\vert_{\Gamma_o} =& \frac{1}{\left(\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}\right)^2}\frac{1}{\left(\frac{\partial x_1}{\partial \xi_1}\right)^2+\left(\frac{\partial x_2}{\partial \xi_1}\right)^2}=\frac{1}{\Vert \bm{t} \Vert^6 } = 1,
\end{align*}
due to the fact that $\bm{\hat{t}}:=\left(\partial_{\xi_2}x_2,-\partial_{\xi_2}x_1\right)^T$ is either $\bm{t}$ or $-\bm{t}$, since
\begin{align*}
\bm{\hat{t}}\cdot \bm{n} = 0, \enskip \left\Vert \bm{\hat{t}} \right\Vert = 1.
\end{align*}
We now have two possible choices for $\omega_2^{p}$ and $\omega_2^{a}$, which will allow a non-trivial solution, namely
\begin{equation}\label{eq:omega12OnGamma}
\left.\omega_2^{1,2}\right\vert_{\Gamma_o} = \pm i \omega_1.
\end{equation}
Inserting the expression for $\omega_2$, which we have derived in \eqref{eq:omega1234}, into the assumption for the perturbed state variables \eqref{eq:assumptionPerturbed}, we get
\begin{align*}
\bm{v}'[\alpha] &= \hat{\bm{v}} e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2},\enskip
p'[\alpha] = \hat{p} e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2},\\
\bm{\lambda}'[\alpha] &= \bm{\hat{\lambda}} e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2},\enskip
\lambda_p'[\alpha] = \hat{\lambda}_p e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2}.
\end{align*}
The remaining unknowns in our ansatz for the perturbed primal and adjoint states are the $\hat{\bullet}$ variables, which can be determined with the help of the boundary conditions. Remember that we are only interested in knowing the perturbed states which influence the perturbation of the gradient \eqref{eq:gradientPert}, namely $\bm{v}'[\alpha]$ and $\bm{\lambda}'[\alpha]$. The boundary conditions contain normal derivatives of those states, which is why we first write down the normal derivatives for boundary fitted coordinates. We have
\begin{align*}
n_k \frac{\partial W_j}{\partial \xi_i}\frac{\partial \xi_i}{\partial x_k} =& \frac{1}{\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}}\left( \left( \frac{\partial x_2}{\partial \xi_2}, -\frac{\partial x_1}{\partial \xi_2}\right)^T \cdot\bm{n} \frac{\partial W_j}{\partial \xi_1} +\left( -\frac{\partial x_2}{\partial \xi_1}, \frac{\partial x_1}{\partial \xi_1}\right)^T \cdot\bm{n} \frac{\partial W_j}{\partial \xi_2} \right)\\
=& \frac{1}{\frac{\partial x_1}{\partial \xi_1}\frac{\partial x_2}{\partial \xi_2}-\frac{\partial x_2}{\partial \xi_1}\frac{\partial x_1}{\partial \xi_2}}\left( \left( n_2, -n_1\right)^T \cdot\bm{n} \frac{\partial W_j}{\partial \xi_1} +\left( -\frac{\partial x_2}{\partial \xi_1}, \frac{\partial x_1}{\partial \xi_1}\right)^T \cdot\left( \frac{\partial x_1}{\partial \xi_2}, \frac{\partial x_2}{\partial \xi_2}\right)^T \frac{\partial W_j}{\partial \xi_2} \right)\\
=&\frac{\partial W_j}{\partial \xi_2}.
\end{align*}
Plugging this expression into the boundary condition of the perturbed state variables given in \eqref{eq:primalPert} and \eqref{eq:adjointPert} leads to
\begin{align*}
v_j'[\alpha] &= \hat{v}_j e^{i \omega_1 \xi_1} = - \frac{\partial v_j}{\partial \xi_2}\alpha,\\
\lambda_j'[\alpha] &= \hat{\lambda}_j e^{i \omega_1 \xi_1} =- \frac{\partial \lambda_j}{\partial \xi_2} \alpha,
\end{align*}
meaning that
\begin{align*}
\hat{\bm{v}} &= - \partial_{\xi_2}\bm{v},\enskip \hat{\bm{\lambda}} = - \partial_{\xi_2}\bm{\lambda}.
\end{align*}
If we now write down the ansatz for the perturbed state variables, we get
\begin{align*}
\bm{v}'[\alpha] = - \partial_{\xi_2}\bm{v} e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2},\enskip \bm{\lambda}'[\alpha] = - \partial_{\xi_2}\bm{\lambda} e^{i \omega_1 \xi_1} e^{i\omega_2^{1,2}(\omega_1)\xi_2}.
\end{align*}
Let us now use this solution to calculate the unknown terms in the perturbed gradient \eqref{eq:gradientPert} on the boundary $\Gamma_o$, meaning that $\xi_2=0$. The perturbed gradient for boundary fitted coordinates is given by
\begin{align*}
df_D'[\alpha] = -\mu \partial_{\xi_2}\lambda_k'[\alpha] \partial_{\xi_2}v_k-\mu \partial_{\xi_2}\lambda_k \partial_{\xi_2}v_k'[\alpha].
\end{align*}
On the boundary, where $\xi_2=0$, we have that
\begin{align*}
\partial_{\xi_2}v_k'[\alpha] = - \left(\partial_{\xi_2 \xi_2}v_k+\partial_{\xi_2}v_k i\left.\omega_2^{1,2}\right\vert_{\Gamma_o}\right) e^{i \omega_1 \xi_1} = - (\partial_{\xi_2 \xi_2}v_k\mp\partial_{\xi_2}v_k\omega_1) e^{i \omega_1 \xi_1}, \\
\partial_{\xi_2}\lambda_k'[\alpha] = - \left(\partial_{\xi_2 \xi_2}\lambda_k+\partial_{\xi_2}\lambda_k i\left.\omega_2^{1,2}\right\vert_{\Gamma_o}\right) e^{i \omega_1 \xi_1} = - (\partial_{\xi_2 \xi_2}\lambda_k\mp\partial_{\xi_2}\lambda_k\omega_1) e^{i \omega_1 \xi_1},
\end{align*}
where we used the expression for $\omega_2^{1,2}$ on the boundary, which was given in \eqref{eq:omega12OnGamma}. If we now plug this into the perturbed gradient and assume that $\omega_2^p$ and $\omega_2^a$ have the same sign, we get
\begin{align*}
df_D'[\alpha] = \mu \left[ \partial_{\xi_2}v_k \partial_{\xi_2 \xi_2}\lambda_k + \partial_{\xi_2}\lambda_k \partial_{\xi_2 \xi_2}v_k \mp2\omega_1(\partial_{\xi_2} \lambda_k \partial_{\xi_2} v_k )\right] \alpha.
\end{align*}
Hence, we have that the Hessian response to a Fourier mode with frequency $\omega_1$ is
\begin{align*}
H[\alpha]:= df_D'[\alpha] = (\beta_1 + \beta_2 \omega_1 ) \alpha.
\end{align*}
If we transform $\beta_1$ and $\beta_2$ back into physical coordinates, we get
\begin{align*}
\beta_1 =& \mu \sum_{k = 1}^2 \partial_{\xi_2}v_k \partial_{\xi_2 \xi_2}\lambda_k + \partial_{\xi_2}\lambda_k \partial_{\xi_2 \xi_2}v_k \nonumber \\
=& \mu (\bm{n}\cdot\nabla) \left(\sum_{k = 1}^2 (\bm{n}\cdot\nabla)v_k (\bm{n}\cdot\nabla)\lambda_k \right)
\end{align*}
and
\begin{align*}
\beta_2 = \mp 2\mu \sum_{k = 1}^2 (\bm{n}\cdot\nabla) \lambda_k (\bm{n}\cdot\nabla) v_k .
\end{align*}
Note that if we use $\omega_2 = i\omega_1$, the perturbation of the state variables will go to zero for $\xi_2\rightarrow\infty$. This solution is plausible, as a perturbation of the surface should not change the flow solution far away from the obstacle. Therefore, we choose $\omega_2 = i\omega_1$, i.e., the upper sign, which yields the negative sign of $\beta_2$ stated in \eqref{eq:beta2}. \qed
\end{proof}
Before constructing a preconditioner with the derived Hessian information, let us take a closer look at several interesting properties of the problem. With the help of the symbol, we can see that a Newton-like preconditioner will be important when trying to solve the optimization problem efficiently: If we follow \cite{ta1995trends} and interpret the symbol of the Hessian as an approximation of the eigenvalues, we see that the eigenvalues grow linearly in the frequency with slope $\beta_2$. Because a fine discretization allows high as well as low frequencies in the design space, we obtain both small and large eigenvalues, leading to an ill-conditioned Hessian. Consequently, steepest-descent methods will suffer from poor convergence rates, see \cite[Chapter~3.3]{wright1999numerical}. Furthermore, the Hessian symbol reveals the following properties:
\begin{enumerate}[I]
\item \label{itm:phase} $H[\alpha]$ is a wave with the same phase and frequency $\omega$ as $\alpha$.
\item \label{itm:linScaling} as the frequency of $\alpha$ increases, the amplitudes of $H[\alpha]$ increase linearly (linear scaling).
\item \label{itm:beta1beta2Scaling} the scaling consists of a constant part $\beta_1$ given by \eqref{eq:beta1} and a linear part $\beta_2$, which can be calculated according to \eqref{eq:beta2}.
\item \label{itm:nonConstantScaling} the amplitudes of $H[\alpha]$ vary along $\xi_1$, as $\beta_1(\xi_1)$ and $\beta_2(\xi_1)$ are non-constant functions.
\item the inverse of the Hessian will damp frequencies by a factor of
\begin{align*}
\sigma_{H^{-1}} = \frac{1}{\beta_1 + \beta_2 \omega},
\end{align*}
meaning that the inverse Hessian, which we wish to use as a preconditioner, has smoothing behavior.
\end{enumerate}
Before turning to the construction of a preconditioner, we investigate the applicability of the analytic results for convective flows. \\
\section{The discrete Hessian symbol}
\tikzstyle{block} = [rectangle,draw,minimum width=8em,align=center,rounded corners, minimum height=2em,scale=1.0]
\tikzstyle{connect} = [draw,-latex']
\label{sec:section4}
In the following, we wish to numerically reproduce the analytically derived symbol to test the applicability of the analytic Hessian behavior in the case of convective terms. The calculations are carried out with the SU2 flow solver, which incorporates an optimization framework. Information on the SU2 solver as well as test cases can be found, for example, in \cite{palacios2013stanford}.\\
We look at a cylinder as described in section \ref{sec:section6} placed inside a flow with a Reynolds number of $1$ as well as $80$. For our configuration, these choices of the Reynolds number are reasonable, since a higher Reynolds number will result in an unsteady von K\'{a}rm\'{a}n vortex street, meaning that the derivation of our optimization framework no longer holds. The task is to change the shape of the cylinder such that the drag is minimized. Hence, the optimization patch $\Gamma_o$ is the surface of the cylinder. Due to the fact that we do not want to focus on the optimization, but on the Hessian approximation and especially its response to certain Fourier modes, we first think of possibilities to numerically determine the Hessian matrix. One way to do so is by finite differences. If the perturbed optimization patch is given by
\begin{align}\label{eq:pertSurf}
\Gamma_o^\epsilon(\xi_1) := \Gamma_o(\xi_1) + \epsilon \alpha(\xi_1)\bm{n}(\xi_1),
\end{align}
the shape Hessian in direction $\alpha$ is given by
\begin{align*}
H[\alpha] = \lim_{\epsilon\rightarrow 0} \frac{df_D(\Gamma_o^\epsilon)- df_D(\Gamma_o)}{\epsilon},
\end{align*}
where $df_D(\Gamma_o^\epsilon)$ is the gradient evaluated for the flow around the perturbed optimization patch \eqref{eq:pertSurf}. The dependency on the direction $\bm{V}_k$ as defined in \eqref{eq:Vk} has been omitted for better readability. The Hessian can now be approximated with finite differences, i.e., instead of taking the limit, we choose a small value for $\epsilon$, yielding
\begin{align} \label{eq:diffH}
H^{FD}[\alpha] := \frac{df_D(\Gamma_o^\epsilon)- df_D(\Gamma_o)}{\epsilon}.
\end{align}
We expect the numerical results to coincide with the analytic derivation, which is why we wish to recover the Hessian properties \ref{itm:phase} to \ref{itm:nonConstantScaling}.
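A minimal Python sketch of this finite-difference test is given below. The function \texttt{eval\_gradient} is a hypothetical callable that hides the primal and adjoint solves (in our setting this role is played by SU2) and returns the nodal gradient $df_D$ for a given surface; all argument names are placeholders.
\begin{verbatim}
import numpy as np

def hessian_response_fd(gamma_o, normals, alpha, eval_gradient, eps=1e-6):
    """Finite-difference Hessian response H^FD[alpha] on the surface nodes.

    gamma_o : (N, 2) coordinates of the nodes of the optimization patch,
    normals : (N, 2) unit normals at the nodes,
    alpha   : (N,) nodal values of the perturbation, e.g. cos(omega * xi_1),
    eval_gradient : hypothetical callable returning the nodal gradient df_D
                    for a given surface (it wraps the flow and adjoint solver).
    """
    df_base = eval_gradient(gamma_o)
    gamma_pert = gamma_o + eps * alpha[:, None] * normals   # perturbed patch
    df_pert = eval_gradient(gamma_pert)
    return (df_pert - df_base) / eps
\end{verbatim}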
\subsection{Flow case 1: Re = 1}
\begin{figure}[htbp!]
\begin{center}
\begin{tikzpicture}[node distance = 3.5cm,auto,font=\sffamily]
\node[block](id1){choose $\omega^*$};
\node[block, right of = id1](id2){choose initial design $\Omega$};
\node[block, below = 0.5cm of id1, fill=white](id3){perturb mesh for $\omega^*$\\ according to \eqref{eq:pertSurf}};
\node[block, below = 0.5cm of id3, fill=white](id4){calculate gradient \\ for perturbed $\Omega$};
\node[block, right of = id4, fill=white](id5){calculate gradient\\ for base $\Omega$};
\node[block, below left = 0.5cm and -1.5cm of id5,fill=white](id6){$H^{FD}$ for $\omega^*$ with \eqref{eq:diffH}};
\node[block, below = 0.5cm of id6](id7){Figure \ref{fig:inout}};
\begin{scope}[on background layer]
\node[fit =(id3)(id4)(id5)(id6), fill = blue!20, draw](box){};
\end{scope}
\node[below right = 0.1cm and -1.7cm of box]{\color{blue!60}{Discrete Hessian Calculation}};
\path[connect] (id1) -- (id3);
\path[connect] (id2.south) + (-0.8,0) |- (id3.east);
\path[connect] (id2) -- (id5);
\path[connect] (id3) -- (id4);
\path[connect] (id4) -- (id6);
\path[connect] (id5) -- (id6);
\path[connect] (id6) -- (id7);
\end{tikzpicture}
\end{center}
\caption{Scheme for computing the Hessian for specified frequency and design.}
\label{tz:DiscreteHessianResponse}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=1.0\linewidth]{inputoutput60Re1.png}
\caption{Scaled input $\alpha$ and resulting discrete Hessian $H^{FD}$ on the cylinder's surface.}
\label{fig:inout}
\end{figure}
Let us start with a Reynolds number of $1$. The discrete Hessian response is calculated according to Figure \ref{tz:DiscreteHessianResponse} for $\omega^{*}=60$ and a two-dimensional cylinder as initial design. The resulting discrete shape Hessian can be seen in Figure \ref{fig:inout}. One can see that the Hessian structure coincides with the analytic results to the extent that the output will have the same phase and frequency as the input (property \ref{itm:phase}). Furthermore, we see that the Hessian will modify the amplitude of $\alpha$, which varies along the optimization patch. This behavior can also be deduced from the analytic derivation, as the non-constant derivatives of the primal and adjoint states affect the parameters $\beta_1$ and $\beta_2$ (property \ref{itm:nonConstantScaling}).
\begin{figure}[htbp!]
\begin{center}
\begin{tikzpicture}[node distance = 3.5cm,auto,font=\sffamily]
\node[block](id0){choose $\xi_1^*$};
\node[block, below = 0.5cm of id0, fill=white](id1){choose $\omega_n$ for $n = 1,\cdots,N$ \\ s.t. $\sin(\omega_n\xi_1^*+s_n)\stackrel{!}{=}1$};
\node[block, right = 0.5cm of id0](id2){choose initial design $\Omega$};
\node[minimum height=2, below right = 0.5cm and -1.5cm of id1, fill = blue!20, draw](box){Discrete Hessian Calculation};
\node[block, left = 0.5cm of box, fill=white](out){Figure \ref{fig:Scalings}};
\node[block, below = 0.5cm of box, fill=white](out1){plot $H^{FD}$ at $\xi_1^*$ for all $\omega_n$};
\node[block, below = 0.5cm of out1, fill=white](out2){compute linear fit\\ for data points };
\node[block, left = 0.5cm of out2, fill=white](out3){Figure \ref{fig:AmplitudeScalingRe1}};
\node[block, below = 0.5cm of out2, fill=white](end){$\beta_1^{FD},\beta_2^{FD}$ at $\xi_1^*$};
\begin{scope}[on background layer]
\node[fit =(id1)(box)(out)(out1)(out2), fill = red!20, draw](newBox){};
\end{scope}
\node[below left = 0.1cm and -4.0cm of newBox]{\color{red!60}{Scaling Parameter Calculation}};
\path[connect] (id0) -- (id1);
\path[connect] (id1) -- (box);
\path[connect] (id2) -- (box);
\path[connect,dashed] (box.west) -- (out.east);
\path[connect] (box) -- (out1);
\path[connect] (out1) -- (out2);
\path[connect,dashed] (out1.west) -| (out3.north);
\path[connect,dashed] (out2) -- (out3);
\path[connect] (out2) -- (end);
\end{tikzpicture}
\end{center}
\caption{Scheme for calculating scaling parameters.}
\label{tz:SchemeScalingParameters}
\end{figure}
A detailed picture of how the amplitude depends on the input frequency at a fixed point $\xi_1^*$ can be obtained with the scheme depicted in Figure \ref{tz:SchemeScalingParameters}. We choose $N$ different frequencies such that the modes $\sin(\omega_n\xi_1+s_n)$ all attain their maximum at $\xi_1^{*}$. Note that we use a shift $s_n$ to make this possible for every frequency. The resulting Hessian responses can be found in Figure \ref{fig:Scalings}.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{ScalingsOmegaRe1.png}
\caption{Hessian responses on top cylinder.}
\label{fig:Scalings}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{AmplitudeScalingRe1.png}
\caption{Amplitudes at $\xi_1 = \frac{3}{4}\pi$ with linear fit.}
\label{fig:AmplitudeScalingRe1}
\end{subfigure}
~
\caption{Discrete Hessian responses to different input frequencies for $Re=1$.}
\end{figure}
We now evaluate the $N$ discrete Hessian responses at $\xi_1^*$. Plotting the different amplitudes over the corresponding input frequency $\omega$ and calculating a linear curve fit yields Figure \ref{fig:AmplitudeScalingRe1}. One can see that choosing a linear function will lead to a good approximation of the scaling behavior, indicating that the numerical investigation matches the linear scaling of the analytically derived symbol (property \ref{itm:linScaling}). Note that the curve fit in Figure \ref{fig:AmplitudeScalingRe1} can be used to calculate the scaling parameters $\beta_1^{FD}$ (which is the fit at $\omega = 0$) as well as $\beta_2^{FD}$ (which is the slope of the fit) at $\xi_1=\frac{3}{4}\pi$. The superscript $FD$ denotes that the scaling parameters are obtained by the finite difference approximation and not by the analytic formulas \eqref{eq:beta1} and \eqref{eq:beta2}.
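The curve fit itself is a standard least-squares fit. A small Python sketch (with purely illustrative, synthetic data) reads off $\beta_1^{FD}$ as the intercept and $\beta_2^{FD}$ as the slope:
\begin{verbatim}
import numpy as np

def fit_scaling_parameters(omegas, amplitudes):
    """Linear fit amplitude(omega) ~ beta1 + beta2 * omega at a fixed xi_1^*."""
    beta2_fd, beta1_fd = np.polyfit(np.asarray(omegas), np.asarray(amplitudes), 1)
    return beta1_fd, beta2_fd

# Illustrative data only: amplitudes that scale linearly in omega.
omegas = [10.0, 20.0, 40.0, 60.0]
amplitudes = [0.5 + 0.02 * w for w in omegas]
print(fit_scaling_parameters(omegas, amplitudes))   # approximately (0.5, 0.02)
\end{verbatim}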
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance = 3.5cm,auto,font=\sffamily]
\node[block](id0){choose $\xi_{1}^{(j)}$\\ for $j = 1,\cdots,M$};
\node[block, right = 0.5cm of id0](id1){choose initial design $\Omega$};
\node[minimum height=2, below right = 0.5cm and -1.5cm of id0, fill = red!20, draw](box){Scaling Parameter Calculation};
\node[block, below = 0.5cm of box](end){Figure \ref{fig:Re1Beta1Beta2}};
\path[connect] (id0) -- (box);
\path[connect] (id1) -- (box);
\path[connect] (box) -- (end);
\end{tikzpicture}
\end{center}
\caption{Scheme for calculating Figure \ref{fig:Re1Beta1Beta2}.}
\label{tz:SchemeBetaField}
\end{figure}
To calculate the scaling parameters $\beta_{1,2}^{FD}$ at several positions on the cylinder's surface, we repeat the previously described analysis for $M$ different choices of $\xi_1^*$, see Figure \ref{tz:SchemeBetaField}. In Figure \ref{fig:Re1Beta1Beta2}, we compare the resulting scaling values with the continuous derivations \eqref{eq:beta1} and \eqref{eq:beta2}. Note that in order to minimize computational costs, we use only two frequencies, hence $N=2$.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\linewidth]{Re1Beta1Beta2.png}
\caption{Comparison of the analytic result and the finite difference approximation of $\beta_{1,2}$ for $Re = 1$.}
\label{fig:Re1Beta1Beta2}
\end{figure}
We can now see that the scaled values of $\beta_1^{FD}$ as well as the $\beta_2^{FD}$ values match the analytic results (property \ref{itm:beta1beta2Scaling}). Note that $\beta_1$ only coincides up to a scaling factor of $0.08$, which is most likely caused by the poor approximation of second derivatives of the flow solution.
As the properties of the finite difference approximation of the Hessian coincide with the analytic derivation, it is reasonable to use the analytic formulas of the $\beta$ values in order to calculate the preconditioner for small Reynolds number flows.
\subsection{Flow case 2: Re = 80}
We now turn to a flow with a Reynolds number of $80$. Again, we choose a surface perturbation $\alpha(\xi_1) = \cos(60(\xi_1-0.06))$ and investigate the resulting discrete Hessian, which can be seen in Figure \ref{fig:inoutRe80}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=1.0\linewidth]{inputoutput60Re80.png}
\caption{Scaled input $\alpha$ and resulting discrete Hessian $H^{FD}$.}
\label{fig:inoutRe80}
\end{figure}
Just as in the first flow case, the input wave has the same phase as the outgoing Hessian signal (property \ref{itm:phase}). The amplitude of the output does again vary, meaning that we again have non-constant scaling parameters (property \ref{itm:nonConstantScaling}). The next step is to investigate how the output depends on the input frequency. We therefore study the output for several input frequencies and calculate a linear fit for the scaling of the amplitude, hoping that the linear analytic result will hold even though the convective terms of the flow are no longer negligible. The result of this fit at the spatial position $\xi_1 = \frac{3}{4}\pi$ can be found in Figure \ref{fig:AmplitudesRe80}.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{ScalingsOmegaRe80.png}
\caption{Hessian responses on top cylinder.}
\label{fig:ScalingsRe80}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{AmplitudeScalingRe80.png}
\caption{Amplitudes at $\xi_1 = \frac{3}{4}\pi$ with linear fit.}
\label{fig:AmplitudesRe80}
\end{subfigure}
~
\caption{Discrete Hessian responses to different input frequencies for $Re=80$.}
\end{figure}
Fortunately, the results again point to a Hessian symbol with linear scaling (property \ref{itm:linScaling}). Repeating this computation for different values of $\xi_1$ yields Figure \ref{fig:Re80Beta1Beta2}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\linewidth]{Re80Beta1Beta2.png}
\caption{Comparison of the analytic result and the finite difference approximation of $\beta_{1,2}$ for $Re = 80$.}
\label{fig:Re80Beta1Beta2}
\end{figure}
It can be seen that the $\beta_2^{FD}$ values match the analytic result very well, whereas the $\beta_1^{FD}$ values do not coincide with the analytic predictions (property \ref{itm:beta1beta2Scaling} partially violated). Therefore, one can conclude that the convective flow behavior, which we did not include in the analytic derivations, will result in $\beta_1^{FD}$ values that do not correspond to $\beta_1$. However, the analytic prediction of the scaling parameter $\beta_2$ can be used to mimic the Hessian behavior.
\section{Construction of the approximate Newton smoothing method}
\label{sec:section5}
Our aim is to use the scaling behavior, which we have investigated analytically and numerically to precondition and to smooth the search direction of our problem. Here, we need to distinguish between the low and higher Reynolds number cases, due to the fact that the numerical evaluation of $\beta_1$ did not coincide with the analytic prediction in the case of convective flow behavior. Let us for now assume that we know the values of $\beta_1$ and turn to several other problems arising when trying to determine a preconditioner. We start by using standard Hessian manipulation techniques as they can be found in \cite[Chapter~6.3]{wright1999numerical} to construct a modified Hessian $\bar{H}$, which is sufficiently positive definite. After that, we think of how to approximate this Hessian with a sparse and computationally cheap preconditioner $B$. Here, the main task will be to mimic pseudo-differential behavior.
\subsection{Hessian manipulation}
Let us start by pointing out that instead of using the Hessian, Newton's method uses the inverse Hessian, which has the inverse scaling behavior
\begin{equation}\label{eq:numericInvHessian}
H^{-1}[\alpha] = \frac{1}{\beta_1+\beta_2 \omega }\alpha.
\end{equation}
Our first step is to investigate the effect of this inversion, which can be found in Figure \ref{fig:inoutHInv} when using the analytic scaling parameters $\beta_{1,2}$ of a flow with a Reynolds number of $1$.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{inoutHInvRe1.png}
\caption{$H^{-1}[\alpha]$}
\label{fig:inoutHInv}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{inoutHInvEpsRe1.png}
\caption{$\bar{H}^{-1}[\alpha], \eta = 0.2$}
\label{fig:inoutHInvEps}
\end{subfigure}
~
\caption{Response of the original and modified preconditioner to the input $\alpha$, which is a wave with frequency $60$.}
\end{figure}
It is clear that the inverse Hessian will blow up for frequencies that fulfill
\begin{align*}
\beta_1+\beta_2\omega = 0.
\end{align*}
In our example, this behavior can be seen at the front and the rear of the cylinder. Furthermore, the scaling behavior of the Hessian can become negative, which can be interpreted as negative eigenvalues of the Hessian matrix. Hence, we need to take care of two problems frequently arising when trying to approximate a Hessian matrix, namely singularities as well as negative eigenvalues. As proposed in \cite{wright1999numerical}, we modify the Hessian symbol such that its inverse has the form
\begin{align*}
\bar{H}^{-1}[\alpha] := \frac{1}{\bar{\beta_1}+\left\vert\beta_2\right\vert \omega }\alpha,
\end{align*}
where
\begin{equation}\label{eq:choiceBeta1}
\bar{\beta_1}:=\eta+\beta_1-\min\left\{0,\min_{\xi_1}{\beta_1}\right\}.
\end{equation}
The regularization parameter $\eta$ is chosen to prevent singularities of the inverse symbol and to ensure that all eigenvalues of the Hessian will be sufficiently positive. Let us now use $\bar{H}^{-1}$ to precondition the search direction, which for now is a Fourier mode with frequency $60$. When taking a look at the output in Figure \ref{fig:inoutHInvEps}, we see that the Hessian behavior will only be affected in the critical regions of the optimization patch, namely the front and rear. We will further justify this modification of the Hessian by investigating its effects on the final preconditioner.
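A possible realization of the modification \eqref{eq:choiceBeta1} is sketched below in Python; the nodal values of $\beta_1$ and the choice $\eta = 0.2$ are placeholders used only for illustration.
\begin{verbatim}
import numpy as np

def modify_beta1(beta1, eta=0.2):
    """Shifted scaling parameter bar(beta)_1 (sketch of the modification)."""
    return eta + beta1 - min(0.0, np.min(beta1))

# Sketch: beta_1 changes sign along xi_1; after the shift its smallest value
# equals eta, so the modified inverse symbol stays bounded and positive.
beta1 = np.array([-0.3, 0.1, 0.4, -0.1])
print(modify_beta1(beta1))   # [0.2, 0.6, 0.9, 0.4]
\end{verbatim}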
However, we first consider how the preconditioner can be applied when the input is not a single Fourier mode.
\subsection{Approximation of the pseudo-differential operator}
Our goal is to find a computationally cheap preconditioner $B^{-1}$ with the symbol $\sigma_{B^{-1}}$, which approximates the symbol of the inverted modified Hessian $\bar{H}^{-1}$. The two main properties of the symbol, namely its linear scaling in $\omega$ and the absence of a phase shift, are characteristic of so-called pseudo-differential operators. Solving equations containing pseudo-differential operators is time consuming, which is why we use a different approach. To ensure a sparse and computationally cheap Hessian approximation, we make use of differential operators. Using differential operators of even order prevents a phase shift, but yields an incorrect (quadratic instead of linear) scaling. The chosen operator is
\begin{align*}
\bar{H}^{-1}[\alpha] \approx B^{-1}[\alpha] :=\left( \bar{\beta_1} + \epsilon \partial_{\xi_1 \xi_1} \right)^{-1} \alpha.
\end{align*}
This operator can easily be evaluated, however its symbol is
\begin{align}\label{eq:correctSymbol}
\sigma_{B^{-1}} = \frac{1}{\bar{\beta_1} - \epsilon \omega^2},
\end{align}
whereas the symbol which we wish to approximate is
\begin{align}\label{eq:approximatedSymbol}
\sigma_{\bar{H}^{-1}} = \frac{1}{\bar{\beta_1} + \vert\beta_2\vert \omega}.
\end{align}
We will mimic the correct scaling by choosing $\epsilon$ such that the symbol of the preconditioner $\sigma_{B^{-1}}$ will be similar to the correct symbol $\sigma_{\bar{H}^{-1}}$. Before deriving a strategy to pick $\epsilon$, we apply the preconditioner $B$ to Newton's method: The preconditioned search direction $\bm{p}$ is given by
\begin{align*}
\bm{p}(\xi_1) = -\left( \bar{\beta_1}(\xi_1) + \epsilon(\xi_1) \partial_{\xi_1 \xi_1} \right)^{-1}\bm{df}(\xi_1),
\end{align*}
which is the continuous version of the Newton update \eqref{eq:NewtonDirection} when using the derived Hessian approximation. Discretizing this differential equation on the given mesh nodes of the optimization mesh yields
\begin{equation}\label{eq:discretePrecond}
\bar{\beta}_{1,j}p_{j} + \frac{\epsilon_j}{\Delta\xi_1^2}\left( p_{j-1} -2p_{j}+p_{j+1} \right) = -df_j
\end{equation}
with $j = 1,\cdots,N$. Since this discretized equation is linear in $\bm{p}$, we can rewrite it as
\begin{align}\label{eq:localPreconditioning}
\bm{p} = -\bm{B}^{-1}\bm{df},
\end{align}
where the matrix $\bm{B}$ is given by
\begin{align*}
\begin{pmatrix}
\bar{\beta}_{1,1}-\frac{2\epsilon_{1}}{\Delta\xi_1^2} & \frac{\epsilon_{1}}{\Delta\xi_1^2} & 0 & & & \dots & & 0 & \frac{\epsilon_{1}}{\Delta\xi_1^2} \\
\frac{\epsilon_{2}}{\Delta\xi_1^2} & \bar{\beta}_{1,2}-\frac{2\epsilon_2}{\Delta\xi_1^2} & \frac{\epsilon_2}{\Delta\xi_1^2} & 0 & & & & \dots & 0 \\
0 & \frac{\epsilon_3}{\Delta\xi_1^2} & \bar{\beta}_{1,3}-\frac{2\epsilon_3}{\Delta\xi_1^2} & \frac{\epsilon_3}{\Delta\xi_1^2} & 0 & & & \dots & 0 \\
\vdots & & & \ddots & & & & & \vdots \\
0 & & & & & & & & \\
\frac{\epsilon_{N}}{\Delta\xi_1^2} & 0 & \dots & & & & & \frac{\epsilon_{N}}{\Delta\xi_1^2} & \bar{\beta}_{1,N}-\frac{2\epsilon_{N}}{\Delta\xi_1^2}
\end{pmatrix},
\end{align*}
assuming periodic boundary conditions; the gradient $\bm{df}$ is the collection of gradient values at every surface node, as defined in \eqref{eq:GradientVector}.
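For illustration, a minimal Python sketch of this discretization is given below. The helper names \texttt{assemble\_B} and \texttt{preconditioned\_direction} are hypothetical; they merely show how the periodic matrix $\bm{B}$ can be assembled and how $\bm{p} = -\bm{B}^{-1}\bm{df}$ can be solved.
\begin{verbatim}
import numpy as np

def assemble_B(beta1_bar, eps, dxi):
    # tridiagonal matrix with periodic wrap-around, cf. (eq:discretePrecond)
    N = len(beta1_bar)
    B = np.zeros((N, N))
    for j in range(N):
        c = eps[j] / dxi**2
        B[j, j]           = beta1_bar[j] - 2.0 * c
        B[j, (j - 1) % N] = c            # periodic boundary conditions
        B[j, (j + 1) % N] = c
    return B

def preconditioned_direction(beta1_bar, eps, dxi, df):
    B = assemble_B(beta1_bar, eps, dxi)
    return -np.linalg.solve(B, df)       # p = -B^{-1} df
\end{verbatim}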
We now return to choosing the smoothing parameter $\epsilon$ such that the correct and approximated symbols \eqref{eq:approximatedSymbol} and \eqref{eq:correctSymbol} match for relevant frequencies. Note that $\epsilon$ has been discretized in \eqref{eq:discretePrecond}. Hence, it remains to pick $\epsilon_j$ for $j = 1,\cdots,N$. For this task, we need to determine frequencies in the gradient vector $\bm{df}$. We use the discrete Fourier transform
\begin{align*}
df_k = \frac{1}{N}\sum_{l = 0}^{N-1}\hat{df}_l\text{exp}\left(\frac{i2\pi k l}{N}\right)
\end{align*}
to determine frequencies with amplitude $\hat{df}_l$ in $\bm{df}$. These amplitudes can be calculated by
\begin{align*}
\hat{df}_k = \sum_{l = 0}^{N-1} df_l \text{exp}\left(\frac{-i2\pi k l}{N}\right).
\end{align*}
Note that these frequencies are global. Frequencies can be localized by multiplication with a discrete window function $g$, yielding
\begin{align}\label{eq:windowedFourier}
\tilde{df}_{m,l} = \sum_{k = 0}^{N-1}df_k g_{k-m}\text{exp}\left(\frac{-i2\pi k l}{N}\right).
\end{align}
This representation is common in signal compression and is called a discrete windowed Fourier transform. Further details can be found in \cite[Chapter~4.2.3]{mallat1999wavelet}.
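A minimal Python sketch of such a discrete windowed Fourier transform is shown below; the Gaussian window and its width \texttt{sigma} are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

def windowed_fourier(df, sigma=8.0):
    # local spectrum around every node m, cf. (eq:windowedFourier)
    N = len(df)
    k = np.arange(N)
    coeffs = np.zeros((N, N), dtype=complex)
    for m in range(N):
        shift = (k - m + N // 2) % N - N // 2    # periodic distance to node m
        g = np.exp(-0.5 * (shift / sigma) ** 2)  # discrete Gaussian window
        coeffs[m, :] = np.fft.fft(df * g)        # exp(-2 pi i k l / N) convention
    return coeffs
\end{verbatim}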
The discrete values of $\bm{\epsilon}$ are now determined by minimizing the weighted distance between the approximated and the correct symbol, where the weights are given by the local frequency content obtained from the windowed Fourier transform:
\begin{equation}\label{eq:minScale}
\epsilon_j = \argmin_{\varepsilon} \sum_{k=0}^{N-1} \tilde{df}_{j,k}^2\left(\frac{1}{\bar{\beta}_{1,j} - \varepsilon \omega_k^2 } -\frac{1}{\bar{\beta}_{1,j} + \vert\beta_{2,j}\vert \omega_k }\right)^2.
\end{equation}
Since we have localized the frequencies, we are able to pick the smoothing parameter in a given spatial cell $j$ such that the approximated scaling matches the correct one for the frequencies that are dominant in cell $j$. The optimal value of this parameter in cell $j$ is then denoted by $\epsilon_j$. The constructed preconditioner is depicted in Figure \ref{tz:ApproxNewtonSmoothing}.
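For illustration, the following Python fragment performs a simple grid search for \eqref{eq:minScale} in a single cell; the candidate range for $\varepsilon$ is an assumption, and any one-dimensional minimizer could be used instead.
\begin{verbatim}
import numpy as np

def pick_epsilon(beta1_bar_j, beta2_j, df_local, omega,
                 candidates=np.linspace(-1.0, 0.0, 2001)):
    # df_local: windowed Fourier coefficients of cell j, omega: frequencies w_k
    target  = 1.0 / (beta1_bar_j + np.abs(beta2_j) * omega)
    weights = np.abs(df_local) ** 2        # squared magnitude of local spectrum
    best_eps, best_val = candidates[0], np.inf
    for eps in candidates:
        approx = 1.0 / (beta1_bar_j - eps * omega ** 2)
        val = np.sum(weights * (approx - target) ** 2)
        if val < best_val:
            best_eps, best_val = eps, val
    return best_eps
\end{verbatim}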
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance = 3.5cm,auto,font=\sffamily]
\node[block](grad){gradient $\bm{df}$};
\node[block, right =0.9cm of grad](states){primal and adjoint states};
\node[block, below = 0.75cm of grad, fill=white](FF){compute local frequencies in $\bm{df}$ \\ by \eqref{eq:windowedFourier}};
\node[block, below = 0.75cm of states, fill=white](beta){compute $\beta_1,\beta_2$ \\ by \eqref{eq:beta1} and \eqref{eq:beta2}};
\node[block, below left = 0.5cm and -2.0cm of beta, fill=white](epsilon){compute smoothing parameters $\bm{\epsilon}$ \\ by minimizing \eqref{eq:minScale}};
\node[block, right = 2.0cm of epsilon, fill=white](eta){choose $\eta$};
\node[block, below = 0.5cm of epsilon, fill=white](p){compute smooth approximate Newton \\ search direction $\bm{p}$ with \eqref{eq:localPreconditioning}};
\begin{scope}[on background layer]
\node[minimum width=10cm, fit =(FF)(beta)(epsilon)(p), fill = orange!20, draw](newBox){};
\end{scope}
\node[block, below = 0.75cm of p](output){search direction $\bm{p}$};
\path[connect] (grad) -- (FF);
\path[connect] (states) -- (beta);
\path[connect] (beta) -- (epsilon);
\path[connect] (FF) -- (epsilon);
\path[connect] (eta) -- (epsilon);
\path[connect] (epsilon) -- (p);
\path[connect] (grad.west) -| node [near start] {} ([xshift=-1.2cm] grad.west)
|- ([xshift=-.0cm] p.west);
\path[connect] (p) -- (output);
\end{tikzpicture}
\end{center}
\caption{Approximate Newton smoothing method}
\label{tz:ApproxNewtonSmoothing}
\end{figure}
The presented preconditioner is similar to common Sobolev smoothing: The Sobolev-smoothed search direction $p_S$ is obtained by solving
\begin{align}\label{eq:SobolevSmoothing}
p_S(\xi_1) = -\left( 1 + \tilde{\epsilon} \partial_{\xi_1 \xi_1} \right)^{-1}df(\xi_1).
\end{align}
In contrast to the presented method, the smoothing parameter $\tilde{\epsilon}$ is usually obtained by a parameter study. In our method, we pick the spatially dependent smoothing parameter $\epsilon$ such that we mimic the Hessian behavior. Hence, the introduced smoothness is chosen locally such that the optimization process is accelerated. Therefore, we call the new method \textit{local smoothing}, whereas common Sobolev smoothing is called \textit{global smoothing} in the following.
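In the notation of the earlier sketches, global smoothing can be regarded as the special case of the local preconditioner with a constant smoothing parameter and a unit zeroth-order coefficient; the snippet below reuses the hypothetical \texttt{preconditioned\_direction} helper only to illustrate this relation.
\begin{verbatim}
import numpy as np

# Global (Sobolev) smoothing as a special case of the local preconditioner:
# constant smoothing parameter, unit zeroth-order coefficient.
def sobolev_direction(df, dxi, eps_tilde):
    N = len(df)
    return preconditioned_direction(np.ones(N), np.full(N, eps_tilde), dxi, df)
\end{verbatim}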
We now investigate the effects of the Hessian manipulations introduced by the modification of the scaling parameters $\beta_1$ and $\beta_2$. Remember that our goal was to construct a sufficiently positive definite preconditioner, which means that the smallest eigenvalue $\mu^*$ fulfills
\begin{align*}
\mu^* \geq \delta > 0.
\end{align*}
We can estimate the eigenvalues of the preconditioning matrix $\bm{B}$ with the help of the Gershgorin circle theorem, which states that every eigenvalue $\mu$ of $\bm{B}$ satisfies, for at least one row index $k$,
\begin{align*}
\vert \mu - B_{kk} \vert \leq \sum_{j\neq k} \vert B_{kj}\vert.
\end{align*}
Therefore, we have
\begin{align*}
\mu \in \bigcup_{k} \left[ \bar{\beta}_{1,k}-2\frac{\epsilon_k}{\Delta \xi_1^2} - 2\frac{\vert\epsilon_k\vert}{\Delta \xi_1^2},\; \bar{\beta}_{1,k}-2\frac{\epsilon_k}{\Delta \xi_1^2} + 2\frac{\vert\epsilon_k\vert}{\Delta \xi_1^2} \right] .
\end{align*}
Note that since we are using $\vert\beta_2\vert$ as scaling parameter in \eqref{eq:minScale}, we know that $\epsilon_k < 0$, which means that we have
\begin{align*}
\mu \in \bigcup_{k} \left[\bar{\beta}_{1,k},\; \bar{\beta}_{1,k}-4\frac{\epsilon_k}{\Delta \xi_1^2} \right].
\end{align*}
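This bound is easy to verify numerically; the following sketch reuses the hypothetical \texttt{assemble\_B} helper from above and assumes non-positive entries in \texttt{eps}, comparing the smallest real part of the eigenvalues of $\bm{B}$ with the lower bound $\min_k \bar{\beta}_{1,k}$.
\begin{verbatim}
import numpy as np

def check_eigenvalue_bound(beta1_bar, eps, dxi):
    # assumes eps[j] <= 0 for all j, as argued above
    B = assemble_B(beta1_bar, eps, dxi)      # see earlier sketch
    mu_min = np.linalg.eigvals(B).real.min()
    lower_bound = beta1_bar.min()            # left end of the Gershgorin union
    assert mu_min >= lower_bound - 1e-10
    return mu_min, lower_bound
\end{verbatim}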
Remember that in \eqref{eq:choiceBeta1} the modified scaling parameter $\bar{\beta_1}$ was chosen such that $\bar{\beta_1} \geq \eta$, meaning that the regularization parameter $\eta$ can be understood as a lower bound for the eigenvalues of the preconditioner. Hence, the regularization parameter should be chosen sufficiently large, meaning that $\eta \geq \delta$. As a result, we can easily control the lower bound on the minimal eigenvalue. This further motivates the Hessian modifications we used. We now use the constructed preconditioner to optimize the design of a cylinder placed inside a flow.
\section{Results}
\label{sec:section6}
In the following, the results of the optimization using the local preconditioner \eqref{eq:localPreconditioning} are presented and compared to the common choice of Sobolev smoothing \eqref{eq:SobolevSmoothing} with a constant smoothing parameter $\tilde\epsilon$, which we call global preconditioning. A good smoothing parameter for the global method has been determined by investigating the grid resolution, as done in \cite{schmidt2009impulse}. As discussed, the local method picks the smoothing parameter automatically by minimizing \eqref{eq:minScale}. The task is to minimize the drag of a two-dimensional cylinder placed inside a fluid. To prevent the methods from simply decreasing the volume of the cylinder in order to reduce the drag, we employ a volume constraint, which ensures a constant obstacle volume. The radius of the cylinder is one meter. We use a farfield density of $998.2\frac{kg}{m^3}$ and a farfield velocity of $10^{-5}\frac{m}{s}$ for the Reynolds number of $1$ and a velocity of $6.4\cdot 10^{-5}\frac{m}{s}$ for the Reynolds number of $80$. The chosen viscosity is $0.798\cdot 10^{-3}\frac{Ns}{m^2}$.
\subsection{Flow case 1: Re = 1}
We first look at the flow with $Re=1$, where we can confidently use the analytic form of $\beta_2$ and, in particular, of $\beta_1$. The regularization parameter $\eta$ in \eqref{eq:choiceBeta1} is set to $0.2$. As commonly done in Newton's method, we use a step length of $1.0$ for the local method. When comparing the first design update of the local and the global method for this step length, one observes that the global method is penalized, as its design update is much smaller. This is why we scale the step size of the global method such that the magnitude of the design change is of the same size for both methods in the first design step, see Figure \ref{fig:searchIt1Re1}. Let us now compare the optimization histories of local and global preconditioning. In Figure \ref{fig:fRe1}, we can see that using the analytic derivation of the scaling parameters $\beta$ as well as the information on the local frequencies inside the gradient leads to a speedup compared to the common global preconditioner. Whereas global preconditioning needs $15$ iterations to decrease the drag by roughly six percent, the local method reaches this reduction after nine iterations. A comparison of the flow field before and after the optimization can be found in Figure \ref{fig:v1OptNewMethodRe1}.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{optimizationFDRe1.png}
\caption{Optimization history}
\label{fig:fRe1}
\end{subfigure}%
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{searchDirRe1.png}
\caption{Scaled search direction iteration $1$.}
\label{fig:searchIt1Re1}
\end{subfigure}
\caption{Comparison of standard and local preconditioning for $Re=1$.}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.7\linewidth]{v1OrigNewMethodRe1.png}
\includegraphics[width=0.7\linewidth]{v1OptNewMethodRe1.png}
\caption{Zoomed view of the flow solution. Top: Original design and velocity magnitude. Bottom: Locally optimized design and velocity magnitude.}
\label{fig:v1OptNewMethodRe1}
\end{figure}
\subsection{Flow case 2: Re = 80}
We now turn to the more complicated convective case, where we choose a Reynolds number of $80$. Again, we modify the surface of a cylinder in order to reduce the drag. Remembering the numerical investigation of the Hessian matrix for such a flow, it is obvious that we cannot use the analytic values of $\beta_1$ as scaling parameter, whereas the values of $\beta_2$ fit the analytic prediction. A reasonable choice for $\beta_1$ is a scaled and smoothed version of $\beta_2$, as can be seen from the numerical results of Figure \ref{fig:Re80Beta1Beta2}. Therefore, we choose $\beta_1 = 10\beta_2$, where the scaling of $10$ is motivated by the value $\beta_1^{FD}$, which we calculated when choosing multiple frequencies, see Figure \ref{fig:ScalingsRe80}. Furthermore, we use Sobolev smoothing with a very small smoothing parameter $\epsilon\approx 10^{-4}$ to ensure a smooth scaling parameter $\beta_1$. The regularization parameter $\eta$ is chosen as in the first flow case, meaning that a value of $0.2$ is taken. Since we wish to use a constant step length for the optimization process, we use a step length of $0.5$ for the local preconditioner and scale the search direction proposed by the global preconditioner such that both search directions are of the same size in the first optimization step. A comparison of the scaled search directions can be found in Figure \ref{fig:searchIt1Re80}.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{optimizationFDRe80.png}
\caption{Optimization history}
\label{fig:fRe80}
\end{subfigure}%
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{searchDirRe80.png}
\caption{Scaled search direction iteration $1$.}
\label{fig:searchIt1Re80}
\end{subfigure}
\caption{Comparison of standard and local preconditioning for $Re=80$.}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.7\linewidth]{v1OrigNewMethodRe80.png}
\includegraphics[width=0.7\linewidth]{v1OptNewMethodRe80.png}
\caption{Zoomed view of the flow solution. Top: Original design and velocity magnitude. Bottom: Locally optimized design and velocity magnitude.}
\label{fig:v1OptNewMethodRe80}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.8\linewidth]{optimizationDesignGlobal}
\includegraphics[width=0.8\linewidth]{optimizationDesignLocal}
\caption{Obstacle designs at iteration $1,10,20,\cdots ,60$. Top: Global preconditioning. Bottom: Local preconditioning.}
\label{fig:optimizationDesign}
\end{figure}
Let us now compare the optimization histories of the local and the global preconditioner, which can be seen in Figure \ref{fig:fRe80}. We see that both methods are able to decrease the drag heavily, by more than $33$ percent. While the local preconditioner reaches this reduction after $62$ iterations, the global preconditioner needs $240$ iterations to reach the same drag value. It is important to note that the local preconditioner does not further decrease the drag value after iteration $62$, since the step size is too large. A smaller step size would further decrease the drag value; however, we wish to perform our optimization with a constant step length, which is why subsequent drag values do not appear in the optimization history. The global method is stopped after iteration $240$, as the norm of the gradient falls to almost zero. Let us now take a closer look at the optimized design, which can be found in Figure \ref{fig:v1OptNewMethodRe80}. The optimization process creates a sharp front and rear as well as a smooth top and bottom. Comparing the design histories of the global and the local method in Figure \ref{fig:optimizationDesign}, we see that the local preconditioner focuses on creating an optimal front and rear, whereas the global preconditioner heavily modifies the top and bottom of the obstacle. Note that the local method results in an inverted front in the first design steps. However, the ability to choose non-smooth deformations is advantageous in this problem, as a sharp edge is allowed to form. A disadvantage of non-smooth deformations is that they can lead to complex meshes, which is why robust mesh deformation tools need to be employed.
\section{Summary and Outlook}
\label{sec:section7}
In this paper, we derived a local smoothing preconditioner, which automatically picks smoothing parameters such that the symbol of the inverse Hessian is approximated. This preconditioner has been derived by determining the analytic symbol when choosing the Stokes equations as flow constraints. The resulting coefficients of the symbol, $\beta_1$ and $\beta_2$, which we called scaling parameters, have been compared to the numerical Hessian response. The presented technique to determine the scaling parameters numerically showed good agreement with the analytic results for flows with a Reynolds number of one. As convective forces become dominant, the parameter $\beta_1$ loses its validity, whereas $\beta_2$ still coincides with the analytic calculation. Standard Hessian manipulations of approximate Newton methods have been used to obtain a sufficiently positive definite preconditioner. A computationally cheap preconditioner, which mimics the symbol of the Hessian, has been constructed by using differential operators. The derived method can be interpreted as Sobolev smoothing, which automatically picks a local smoothing parameter such that the symbol of the Hessian is approximated. Comparing the new method with Sobolev smoothing, we see that we obtain a faster convergence to the optimal design. By making use of a local smoothing parameter, which depends on the position on the optimization patch $\Gamma_o$, the method is able to turn off smoothing in physically meaningful areas such as the front and rear of the cylinder.
A question that one could focus on in future work is how to determine the scaling parameter $\beta_1$ in the case of a convective flow. Setting $\beta_1$ to a smoothed version of $\beta_2$ led to an acceleration of the optimization; however, this choice was based on problem-dependent numerical investigations, which might not carry over to further applications. Furthermore, one needs to check the validity of the Hessian symbol at non-smooth parts of the optimization patch, as the symbol has been derived for smooth geometries. A construction of further preconditioners making use of the derived Hessian symbol is possible. Here, one should look at the construction of a preconditioner with pseudo-differential properties to further improve the search direction.
\section{Introduction}
So far, the influence of data systems in finance has focused on cutting administrative costs. However, technology could transform business models in finance. With the help of cryptography on distributed data systems, we can design infrastructure that not only keeps track of transactions and balances but also allows the transfer of assets and the enforcement of contracts without the need for clearinghouses, middlemen, escrows and even some tasks performed by lawyers. On these systems settlements are final and there is no possibility of repudiation, while smart contracts execute deterministically according to predefined inputs. Institutions that embrace this vision will have a competitive advantage. Investing in distributed cryptographic technology and regulatory-oriented technology will pay off in the form of less intermediation, lower legal fees and overhead, and ultimately an accelerated innovation cycle where new financial services can be provided with much less friction and where these services are much better tailored to smaller market segments. These institutions will be able to compete in the long tail of financial services and offer a much more competitive service for the mainstream.
However, for the Internet of Value vision to happen, a complete redesign of the architectural Information Technology (IT) principles used to build new services is needed. In this paper, we propose models that can achieve better performance than existing state-of-the-art models on the Ethereum blockchain~\cite{eth_block} network.
\section{Architectural Details}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{imgs/w1}
\caption{Architecture to create new services}
\label{fig:w1}
\end{figure}
In search of new services, such as a global payment solution, we created an architecture that makes it possible to innovate quickly and create new services. In order to achieve this at web scale, we employ the Event Sourcing~\cite{event_sourcing} pattern, an approach that handles large data operations as a series of events, in which each event is recorded in an append-only datastore. In this kind of architecture, instead of keeping a centralized state for all applications in a central database or in a set of disconnected state-holding databases, we rely on a centralized log of state changes. This allows new applications to be created very quickly, because it is only necessary to replay the log of changes in order to build new services whose state is consistent with the state held by other services in the same company.
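As a minimal illustration of this idea, the Python sketch below rebuilds a service state by folding over the append-only log; the event types and field names are hypothetical and serve only as an example.
\begin{verbatim}
from collections import defaultdict

def apply_event(balances, event):
    # hypothetical event schema: {"type": ..., "account": ..., "amount": ...}
    if event["type"] == "premium_payment":
        balances[event["account"]] += event["amount"]
    elif event["type"] == "claim":
        balances[event["account"]] -= event["amount"]
    return balances

def replay(log):
    # a new service builds its own state by replaying the log of changes
    balances = defaultdict(float)
    for event in log:
        apply_event(balances, event)
    return balances
\end{verbatim}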
\textit{Event Sourcing} is very powerful but has a coupling
problem. In big organizations a central team needs to own the event log.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{imgs/w2}
\caption{Architecture based on Event Sourcing Pattern}
\label{fig:w2}
\end{figure}
Usually, many different teams want to access the log and they all need to agree on the technology and on the permissions to write to and read from it. Also, teams reading or writing intensively from or to the central infrastructure risk causing performance problems for the whole system.
Moreover, this kind of architecture is not suitable for collaboration among different organizations or groups of companies. For that reason, an architecture where each business unit or application holds a copy of the parts of the log relevant to it would be better. If the stream of data is composed of different topics (for example, premium payments or claims), then each business unit would be able to hold a copy of the topics relevant to it. The above architecture would be feasible in a trusted environment, but in a collaborative environment all transactions would have to be signed by the different parties and the order of the transactions would have to be guaranteed.
We can achieve that in a manner similar to blockchain by means of a linked stream of events using hash pointers to the previous events and by embedding signatures of both sides of a transaction in the message itself.
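A minimal Python sketch of such a linked and signed stream is shown below. A real deployment would use asymmetric signatures; here an HMAC per party merely stands in for a signature, and all field names are illustrative assumptions.
\begin{verbatim}
import hashlib, hmac, json

def event_hash(event):
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def append_event(log, payload, key_a, key_b):
    # hash pointer to the previous event links the stream
    prev = event_hash(log[-1]) if log else "0" * 64
    body = {"prev": prev, "payload": payload}
    msg = json.dumps(body, sort_keys=True).encode()
    # stand-in signatures of both sides of the transaction (keys are bytes)
    body["sig_a"] = hmac.new(key_a, msg, hashlib.sha256).hexdigest()
    body["sig_b"] = hmac.new(key_b, msg, hashlib.sha256).hexdigest()
    log.append(body)
    return log
\end{verbatim}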
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{imgs/w3}
\caption{Sub-model based on Linked stream of events}
\label{fig:w3}
\end{figure}
This would imply that some business units would have to act as signing authorities to their counter-parties in external organizations. By acting like that we would achieve the benefits of a distributed ledger but without being able to transfer assets.
Asserting that assets cannot be transferred with only a distributed ledger can be contentious, since both parties would indeed have proof that the transaction has been agreed. However, even in the case where the transaction implies a transfer of assets, for example when signing an insurance policy, this would not imply that any of the parties actually pays the stipulated amounts; we would have no guarantee that the agreed payments, such as the claims, would be made. For that purpose we would need to resort to a third-party escrow or, even better, to a Smart Contract running in a blockchain with distributed consensus. It is arguable whether this blockchain would have to be public or whether a permissioned distributed ledger would be able to achieve this purpose.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{imgs/w4}
\caption{Model for State Channel and Smart
Contract operation on Public Blockchain}
\label{fig:PCA_WIKI}
\end{figure}
Here we assume a public blockchain capable of running Smart Contracts. In fact, the structure described here follows an existing approach, the so-called State Channel~\cite{statechannel}, which allows untrusted parties to exchange transactions with the possibility of resorting to an independent escrow held in a Smart Contract operating on a public blockchain in case one of the parties does not fulfill its commitment.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{imgs/w5}
\caption{Architecture of Parsec Infrastructure Layer}
\label{fig:w5}
\end{figure}
Finally, we obtain the big picture of how a state-of-the-art architecture for a financial system should look. It would have to: (1) be based on transaction logs, (2) include a replication mechanism for such logs between business units, (3) enhance the logs to become shared ledgers when dealing with external untrusted parties, and (4) add state channel support for asset transfer.
\section{Implementation}
We implemented an infrastructure layer named Parsec\footnote{Code Available at: https://github.com/quanonblocks/Parsec} using Apache Kafka, a distributed data system that can be used as an event sourcing unified log tool. It can be seen both as a queue and as a data store. We intend to implement only the very minimum of additions to the already existing functionality, based on~\cite{uber,LinkedIn,netflix}, so that it can work as a State Channel. For that purpose, a fundamental tool of the Apache Kafka~\cite{kafka} ecosystem will be Kafka Streams~\cite{kafka_stream}, as it already offers many of the functionalities we need.
We integrated a few major functions on top of the state channel: (1) scalability and replicability are provided by Apache Kafka and the Kafka mirror maker, (2) the ability to maintain state at scale and to interact with a Smart Contract or chaincode~\cite{chaincode} is added by Parsec on top of Kafka Streams, and (3) guarantees of an exact sequence of hash pointers and transaction verification with cryptographic signatures are developed in Parsec.
For the above functionality, we provide a library running on top of Kafka Streams under the fast data solution system Landoop.
\subsection{Parsec Nodes}
Parsec nodes are the job instances of the Parsec library implementing the Parsec Software Development Kit (SDK). The Parsec library is developed on top of the Kafka Streams library with several additions of functionality, such as support for a catalogue of various kinds of transactions and keeping an updated state for all unique IDs of the transaction topic (Kafka topic). Also, we have integrated Hashed Timelocks~\cite{hashed_time} to interact with Ethereum smart contracts~\cite{smart_contract} for the Kafka topic partitions under a node's supervision, for scheduled settlements or on request of the control topic.
All functionality of each Parsec job relates only to the partitions under its control.
\subsection{Parsec Transactions}
A basic catalogue of predefined transactions is implemented, which can be extended with additional custom transactions.
The micropayment transaction~\cite{micropayment} performs the following actions (a minimal sketch of this bookkeeping follows the list):
\begin{itemize}
\item Keep a sliding array of the latest $n$ transactions in order to reconstruct the exact order of the transactions from the transaction hash pointers. This is performed in order to cope with out-of-order or repeated messages.
\item Calculate the balance after each transaction.
\item Commit the balance to the Smart Contract in the Ethereum network at predefined intervals. Intervals are defined by the incremental transaction sequence number modulo $m$.
\item Commit the balance to the smart contract in the Ethereum network at predefined timeouts. The timeout duration can be specified by the \textit{checkpoint\_timeout} parameter.
\item Under the request of the control topic, perform an unscheduled settlement request to the smart contract.
\item Under the request of the control topic, perform a disputed settlement request to the smart contract by means of transferring the signed transactions since the last settlement.
\end{itemize}
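The following Python fragment sketches the core of this bookkeeping (sliding window, balance update and the interval-based checkpoint); the settlement call into the Smart Contract is only indicated by a hypothetical callback and does not use a real Ethereum API.
\begin{verbatim}
from collections import deque

class MicropaymentState:
    def __init__(self, n=100, m=50, settle_callback=None):
        self.window = deque(maxlen=n)  # last n transactions (reordering, duplicates)
        self.balance = 0.0
        self.seq = 0
        self.m = m                     # checkpoint every m transactions
        self.settle = settle_callback  # hypothetical call into the smart contract

    def on_transaction(self, tx):
        if any(t["hash"] == tx["hash"] for t in self.window):
            return                     # drop repeated message
        self.window.append(tx)
        self.balance += tx["amount"]   # balance after each transaction
        self.seq += 1
        if self.seq % self.m == 0 and self.settle is not None:
            self.settle(self.balance)  # scheduled settlement checkpoint
\end{verbatim}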
\section{Conclusion \& Future Works}
This paper addresses the challenges of scaling, fast transaction processing and verification. Our proposed model contains the Parsec simulator, which generates the protocol messages on a Kafka cluster, with fully functional API support for existing cryptocurrencies like Bitcoin~\cite{bitcoin} and Ethereum~\cite{ethereum}, accessible via the Avro User Interface\footnote{Avro UI: https://github.com/kaaproject/avro-ui}. We also provide support to check schemas~\cite{schemas} and protocol messages with the Coyote test suite\footnote{Coyote Tester: https://github.com/Landoop/coyote}. For future work, we are looking forward to providing additional ways to support the State Management functionality.
\section*{ACKNOWLEDGMENT}
This work is supported by Quanonblocks LLP.
\section{Introduction}
\label{sec_introduction}
Standard models for random perturbation of partial differential equations have been the cylindrical Brownian motion or the Gaussian space-time white noise for the last 50 years. Both models are equivalent to a certain extent and lead to comparable dynamics, see Dalang, Quer-Sardanyons in \cite{DalangQuer}. In the next step of generalisation to non-Gaussian and discontinuous random perturbations, the choice of modelling the noise is far from being canonical. Gaussian space-time white noise
can be generalised in different ways, which are found under varying names in the literature such as {\em L\'evy-type noise} or {\em L\'evy space-time white noise}, following either an approach based on a L\'evy-It{\^o} decomposition or on random measures. In contrast, there is a natural generalisation of cylindrical Brownian motions to {\em cylindrical L\'evy processes} by a concept introduced by Applebaum and Riedle in \cite{Applebaum_Riedle}, based on the classical theory of cylindrical measures and cylindrical random variables.
Stochastic partial differential equations (SPDEs) driven by a noise which originates
from a generalisation of the Gaussian space-time white noise are extensively considered in the literature, including models with multiplicative noise; see e.g.\ Peszat and Zabczyk~\cite{Peszat_Zabczyk}. However, to the best of our knowledge, SPDEs driven by a cylindrical L\'evy process are only considered in the case of purely additive noise; see for example Priola and Zabczyk \cite{Priola_Zabczyk_1st}. The purpose of this work is to close this gap and to establish existence and uniqueness of a solution to a stochastic evolution equation driven by a multiplicative cylindrical L\'evy process.
More specifically, in the variational approach we consider an evolution equation of the form
\begin{equation}\label{SPDE_introduction}
\, \mathrm{d}X(t) = F(X(t)) \, \mathrm{d}t + G(X(t)) \, \mathrm{d}L(t),
\end{equation}
where $L$ is a cylindrical L{\'e}vy process. The coefficients $F$ and $G$ are given operators and are assumed to satisfy standard assumptions such as monotonicity and coercivity. The variational approach, first in a deterministic and then in a stochastic setting, goes back to the works by Bensoussan, Lions and Pardoux; a brief history of the approach can be found in \cite{Krylov_Rozovskii}. Existence results for equations of the form \eqref{SPDE_introduction} but driven by a Brownian motion were derived in Krylov and Rozovskii \cite{Krylov_Rozovskii}. In a series of publications \cite{Gyongy, Gyongy_Krylov_I, Gyongy_Krylov_II}, Gy\"ongy and Krylov generalised these results to semimartingales as driving noises. The variational approach has been extended in many directions. It is especially worth to mention the works of Liu and R\"ockner \cite{Liu_Rockner_book} and Pr\'ev\^{o}t and R\"ockner \cite{Prevot_Rockner}, where the assumptions on the coefficients were relaxed, such that classical models for example from fluid dynamics are captured by the framework. Recently, Brze\'zniak, Liu and Zhu \cite{Brzezniak_Liu_Zhu} have considered equations of the form \eqref{SPDE_introduction} with locally monotone coefficients and driven by a L\'evy-type noise, i.e.\ a noise which originates from a generalisation of the Gaussian space-time white noise.
Our article serves two purposes: firstly, we show the existence of a solution of \eqref{SPDE_introduction} for arbitrary cylindrical L\'evy processes but with finite second moments. Secondly, we derive this existence result for a subclass of cylindrical L\'evy processes without finite moments. The existence of a solution in the latter case cannot be necessarily anticipated from the first case or other results: it is known that the irregular jumps of a cylindrical L\'evy process, in particular in the case without moments, can cause completely novel phenomena; see e.g.\ \cite{Kumar_Riedle} or \cite{Priola_Zabczyk_1st}. In fact, our assumptions on the cylindrical L\'evy process in the second case still require certain regularity of the jumps and leaves the question open whether for all cylindrical L\'evy processes there exists a solution of \eqref{SPDE_introduction}.
In the case of cylindrical L\'evy processes with finite second moments one can follow the standard approach for the proof of existence of a solution by using a Galerkin approximation and a stochastic version of Lion's theorem in \cite{Gyongy_Krylov_II}. However, the nature of cylindrical processes causes
some technical challenges, in particular for the involved stochastic integrals with respect to a cylindrical L\'evy process (see Section \ref{sec_properties}). For the second case of cylindrical L\'evy processes without finite moments, we restrict ourselves to a subclass whose members enjoy a L\'evy-It{\^o} decomposition in a cylindrical sense. But the unbounded potential of cylindrical L\'evy processes prevents us from following a classical approach. Instead, we compensate the unbounded potential by introducing weights in each dimension accordingly. We show that many examples of cylindrical L\'evy processes allow such a compensation.
The organisation of the paper is as follows. In Section \ref{sec_preliminaries}, we collect some preliminaries and
give the definition of cylindrical L\'evy processes. In Section \ref{sec_properties}, we recall the definition of the stochastic integral in the square integrable setting and we prove new results concerning this integral: characterisation of its angle bracket process and stability under stopping.
In Section \ref{sec_existence_square_integrable}, we derive existence and uniqueness of solution to \eqref{SPDE_introduction} in the weakly square-integrable case. In the last Section \ref{sec_jumps}, we firstly consider some properties of jump times of cylindrical L\'evy processes and extend the definition of the stochastic integral to the case of cylindrical L\'evy processes under consideration, which might have unbounded moments. The final part of this section is devoted to establish the existence of a unique solution of \eqref{SPDE_introduction} driven by a cylindrical L\'evy process
from this subclass.
\section{Preliminaries}
\label{sec_preliminaries}
Let $U$ and $H$ be separable Hilbert spaces with norm $\normU{\cdot}$ and
inner product $\scalarU{\cdot}{\cdot}$ and analogously for $H$. We fix an orthonormal basis $(e_j)$ of $U$ and $(f_j)$ of $H$. The Borel $\sigma$-algebra is denoted by $\Borel(U)$.
We use $L(U,H)$ to denote the space of bounded operators from $U$ to $H$ equipped with the operator norm. The subspace of Hilbert-Schmidt operators from $U$ to $H$ is denoted by $L_{\rm HS}(U,H)$ and it is equipped with the norm
$$\HSnorm{\phi}^2 := \sum_{j=1}^\infty \normH{\phi e_j}^2.$$
Let $(S,\mathcal{S},\mu)$ be a $\sigma$-finite measure space. We denote
by $L^p(S;U)$ the Bochner space of all equivalence classes of measurable functions $f\colon S\to U$ which are $p$-th integrable with respect to $\mu$ equipped with the usual norm. We use $L^0(S;U)$ to denote the space of all equivalence classes of measurable functions $f\colon S\to U$ with the topology induced by convergence in measure. The underlying measure $\mu$ and $\sigma$-algebra $\mathcal{S}$ are always obvious from the context, e.g.\ if $S=[0,T]\times \Omega$ then $\mu= \, \mathrm{d}t\otimes P$ and ${\mathcal S}=\Borel([0,T])\otimes {\mathcal F}$, where $\, \mathrm{d}t$ is the Lebesgue measure on $[0,T]$ and $(\Omega,\mathcal{F},P)$ is a probability space.
For a subset $\Gamma$ of $U$, sets of the form
\[ C(u_1, ... , u_n; B) :=\{ u \in U: (\langle u, u_1 \rangle, ... , \langle u, u_n \rangle) \in B\},\]
for $u_1, ... , u_n \in \Gamma$ and $B\in \Borel(\mathbb{R}^n)$ are called {\em cylindrical sets with respect to $\Gamma$}; the set of all these cylindrical sets is denoted by $\mathcal{Z}(U,\Gamma)$ and it is an algebra. If $\Gamma$ is finite then it is a $\sigma$-algebra. A function $\lambda \colon \mathcal{Z}(U,U) \to [0,\infty]$ is called a \emph{cylindrical measure}, if for each finite subset $\Gamma \subseteq U$ the restriction of $\lambda$ on the $\sigma$-algebra $\mathcal{Z}(U, \Gamma)$ is a measure.
A cylindrical measure is called a cylindrical probability measure if $\lambda(U) =1$.
A \emph{cylindrical random variable} $Z$ in $U$ is a linear and continuous map
\[Z\colon U \rightarrow L^0(\Omega; \mathbb{R}).\]
Each cylindrical random variable $Z$ defines a cylindrical probability measure $\lambda$ by
\begin{align*}
\lambda\colon \mathcal{Z}(U,U) \to [0,1],\qquad
\lambda(C)=P\big( (Zu_1,\dots, Zu_n)\in B\big),
\end{align*}
for cylindrical sets $C=C(u_1, ... , u_n; B)$. The cylindrical probability measure $\lambda$ is called the {\em cylindrical distribution} of $Z$. The characteristic function of a cylindrical random variable $Z$ is defined by
\[\phi_{Z}\colon U \rightarrow \mathbb{C}, \qquad \phi_{Z}(u)=\mathbb{E} [\exp (iZu)],\]
and it uniquely determines the cylindrical distribution of $Z$.
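For illustration, every genuine $U$-valued random variable $X\colon\Omega\to U$ induces a cylindrical random variable by
\begin{align*}
Z\colon U\to L^0(\Omega;\mathbb{R}), \qquad Zu:=\scalar{X}{u},
\end{align*}
in which case $\phi_Z$ coincides with the ordinary characteristic function of $X$; in general, however, a cylindrical random variable need not be induced by a genuine random variable in this way.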
Let $(\mathcal{F}_t : t \geq 0)$ be a filtration satisfying the usual conditions. A family of cylindrical random variables $L(t)\colon U \to L^0(\Omega;\mathbb{R})$, $t\geq 0$, is called a \emph{cylindrical L\'evy process} if for any $n\in \mathbb{N}$ and $u_1, \ldots ,u_n \in U$ we have that
$$\big((L(t)u_1,\ldots ,L(t)u_n) : t \geq 0\big)$$
is a L\'evy process in $\mathbb{R}^n$ with respect to the filtration $(\mathcal{F}_t)$.
A version of this definition appeared for the first time in Applebaum and Riedle \cite{Applebaum_Riedle} with further modifications in \cite{Riedle_L2}. Here, we include a filtration in the definition.
The characteristic function of $L(t)$ can be written in the form
\begin{equation*}
\phi_{L(1)}(u)
= \exp \left( i p(u) - \tfrac12 q(u) + \int_U \left( e^{i \langle u,x \rangle} -1 -i\langle u,x \rangle \1_{B_\mathbb{R}}(\langle u,x \rangle) \right) \nu(\mathrm{d}x) \right);
\end{equation*}
see \cite[Th.\ 2.7]{Applebaum_Riedle} or \cite[Th.\ 3.4]{Riedle_infinitely}.
In the above formula, $B_\mathbb{R}$ is the closed unit ball in $\mathbb{R}$,
$p\colon U \to \mathbb{R}$ is a continuous function with $p(0)=0$,
$q\colon U \to \mathbb{R}$ is a quadratic form,
and $\nu$ is a cylindrical measure on $\mathcal{Z}(U,U)$ satisfying
\begin{equation}\label{cylindrical_Levy_measure}
\int_U \left( \scalar{u}{v}^2 \wedge 1 \right) \, \nu(\mathrm{d}v) < \infty \qquad \mathrm{for\;all\;}u \in U.
\end{equation}
A cylindrical measure satisfying \eqref{cylindrical_Levy_measure} is called a \emph{cylindrical L\'evy measure}.
We say that $L$ is \emph{weakly square-integrable} or that it has \emph{weak second moments} if $\mathbb{E} \big[ \@ifstar{\oldabs}{\oldabs*}{L(t)u}^2 \big] < \infty$ for all $t\geq 0$ and $u\in U$.
In this case, it follows from the closed graph theorem that $L(t)\colon U \to L^2(\Omega;\mathbb{R})$ is continuous for each $t\geq 0$.
Similarly $L$ is said to be \emph{weakly mean-zero} if $\mathbb{E} \left[ L(t)u \right] = 0$ for $t\geq 0$ and $u \in U$. For a weakly mean-zero cylindrical L\'evy process $L$, the \emph{covariance operator} $Q\colon U\to U$ is a bounded linear operator defined by $\langle Qu,v \rangle = \mathbb{E} \big[ L(1)u L(1)v \big]$ for each $u,v\in U$.
\section{Properties of the stochastic integral}
\label{sec_properties}
A theory of stochastic integration with respect to weakly square-integrable cylindrical L\'evy processes is introduced in \cite{Riedle_L2}. The approach is based on the observation that the cylindrical increments of a cylindrical L\'evy process can be radonified by a {\em random} Hilbert-Schmidt mapping. More specifically, for $0\le s\le t$ let $\Phi\colon \Omega\to L_{\rm HS}(U,H)$ be a simple, ${\mathcal F}_s$--measurable random variable of the form
\begin{equation}\label{simple_random_variable}
\Phi(\omega) = \sum_{j=1}^{m} \1_{A_{j}}(\omega) \phi_{j}
\end{equation}
for deterministic operators $\phi_{j}\in L_{\rm HS}(U,H)$ and sets $A_{j} \in \mathcal{F}_{s}$ for $j=1,\dots, m$. The Hilbert-Schmidt property implies that for each $j\in \{1,\dots, m\}$ there exists a {\em genuine} random variable $J_{s,t}\phi_j\colon \Omega\to H$ such that $\big(L(t)-L(s)\big)(\phi_{j}^\ast h)=\scalar{J_{s,t}\phi_{j}}{h}$ for all $h\in H$. By linearity this enables us to define a random variable
$J_{s,t}\Phi\colon \Omega\to H$ satisfying
\begin{align}\label{eq.J-simple}
\scalar{J_{s,t}\Phi}{h}= \sum_{j=1}^m \1_{A_j}\big(L(t)-L(s)\big)(\phi_j^\ast h)
\qquad\text{for all }h\in H.
\end{align}
An argument based on independent increments of $L$ and continuity of $L(t):U\to L^2(\Omega,\mathbb{R})$ enables the author of \cite{Riedle_L2} to derive continuity of $J_{s,t}$ and to uniquely extend this definition to each ${\mathcal F}_s$--measurable random variable
$\Phi\colon \Omega\to L_{\rm HS}(U,H)$ such that one can define a genuine, $H$-valued random variable by
\begin{align*}
\big(L(t)-L(s)\big)\Phi^\ast:=J_{s,t}\Phi.
\end{align*}
By beginning with simple stochastic processes $(\Psi(t):\, t\in [0,T])$ of the form
\begin{equation}\label{simple_process}
\Psi(t) = \Phi_0 \1_{\{0\}}(t)+ \sum_{k=0}^{N-1} \Phi_k \mathbbm{1}_{(t_k,t_{k+1}]}(t)
\qquad\text{for }t\in [0,T],
\end{equation}
where $0=t_0<t_1< \cdots < t_N=T$ and each $\Phi_k$ is an $\mathcal{F}_{t_k}$--measurable $L_{\rm HS}(U,H)$-valued, square-integrable random variable,
one can define the stochastic integral
$
\int_0^T \Psi(s)\, \mathrm{d}L(s)
$
for all stochastic processes in the space
$$\Lambda(U,H)
:= \bigg\{ \Psi\colon [0,T]\times\Omega \to L_{\rm HS}(U,H) : \text{predictable, } \mathbb{E} \left[ \int_0^T \!\! \HSnorm{\Psi(t)}^2 \, \mathrm{d}t \right] < \!\! \infty \bigg\}$$
by mainly following the standard procedure.
The norm in the space $\Lambda(U,H)$ is denoted with $\@ifstar{\oldnorm}{\oldnorm*}{\cdot}_\Lambda$.
We define $\Lambda_0(U,H)$ as the subspace of $\Lambda(U,H)$ which consists of all simple stochastic processes. Furthermore, the space of all simple stochastic processes of the form \eqref{simple_process} where each $\Phi_k\colon \Omega\to L_{\rm HS}(U,H)$ is of the form \eqref{simple_random_variable}, is denoted by $\Lambda_0^S(U,H)$.
Let $L$ be a weakly mean-zero, weakly square-integrable cylindrical L\'evy process. Corollary~4.4 in \cite{Riedle_L2} guarantees that for each $\Psi\in \Lambda(U,H)$ there exists an $H$-valued, square-integrable martingale $\big(I(\Psi)(t):\, t\in [0,T]\big)$ with c\`adl\`ag trajectories which is a modification of
\begin{align*}
\left(\int_0^t \Psi(s)\, \mathrm{d}L(s):\, t\in [0,T]\right).
\end{align*}
If $L$ is a genuine L\'evy process, formulas for the angle bracket processes are well known; see \cite[Cor. 8.17]{Peszat_Zabczyk}. In the cylindrical case, we will derive the corresponding formulas in the following by using the specific construction of the stochastic integral in \cite{Riedle_L2}. Recall that for a square-integrable, $H$-valued martingale $M$, the angle bracket process $\langle M,M \rangle$ is defined as the unique increasing, predictable process such that $\big(\@ifstar{\oldnorm}{\oldnorm*}{M(t)}^2 - \langle M,M \rangle (t) : t\geq 0\big)$ is a martingale. The operator-valued angle bracket $\langle\langle M,M \rangle\rangle$ is defined as the unique increasing, predictable process taking values in the space of non-negative, nuclear operators such that
$\big(M(t) \otimes M(t) - \langle\langle M,M \rangle\rangle(t) : t\geq 0 \big)$
is a martingale; see \cite{Metivier}.
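The two brackets are linked by taking traces: for a square-integrable, $H$-valued martingale $M$ one has $\langle M,M \rangle(t) = \Tr \langle\langle M,M \rangle\rangle(t)$. In particular, Parts (a) and (b) of the following proposition are consistent with each other, since for each $s\in[0,T]$
\begin{align*}
\Tr\big( \Psi(s)Q\Psi(s)^* \big)
= \sum_{k=1}^\infty \normU{Q^{1/2}\Psi^*(s)f_k}^2
= \HSnorm{\Psi(s)Q^{1/2}}^2.
\end{align*}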
\begin{Pro}
\label{pro_angle_bracket}
Let $L$ be a weakly mean-zero, weakly square-integrable cylindrical L\'evy process with covariance operator $Q$ and let $\Psi\in \Lambda$. For
$$I(t) = \int_0^t \Psi(s) \, \mathrm{d}L(s) \qquad \text{for } t \in [0,T],$$
we have:
\begin{enumerate}
\item[{\rm (a)}]
$\displaystyle
\big\langle I(\Psi),I(\Psi) \big\rangle (t) = \int_0^t \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s\;$ for all $t\in [0,T]$ $P$-a.s.
\item[{\rm (b)}]
$ \displaystyle
\big\langle\big\langle I(\Psi),I(\Psi) \big\rangle\big\rangle(t) = \int_0^t \Psi(s)Q\Psi(s)^* \, \mathrm{d}s\;$ for all $t\in [0,T]$ $P$-a.s.
\end{enumerate}
\end{Pro}
\begin{proof} (a).
Let $\Psi\in \Lambda_0^S(U,H)$ be of the form \eqref{simple_process}
with each $\Phi_k\colon \Omega\to L_{\rm HS}(U,H)$ for $k=0,\dots, N-1$ equal to
\begin{align*}
\Phi_k = \sum_{j=1}^{m_{k}} \1_{A_{k,j}} \phi_{k,j}
\end{align*}
for deterministic operators $\phi_{k,j}\in L_{\rm HS}(U,H)$ and sets $A_{k,j} \in \mathcal{F}_{t_k}$ for $j=1,\dots, m_{k}$.
We first prove that for each $h\in H$, the stochastic process
\begin{equation}
\label{statement_is_a_martingale}
\left( \left< \int_0^t \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 - \int_0^t \normU{Q^{1/2}\Psi^*(r)h}^2 \, \mathrm{d}r :t\in [0,T] \right)
\end{equation}
is a square-integrable martingale. For this purpose, we fix $0 \leq s < t \leq T$. Without loss of generality we can assume that $s=t_{k_0}$ and $t=t_{N_0}$ for some $k_0 < N_0$ in $\{0,\dots,N\}$. The very definition of the stochastic integral and
of the radonified increments yield
\begin{align*}
&\left< \int_0^t \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 \\
&\qquad = \left( \sum_{k=0}^{N_0-1} \big(L(t_{k+1})-L(t_k)\big)(\Phi_k^*h) \right)^2 \\
&\qquad = \left( \sum_{k=0}^{N_0-1} \sum_{j=1}^{m_k} \1_{A_{k,j}} \big(L(t_{k+1})-L(t_k)\big)(\phi_{k,j}^*h) \right)^2 \\
&\qquad = \sum_{k,n=0}^{N_0-1} \sum_{i=1}^{m_k} \sum_{j=1}^{m_n} \1_{A_{k,i}} \1_{A_{n,j}} \big(L(t_{k+1})-L(t_k)\big)(\phi_{k,i}^*h) \big(L(t_{n+1})-L(t_n)\big)(\phi_{n,j}^*h).
\end{align*}
Independent increments of $L$ imply
\begin{align*}
&\mathbb{E} \left[ \left< \int_0^t \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 \Big\vert \mathcal{F}_s \right]\\
&\qquad = \sum_{k,n=0}^{k_0-1} \sum_{i=1}^{m_k} \sum_{j=1}^{m_n} \1_{A_{k,i}} \1_{A_{n,j}} \big(L(t_{k+1})-L(t_k)\big)(\phi_{k,i}^*h) \big(L(t_{n+1})-L(t_n)\big)(\phi_{n,j}^*h)\\
&\qquad\qquad + \mathbb{E}\left[ \sum_{k=k_0}^{N_0-1} \sum_{i=1}^{m_k} \1_{A_{k,i}} (t_{k+1}-t_k) \big\langle Q \phi_{k,i}^*h,\phi_{k,i}^*h \big\rangle _U \Big\vert \mathcal{F}_s \right]\\
&\qquad = \left< \int_0^s \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 + \mathbb{E}\left[ \int_s^t \big\langle Q \Psi^*(r)h,\Psi^*(r)h \big\rangle_U \, \mathrm{d}r \Big\vert \mathcal{F}_s \right].
\end{align*}
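Here we used that the sets $A_{k,1},\dots, A_{k,m_k}$ may be assumed to be pairwise disjoint for each $k$, so that all mixed terms with $k\vee n\geq k_0$ vanish: for $k\neq n$ this follows from the independence of the increments and the weak mean-zero property, and for $k=n\geq k_0$ and $i\neq j$ from $\1_{A_{k,i}}\1_{A_{k,j}}=0$. The remaining diagonal terms are computed by the tower property of conditional expectation together with the stationarity of the increments, which yields
\begin{align*}
\mathbb{E}\Big[ \big( (L(t_{k+1})-L(t_k))u \big)^2 \Big]
= (t_{k+1}-t_k)\, \mathbb{E}\big[ (L(1)u)^2 \big]
= (t_{k+1}-t_k) \langle Qu,u \rangle
\qquad\text{for all } u\in U.
\end{align*}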
Consequently, we arrive at
\begin{multline*}
\mathbb{E} \left[ \left< \int_0^t \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 -\int_0^t \normU{Q^{1/2} \Psi^*(r)h}^2 \, \mathrm{d}r \Big\vert \mathcal{F}_s \right] \\
= \left< \int_0^s \Psi(r) \, \mathrm{d}L(r) ,h \right>_H^2 -\int_0^s \normU{Q^{1/2} \Psi^*(r)h}^2 \, \mathrm{d}r,
\end{multline*}
which establishes that \eqref{statement_is_a_martingale} is a martingale.
For the next step, we take $h=f_j$ in \eqref{statement_is_a_martingale} to conclude for each $n\in\mathbb{N}$ that
$$M_n(t):= \sum_{j=1}^n \left( \left< \int_0^t \Psi(s) \, \mathrm{d}L(s), f_j \right>_H^2 - \int_0^t \normU{Q^{1/2}\Psi^*(s)f_j}^2 \, \mathrm{d}s \right)$$
defines a martingale $(M_n(t):\, t\in [0,T])$. Parseval's identity guarantees that for each $t\in [0,T]$, the random variables $M_n(t)$ converge to $M(t)$ $P$-a.s., where
$$M(t)
:= \normH{\int_0^t \Psi(s) \, \mathrm{d}L(s)}^2 - \int_0^t \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s.$$
Since $M_n(t)$ is dominated by the integrable random variable
\begin{equation*}
Y(t):= \normH{\int_0^t \Psi(s) \, \mathrm{d}L(s)}^2 + \int_0^t \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s,
\end{equation*}
we conclude that $\{M_n(t):\, n\in\mathbb{N}\}$ is uniformly integrable.
Thus, $M_n(t)$ converges to $M(t)$ in $L^1(\Omega;\mathbb{R})$ and it follows that $(M(t):\, t\in [0,T])$ is a martingale.
For an arbitrary $\Psi\in \Lambda_0(U,H)$, Lemma 4.2 in \cite{Riedle_L2} guarantees that we can find a sequence $(\Psi_n) \subseteq \Lambda_0^S(U,H)$ such that $\@ifstar{\oldnorm}{\oldnorm*}{\Psi-\Psi_n}_{\Lambda} \to 0$ and $\int_0^t \Psi_n(r) \, \mathrm{d}L(r) \to \int_0^t \Psi(r) \, \mathrm{d}L(r)$ in mean-square for all $t\in [0,T]$. We obtain for all $n\in\mathbb{N}$ and $0\le s\le t$ that
\begin{multline}
\label{martingale_poperty_general_psi}
\mathbb{E} \left[ \normH{\int_0^t \Psi_n(r) \, \mathrm{d}L(r)}^2 - \int_0^t \HSnorm{\Psi_n(r)Q^{1/2}}^2 \, \mathrm{d}r \Bigg| \mathcal{F}_s \right] \\
= \normH{\int_0^s \Psi_n(r) \, \mathrm{d}L(r)}^2 - \int_0^s \HSnorm{\Psi_n(r)Q^{1/2}}^2 \, \mathrm{d}r.
\end{multline}
Consequently, equation \eqref{martingale_poperty_general_psi} is also satisfied if $\Psi_n$ is replaced by $\Psi$ in $\Lambda_0$. Since $\Lambda_0$ is dense in $\Lambda$ according to \cite[Cor. 4.7]{Da_Prato_Zabczyk}, we can
repeat the above approximating procedure to conclude that equation \eqref{martingale_poperty_general_psi} is also satisfied if $\Psi_n$ is replaced by $\Psi$ in $\Lambda$, which completes the proof of Part (a).
(b). Part (a), or more specifically
\eqref{statement_is_a_martingale} for $h=f_i+f_j$ and $h=f_i-f_j$, yields that
\begin{align*}
&R(f_i,f_j)(t)\\
:&= \left< \int_0^t \Psi(s)\, \mathrm{d}L(s),f_i\right>_H \left< \int_0^t \Psi(s)\, \mathrm{d}L(s),f_j\right>_H - \int_0^t \scalarU{Q^{1/2}\Psi(s)^* f_i}{Q^{1/2}\Psi^*(s) f_j} \, \mathrm{d}s \\
& = \frac14 \left( \Big\langle \int_0^t \Psi(s)\, \mathrm{d}L(s) , f_i+f_j \Big\rangle^2 - \int_0^t \normU{Q^{1/2}\Psi^*(s)(f_i+f_j)}^2 \, \mathrm{d}s \right) \\
&\qquad\qquad - \frac14 \left( \Big\langle \int_0^t \Psi(s)\, \mathrm{d}L(s) , f_i-f_j \Big\rangle^2 - \int_0^t \normU{Q^{1/2}\Psi^*(s)(f_i-f_j)}^2 \, \mathrm{d}s \right)
\end{align*}
defines a martingale. Define for each $n\in\mathbb{N}$ a martingale $M_n$ by
\begin{align*}
M_n(t) &:= \sum_{i,j=1}^n R(f_i,f_j)(t)f_i\otimes f_j
\qquad\text{for all }t\in [0,T].
\end{align*}
We show that $M_n(t)$ converges to $M(t)$ in $L^1(\Omega;L_1(H))$
for each $t\in [0,T]$, where
$$M(t) := I(\Psi)(t) \otimes I(\Psi)(t) - \int_0^t \Psi(s)Q\Psi^*(s) \, \mathrm{d}s
\quad\text{and}\quad
I(\Psi)(t):=\int_0^t \Psi(s)\, \mathrm{d}L(s).
$$
Since the operators in the definition of $M_n(t)$ and $M(t)$ are non-negative we have
\begin{align*}
&\@ifstar{\oldnorm}{\oldnorm*}{M(t)-M_n(t)}_{L_1(H)} \notag \\
&\leq \Tr \left( I(\Psi)(t) \otimes I(\Psi)(t) - \sum_{i,j=1}^n \scalarH{I(\Psi)(t)}{f_i}\scalarH{I(\Psi)(t)}{f_j} f_i \otimes f_j \right) \\
&\qquad + \Tr \left( \int_0^t \Psi(s) Q \Psi^*(s) \, \mathrm{d}s - \sum_{i,j=1}^n \int_0^t \scalarU{Q^{1/2}\Psi^*(s) f_i}{Q^{1/2}\Psi^*(s)f_j} \, \mathrm{d}s f_i \otimes f_j\right) \notag \\
& =
\sum_{k=n+1}^\infty \left( \langle I(\Psi)(t),f_k\rangle_H^2
+\int_0^t \@ifstar{\oldnorm}{\oldnorm*}{Q^{1/2}\Psi^\ast(s)f_k}_U^2\, \mathrm{d}s \right).
\end{align*}
Lebesgue's theorem shows that $E[\@ifstar{\oldnorm}{\oldnorm*}{M(t)-M_n(t)}_{L_1(H)} ]$ converges to $0$, which establishes that $(M(t):\, t\in [0,T])$ is a martingale.
\end{proof}
Recall for the following result that $I(\Psi)$ defines a square-integrable martingale in $H$ for each $\Psi\in\Lambda(U,H)$. Stochastic integration with respect to such martingales is introduced for example in \cite{Metivier} or \cite{Peszat_Zabczyk}.
\begin{Lem}\label{le.brackek-intL}
Let $L$ be a weakly mean-zero, weakly square-integrable cylindrical L\'evy process with covariance operator $Q$ and let $\Psi\in \Lambda(U,H)$. If $V$ is another separable Hilbert space and $\Theta$ is an $L(H,V)$-valued stochastic process for which the stochastic integral
$$N(t):= \int_0^t \Theta(s) \,{\rm d} I(\Psi)(s)\qquad\text{for }t\in [0,T],$$
exists in the sense of \cite[Sec.\ 8.2]{Peszat_Zabczyk}, then we have
\begin{equation*}
\langle N,N \rangle(t) = \int_0^t \HSnormV{\Theta(s)\big( \Psi(s) Q \Psi^*(s)\big)^{1/2}}^2 \, \mathrm{d}s.
\end{equation*}
\end{Lem}
\begin{proof}
Since $I(\Psi)$ is an $H$-valued martingale, Theorem 8.2 in \cite{Peszat_Zabczyk} guarantees that there exists the so-called
martingale covariance of $I(\Psi)$, that is a
predictable stochastic process $(C(t) : t\geq 0)$ in the space of symmetric, non-negative, nuclear operators on $H$, such that
\begin{equation*}
\big\langle \big\langle I(\Psi),I(\Psi)\big\rangle \big\rangle(t) = \int_0^t C(s) \, \mathrm{d}\big \langle I(\Psi),I(\Psi)\big \rangle (s)\qquad\text{for all $t\in [0,T]$ $P$-a.s.}
\end{equation*}
By Part (a) of Proposition \ref{pro_angle_bracket} we conclude that
$$ \big\langle \big\langle I(\Psi),I(\Psi)\big\rangle \big\rangle(t)= \int_0^t C(s)\HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s.$$
By comparing with the formula in Part (b) of Proposition \ref{pro_angle_bracket} we obtain
\begin{align*}
C(s) =
\begin{cases}\displaystyle
\frac{\Psi(s) Q \Psi^*(s)}{\HSnorm{\Psi(s) Q^{1/2}}^2}, &\text{if } \Psi(s) Q^{1/2} \neq 0\\
0, & \text{otherwise.}
\end{cases}
\end{align*}
Applying Theorem 8.7.(iv) in
\cite{Peszat_Zabczyk} and Part (a) of Proposition \ref{pro_angle_bracket} results in
\begin{align*}
\left< N,N \right>(t)
&= \int_0^t \HSnormV{\Theta(s)\left( \frac{\Psi(s) Q \Psi^*(s)}{\HSnorm{\Psi(s) Q^{1/2}}^2} \right)^{1/2}}^2 \, \mathrm{d} \big \langle I(\Psi),I(\Psi)\big \rangle (s) \\
&= \int_0^t \HSnormV{\Theta(s)\left( \frac{\Psi(s) Q \Psi^*(s)}{\HSnorm{\Psi(s) Q^{1/2}}^2} \right)^{1/2}}^2 \HSnorm{\Psi(s) Q^{1/2}}^2 \, \mathrm{d}s\\
&=\int_0^t \HSnormV{\Theta(s)\big( \Psi(s) Q \Psi^*(s)\big)^{1/2}}^2 \, \mathrm{d}s,
\end{align*}
which completes the proof.
\end{proof}
\begin{Lem}
\label{lem_linearity}
Let $0\le s\le t$ and $X\colon\Omega\to\mathbb{R}$ be an $\mathcal{F}_s$--measurable, uniformly bounded random variable. Then we have for each $\Psi \in \Lambda(U,H)$ that
$$\int_s^t X\Psi(r) \, \mathrm{d}L(r) = X \int_s^t \Psi(r) \, \mathrm{d}L(r)
\qquad\text{$P$-a.s.} $$
\end{Lem}
\begin{proof}
For fixed $s\le t$, we first prove that for each $\mathcal{F}_s$--measurable
random variable $\Phi\colon\Omega\to L_{\rm HS}(U,H)$ we have
\begin{equation}
\label{linearity_of_Radonification}
(L(t)-L(s))(X\Phi^*) = X(L(t)-L(s))\Phi^*.
\end{equation}
If both $X$ and $\Phi$ are simple random variables, we can assume that they are
of the form
$$X = \sum_{j=1}^m \1_{A_j} x_j \quad\text{ and }\quad \Phi = \sum_{j=1}^m \1_{A_j} \phi_j,$$
for some pairwise disjoint sets $A_1,\ldots,A_m \in \mathcal{F}_s$, $x_1,\ldots,x_m \in \mathbb{R}$ and $\phi_1,\ldots,\phi_m \in L_{\rm HS}(U,H)$.
Then we obtain by \eqref{eq.J-simple} that
\begin{align}\label{eq.X-and-L}
(L(t)-L(s))(X\Phi^*)
&= \sum_{j=1}^m \1_{A_j} (L(t)-L(s))(x_j \phi_j^*)\notag \\
&=X \sum_{j=1}^m \1_{A_j} (L(t)-L(s))\phi_j^*
=X (L(t)-L(s))(\Phi^*).
\end{align}
For an arbitrary ${\mathcal F}_s$--measurable, uniformly bounded random variable $X\colon\Omega\to \mathbb{R}$ and arbitrary $\mathcal{F}_s$--measurable random variable $\Phi\in L^2(\Omega;L_{\rm HS}(U,H))$,
we find a sequence of simple random variables $(X_n)$ converging to $X$ in
$L^\infty(\Omega;\mathbb{R})$ and a sequence of random variables $(\Phi_n)$ converging to $\Phi$ in $ L^2(\Omega;L_{\rm HS}(U,H))$. Since $X_n\Phi_n$ converges to
$X\Phi$ in $L^2(\Omega; L_{\rm HS}(U,H))$, Theorem 3.2 in
\cite{Riedle_L2} implies
\begin{equation}
\label{Radonification2}
\lim_{n\to\infty}(L(t)-L(s))(X_n\Phi_n^*) =(L(t)-L(s))(X\Phi^*)
\qquad\text{in }L^2(\Omega;H).
\end{equation}
On the other hand, since $(L(t)-L(s))\Phi_n^*$ converges to $(L(t)-L(s))\Phi^*$
in $L^2(\Omega;H)$, we obtain
\begin{align}\label{Radonification3}
\lim_{n\to\infty} X_n(L(t)-L(s))\Phi_n^* = X(L(t)-L(s))\Phi^*
\qquad\text{in }L^2(\Omega;H).
\end{align}
Applying \eqref{eq.X-and-L} to \eqref{Radonification3} and comparing the limit
with \eqref{Radonification2} completes the proof of \eqref{linearity_of_Radonification}.
For a simple stochastic process $\Psi$ of the form \eqref{simple_process}
it follows from \eqref{linearity_of_Radonification} by the very definition of the stochastic integral that
\begin{align*}
\int_s^t X\Psi(r) \, \mathrm{d}L(r)
= X \int_s^t \Psi(r) \, \mathrm{d}L(r).
\end{align*}
This equality extends to arbitrary integrands $\Psi\in \Lambda$ by the usual
approximation arguments, which completes the proof.
\end{proof}
\begin{Lem}
\label{lem_stopped_cylindrical_integral}
Let $L$ be a weakly square-integrable cylindrical L\'evy process and $\Psi \in \Lambda(U,H)$. Then for any stopping time $\tau$ with $P(\tau \leq T) = 1$ we have
$$\int_0^{t \wedge \tau} \Psi(s) \, \mathrm{d}L(s) = \int_0^t \Psi(s) \1_{\{s \leq \tau\}} \, \mathrm{d}L(s)\qquad\text{for all } t \in [0,T] \text{ $P$-a.s.}$$
\end{Lem}
\begin{proof}
The proof closely follows the approach in the classical case in \cite[Lem.\ 2.3.9]{Prevot_Rockner}.
Suppose that $\Psi$ is a simple stochastic process of the form \eqref{simple_process} and that the stopping time $\tau$ takes only finitely many values, that is
\begin{align}\label{eq.simple-stopping}
\tau = \sum_{j=1}^m a_j \1_{A_j},
\end{align}
where $a_j\in [0,T]$ and $A_j \in \mathcal{F}_{a_j}$ for $j=1,\dots, m$. Then
we obtain
\begin{align}
\label{stopped_integral_for_simple}
\int_0^{t\wedge \tau} \Psi(s) \, \mathrm{d}L(s)
&= \sum_{j=1}^m \1_{A_j} \int_0^{t \wedge a_j} \Psi(s) \, \mathrm{d}L(s)\notag \\
&= \sum_{j=1}^m \1_{A_j} \sum_{k=0}^{N-1} (L(t_{k+1} \wedge t \wedge a_j) - L(t_k \wedge t \wedge a_j))\Phi_k^*.
\end{align}
On the other hand, Lemma \ref{lem_linearity} implies
\begin{align*}
&\int_0^t \Psi(s) \1_{\{s\leq\tau\}} \, \mathrm{d}L(s) \\
&\qquad= \int_0^t \Psi(s) \left(1-\1_{\{\tau< s\}}\right) \, \mathrm{d}L(s) \\
&\qquad= \int_0^t \Psi(s) \, \mathrm{d}L(s) - \int_0^t \Psi(s) \1_{\{\tau< s\}} \, \mathrm{d}L(s) \\
&\qquad= \sum_{k=0}^{N-1} (L(t_{k+1}\wedge t)-L(t_k\wedge t))\Phi_k^* - \sum_{j=1}^m \int_{a_j}^t \Psi(s) \1_{A_j} \, \mathrm{d}L(s) \\
&\qquad= \sum_{k=0}^{N-1} (L(t_{k+1}\wedge t)-L(t_k\wedge t))\Phi_k^* - \sum_{j=1}^m \1_{A_j} \int_{a_j}^t \Psi(s) \, \mathrm{d}L(s)\\
&\qquad = \sum_{k=0}^{N-1} (L(t_{k+1}\wedge t)-L(t_k\wedge t))\Phi_k^* - \sum_{j=1}^m \1_{A_j} \sum_{k=0}^{N-1} (L(a_j \vee (t_{k+1} \wedge t)) - L(a_j \vee (t_k \wedge t))) \Phi_k^* \\
&\qquad = \sum_{k=0}^{N-1} \sum_{j=1}^m \1_{A_j} \big( L(t_{k+1} \wedge t) - L(t_k \wedge t) - L(a_j \vee (t_{k+1} \wedge t)) + L(a_j \vee (t_k \wedge t))\big) \Phi_k^*,
\end{align*}
which, by direct inspection, turns out to be the same as \eqref{stopped_integral_for_simple}.
Assume now that $\tau$ is a stopping time such that $P(\tau\leq T ) =1$ and let $\Psi\in \Lambda_0$.
There exists a sequence of simple stopping times $(\tau_n)$ of the form \eqref{eq.simple-stopping} decreasing to $\tau$. The first step implies
\begin{equation}
\label{stopped_for_tau_n}
\int_0^{\tau_n \wedge t} \Psi(s) \, \mathrm{d}L(s) = \int_0^t \Psi(s) \1_{\{s\leq\tau_n\}} \, \mathrm{d}L(s)\qquad\text{$P$-a.s.}
\end{equation}
Since the stochastic integral $I(\Psi)$ has c\`adl\`ag paths, we obtain
$$\lim_{n\to\infty}\int_0^{\tau_n \wedge t} \Psi(s) \, \mathrm{d}L(s) = \int_0^{\tau \wedge t} \Psi(s) \, \mathrm{d}L(s) \quad P\text{-a.s.}$$
On the other hand, $\1_{[0,\tau_n]}\Psi$ converges to $\1_{[0,\tau]}\Psi$ in $\Lambda(U,H)$, which implies
\begin{align*}
\lim_{n\to\infty} \int_0^t \Psi(s) \1_{\{s\leq\tau_n\}} \, \mathrm{d}L(s)
=\int_0^t \Psi(s) \1_{\{s\leq\tau\}} \, \mathrm{d}L(s)
\end{align*}
in mean-square.
Consequently, we can replace the simple stopping time $\tau_n$ in \eqref{stopped_for_tau_n} by an arbitrary stopping time $\tau$.
Suppose that $\Psi$ is in $\Lambda(U,H)$ and $\tau$ is a stopping time with $P(\tau \leq T) = 1$. Then there exists a sequence $(\Psi_n)$ of simple stochastic processes
converging to $\Psi$ in $\Lambda(U,H)$. The process $L$ can be decomposed into
$$L(t)u = \scalar{\tilde{p}}{u}t + \tilde{L}(t)u, \qquad \text{for } t\geq 0, \, u \in U,$$
where $\tilde{p}\in U$ and $\tilde{L}$ is a square-integrable cylindrical L\'evy process with $E[\tilde{L}(t)u]=0$; see the proof of Th.\ 4.1 in \cite{Riedle_L2}.
Then the stochastic integral can be written as
$$\int_0^t \Psi(s) \, \mathrm{d}L(s) = \int_0^t \Psi(s)\tilde{p} \, \mathrm{d}s + \int_0^t \Psi(s) \,\mathrm{d}\tilde{L}(s),$$
where the first integral is in the Bochner sense. Since the stochastic integral
defines a martingale by \cite[Cor.\ 4.4]{Riedle_L2}, Doob's inequality implies that for some constant $C$
\begin{align*}
\mathbb{E} \left[ \sup_{0\leq t \leq T} \normH{\int_0^t \Psi_n(s) \,\mathrm{d}\tilde{L}(s) - \int_0^t \Psi(s) \,\mathrm{d}\tilde{L}(s)}^2 \right]
&\leq 4C \mathbb{E} \left[ \int_0^T \HSnorm{\Psi_n(s) - \Psi(s)}^2 \, \mathrm{d}s \right] .
\end{align*}
For the Bochner integral we conclude
\begin{align*}
\mathbb{E} \left[ \sup_{0\leq t \leq T} \normH{\int_0^t \Psi_n(s)\tilde{p} \, \mathrm{d}s - \int_0^t \Psi(s)\tilde{p} \, \mathrm{d}s}^2 \right]
&\leq T \mathbb{E} \left[ \sup_{0\leq t \leq T} \int_0^t \normH{\Psi_n(s)\tilde{p}-\Psi(s)\tilde{p}}^2 \, \mathrm{d}s \right] \\
&\leq T \@ifstar{\oldnorm}{\oldnorm*}{\tilde{p}}^2 \mathbb{E} \left[ \int_0^T \HSnorm{\Psi_n(s)-\Psi(s)}^2 \, \mathrm{d}s \right] .
\end{align*}
Therefore, we obtain
$$\lim_{n\to\infty} \int_0^{t\wedge \tau} \Psi_{n}(s) \, \mathrm{d}L(s) =\int_0^{t\wedge \tau} \Psi(s) \, \mathrm{d}L(s)\qquad\text{in }L^2_P(\Omega;H).$$
On the other hand, since $\1_{[0,\tau]}\Psi_n \to \1_{[0,\tau]}\Psi$ in $\Lambda$ it follows by Theorem 4.1 in \cite{Riedle_L2} that
$$
\lim_{n\to\infty} \int_0^t \Psi_n(s) \1_{\{s\leq\tau\}} \, \mathrm{d}L(s) = \int_0^t \Psi(s) \1_{\{s\leq\tau\}} \, \mathrm{d}L(s) \qquad\text{in }L^2_P(\Omega;H),$$
which finishes the proof.
\end{proof}
\section{Existence of solution in the weakly square-integrable case}
\label{sec_existence_square_integrable}
Let $(V, \@ifstar{\oldnorm}{\oldnorm*}{\cdot}_V)$ be a separable reflexive Banach space and let $\left(H, \langle \cdot, \cdot \rangle_H \right)$ and $(U, \langle \cdot , \cdot \rangle_U)$ be separable Hilbert spaces. Let $V^*$ and $H^*$ denote their duals. Assume that $V$ is densely and continuously embedded into $H$. That is we have a Gelfand triple
$$V \subseteq H = H^* \subseteq V^*.$$
Further, denote with $_{V^*}\langle \cdot , \cdot \rangle_{V}$ the duality pairing of $V^*$ and $V$.
We have
$$_{V^*}\langle h,v \rangle_V = \langle h,v \rangle_H, \quad \text{ for all } h\in H, v \in V$$
and without loss of generality we may assume that $\normH{v} \leq \normV{v}$ for $v\in V$ and $\normVstar{h} \leq \normH{h}$ for $h\in H$.
We consider the equation
\begin{equation}
\label{SPDE}
\, \mathrm{d}X(t) = F\big(X(t)\big) \, \mathrm{d}t + G\big(X(t)\big) \, \mathrm{d}L(t) \qquad \text{ for } t \in [0,T],
\end{equation}
with the initial condition $X(0)=X_0$ for a square-integrable, $\mathcal{F}_0$--measurable random variable $X_0$.
The driving noise is a cylindrical L\'evy process on a separable Hilbert space $U$.
In this section we assume that $L$ is a weakly mean-zero, weakly square-integrable, cylindrical L\'evy process, i.e.\ a cylindrical martingale with covariance operator $Q\colon U\to U$. The coefficients in equation \eqref{SPDE} are given by functions $F\colon V\to V^*$ and $G\colon V\to L_{\rm HS}(U,H)$.
More specifically, we assume the following in this section: there are constants $\alpha, \lambda, \beta, c>0$ such that:
\begin{enumerate}[label=(A\arabic*),series=assumptions]
\item (Coercivity) for all $v\in V$ we have
\begin{align*}
2\pairing{F(v)}v + \HSnorm{G(v)Q^{1/2}}^2 + \alpha \normV{v}^2 \leq \lambda \normH{v}^2+\beta;
\end{align*}
\item (Monotonicity) for all $v_1,v_2 \in V,$ we have
\begin{align*}
2\pairing{F(v_1)-F(v_2)}{v_1-v_2} + \HSnorm{(G(v_1)-G(v_2))Q^{1/2}}^2 \leq \lambda \normH{v_1-v_2}^2;
\end{align*}
\item \label{item_linear_growth} (Linear growth) $\normVstar{F(v)} \leq c(1+\normV{v} )$ for all $v \in V$;
\item \label{item_hemicontinuity} (Hemicontinuity) the mapping $\mathbb{R} \ni s \mapsto {\pairing{F(v_1+sv_2)}{v_3}}$ is continuous for all $v_1,v_2,v_3 \in V$.
\item \label{item_Q_diagonal} The cylindrical L\'evy process $L$ is weakly mean-zero
and is weakly square-integrable. Its covariance operator $Q$ has eigenvectors $(e_j)$ forming an orthonormal basis of $U$.
\end{enumerate}
Conditions of this form appear in most of the papers mentioned in the introduction. However, unlike \cite{Brzezniak_Liu_Zhu}, we do not assume that the covariance operator $Q$ equals identity.
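For orientation, a standard example fitting into (A1)--(A4) is the stochastic heat equation; the specific choice of spaces and coefficients below serves only as an illustration. Take $V=H_0^1(D)$, $H=L^2(D)$ and $V^*=H^{-1}(D)$ for a bounded domain $D\subseteq \mathbb{R}^d$ with $\normV{v}:= \@ifstar{\oldnorm}{\oldnorm*}{\nabla v}_{L^2(D)}$, let $F(v):=\Delta v$ and assume that $G\colon V\to L_{\rm HS}(U,H)$ satisfies $\HSnorm{(G(v_1)-G(v_2))Q^{1/2}}^2\leq \lambda \normH{v_1-v_2}^2$ and $\HSnorm{G(v)Q^{1/2}}^2\leq \lambda\normH{v}^2+\beta$. Since
\begin{align*}
2\pairing{\Delta v}{v} = -2\normV{v}^2
\qquad\text{and}\qquad
\normVstar{\Delta v} \leq \normV{v}
\qquad\text{for all } v\in V,
\end{align*}
Condition (A1) holds for every $\alpha\leq 2$, Conditions (A2) and (A3) follow immediately, and (A4) is a consequence of the linearity of $F$.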
We now give the definition of a solution to \eqref{SPDE}, similarly as in Pr\'ev\^{o}t and R\"ockner \cite[Def.\ 4.2.1]{Prevot_Rockner} or Brze\'zniak, Liu and Zhu \cite[Def.\ 1.1]{Brzezniak_Liu_Zhu}.
\begin{Def}
\label{def_variational_solution_non_L2}
A \emph{variational solution} of \eqref{SPDE} is a pair $(X,\bar{X})$ of an
$H$-valued, c\`adl\`ag adapted process $X$ and a $V$-valued, predictable
process $\bar{X}$ such that
\begin{enumerate}
\item $X$ equals $\bar{X}$ $\mathrm{d}t \otimes P$--almost everywhere;
\item $P$-a.s. $\displaystyle \int_0^T \normV{\bar{X}(t)} \, \mathrm{d}t < \infty$;
\item \itemEq{X(t) = X_0 + \int_0^t F(\bar{X}(s))\, \mathrm{d}s + \int_0^t G(\bar{X}(s))\, \mathrm{d}L(s) \text{ for all } t\in [0,T]\,\, P\text{-a.s.}\label{integral_eq_in_def_of_variational_sol}}
\end{enumerate}
We say that the solution is \emph{pathwise unique} if any two variational solutions $(X,\bar{X})$ and $(Y,\bar{Y})$ satisfy
$$P\big(X(t)=Y(t) \text{ for all } t \in [0,T]\big) = 1.$$
\end{Def}
Since we later consider the case of a driving noise without finite moments and thus the solution cannot be expected to have finite moments, we do not require finite expectation of the solution in Definition \ref{def_variational_solution_non_L2} in contrast to most literature.
The main result of this section is the following theorem.
\begin{Th}
\label{th_main_existence_result}
Under Assumptions {\upshape(A1)}-{\upshape(A5)}, equation \eqref{SPDE} has a unique variational solution $(X,\bar{X})$. Moreover, the solution satisfies
\begin{align*}
\int_0^T E\left[\@ifstar{\oldnorm}{\oldnorm*}{\bar{X}(s)}_V^2\right]\, \mathrm{d}s<\infty.
\end{align*}
\end{Th}
Before proceeding to the proof we first give some preparatory results on It\^{o}'s formula for the square of the norm.
It\^{o}'s formula in infinite dimensional spaces is discussed for example in M\'etivier \cite[Th.\ 27.2]{Metivier}. It\^{o}'s formula for the square of the norm is of particular interest. It is discussed for instance in Peszat and Zabczyk \cite[Lem.\ D.3]{Peszat_Zabczyk}.
We need however a more general result taking into account the Gelfand triple. The problem is that the integrand of the Lebesgue integral in \eqref{integral_eq_in_def_of_variational_sol} is $V^*$-valued.
This version of It\^{o}'s formula was given in Krylov and Rozovskii \cite[Th.\ I.3.1]{Krylov_Rozovskii} and can be seen as a stochastic version of an earlier result by Lions, see e.g.\ \cite[Lem.\ III.1.2]{Temam}. See Pr\'ev\^{o}t and R\"ockner \cite[Th.\ 4.2.5 and Rem.\ 4.2.8]{Prevot_Rockner} for a more modern treatment. These formulas apply to Wiener integrals because of the continuity assumption. A more general theorem can be found in Gy\"ongy and Krylov \cite{Gyongy_Krylov_II}. We present here, without proof, Theorem 2 of \cite{Gyongy_Krylov_II}.
\begin{Th}
\label{th_from_Gyongy}
Let $M$ be an $H$-valued, c\`adl\`ag, square-integrable martingale, $\Phi$ be a progressively measurable $V^*$-valued process and define
$$X(t) := X_0 + \int_0^t \Phi(s) \, \mathrm{d}s + M(t) \quad \text{ for all } t \in [0,T].$$
If there exists a $V$-valued process $\bar{X}$ such that $X$ and $\bar{X}$ are equal almost surely-$\mathrm{d}t \otimes P$, then $X$ has $P$-a.s. $H$-valued c\`adl\`ag trajectories
and satisfies
\begin{equation*}
\normH{X(t)}^2
= \normH{X_0}^2 + 2\int_0^t {\pairing{\Phi(s)}{\bar{X}(s)}} \, \mathrm{d}s + 2\int_0^t X(s-)\, \mathrm{d}M(s) + [M,M](t).
\end{equation*}
\end{Th}
We now apply this important result in the cylindrical setting.
\begin{Cor}
\label{cor_Ito_for_exponent_times_square}
Let $\Phi$ be a $V^*$-valued predictable process
and $\Psi$ be an $L_{\rm HS}(U,H)$-valued process satisfying
\begin{equation}
\label{assumption_in_ito_formula_corollary}
\mathbb{E} \left[ \int_0^T \normVstar{\Phi(s)} \, \mathrm{d}s \right] < \infty
\qquad\text{and}\qquad
\mathbb{E} \left[ \int_0^T \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s \right] < \infty.
\end{equation}
If the stochastic process $X$ defined by
$$X(t) = X_0 + \int_0^t \Phi(s) \, \mathrm{d}s + \int_0^t \Psi(s) \, \mathrm{d}L(s)
\qquad\text{for }t\in [0,T]$$
has a $\mathrm{d}t \otimes P$-version $\bar{X}$, which belongs to $L^2([0,T]\times \Omega;V)$,
then
\begin{equation}
\label{expectation_of_supremum}
\mathbb{E} \left[ \sup_{t \in [0,T]} \normH{X(t)}^2 \right] < \infty ,
\end{equation}
and for each $\lambda\ge 0$ we have
\begin{align}\label{expectation_of_exponential}
&\mathbb{E} \left[ e^{-\lambda t}\normH{X(t)}^2 \right] \\
&= \mathbb{E} \left[ \normH{X_0}^2 \right]
\! + \mathbb{E} \left[\int_0^t \!\! e^{-\lambda s} \left( 2 \pairing{\Phi(s)}{\bar{X}(s)} \! + \HSnorm{\Psi(s)Q^{1/2}}^2 \!\!\!\! - \lambda \normH{X(s)}^2 \right) \mathrm{d} s \right] \! .\notag
\end{align}
\end{Cor}
\begin{proof}
Define a martingale $M$ by $ M(t) := \int_0^t \Psi(s) \, \mathrm{d}L(s)$ for $t\in [0,T]$.
It{\^o}'s formula for real-valued processes together
with Theorem \ref{th_from_Gyongy} imply
\begin{align}\label{Ito_for_X}
\mathrm{d}\big( e^{-\lambda t} \normH{X(t)}^2 \big)
& = e^{-\lambda t} \mathrm{d}\normH{X(t)}^2 - \lambda e^{-\lambda t} \normH{X(t)}^2 \, \mathrm{d}t\notag \\
& = e^{-\lambda t} \left( 2\pairing{\Phi(t)}{\bar{X}(t)}\, \mathrm{d}t+ 2X(t-)\, \mathrm{d}M(t) \right)\notag \\
&\qquad + e^{-\lambda t} \mathrm{d} [M,M](t) - \lambda e^{-\lambda t} \normH{X(t)}^2 \, \mathrm{d}t.
\end{align}
For establishing \eqref{expectation_of_supremum}, define the stopping time $\tau^R := \inf \{ t \geq 0 : \normH{X(t)} >R \} \wedge T$ for some $R>0$. Taking $\lambda=0$ in \eqref{Ito_for_X} we obtain
\begin{equation}
\label{expectation_of_supremum_e1-e3}
\begin{aligned}
\mathbb{E} \left[ \sup_{t \leq \tau^R} \normH{X(t)}^2 \right]
&\leq \mathbb{E} \left[ \normH{X_0}^2\right] + 2 \mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t \pairing{\Phi(s)}{\bar{X}(s)} \, \mathrm{d}s \right] \\
&\quad + 2 \mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t X(s-) \, \mathrm{d}M(s) \right] + \mathbb{E} \left[\sup_{t \leq \tau^R} [M,M](t)\right].
\end{aligned}
\end{equation}
We have
\begin{equation}
\label{estimate_of_e1}
2\mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t \pairing{\Phi(s)}{\bar{X}(s)} \, \mathrm{d}s \right]
\leq \mathbb{E} \left[ \int_0^T \normVstar{\Phi(s)}^2 + \normV{\bar{X}(s)}^2 \, \mathrm{d}s \right].
\end{equation}
By inequality (14) in \cite{Ichikawa} for $p=1$ we derive
\begin{align*}
\mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t X(s-) \, \mathrm{d}M(s) \right]
\leq 3 \mathbb{E} \left[ \left( \left< \int X(s-)\, \mathrm{d}M(s),\int X(s-)\, \mathrm{d}M(s) \right>(\tau^R) \right)^{1/2} \right].
\end{align*}
Applying Lemma \ref{le.brackek-intL} and
identifying $H=L_{\rm HS}(H,\mathbb{R})$ yields
\begin{equation*}
\left< \int X(s-) \, \mathrm{d}M(s),\int X(s-) \, \mathrm{d}M(s) \right>(\tau^R)
= \int_0^{\tau^R} \@ifstar{\oldnorm}{\oldnorm*}{X(s-)(\Psi(s)Q\Psi^*(s))^{1/2}}_H^2 \, \mathrm{d}s.
\end{equation*}
Taking into account that pathwise $X(s-)=X(s)$ for almost all $s \in [0,T]$
we conclude
\begin{align*}
\mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t X(s-) \, \mathrm{d}M(s) \right]
&\leq 3\mathbb{E} \left[ \left( \int_0^{\tau^R} \normH{X(s)}^2 \@ifstar{\oldnorm}{\oldnorm*}{(\Psi(s)Q\Psi^*(s))^{1/2}}^2_{L_{\rm HS}(H,H)} \, \mathrm{d}s \right)^{1/2} \right] .
\end{align*}
Since we have
\begin{align*}
\@ifstar{\oldnorm}{\oldnorm*}{(\Psi(s)Q\Psi^*(s))^{1/2}}_{L_{\rm HS}(H,H)}^2
= \@ifstar{\oldnorm}{\oldnorm*}{Q^{1/2}\Psi^*(s)}_{L_{\rm HS}(H,U)}^2
= \HSnorm{\Psi(s) Q^{1/2}}^2,
\end{align*}
we obtain by the inequality $\sqrt{ab} \leq \frac16 a+ \frac32 b$
for $a,b\ge 0$, that
\begin{align} \label{estimate_of_e2}
&\mathbb{E} \left[ \sup_{t \leq \tau^R} \int_0^t X(s-) \, \mathrm{d}M(s) \right]\notag\\
&\qquad \leq 3 \mathbb{E} \left[ \left( \left( \sup_{s \leq \tau^R} \normH{X(s)}^2 \right)\int_0^{\tau^R} \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s \right)^{1/2} \right] \notag\\
&\qquad \leq \frac12 \mathbb{E} \left[ \sup_{s \leq \tau^R} \normH{X(s)}^2 \right]+ \frac92 \mathbb{E} \left[ \int_0^T \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s \right].
\end{align}
Proposition \ref{pro_angle_bracket} yields
\begin{align}
\label{estimate_of_e3}
\mathbb{E} \Big[ [M,M](\tau^R) \Big]
\leq \mathbb{E} \Big[ [M,M](T)\Big]
&= \mathbb{E} \Big[\langle M,M \rangle(T)\Big]\notag\\
& = \mathbb{E} \left[ \int_0^T \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s \right].
\end{align}
Applying \eqref{estimate_of_e1}, \eqref{estimate_of_e2} and \eqref{estimate_of_e3} to \eqref{expectation_of_supremum_e1-e3} and rearranging, we obtain
\begin{align*}
\mathbb{E} \left[ \sup_{t\leq \tau^R} \normH{X(t)}^2 \right]
&\leq 2 \mathbb{E} \left[ \normH{X_0}^2 \right] + 2\mathbb{E} \left[ \int_0^T \normVstar{\Phi(s)}^2 + \normV{\bar{X}(s)}^2 \, \mathrm{d}s \right] \\
&\quad + 11 \mathbb{E} \left[ \int_0^T \HSnorm{\Psi(s)Q^{1/2}}^2 \, \mathrm{d}s \right].
\end{align*}
Taking $R \to \infty$ gives \eqref{expectation_of_supremum}.
For establishing \eqref{expectation_of_exponential} let $(\tau_k)$ be a sequence of increasing stopping times such that the process
$\left( \int_0^{t\wedge \tau_k} e^{-\lambda s} X(s-) \, \mathrm{d}M(s) : t \in [0,T] \right)$ is a martingale for each $k \in \mathbb{N}$.
Taking expectation in \eqref{Ito_for_X} for the stopped process results in
\begin{align}
\label{Ito_for_X_stopped}
&\mathbb{E} \left[ e^{-\lambda t \wedge \tau_k} \normH{X(t \wedge \tau_k)}^2 \right] \\
& = \mathbb{E} \left[ \normH{X_0}^2 + \int_0^{t \wedge \tau_k} \!\! e^{-\lambda s} \left( 2\pairing{\Phi(s)}{\bar{X}(s)} - \lambda\normH{X(s)}^2 \right)\! \, \mathrm{d}s +\int_0^{t \wedge \tau_k} \!\!e^{-\lambda s} \, \mathrm{d} [M,M](s)\right]. \notag
\end{align}
Approximating the deterministic integrand by simple functions shows
\begin{align*}
\mathbb{E} \left[ \int_0^{t\wedge \tau_k} e^{-\lambda s} \, \mathrm{d} [M,M](s) \right]
&= \mathbb{E} \left[ \int_0^{t\wedge \tau_k} e^{-\lambda s} \, \mathrm{d} \langle M,M \rangle(s) \right].
\end{align*}
From Proposition \ref{pro_angle_bracket} we conclude for \eqref{Ito_for_X_stopped} that
\begin{align*}
&\mathbb{E} \left[ e^{-\lambda t \wedge \tau_k} \normH{X(t \wedge \tau_k)}^2 \right] \\
&= \mathbb{E} \left[ \normH{X_0}^2 + \int_0^{t \wedge \tau_k} \!\!\! e^{-\lambda s} \left( 2\pairing{\Phi(s)}{\bar{X}(s)} + \HSnorm{\Psi(s)Q^{1/2}}^2 \!\!\! - \lambda \normH{X(s)}^2 \right) \, \mathrm{d}s \right].
\end{align*}
An application of Lebesgue's theorem completes the proof.
\end{proof}
Suppose $(e_j)$ is an orthonormal basis of $U$ consisting of eigenvectors of $Q$ and denote with $\pi_n\colon U\to U$ the projection onto $\Span (e_1, \ldots ,e_n)$.
As $\pi_n$ is Hilbert-Schmidt there exists a genuine L\'evy process $L_n$ in $U$ satisfying $L(t)(\pi_n u)=\scapro{L_n(t)}{u}$ for all $u\in U$. The covariance operator of $L_n$ is given by $Q_n = \pi_n Q \pi_n$.
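Concretely, one may take
\begin{align*}
L_n(t) = \sum_{j=1}^n L(t)(e_j)\, e_j \qquad\text{for all } t\geq 0,
\end{align*}
since linearity and continuity of $L(t)\colon U\to L^2(\Omega;\mathbb{R})$ yield $\langle L_n(t),u\rangle = L(t)(\pi_n u)$ for all $u\in U$, and then
$\langle Q_n u,v \rangle = \mathbb{E}\big[ L(1)(\pi_n u)\, L(1)(\pi_n v) \big] = \langle \pi_n Q \pi_n u, v \rangle$ for all $u,v\in U$.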
\begin{Lem}
For each $\Psi\in \Lambda(U,H)$ we have
\begin{equation*}
\int_0^t \Psi(s) \pi_n \, \mathrm{d}L(s) = \int_0^t \Psi(s) \, \mathrm{d}L_n(s).
\end{equation*}
\end{Lem}
\begin{proof}
For a simple process $\Psi$ of the form $\Psi(s) = \1_A \1_{(t_1,t_2]}(s) \phi$ for $0 \leq t_1 \leq t_2 \leq t$, $A\in\mathcal{F}_{t_1}$ and $\phi\in L_{\rm HS}(U,H)$, we obtain for each $h\in H$
\begin{align*}
\left< \int_0^t \Psi(s) \pi_n \, \mathrm{d}L(s), h \right>_H
&= (L(t_2)-L(t_1))(\pi_n\phi^*h) \1_A \\
&= \scalarU{L_n(t_2)-L_n(t_1)}{\phi^*h} \1_A \\
&= \int_0^t \Psi^*(s)h \, \mathrm{d}L_n(s)
= \left< \int_0^t \Psi(s) \, \mathrm{d}L_n(s), h\right>_H.
\end{align*}
By linearity and continuity of the integral operator we complete the proof.
\end{proof}
Denote by $\tilde{\pi}_n\colon V^\ast \to H$ the projection onto $\mathrm{span}\{f_1,\dots, f_n\}$, where without loss of generality we assume
$(f_j)\subseteq V$.
It follows from standard results, see e.g.\ \cite[Th.\ 1]{Gyongy_Krylov_I} that for each $n\in \mathbb{N}$ the equation
\begin{equation}
\label{projected_SDE}
X_n(t) = \tilde{\pi}_n X_0 + \int_0^t \tilde{\pi}_nF(X_n(s)) \, \mathrm{d}s + \int_0^t \tilde{\pi}_nG(X_n(s-))\pi_n \, \mathrm{d}L(s)
\end{equation}
has a unique c\`adl\`ag strong solution in $V$.
\begin{Lem}
\label{lem_energy_estimate}
The solutions of \eqref{projected_SDE} obey
$\displaystyle \sup_n \mathbb{E} \left[ \int_0^T \normV{X_n(t)}^2 \, \mathrm{d}t \right] < \infty $.
\end{Lem}
\begin{proof}
For $n\in \mathbb{N}$ and $R>0$, define the stopping times $\tau_n^R := \inf \{ t\geq 0 : \normV{X_n(t)} \geq R \}$ and denote the stopped process by $X_n^R:=X_n(\cdot\wedge \tau_n^R)$. The coercivity assumption (A1) and the growth assumption (A3) imply
\begin{equation}\label{growth_bound_for_G}
\HSnorm{G(v)Q^{1/2}}^2
\leq -2\pairing{F(v)}v - \alpha \normV{v}^2 + \lambda \normH{v}^2 + \beta \leq (4c-\alpha+\lambda)\normV{v}^2 + \beta + 4c,
\end{equation}
for all $v\in V$. Therefore, condition \eqref{assumption_in_ito_formula_corollary} is satisfied and
Corollary \ref{cor_Ito_for_exponent_times_square} implies
\begin{align*}
&\mathbb{E} \left[ \normH{X_n^R(t)}^2 \right] \\
&= \mathbb{E} \left[ \normH{\tilde{\pi}_n X_0}^2 \right] + \mathbb{E} \left[ \int_0^t \left( 2{\pairing{\tilde{\pi}_nF(X_n^R(s))}{X_n^R(s)}} + \HSnorm{\tilde{\pi}_nG(X_n^R(s))Q_n^{1/2}}^2 \right) \mathrm{d}s \right]\\
&\le \mathbb{E} \left[ \normH{X_0}^2 \right] + \mathbb{E} \left[ \int_0^t \left( 2{\pairing{F(X_n^R(s))}{X_n^R(s)}} + \HSnorm{G(X_n^R(s))Q^{1/2}}^2 \right) \mathrm{d}s \right].
\end{align*}
Adding the expression $\mathbb{E} \left[ \int_0^t \alpha \@ifstar{\oldnorm}{\oldnorm*}{X_n^R(s)}_V^2 \, \mathrm{d}s \right]$ to both sides and using the coercivity assumption (A1), we obtain
\begin{equation}\label{energy_ineq}
\mathbb{E} \left[ \normH{X_n^R(t)}^2 \right] + \mathbb{E} \left[ \int_0^t \alpha \normV{X_n^R(s)}^2 \, \mathrm{d}s \right]
\leq \mathbb{E} \left[ \normH{X_0}^2 \right] + \beta t + \lambda \mathbb{E} \left[ \int_0^t \normH{X_n^R(s)}^2 \, \mathrm{d}s \right].
\end{equation}
Skipping the second term on the left-hand side and taking supremum yields
\begin{equation*}
\sup_{r\leq t} \mathbb{E} \left[ \normH{X_n^R(r)}^2 \right]
\leq \mathbb{E} \left[ \normH{X_0}^2 \right] + \beta t + \lambda \int_0^t \sup_{u\leq s} \mathbb{E} \left[ \normH{X_n^R(u)}^2 \right] \, \mathrm{d}s.
\end{equation*}
Gr{\"o}nwall's inequality implies
$$\sup_{u\leq t} \mathbb{E} \left[ \normH{X_n^R(u)}^2 \right]
\leq \mathbb{E} \left[ \normH{X_0}^2 \right] + \beta t + \int_0^t \left(\mathbb{E} \left[\normH{X_0}^2\right]+\beta s \right) e^{\lambda (t-s)} \lambda \, \mathrm{d}s.$$
Letting $R \to \infty$ we obtain
$$\sup_n \sup_{t\in [0,T]} \mathbb{E} \left[ \normH{X_n(t)}^2 \right] < \infty.$$
We conclude that the right-hand side of \eqref{energy_ineq} is bounded, from which the claim follows.
\end{proof}
\begin{Lem}
\label{lem_convergence_of_all_terms}
For the solution $X_n$ of \eqref{projected_SDE} define $X_n^-(t):= X_n(t-)$. Then there exists a subsequence $(n_k)\subseteq \mathbb{N}$ such that:
\begin{enumerate}
\item[(i)] $X_{n_k}^-$ converges weakly to a predictable process $\bar{X}$ in $L^2([0,T] \times \Omega; V)$;
\item[(ii)] $\tilde{\pi}_{n_k}F(X_{n_k}^-)$ converges weakly to some $\xi$ in $L^2([0,T]\times \Omega; V^*)$;
\item[(iii)] $\tilde{\pi}_{n_k}G(X_{n_k}^-)\pi_{n_k}Q^{1/2} = \tilde{\pi}_{n_k}G(X_{n_k}^-)Q_{n_k}^{1/2}$ converges weakly to $\eta Q^{1/2}$ in $L^2([0,T] \times \Omega; L_{\rm HS}(U,H))$ for some $\eta\in L^2([0,T]\times\Omega; L_{\rm HS}(Q^{1/2}U,H))$;
\item[(iv)] $\int_0^{\cdot} \tilde{\pi}_{n_k}F(X_{n_k}^-(s)) \, \mathrm{d}s$ converges weakly to $\int_0^{\cdot} \xi(s) \, \mathrm{d}s$ in $L^2([0,T] \times \Omega;V^*)$;
\item[(v)] $\int_0^{\cdot} \tilde{\pi}_{n_k}G(X_{n_k}^-(s))\pi_{n_k} \, \mathrm{d}L(s)$ converges weakly to $\int_0^{\cdot} \eta(s) \, \mathrm{d}L(s)$ in $L^2([0,T] \times \Omega;H)$.
\end{enumerate}
\end{enumerate}
\end{Lem}
\begin{proof}
By combining Corollary III.2.13 and Theorem IV.1.1 in \cite{Diestel_Uhl} we conclude that all the spaces in the Lemma are reflexive.
Parts (i)-(iii) follow from Lemma \ref{lem_energy_estimate} by the Banach-Alaoglu theorem together with the linear growth assumption (A3) for (ii) and with \eqref{growth_bound_for_G} for (iii).
For part (iv), note that the mapping
\begin{align*}
J\colon L^2([0,T] \times \Omega; V^\ast) \to L^2([0,T] \times \Omega; V^\ast),
\qquad JX=\left(\int_0^t X(s) \, \mathrm{d}s:\, t\in [0,T]\right)
\end{align*}
is continuous. Thus, $J$ is weak-weak continuous, which shows (iv) by Part (ii). Similarly, for Part (v) define the mapping
\begin{align*}
I\colon \Lambda \to L^2([0,T]\times \Omega ;H),\qquad
IX=\left( \int_0^t X(s) \, \mathrm{d}L(s):\, t\in [0,T] \right).
\end{align*}
Corollary 4.3 in \cite{Riedle_L2} guarantees that $I$ is continuous, which shows Part (v) by (iii).
\end{proof}
\begin{proof}[Proof of Theorem \ref{th_main_existence_result}]
Using the notation for the limits in Lemma \ref{lem_convergence_of_all_terms} we define
\begin{equation*}
X(t) := X_0+ \int_0^t \xi(s)\, \mathrm{d}s + \int_0^t \eta(s) \, \mathrm{d}L(s), \quad \text{ for } t\in [0,T].
\end{equation*}
Since the expressions in Lemma \ref{lem_convergence_of_all_terms}(i),(iv) and (v) also converge weakly in $L^2([0,T] \times \Omega;V^*)$, the process $X$ is $V^\ast$-valued as the limit of the right-hand side of \eqref{projected_SDE}. On the other hand, the left-hand side of \eqref{projected_SDE} weakly converges to the $V$-valued process $\bar{X}$ according to Lemma \ref{lem_convergence_of_all_terms}(i).
Hence, we obtain that $X=\bar{X}$ almost surely-$\mathrm{d}t \otimes P$ and
Theorem \ref{th_from_Gyongy} guarantees that $P$-almost surely $X$ is $H$-valued and c\`adl\`ag.
It is left to show that $\xi=F(\bar{X})$ and $\eta Q^{1/2}=G(\bar{X})Q^{1/2}$, $\mathrm{d}t \otimes P$-almost surely, which will be accomplished in two steps.
\textit{Step 1.} As in the proof of Lemma \ref{lem_energy_estimate} we conclude from Corollary \ref{cor_Ito_for_exponent_times_square} that
\begin{align*}
&\mathbb{E} \left[ e^{-\lambda t}\normH{X_n(t)}^2 -\normH{X_0}^2 \right] \\
& \le \mathbb{E} \left[ \int_0^t \! e^{-\lambda s} \left( 2 \pairing{F(X_n(s))}{X_n(s)} + \HSnorm{G(X_n(s))Q^{1/2}}^2 \!\!\! - \lambda \normH{X_n(s)}^2 \right) \mathrm{d}s \right].
\end{align*}
By adding and subtracting an arbitrary process $\Phi \in L^2([0,T] \times \Omega; V)$ and using the monotonicity condition (A2) we obtain
\begin{align}\label{multiplied_by_psi_1}
&\mathbb{E} \left[ e^{-\lambda t}\normH{X_n(t)}^2 - \normH{X_0}^2 \right] \\
& \le \mathbb{E} \bigg[ \int_0^t e^{-\lambda s} \Big( 2\pairing{F(X_n(s))}{\Phi(s)} + 2\pairing{F(\Phi(s))}{X_n(s)-\Phi(s)} + \lambda \normH{\Phi(s)}^2 \notag \\
&\qquad + 2\HSscalar{G(X_n(s))Q^{1/2}}{G(\Phi(s))Q^{1/2}} - \HSnorm{G(\Phi(s))Q^{1/2}}^2 \notag \\
&\qquad - 2\lambda \scalarH{X_n(s)}{\Phi(s)} \Big) \, \mathrm{d}s \bigg].\notag
\end{align}
On the other hand, since $\xi \in L^2([0,T]\times \Omega ; V^*)$ and $\eta Q^{1/2} \in L^2([0,T]\times \Omega;L_{\rm HS}(U,H))$ according to Lemma \ref{lem_convergence_of_all_terms}, we obtain from Corollary \ref{cor_Ito_for_exponent_times_square} that
\begin{align}
\label{multiplied_by_psi_2}
&\mathbb{E} \left[ \int_0^t e^{-\lambda s} \Big( 2{\pairing{\xi(s)}{\bar{X}(s)}} + \HSnorm{\eta(s)Q^{1/2}}^2 -\lambda \normH{X(s)}^2 \Big) \, \mathrm{d}s \right] \notag \\
&\qquad =\mathbb{E} \left[ e^{-\lambda t}\normH{X(t)}^2 \right] - \mathbb{E} \left[ \normH{X_0}^2 \right] .
\end{align}
Multiplying \eqref{multiplied_by_psi_2} by a non-negative function $\Psi \in L^{\infty}([0,T];\mathbb{R}_+)$, integrating from $0$ to $T$, applying Fubini's theorem and using the weak lower semicontinuity of the functional $Y\mapsto \int_0^T \Psi(t)\, e^{-\lambda t}\, \mathbb{E}\big[\normH{Y(t)}^2\big] \, \mathrm{d}t$ on $L^2([0,T]\times\Omega;H)$ along the weakly convergent sequence $(X_n)$, we obtain
\begin{align*}
&\mathbb{E} \bigg[ \int_0^T \Psi(t) \int_0^t e^{-\lambda s} \Big( 2{\pairing{\xi(s)}{\bar{X}(s)}} + \HSnorm{\eta(s)Q^{1/2}}^2 -\lambda \normH{X(s)}^2 \Big) \, \mathrm{d}s \, \mathrm{d}t \bigg] \notag \\
&\qquad \le \liminf_{n\to \infty} \mathbb{E} \left[ \int_0^T \Psi(t) \left( e^{-\lambda t}\normH{X_n(t)}^2 -\normH{X_0}^2 \right) \mathrm{d}t \right]
\end{align*}
Estimating the right-hand side by \eqref{multiplied_by_psi_1} and taking the limit according to Lemma \ref{lem_convergence_of_all_terms}, we obtain
\begin{align*}
&\mathbb{E} \left[ \int_0^T \Psi(t) \int_0^t e^{-\lambda s} \Big( 2{\pairing{\xi(s)}{\bar{X}(s)}} + \HSnorm{\eta(s)Q^{1/2}}^2 -\lambda \normH{X(s)}^2 \Big) \, \mathrm{d}s \, \mathrm{d}t \right] \\
& \le \mathbb{E} \bigg[\int_0^T \Psi(t) \int_0^t e^{-\lambda s} \Big( 2\pairing{\xi(s)}{\Phi(s)} + 2\pairing{F(\Phi(s))}{\bar{X}(s)-\Phi(s)} + \lambda \normH{\Phi(s)}^2 \\
&\qquad \qquad \qquad \qquad + 2\HSscalar{\eta(s)Q^{1/2}}{G(\Phi(s))Q^{1/2}} - \HSnorm{G(\Phi(s))Q^{1/2}}^2 \\
&\qquad \qquad \qquad \qquad - 2\lambda \scalarH{\bar{X}(s)}{\Phi(s)}\Big) \, \mathrm{d}s \, \mathrm{d}t \bigg].
\end{align*}
Moving the terms from the right-hand to the left-hand side we arrive at
\begin{align}\label{terms_less_0}
&\mathbb{E} \bigg[ \int_0^T \Psi(t) \int_0^t e^{-\lambda s} \Big( 2 \pairing{\xi(s)-F(\Phi(s))}{\bar{X}(s) - \Phi(s)} \notag \\
&\qquad\qquad + \HSnorm{(\eta(s)-G(\Phi(s)))Q^{1/2}}^2 - \lambda \normH{\bar{X}(s)-\Phi(s)}^2 \Big) \, \mathrm{d}s \, \mathrm{d}t \bigg]
\leq 0.
\end{align}
\textit{Step 2.}
Taking $\Phi=\bar{X}$ in \eqref{terms_less_0}, we derive
$$\mathbb{E} \left[ \int_0^T \Psi(t) \int_0^t e^{-\lambda s} \left( \HSnorm{(\eta(s)-G(\bar{X}(s)))Q^{1/2}}^2 \right) \, \mathrm{d}s \, \mathrm{d}t \right]
\leq 0,$$
which shows $\eta Q^{1/2}=G(\bar{X})Q^{1/2}$ $\mathrm{d}t\otimes P$-almost everywhere. Moreover, taking $\Phi=\bar{X}-\epsilon \tilde{\Phi}v$ for some $\tilde{\Phi} \in L^{\infty}([0,T]\times \Omega ;\mathbb{R})$, $v\in V$ and $\epsilon >0$ in \eqref{terms_less_0}, neglecting the only non-negative term and dividing by $\epsilon$,
we obtain
$$\mathbb{E} \bigg[ \int_0^T \!\! \Psi(t) \!\! \int_0^t \!\! e^{-\lambda s} \! \left( 2 \tilde{\Phi}(s) \pairing{\xi(s)-F(\bar{X}(s)-\epsilon \tilde{\Phi}(s)v)}{v} - \lambda \epsilon \big| \tilde{\Phi}(s)\big|^2 \normH{v}^2 \right) \mathrm{d} s \mathrm{d} t \bigg]
\leq 0.$$
Taking the limit $\epsilon\to0$ and applying Fubini's theorem, the hemicontinuity assumption (A4) implies by Lebesgue's theorem that
\begin{equation*}
\int_0^T \Psi(t) \mathbb{E} \left[ \int_0^t e^{-\lambda s} 2 \tilde{\Phi}(s) \pairing{\xi(s)-F(\bar{X}(s))}{v} \, \mathrm{d}s \right] \, \mathrm{d}t
\leq 0.
\end{equation*}
Since $\Psi \in L^{\infty}([0,T];\mathbb{R}_{+})$, $\tilde{\Phi}\in L^{\infty}([0,T]\times \Omega;\mathbb{R})$ and $v\in V$ are arbitrary, we can conclude $\xi(\cdot) = F(\bar{X}(\cdot))$ $\mathrm{d}t \otimes P$-almost everywhere.
Uniqueness of the variational solution can be derived as in
\cite{Brzezniak_Liu_Zhu}.
\end{proof}
\section{Jumps of cylindrical L\'evy processes and existence of solution in the non-integrable case}
\label{sec_jumps}
In this section we consider the case of a driving noise without finite moments. Contrary to the classical case of a genuine L\'evy process, one cannot directly apply stopping time arguments such as in \cite[Sec.\ 9.7]{Peszat_Zabczyk} or interlacing techniques such as in \cite[Th.\ IV.9.1]{Ikeda_Watanabe}, since the cylindrical L\'evy process does not attain values in the underlying space. We focus on a special class of cylindrical L\'evy processes, which, similarly to the case of a cylindrical Brownian motion, can be represented by a sum. However, the sum does not converge in the underlying Hilbert space.
For a sequence of positive, bounded real numbers $c=(c_j) \in \ell^\infty(\mathbb{R}_+)$ we define the sequence of stopping times by
\begin{equation*}
\tau_n^c(k)
:= \inf \bigg\{ t \geq 0 : \sum_{j=1}^n \left( \Delta L(t)e_j \right)^2 c_j^2 > k^2 \bigg\}\qquad\text{for each }k>0, n\in\mathbb{N}.
\end{equation*}
The stopping time $\tau_n^c(k)$ can be seen as the first time at which the $n$-dimensional L\'evy process $\big((L(t)(c_1e_1),\dots, L(t)(c_ne_n)):\,t\ge 0\big)$ has a jump of Euclidean size larger than $k$. Since $\tau_n^c(k)$ is non-increasing in $n$, we can define another sequence of stopping times by
\begin{equation}
\label{definition_of_tau_k}
\tau^c(k) := \lim_{n \to \infty} \tau_n^c(k) \qquad \text{for } k> 0.
\end{equation}
Contrary to the case of a genuine Hilbert space-valued L\'evy process, if the noise is cylindrical the stopping times $\tau_n^c(k)$ may accumulate at zero,
i.e.\ $\tau^c(k)=0$ $P$-a.s.
It will turn out that the distribution of the stopping time $\tau^c(k)$ depends on the parameter
\begin{equation}
\label{definition_of_m_c_k}
m^c(k) := \sup_{n\in \mathbb{N}} \nu \bigg( \bigg\{ u\in U : \sum_{j=1}^n \langle u,e_j \rangle^2 c_j^2 > k^2 \bigg\} \bigg) \qquad \text{for } k>0,
\end{equation}
where $\nu$ is the cylindrical L\'evy measure of $L$.
If $L$ is a genuine L\'evy process in $U$ then its L\'evy measure $\nu$ is finite outside each ball around $0$ and $m^c(k)\to 0$ as $k\to\infty$.
In the cylindrical case, the situation turns out to be rather different as Proposition \ref{pro_accumulation_and_m_c_k} shows:
\begin{Pro} \label{pro_accumulation_and_m_c_k}
Let $L$ be a cylindrical L\'evy process with $m^c$ defined in \eqref{definition_of_m_c_k} for a fixed $c \in \ell^\infty(\mathbb{R}_+)$.
\begin{enumerate}
\item[{\rm (1)}] We have the following dichotomy for each $k>0$:
\begin{enumerate}
\item[(i)] $m^c(k)=0$ $\Leftrightarrow$ $\tau^c(k)=\infty$ $P$-a.s.;
\item[(ii)] $m^c(k) \in (0,\infty)$ $\Leftrightarrow$ $\tau^c(k)$ is exponentially distributed with parameter $m^c(k)$;
\item[(iii)] $m^c(k)=\infty$ $\Leftrightarrow$ $\tau^c(k) = 0$ $P$-a.s.
\end{enumerate}
\item[{\rm (2)}] We have: $\lim\limits_{k\to\infty} m^c(k)=0$ $\Leftrightarrow$ $\displaystyle \lim_{k\to\infty}\tau^c(k)=\infty$ $P$-a.s.
\end{enumerate}
\end{Pro}
\begin{proof}
(1) Define the mapping
\begin{align*}
\pi_n^c:U \to U, \qquad \pi_n^c(u) = \sum_{j=1}^n c_j \scalar{u}{e_j} e_j.
\end{align*}
Then $\tau_n^c(k)$ is the time of the first jump of size larger than $k$ of the genuine L\'evy process $L_n^c$ defined by
$$L_n^c(t) = \sum_{j=1}^n c_j L(t)(e_j) e_j, \qquad t \geq 0.$$
As the L\'evy measure $\nu_n^c$ of $L_n^c$ is given by $\nu_n^c:=\nu \circ (\pi_n^c)^{-1}$, the stopping time $\tau_n^c(k)$ is exponentially distributed with parameter
$$\lambda_n^k:= \nu_n^c\big(\{u\in U: \normU{u}>k\}\big)
= \nu \bigg( \bigg\{ u \in U : \sum_{j=1}^n c_j^2 \scalar{u}{e_j}^2 > k^2 \bigg\} \bigg).$$
(i): The very definition of $m^c(k)$ implies that $m^c(k)=0$ if and only if $\lambda_n^k=0$ for all $n\in\mathbb{N}$. The latter is equivalent to $\tau_n^c(k)=\infty$ $P$-a.s.\ for all $n\in\mathbb{N}$, and hence to $\tau^c(k)=\infty$ $P$-a.s.
(ii), (iii): the characteristic function $\phi_{\tau_n^c(k)}$ of $\tau_n^c(k) $ is given by
\begin{align*}
\phi_{\tau_n^c(k)}\colon \mathbb{R}\to\mathbb{C},\qquad
\phi_{\tau_n^c(k)}(x)=\frac{\lambda_n^k}{\lambda_n^k-ix}.
\end{align*}
As $\lambda_n^k$ increases monotonically to $m^c(k)$ as $n\to\infty$, the characteristic functions $\phi_{\tau_n^c(k)}$ converge pointwise to the characteristic function either of the exponential distribution with parameter $m^c(k)$ or, if $m^c(k)=\infty$, of the Dirac measure in $0$. Since $\tau_n^c(k)$ decreases to $\tau^c(k)$, this limit identifies the distribution of $\tau^c(k)$.
For establishing (2), note that monotonicity of $k\mapsto \tau^c(k)$ yields
\begin{align*}
P\left(\lim_{k\to\infty} \tau^c(k) =\infty\right)
= P\bigg( \bigcap_{t\in \mathbb{N}} \bigcup_{n\in \mathbb{N}} \bigcap_{k\geq n} \{\tau^c(k) >t\}\bigg)
=\lim_{t\to\infty} \lim_{n\to\infty} P\Big(\tau^c(n)>t\Big).
\end{align*}
Since $P(\tau^c(k) > t) = \exp(-tm^c(k))$, this completes the proof of (2).
\end{proof}
\subsection{Cylindrical L\'evy processes with diagonal structure}
Let $L$ be a cylindrical L\'evy process with cylindrical L\'evy measure $\nu$ and let $(e_j)$ be an orthonormal basis of $U$.
In the remainder of this section we consider the special class of
cylindrical L\'evy processes which are of the form
\begin{align} \label{series_cylindrical_Levy}
L(t)u = \sum_{j=1}^\infty \ell_j(t) \langle u,e_j \rangle
\qquad\text{for all }u\in U,\, t\ge 0,
\end{align}
where $(\ell_j)$ is a sequence of independent, not necessarily identically distributed, one-dimensional L\'evy processes.
Denote the characteristics (with respect to the standard truncation
function $\1_{B_\mathbb{R}}$) of $\ell_j$ by $(b_j,s_j,\rho_j)$ for each $j\in\mathbb{N}$. Lemma 4.2 in \cite{Riedle_OU} guarantees that the sum in \eqref{series_cylindrical_Levy} converges and defines a cylindrical L\'evy process if and only if the characteristic functions of $\ell_j$ are equicontinuous at $0$ and the following three conditions are satisfied for every $(\alpha_j) \in \ell^2(\mathbb{R})$:
\begin{enumerate}
\item \itemEq{\sum\limits_{j=1}^\infty \1_{B_\mathbb{R}}(\alpha_j)\@ifstar{\oldabs}{\oldabs*}{\alpha_j} \@ifstar{\oldabs}{\oldabs*}{b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\leq \@ifstar{\oldabs}{\oldabs*}{\alpha_j}^{-1}} x \, \rho_j(\mathrm{d}x)} < \infty, \label{lemma_4_2_first_condition}}
\item \itemEq{ (s_j) \in \ell^\infty(\mathbb{R}), \label{lemma_4_2_second_condition}}
\item \itemEq{\sum\limits_{j=1}^\infty {\displaystyle \int_\mathbb{R}} \left( \@ifstar{\oldabs}{\oldabs*}{\alpha_j x}^2 \wedge 1 \right) \rho_j(\mathrm{d}x) < \infty.\label{lemma_4_2_third_condition}}
\end{enumerate}
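For instance, if $(\ell_j)$ is a sequence of independent, standard real-valued Brownian motions, then $b_j=0$, $s_j=1$ and $\rho_j=0$ for all $j\in\mathbb{N}$. In this case the characteristic functions of $\ell_j(1)$ coincide and Conditions \eqref{lemma_4_2_first_condition}--\eqref{lemma_4_2_third_condition} hold trivially, so that \eqref{series_cylindrical_Levy} defines a cylindrical L\'evy process; it is the canonical cylindrical Wiener process in $U$.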
Independence of the L\'evy processes $(\ell_j)$ implies that the cylindrical L\'evy measure of $L$ is supported on $\bigcup_{j\in\mathbb{N}} \text{\rm span}\{e_j\}$. Consequently, the function $m^c$, defined in \eqref{definition_of_m_c_k}, reduces to
\begin{align}\label{eq.m^c-series}
m^c(k) = \sum_{j=1}^\infty \rho_j\left( \left\{x\in\mathbb{R}: \@ifstar{\oldabs}{\oldabs*}{x}>\tfrac{k}{c_j}\right\} \right) \qquad\text{for all }k>0.
\end{align}
In general, cylindrical L\'evy processes do not enjoy a type of L\'evy-It\^{o} decomposition. However, the specific construction of cylindrical L\'evy processes of the form \eqref{series_cylindrical_Levy} suggests to derive a L\'evy-It\^{o} decomposition from an appropriate decomposition of the real-valued processes $\ell_j$. More precisely, for a given sequence
$c=(c_j)\in \ell^\infty (\mathbb{R}_{+})$ and $k>0$ we obtain $\ell_j(t)= p_j^{c,k}(t) + m_j^{c,k}(t) + r_j^{c,k}(t)$ for all $t\ge 0$ where
\begin{align}
p_j^{c,k}(t) &:= \left( b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x \, \rho_j(\mathrm{d}x)\right)t , \label{eq.def-p} \\
m_j^{c,k}(t) &:= \sqrt{s_j} W_j(t) + \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x \, \tilde{N}_j(t,\mathrm{d}x), \\
r_j^{c,k}(t) &:= \int_{\@ifstar{\oldabs}{\oldabs*}{x}>k/c_j} x \, N_j(t,\mathrm{d}x).
\label{eq.def-q}
\end{align}
Here, the process $W_j$ is a real-valued standard Brownian motion and $N_j$ is a Poisson random measure on $[0,\infty)\times \mathbb{R}$ with intensity measure $\mathrm{d}t\otimes \rho_j$.
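The decomposition above is nothing but the L\'evy--It\^o decomposition of $\ell_j$ with the truncation level $1$ replaced by $k/c_j$. Assuming for simplicity that $k/c_j\geq 1$, one regroups
\begin{align*}
\ell_j(t)
&= b_j t + \sqrt{s_j}W_j(t) + \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq 1} x\, \tilde{N}_j(t,\mathrm{d}x) + \int_{\@ifstar{\oldabs}{\oldabs*}{x}>1} x\, N_j(t,\mathrm{d}x)\\
&= \bigg( b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x\, \rho_j(\mathrm{d}x) \bigg)t
+ \sqrt{s_j}W_j(t) + \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x\, \tilde{N}_j(t,\mathrm{d}x)
+ \int_{\@ifstar{\oldabs}{\oldabs*}{x}>k/c_j} x\, N_j(t,\mathrm{d}x),
\end{align*}
which is exactly $p_j^{c,k}(t)+m_j^{c,k}(t)+r_j^{c,k}(t)$.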
In the following lemma, we summarise the conditions on the cylindrical L\'evy process such that the stopping times $\tau^c(k)$, defined in \eqref{definition_of_tau_k}, do not accumulate at zero and such that the decomposition of $\ell_j$ leads to a decomposition of the cylindrical L\'evy process:
\begin{enumerate}[assumptions]
\item \label{item_existence_of_c} there exists a sequence $c=(c_j) \in \ell^\infty(\mathbb{R}_+)$ such that
\begin{enumerate}
\item \itemEq{\left( p_j^{c,k}(1)\right)_{j\in\mathbb{N}}\in \ell^2(\mathbb{R}) \text{ for each } k>0; \label{assumption_drift}}
\item \itemEq{\displaystyle \sup_{j\in\mathbb{N}}\int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x^2
\, \rho_j(\mathrm{d}x)<\infty \text{ for each } k>0; \label{assumption_square_integrable}}
\item \itemEq{\displaystyle \lim_{k\to\infty} m^c(k) = 0.\label{assumption_no_accummulation}}
\end{enumerate}
\end{enumerate}
\begin{Rem} \label{rem_square_summable_c}
Assume that $L$ is of the form \eqref{series_cylindrical_Levy}, i.e.\ Conditions \eqref{lemma_4_2_first_condition} - \eqref{lemma_4_2_third_condition} are satisfied. For a square summable sequence $(c_j)$, condition \eqref{lemma_4_2_third_condition} implies \eqref{assumption_no_accummulation} by \eqref{eq.m^c-series} and Lebesgue's theorem. On the other hand, if $c_j$ is constantly equal to $1$, then Condition \eqref{lemma_4_2_third_condition} implies \eqref{assumption_square_integrable}.
Indeed, suppose for contradiction that the sequence
$\big( \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k} x^2 \, \rho_j(\mathrm{d}x): j \in \mathbb{N}\big)$
is unbounded. Then there exists a sequence $(\alpha_j) \in \ell^2(\mathbb{R})$ such that
$$\sum_{j=1}^\infty \alpha_j^2 \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k} x^2 \, \rho_j(\mathrm{d}x) = \infty,$$
which contradicts \eqref{lemma_4_2_third_condition}.
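For instance, one may choose indices $j_1<j_2<\cdots$ with $\int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k} x^2 \, \rho_{j_i}(\mathrm{d}x) \geq 4^i$ and set $\alpha_{j_i}:=2^{-i}$ and $\alpha_j:=0$ for all other indices. Then $(\alpha_j)\in\ell^2(\mathbb{R})$, and for every $i$ with $2^{-i}\leq 1/k$ we have
\begin{align*}
\int_\mathbb{R} \left( \@ifstar{\oldabs}{\oldabs*}{\alpha_{j_i} x}^2 \wedge 1 \right) \rho_{j_i}(\mathrm{d}x)
\geq \alpha_{j_i}^2 \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k} x^2 \, \rho_{j_i}(\mathrm{d}x)
\geq 1,
\end{align*}
so that the series in \eqref{lemma_4_2_third_condition} diverges.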
In summary, for Assumption \ref{item_existence_of_c} to hold there must be some balance between the rate of decay of the L\'evy measures $(\rho_j)$ and the choice of the sequence $c$, whether taken in $\ell^2(\mathbb{R}_{+})$ or merely in $\ell^\infty(\mathbb{R}_{+})$.
\end{Rem}
\begin{Lem}
\label{lem_cyl_decomp}
Assume that $L$ is a cylindrical L\'evy process of the form \eqref{series_cylindrical_Levy} satisfying \ref{item_existence_of_c}
for a sequence $c\in \ell^\infty(\mathbb{R}_{+})$. Then $L$ can be decomposed into $L(t)=P^c_k(t)+M^c_k(t)+R^c_k(t)$ for each $t\ge 0$ and $k>0$, where $P^c_k$, $M^c_k$ and $R^c_k$ are cylindrical L\'evy processes defined by
\begin{equation*}
P^c_k(t)u := \sum_{j=1}^\infty p_j^{c,k}(t) \langle u,e_j \rangle, \enskip
M^c_k(t)u := \sum_{j=1}^\infty m_j^{c,k}(t) \langle u,e_j \rangle, \enskip
R^c_k(t)u := \sum_{j=1}^\infty r_j^{c,k}(t) \langle u,e_j \rangle.
\end{equation*}
The process $M^c_k$ is a weakly square-integrable cylindrical L\'evy martingale
and the stopping times $\tau^c$, defined in \eqref{definition_of_tau_k},
satisfy $\tau^c(k)\to\infty$ $P$-a.s.\ as $k\to\infty$.
\end{Lem}
\begin{proof}
We write $M^c_k(t)=X(t)+Y^c_k(t)$ for each $k>0$ with
$$X(t)u := \sum_{j=1}^\infty \sqrt{s_j} W_j(t) \langle u,e_j \rangle, \qquad
Y^c_k(t)u := \sum_{j=1}^\infty \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x \, \tilde{N}_j(t,\mathrm{d}x) \langle u,e_j \rangle, $$
for all $u\in U$. Since condition \eqref{lemma_4_2_second_condition} implies
\begin{align*}
E\left[ \@ifstar{\oldabs}{\oldabs*}{X(t)u}^2\right]
=\sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{s_j}\scalar{u}{e_j}^2 \le \@ifstar{\oldnorm}{\oldnorm*}{s}_\infty \@ifstar{\oldnorm}{\oldnorm*}{u}^2,
\end{align*}
we obtain that $X(t)\colon U\to L^0(\Omega;\mathbb{R})$ is well defined, continuous and weakly square-integrable.
We have
$$
E\left[\@ifstar{\oldabs}{\oldabs*}{Y^c_k(t)u}^2\right]
=t\sum_{j=1}^\infty\langle u,e_j \rangle^2 \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq k/c_j} x^2 \, \rho_j(\mathrm{d}x)
< \infty$$
by \eqref{assumption_square_integrable}.
Consequently, $Y^c_k(t)$ and thus $M^c_k(t)$ are well defined, continuous and weakly square-integrable. By \eqref{assumption_drift}, the (deterministic) process $P^c_k$ is well defined.
Since $R^c_k=L-M^c_k-P^c_k$, it follows that the series in the definition of $R^c_k$ converges and that $R^c_k(t)\colon U\to L^0(\Omega;\mathbb{R})$ is continuous for all $t\geq 0$. Finally, Condition \eqref{assumption_no_accummulation} together with Part (2) of Proposition \ref{pro_accumulation_and_m_c_k} shows that $\tau^c(k)\to\infty$ $P$-a.s.\ as $k\to\infty$.
\end{proof}
\begin{Exa}[Two-sided stable process]\label{ex.symmetric-stable}\label{exa_symmetric_stable}
An often considered example of a process given in \eqref{series_cylindrical_Levy} is for $\ell_j = \sigma_j h_j$,
where $h_j$ are identically distributed, symmetric $\alpha$-stable L\'evy processes
and $\sigma_j\in\mathbb{R}$; see \cite{Priola_Zabczyk_2nd, Priola_Zabczyk_1st}.
In this case, $\ell_j$ has L\'evy measure $\rho_j = \rho \circ m_{\sigma_j}^{-1}$, where $m_{\sigma_j}\colon \mathbb{R}\to\mathbb{R}$ is given by $m_{\sigma_j}(x) = \sigma_j x$ and $\rho(\mathrm{d}x) = \frac12 \@ifstar{\oldabs}{\oldabs*}{x}^{-1-\alpha} \, \mathrm{d}x$.
By \cite[Ex.\ 4.5]{Riedle_OU}, formula \eqref{series_cylindrical_Levy} defines a cylindrical L\'evy process if and only if $\sigma=(\sigma_j) \in \ell^{\frac{2\alpha}{2-\alpha}}(\mathbb{R})$. Moreover, $L$ is induced by a classical process if and only if $\sigma \in \ell^\alpha(\mathbb{R})$.
We show that Assumption \ref{item_existence_of_c} is satisfied
for the sequence $(c_j)\in\ell^2(\mathbb{R}_{+})$ defined by $c_j = \@ifstar{\oldabs}{\oldabs*}{\sigma_j}^{\frac{\alpha}{2-\alpha}}$. Condition \eqref{assumption_drift} is trivially satisfied because each $h_j$ has no drift and the L\'evy measure is symmetric. Since
\begin{align*}
\int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq \frac{k}{c_j}} x^2 \, \rho_j(\mathrm{d}x)
= \sigma_j^2 \int_{\@ifstar{\oldabs}{\oldabs*}{x}\leq \frac{k}{\@ifstar{\oldabs}{\oldabs*}{c_j\sigma_j}}} x^2 \, \rho(\mathrm{d}x)
= \sigma_j^2 \frac{k^{2-\alpha}}{2-\alpha} \@ifstar{\oldabs}{\oldabs*}{c_j \sigma_j}^{\alpha-2}
= \frac{k^{2-\alpha}}{2-\alpha},
\end{align*}
Condition \eqref{assumption_square_integrable} is satisfied.
Since $(c_j)\in\ell^2(\mathbb{R}_{+})$ by its very definition,
Remark \ref{rem_square_summable_c} establishes
Condition \eqref{assumption_no_accummulation}.
\end{Exa}
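The identity computed in this example can also be checked numerically. The following sketch is an illustration only and not part of the argument; the chosen values of $\alpha$, $k$ and $\sigma_j$ are arbitrary:
\begin{verbatim}
# Numerical illustration (not part of the argument): truncated second moment
# of rho_j = rho o m_{sigma_j}^{-1} with rho(dx) = 0.5*|x|^(-1-alpha) dx,
# truncation level k/c_j and c_j = |sigma_j|^(alpha/(2-alpha)).
alpha, k = 0.8, 2.0
for sigma in (0.5, 1.0, 3.0):
    c = abs(sigma) ** (alpha / (2.0 - alpha))
    upper = (k / c) / abs(sigma)              # substitution x = sigma * y
    m = 200000                                # midpoint rule on (0, upper]
    h = upper / m
    integral = sum(0.5 * ((i + 0.5) * h) ** (1.0 - alpha) for i in range(m)) * h
    value = sigma ** 2 * 2.0 * integral       # factor 2: symmetric measure
    print(value, k ** (2.0 - alpha) / (2.0 - alpha))   # both ~ k^(2-a)/(2-a)
\end{verbatim}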
\begin{Exa}[One-sided stable process]\label{ex.one-sided-stable}
We choose $\ell_j=\sigma_j h_j$ in \eqref{series_cylindrical_Levy} with $\sigma_j\in\mathbb{R}$ and $h_j$ an arbitrary strictly $\alpha$-stable L\'evy process with $\alpha\in (0,2)$ and with no negative jumps. Note that $\alpha\neq 1$, since a 1-stable L\'evy process is strictly stable if and only if its L\'evy measure is symmetric. The characteristic function of $h_j(1)$ is given by
\begin{align*}
\phi_{h_j(1)}(x) =
\exp\left( -c\@ifstar{\oldabs}{\oldabs*}{x}^\alpha \left(1-i \tan\tfrac{\pi\alpha}{2} \sgn x\right) \right),
\end{align*}
for a constant $c>0$; see \cite[Th.\ 14.15, Def.\ 14.16]{Sato}.
It follows that the L\'evy process $\sigma_jh_j$ has characteristics $(b_j, 0,\rho_j)$ given by
\begin{align*}
b_j= \frac{c}{c_\alpha(1-\alpha)}\, \sigma_j \@ifstar{\oldabs}{\oldabs*}{\sigma_j}^{\alpha-1},
\qquad
\rho_j(\mathrm{d}x)=\big(\rho\circ m_{\sigma_j}^{-1}\big)(\mathrm{d}x),
\end{align*}
where $c_\alpha = - \cos\left( \frac{\alpha \pi}{2} \right) \Gamma(\alpha)$, the function $m_{\sigma_j}\colon \mathbb{R}\to\mathbb{R}$ is defined by $m_{\sigma_j}(x) = \sigma_j x$
and $\rho(\mathrm{d}x) = \1_{(0,\infty)}(x)\frac{c}{c_\alpha}x^{-1-\alpha}\, \mathrm{d}x$.
We claim that $L$ is a cylindrical L\'evy process if and only if $\sigma \in \ell^\frac{2\alpha}{2-\alpha}(\mathbb{R})$. Indeed, Condition \eqref{lemma_4_2_first_condition} reduces to
\begin{align*}
\sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{\alpha_j} \@ifstar{\oldabs}{\oldabs*}{b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\le 1/\@ifstar{\oldabs}{\oldabs*}{\alpha_j}} x \, \big(\rho\circ m_{\sigma_j}^{-1}\big)(\mathrm{d}x)}
&= \frac{c}{c_\alpha\@ifstar{\oldabs}{\oldabs*}{1-\alpha}} \sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{\alpha_j\sigma_j}^\alpha,
\intertext{whereas Condition \eqref{lemma_4_2_third_condition} reads as}
\sum\limits_{j=1}^\infty {\displaystyle \int_\mathbb{R}} \left( \@ifstar{\oldabs}{\oldabs*}{\alpha_j x}^2 \wedge 1 \right) \rho_j(\mathrm{d}x)
&= \frac{2c}{c_\alpha(2-\alpha)\alpha}\sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{\alpha_j\sigma_j}^\alpha.
\end{align*}
Assumption \ref{item_existence_of_c} is satisfied with $c_j = \@ifstar{\oldabs}{\oldabs*}{\sigma_j}^{\frac{\alpha}{2-\alpha}}$, since the sum in Condition \eqref{assumption_drift} can be computed as
\begin{align*}
\sum_{j=1}^\infty \bigg(b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\leq \frac{k}{c_j}} x \, \rho_j(\mathrm{d}x) \bigg)^2
= \left(\frac{ck^{1-\alpha}}{c_\alpha(1-\alpha)} \right)^2 \sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{\sigma_j}^\frac{2\alpha}{2-\alpha}.
\end{align*}
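For instance, for $\sigma_j>0$ the term inside the square can be evaluated directly; the following intermediate step is a sketch of this computation (the case $\sigma_j<0$ is analogous, and the sign disappears after squaring):
\begin{align*}
b_j + \int_{1<\@ifstar{\oldabs}{\oldabs*}{x}\leq \frac{k}{c_j}} x \, \rho_j(\mathrm{d}x)
&= \frac{c}{c_\alpha(1-\alpha)}\,\sigma_j^{\alpha}
+ \frac{c\,\sigma_j}{c_\alpha(1-\alpha)}\left( \Big(\frac{k}{c_j\sigma_j}\Big)^{1-\alpha} - \sigma_j^{\alpha-1}\right)\\
&= \frac{c\,k^{1-\alpha}}{c_\alpha(1-\alpha)}\, \sigma_j^{\frac{\alpha}{2-\alpha}},
\end{align*}
where the last equality uses $c_j\sigma_j=\sigma_j^{\frac{2}{2-\alpha}}$.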
Conditions \eqref{assumption_square_integrable} and \eqref{assumption_no_accummulation} follow by the same arguments as in Example \ref{exa_symmetric_stable}.
\end{Exa}
\begin{Rem}
In both Examples \ref{ex.symmetric-stable} and \ref{ex.one-sided-stable}, Condition
\eqref{assumption_no_accummulation} would not be satisfied for a constant level of truncation of the jumps, i.e.\ with $c_j=1$ for all $j\in \mathbb{N}$. By introducing the weights $(c_j)$ we compensate for the fact that the cylindrical distribution of $L$ is not tight, i.e.\ its mass on the span of the higher modes decays too slowly.
\end{Rem}
\begin{Exa}[One-sided regularly varying tails]\label{ex.one-sided-varying}
Recall that a measure $\mu$ concentrated on $(0,\infty)$ is said to have regularly varying tails with index $\alpha$ if
$$\lim_{x \to \infty} \frac{\mu(\lambda x,\infty)}{\mu(x,\infty)} = \lambda^{-\alpha} \qquad \text{for all } \lambda>0;$$
see \cite{Bingham, Feller_vol_2}. We choose $\ell_j = \sigma_j h_j$ in \eqref{series_cylindrical_Levy} with a sequence of independent and identically distributed L\'evy processes $h_j$ of regularly varying tails of index $\alpha \in (0,1)\cup (1,2)$. For simplifying the calculations, we assume that the characteristic function of $h_j(1)$ is given by
\begin{align*}
\phi_{h_j(1)}(x) =
\begin{cases}
\exp\left( \int_0^\infty \left( e^{ixy}-1 -ixy\1_{B_\mathbb{R}}(y)\right) \rho(\mathrm{d}y)+ixb \right), &\text{ if }\alpha \in (0,1),\\
\exp\left( \int_0^\infty \left( e^{ixy}-1-ixy \right) \rho(\mathrm{d}y)\right), &\text{ if }\alpha \in (1,2),
\end{cases}
\end{align*}
for a constant $b\in\mathbb{R}$. The L\'evy measure $\rho$ of $h_j$ has regularly varying tails according to \cite{Embrechts_Goldie}.
We show that if $(\sigma_j) \in \ell^{\frac{2\delta}{2-\delta}}(\mathbb{R})$ for some $\delta<\alpha$, then \eqref{series_cylindrical_Levy} defines a cylindrical L\'evy process. For this purpose, we define
\begin{equation*}
V_\delta (x) := \int_x^\infty y^\delta \1_{(1,\infty)}(y)\, \rho(\mathrm{d}y), \qquad
U_2(x) := \int_1^x y^2 \, \rho(\mathrm{d}y), \qquad \text{for } x>0.
\end{equation*}
It follows from \cite[Prop.\ 4.2.1]{Samorodnitsky} that $V_\delta(0)<\infty$ and $U_2(\infty)=\infty$. Theorem~VII.9.2 in \cite{Feller_vol_2} implies
$$\lim_{x\to \infty} \frac{x^{2-\delta} V_\delta(x)}{U_2(x)} = \frac{2-\alpha}{\alpha-\delta}=:c,$$
and therefore there exists $M>0$ such that
$$U_2(x) \leq \frac{2x^{2-\delta}V_\delta(x)}{c}\qquad\text{for all } x\ge M.$$
Since both $(\alpha_j)$ and $(\sigma_j)$ tend to $0$ we can assume without loss of generality that $\frac1{\@ifstar{\oldabs}{\oldabs*}{\alpha_j\sigma_j}}>M$ for all $j\in \mathbb{N}$.
For verifying Condition \eqref{lemma_4_2_third_condition} we obtain
\begin{align*}
&\sum_{j=1}^\infty \left( \alpha_j^2 \sigma_j^2 \int_{0\le x\leq 1} x^2 \, \rho(\mathrm{d}x) + \alpha_j^2 \sigma_j^2 \int_{1<x\leq \frac1{\@ifstar{\oldabs}{\oldabs*}{\alpha_j \sigma_j}}} x^2 \, \rho(\mathrm{d}x) \right) \\
&\qquad = \sum_{j=1}^\infty \alpha_j^2 \sigma_j^2 \int_{0\le x\leq 1} x^2 \, \rho(\mathrm{d}x) + \sum_{j=1}^\infty \alpha_j^2 \sigma_j^2 U_2\left(\frac1{\@ifstar{\oldabs}{\oldabs*}{\alpha_j\sigma_j}}\right)\\
&\qquad \le
\sum_{j=1}^\infty \alpha_j^2 \sigma_j^2 \int_{0\le x\leq 1} x^2 \, \rho(\mathrm{d}x) + \frac2{c} \sum_{j=1}^\infty \@ifstar{\oldabs}{\oldabs*}{\alpha_j \sigma_j}^{\delta } V_\delta\left(\frac1{\@ifstar{\oldabs}{\oldabs*}{\alpha_j \sigma_j}}\right).
\end{align*}
Both sums are finite because of the summability assumptions on $(\alpha_j)$ and $(\sigma_j)$.
Similarly, we derive that
\begin{align*}
\sum_{j=1}^\infty \rho\left( \Big[\tfrac{1}{\@ifstar{\oldabs}{\oldabs*}{\alpha_j\sigma_j}}, \infty \Big)\right)<\infty,
\end{align*}
which shows Condition \eqref{lemma_4_2_third_condition}.
Similarly to the stable case in Example \ref{ex.one-sided-stable}, the sequence $(c_j)$ satisfying \ref{item_existence_of_c} can be defined by $c_j = \@ifstar{\oldabs}{\oldabs*}{\sigma_j}^\frac{\alpha}{2-\alpha}$.
Note that the conclusion in this example is not optimal in the case of $\alpha$-stable noise. Indeed, in Example \ref{ex.one-sided-stable} we can choose $\sigma \in
\ell^{\frac{2\alpha}{2-\alpha}}(\mathbb{R})$ whereas here we have to choose
$\sigma\in \ell^{\frac{2\delta}{2-\delta}}(\mathbb{R})$ for $\delta<\alpha$.
\end{Exa}
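The ratio limit provided by Feller's theorem, which drives the preceding example, can be illustrated numerically. In the sketch below the L\'evy measure $\rho(\mathrm{d}y)=\alpha y^{-1-\alpha}\1_{(1,\infty)}(y)\,\mathrm{d}y$ is a concrete Pareto-type choice made only for this illustration:
\begin{verbatim}
# Illustration only: for rho(dy) = alpha*y^(-1-alpha) dy on (1,infinity), the
# ratio x^(2-delta) V_delta(x) / U_2(x) approaches (2-alpha)/(alpha-delta).
alpha, delta = 1.4, 1.1

def V_delta(x):   # integral of y^delta * alpha*y^(-1-alpha) over (x,infinity)
    return alpha / (alpha - delta) * x ** (delta - alpha)

def U_2(x):       # integral of y^2 * alpha*y^(-1-alpha) over (1,x)
    return alpha / (2.0 - alpha) * (x ** (2.0 - alpha) - 1.0)

for x in (10.0, 100.0, 10000.0):
    print(x ** (2.0 - delta) * V_delta(x) / U_2(x),
          (2.0 - alpha) / (alpha - delta))
\end{verbatim}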
The integration theory developed in \cite{Riedle_L2} and summarised in Section \ref{sec_properties} relies on finite weak moments of the cylindrical L\'evy process. In the following, we extend this stochastic integral to the class of cylindrical L\'evy processes of the form \eqref{series_cylindrical_Levy} under Assumption \ref{item_existence_of_c} without requiring finite weak moments. For this purpose, by fixing a sequence $c\in \ell^\infty(\mathbb{R}_{+})$ such that Assumption \ref{item_existence_of_c} is satisfied and by using the notation \eqref{eq.def-p}--\eqref{eq.def-q} we define for each $k>0$:
\begin{equation*}
L^c_k(t)u := \sum_{j=1}^\infty \Big( p_j^{c,k}(t)+ m_j^{c,k}(t)\Big) \langle u, e_j \rangle, \qquad t\ge 0,\, u \in U.
\end{equation*}
Lemma \ref{lem_cyl_decomp} yields that $L^c_k=P_k^c+M_k^c$ is a square-integrable cylindrical L\'evy process.
At the same time, we extend the class of integrands by the usual localisation arguments. For this purpose, we define the class
\begin{align*}
&\Lambda_{\rm{loc}}(U,H)\\
&:=\left\{\Psi\colon [0,T]\times \Omega\to L_{\rm HS}(U,H):
\text{ predictable, } \int_0^T \@ifstar{\oldnorm}{\oldnorm*}{\Psi(t)}_{L_{\rm HS}(U,H)}^2 \, \mathrm{d}t<\infty
\text{ $P$-a.s.}\right\}.
\end{align*}
\begin{Th}\label{th.integration-predictable}
Assume that $L$ is a cylindrical L\'evy process of the form \eqref{series_cylindrical_Levy} satisfying {\upshape\ref{item_existence_of_c}}
for a sequence $c\in \ell^\infty(\mathbb{R}_{+})$ and let $\Psi$ be in $\Lambda_{\rm{loc}}(U,H)$. Then there exists an increasing sequence of stopping times $(\varrho(k))$ with $\varrho(k)\to \infty$ $P$-a.s.\ as $k\to\infty$ such that $\Psi(\cdot)\1_{[0,\varrho(k)]}(\cdot)\in \Lambda(U,H)$ for each $k\in\mathbb{N}$ and
\begin{align*}
\left(\int_0^t \Psi(s)\1_{\{s\leq\varrho(k)\}} \, \mathrm{d}L^c_k(s):\, t\in [0,T]\right)_{k\in\mathbb{N}}
\end{align*}
is a Cauchy sequence in the topology of uniform convergence in probability and its limit is independent of the sequence $c$ satisfying Assumption \ref{item_existence_of_c}.
\end{Th}
Theorem \ref{th.integration-predictable} enables us to define for each $\Psi\in \Lambda_{\rm loc}$ the stochastic integrals
\begin{align*}
\int_0^\cdot \Psi(s)\, \, \mathrm{d}L(s):=\lim_{k\to\infty} \int_0^\cdot \Psi(s)\1_{[0,\varrho(k)]}(s)\, \, \mathrm{d}L^c_k(s),
\end{align*}
where the limit is taken in the topology of uniform convergence in probability.
Note that although in \cite{Jakubowski_Riedle} a stochastic integration theory is developed for a large class of integrands with respect to arbitrary cylindrical L\'evy processes, it does not cover the case of integrands that are merely predictable.
\begin{proof}
Since $\Psi \in \Lambda_{\rm{loc}}(U,H)$, the stopping times
$$\tilde{\tau}(k) := \inf \left\{ t\geq 0 : \int_0^t \HSnorm{\Psi(s)}^2 \, \mathrm{d}s > k \right\},$$
increase to infinity as $k\to\infty$. Consequently, the stopping times $\tau^c(k) \wedge \tilde{\tau}(k)$ also converge to $+\infty$ by Lemma \ref{lem_cyl_decomp}, and we define $\varrho^c(k) := \tau^c(k) \wedge \tilde{\tau}(k)$. Note that if $k\le n$ and $T\leq \varrho^c(k)$, then $L^c_k=L^c_n$ on $[0,T]$ and
$$\int_0^t \Psi(s) \1_{\{s\leq\varrho^c(k)\}} \, \mathrm{d}L^c_k(s) = \int_0^t \Psi(s) \1_{\{s\leq\varrho^c(n)\}} \, \mathrm{d}L^c_n(s)$$
for all $t \in [0,T]$.
Consequently, we obtain for each $k\le n$ and $\epsilon>0$ that
\begin{align*}
&P\left( \sup_{t \in [0,T]} \normH{\int_0^t \Psi(s) \1_{\{s\leq\varrho^c(k)\}} \, \mathrm{d}L^c_k(s) - \int_0^t \Psi(s) \1_{\{s\leq\varrho^c(n)\}} \, \mathrm{d}L^c_n(s)} \ge \epsilon \right) \\
&\qquad \leq P \left( \int_0^t \Psi(s)\1_{\{s\leq\varrho^c(k)\}} \, \mathrm{d}L^c_k(s) \neq \int_0^t \Psi(s) \1_{\{s\leq\varrho^c(n)\}} \, \mathrm{d}L^c_n(s)\text{ for some } t \in [0,T] \right)\\
&\qquad\le P\big(T > \varrho^c(k)\big)
\to 0 \qquad \text{as }k\to \infty,
\end{align*}
which establishes the claimed convergence.
The limit of the Cauchy sequence does not depend on the choice of the sequence $c$ satisfying \ref{item_existence_of_c}: if $d$ is another sequence satisfying \ref{item_existence_of_c}, then $L^c_k(t)=L^d_n(t)$ for all $t \in [0,T]$ on $\{T\leq \tau^c(k)\wedge \tau^d(n)\}$ and
$$\int_0^t \Psi(s) \1_{\{s\leq\tau^c(k)\}} \, \mathrm{d}L^c_k(s) = \int_0^t \Psi(s) \1_{\{s\leq\tau^d(n)\}} \, \mathrm{d}L^d_n(s),$$
which completes the proof.
\end{proof}
\subsection{Existence of a solution for the diagonal noise}
Existence of a cylindrical L\'evy process of the form \eqref{series_cylindrical_Levy} strongly depends on the interplay between
the drift part $b_j$ and the L\'evy measure $\rho_j$ of the real valued L\'evy process with characteristics $(b_j,s_j,\rho_j)$, see condition \eqref{lemma_4_2_first_condition}. For this reason, we consider the general case of a cylindrical L\'evy process with a possible non-zero drift part. Naturally, we will tackle this part by moving it to the drift part of the equation under consideration. For this purpose, recall the decomposition
$L(t)=P^c_k(t) +M^c_k(t)+R^c_k(t)$ of the cylindrical L\'evy process $L$ for each $k>0$ derived in Lemma \ref{lem_cyl_decomp} under assumption \ref{item_existence_of_c} satisfied for a sequence $c\in \ell^\infty(\mathbb{R}_{+})$. Let $Q_k$ denote the covariance operator of $M_k^c$.
Furthermore, instead of the standard coercivity and monotonicity requirements, we introduce assumptions for each truncation level $k \in \mathbb{N}$.
Assumptions of this form were introduced in Peszat and Zabczyk \cite[Sec. 9.7]{Peszat_Zabczyk} in the semigroup approach: assume that there are constants $\alpha_k, \lambda_k, \beta_k>0$ such that
\begin{enumerate}[assumptions]
\item[(A1$^\prime$)] \label{item_monotonicty_not_L2}(coercivity) For every $k \in \mathbb{N}$
and $v\in V$ we have
\begin{align*}
2\pairing{F(v)+P_k^c(1) G^*(v)}v + \HSnorm{G(v)Q_{k}^{1/2}}^2 + \alpha_k \normV{v}^2 \leq \lambda_k \normH{v}^2+\beta_k;
\end{align*}
\item[(A2$^\prime$)] \label{item_coercivity_not_L2} (monotonicity) For every $k \in \mathbb{N}$ and $v_1,v_2 \in V$ we have
\begin{multline*}
2\pairing{F(v_1)-F(v_2)+P_k^c(1)\big(G^*(v_1)- G^*(v_2)\big)}{v_1-v_2} \\
+ \HSnorm{(G(v_1)-G(v_2))Q_{k}^{1/2}}^2 \leq \lambda_k \normH{v_1-v_2}^2.
\end{multline*}
\end{enumerate}
\begin{Th}
\label{th_existence_of_solution_not_L2}
Assume that $L$ is a cylindrical L\'evy process of the form \eqref{series_cylindrical_Levy} satisfying \ref{item_existence_of_c} for a sequence $c\in \ell^\infty(\mathbb{R}_{+})$.
If the coefficients $F$ and $G$ satisfy {\upshape(A1$^\prime$)}, {\upshape(A2$^\prime$)}, {\upshape(A3)} and {\upshape(A4)}, then
equation \eqref{SPDE} with an $\mathcal{F}_0$--measurable initial condition $X(0) = X_0$ has a pathwise unique variational solution $(X,\bar{X})$.
\end{Th}
\begin{proof}
We reduce the case of the general initial condition to the square integrable one as in \cite[Th.\ 6.2.3]{Applebaum}. For $k\in \mathbb{N}$ let $\Omega_k = \{ \@ifstar{\oldnorm}{\oldnorm*}{X_0} \leq k \}$ and $X_0^k = X_0 \1_{\Omega_k}$.
Using the decomposition $L(t)= P^c_k(t) +M^c_k(t)+R^c_k(t)$, Lemma \ref{lem_cyl_decomp} guarantees that $M^c_k$ is a weakly square-integrable cylindrical L\'evy martingale, and thus according to Theorem \ref{th_main_existence_result} there exists a unique variational solution $(X_k^c,\bar{X}_k^c)$ of
$$\, \mathrm{d}X(t) = \big(F(X(t)) + P_k^c (1) G^*(X(t))\big) \, \mathrm{d}t + G(X(t)) \, \mathrm{d}M_k^c(t),$$
with the initial condition $X(0) = X_0^k$.
Step 1: We first show that for each $k\le n$ we have
$X^c_k=X^c_n$ $P$-a.s.\ on $\{T\le \tau^c(k)\} \cap \Omega_k$.
For each $t\in [0,T]$ we have
\begin{align*}
X^c_k(t)-X^c_n(t)
&= -X_0\1_{\Omega_n \setminus \Omega_k} + \int_0^t F\left(\bar{X}^c_k(s)\right)-F\left(\bar{X}^c_n(s)\right) \, \mathrm{d}s \\
&\qquad \qquad + \int_0^t P^c_k(1)G^*\left(\bar{X}^c_k(s)\right)-P^c_n(1)G^*\left(\bar{X}^c_n(s)\right)\, \mathrm{d}s + V^c_{k,n}(t),
\end{align*}
where the martingale $V^c_{k,n}$ is defined by
\begin{equation}
\label{definition_of_V}
V^c_{k,n}(t):=\int_0^t G(\bar{X}^c_k(s))\, \mathrm{d} M^c_k(s) -\int_0^t G(\bar{X}^c_n(s)) \,\mathrm{d} M^c_n(s).
\end{equation}
Applying Theorem \ref{th_from_Gyongy} with the martingale $V^c_{k,n}$ yields
\begin{equation}
\label{after_Ito}
\begin{aligned}
&\normH{X^c_k(t)-X^c_n(t)}^2 \\
&= \normH{X_0}^2 \1_{\Omega_n\setminus\Omega_k} + 2 \int_0^t \pairing{F(\bar{X}^c_k(s))-F(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \\
&\quad +2 \int_0^t \pairing{P^c_k(1)G^*(\bar{X}^c_k(s))-P^c_n(1)G^*(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \\
&\quad + 2\int_0^t \left(X^c_k(s-)-X^c_n(s-)\right) \mathrm{d} V_{k,n}^c(s) + [V_{k,n}^c,V_{k,n}^c](t).
\end{aligned}
\end{equation}
Define a cylindrical L\'evy process $Y_k^c$ by
$$ Y_k^c(t)u := \sum_{j=1}^\infty \left( \int_{k/c_j < \@ifstar{\oldabs}{\oldabs*}{x} \leq n/c_j} x\, N_j(t,\mathrm{d} x)\right) \langle u,e_j \rangle
\qquad\text{for all }t\ge 0, u\in U.$$
The cylindrical martingale $M_n^c$ can be rewritten as
\begin{align*}
M^c_n(t)u
&= \sum_{j=1}^\infty \left(m_j^{c,k}(t) + \int_{k/c_j < \@ifstar{\oldabs}{\oldabs*}{x} \leq n/c_j} x \, \tilde{N}_j(t,\mathrm{d}x) \right) \langle u,e_j \rangle \\
&= \sum_{j=1}^\infty \left(m_j^{c,k}(t) + \int_{k/c_j < \@ifstar{\oldabs}{\oldabs*}{x} \leq n/c_j} x \, N_j(t,\mathrm{d} x) - t\int_{k/c_j < \@ifstar{\oldabs}{\oldabs*}{x} \leq n/c_j} x \, \rho_j(\mathrm{d}x) \right) \langle u,e_j \rangle \\
&=M^c_k(t)u+Y_k^c(t)u-(P^c_n(1)u-P^c_k(1)u)t.
\end{align*}
Applying this representation of $M^c_n(t)$ to \eqref{definition_of_V} and plugging this into \eqref{after_Ito} results in
\begin{align*}
&\normH{X^c_k(t)-X^c_n(t)}^2 \\
&= \normH{X_0}^2\1_{\Omega_n\setminus\Omega_k} + 2 \int_0^t \pairing{F(\bar{X}^c_k(s))-F(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \\
&\quad +2 \int_0^t \pairing{P^c_k(1)G^*(\bar{X}^c_k(s))-P^c_n(1)G^*(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \\
&\quad + 2 \int_0^t \left(G^*(\bar{X}^c_k(s))-G^*(\bar{X}^c_n(s))\right) \left(X^c_k(s-)-X^c_n(s-)\right) \, \mathrm{d}M^c_k(s) \\
&\quad - 2 \int_0^t G^*(\bar{X}_n^c(s)) \left(X^c_k(s-)-X^c_n(s-)\right) \, \mathrm{d}Y_k^c(s) \\
&\quad + 2 \int_0^t \scalarH{X^c_k(s-)-X^c_n(s-)}{(P^c_n(1)-P^c_k(1))G^*(\bar{X}^c_n(s))} \, \mathrm{d}s \\
&\quad + [V_{k,n}^c,V_{k,n}^c](t).
\end{align*}
We stop the processes at $\tau^c(k)$, use the fact that $Y_k^c$ has no jumps before $\tau^c(k)$ and multiply both sides by $\1_{\Omega_k}$. Then we obtain that
\begin{align} \label{after_stopping}
&\normH{X^c_k(t\wedge \tau^c(k))-X^c_n(t\wedge \tau^c(k))}^2 \1_{\Omega_k}\notag \\
&\qquad = 2 \int_0^{t\wedge \tau^c(k)} \pairing{F(\bar{X}^c_k(s))-F(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \1_{\Omega_k}\notag \\
&\qquad \quad + 2 \int_0^{t\wedge \tau^c(k)} \pairing{P^c_k(1)G^*(\bar{X}^c_k(s))-P^c_n(1)G^*(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \1_{\Omega_k}\notag \\
&\qquad \quad + 2 \int_0^{t\wedge \tau^c(k)} \left( G^*(\bar{X}^c_k(s)) - G^*(\bar{X}^c_n(s))\right) \left(X^c_k(s-)-X^c_n(s-)\right) \, \mathrm{d}M^c_k(s) \1_{\Omega_k}\notag \\
&\qquad \quad + 2 \int_0^{t\wedge \tau^c(k)} \scalarH{X^c_k(s-)-X^c_n(s-)}{(P^c_n(1)-P^c_k(1))G^*(\bar{X}^c_n(s))} \, \mathrm{d}s \1_{\Omega_k}\notag \\
&\qquad \quad + [V_{k,n}^c,V_{k,n}^c](t\wedge \tau^c(k)) \1_{\Omega_k} .
\end{align}
For each $t>0$ define
$$W^c_{k,n}(t) := \int_0^t \left( G\left(\bar{X}^c_k(s)\right) - G\left(\bar{X}^c_n(s)\right) \right)\,\mathrm{d} M^c_k(s).$$
Since $M_n^c=M_k^c$ on $[0,\tau^c(k)]$, Lemma \ref{lem_stopped_cylindrical_integral}
guarantees that $V_{k,n}^c(t\wedge \tau^c(k))=W_{k,n}^c(t\wedge \tau^c(k))$ for all $t\in [0,T]$. Thus, the angle brackets coincide and Proposition \ref{pro_angle_bracket} implies
\begin{align*}
\mathbb{E} [V_{k,n}^c,V_{k,n}^c](t\wedge \tau^c(k))
&=\mathbb{E} \left[ \langle V_{k,n}^c,V_{k,n}^c\rangle (t\wedge \tau^c(k)) \right] \\
& =\mathbb{E} \left[ \langle W_{k,n}^c,W_{k,n}^c\rangle (t\wedge \tau^c(k)) \right] \\
&= \mathbb{E} \left[ \int_0^{t\wedge \tau^c(k)} \HSnorm{\big(G(\bar{X}^c_k(s))-G(\bar{X}^c_n(s))\big)Q_k^{1/2}}^2 \, \mathrm{d}s \right].
\end{align*}
Recall for the following that the martingale property is invariant under multiplication by $\1_{\Omega_k}$, since $\Omega_k$ is $\mathcal{F}_0$--measurable. Noting
that we can replace $P_n^c(1)G^*(\bar{X}_n^c(s))$ by $P_n^c(1)G^*(X_n^c(s))$, we obtain by taking expectation in \eqref{after_stopping}
and by applying the monotonicity condition $(A2^\prime)$ that
\begin{align*}
& \mathbb{E}\left[ \normH{X^c_k(t\wedge\tau^c(k))-X^c_n(t\wedge\tau^c(k))}^2 \1_{\Omega_k} \right]\\
&= 2 \mathbb{E} \left[ \int_0^{t\wedge\tau^c(k)} \pairing{F(\bar{X}^c_k(s))-F(\bar{X}^c_n(s))}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \1_{\Omega_k} \right] \\
&\quad + 2 \mathbb{E} \left[ \int_0^{t\wedge\tau^c(k)} \pairing{P^c_k(1)\big(G^*(\bar{X}^c_k(s))-G^*(\bar{X}^c_n(s))\big)}{\bar{X}^c_k(s)-\bar{X}^c_n(s)} \, \mathrm{d}s \1_{\Omega_k} \right] \\
&\quad + \mathbb{E} \left[ \int_0^{t\wedge \tau^c(k)} \HSnorm{(G(\bar{X}^c_k(s))-G(\bar{X}^c_n(s)))Q_k^{1/2}}^2 \, \mathrm{d}s \1_{\Omega_k} \right]\\
&\le \mathbb{E} \left[ \int_0^{t\wedge \tau^c(k)} \lambda_k \normH{\bar{X}^c_k(s)-\bar{X}^c_n(s)}^2 \, \mathrm{d}s \1_{\Omega_k} \right]\\
&\leq \lambda_k \int_0^t \mathbb{E}\left[ \normH{X_k^c(s\wedge\tau^c(k))-X_n^c(s\wedge\tau^c(k))}^2 \1_{\Omega_k} \right] \, \mathrm{d}s.
\end{align*}
Applying Gronwall's inequality establishes the claim.
Step 2.
The first part enables us to define
\begin{equation}
\label{def_of_X_by_X_k}
X := X^c_k \quad \text{ and } \quad \bar{X} := \bar{X}^c_k \qquad \text{on } \{t\leq\tau^c(k)\}.
\end{equation}
This definition does not depend on the choice of $c$: indeed, if $d$ is another sequence satisfying \ref{item_existence_of_c}, then one can show, similarly to Step 1, that $X^c_k=X^d_n$ on $\{ t \leq \tau^c(k) \wedge \tau^d(n)\}$.
Since for each $k\in \mathbb{N}$ we have $\mathrm{d}t \otimes P$-almost everywhere
$$X\1_{\{t\leq\tau^c(k)\}\cap \Omega_k}
= X^c_k \1_{\{t\leq\tau^c(k)\}\cap \Omega_k}
=\bar{X}^c_k \1_{\{t\leq\tau^c(k)\}\cap \Omega_k}
= \bar{X}\1_{\{t\leq\tau^c(k)\}\cap \Omega_k},$$
we obtain $X=\bar{X}$ $\mathrm{d}t \otimes P$-almost everywhere by taking $k\to\infty$.
Step 3. We show that $(X,\bar{X})$ defined in \eqref{def_of_X_by_X_k} satisfies \eqref{integral_eq_in_def_of_variational_sol}. Note that
\begin{align}\label{eq.X-solution-on-tau}
&X(t) \1_{\{t\leq\tau^c(k)\}\cap \Omega_k}
= X^c_k(t) \1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \notag \\
& = X_0\1_{\Omega_k} + \1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^t F(\bar{X}^c_k(s)) \, \mathrm{d}s +\1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^t G(\bar{X}^c_k(s)) \, \mathrm{d}L^c_k(s) .
\end{align}
From the very definition \eqref{def_of_X_by_X_k} it follows
\begin{align}\label{eq.F-and-Xck}
\lim_{k\to\infty} \1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^t F(\bar{X}^c_k(s)) \, \mathrm{d}s
&= \lim_{k\to\infty} \1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^t F(\bar{X}(s)) \, \mathrm{d}s\notag \\
&= \int_0^t F(\bar{X}(s)) \, \mathrm{d}s.
\end{align}
The last term in \eqref{eq.X-solution-on-tau} can be rewritten as
\begin{align*}
\1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^t G(\bar{X}^c_k(s)) \, \mathrm{d}L^c_k(s)
=\1_{\{t\leq\tau^c(k)\}\cap \Omega_k} \int_0^{t\wedge \tau^c(k)} G(\bar{X}^c_k(s)) \, \mathrm{d}L^c_k(s).
\end{align*}
From Lemma \ref{lem_stopped_cylindrical_integral} and the definition of the stochastic integral with respect to $L$ after Theorem \ref{th.integration-predictable}, it follows that
\begin{align}\label{eq.G-and-Xck}
\lim_{k\to\infty}
\int_0^{t\wedge \tau^c(k)} G(\bar{X}^c_k(s)) \, \mathrm{d}L^c_k(s)
&=\lim_{k\to\infty} \int_0^t G(\bar{X}^c_k(s))\1_{\{s< \tau^c(k)\}} \, \mathrm{d}L^c_k(s)\notag \\
&=\lim_{k\to\infty} \int_0^t G(\bar{X}(s))\1_{\{s< \tau^c(k)\}} \, \mathrm{d}L^c_k(s)\notag \\
&=\int_0^t G(\bar{X}(s)) \, \mathrm{d}L(s).
\end{align}
By taking the limit $k\to\infty$ in \eqref{eq.X-solution-on-tau},
equalities \eqref{eq.F-and-Xck} and \eqref{eq.G-and-Xck} show
\begin{align*}
X(t)= X_0+\int_0^t F(\bar{X}(s))\, \mathrm{d}s + \int_0^t G(\bar{X}(s)) \, \mathrm{d}L(s),
\end{align*}
which finishes the proof of the theorem.
\end{proof}
\section{Introduction and Motivation}
\label{sec:introduction}
Harmonic analysis is an important step towards creating high-level representations of tonal music. High-level structural relationships form an essential component of music analysis, whose aim is to achieve a deep understanding of how music works. At its most basic level, harmonic analysis of music in symbolic form requires the partitioning of a musical input into segments along the time dimension, such that the notes in each segment correspond to a musical chord. This {\it chord recognition} task can often be time consuming and cognitively demanding, hence the utility of computer-based implementations. Reflecting historical trends in artificial intelligence, automatic approaches to harmonic analysis have evolved from purely grammar-based and rule-based systems \citep{winograd:jmt68,maxwell:chapter92}, to systems employing weighted rules and optimization algorithms \citep{temperley:cmj99,pardo:cmj02,scholz:ismir08,rocher:icmc09}, to data driven approaches based on supervised machine learning (ML) \citep{raphael:ismir03,radicioni:amir10}. Due to their requirements for annotated data, ML approaches have also led to the development of music analysis datasets containing a large number of manually annotated harmonic structures, such as the 60 Bach chorales introduced in \citep{radicioni:amir10}, and the 27 themes and variations of TAVERN \citep{devaney:ismir15}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{segments-small.png}
\caption{Segment-based recognition (top) vs. event-based recognition (bottom) on measures 11 and 12 from Beethoven WoO68, using note onsets and offsets to create event boundaries.}
\label{fig:segments}
\end{figure}
In this work, we consider the music to be in symbolic form, i.e. as a collection of notes specified in terms of onset, offset, pitch, and metrical position. Symbolic representations can be extracted from formats such as MIDI, kern, or MusicXML. A relatively common strategy in ML approaches to chord recognition in symbolic music is to break the musical input into a sequence of short duration spans and then train sequence tagging algorithms such as Hidden Markov Models (HMMs) to assign a chord label to each span in the sequence (bottom of Figure~\ref{fig:segments}). The spans can result from quantization using a fixed musical period such as half a measure \citep{raphael:ismir03}. Alternatively, they can be constructed from consecutive note onsets and offsets \citep{radicioni:amir10}, as we also do in this paper. Variable-length chord segments are then created by joining consecutive spans labeled with the same chord symbol (at the top in Figure~\ref{fig:segments}).
A significant drawback of these short-span tagging approaches is that they do not explicitly model candidate segments during training and inference; consequently, they cannot use segment-level features. Such features are needed, for example, to identify figuration notes (Appendix~\ref{sec:appendix-figuration}) or to help label segments that do not start with the root note. The chordal analysis system of \citet{pardo:cmj02} is an example where the assignment of chords to segments takes into account segment-based features; however, the features have pre-defined weights and it uses a processing pipeline in which segmentation is done independently of chord labeling.
In this paper, we propose a machine learning approach to chord recognition formulated under the framework of semi-Markov Conditional Random Fields (semi-CRFs). Also called segmental CRFs, this class of probabilistic graphical models can be trained to do joint segmentation and labeling of symbolic music (Section~\ref{sec:model}), using efficient Viterbi-based inference algorithms whose time complexity is linear in the length of the input. The system employs a set of chord labels (Section~\ref{sec:labels}) that correspond to the main types of tonal music chords (Appendix~\ref{sec:chords}) found in the evaluation datasets. Compared to HMMs and sequential CRFs which label the events in a sequence, segmental CRFs label candidate segments, as such they can exploit segment-level features. Correspondingly, we define a rich set of features that capture the extent to which the events in an entire segment of music are compatible with a candidate chord label (Section~\ref{sec:features}). The semi-CRF model incorporating these features is evaluated on three classical music datasets and a newly created dataset of popular music (Section~\ref{sec:datasets}). Experimental comparisons with two previous chord recognition models show that segmental CRFs obtain substantial improvements in performance on the three larger datasets, while also being competitive with the previous approaches on the smaller dataset (Section~\ref{sec:evaluation}).
\section{Semi-CRF Model for Chord Recognition}
\label{sec:model}
Since harmonic changes may occur only when notes begin or end, we first create a sorted list of all the note onsets and offsets in the input music, i.e. the list of {\it partition points} \citep{pardo:cmj02}, shown as vertical dotted lines in Figure~\ref{fig:segments}. A basic music {\it event} \citep{radicioni:amir10} is then defined as the set of pitches sounding in the time interval between two consecutive partition points. As an example, Table~\ref{tab:input-rep} provides the pitches and overall duration for each event shown in Figure~\ref{fig:segments2}. The segment number and chord label associated with each event are also included. Not shown in this table is a boolean value for each pitch indicating whether or not it is held over from the previous event. For instance, this value would be false for C5 and E5 appearing in event $e_5$, but true for C5 and E5 in event $e_6$.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{segments-small2.png}
\caption{Segment and labels (top) vs. events (bottom) for measure 12 from Beethoven WoO68.}
\label{fig:segments2}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{lllll}
\toprule
Seg. & Label & Event & Pitches & Len. \\ \midrule
$s_1$ & G7 & $e_1$ & G3, B3, D4, G5 & 1/8\\
& G7 & $e_2$ & G3, B3, D4, F5 & 1/8\\
& G7 & $e_3$ & B4, D5 & 3/16\\
& G7 & $e_4$ & B4, D5 & 1/16\\ \hline
$s_2$ & C & $e_5$ & C4, C5, E5 & 1/8\\
& C & $e_6$ & G3, C5, E5 & 1/8\\
& C & $e_7$ & E3, G4, C5, E5 & 1/8\\
& C & $e_8$ & C3, G4, C5, E5 & 1/8\\
\bottomrule
\end{tabular}
\caption{Input representation for measure 12 from Beethoven WoO68, showing the pitches and duration for each event, as well as the corresponding segment and label, where G7 stands for G:maj:add7, and C stands for C:maj.}
\label{tab:input-rep}
\end{table}
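The construction of partition points and events described above can be sketched in a few lines. The following fragment is only an illustration; the tuple representation of the notes and the function name are hypothetical:
\begin{verbatim}
# Minimal sketch (hypothetical data structures): partition points and events
# from notes given as (onset, offset, pitch) triples.
def make_events(notes):
    points = sorted({t for onset, offset, _ in notes for t in (onset, offset)})
    events = []
    for start, end in zip(points, points[1:]):
        # pitches sounding throughout [start, end); True marks held-over notes
        pitches = [(pitch, onset < start)
                   for onset, offset, pitch in notes
                   if onset <= start and offset >= end]
        if pitches:
            events.append((start, end, pitches))
    return events

notes = [(0.0, 0.5, 'C4'), (0.0, 1.0, 'E4'), (0.5, 1.0, 'G4')]
print(make_events(notes))
\end{verbatim}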
Let $\mathbf{s} = \langle s_1, s_2, ..., s_K \rangle$ denote a segmentation of the musical input $\mathbf{x}$, where a segment $s_k = \langle s_k.f, s_k.l \rangle$ is identified by the positions $s_k.f$ and $s_k.l$ of its first and last events, respectively. Let $\mathbf{y} = \langle y_1, y_2, ..., y_K \rangle$ be the vector of chord labels corresponding to the segmentation $\mathbf{s}$. A semi-Markov CRF \citep{sarawagi:nips04} defines a probability distribution over segmentations and their labels as shown in Equations~\ref{eq:semi1} and \ref{eq:semi2}. Here, the global segmentation feature vector $\mathbf{F}$ decomposes as a sum of local segment feature vectors $\mathbf{f}(s_k, y_k, y_{k-1}, \mathbf{x})$, with label $y_0$ set to a constant ``no chord'' value.
\begin{eqnarray}
P(\mathbf{s}, \mathbf{y} | \mathbf{x}, \mathbf{w}) & = & \frac{e^{{\mathbf{w}}^T{\mathbf{F}(\mathbf{s}, \mathbf{y} ,\mathbf{x})}}}{Z(\mathbf{x})} \label{eq:semi1} \\
\mathbf{F}(\mathbf{s}, \mathbf{y} ,\mathbf{x}) & = & \sum_{k = 1}^K \mathbf{f}(s_k, y_k, y_{k-1}, \mathbf{x}) \label{eq:semi2}
\end{eqnarray}
where $Z(\mathbf{x}) = \displaystyle \sum_{\mathbf{s}', \mathbf{y}'} e^{{\mathbf{w}}^T{\mathbf{F}(\mathbf{s}', \mathbf{y}' ,\mathbf{x})}}$ and $\mathbf{w}$ is a vector of parameters.
Following \citet{muis:naacl16}, for faster inference, we further restrict the local segment features to two types: {\it segment-label features} $\mathbf{f}(s_k, y_k, \mathbf{x})$ that depend on the segment and its label, and label {\it transition features} $\mathbf{g}(y_k, y_{k-1}, \mathbf{x})$ that depend on the labels of the current and previous segments. The corresponding probability distribution over segmentations is shown in Equations~\ref{eq:weak1} to \ref{eq:weak3}, which use two vectors of parameters: $\mathbf{w}$ for segment-label features and $\mathbf{u}$ for transition features.
\begin{eqnarray}
P(\mathbf{s}, \mathbf{y} | \mathbf{x}, \mathbf{w}, \mathbf{u}) \! & \! = \! & \! \frac{e^{{\mathbf{w}}^T{\mathbf{F}(\mathbf{s}, \mathbf{y} ,\mathbf{x})} + {\mathbf{u}}^T{\mathbf{G}(\mathbf{s}, \mathbf{y} ,\mathbf{x})}}}{Z(\mathbf{x})} \label{eq:weak1} \\
\mathbf{F}(\mathbf{s}, \mathbf{y} ,\mathbf{x}) \! & \! = \! & \! \sum_{k = 1}^K \mathbf{f}(s_k, y_k, \mathbf{x}) \label{eq:weak2} \\
\mathbf{G}(\mathbf{s}, \mathbf{y} ,\mathbf{x}) \! & \! = \! & \! \sum_{k = 1}^K \mathbf{g}(y_k, y_{k-1}, \mathbf{x}) \label{eq:weak3}
\end{eqnarray}
Given an arbitrary segment $s$ and a label $y$, the vector of segment-label features can be written as $\mathbf{f}(s, y, \mathbf{x}) = [f_1(s, y), ..., f_{|\mathbf{f}|}(s, y)]$, where the input $\mathbf{x}$ is left implicit in order to compress the notation. Similarly, given arbitrary labels $y$ and $y'$, the vector of label transition features can be written as $\mathbf{g}(y, y', \mathbf{x}) = [g_1(y, y'), ..., g_{|\mathbf{g}|}(y, y')]$. In Section~\ref{sec:features} we describe the set of segment-label features $f_i(s, y)$ and label transition features $g_j(y, y')$ that are used in our semi-CRF chord recognition system.
As probabilistic graphical models, semi-CRFs can be represented using factor graphs, as illustrated in Figure~\ref{fig:fg}. Factor graphs \citep{frey:ieee01} are bipartite graphs that express how a global function (e.g. $P(\mathbf{s}, \mathbf{y} | \mathbf{x}, \mathbf{w}, \mathbf{u})$) of many {\it variables} (e.g. $s_k$, $y_k$, and $\mathbf{x}$) factorizes into a product of local functions, or {\it factors}, (e.g. $f$ and $g$) defined over fewer variables.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{fg.png}
\caption{Factor graph representation of the semi-CRF.}
\label{fig:fg}
\end{figure}
Equations~\ref{eq:weak2} and~\ref{eq:weak3} show that the contribution of any given feature to the final log-likelihood score is given by summing up its value over all the segments (for local features $f$) or segment pairs (for local features $g$).
This design choice stems from two assumptions. First, we adopt the stationarity assumption, according to which the segment-label feature distribution does not change with the position in the music. Second, we use the Markov assumption, which implies that the label of a segment depends only on its boundaries and the labels of the adjacent segments. This assumption leads to the factorization of the probability distribution into a product of potentials.
Both the stationarity assumption and the Markov assumption are commonly used in ML models for structured outputs, such as linear CRFs \citep{lafferty:ml01}, semi-CRFs \citep{sarawagi:nips04}, HMMs \citep{rabiner:ieee89}, structural SVMs \citep{tsochantaridis:icml04}, or the structured perceptron \citep{collins:emnlp02} used in HMPerceptron. These assumptions lead to summing the same feature over multiple substructures in the overall output score, which makes {\it inference} and {\it learning} tractable using dynamic programming.
The {\it inference} problem for semi-CRFs refers to finding the most likely segmentation $\hat{\mathbf{s}}$ and its labeling $\hat{\mathbf{y}}$ for an input $\mathbf{x}$, given the model parameters. For the weak semi-CRF model in Equation~\ref{eq:weak1}, this corresponds to:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\hat{\mathbf{s}}, \hat{\mathbf{y}} \! & \!\!\!\! = \!\!\!\! & \! \argmax_{\mathbf{s}, \mathbf{y}} \; P(\mathbf{s}, \mathbf{y} | \mathbf{x}, \mathbf{w}, \mathbf{u}) \\
\! & \!\!\!\! = \!\!\!\! & \! \argmax_{\mathbf{s}, \mathbf{y}} \; {\mathbf{w}}^T{\mathbf{F}(\mathbf{s}, \mathbf{y} ,\mathbf{x})} + {\mathbf{u}}^T{\mathbf{G}(\mathbf{s}, \mathbf{y} ,\mathbf{x})} \\
\! & \!\!\!\! = \!\!\!\! & \! \argmax_{\mathbf{s}, \mathbf{y}} \; {\mathbf{w}}^T \!\! \sum_{k = 1}^K \mathbf{f}(s_k, y_k, \mathbf{x}) + {\mathbf{u}}^T \!\! \sum_{k = 1}^K \mathbf{g}(y_k, y_{k-1}, \mathbf{x}) \label{eq:optimization}
\end{eqnarray}
The maximum is taken over all possible labeled segmentations of the input, up to a maximum segment length. Correspondingly, $\mathbf{s}$ and $\mathbf{y}$ can be seen as ``candidate'' segmentations and ``candidate'' labelings, respectively. Their number is exponential in the length of the input, which rules out a brute-force search. However, due to the factorization into vectors of local features $f_i(s, y)$ and $g_j(y, y')$, it can be shown that the optimization problem from Equation~\ref{eq:optimization} can be solved with a semi-Markov analogue of the usual Viterbi algorithm. Let $L$ be a maximum segment length. Following \citep{sarawagi:nips04}, let $V(i, y)$ denote the largest value ${\mathbf{w}}^T{\mathbf{F}(\tilde{\mathbf{s}}, \tilde{\mathbf{y}}, \mathbf{x})} + {\mathbf{u}}^T{\mathbf{G}(\tilde{\mathbf{s}}, \tilde{\mathbf{y}} ,\mathbf{x})}$ of a partial segmentation $\tilde{\mathbf{s}}$ such that its last segment ends at position $i$ and has label $y$. Then $V(i, y)$ can be computed with the following dynamic programming recursion for $i = 1, 2, ..., |\mathbf{x}|$:
\begin{equation}
V(i,\! y) \!=\!\! \displaystyle \max_{y', 1 \leq l \leq L} V(i\!-\!l,\! y') + {\mathbf{w}}^T \mathbf{f}(\langle i\!-\!l + 1, i \rangle, y, \mathbf{x}) + {\mathbf{u}}^T{\mathbf{g}(y, y',\mathbf{x})}
\end{equation}
where the base cases are $V(0, y) = 0$ and $V(j, y) = -\infty$ if $j < 0$, and $\langle i - l + 1, i \rangle$ denotes the segment starting at position $i - l + 1$ and ending at position $i$.
Once $V(|\mathbf{x}|, y)$ is computed for all labels $y$, the best labeled segmentation can be recovered in linear time by following the path traced by $\max_y V(|\mathbf{x}|, y)$.
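A compact sketch of this recursion is shown below. It is an illustration only: the argument \texttt{max\_len} plays the role of the maximum segment length $L$, \texttt{score\_f(first, last, y)} and \texttt{score\_g(y, y\_prev)} stand for the segment score ${\mathbf{w}}^T\mathbf{f}(s, y, \mathbf{x})$ and the transition score ${\mathbf{u}}^T\mathbf{g}(y, y', \mathbf{x})$, and the experiments rely on the StatNLP implementation rather than on this fragment:
\begin{verbatim}
# Sketch of the semi-Markov Viterbi recursion (illustration only).
def semi_markov_viterbi(n, labels, max_len, score_f, score_g, start="N"):
    V = {(0, start): 0.0}
    back = {}
    for i in range(1, n + 1):
        for y in labels:
            best, arg = float("-inf"), None
            for l in range(1, min(max_len, i) + 1):
                prev_labels = [start] if i - l == 0 else labels
                for y_prev in prev_labels:
                    s = (V[(i - l, y_prev)] + score_f(i - l + 1, i, y)
                         + score_g(y, y_prev))
                    if s > best:
                        best, arg = s, (i - l, y_prev)
            V[(i, y)], back[(i, y)] = best, arg
    # recover the best labelled segmentation from the back-pointers
    i, y = n, max(labels, key=lambda lab: V[(n, lab)])
    segments = []
    while i > 0:
        j, y_prev = back[(i, y)]
        segments.append((j + 1, i, y))       # segment <j+1, i> labelled y
        i, y = j, y_prev
    return list(reversed(segments))
\end{verbatim}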
The {\it learning} problem for semi-CRFs refers to finding the model parameters that maximize the likelihood over a set of training sequences $T = \{\mathbf{x}_n, \mathbf{s}_n, \mathbf{y}_n\}_{n = 1}^N$. Usually this is done by minimizing the negative log-likelihood $-L(T;\mathbf{w}, \mathbf{u})$ and an L2 regularization term, as shown below for weak semi-CRFs:
\begin{equation}
L(T; \mathbf{w}, \mathbf{u}) \! = \!\! \sum_{n=1}^N {\mathbf{w}}^T{\mathbf{F}(\mathbf{s}_n, \mathbf{y}_n ,\mathbf{x}_n)} + {\mathbf{u}}^T{\mathbf{G}(\mathbf{s}_n, \mathbf{y}_n,\mathbf{x}_n)} - \log{Z(\mathbf{x}_n)}
\end{equation}
\begin{equation}
\hat{\mathbf{w}}, \hat{\mathbf{u}} = \argmin_{\mathbf{w}, \mathbf{u}} -L(T; \mathbf{w}, \mathbf{u}) + \frac{\lambda}{2}\left(||\mathbf{w}||^2 + ||\mathbf{u}||^2\right)
\end{equation}
This is a convex optimization problem, which is solved with the L-BFGS procedure in the StatNLP package used to implement our system. The partition function $Z(\mathbf{x})$ and the feature expectations that appear in the gradient of the objective function are computed efficiently using a dynamic programming algorithm similar to the forward-backward procedure \citep{sarawagi:nips04}.
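For completeness, the partition function can be computed with a forward recursion analogous to the Viterbi sketch above. The fragment below is again an illustration only and is not the implementation used in our experiments:
\begin{verbatim}
# Sketch of the forward recursion computing log Z(x) (illustration only).
import math

def logsumexp(values):
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def log_partition(n, labels, max_len, score_f, score_g, start="N"):
    alpha = {(0, start): 0.0}
    for i in range(1, n + 1):
        for y in labels:
            terms = []
            for l in range(1, min(max_len, i) + 1):
                prev_labels = [start] if i - l == 0 else labels
                terms += [alpha[(i - l, yp)] + score_f(i - l + 1, i, y)
                          + score_g(y, yp) for yp in prev_labels]
            alpha[(i, y)] = logsumexp(terms)
    return logsumexp([alpha[(n, y)] for y in labels])
\end{verbatim}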
\section{Chord Recognition Labels}
\label{sec:labels}
A {\it chord} is a group of notes that form a cohesive harmonic unit to the listener when sounding simultaneously \citep{aldwell:book11}. As explained in Appendix~\ref{sec:chords}, we design our system to handle the following types of chords: {\it triads}, {\it augmented 6th chords}, {\it suspended chords}, and {\it power chords}. The chord labels used in previous chord recognition research range from coarse grained labels that indicate only the chord root \citep{temperley:cmj99} to fine grained labels that capture mode, inversions, added and missing notes \citep{harte:phd10}, and even chord function \citep{devaney:ismir15}. Here we follow the middle ground proposed by \citet{radicioni:amir10} and define a core set of labels for triads that encode the chord root (12 pitch classes), the mode (major, minor, diminished), and the added note (none, fourth, sixth, seventh), for a total of 144 different labels. For example, the label {\it C-major-none} for a simple C major triad corresponds to the combination of a root of {\it C} with a mode of {\it major} and no added note. This is different from the label {\it C-major-seventh} for a C major seventh chord, which corresponds to the combination of a root of {\it C} with a mode of {\it major} and an added note of {\it seventh}. Note that there is only one generic type of added seventh note, irrespective of whether the interval is a major, minor, or diminished seventh, which means that a C major seventh chord and a C dominant seventh chord are mapped to the same label. However, once the system recognizes a chord with an added seventh, determining whether it is a major, minor, or diminished seventh can be done accurately in a simple post-processing step: determine if the chord contains a non figuration note (defined in Appendix~\ref{sec:appendix-figuration}) that is 11, 10, or 9 half steps from the root, respectively, inverted or not, modulo 12. Once the type of the seventh interval is determined, it is straightforward to determine the type of seventh chord (dominant, major, minor, minor-major, fully diminished, or half-diminished) based on the mode of the chord (major, minor, or diminished).
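This post-processing step can be written down directly. The sketch below assumes that figuration notes have already been removed and that notes are given as pitch classes 0--11; the helper name and data layout are hypothetical:
\begin{verbatim}
# Sketch of the post-processing step for added-seventh chords.
SEVENTH_KIND = {11: "major", 10: "minor", 9: "diminished"}
SEVENTH_CHORD = {("major", "major"): "major seventh",
                 ("major", "minor"): "dominant seventh",
                 ("minor", "minor"): "minor seventh",
                 ("minor", "major"): "minor-major seventh",
                 ("diminished", "diminished"): "fully diminished seventh",
                 ("diminished", "minor"): "half-diminished seventh"}

def seventh_chord_type(root, pitch_classes, mode):
    for pc in pitch_classes:
        kind = SEVENTH_KIND.get((pc - root) % 12)  # inverted or not, modulo 12
        if kind is not None:
            return SEVENTH_CHORD.get((mode, kind))
    return None

print(seventh_chord_type(0, [0, 4, 7, 10], "major"))  # -> 'dominant seventh'
\end{verbatim}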
Augmented sixth chords are modeled through a set of 36 labels that capture the lowest note (12 pitch classes) and the 3 types (Appendix~\ref{sec:aug6}). Similarly, suspended and power chords are modeled through a set of 48 labels that capture the root note (12 pitch classes) and the 4 types (Appendix~\ref{sec:suspow}).
Because the labels do not encode for function, the model does not require knowing the key in which the input was written. While the number of labels may seem large, the number of parameters in our model is largely independent of the number of labels. This is because we design the chord recognition features (Section~\ref{sec:features}) to not test for the chord root, which also enables the system to recognize chords that were not seen during training. The decision to not use the key context was partly motivated by the fact that 3 of the 4 datasets we used for experimental evaluation do not have functional annotations (see Section~\ref{sec:datasets}). Additionally, complete key annotation can be difficult to perform, both manually and automatically. Key changes occur gradually, thus making it difficult to determine the exact location where one key ends and another begins \citep{papadopoulos:dafx}. This makes locating modulations and tonicizations difficult and also hard to evaluate \citep{gomez:phd06}. At the same time, we recognize that harmonic analysis is not complete without functional analysis. Functional analysis features could also benefit the basic chord recognition task described in this paper. In particular, the chord transition features that we define in Appendix~\ref{sec:chord-bigrams} depend on the absolute distance in half steps between the roots of the chords. However, a V-I transition has a different distribution than a I-IV transition, even though the root distance is the same. Chord transition distributions also differ between minor and major keys. As such, using key context could further improve chord recognition.
\section{Chord Recognition Features}
\label{sec:features}
The semi-CRF model uses five major types of features, as described in detail in Appendix~\ref{sec:appendix-features}. Segment purity features compute the percentage of segment notes that belong to a given chord (Appendix~\ref{sec:purity}). We include these on the grounds that segments with a higher purity with respect to a chord are more likely to be labeled with that chord. Chord coverage features determine if each note in a given chord appears at least once in the segment (Appendix~\ref{sec:coverage}). Similar to segment purity, if the segment covers a higher percentage of the chord's notes, it is more likely to be labeled with that chord. Bass features determine which note of a given chord appears as the bass in the segment (Appendix~\ref{sec:bass}). For a correctly labeled segment, its bass note often matches the root of its chord label. If the bass note instead matches the chord's third or fifth, or is an added dissonance, this may indicate that the chord $y$ is inverted or incorrect. Chord bigram features capture chord transition information (Appendix~\ref{sec:chord-bigrams}). These features are useful in that the arrangement of chords in chord progressions is an important component of {\it harmonic syntax}. Finally, we include metrical accent features for chord changes, as chord segments are more likely to begin on accented beats (Appendix~\ref{sec:chord-accent}).
\section{Chord Recognition Datasets}
\label{sec:datasets}
For evaluation, we used four chord recognition datasets:
\begin{enumerate}
\item BaCh: this is the Bach Choral Harmony Dataset, a corpus of 60 four-part Bach chorales that contains 5,664 events and 3,090 segments in total \citep{radicioni:amir10}.
\item TAVERN: this is a corpus of 27 complete sets of themes and variations for piano, composed by Mozart and Beethoven. It consists of 63,876 events and 12,802 segments overall \citep{devaney:ismir15}.
\item KP Corpus: the Kostka-Payne corpus is a dataset of 46 excerpts compiled by Bryan Pardo from Kostka and Payne's music theory textbook. It contains 3,888 events and 911 segments~\citep{kostka:book84}.
\item Rock: this is a corpus of 59 pop and rock songs that we compiled from Hal Leonard's {\it The Best Rock Songs Ever (Easy Piano)} songbook. It is 25,621 events and 4,221 segments in length.
\end{enumerate}
\subsection{The Bach Chorale (BaCh) Dataset}
The BaCh corpus has been annotated by a human expert with chord labels, using the set of triad labels described in Section~\ref{sec:labels}. Of the 144 possible labels, 102 appear in the dataset and of these only 68 appear 5 times or more. Some of the chord labels used in the manual annotation are enharmonic, e.g. C-sharp major and D-flat major, or D-sharp major and E-flat major. Reliably producing one of two enharmonic chords cannot be expected from a system that is agnostic of the key context. Therefore, we normalize the chord labels and for each mode we define a set of 12 canonical roots, one for each scale degree. When two enharmonic chords are available for a given scale degree, we selected the one with the fewest sharps or flats in the corresponding key signature. Consequently, for the major mode we use the canonical root set \{C, Db, D, Eb, E, F, Gb, G, Ab, A, Bb, B\}, whereas for the minor and diminished modes we used the root set \{C, C\#, D, D\#, E, F, F\#, G, G\#, A, Bb, B\}. Thus, if a chord is manually labeled as C-sharp major, the label is automatically changed to the enharmonic D-flat major. The actual chord notes used in the music are left unchanged. Whether they are spelled with sharps or flats is immaterial, as long as they are enharmonic with the root, third, fifth, or added note of the labeled chord. After performing enharmonic normalization on the chords in the dataset, 90 labels remain.
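The normalization amounts to a table lookup on pitch classes, as in the following sketch (the pitch-class dictionary covers only the common root spellings and is given for illustration):
\begin{verbatim}
# Sketch of the enharmonic normalization of chord roots (illustration only).
MAJOR_ROOTS = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']
MINOR_DIM_ROOTS = ['C', 'C#', 'D', 'D#', 'E', 'F',
                   'F#', 'G', 'G#', 'A', 'Bb', 'B']
PITCH_CLASS = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4,
               'F': 5, 'F#': 6, 'Gb': 6, 'G': 7, 'G#': 8, 'Ab': 8,
               'A': 9, 'A#': 10, 'Bb': 10, 'B': 11}

def normalize_root(root, mode):
    table = MAJOR_ROOTS if mode == 'major' else MINOR_DIM_ROOTS
    return table[PITCH_CLASS[root]]

print(normalize_root('C#', 'major'))   # -> 'Db', as in the example above
\end{verbatim}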
\subsection{The TAVERN Dataset}
The TAVERN dataset\endnote{Link to TAVERN: \\ \url{https://github.com/jcdevaney/TAVERN}} currently contains 17 works by Beethoven (181 variations) and 10 by Mozart (100 variations). The themes and variations are divided into a total of 1,060 phrases, 939 in major and 121 in minor. The pieces have two levels of segmentations: chords and phrases. The chords are annotated with Roman numerals, using the Humdrum representation for functional harmony\endnote{Link to Humdrum: \\ \url{http://www.humdrum.org/Humdrum/representations/harm.rep.html}}. When finished, each phrase will have annotations from two different experts, with a third expert adjudicating cases of disagreement between the two. After adjudication, a unique annotation of each phrase is created and joined with the note data into a combined file encoded in standard **kern format. However, many pieces do not currently have the second annotation or the adjudicated version. Consequently, we only used the first annotation for each of the 27 sets. Furthermore, since our chord recognition approach is key agnostic, we developed a script that automatically translated the Roman numeral notation into the key-independent canonical set of labels used in BaCh. Because the TAVERN annotation does not mark added fourth notes, the only added chords that were generated by the translation script were those containing sixths and sevenths. This results in a set of 108 possible labels, of which 69 appear in the dataset.
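The core of this translation, mapping a Roman numeral in a given key to a key-independent root and mode, can be sketched as below. The fragment is deliberately simplified: it ignores applied chords, inversions and chromatic alterations, and the degree tables follow the major and natural minor scales:
\begin{verbatim}
# Simplified sketch: Roman numeral in a given key -> root pitch class and mode.
MAJOR_DEGREES = {'I': 0, 'II': 2, 'III': 4, 'IV': 5, 'V': 7, 'VI': 9, 'VII': 11}
MINOR_DEGREES = {'I': 0, 'II': 2, 'III': 3, 'IV': 5, 'V': 7, 'VI': 8, 'VII': 10}

def numeral_to_root(numeral, tonic_pitch_class, key_mode):
    degrees = MAJOR_DEGREES if key_mode == 'major' else MINOR_DEGREES
    mode = 'minor' if numeral.islower() else 'major'   # case encodes the mode
    return (tonic_pitch_class + degrees[numeral.upper()]) % 12, mode

print(numeral_to_root('V', 0, 'major'))    # dominant in C major -> (7, 'major')
print(numeral_to_root('ii', 7, 'major'))   # supertonic in G major -> (9, 'minor')
\end{verbatim}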
\subsection{The Kostka and Payne Corpus}
The Kostka-Payne (KP) corpus\endnote{Link to Kostka-Payne corpus: \\ \url{http://www.cs.northwestern.edu/~pardo/kpcorpus.zip}} does not contain chords with added fourth or sixth notes. However, it includes fine-grained chord types that are outside of the label set of triads described in Section~\ref{sec:labels}, such as fully and half-diminished seventh chords, dominant seventh chords, and dominant seventh flat ninth chords. We map these seventh chord variants to the generic added seventh chords, as discussed in Section~\ref{sec:labels}. Chords with ninth intervals are mapped to the corresponding chord without the ninth in our label set. The KP Corpus also contains the three types of augmented 6th chords introduced in Appendix~\ref{sec:aug6}.
Thus, by extending our chord set to include augmented 6th labels, there are 12 roots $\times$ 3 triad modes $\times$ 2 added notes + 12 bass notes $\times$ 3 aug6 modes = 108 possible labels overall. Of these, 76 appear in the dataset.
A number of MIDI files in the KP corpus contain unlabeled sections at the beginning of the song. These sections also appear as unlabeled in the original Kostka-Payne textbook. We omitted these sections from our evaluation, and also did not include them in the KP Corpus event and segment counts. Bryan Pardo's original MIDI files for the KP Corpus also contain several missing chords, as well as chord labels that are shifted from their true onsets. We used chord and beat list files sent to us by David Temperley to correct these mistakes.
\subsection{The Rock Dataset}
\label{sec:rock-dataset}
To evaluate the system's ability to recognize chords in a different genre, we compiled a corpus of 59 pop and rock songs from Hal Leonard's {\it The Best Rock Songs Ever (Easy Piano)} songbook. Like the KP Corpus, the Rock dataset contains chords with added ninths---including major ninth chords and dominant seventh chords with a sharpened ninth---as well as inverted chords. We omit the ninth and inversion numbers in these cases. Unique from the other datasets, the Rock dataset also possesses suspended and power chords. We extend our chord set to include these, adding suspended second, suspended fourth, dominant seventh suspended fourth, and power chords. We use the major mode canonical root set for suspended second and power chords and the minor canonical root set for suspended fourth chords, as this configuration produces the least number of accidentals. In all, there are 12 roots $\times$ 3 triad modes $\times$ 4 added notes + 12 roots $\times$ 4 sus and pow modes = 192 possible labels, with only 48 appearing in the dataset.
\def\arraystretch{1.2}
\begin{table*}[t]
\small
\centering
\begin{tabular}{l r r c | c c | c c | c c | c c | c c |}
\cline{5-14}
& & & & \multicolumn{4}{|c}{Full chord evaluation} & \multicolumn{6}{|c|}{Root-level evaluation}\\ \cline{2-14}
& \multicolumn{3}{|c|}{Statistics} & \multicolumn{2}{c|}{semi-CRF} & \multicolumn{2}{c|}{HMPerceptron} & \multicolumn{2}{c|}{semi-CRF} & \multicolumn{2}{c|}{HMPerceptron} & \multicolumn{2}{c|}{Melisma} \\ \hline
\multicolumn{1}{|l|}{Dataset} & Events & Seg.'s & Labels & Acc$_E$ & F$_S$ & Acc$_E$ & F$_S$ & Acc$_E$ & F$_S$ & Acc$_E$ & F$_S$ & Acc$_E$ & F$_S$\\ \hline
\multicolumn{1}{|l|}{BaCh} & 5,664 & 3,090 & 90 & {\bf 83.2} & {\bf 77.5} & 77.2 & 69.9 & {\bf 88.9 } & {\bf 84.2 } & 84.8 & 77.0 & 84.3 & 74.7\\
\multicolumn{1}{|l|}{TAVERN} & 63,876 & 12,802 & 69 & {\bf 78.0} & {\bf 64.0} & 57.0 & 22.5 & {\bf 86.0} & {\bf 71.4} & 69.2 & 33.2 & 76.7 & 41.5\\
\multicolumn{1}{|l|}{KPCorpus} & 3,888 & 911 & 76 & {\bf 73.0} & {\bf 53.0} & 72.9 & 45.4 & 79.3 & 59.0 & 79.0 & 51.9 & {\bf 81.9} & {\bf 62.2}\\
\multicolumn{1}{|l|}{Rock} & 25,621 & 4,221 & 48 & {\bf 70.1} & {\bf 55.9} & 61.3 & 34.6 & {\bf 86.1} & {\bf 65.1} & 80.7 & 42.9 & 77.9 & 36.3\\
\hline
\end{tabular}
\caption{Dataset statistics and summary of results (event-level accuracy Acc$_E$ and segment-level F-measure F$_S$).}
\label{tab:sum-chord}
\end{table*}
\def\arraystretch{1}
Similar to the KP Corpus, unlabeled segments occur at the beginning of some songs, which we omit from evaluation. Additionally, the Rock dataset uses an N.C. (i.e. no chord) label for some segments within songs where the chord is unclear. We broke songs containing this label into subsections consisting of the segments occurring before and after each N.C. segment, discarding subsections less than three measures long.
To create the Rock dataset, we converted printed sheet music to MusicXML files using the optical music recognition (OMR) software PhotoScore\endnote{Link to PhotoScore: \\ \url{http://www.neuratron.com/photoscore.htm}}. We noticed in the process of making the dataset that some of the originally annotated labels were incorrect. For instance, some segments with added note labels were missing the added note, while other segments were missing the root or were labeled with an incorrect mode. We automatically detected these cases and corrected each label by hand, considering context and genre-specific theory. We also omitted two songs (`Takin' Care of Business' and `I Love Rock N' Roll') from the 61 songs in the original Hal Leonard songbook, the former because of its atonality and the latter because of a high percentage of mistakes in the original labels.
\section{Experimental Evaluation}
\label{sec:evaluation}
We implemented the semi-Markov CRF chord recognition system using a multi-threaded package\endnote{Link to StatNLP: \\ \url{http://statnlp.org/research/ie/}} that has been previously used for noun-phrase chunking of informal text \citep{muis:naacl16}. The following sections describe the experimental results obtained on the four datasets from Section~\ref{sec:datasets} for: our semi-CRF system; Radicioni and Esposito's perceptron-trained HMM system, HMPerceptron; and Temperley's computational music system, Melisma Music Analyzer\endnote{Link to David Temperley's Melisma Music Analyzer: \\ \url{http://www.link.cs.cmu.edu/melisma/}}.
When interpreting these results, it is important to consider a number of differences among the three systems:
\begin{itemize}
\item HMPerceptron and semi-CRF are data driven, therefore their performance depends on the number of training examples available. Both approaches are agnostic of music theoretic principles such as harmony changing primarily on strong metric positions, however they can learn such tendencies to the extent they are present in the training data.
\item Compared to HMPerceptron, semi-CRFs can use segment-level features. Besides this conceptual difference, the semi-CRF system described here uses a much larger number of features than the HMPerceptron system, which by itself can lead to better performance but may also require more training examples.
\item Both Melisma and HMPerceptron use metrical accents automatically induced by Melisma, whereas semi-CRF uses the Music21 accents derived from the notated meter. The more accurate notated meter could favor the semi-CRF system, although results in Section~\ref{sec:bach-eval} show that, at least on BaCh, HMPerceptron does not benefit from using the notated meter.
\end{itemize}
Table~\ref{tab:sum-chord} shows a summary of the full chord and root-level experimental results provided in this section. Two overall types of measures are used to evaluate a system's performance on a dataset: event-level accuracy (Acc$_E$) and segment-level F-measure (F$_S$). Acc$_E$ simply refers to the percentage of events for which the system predicts the correct label out of the total number of events in the dataset. Segment-level F-measure is computed based on precision and recall, two evaluation measures commonly used in information retrieval \citep{baeza-yates:book99}, as follows:
\begin{itemize}
\item Precision (P$_S$) is the percentage of segments predicted correctly by the system out of the total number of segments that it predicts (correctly or incorrectly) for all songs in the dataset.
\item Recall (R$_S$) is the percentage of segments predicted correctly out of the total number of segments annotated in the original score for all songs in the dataset.
\item F-Measure (F$_S$) is the harmonic mean between P$_S$ and R$_S$, i.e. $F_S = 2 P_S R_S / (P_S + R_S)$.
\end{itemize}
Note that a predicted segment is considered correct if and only if both its boundaries and its label match those of a true segment.
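To make these definitions concrete, the following Python sketch computes event-level accuracy and segment-level precision, recall, and F-measure; the segment representation (start event, end event, label) and the function names are illustrative rather than taken from our implementation.
\begin{verbatim}
# A predicted segment counts as correct only if both boundaries and the
# label match a true segment exactly.
def event_accuracy(pred_labels, true_labels):
    assert len(pred_labels) == len(true_labels)
    hits = sum(p == t for p, t in zip(pred_labels, true_labels))
    return hits / len(true_labels)

def segment_prf(pred_segments, true_segments):
    correct = len(set(pred_segments) & set(true_segments))
    p = correct / len(pred_segments) if pred_segments else 0.0
    r = correct / len(true_segments) if true_segments else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f
\end{verbatim}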
\subsection{BaCh Evaluation}
\label{sec:bach-eval}
We evaluated the semi-CRF model on BaCh using 10-fold cross validation: the 60 Bach chorales were randomly split into a set of 10 folds, and each fold was used as test data, with the other nine folds being used for training. We then evaluated HMPerceptron using the same randomly generated folds to enable comparison with our system. However, we noticed that the performance of HMPerceptron could vary significantly between two different random partitions of the data into folds. Therefore, we repeated the 10-fold cross validation experiment 10 times, each time shuffling the 60 Bach chorales and partitioning them into 10 folds. For each experiment, the test results from the 10 folds were pooled together and one value was computed for each performance measure (accuracy, precision, recall, and F-measure). The overall performance measures for the two systems were then computed by averaging over the 10 values (one from each experiment). The sample standard deviation for each performance measure was also computed over the same 10 values.
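A minimal Python sketch of this repeated cross-validation protocol is given below; the \texttt{train()}, \texttt{predict()}, and \texttt{evaluate()} functions are placeholders for the actual chord recognition system and the evaluation measures described above.
\begin{verbatim}
import random
import statistics

# Placeholder protocol: n_repeats repetitions of n_folds-fold cross
# validation, pooling the test predictions within each repetition and
# reporting mean and sample standard deviation across repetitions.
def repeated_cv(chorales, n_folds=10, n_repeats=10, seed=0):
    rng = random.Random(seed)
    per_experiment = []
    for _ in range(n_repeats):
        shuffled = chorales[:]
        rng.shuffle(shuffled)
        folds = [shuffled[i::n_folds] for i in range(n_folds)]
        pooled_pred, pooled_true = [], []
        for i, test_fold in enumerate(folds):
            train_songs = [s for j, f in enumerate(folds)
                           if j != i for s in f]
            model = train(train_songs)
            for song in test_fold:
                pooled_pred.append(predict(model, song))
                pooled_true.append(song)
        per_experiment.append(evaluate(pooled_pred, pooled_true))
    return statistics.mean(per_experiment), statistics.stdev(per_experiment)
\end{verbatim}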
For semi-CRF, we computed the frequency of occurrence of each feature in the training data, using only the true segment boundaries and their labels. To speed up training and reduce overfitting, we only used features whose counts were at least 5. The performance measures were computed by averaging the results from the 10 test folds for each of the fold sets. Table~\ref{tab:bach-chord} shows the averaged event-level and segment-level performance of the semi-CRF model, together with two versions of the HMPerceptron: HMPerceptron$_1$, for which we do enharmonic normalization both on training and test data, similar to the normalization done for semi-CRF; and HMPerceptron$_2$, which is the original system from \citep{radicioni:amir10} that does enharmonic normalization only on test data.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{BaCh: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 83.2} & {\bf 79.4} & {\bf 75.8} & {\bf 77.5}\\[-4pt]
& \multicolumn{1}{r}{\scriptsize 0.2} & \multicolumn{1}{r}{\scriptsize 0.2} & \multicolumn{1}{r}{\scriptsize 0.2} & \multicolumn{1}{r}{\scriptsize 0.2} \\
HMPerceptron$_1$ & 77.2 & 71.2 & 68.8 & 69.9 \\[-4pt]
& \multicolumn{1}{r}{\scriptsize 2.1} & \multicolumn{1}{r}{\scriptsize 2.0} & \multicolumn{1}{r}{\scriptsize 2.2} & \multicolumn{1}{r}{\scriptsize 1.8} \\
HMPerceptron$_2$ & 77.0 & 71.0 & 68.5 & 69.7 \\[-4pt]
& \multicolumn{1}{r}{\scriptsize 2.1} & \multicolumn{1}{r}{\scriptsize 2.0} & \multicolumn{1}{r}{\scriptsize 2.3} & \multicolumn{1}{r}{\scriptsize 1.8} \\
\bottomrule
\end{tabular}
\caption{Comparative results (\%) and standard deviations on the BaCh dataset, using Event-level {\bf accuracy } (Acc$_E$) and Segment-level {\bf precision} (P$_S$), {\bf recall }(R$_S$), and {\bf F-measure } (F$_S$).}
\label{tab:bach-chord}
\end{table}
The semi-CRF model achieves a 6.2\% improvement in event-level accuracy over the original model HMPerceptron$_2$, which corresponds to a 27.0\% relative error reduction\footnote{$27\% = (83.2 - 77.0)/(100 - 77.0)$}. The improvement in accuracy over HMPerceptron$_1$ is statistically significant at an averaged $p$-value of 0.001, using a one-tailed Welch's t-test over the sample of 60 chorale results for each of the 10 fold sets. The improvement in segment-level performance is even more substantial, with a 7.8\% absolute improvement in F-measure over the original HMPerceptron$_2$ model, and a 7.6\% improvement in F-measure over the HMPerceptron$_1$ version, which is statistically significant at an averaged $p$-value of 0.002, using a one-tailed Welch's t-test. The standard deviation values computed for both event-level accuracy and F-Measure are about one order of magnitude smaller for semi-CRF than for HMPerceptron, demonstrating that the semi-CRF is also more stable than the HMPerceptron. As HMPerceptron$_1$ outperforms HMPerceptron$_2$ in both event and segment-level accuracies, we will use HMPerceptron$_1$ for the remaining evaluations and will simply refer to it as HMPerceptron.
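For reference, the following Python sketch shows how the relative error reduction and the one-tailed Welch's t-test used above can be computed; it assumes SciPy's two-tailed \texttt{ttest\_ind} and halves the resulting $p$-value when the $t$ statistic has the expected sign.
\begin{verbatim}
from scipy import stats

def relative_error_reduction(acc_new, acc_old):
    # e.g. relative_error_reduction(83.2, 77.0) is roughly 0.27
    return (acc_new - acc_old) / (100.0 - acc_old)

def one_tailed_welch(scores_a, scores_b):
    # p-value for the hypothesis that system A scores higher than system B
    t, p_two = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return p_two / 2 if t > 0 else 1.0 - p_two / 2
\end{verbatim}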
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{BaCh: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 88.9} & {\bf 85.4} & {\bf 83.0} & {\bf 84.2}\\
HMPerceptron & 84.8 & 78.0 & 76.2 & 77.0 \\%[-4pt]
Melisma & 84.3 & 73.2 & 76.3 & 74.7 \\
\bottomrule
\end{tabular}
\caption{Root only results (\%) on the BaCh dataset, using Event-level {\bf accuracy } (Acc$_E$) and Segment-level {\bf precision } (P$_S$), {\bf recall } (R$_S$), and {\bf F-measure } (F$_S$).}
\label{tab:bach-root}
\end{table}
We also evaluated performance in terms of predicting the correct root of the chord, e.g. if the true chord label were C:maj, a predicted chord of C:maj:add7 would still be considered correct, because it has the same root as the correct label. We performed this evaluation for semi-CRF, HMPerceptron, and the harmony component of Temperley's Melisma. Results show that semi-CRF improves upon the event-level accuracy of HMPerceptron by 4.1\%, producing a relative error reduction of 27.0\%, and that of Melisma by 4.6\%. Semi-CRF also achieves an F-measure that is 7.2\% higher than HMPerceptron and 9.5\% higher than Melisma. These improvements are statistically significant with a $p$-value of 0.01 using a one-tailed Welch's t-test.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{BaCh: Metrical accent evaluation of semi-CRF}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
With accent & {\bf 83.6} & {\bf 79.6} & {\bf 75.9} & {\bf 77.6}\\
Without accent & 77.7 & 74.8 & 68.0 & 71.2\\
\bottomrule
\end{tabular}
\caption{Full chord Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the BaCh dataset, with and without metrical accent features.}
\label{tab:bach-accent}
\end{table}
Metrical accent is important for harmonic analysis: chord changes tend to happen in strong metrical positions; figuration such as passing and neighboring tones appears in metrically weak positions, whereas suspensions appear on metrically strong beats. We empirically verified the importance of metrical accent by evaluating the semi-CRF model on a random fold set from the BaCh corpus with and without all accent-based features. The results from Table~\ref{tab:bach-accent} show a substantial decrease in accuracy when the accent-based features are removed from the system.
Finally, we ran an evaluation of HMPerceptron on a random fold set from BaCh in two scenarios: HMPerceptron with Melisma metrical accent and HMPerceptron with Music21 accent. The results did not show a significant difference: with Melisma accent the event accuracy was 79.8\% for an F-measure of 70.2\%, whereas with Music21 accent the event accuracy was 79.8\% for an F-measure of 70.3\%. This negligible difference is likely due to the fact that HMPerceptron uses only coarse-grained accent information, i.e. whether a position is accented (Melisma accent 3 or more) or not accented (Melisma accent less than 3).
\subsubsection{BaCh Error Analysis}
Error analysis revealed wrong predictions being made on chords that contained dissonances that spanned the duration of the entire segment (e.g. a second above the root of the annotated chord), likely due to an insufficient number of such examples during training. Manual inspection also revealed a non-trivial number of cases in which we disagreed with the manually annotated chords, e.g. some chord labels were clear mistakes, as they did not contain any of the notes in the chord. This further illustrates the necessity of building music analysis datasets that are annotated by multiple experts, with adjudication steps akin to the ones followed by TAVERN.
\subsection{TAVERN Evaluation}
\label{sec:tavern-eval}
To evaluate on the TAVERN corpus, we created a fixed training-test split: 6 Beethoven sets ({\it B063}, {\it B064}, {\it B065}, {\it B066}, {\it B068}, {\it B069}) and 4 Mozart sets ({\it K025}, {\it K179}, {\it K265}, {\it K353}) were used for testing, while the remaining 11 Beethoven sets and 6 Mozart sets were used for training. All sets were normalized enharmonically before being used for training or testing. Table~\ref{tab:tavern-chord} shows the event-level and segment-level performance of the semi-CRF and HMPerceptron model on the TAVERN dataset.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{TAVERN: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 78.0} & {\bf 67.3} & {\bf 60.9} & {\bf 64.0} \\
HMPerceptron & 57.0 & 24.5 & 20.8 & 22.5\\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the TAVERN dataset.}
\label{tab:tavern-chord}
\end{table}
As shown in Table~\ref{tab:tavern-chord}, semi-CRF outperforms HMPerceptron by 21.0\% for event-level chord evaluation and by 41.5\% in terms of chord-level F-measure. Root only evaluations provided in Table~\ref{tab:tavern-root} reveal that semi-CRF improves upon HMPerceptron's event-level root accuracy by 16.8\% and Melisma's event accuracy by 9.3\%. Semi-CRF also produces a segment-level F-measure value that is 38.2\% higher than that of HMPerceptron and 29.9\% higher than that of Melisma. These improvements are statistically significant with a $p$-value of 0.01 using a one-tailed Welch's t-test.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{TAVERN: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 86.0} & {\bf 74.6} & {\bf 68.4} & {\bf 71.4} \\
HMPerceptron & 69.2 & 38.2 & 29.4 & 33.2 \\
Melisma & 76.7 & 42.3 & 40.7 & 41.5 \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the TAVERN dataset.}
\label{tab:tavern-root}
\end{table}
\subsubsection{TAVERN Error Analysis}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{error-analysis.png}
\caption{Semi-CRF correctly predicts A:maj7 (top) for the first beat of measure 55 from Mozart K025, while HMPtron predicts C\#:dim (bottom).}
\label{fig:error-analysis}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{error-analysis2.png}
\caption{Semi-CRF correctly predicts C:maj (top) for all of measure 280 from Mozart K179, while HMPtron predicts E:min (bottom) for the first beat and C:maj for the other two beats (bottom).}
\label{fig:error-analysis2}
\end{figure}
The results in Tables~\ref{tab:bach-chord} and~\ref{tab:tavern-chord} show that chord recognition is substantially more difficult in the TAVERN dataset than in BaCh. The comparatively lower performance on TAVERN is likely due to the substantially larger number of figurations and higher rhythmic diversity of the variations compared to the easier, mostly note-for-note texture of the chorales. Error analysis on TAVERN revealed many segments where the first event did not contain the root of the chord, such as in Figures~\ref{fig:error-analysis} and~\ref{fig:error-analysis2}. For such segments, HMPerceptron incorrectly assigned chord labels whose root matched the bass of this first event. Since a single wrongly labeled event invalidates the entire segment, this can explain the larger discrepancy between the event-level accuracy and the segment-level performance. In contrast, semi-CRF assigned the correct labels in these cases, likely due to its ability to exploit context through segment-level features, such as the chord root coverage feature $f_4$ and its duration-weighted version $f_{11}$. In the case of Figure~\ref{fig:error-analysis}, C\# appears in the bass of the first beat of the measure and HMPerceptron incorrectly predicts a segment with label C\#:dim for this beat. In contrast, semi-CRF correctly predicts the label A:maj7 for this segment. In Figure~\ref{fig:error-analysis2}, semi-CRF correctly predicts a C:maj segment that lasts for the entirety of the measure, while HMPerceptron predicts an E:min segment for the first beat, as E appears doubled in the bass here.
\subsection{KP Corpus Evaluation}
\label{sec:kpcorpus-eval}
To evaluate on the full KP Corpus dataset, we split the songs into 11 folds. In this configuration, 9 folds contain 4 songs each, while the remaining 2 folds contain 5 songs. We then created two versions of semi-CRF: the original system without augmented 6th chord features (semi-CRF$_1$) and a system with augmented 6th features (semi-CRF$_2$). We tested both versions on all 46 songs, as shown in Table~\ref{tab:kpcorpus1-chord}. We could not perform the same evaluation on HMPerceptron because it was not designed to handle augmented 6th chords.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{KP Corpus 46 songs: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF$_1$ & 72.0 & 59.0 & 49.2 & 53.5\\
semi-CRF$_2$ & {\bf 73.4} & {\bf 59.6} & {\bf 50.1} & {\bf 54.3} \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the KP Corpus dataset.}
\label{tab:kpcorpus1-chord}
\end{table}
The results in Table~\ref{tab:kpcorpus1-chord} demonstrate the utility of adding augmented 6th chord features to our system, as semi-CRF$_2$ outperforms semi-CRF$_1$ on all measures.
We will use semi-CRF$_2$ for the rest of the evaluations in this section, simply calling it semi-CRF.
We additionally perform root only evaluation on the full dataset for semi-CRF and Melisma. We ignore events that belong to the true augmented 6th chord segments when computing the root accuracies for both systems, as augmented 6th chords technically do not contain a root note. As shown in Table~\ref{tab:kpcorpus1-root}, Melisma is only marginally better than semi-CRF in terms of event-level root accuracy; however, it has a segment-level F-measure that is 1.1\% better.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{KP Corpus 46 songs: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & 80.7 & {\bf 66.3} & 56.2 & 60.8 \\
Melisma & {\bf 80.9} & 60.6 & {\bf 63.3 } & {\bf 61.9} \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the KP Corpus dataset.}
\label{tab:kpcorpus1-root}
\end{table}
To enable comparison with HMPerceptron, we also evaluate all systems on the 36 songs that do not contain augmented 6th chords. Because of the reduced number of songs available for training, we used leave-one-out evaluation for both semi-CRF and HMPerceptron. Table~\ref{tab:kpcorpus2-chord} shows that semi-CRF obtains a marginal improvement in chord event accuracy and a more substantial 7.6\% improvement in segment-level F-measure in comparison with HMPerceptron. The comparative results in Table~\ref{tab:kpcorpus2-root} show that Melisma outperforms both machine learning systems for root only evaluation. Nevertheless, the semi-CRF is still competitive with Melisma in terms of both event-level accuracy and segment-level F-measure.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{KP Corpus 36 songs: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 73.0} & {\bf 55.6} & {\bf 50.7} & {\bf 53.0} \\
HMPerceptron & 72.9 & 48.2 & 43.6 & 45.4\\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the KP Corpus dataset.}
\label{tab:kpcorpus2-chord}
\end{table}
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{KP Corpus 36 songs: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & 79.3 & {\bf 61.8} & 56.4 & 59.0 \\
HMPerceptron & 79.0 & 54.7 & 49.9 & 51.9\\
Melisma & {\bf 81.9 } & 60.7 & {\bf 63.7} & {\bf 62.2} \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the KP Corpus dataset.}
\label{tab:kpcorpus2-root}
\end{table}
We additionally compare semi-CRF against the HarmAn algorithm created by \citet{pardo:cmj02}, which achieves a 75.8\% event-level accuracy on the KP Corpus. We made several modifications to the initial evaluation of semi-CRF on the full KP Corpus to enable this comparison. For instance, Pardo and Birmingham omit a Schumann piece from their evaluation, testing HarmAn on 45 songs instead of 46. We omitted this piece as well. They also look at the labels that appear in the dataset beforehand, ignoring any segments whose correct labels are chords that appear less than 2\% of the time when rounded. We followed suit, ignoring segments labeled with augmented 6th chords and other less common labels. Overall, semi-CRF obtains an event-level accuracy of 75.3\%, demonstrating that it is competitive with HarmAn. However, it is important to note that these results are still not fully comparable: sometimes HarmAn predicts multiple labels for a single segment, and when the correct label is among these, Pardo and Birmingham divide the credit for that segment by the number of predicted labels and count this fractional value as correct. In contrast, semi-CRF always predicts one label per segment.
\subsubsection{KP Corpus Error Analysis}
Both machine learning systems struggled on the KP corpus, with Melisma performing better on both event-level accuracy and segment-level F-measure. This can be explained by the smaller dataset, and thus the smaller number of available training examples. The KP corpus was the smallest of the four datasets, especially in terms of the number of segments -- less than a third compared to BaCh, and less than a tenth compared to TAVERN. Furthermore, the textbook excerpts are more diverse, as they are taken from 11 composers and are meant to illustrate a wide variety of music theory concepts, leading to mismatch between the training and test distributions and thus lower test performance.
\subsection{Rock Evaluation}
\label{sec:rock-eval}
We split the 59 songs in the rock dataset into 10 folds: 9 folds with 6 songs and 1 fold with 5 songs. Similar to the full KP Corpus evaluation from Section~\ref{sec:kpcorpus-eval}, we create two versions of the semi-CRF model. The first is the original semi-CRF system (semi-CRF$_1$) which does not contain suspended and power chord features. The second is a new version of semi-CRF (semi-CRF$_3$) which has suspended and power chord features added to it.
We do not include HMPerceptron in the evaluation of the full dataset, as it is not designed for suspended and power chords.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{Rock 59 songs: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF$_1$ & 66.0 & 49.8 & 47.3 & 48.5 \\
semi-CRF$_3$ & {\bf 69.4} & {\bf 62.0} & {\bf 54.9} & {\bf 58.3} \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the Rock dataset.}
\label{tab:rock1-chord}
\end{table}
As shown in Table~\ref{tab:rock1-chord}, semi-CRF$_3$ obtains higher event and segment-level accuracies than semi-CRF$_1$. Therefore, we use semi-CRF$_3$ for the rest of the experiments, simply calling it semi-CRF.
We perform root only evaluation on the full Rock dataset using semi-CRF and Melisma. In this case, it is not necessary to omit the true segments whose labels are suspended or power chords, as these types of chords contain a root. As shown in Table~\ref{tab:rock1-root}, semi-CRF outperforms Melisma on all measures: it obtains an 8.4\% improvement in event-level root accuracy and a 31.5\% improvement in segment-level F-measure over Melisma.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{Rock 59 songs: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 85.8} & {\bf 70.9} & {\bf 63.2} & {\bf 66.8} \\
Melisma & 77.4 & 29.5 & 44.0 & 35.3 \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the Rock dataset.}
\label{tab:rock1-root}
\end{table}
We also evaluate only on the 51 songs that do not contain suspended or power chords to compare semi-CRF against HMPerceptron. We do this by splitting the reduced number of songs into 10 folds: 9 folds with 5 test songs and 46 training songs, and 1 fold with 6 test songs and 45 training songs. The results shown in Table~\ref{tab:rock2-chord} demonstrate that semi-CRF performs better than HMPerceptron: it achieves an 8.8\% improvement in event-level chord accuracy and a 21.3\% improvement in F-measure over HMPerceptron. Additionally, we evaluate the root-level performance of all systems on the 51 songs. The results in Table~\ref{tab:rock2-root} show that the semi-CRF achieves better root-level accuracy than both systems: it obtains a 5.4\% improvement in event-level root accuracy over HMPerceptron and an 8.2\% improvement over Melisma. In terms of segment-level accuracy, it demonstrates a 22.2\% improvement in F-measure over HMPerceptron and a 28.8\% improvement over Melisma. These results are statistically significant with a $p$-value of 0.01 using a one-tailed Welch's t-test.
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{Rock 51 songs: Full chord evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 70.1} & {\bf 58.8} & {\bf 53.2} & {\bf 55.9} \\
HMPerceptron & 61.3 & 41.0 & 29.9 & 34.6\\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the Rock dataset.}
\label{tab:rock2-chord}
\end{table}
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\multicolumn{5}{c}{Rock 51 songs: Root only evaluation}\\
\toprule
System & Acc$_E$ & P$_S$ & R$_S$ & F$_S$ \\ \midrule
semi-CRF & {\bf 86.1} & {\bf 68.6} & {\bf 61.9} & {\bf 65.1} \\
HMPerceptron & 80.7 & 51.3 & 36.9 & 42.9\\
Melisma & 77.9 & 30.6 & 45.8 & 36.3 \\
\bottomrule
\end{tabular}
\caption{Event (Acc$_E$) and Segment-level (P$_S$, R$_S$, F$_S$) results (\%) on the Rock dataset.}
\label{tab:rock2-root}
\end{table}
\subsubsection{Rock Error Analysis}
As mentioned in Section~\ref{sec:rock-dataset}, we automatically detected and manually fixed a number of mistakes that we found in the original chord annotations. In some instances, although the root of the provided chord label was missing from the corresponding segment, the label was in fact correct. In these instances, it was often the case that the root appeared in the previous segment and thus was still perceptually salient to the listener, either because of its long duration or because it appeared in the last event of the previous segment. Sometimes, the same harmonic and melodic patterns were repeated throughout the piece, with the root appearing in the first few repetitions of these patterns, but disappearing later on. This was true for `Twist and Shout' by the Beatles, in which the same I IV V7 progression of C major, F major, and G dominant 7 is repeated throughout the song, with the root C disappearing from C major segments by measure 11. Due to their inability to exploit larger scale patterns, neither system could predict the correct label for such segments.
We also found that three of the songs that we manually detected as having labels with incorrect modes (`Great Balls of Fire,' `Heartbreak Hotel,' and `Shake, Rattle, and Roll') were heavily influenced by blues. The three songs contain many major chord segments where the major third is purposefully swapped for a minor third to create a blues feel. We kept the labels as they were in these instances, but again both systems struggled to correctly predict the true label in these cases.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\columnwidth]{error-analysis3.png}
\caption{Measures 14-15 of `Let It Be' by the Beatles, where HMPerceptron incorrectly predicts G:maj6 for measure 15 (bottom), while semi-CRF correctly predicts G:maj (top).}
\label{fig:error_analysis3}
\end{figure}
Figure~\ref{fig:error_analysis3} contains a brief excerpt from `Let It Be' by the Beatles demonstrating the utility of a segmental approach over an event-based approach. Semi-CRF correctly predicts a segment spanning measure 15 with the label G:maj, while HMPerceptron predicts these same segment boundaries, but incorrectly produces the label G:maj:add6. Semi-CRF most likely predicts the correct label because of its ability to heuristically detect figuration: the E5 on the first beat of measure 15 is a suspension, while the E5 on the fourth beat is a neighboring tone. It would be difficult for an event-based approach to recognize these notes as nonharmonic tones, as detecting figuration requires segment information. For instance, detecting a neighboring tone requires determining whether one of its anchor notes belongs to the candidate segment (see Appendix~\ref{sec:appendix-figuration} for a full definition of neighbor and anchor tones).
\section{Related Work}
Numerous approaches for computerized harmonic analysis have been proposed over the years, starting with the pioneering system of \citet{winograd:jmt68}, in which a systemic grammar was used to encode knowledge of harmony. \citet{barthelemy:ismir01} and more recently \citet{rizo:chapter16} provide a good survey of previous work in harmonic analysis of symbolic music. Here, we focus on the three systems that inspired our work: Melisma \citep{temperley:cmj99}, HarmAn \citep{pardo:cmj02}, and HMPerceptron \citep{radicioni:amir10} (listed in chronological order). These systems, as well as our semi-CRF approach, incorporate knowledge of music theory through manually defined {\it rules} or {\it features}. For example, the ``compatibility rule'' used in Melisma is analogous to the chord coverage features used in the semi-CRF, to the ``positive evidence'' score computed based on the six template classes in HarmAn, and to the ``Asserted-notes'' features in HMPerceptron. Likewise, the segment purity features used in semi-CRF are analogous to the ``negative evidence'' scores from HarmAn, while the figuration heuristics used in semi-CRF can be seen as the counterpart of the ``ornamental dissonance rule'' used in Melisma. In these systems, each rule or feature is assigned an importance, or weight, in order to enable the calculation of an overall score for any candidate chord segmentation. Given a set of weights, optimization algorithms are used to determine the maximum scoring segmentation and labeling of the musical input. HMPerceptron uses the Viterbi algorithm \citep{rabiner:ieee89} to find the optimal sequence of event labels, whereas semi-CRF uses a generalization of Viterbi \citep{sarawagi:nips04} to find the joint most likely segmentation and labeling. The dynamic programming algorithm used in Melisma is actually an instantiation of the same general Viterbi algorithm -- like HMPerceptron and semi-CRF it makes a first-order Markov assumption and computes a similar lattice structure that enables a linear time complexity in the length of the input. HarmAn, on the other hand, uses the Relaxation algorithm \citep{cormen:book09}, whose original quadratic complexity is reduced to linear through a greedy approximation.
While the four systems are similar in terms of the musical knowledge they incorporate and their optimization algorithms, there are two important aspects that differentiate them:
\begin{enumerate}
\item Are the weights learned from the data, or pre-specified by an expert? HMPerceptron and semi-CRF train their parameters, whereas Melisma and HarmAn have parameters that are predefined manually.
\item Is chord recognition done as a joint segmentation and labeling of the input, or as a labeling of event sequences? HarmAn and semi-CRF are in the segment-based labeling category, whereas Melisma and HMPerceptron are event-based.
\end{enumerate}
Learning the weights from the data is more feasible, more scalable, and, given a sufficient amount of training examples, much more likely to lead to optimal performance. Furthermore, the segment-level classification has the advantage of enabling segment-level features that can be more informative than event-level analogues. The semi-CRF approach described in this paper is the first to take advantage of both learning the weights and performing a joint segmentation and labeling of the input.
\section{Future Work}
\label{sec:conclusion}
Manually engineering features for chord recognition is a cognitively demanding and time-consuming process that requires music theoretical knowledge and that is not guaranteed to lead to optimal performance, especially when complex features are required. In future work we plan to investigate automatic feature extraction using recurrent neural networks (RNN). While RNNs can theoretically learn useful features from raw musical input, they are still event-level taggers, even when used in more sophisticated configurations, such as bi-directional deep LSTMs \citep{graves:book12}. We plan to use the Segmental RNNs of \citet{kong:iclr16}, which combine the benefits of RNNs and semi-CRFs: bidirectional RNNs compute representations of candidate segments, whereas segment-label compatibility scores are integrated using a semi-Markov CRF. Learning the features entirely from scratch could require a larger number of training examples, which may not be feasible to obtain. An alternative is to combine RNN sequence models with explicit knowledge of music theory, as was done recently by \citet{jaques:icml17} for the task of melody generation.
Music analysis tasks are mutually dependent on each other. Voice separation and chord recognition, for example, have interdependencies, such as figuration notes belonging to the same voice as their anchor notes. \citet{temperley:cmj99} note that harmonic analysis, in particular chord changes, can benefit meter modeling, whereas knowledge of meter is deemed crucial for chord recognition. This ``serious chicken-and-egg problem'' can be addressed by modeling the interdependent tasks together, for which probabilistic graphical models are a natural choice. Correspondingly, we plan to develop models that jointly solve multiple music analysis tasks, an approach that reflects more closely the way humans process music.
\section{Conclusion}
We presented a semi-Markov CRF model that approaches chord recognition as a joint segmentation and labeling task. Compared to event-level tagging approaches based on HMMs or linear CRFs, the segment-level approach has the advantage that it can accommodate features that consider all the notes in a candidate segment. This capability was shown to be especially useful for music with complex textures that diverge from the simpler note-for-note structures of the Bach chorales. The semi-CRF's parameters are trained on music annotated with chord labels, a data-driven approach that is more feasible than manually tuning the parameters, especially when the number of rules or features is large. Empirical evaluations on three datasets of classical music and a newly created dataset of rock music show that the semi-CRF model performs substantially better than previous approaches when trained on a sufficient number of labeled examples and stays competitive when the training data is small. The code is made publicly available on the first author's GitHub\endnote{Link to Code: \\ \url{https://github.com/kristenmasada/chord_recognition_semi_crf}}.
\theendnotes{}
\section*{Acknowledgments}
We would like to thank Patrick Gray for his help with pre-processing the TAVERN corpus. We thank Daniele P. Radicioni and Roberto Esposito for their help and their willingness to share the original BaCh dataset and the HMPerceptron implementation. We would like to thank Bryan Pardo for sharing the KP Corpus and David Temperley for providing us with an updated version that fixed some of the labeling errors in the original KP corpus. We would also like to thank David Chelberg for providing valuable comments on a previous version of the manuscript. Finally, we would like to thank the anonymous reviewers and the editorial team for their insightful comments.
Kristen Masada's work on this project was partly supported by an undergraduate research apprenticeship grant from the Honors Tutorial College at Ohio University.
\section{Introduction}
With near-ubiquitous deployment of WiFi-enabled smart devices ({\em
e.g.}, security cameras, voice assistants, and smart appliances), our
homes and offices are filled with many WiFi
devices\footnote{The worldwide number of WiFi-enabled IoT devices is expected to
reach 5 billion by 2025~\cite{iot-wifi-number}, and the
number of WiFi connected devices will reach 22.2 billion by
2021~\cite{wifi-number}.}. The ubiquity of these devices and their sheer
density means that they will fill the air around us with radio frequency (RF)
signals, wherever we go.
Unfortunately, the RF signals emitted by these devices pose a real security and privacy risk to all of us. They are constantly
interacting with ({\em e.g.}, reflecting off) our bodies, carrying
information about our location,
movement and other physiological properties to anyone nearby with sufficient
knowledge and curiosity. In this work, we explore a new set of
passive reconnaissance attacks that
leverages the presence of {\em ambient WiFi signals} to monitor
users in their homes and offices,
even when the WiFi network, data packets, and individual
devices are completely secured and operating as expected. We show that by just
sniffing existing WiFi signals, an adversary outside of the
target property can accurately detect
and track movements of any users down to their individual rooms, regardless
of whether they are carrying any networked devices.
We believe this is the first in a new class of silent
reconnaissance attacks that are notable
because of their passive nature and general applicability. This attack can be
highly effective, incurs low cost (only cheap commodity hardware), and yet
remains {\em undetectable}. The
attacker does not need to compromise/access the WiFi network or individual
devices, decode packets or transmit any signals. All they need is to place a
single, off-the-shelf, minimally equipped WiFi receiver outside of the target
property. This attacker receiver only needs a {\em single}
antenna, and simply measures the signal strength of existing WiFi signals,
without decoding any packets.
Unaddressed, these reconnaissance attacks put our security and privacy at
significant risk. \shepherd{The ability for an attacker to continuously and automatically
scan, detect and locate humans behind walls at nearly no cost and zero risk ({\em
e.g.\/} attacker waits for notifications remotely) will enable
attackers to launch strong physical attacks and commit serious crimes. Such
threat broadly applies to our homes, businesses, government
facilities and many others. Examples include burglary to homes and
offices, kidnapping and assault of targets in their homes, ``casing'' a
bank prior to robbery, and even planning attacks against government
agencies.
}
\para{Why WiFi sensing?} We note that there are some simple approaches to
inferring user presence that do not require the use of sophisticated
RF sensing.
\shepherd{For example, attackers could infer user presence by
observing lighting or acoustic conditions inside an area, or use thermal imaging. These
attacks are well understood and easily disrupted by time-controlled lighting or sound
systems~\cite{smartlighting}, or insulated walls designed to
prevent heat leakage and naturally block thermal
imaging~\cite{thermalimaging}. } Finally, attackers can infer user presence
from increased WiFi network traffic. Yet this is highly unreliable, as growth of
IoT devices increases traffic levels in the absence of users. It is also easily
thwarted using cover traffic~\cite{covertraffic}.
Instead, we describe a new class of physical reconnaissance attacks enabled by
inherent properties of WiFi signal propagation:
1) user movement near a WiFi transmitter changes its signal propagation in a
way that can be observed by nearby receivers, and 2) walls and buildings
today are not built to insulate against WiFi signals, thus signals sent by
devices inside a property can often be overheard by outside
receivers. \readmore{Leveraging these, we design the attack such
that}, whenever a WiFi device transmits signals, it unknowingly turns into a
tracking device for our attack. In this context, our attack could
be viewed as an adversarial analogy to WiFi-based device-free human sensing
({\em e.g.}, see-through-wall systems that actively transmit
customized RF signals towards the target~\cite{adib2013}). Yet
our attack differs significantly (\S\ref{sec:bksensing}), because we use a novel model on
multipath signal
dynamics to remove dependence on active
transmissions (only passive sensing), customized hardware (only a
commodity, single antenna receiver), \readmore{and knowing precise locations of WiFi devices
inside the property.}
\para{Motion detection via multipath signal dynamics.} The core of our attack is a new model on signal dynamics that links
together human motion near WiFi transmitters and variance of
multipath signal propagation seen by a sniffer outside of
the property. Specifically, when a human target moves ({\em e.g.}, sitting
down, walking, opening/closing doors) near a WiFi device
$x$, the motion changes the multipath signal propagation from $x$ to
the attacker sniffer $S$. Our new signal
model allows $S$ to accurately capture such signal dynamics and use
them to pinpoint the target to her specific room. The more WiFi devices inside the
property, the more accurate the tracking becomes.
Our proposed attack does not assume any prior knowledge of the WiFi network
and devices inside the target property, \readmore{including their
locations. Our attack can discover
devices and estimate their coarse locations using their WiFi signals}, and
the attack continues to function even if these devices are
relocated.
We build a complete prototype of the
attacker system on commodity smartphones, and experimentally show that the attack
(using a single smartphone) is not only highly accurate (detecting and
localizing users to an individual room), but also highly general (effective
across a wide range of 11 different physical settings, including both office buildings
and residential apartments).
\para{Defense via AP-based signal obfuscation.} We explore
robust defenses against our proposed attack and other passive
sensing attacks.
We consider four immediate defenses: reducing leakage by geo-fencing and rate limiting, signal obfuscation by MAC randomization, and power randomization at WiFi devices. We find that they are either impractical or ineffective. We then propose a practical alternative
using {\em AP-based signal obfuscation},
where the WiFi Access Point actively injects customized cover signal for its
associated devices. This defense effectively creates noise to the signal
measurements, such that the attacker is unable to identify change due to
human motion. Our defense is easy to
implement, incurs no changes to devices other than the AP, but reduces
the human detection rate to 47\% while increasing the
false positive rate to 50\%. Such ambiguity renders the attack
useless in practice.
In the rest of the paper, we describe our efforts to understand the feasibility,
challenges, and defenses surrounding the proposed attack. In
short, our key contributions include:
\begin{packed_itemize}
\item We identify a low-cost, undetectable human sensing attack using
just a single sniffer with a single antenna, and design a new
multipath signal variance model for motion detection.
\item We prototype the attacker system on a commodity smartphone
and validate the attack in real-world settings.
\item We propose and evaluate a practical and effective
defense using AP-based signal obfuscation.
\end{packed_itemize}
\para{Limitations.}
\shepherd{Currently, our attack detects
human presence in each room over time}
by detecting and localizing targets to individual rooms. It is unable
to identify fine-grained features such as speed, activity
type and user identity, or separate humans from large animals.
Despite such limitations, our work identifies a realistic, low-cost,
and undetectable
reconnaissance attack using passive WiFi sensing. We hope our work brings more
attention to this important and understudied topic.
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{figs/mov}
\caption{Traditional human sensing designs either (a) rely on active
  transmissions by (customized) attacker devices, or (b) deploy one
  or more advanced sniffers (laptops/USRPs) with multiple antennas; (c) our attack uses
  a single smartphone (with a
  single antenna) as the passive sniffer, and turns commodity WiFi devices
  inside the property into motion sensors.
}
\label{fig:mov}
\end{figure}
\section{Background: Device-free Human Sensing}
\label{sec:bksensing}
Some details of adversarial sensing attacks are reminiscent of the problem of
``device-free human sensing.'' A natural question is: {\em can we simply
reuse existing work on device-free human sensing systems to launch
adversarial sensing attacks}? To answer this question, and to better
understand how these attacks fit in the context of prior work, we review in detail
existing works in device-free human sensing.
The task of ``device-free human sensing'' makes no assumptions
on whether targets are carrying networked devices. Sensing is generally
achieved by capturing and analyzing RF signals reflected off or blocked by
human bodies. To be clear, this is quite different from the task of ``device
localization,'' in which the target is a networked device that
communicates and synchronizes
with the sensing system, {\em i.e.\/} sending and/or receiving signals ({\em
e.g.},~\cite{arraytrack13nsdi,spotfi15sigcomm,pinpoint06mobisys,
radar00victorbahl,localization13survey}).
Existing works on device-free human sensing can be categorized into
two broad groups: {\em active mode} and {\em passive mode}.
\para{Active sensing.} Most of the existing works fall into this
group, where the sensing device continuously transmits RF
signals towards the target area (Figure~\ref{fig:mov}a). As some signals get
reflected off the target body, they are captured by the sensing
device to infer the target status ({\em
e.g.},~\cite{adib2013,adib15nsdi,lifs16mobicom, tsinghua14mobicom}). To
facilitate sensing/inference, the RF signals are often custom-designed
to capture information of the target, {\em e.g.}, frequency-modulated continuous
wave (FMCW) signal~\cite{adib2013,adib15nsdi} that largely differs from WiFi
transmissions. We note that some prior works on active sensing ({\em
e.g.},~\cite{widet18mswim,widar2_18mobisys,youssef2007challenges})
are branded as ``passive sensing'' to refer to device-free human
sensing, although their
sensing device is actively transmitting signals.
When considering our adversarial scenario in the context of active sensing,
the key property is ``detectability.'' Since
the attacker device must {\em continuously} transmit RF
signals, it is easy to detect, localize and remove these devices.
\para{Passive sensing.} In a passive mode, sensing devices only listen to
existing transmission signals, but do not transmit signals themselves. They
have no control of the RF signal used for sensing. The state-of-the-art
design~\cite{lifs16mobicom} deploys multiple sniffers to listen to WiFi
signals sent by multiple transmitters in the target
area, and uses these signals to detect and
estimate user location. Specifically, when a user blocks the direct line of
sight (LoS) path from a transmitter to a sniffer, the original signal will diffuse
around the user. By building a precise propagation model on signal diffusion
on the LoS path, \cite{lifs16mobicom} is able to derive the user location.
However, doing so requires precise location of the transmitters (cm-level). Such
requirement is impractical under our adversarial scenario.
Similarly, an earlier work~\cite{banerjee14wisec} detects
\shepherd{the presence of a user}
when she disturbs the direct path between a WiFi access point (AP) and a
sniffer. Again, the attacker must
obtain AP locations a priori, and must deploy multiple
sniffers around the target area to detect \shepherd{user presence}
(see Figure~\ref{fig:mov}b).
\para{Key observation.} While some existing human sensing systems can be
turned into attacks, they impose a hefty cost and risk for the attacker, significantly
limiting the applicability of the attack. This motivates us to consider a new,
passive human sensing attack that can be launched by a minimally equipped
attacker and remains undetectable. Along these lines, our proposed
attack only requires a single
commodity WiFi receiver (with a single antenna) outside of the target
area (Figure~\ref{fig:mov}c). As we will explain in
\S\ref{sec:math_tripwire}, this is made possible by building a new
model to detect motion using dynamics of multipath signal propagation from
each anchor to the sniffer, rather than those of the direct path as in~\cite{lifs16mobicom, banerjee14wisec}.
\section{Attack Scenario and Adversarial Model}
We start by describing the
attack scenario, the adversarial model, and the type of signals captured by the attacker sniffer.
\para{Attack scenario: one sniffer and many anchors.} As shown in
Figure~\ref{fig:mov}c, our attack leverages the
ubiquity of commodity WiFi devices, ranging from routers, desktops, printers, to IoT
devices like voice assistants, security cameras, and smart
appliances. These devices are often
spread over each room of our homes and offices~\cite{iotlayout2,iotlayout1}, and
generally flood the surroundings with periodic WiFi
packets. We refer to these WiFi devices as {\em anchors} in our attack.
Our attack also
leverages the fact that WiFi signals are designed for significant coverage
and can penetrate most walls. Thus an attacker can place a sniffer outside
the target property to passively listen to existing signals sent by
anchors, without compromising them or the network. Because WiFi protocols do
not encrypt source and destination MAC addresses, the
sniffer can isolate packets sent by each anchor, even under MAC
randomization~\cite{privacy14iccst, siby2017iotscanner,
macrandomization}.
Our attack is effective if the sniffer can capture signals from at
least one anchor per room of interest. The actual number of
sniffers required depends on the size and shape of the target property
and wall materials. Across all of our experiments with 11
office buildings, residential apartments and single family houses
(described later in \S\ref{sec:locate_static}), a single sniffer is
sufficient to cover our target scene.
Our attack does not work when WiFi
signals do not leak to outside of the property,
{\em e.g.\/} when the property has thick, concrete exterior walls. The attacker can
detect this (and walk away) when either the sniffer
sees too little WiFi signals, or the detected anchors are outside of the
target area (\S\ref{sec:locate_static}).
\para{Adversarial model.} We make the following assumptions about the adversary.
\begin{packed_itemize}
\item The adversary makes no assumptions about the number, location, or
movement speed of human targets being tracked.
\item The adversary does not have physical or remote access to WiFi
devices in the target property, or the property itself.
\item Similar to the evil maid attack~\cite{evilmaid}, the attacker
can physically move {\em outside} the target
property, either outside exterior walls or along public corridors,
without attracting suspicion. This temporary access is necessary only for
initial bootstrapping the attack, and not required for the prolonged sensing
phase.
\item To avoid detection, the attacker only performs passive WiFi sniffing,
and avoids using any bulky or specialized hardware, {\em e.g.} directional antennas,
antenna arrays, or USRP radios~\cite{usrp}. Instead, they use commodity
mobile or IoT devices, {\em e.g.} smartphones or smart street lights. The
sniffer device only needs a single (built-in) antenna.
Note that while some smartphones (including the ones used in our attack
implementation) have multiple antennas, their firmware only exposes
aggregate signal received across multiple antennas, effectively giving the
same amount of information as devices with a single antenna.
\item \shepherd{The adversary partitions the target property into ``regions''
or virtual rooms around the anchors to detect user presence. When the
adversary has access to rough floor plans of the target
property\footnote{\shepherd{Rough floor plan can often
be derived from publicly available data, such as real estate
websites and apps, and public building
documents.}}, the attacker detects user presence down to their
individual rooms.}
\end{packed_itemize}
We intentionally choose a resource-limited attacker to demonstrate
the generality of this attack. Lower resource
requirements imply that the attack can be successful in a wider range
of practical scenarios.
\para{Signals captured by the sniffer.} For each anchor $x$, the
sniffer $S$ can extract two
metrics from $x$'s raw WiFi signals (even if the packets are
encrypted). The first is {\em amplitude of channel state
information (aCSI)} that measures the received signal strength (RSS) on
each of the many frequency subcarriers used in a transmission. Since human
movements change the multipath signal propagation from $x$ to $S$,
$x$'s aCSI values seen by $S$ will fluctuate over time. The second one
is RSS, or the aggregated aCSIs over all the subcarriers.
This aggregation makes RSS relatively insensitive to human movements.
A passive sniffer with a single antenna is
{\em unable} to extract advanced signal features including phase of
CSI (fCSI), Angle of Arrival
(AoA) and Time of Flight
(ToF)~\cite{wideo15nsdi,widar2_18mobisys}. Tracking fCSI and ToF requires the sniffer to actively synchronize with the
transmitter~\cite{vasisht2016decimeter}, and estimating AoA
requires the sniffer to have an antenna array~\cite{arraytrack13nsdi,mostofi18ipsn}. As mentioned earlier, while some
smartphones are equipped with multiple antennas, their firmware only
reports a single effective CSI but not per-antenna CSI values.
Furthermore, AoA estimation requires antennas to be
separated by half a wavelength (12.5cm for WiFi). Thus a
reasonable array of 4 antennas will be at least 19cm in width.
These physical limitations rule out the feasibility of using phase, ToF and
AoA in our sensing design.
\begin{figure*}[t]
\centering
\subfigure[$\overline{\sigma_{aCSI}}$ captures user movement]{
\includegraphics[width=0.32\textwidth]{figs/csi_events_combined}
} \hfill
\subfigure[$\overline{\sigma_{aCSI}}$ when the user is near, far from
the anchor, or completely absent]{
\includegraphics[width=0.32\textwidth]{figs/aCSI_combined_scene_std}
} \hfill
\subfigure[Illustration of $\overline{\sigma_{aCSI}}$'s near-far phenomenon]{
\includegraphics[width=0.28\textwidth]{figs/aCSI_near_far_explain_new}
}
\caption{Observations on how human movements affect an anchor's
$\overline{\sigma_{aCSI}}$ seen by the
sniffer.
(a) $\overline{\sigma_{aCSI}}$ w/ and w/o user presence;
(b)-(c) When a user moves near an
anchor $x$, some signal paths from $x$ to
the sniffer are more frequently affected, so
$\overline{\sigma_{aCSI}}(x)$ rises.
As she moves away from $x$ and has
less impact on the signal propagation,
$\overline{\sigma_{aCSI}}$ reduces.
}
\label{fig:csi}
\end{figure*}
\vspace{-0.02in}
\section{Turning WiFi Devices into Motion Sensors}
\label{sec:math_tripwire}
Our attack is driven by a new aCSI variance model that links
human motion near any anchor to temporal dynamics of the anchor's multipath signal
propagation seen by the attacker sniffer. Whenever an anchor
transmits WiFi signals, it unknowingly turns into a motion sensor for
our attack. These ``motion signals'' are immediately seen by the
attacker sniffer, who then pinpoints the targets down to their exact
room(s).
Unlike prior work on passive RF sensing~\cite{lifs16mobicom,
banerjee14wisec}, our new model focuses on capturing temporal dynamics of
multipath signal propagation\footnote{WiFi signals sent by an anchor,
when arriving at the sniffer, will go through rich multipath propagation, {\em
e.g.}, reflections by furniture, walls and human.} from each anchor to
the sniffer, rather than only the direct path. This lets the attacker detect
any motion {\em around} each anchor that disturbs the multipath signal
propagation, and also eliminates the need to obtain precise
anchor locations and deploy multiple sniffers~\cite{lifs16mobicom, banerjee14wisec}.
In the following, we describe the basic observations that motivate us
to pursue the attack, the key challenges it faces,
and the key design concepts that make the
attack feasible.
\subsection{Correlation between Signal Dynamics and User Movement}
\para{(i) User movement $\rightarrow$ aCSI variance.}
In an office/home, human users are never completely
stationary. Whether they are playing games, walking, opening doors,
or sitting down, their natural movements will disturb the
multipath signal
propagation of nearby WiFi transmitters ({\em i.e.\/} anchors), creating
immediate, temporal
variations in their aCSI values seen by the sniffer.
We propose to capture such temporal variation by a new {\em aCSI
variance}
metric:
\begin{equation} \label{eq:acsi}
\overline{\sigma_{aCSI}} = \frac{1}{|I_q|} \sum_{i\in {I_q}}
\sigma_{aCSI}(f_i)
\end{equation}
where $\sigma_{aCSI}(f_i)$ represents the aCSI {\em standard deviation} for
subcarrier $i$ (at frequency $f_i$) calculated by the sniffer over a short
time window ({\em e.g.}, 5s).  We also take steps to reduce the impact of noise
and artifacts in aCSI reports by the firmware, first denoising
aCSI per subcarrier using the wavelet method~\cite{passivecsi18NaNA}, then removing outliers by only including
subcarriers whose $\sigma_{aCSI}(\cdot)$ sequences are highly
correlated. The set of subcarriers used in the above
calculation ($I_q$) consists of the subcarriers in the top 50\% of most correlated pairs.
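For concreteness, the following Python sketch illustrates one way to compute this metric for a single time window; it omits the wavelet denoising step and simplifies the outlier-removal rule to correlating each subcarrier's trace with the mean trace within the window, so it approximates rather than reproduces our implementation.
\begin{verbatim}
import numpy as np

# acsi: (num_samples x num_subcarriers) array of per-subcarrier amplitude
# readings collected by the sniffer within one short window.
def acsi_variance(acsi, keep_fraction=0.5):
    sigma = acsi.std(axis=0)              # per-subcarrier std. deviation
    mean_trace = acsi.mean(axis=1)        # average behavior over subcarriers
    corr = np.array([np.corrcoef(acsi[:, i], mean_trace)[0, 1]
                     for i in range(acsi.shape[1])])
    corr = np.nan_to_num(corr)            # guard against flat traces
    n_keep = max(1, int(keep_fraction * acsi.shape[1]))
    keep = np.argsort(corr)[-n_keep:]     # most correlated subcarriers (I_q)
    return sigma[keep].mean()
\end{verbatim}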
Figure~\ref{fig:csi}a plots several 30-second samples of
an anchor's $\overline{\sigma_{aCSI}}$ seen by the sniffer,
for scenarios of no human presence,
a nearby user sitting down and standing up, opening/closing the door, and walking. Compared
to no human presence, user movements lead to much higher
$\overline{\sigma_{aCSI}}$.
We also find that user motion differs from equipment
motion commonly seen in homes and offices, {\em e.g.\/} an oscillating fan and a
robot vacuum. The latter is either too weak to produce any notable impact
on $\overline{\sigma_{aCSI}}$ or generates periodic signal patterns
different from those of human motion (\S\ref{sec:eval}).
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{figs/scene_extended}
\caption{Four (simple) cases on user presence and the corresponding
$\{\overline{\sigma_{aCSI}}\}$ traces from anchors A, B, and C.}
\label{fig:csi_single}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figs/scene_extended2}
\caption{Three (complex) cases on user presence and the corresponding $\{\overline{\sigma_{aCSI}}\}$ traces.}
\label{fig:csi_multi}
\end{figure*}
\para{(ii) $\overline{\sigma_{aCSI}}$ depends on user-anchor
distance.} Another key observation is that when a target is
far away from an anchor $x$, its movements will
produce less disturbance to the signal propagation
from $x$ to the
sniffer. This is demonstrated in
Figure~\ref{fig:csi}b, which compares an anchor
$x$'s $\overline{\sigma_{aCSI}}$ when a human user (walking) is close to $x$,
far from $x$ (in a different room), or completely absent.
We think this is due to the fact that a target is ``bigger'' when it is closer
(Figure~\ref{fig:csi}c). As a target moves in the space between an anchor
$x$ and the sniffer, \yzedit{it} blocks and diffracts some signal paths from $x$ to
the sniffer. When close to $x$, \yzedit{it} affects more paths than when \yzedit{it}
is far away from $x$. Thus the received signals seen by the sniffer
will display a larger temporal variation when the user is closer to $x$. This
phenomenon can be modeled using an abstract, ray-tracing model on
$\overline{\sigma_{aCSI}}$ (omitted due to space
limits). Given a fixed time period, user
movements near $x$ create more path dynamics than those far from $x$,
leading to a larger standard deviation in the received signal strength (per
subcarrier).
We validate this observation by measuring $\overline{\sigma_{aCSI}}$
of multiple anchors (Table~\ref{tbl:devices_summary}) in 11 test scenes
(Table~\ref{tbl:attack_environments}). As a target moves in
the space between an anchor and the sniffer, we see a general
tendency of
$\overline{\sigma_{aCSI}}$ decreasing with the anchor-to-target
distance. We experiment with different wall materials
({\em e.g.}, glass, wood), distances between anchor and sniffer (8m--15m),
and sniffer placement ({\em e.g.}, on the floor, in the bush,
underneath a plastic trash can), and observe the same trend.
\para{(iii) $\overline{\sigma_{aCSI}}$ is a more robust motion
indicator than $aCSI$.}
Prior work~\cite{lifs16mobicom} localizes targets by
modeling $aCSI$ of the direct path. This requires an accurate propagation model and the
precise physical location of each anchor. Instead, our
$\overline{\sigma_{aCSI}}$ based method targets multipath
dynamics caused by user motion, and thus only
requires knowing the room in which each anchor resides, rather than its precise
location inside the room.
\subsection{Challenge: Sensing Ambiguity}
The above discussion suggests that with a sufficient number of anchors in a room,
the sniffer should be able to detect human motion in the room from its
anchors' $\overline{\sigma_{aCSI}}$. For example, if any
anchor's $\overline{\sigma_{aCSI}}$ is sufficiently large, {\em
i.e.\/} motion detected, the room
should be marked as occupied.
But we also find notable ambiguity in such a design, caused by two factors. {\em First},
$\overline{\sigma_{aCSI}}$ depends on both the target-anchor distance
and the motion pattern/strength, yet the attacker has no knowledge of
target behaviors or prior ground truth. {\em Second}, a short
physical distance to an anchor does not
always mean the target is in the same room as that anchor.
Next, we illustrate the problem of sensing ambiguity using real-world
examples, including four basic cases with a single user and three complex cases with multiple users.
\npara{Case 1: Target staying in a room.}
\yzedit{Figure~\ref{fig:csi_single}a} shows the traces of $\overline{\sigma_{aCSI}}$ for
three anchors: A and B in the left room, and C in the right room. The target user stays inside the left room
and moves around anchor A. In this case,
anchors B and C show no sign of targets nearby (very low
$\overline{\sigma_{aCSI}}$) while
anchor A has the largest $\overline{\sigma_{aCSI}}$ over time.
\npara{Case 2: Target moving across rooms.} Following case 1,
the target walks towards the room door (already open) at $t=12s$,
enters the hallway at $t=18s$, starts to open the right room door at
$t=24s$, closes it and enters the room at $t=28s$. In this case, anchor A's $\overline{\sigma_{aCSI}}$ drops as the target
moves away, followed by a short, minor increase due to the
opening/closing of the right room
door. Anchor C has a short, minor increase in its
$\overline{\sigma_{aCSI}}$ due to the door
opening/closing, followed by a significant increase as the target
moves closer. Interestingly, as the target transitions between the two rooms, we can
observe somewhat synchronized changes on anchor A and C (since they
are triggered by the same event). But from per-anchor
$\overline{\sigma_{aCSI}}$ traces, a naive design will mark both rooms
as occupied.
\npara{Case 3: Sniffer blocked by external pedestrian.}
Pedestrians who move outside of the target area near the attack sniffer
could also create aCSI variations. Yet such movements (near the
common receiver) will create synchronized aCSI
variations at all the transmitting anchors, regardless
of any human presence. Again a naive design will mark both rooms
as
occupied.
\npara{Case 4: External users walking around the house.}
When pedestrians move away from the
sniffer, the impact on $\overline{\sigma_{aCSI}}$
is small even when they are close to the anchors
(\yzedit{Figure~\ref{fig:csi_single}d}). This is because those
movements have little impact on the multi-path propagation between
the anchors (inside the two rooms) and the sniffer.
\npara{Case 5: Multiple users moving in neighboring rooms.}
\yzedit{Figure~\ref{fig:csi_multi}a} shows an example where two targets are moving
in two different rooms, each with an anchor device. In this case, both
anchors (A and C) display large $\overline{\sigma_{aCSI}}$.
\npara{Case 6: Multiple users moving in distant rooms.} A user walks around in room A when another user
sits down near an anchor in room C
(\yzedit{Figure~\ref{fig:csi_multi}b}). We see that room A and C's anchors
are triggered, but not the one in room B.
\npara{Case 7: Anchors on both sides of a wall.}
\yzedit{Figure~\ref{fig:csi_multi}c} shows that when the user moves near
anchor A, it triggers both anchors A and B (on the other side of the
wall). Here the simple design will mark both rooms as occupied
(since both anchors are triggered), creating a false positive.
\subsection{Design Concepts}
Our analysis shows that instantaneous $\overline{\sigma_{aCSI}}$ observed
at each individual anchor is insufficient to detect and
localize user
motion. We propose to overcome sensing ambiguity by analyzing the value
and pattern of $\overline{\sigma_{aCSI}}$ across both time and
anchors. The end result is a robust $\overline{\sigma_{aCSI}}$ model that
links each human motion with signal dynamics of anchors in its actual
room. Next, we outline the signal analysis process in two
steps: 1) {\em detecting human motion} and 2) {\em mapping each
motion to a room}. The detailed procedures are described later in \S\ref{sec:locate_mobile}.
\para{Detecting human motion.} If the number of detected
anchors per room is reasonable\footnote{Home/office WiFi devices
naturally
spread out in a room~\cite{iotlayout1,iotlayout2}. One can assume
3-4 devices in a room of common size of $25 m^2$.},
any
user movement should ``trigger'' at least one anchor in the scene.
But {\em how do we determine the threshold $\sigma_{p}(x)$ necessary to trigger an
anchor $x$}? This is not easy, because the adversary has no ground
truth on target presence. Also the threshold should be anchor-specific and
could also vary over time.
Leveraging the common observation that a user will not
stay and move in a single room forever, we
propose to derive $\sigma_{p}(x)$ by finding
``outliers.'' Assuming for anchor $x$ the sniffer can measure
$\overline{\sigma_{aCSI}}(x)$ over a long period of time ({\em
e.g.\/}, hours or even days), it is reasonable to assume that $x$ is mostly
not triggered. Thus the sniffer can apply outlier detection methods
like MAD~\cite{mad,mad_asym} to derive $\sigma_{p}(x)$ and
adapt it
over time.
\para{Mapping each motion to a room.} When multiple anchors in more than one
room are triggered, the sniffer needs to decide whether they are triggered by users in one
room (one source) or users in multiple rooms (multiple sources). This is because a target's movement
could trigger anchors in neighboring rooms (case 7), and the same
holds when multiple users move in two rooms (cases 5 and 6). The
sniffer needs to distinguish between them and determine the set of
rooms that are actually occupied.
Again we leverage a common observation: human
movements in different rooms are generally asynchronous, thus anchors
triggered by separate sources will display different temporal
patterns in $\overline{\sigma_{aCSI}}$ (cases 5 and 6). But when a single source triggers anchors in neighboring rooms (case 7),
these anchors' $\overline{\sigma_{aCSI}}$ will share a similar
pattern. By computing the correlation of normalized
$\overline{\sigma_{aCSI}}$ time series across anchors, we
can determine whether they are triggered by sources in the same
room ({\em i.e.\/}, whether they are positively
correlated). For example, the correlation
between the two triggered anchors is -0.07, -0.03, and 0.32 in cases
5, 6, and 7, respectively, and 0.23 during the door opening in
case 2.
Our attack can also use the floor plan (or room transition
probabilities) to fine-tune the detection result (similar to~\cite{dina-ubicomp18}). For example, a user cannot ``fly'' from
one room to another when the rooms are widely
separated. If the sniffer observes two anchors in two widely
separated rooms being triggered sequentially with little or no gap, it will
report two users, one in each room, rather than a single user moving
across rooms.
For other cases, our attack
conservatively treats the rooms with at least one anchor triggered as
occupied.
\section{Attack Design}
\label{sec:design}
After presenting the key concepts, we now
present the attack design in detail.
As shown in Figure~\ref{fig:attackprocess}, the attack
includes two phases: (1) identify and locate anchors
during ``bootstrapping,'' and (2) deploy the sniffer and perform ``continuous
human sensing.''
\para{(1) Bootstrapping.} The attacker first needs to identify and
locate the anchors in the target area. The unique feature of our
motion detection is that it does not require the precise location of anchors, only
their individual rooms. In our attack, this is achieved by the
attacker performing
a brief passive WiFi
measurement (using the sniffer) while walking outside the target
property. Similar to the evil maid
attack~\cite{evilmaid}, the walking measurements are only necessary
during initial bootstrapping.
Before feeding the collected measurements into a device
localization algorithm, our attack introduces a novel {\em data
sifting} procedure to identify the right measurement instances for
anchor localization. As a result, the attacker can localize anchors
down to their individual rooms using limited and often noisy signal
measurements\footnote{Because the attacker has little control
over the available walking path
and the propagation environment, the signal measurements will
inevitably contain
bias, noise and human errors.}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figs/overview_new}
\caption{Our attack process includes a
bootstrapping phase and a continuous sensing phase.}
\label{fig:attackprocess}
\end{figure}
\para{(2) Continuous human sensing.} After locating a list of anchors, the
attacker hides the same sniffer at a fixed
location outside the target area. The sniffer continuously monitors ambient
WiFi signals, and uses them to locate and
track human presence and movements inside. The sniffer also monitors each
detected anchor, and any relocation of an anchor will trigger its removal
from the anchor list, and possibly another {\em bootstrapping} phase to
(re)locate the anchors.
\shepherd{Our proposed attack process is fully automated, and does not require
any operations by the adversary, beyond the initial bootstrapping which
involves a walk around the property to collect signal measurements. Note
that this walking measurement could also be achieved by a robot/drone.}
\subsection{Continuous Human Sensing}
\label{sec:locate_mobile}
In this phase, the sniffer continuously collects
$\overline{\sigma_{aCSI}}$ for each anchor and analyzes them to
detect human presence and locate it to individual rooms.
\para{Detecting the presence of human motion.} For each
anchor $x$, when
$\overline{\sigma_{aCSI}}(x)>\sigma_{p}(x)$, the sniffer declares the presence of motion near
$x$, or ``anchor $x$ is triggered.'' To compute $\sigma_{p}(x)$, the
sniffer applies the median
absolute deviation (MAD)~\cite{mad,mad_asym} to the observed
$\overline{\sigma_{aCSI}}(x)$ over time.
Assuming ``untriggered'' $\overline{\sigma_{aCSI}}(x)$ follows a
Gaussian distribution, we have
\begin{equation}
\sigma_{p}(x)=\lambda\cdot \mbox{MAD}(Z)+\mbox{median}(Z)
\end{equation}
where $\lambda$ is the conservativeness factor and $Z$ is the
long-term observation of $\overline{\sigma_{aCSI}}(x)$.
\shepherd{The choice of $\lambda$ will affect the motion detection
rate and false alarm rate. For our attack, we set $\lambda=11$ so
the ideal detection rate per anchor is high. }
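A minimal sketch of this thresholding step, with $\lambda$ as in
Table~\ref{tbl:parameters}:
\begin{verbatim}
import numpy as np

def trigger_threshold(history, lam=11.0):
    # history: long-term observations Z of sigma_aCSI-bar(x) for anchor x.
    # Returns sigma_p(x) = lam * MAD(Z) + median(Z).
    z = np.asarray(history, dtype=float)
    med = np.median(z)
    mad = np.median(np.abs(z - med))
    return lam * mad + med

def is_triggered(current_value, history, lam=11.0):
    # Anchor x is "triggered" when sigma_aCSI-bar(x) exceeds sigma_p(x).
    return current_value > trigger_threshold(history, lam)
\end{verbatim}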
\para{Assigning target(s) to rooms.} When
any anchor(s) get triggered, the sniffer analyzes their temporal $\overline{\sigma_{aCSI}}$ traces to
determine the set of rooms that are actually occupied.
(i) If all the
triggered anchors are in the same room, then the room is declared as
occupied. Exit.
(ii) If most of the anchors are triggered, and their
$\overline{\sigma_{aCSI}}$ time series are (consistently) positively correlated, then
the sniffer is likely blocked by an external pedestrian,
and the sensing output is
``uncertain.'' Exit.
(iii) Now consider all the triggered anchors. Start from the
triggered anchor $x$ with the highest
$\overline{\sigma_{aCSI}}$. Mark $x$ as ``checked'' and its room
as occupied. Compute pair-wise correlation between $x$
and any triggered anchor ($y$) in neighboring rooms. If $x$ and $y$
are highly positively correlated, mark $y$ as checked. Repeat until
all the triggered anchors are ``checked''.
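The following is a minimal sketch of steps (i)--(iii). The data structures
(per-anchor traces, room labels, room adjacency) and the numeric parameters
\texttt{corr\_thresh} and \texttt{blocked\_frac} are illustrative assumptions,
not values prescribed by our design.
\begin{verbatim}
import numpy as np

def corr(a, b):
    # Pearson correlation of two normalized sigma_aCSI-bar traces.
    return float(np.corrcoef(a, b)[0, 1])

def assign_rooms(triggered, rooms, neighbors, total_anchors,
                 corr_thresh=0.2, blocked_frac=0.8):
    # triggered: dict anchor -> sigma_aCSI-bar trace in the current window
    # rooms:     dict anchor -> room id
    # neighbors: dict room id -> set of adjacent room ids
    # Returns the set of occupied rooms, or None for "uncertain".
    anchors = sorted(triggered, key=lambda a: np.mean(triggered[a]),
                     reverse=True)
    # (i) all triggered anchors sit in the same room
    if len({rooms[a] for a in anchors}) == 1:
        return {rooms[anchors[0]]}
    # (ii) most anchors triggered and mutually positively correlated:
    #      the sniffer is likely blocked by an external pedestrian
    pair_corrs = [corr(triggered[a], triggered[b])
                  for i, a in enumerate(anchors) for b in anchors[i + 1:]]
    if len(anchors) >= blocked_frac * total_anchors and min(pair_corrs) > 0:
        return None
    # (iii) greedily mark rooms, absorbing correlated anchors in
    #       neighboring rooms (same motion source, case 7)
    occupied, checked = set(), set()
    for x in anchors:          # ordered by descending sigma_aCSI-bar
        if x in checked:
            continue
        checked.add(x)
        occupied.add(rooms[x])
        for y in anchors:
            if (y not in checked
                    and rooms[y] in neighbors.get(rooms[x], set())
                    and corr(triggered[x], triggered[y]) > corr_thresh):
                checked.add(y)
    return occupied
\end{verbatim}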
\para{Tracking targets.} After detecting a set of motion events, the
sniffer can potentially
combine them with room transition probabilities built from the floor plan to
estimate user trajectories.
\shepherd{For example, the sniffer can track a security guard's patrol
route from a sequence of detected motion events.
It should be noted that while our attack can detect whether a room is
occupied or not, it cannot identify an individual out of a group of
users in the same room. Thus accurate per-person tracking is {\bf only} feasible
when the number of users is small. }
\para{Monitoring anchor status.} The sniffer also monitors each
(static) anchor's RSS (see \S\ref{sec:locate_static}). Upon detecting a
considerable RSS change for an
anchor (\shepherd{which indicates potential relocation}), the attacker either removes it from the anchor list or runs
bootstrapping again to relocate anchors and recompute its $\sigma_{p}$.
\para{Impact of sniffer placement.} The sniffer should be placed where
it can capture aCSI signals from the detected anchors, while avoiding
being too close to the anchors or at busy places with pedestrians frequently
passing by. While one could further optimize the sniffer location, our current
design randomly chooses a location that can capture signals from all
the anchors.
\subsection{Bootstrapping: Locating Anchors}
\label{sec:locate_static}
During bootstrapping, the attacker uses the passive sniffer to
identify and localize {\em static} anchors inside the target property.
There are many device localization proposals, but since the sniffer
stays passive and only has
a single antenna, we choose an RSS-based method~\cite{liqun14mobicom,zhijing18hotmobile}. With a brief walk outside of the target's home/office, the
adversary uses the sniffer to measure RSS of potential anchors along the
trajectory. These spatial RSS values and the trajectory (recorded by
the smartphone's IMU sensors) are fed into a log distance path loss
model~\cite{pathlossmodel} to estimate the transmitter location.
Each
transmitter located inside the target scene area is added to the anchor
list.
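A minimal sketch of this model-fitting step is shown below; it fits the
log-distance path loss model $RSS(d) = P_0 - 10\,n\log_{10}(d)$ (with a 1m
reference distance) over the unknown anchor position, reference power $P_0$,
and path-loss exponent $n$. The use of SciPy's least-squares solver is our
choice for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def localize_anchor(points, rss):
    # points: (M, 2) sniffer positions along the walking trajectory (from IMU)
    # rss:    (M,)   RSS values measured for one candidate anchor
    points = np.asarray(points, dtype=float)
    rss = np.asarray(rss, dtype=float)

    def residuals(theta):
        x, y, p0, n = theta
        d = np.maximum(np.hypot(points[:, 0] - x, points[:, 1] - y), 0.1)
        return p0 - 10.0 * n * np.log10(d) - rss

    x0 = [points[:, 0].mean(), points[:, 1].mean(), rss.max(), 2.5]
    fit = least_squares(residuals, x0)
    mse = float(np.mean(fit.fun ** 2))
    return (fit.x[0], fit.x[1]), mse    # estimated location and fitting MSE
\end{verbatim}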
\para{Why RSS but not aCSI?} The localization uses RSS
rather than aCSI, fCSI or AoA~\cite{multipathtriang18mobisys,
mostofi18ipsn} for two reasons.
{\em First}, our attack sniffer only has one antenna, and cannot
estimate fCSI accurately due to lack of synchronization with
the transmitter. Recent work~\cite{mostofi18ipsn} estimates AoA
from aCSI, but only if the sniffer has an antenna array and is in
complete LoS to the targets, {\em i.e.\/} no wall.
{\em Second}, as shown in \S\ref{sec:locate_mobile}, aCSI is sensitive to nearby target
movements. As the adversary has no knowledge of the
target status during bootstrapping, it cannot rely on aCSI for localization.
In comparison, RSS is much more robust against
target movements, thus a reliable input for localization under the
adversarial scenario.
\shepherd{
\para{Identifying static anchors.}
}
RSS of a static transmitter,
when captured by a static sniffer, should stay relatively
stable, while those of moving ones will fluctuate over time.
Thus before running spatial RSS measurements, the attacker will
keep the sniffer static and measure the per-device RSS standard
deviation ($\sigma_{RSS}$) for a period of time
({\em e.g.\/} 60s). Devices with large $\sigma_{RSS}$ ($>$2.7dB in
our work) are not used as
anchors. This is repeated during
the continuous sensing phase (see
\S\ref{sec:locate_mobile}) to detect relocation of any anchor device.
A complementary method is to infer the device type (and brand name) from the Organizationally Unique Identifier (OUI) field of
the MAC address~\cite{privacy14iccst} and ignore movable
devices like smartphones, wearables, laptops, and camera robots.
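A minimal sketch of this filtering step (the OUI prefixes below are
placeholders, not a real vendor list):
\begin{verbatim}
import numpy as np

MOBILE_OUI_PREFIXES = {"00:11:22", "aa:bb:cc"}   # placeholder OUIs only

def is_static_anchor(rss_trace, mac, sigma_thresh=2.7):
    # rss_trace: per-packet RSS (dB) of one device over a ~60s static window.
    # Keep the device as an anchor only if its RSS standard deviation stays
    # below the 2.7 dB threshold and its OUI does not suggest a mobile device.
    if np.std(np.asarray(rss_trace, dtype=float)) > sigma_thresh:
        return False
    return mac.lower()[:8] not in MOBILE_OUI_PREFIXES
\end{verbatim}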
\para{Finding high-quality RSS measurements.} The
localization accuracy depends heavily on the ``quality'' of RSS
measurements. Instead of searching for a new localization design, we
apply a statistical data sifting algorithm to identify proper RSS
data samples as input to the localization algorithm.
The attacker can filter out ``bad'' measurements using
de-noising methods, {\em e.g.}, Kalman
filter~\cite{kalmanfilter}, wavelet filter~\cite{wavelet16mobihoc} and
feature
clustering~\cite{zhijing18hotmobile}. We find that these are
insufficient under our attack scenarios because the propagation
environment is highly complex and unknown to the adversary, making it hard to distinguish between noise and natural propagation
effects. Similarly, features used by~\cite{zhijing18hotmobile} to identify bad
measurement rounds are too coarse-grained to
effectively control localization
accuracy. In fact, our experiments in \S\ref{sec:eval} show that
with~\cite{zhijing18hotmobile}, more than 50\% of the ``good'' measurement
rounds it identifies locate the device to a wrong room.
Instead, we propose {\em consistency-based data sifting} to
identify proper data samples that will be
used for model fitting.
Our
hypothesis is that, by the law of large numbers~\cite{lawlargenumber},
{\em consistent} fitting results from many random samplings of RSS measurements, if they exist, can reveal true signal
propagation behaviors and produce high-fidelity localization results.
\begin{figure}[t!]
\centering
\includegraphics[width=0.38\textwidth]{figs/xiaomi_consistent}
\caption{Improving accuracy of anchor localization using our proposed
consistency-based data sifting. Each red dot is the anchor location
estimated from a Monte Carlo sample of RSS measurements. The
rectangle marks the actual room the anchor resides. In this example,
a dominant cluster is present and is used to estimate the final
anchor location.}
\label{fig:cluster}
\end{figure}
Given a round of measurements $\mathbb{R}$, we apply the Monte Carlo
method~\cite{montecarlo} to randomly sample a subset (80\%) of $\mathbb{R}$ as the input to the model fitting. This is
repeated $N=1000$ times, producing $N$ localization
results. We can find natural clusters among these $N$ results
from their locations and fitting mean square errors (MSE).
If they form many small
clusters with different room placements, then $\mathbb{R}$ is
inconsistent and cannot be used for localization.
If a dominant cluster exists and its averaged MSE is
less than those of the other clusters, then $\mathbb{R}$ can be used for
localization. An example is shown in Figure~\ref{fig:cluster}, which produces a single,
dominant cluster, while the rest are widely
scattered. When such a dominant cluster is present, we can estimate the anchor
room location by aggregating the location data points of the cluster.
In the example of Figure~\ref{fig:cluster}, all the data
points are located in the top center of a single room. When the data
points belong to different rooms, we choose the room with the most
data points.
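A minimal sketch of this sifting procedure is shown below. It reuses the
\texttt{localize\_anchor} routine sketched earlier in this subsection; the
choice of DBSCAN for clustering and the ``more than half of the trials''
notion of a dominant cluster are our illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN   # clustering choice is illustrative

def sift_and_localize(points, rss, n_trials=1000, sample_ratio=0.8, eps=1.0):
    # Monte Carlo consistency check over one round R of RSS measurements.
    points = np.asarray(points, dtype=float)
    rss = np.asarray(rss, dtype=float)
    rng = np.random.default_rng(0)
    locs, errs = [], []
    for _ in range(n_trials):
        idx = rng.choice(len(rss), size=int(sample_ratio * len(rss)),
                         replace=False)
        (x, y), mse = localize_anchor(points[idx], rss[idx])
        locs.append((x, y)); errs.append(mse)
    locs, errs = np.array(locs), np.array(errs)
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(locs)
    clusters = [c for c in set(labels) if c != -1]
    if not clusters:
        return None                       # round R is inconsistent
    sizes = {c: int(np.sum(labels == c)) for c in clusters}
    best = max(sizes, key=sizes.get)
    dominant = sizes[best] > 0.5 * n_trials
    low_mse = all(errs[labels == best].mean() <= errs[labels == c].mean()
                  for c in clusters)
    if dominant and low_mse:
        return locs[labels == best].mean(axis=0)   # use for room placement
    return None                           # round R is inconsistent
\end{verbatim}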
When multiple rounds of RSS measurements are available, the attacker can apply a consistency
check -- if a localization
result is consistent across multiple rounds, then it is a confident
result. Across our experiments, we find that consistency check across 4 rounds of measurements is
sufficient to achieve a room placement accuracy of 92.6\%.
\para{Floor-level signal isolation.}
\label{subsec:floordetection}
When the target property has multiple floors, the attacker needs to
localize wireless anchors to a particular floor during bootstrapping.
This is easily achieved using coarse angle of arrival (AoA) estimates
captured by the smartphone with a simple cone cover to focus signals from a
particular AoA. The received RSS from each anchor can be combined with the
phone angle (via the built-in gyroscope) to localize each anchor to a floor.
\section{Smartphone Implementation}
\label{sec:imple}
We prototype our attacker system using a commodity smartphone as the sniffer.
We implement the bootstrapping and continuous sensing modules each as
an Android app, and deploy and experiment using two
versions
of Android phones, Nexus 5 and Nexus 6. Both smartphones are equipped with the Broadcom WiFi
chipset. For spatial RSS measurements (during bootstrapping), we use the built-in
IMU sensors (accelerometer and gyroscope) to detect user strides
and build the walking trajectory. \shepherd{The key system parameters are listed in
Table~\ref{tbl:parameters}. }
\para{Enabling continuous, passive sniffing of aCSI.} Previously, aCSI could only be
captured when the receiver actively communicated with the target
transmitter~\cite{Halperin_csitool}. Recent
work~\cite{shadowwifi18mobisys} produces a firmware (Nexmon) that
enables passive\footnote{Passive sniffing means
that the sniffer does not need to communicate with the target
transmitter, thus remains completely undetectable.} sniffing, but only on a single customized transmitter at very low
rates.
For our attack, we made a series of changes to the Nexmon firmware, so
that the sniffer can run continuous passive sniffing and capture aCSI
from multiple commodity WiFi devices simultaneously. In particular,
we made changes to hardware buffer management to resolve the issue of
buffer overflow facing the original Nexmon firmware.
One remaining artifact is that the firmware only reports aCSI
at a limited speed, up to 8--11 packets per second. To save energy,
we subsample sniffed
packets based on this rate limit. Despite this artifact,
our prototype sniffer is able to capture
sufficient aCSI samples to successfully launch the attack.
\para{Computation and energy cost.} One strength of our attack is its
simplicity. For our current smartphone prototype, the bootstrapping
app runs 1000 rounds of Monte Carlo sampling and model fitting, which
finishes in less than 25s per anchor. It takes less than 1s to compute
average aCSI standard deviation.
The app consumes 4.18 watts during bootstrapping
and 2.1 watts during continuous sensing.
For Nexus 5 (with a built-in 2300mAh battery), this enables 4.1 hours of continuous
sensing.
Currently our app does not optimize for energy efficiency, which
could be improved to extend sensing duration.
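As a rough sanity check on this figure (assuming a nominal battery voltage of
3.8V, which is our assumption rather than a measured value):
$2300\,mAh \times 3.8\,V \approx 8.7\,Wh$, and $8.7\,Wh \,/\, 2.1\,W \approx 4.2$
hours, consistent with the observed sensing duration.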
\begin{table}[t]
\centering
\resizebox{0.41\textwidth}{!}{
\begin{tabular}{|l|l|}
\hline
\textbf{Parameters} & \textbf{Value} \\ \hline
MAD conservative factor $\lambda$ & 11 \\ \hline
Threshold of $\sigma_{RSS}$ for static anchors & 2.7 dB \\ \hline
Ratio of Monte Carlo sampling size & 80\% \\ \hline
Number of Monte Carlo sampling rounds ($N$) & 1000 \\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{\shepherd{Attack parameters used in our experiments.}}
\label{tbl:parameters}
\end{table}
\begin{table}[t]
\centering
\resizebox{0.32\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Sniffer & Test & \# of & Mean Room \\
Path & Scene & Rooms & Size ($m^2$) \\\hline
\multirow{9}{*}{\begin{tabular}[c]{@{}c@{}}Indoor\\Hallway\end{tabular}}
& 1 & 6 & 14.19 \\ \cline{2-4}
& 2 & 7 & 14.60 \\ \cline{2-4}
& 3 & 8 & 13.65 \\ \cline{2-4}
& 4 & 3 & 14.50 \\ \cline{2-4}
& 5 & 3 & 9.51 \\ \cline{2-4}
& 6 & 6 & 14.21 \\ \cline{2-4}
& 7 & 5 & 16.75 \\ \cline{2-4}
& 8 & 4 & 44.39 \\ \cline{2-4}
& 9 & 2 & 69.83 \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Outdoor\\Sidewalk\end{tabular}}
& 10 & 2 & 47.20 \\ \cline{2-4}
& 11 & 4 & 12.99 \\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{Test scene configuration.}
\label{tbl:attack_environments}
\end{table}
\section{Evaluation}
\label{sec:eval}
We evaluate our attack using experiments in typical office buildings and
apartments.
We describe our experiment setup and test scenes,
present our evaluation on individual attack phases (bootstrapping and
continuous sensing), followed by an end-to-end attack evaluation.
\begin{figure*}[t!]
\centering
\mbox{
\subfigure[Scene 4]{
\includegraphics[width=0.30\textwidth]{figs/4_yanzi_home}
}
\subfigure[Scene 6]{
\includegraphics[width=0.24\textwidth]{figs/6_office}
}
\subfigure[Scene 8]{
\includegraphics[width=0.32\textwidth]{figs/8_heather_place}
}
}
\caption{Sample test scene floorplans, derived from the real estate websites
or emergency exit maps, where shaded regions are the target
property. We also show an instance of anchor
placements where $\bigcirc$s are the anchor devices,
and $\bigtriangleup$ is the
static attack sniffer.}
\label{fig:attack_floorplan_example}
\end{figure*}
\begin{table*}[t]
\centering
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multirow{2}{*}{Device Type} & \multirow{2}{*}{Exact Product} & \yzedit{Mean} Packet Per
& \yzedit{Mean} Packet Per \\
& & & Second (pps), Idle & Second, Active \\ \hline
\multirow{6}{*}{Static}
& Cameras (w/o Motion Detection) & AHD Security Camera & N/A & 124 \\ \cline{2-5}
& Cameras (w/ Motion Detection) & Amcrest/Xiaomi IP Camera & $\ge$0.5 & 108 \\ \cline{2-5}
& Home Voice Assistants & Amazon Echo, Google Home & 2 & 16 \\ \cline{2-5}
& Smart TV (\& Sticks) & Chromecast, Apple TV, Roku & 6.64 & 200 \\ \cline{2-5}
& Smart Switches & LifeSmart Plug & $\ge$2.44 & $\ge$3.33 \\ \cline{2-5}
& WiFi Router & Xiaomi/Cisco/Asus Routers & 28.6 & 257 \\ \hline
\multirow{2}{*}{Mobile}
& Surveillance Robot & iPATROL Riley Robot Camera & N/A & 124 \\ \cline{2-5}
& Smartphones & Samsung/Google/Apple Phones & $\ge$0.5 & $\ge$6 \\ \cline{2-5}
\hline
\end{tabular}
}
\vspace{0.04in}
\caption{Summary of WiFi devices used in our experiments. Note that our attack
will detect and recognize static anchors and only use them to
detect/localize human motion.}
\label{tbl:devices_summary}
\end{table*}
\vspace{-0.05in}
\subsection{Experiment Setup}
\label{sec:setup}
We experiment at
11 typical offices and apartments that are
accessible to us. The owners of each test scene volunteered for our
experiments. The test scenes are of different sizes and configurations, and
have different wall materials. The walking path available to the
adversary also differs across experiments, from indoor corridors outside
the apartment to outdoor
pathways. Table~\ref{tbl:attack_environments} lists the test scene
configuration while Figure~\ref{fig:attack_floorplan_example} shows
floor plan examples \shepherd{derived from publicly available
data. Across all experiments, attack parameters remain
unchanged (as listed in Table~\ref{tbl:parameters}). }
Inside each test scene, we either reuse existing WiFi devices or deploy our
own WiFi devices to emulate smart homes and offices.
We use popular commodity products for smart
offices and homes, {\em e.g.}, wireless
security cameras, voice assistants, WiFi routers, and smart
switches. In total, we have 31 WiFi devices, including 6
security cameras. These devices are naturally placed at locations where they are designed
to be: security cameras at room corners, smart switches on the wall
outlets, and WiFi routers in the center of the room
for coverage. Our experiments use the 2.4GHz WiFi band due to its dominant coverage.
We also test 5GHz WiFi and do not observe notable differences
except for its shorter coverage range.
Table~\ref{tbl:devices_summary} summarizes these devices
and their traffic patterns during idle and active periods.
The packet rate varies from 0.5 packet per second (pps) to
more than 100 pps. Even when idle, they still periodically transmit
packets. It should be noted that, to prevent attackers from inferring
user presence by simply counting a device's packet rate ({\em e.g.}, an Amazon
Echo sending more packets suggests that a human user is around),
devices like home voice assistants, smart TVs, and motion-triggered
cameras will
need to send cover traffic when idle, and the
corresponding idle packet rate will be much higher than the listed
number.
\para{Bootstrapping.} To benchmark our bootstrapping design, we
collect, for each test scene, 50 walking measurements,
each of 25--50 meters in length and 0.5--2 minutes in time.
We also change anchor placements and repeat the experiments. In
total, we collect more than 3000 RSS measurement traces, with more
than 121,000 location-RSS tuples.
\para{Continuous sensing.} We place a static sniffer behind plants or at the corners (on the ground) outside of
the target building, within $2m$ of the building wall.
We ask volunteers
to carry out normal activities in each test scene and
collect more than 41hrs of aCSI entries (7.8hrs of human presence, labeled).
The volunteers
are aware of the attack goals but not the techniques.
\subsection{Evaluation of Continuous Human Sensing}
\label{subsec:sensingeval}
We start by evaluating the {\em continuous sensing} component of our
attack. Here we assume that the attacker knows the actual room where
each anchor resides. By default, the attacker only uses anchors whose packet rate
is $\geq$ 11pps.
\para{Performance metrics.} Our goal is to evaluate whether the
continuous sensing component is able to correctly detect user
presence/motion in each room. We divide time into 5s slots, and
run continuous sensing to
estimate room occupancy \shepherd{in each slot} based on aCSI variance values. We compare
these estimates to ground truth values, and compute the detection
rate and false positive rate as follows.
\begin{packed_itemize}
\item {\em Detection rate} (DR) measures the probability of the attack reporting a room
as being occupied when it is actually occupied, \shepherd{across all the slots.}
\item {\em False positive rate} (FP) measures the probability that a
room is not actually occupied
when our attack reports it as occupied.
\end{packed_itemize}
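A minimal sketch of this slot-level computation, given per-slot predictions
and ground truth for one room:
\begin{verbatim}
def slot_metrics(predicted, truth):
    # predicted / truth: booleans per 5s slot ("room reported/actually occupied")
    tp = sum(p and t for p, t in zip(predicted, truth))
    fp = sum(p and not t for p, t in zip(predicted, truth))
    occupied = sum(truth)      # slots in which the room is actually occupied
    reported = sum(predicted)  # slots in which the attack reports occupancy
    dr = tp / occupied if occupied else float("nan")
    fpr = fp / reported if reported else 0.0
    return dr, fpr
\end{verbatim}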
Under our adversarial scenario, having a high detection rate is
more important since the attacker does not want to miss the presence of any targets.
\para{Human sensing accuracy.}
Table~\ref{table:userdetection} lists the detection rate and false
positive rate when we vary
the number of anchors per room. We see that the detection rate
scales with the number of anchors per room, reaching 86.8\%, 95.03\%,
99.85\%, and 99.988\% with 1, 2, 3, and 4 anchors per room, respectively. This
trend is as expected since having more anchors increases the chance
that a user movement triggers at least one anchor. Furthermore, the
false positive rate is low ($<$3\%) with a single anchor per
room and increases slightly to 6.9\% if the attacker wants to leverage
all 4 anchors. Across our experiments, the false
positives mainly come from impulse noise in the aCSI reported by
the firmware. Thus having more anchors will lead to more false
positives.
\begin{table}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cc|c|c|c|c|}
\cline{3-6}
& & \multicolumn{4}{c|}{\# of WiFi Devices Per Room} \\ \cline{3-6}
& & 1
& 2 & 3
& 4 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Ours}} & DR & 86.824\% & 95.034\% & 99.854\% & 99.988\% \\ \cline{2-6}
\multicolumn{1}{|c|}{} & FP & 2.927\% & 4.082\% & 5.305\% & 6.935\% \\ \hline \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{LiFS }} & DR & 20.536\% & 37.040\% & 50.262\% & 60.821\% \\ \cline{2-6}
\multicolumn{1}{|c|}{} & FP & 4.622\% & 4.961\% & 5.395\% & 5.886\% \\ \hline \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{LiFS}} & DR & 43.568\% & 68.315\% & 82.289\% & 90.149\% \\ \cline{2-6}
\multicolumn{1}{|c|}{(unrealistic)} & FP&4.622\%& 5.364\%&6.443\%&7.644\%\\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{Detection rate (DR) and false positive rate (FP) of continuous
human sensing, assuming accurate room placement of anchors. We
compare our design to the state-of-the-art human sensing system (LiFS).}
\label{table:userdetection}
\vspace{-0.3in}
\end{table}
We also compare our system to the current state of the art of passive
human sensing (LiFS~\cite{lifs16mobicom}). For fair comparison, we
add \yzedit{wavelet denoising} to LiFS, confirming that it improves
the sensing performance. Since LiFS requires each
anchor's precise physical location in the room (which is not available to our
attacker), we first use the room center as the input to LiFS, mapping to
1--2m localization error. LiFS also requires knowledge of the aCSI
value when no user is present, which we estimate using the same
MAD-based method. Results in Table~\ref{table:userdetection} show that
even with four anchors in the room, LiFS can only achieve a detection rate of
60.82\%. Here missed detections happen when LiFS locates the human
presence to a wrong room. We also run another version
of LiFS that is unrealistic under our attack scenario, where each
anchor's physical location error is random but bounded by 50cm (and without any room
placement error). In this case,
its detection rate improves, but is still far
from that of our attack, especially with a smaller number of anchors per room.
\para{Tracking responsiveness.} We also examine whether our
attack is able to track human movements in time. We start from an
example scenario where a user moves
back and forth between two connecting rooms, {\em i.e.\/} she walks
in one direction for 18s, turns around and walks in the opposite
direction, and repeats. Figure~\ref{fig:csi_impact_periodic} shows the
detected user occupancy of the two rooms (each with two anchors). We
see that our detection is highly responsive to rapid human
movements.
We also consider all the aCSI traces collected across our test scenes and examine the duration of individual
movement events estimated by the attacker. We compare these estimations to the
ground truth. Figure~\ref{fig:estduration} plots the \shepherd{cumulative distribution function (CDF)} of the duration
estimation error, where for 80\% of the cases, the error is
less than 16 seconds.
\begin{figure}[t]
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figs/csi_periodic_trajectory2}
\vspace{-0.1in}
\caption{The attack sniffer can track fast user motion between rooms.}
\label{fig:csi_impact_periodic}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figs/csi_e2e_timeest}
\vspace{-0.1in}
\caption{Error in motion duration estimation is small.}
\label{fig:estduration}
\end{minipage}
\vspace{-0.2in}
\end{figure}
\para{Impact of anchor packet rate.} So far, our results assume that
anchors send packets at no less than 11pps\footnote{ As discussed in
\S\ref{sec:imple}, the sniffer's firmware reports CSI at an equivalent packet rate of
8--11pps.}.
To study the impact of anchor packet
rate, we take the aCSI
traces of WiFi security cameras (w/o motion detection) and sub-sample them to
produce desired packet rates.
Our experiments show that for a single anchor per room, the detection rate is
86.824\% at its full
rate (an equivalent aCSI rate of 11pps), and
reduces to 85.49\% at 2pps, and 69.29\% at 1pps.
The false positive rate remains $<$5\%. This means that each low-rate
anchor can still ``help'' the attacker
identify and locate targets. For a room with multiple low-rate
anchors, the attacker will take the {\em union}
of their detection results.
\shepherd{
\para{Impact of interference.}
During all experiments, we observe minimal impact on attack performance from interference
by other WiFi transmissions out of our control or access.
In the presence of strong interference, anchor packet rates will drop
(due to CSMA contention) and thus human detection rate will drop
as discussed earlier. }
\para{Non-human sources of motion.} Smart homes and offices often
have equipment that create motion even in the absence of human users. One
relevant question is whether these machines will be detected by
the attack as human movement, leading to false positives. We experiment
with a set of moving devices commonly seen in homes and offices, as well as
pets, {\em e.g.} cats and dogs (see Table~\ref{table:machine}).
\shepherd{For example, robotic vacuums are placed
on the ground level and thus have minimal impact on the sniffed signals.}
The only
device to affect $\overline{\sigma_{aCSI}}$ in our tests is an oscillating
fan. Yet its motion is highly periodic and consistent, making it easy to distinguish as
non-human. We note that certain cats and dogs can also affect
$\overline{\sigma_{aCSI}}$ with their movement, and their movement patterns
can be hard to distinguish from human motion. Overall, our experiments show
that the attack can eliminate all non-human sources of motion, except for pets.
\begin{table}[t]
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Motion source & \begin{tabular}[c]{@{}c@{}}Impact on \\
$\overline{\sigma_{aCSI}}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distinguishable \\from human motion?\end{tabular} \\\hline
Server internal cooling fan & No & - \\\hline
Standing fan & No & - \\\hline
Oscillating fan & Yes & Yes \\\hline
Robot vacuum & No$^*$ & No \\\hline
Cats or dogs & Yes & No \\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{Impact of sources of non-human motion on our attack. (*) A robot
vacuum only affects $\overline{\sigma_{aCSI}}$
of an anchor in close proximity when the anchor is placed on the
floor. } \label{table:machine}
\vspace{-0.2in}
\end{table}
\vspace{-0.1in}
\subsection{Evaluation of Bootstrapping}
We evaluate the bootstrapping phase (where the attacker detects and
locates anchors) via two metrics: {\em absolute localization
error} which is the physical distance between the ground-truth location
and the attacker-estimated location, and {\em room placement
accuracy} which is the probability of placing an anchor to its
exact room.
Figure~\ref{fig:roomestimation} plots, for each test scene, the quantile distribution of the
absolute localization error and
the room placement accuracy. We compare three systems: the model-fitting algorithm that uses all the measurements,
the feature-clustering based data filtering proposed
by~\cite{zhijing18hotmobile}, and our consistency-based data sifting
method.
\para{Gain of consistency-based data sifting.} The results show that our proposed
data sifting method can largely improve anchor localization accuracy
compared to the two existing approaches. Our method largely reduces
the localization error tail, and for more than 90\% of the
cases, the attacker places the anchor at the right room. Our method
outperforms~\cite{zhijing18hotmobile} by using fine-grained, scene-specific features to filter data.
An interesting observation is that in scenes 8--10, our method
faces a similar (or even larger) absolute localization error than
feature clustering but produces
higher room placement accuracy. This is because our design directly
analyzes the consistency of room placements, rather than raw localization errors.
Smaller raw localization error does not always translate into higher room
placement accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{figs/error_per_envir}
\includegraphics[width=0.42\textwidth]{figs/room_correctness_per_envir_compare_3bars}
\vspace{-0.1in}
\caption{Bootstrapping performance: anchor localization accuracy in terms of
absolute localization error (m) and room placement accuracy,
per test scene.}
\label{fig:roomestimation}
\end{figure}
\para{Impact of anchor placements.} As expected, it is relatively
harder to accurately locate anchors placed at room boundaries, {\em
e.g.}, those plugged into wall outlets. In many cases, these boundary anchors create a dominant Monte Carlo cluster, but the data points in the
cluster map to either of the two neighboring rooms. Our current design
simply chooses the room with more data points, which
could lead to room placement errors.
When the number of anchors is sufficiently large, the attacker can
minimize the impact of such boundary anchors in two ways. First, the
attacker can either use these boundary anchors
``with caution'' or not use them at all. Second, the attacker can
leverage past human sensing results to discover any strong correlation
between anchors and adjust their room placements. We leave these to future work.
\para{Impact of anchor packet rate.} The accuracy of
our proposed anchor
localization method is relatively insensitive to anchor packet
rate. This is likely because RSS (of static anchors) is relatively
stable over time. As long as the
measurement trace covers $>$20m in distance and the RSS values
are between -75dB and -30dB without strong bias,
we observe little difference in
localization (and room placement) accuracy.
\subsection{End-to-End Attack Evaluation}
Finally, we evaluate the
end-to-end performance of our attack, combining the bootstrapping and
continuous sensing phases. As in \S\ref{subsec:sensingeval}, we
consider
the detection rate and false positive rate for the task of detecting and localizing human users to
their individual rooms.
The results will include the impact of any misplaced anchors
during bootstrapping, or errors in detecting/localizing users during continuous
sensing.
\begin{table}[t]
\centering
\resizebox{1\columnwidth}{!}{
\begin{tabular}{cc|c|c|c|c|}
\cline{3-6}
& & \multicolumn{4}{c|}{\# of WiFi Devices Per Room} \\ \cline{3-6}
& & 1
& 2 & 3
& 4 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Ours}} & DR & 80.603\% & 94.210\% & 98.780\% & 99.725\% \\ \cline{2-6}
\multicolumn{1}{|c|}{} & FP & 3.595\% &
5.962\%
& 8.386\%
& 10.719\% \\ \hline
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{LiFS}} & DR & 14.153\% & 26.381\% & 36.954\% & 46.033\% \\ \cline{2-6}
\multicolumn{1}{|c|}{} & FP & 14.024\%
& 14.077\% &
14.493\% & 15.064\% \\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{End-to-end performance of our attack vs. LiFS, in terms of
detection rate (DR) and false positive rate (FP).}
\label{table:e2e}
\vspace{-0.2in}
\end{table}
Table~\ref{table:e2e} lists the detection rate and false positive rate for our attack design and those achieved via LiFS~\cite{lifs16mobicom}. We
also vary the number of WiFi anchor devices in each room.
Compared to the results in Table~\ref{table:userdetection} assuming accurate anchor room
placement, the end-to-end attack sees a minor drop in both recall
and precision, especially with more anchors per room. Despite using a
passive, minimally equipped attacker device, our attack still
achieves high human sensing accuracy, {\em e.g.\/}, 99.7\% detection
rate at 10.71\% false positive rate.
The impact of anchor localization errors is much more
visible on LiFS, whose detection rate drops to 36.954\% and 46.033\% even with
3 and 4 anchors in each room, respectively. Overall, we see that
while both approaches use the same aCSI values per
anchor, our proposed passive human sensing largely outperforms
LiFS by not requiring precise anchor locations to model signals on the direct propagation path.
\section{Defenses}
\label{sec:defense}
We now explore robust
defenses against our proposed attack and other passive sensing
attacks. Our design insight is that attack
effectiveness depends heavily on the quantity and quality
of the
WiFi signals captured by the sniffer. Thus a defense reducing the amount
of WiFi signal leakage to external sniffers or adding inconsistency to WiFi
signals could render the attack ineffective.
\subsection{MAC Randomization}
The first solution that comes
to mind is
{\em MAC address randomization}, a well-known method for protecting mobile devices
against tracking. Since the attack sniffer uses MAC address to isolate
signals of anchors, MAC randomization can disrupt both
bootstrapping and continuous sensing phases. However, recent
work has shown that MAC randomization is disabled on most
devices (an adoption rate of $<$3\% so far)~\cite{randommacspread} and can be
easily broken to reveal the real MAC
address~\cite{macrandomization,randommac100perc}. We note that Android
9.0 Pie switches to per-network MAC
randomization~\cite{randommacAndroidP}, which does not apply any MAC
randomization to static WiFi devices. Thus MAC randomization is not a
plausible defense against our attack.
\subsection{Geofencing WiFi Signals}
Geofencing bounds signal propagation to reduce or eliminate WiFi signals accessible to the
adversary. In our attack, when the portion of the walking trace that
receives anchor signals shrinks from 25--50 meters to 10 meters or less, the anchor localization error increases
significantly: raw
errors more than double, and anchor room placement accuracy drops from 92.6\% to
41.15\%.
Geofencing is also extremely difficult to deploy and configure. The simplest form
is to reduce the anchor's transmit power, which is almost always
undesirable since it degrades connectivity. Another option is to
equip WiFi devices with directional antennas, limiting signal spatial
coverage. This is also undesirable as it requires upgrading to equipment with higher cost and larger
form factors, as well as carefully configuring antenna directionality.
Finally, the extreme solution is to block RF from propagating beyond
property walls by painting these walls with electromagnetic shielding paint.
This is again impractical, since it blocks incoming WiFi/cellular
signals.
If the area accessible to the attacker is limited, a potential solution is to customize WiFi signal
coverage using 3D fabricated
reflectors~\cite{xiongxi17buildsys} or
backscatter arrays~\cite{kyle19nsdi} that create noise towards the
area. Yet both remain open research problems.
\subsection{WiFi Rate Limiting}
While geofencing reduces spatial leakage of WiFi signals, rate limiting
reduces their temporal volume. When anchors transmit fewer packets over time,
the sniffer will not have sufficient data to compute
$\overline{\sigma_{aCSI}}$.
Results in \S\ref{sec:eval} show that reducing anchor packet rates
does lower the detection rate (when using a single anchor) but can be
compensated by aggregating the results of multiple anchors.
In practice, rate limiting is undesirable for
most
network applications. As shown in
Table~\ref{tbl:devices_summary}, many WiFi devices, when idle, already transmit at more than 2pps. It is hard
to rate limit further, rendering the defense ineffective.
\subsection{Signal Obfuscation: Existing Designs}
{\em Signal obfuscation} adds noise to
WiFi signals, so the adversary
cannot accurately localize anchors or detect user motion. Existing
works have proposed both temporal and spatial obfuscations against
RF sensing~\cite{zhijing18hotmobile, phycloak16nsdi}.
In {\em
temporal obfuscation}, WiFi devices (anchors) change transmit
power randomly over time, injecting artificial noise into the
signals seen by the sniffer. Doing so, however, requires upgrading
commodity WiFi devices to equipment with much higher cost
and energy consumption. Also a more resourceful adversary
can counter by deploying an extra static sniffer (during bootstrapping) to infer the injected signal power changes and
remove them from the signal traces, as shown by~\cite{zhijing18hotmobile}.
In {\em spatial obfuscation}, a recent work~\cite{phycloak16nsdi} shows that by deploying a
full-duplex radio near each
anchor $x$, one can obfuscate $x$'s signal phase, RSS, CSI, and Doppler
shift seen by any nearby sniffers with a single antenna. But full-duplex
radios are of high cost, and there is no
commodity product on the market. On the other hand, defending against
our attack only
requires obfuscating the RSS and aCSI collected by the sniffer.
\subsection{Proposed: AP-based Signal Obfuscation}
The above four immediate defenses are either ineffective or
impractical. Instead, we propose
a practical defense where
the WiFi access point (AP) actively injects customized cover traffic for any
associated WiFi device $w$ that is actively transmitting. We design
this defense to produce large ambiguity to the attack in two
steps. {\em First}, our defense adds noise to the attacker's RSS measurements,
so that during
bootstrapping, the attacker will place most of the anchors to the
wrong room and even outside of the property. {\em Second}, our
defense largely reduces (and even removes) the
$\overline{\sigma_{aCSI}}$ gap between no human presence and human
motion, such that the attacker is unable to identify human motion.
\para{AP inserting customized cover signal.} As soon as the AP
detects a transmission from $w$, it estimates $w$'s transmission rate
$T_w$ and injects a cover traffic stream with the rate of $\rho
T_w$, at a randomized
power level and with $w$'s MAC address.
$\rho$ is the injection rate,
a system parameter. Since the AP limits its cover traffic stream
to be proportional to $w$'s throughput, the CSMA protocol will randomly interleave packets
from the two streams together. In the worst case ($\rho T_w$ is at or higher than the
available channel throughput), the cover traffic will reduce $w$'s effective
throughput by a factor of $1+\rho$.
The insertion of ``fake'' packets requires a careful design, so that it
disrupts the attack rather than creating obvious ``anomalies'' or heavily
affecting the WiFi network. In particular, the AP configures the sequence numbers of fake
packets to (partially) interleave with those of real packets, so that the
attacker is unable to separate the two streams based on sequence number and
packet arrival time.
\shepherd{When sending fake packets, the AP's transmit power is
randomized but close to that of $w$, so the mixed traffic follows
natural (and complex) multipath signal variation. This makes it hard
to distinguish real and fake packets from signal strength values alone.}
Finally, this defense can be deployed on today's WiFi
APs that support transmit power adaptation with minor changes. The
major overhead is the extra consumption (a factor of $\rho$) of bandwidth and energy at the AP.
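The following is a minimal, self-contained sketch of how the cover stream for
one protected device $w$ could be planned; the interfaces and the exact way
sequence numbers are interleaved are illustrative simplifications, since a
real deployment would implement this inside the AP's MAC-layer firmware.
\begin{verbatim}
import random

def cover_schedule(real_seq_numbers, real_tx_power_dbm, rho=0.3,
                   jitter_db=10.0):
    # real_seq_numbers: 802.11 sequence numbers of w's packets in this window
    # real_tx_power_dbm: w's nominal transmit power
    # Returns (seq, power) pairs for fake packets sent with w's MAC address:
    # the rate is rho * T_w, sequence numbers fall between w's real ones, and
    # power is randomized around w's level.
    if not real_seq_numbers:
        return []
    rng = random.Random()
    n_fake = max(1, round(rho * len(real_seq_numbers)))
    fakes = []
    for _ in range(n_fake):
        i = (rng.randrange(len(real_seq_numbers) - 1)
             if len(real_seq_numbers) > 1 else 0)
        seq = real_seq_numbers[i] + 1       # interleaved with real packets
        power = real_tx_power_dbm + rng.uniform(-jitter_db / 2, jitter_db / 2)
        fakes.append((seq, power))
    return fakes
\end{verbatim}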
\para{Results: Impact on bootstrapping.} With this defense, the attacker's RSS measurements of anchor $w$ will display
fluctuations, tricking the sniffer into thinking that $w$ is
moving and thus not using it as an anchor. Even if the adversary
assumes $w$ is stationary, the noisy RSS measurements (even after our
data sifting) will lead to
inaccurate room placement.
When evaluating this defense, we consider both our original attacker
(with one smartphone)
and an {\em advanced} attacker who adds an extra stationary sniffer and applies
RSS signal
subtraction to detect and remove any ``injected'' signal
variations~\cite{zhijing18hotmobile}. We configure our defense so that the AP injects cover traffic at rate
$\rho T_w$, with power randomization over a $10dB$ range.
For both
attackers, $\rho$=5\% is sufficient to drop the accuracy of anchor
room placement from 92.6\% (without our defense) to 35.71\%, except for the anchors close
to the AP (in the same room). As we further increase $\rho$, the
attacker will map most of the detected anchors to the AP's room.
\begin{figure}[t]
\centering
\includegraphics[width=0.23\textwidth]{figs/csi_defense_noone_raw}
\includegraphics[width=0.23\textwidth]{figs/csi_defense_moving_raw}
\subfigure[User not present]{
\includegraphics[width=0.225\textwidth]{figs/csi_defense_noone}
\label{fig:defense_noone}
}
\subfigure[User in motion]{
\includegraphics[width=0.225\textwidth]{figs/csi_defense_moving}
\label{fig:defense_moving}
}
\caption{aCSI and $\overline{\sigma_{aCSI}}$ with and without AP
based signal obfuscation. }
\end{figure}
\para{Results: Impact on continuous sensing.} As the attacker sniffer
calculates $\overline{\sigma_{aCSI}}(w)$ on randomly interleaved packets
sent by anchor $w$ and the AP, the value of
$\overline{\sigma_{aCSI}}(w)$ with no human presence will
increase significantly. Figure~\ref{fig:defense_noone} shows a sample
trace of aCSI (of a sub-carrier) and $\overline{\sigma_{aCSI}}$ with
and without the defense. We see that our defense can effectively
elevate the threshold $\sigma_{p}(w)$ for human presence detection.
More importantly, the defense has much less impact on
$\overline{\sigma_{aCSI}}(w)$ when there is actual human movement near the
anchor $w$. The sample traces in Figure~\ref{fig:defense_moving} show
that $\overline{\sigma_{aCSI}}(w)$ actually drops (below the threshold) when using the
proposed defense. In this case, human movement will not
trigger any anchor in proximity, for both the original and the
advanced attackers (who deploy an extra sniffer).
Table~\ref{table:sensing_impact} lists the attack performance with our
proposed defense ($\rho$=30\%) and without any defense. We first
consider the case where the attacker manages to obtain perfect anchor
room placement. In this case, our defense increases the false positive rate from 7.9\% to
48.28\% while dropping the detection rate to 78.776\%. Next, we
consider the end-to-end attack scenario where the attacker performs
both bootstrapping and continuous sensing. Our defense drops the detection rate down to 47.48\% while increasing the
false positive rate to 49.5\%. These results apply to both the original
attacker and the advanced attacker. Such ambiguity renders the attack useless in practice.
\begin{table}[h]
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|c|c|c|c|}
\cline{2-5}
\multirow{2}{*}{}
& \multicolumn{2}{c|}{False positive rate} & \multicolumn{2}{c|}{Detection rate} \\ \cline{2-5}
& No
defense
& AP obf & No defense & AP obf \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}} knowing anchor
\\ room placement\end{tabular}} & 7.935\% & 48.284\% & 99.988\% & 78.776\% \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}end-to-end \\ attack\end{tabular}} & 10.719\% & 49.598\% & 99.725\% & 47.481\% \\ \hline
\end{tabular}
}
\vspace{0.04in}
\caption{The attack performance under AP-based signal
obfuscation (best performance out of the original and the advanced attack with an extra sniffer). }
\label{table:sensing_impact}
\end{table}
\shepherd{
\para{Possible countermeasures.}
To overcome our proposed defense,
the attacker must find ways to distinguish the obfuscation
packets sent by the AP from the original packets sent by an anchor $w$. As
discussed earlier, doing so using packet sequence number and arrival
time is infeasible due to our packet injection method. Doing so at the
network traffic level is also difficult, since packet contents are encrypted, and we can
shape traffic to resist traffic
identification by attackers~\cite{covertraffic}. Finally, it is also difficult to
separate the two streams using physical layer characteristics, because
doing so requires much more sophisticated and bulky hardware.
One option is to analyze per-symbol aCSI/RSS patterns. This is infeasible using commodity
WiFi chips, as they only report per-packet aCSI/RSS values. Another
option is to use a large antenna array (MIMO with at least 4--6 antenna
elements, each separated by 6.25cm) to distinguish signals sent by $w$
from those sent by the AP, since they come from different directions. The
resulting sniffer ($>$31cm in length) would be conspicuous and easily raise suspicion.
}
\section{Related Work}
\label{sec:related}
\spara{Human sensing by snooping signals.} We categorize
existing works into five groups.
\shepherd{
The first group applies {\em traffic analysis} to infer user presence and status in a home/office from
their network traffic~\cite{videosniffing16infocom,privacy14iccst,fanzhang11wisec,dewicam18asiaccs, gtid15fingerprint, homesnitch19wisec,peekaboo18}.
It requires strong knowledge of device
behaviors and can be easily countered by
sending cover traffic, applying encryptions and traffic shaping.
In contrast, our attack remains effective even when all network-level defenses are deployed, as long as WiFi devices still transmit packets.
}
The second group uses ``specialized signals'' such as RFID~\cite{rfid17survey},
visible light~\cite{litell16mobicom,lightsense16mobisys},
and acoustic signals~\cite{cat16mobicom, shyam17ubicomp} that often correlate with
human motion. But existing solutions require control of
transmitters inside or outside of the target property, which is
infeasible under our attack model.
The third group builds {\em fingerprints} of each predefined target location
and/or activity, based on either aCSI~\cite{NandakumarKG14,jianliu14mobicom,passivecsi18NaNA}, CSI~\cite{wisee13mobicom, weiwang15mobicom}, RSS~\cite{widet18mswim,srinivasan08ubicomp,nuzzer13youssef,rfsensing14tomc}, or
raw signals~\cite{motionfi18infocomm}. Since the attacker under our
model has no knowledge of the target users and no access to the target
property, building fingerprints becomes infeasible.
The fourth group uses advanced radio hardware (laptops or USRPs with
antenna arrays or directional antennas) that communicate with the
anchors inside the target property. This allows the sniffer to
measure fine-grained CSI values
(both amplitude and phase)\shepherd{~\cite{freesense16globecom}}, and use
them to calculate AoA and doppler frequency shift (DFS) to detect human
motion~\cite{wideo15nsdi,widar2_18mobisys,weiwang15mobicom,
yousefi2017survey, activitytrain18mobicom, passiveradar12}. Our
attack differs by using a passive sniffer with a single
antenna, which does not
communicate/synchronize with the anchors. In this case, the sniffer
cannot infer CSI phase, AoA or DFS.
The final group detects user motion using passive
sniffers
\shepherd{to collect and analyze physical RF signals}~\cite{banerjee14wisec,passiveradar12, lifs16mobicom}. As
discussed earlier, both~\cite{banerjee14wisec, lifs16mobicom} target
user motion that disturbs the direct propagation path,
requiring precise locations of the anchors. The work in~\cite{passiveradar12} uses multiple sniffers with bulky
directional antennas to compute doppler shift of user motion. The sensing method used by our attack falls into this
category, but targets multipath signal
propagation from each anchor to the sniffer. We design a new
aCSI variance model to reliably detect user motion, eliminating the need
for precise anchor location and antenna array at the sniffer.
\spara{Passive transmitter localization.} Existing works often
leverage bulky receivers with
multiple antennas~\cite{adib2013, spotfi15sigcomm, multipathtriang18mobisys,
mostofi18ipsn,arraytrack13nsdi,vrwifitracking17cvpr} to estimate signal AoA, and apply triangulation across receivers to
derive target location.
Our anchor localization (during bootstrapping) uses a compact smartphone with a single
antenna, and applies passive localization
that fits spatial RSS measurements to a propagation
model~\cite{ariadne06mobisys,zhijing18hotmobile,wigem11conext}. Our key contribution is the data sifting algorithm that
identifies good RSS samples as input to the model fitting.
\spara{Defense against RF sensing.} Existing
works~\cite{carving12asiaccs, phycloak16nsdi, ijam10, waveforming15ccs}
defend against eavesdropping on a transmitter by a jammer
transmitting simultaneously, preventing the attacker from decoding packets
or estimating CSI/AoA.
This requires precise synchronization between the
transmitter and the
jammer~\cite{defense16winet} or a
high-cost full-duplex obfuscator~\cite{phycloak16nsdi}. Our defense
uses AP to insert fake packets (rather than transmitting simultaneously),
which is easy to deploy and effective against
our attack.
\section{Conclusion}
\label{sec:discussion}
\readmore{
Our work shows that the ubiquity of WiFi devices has
an unexpected cost: reflected or blocked RF transmissions leak
information about our location and activities. We describe a set of
low-cost, stealthy reconnaissance attacks that can continuously monitor and
locate human motion inside a private property, turning WiFi
devices inside into motion sensors. All this is done without
compromising the WiFi network, data packets or devices, and only requires a
commodity WiFi sniffer outside of the property. We validate the attack on
a variety of real-world locations, and develop a new effective defense
based on carefully tuned WiFi signal obfuscation by APs.
We believe our work points to the potential of more powerful information
leakage attacks via passive RF reflections. With more sophisticated signal
processing techniques (and potentially new hardware), much more might be
learned from the way ambient RF signals interact with our bodies and
surroundings. We are pursuing this line of research to both better
understand these attacks and to develop defenses to better safeguard
our security and privacy.
}
\section*{Acknowledgment}
We thank our shepherd Earlence Fernandes and
the anonymous reviewers for their feedback. We also thank
Vyas Sekar and Fadel Adib for their feedback on the early version of
this work.
This work is supported in part by the National Science Foundation
grants CNS-1923778 and CNS-1705042.
Any opinions, findings, and conclusions or recommendations
expressed in this material do not necessarily reflect the views of any
funding agencies.
\bibliographystyle{IEEEtranS}
\section{Experiments}\label{sec:exp}
The demonstrations are performed on three different systems with different degrees of available physics knowledge: self-driving vehicles, coupled pendulums, and a subsystem of the US Illinois climate. All the data we use in the experiments are publicly available. For the code, we use the Python API of the TensorFlow framework \cite{abadi2016tensorflow} and the Adam optimizer \cite{kingma2014adam} for training. All of our code is available online at \textcolor[rgb]{1.00,0.00,1.00}{\url{https://github.com/ymao578/SP-AI}}.
\subsection{Autonomous Vehicles}
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.45\textwidth]{aoo.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Vehicle's driving environment.}
\vspace{-0.5cm}
\label{env}
\end{wrapfigure}
This experiment demonstrates two functions: i) learning the vehicle's conjunctive lateral and longitudinal dynamics via Phy-Taylor, and ii) safe velocity regulation via the self-correcting Phy-Taylor. The vehicle's driving environment is shown in Figure~\ref{env}, operating in the AutoRally platform \cite{goldfain2019autorally}.
\subsubsection{Vehicle-Dynamics Learning} \label{expppgf}
We first leverage our available physics knowledge to identify the known model parameters for physics-guided NN editing. According to Newton's second law for motion along the longitudinal and lateral axes, we have the governing equations \cite{rajamani2011vehicle}:
\begin{align}
\bar{m}\ddot {\mathrm{p}} = {F_{\mathrm{p}f}} + {F_{\mathrm{p}r}} - {F_{\text{aero}}} - {R_{\mathrm{p}f}} - {R_{\mathrm{p}r}}, ~~~~~~~~~~~~~\bar{m} (\ddot {\mathrm{y}} + \dot{\psi} {v_{\mathrm{p}}}) = {F_{\mathrm{y}f}} + {F_{\mathrm{y}r}}, \nonumber
\end{align}
where $\mathrm{p}$ is the longitudinal position, $\mathrm{y}$ is the lateral position, $\psi$ is the vehicle yaw angle, $\bar{m}$ is the vehicle mass, $v_{\mathrm{p}} \triangleq \dot {\mathrm{p}}$
is the longitudinal velocity, ${F_{\mathrm{p}f}}$ and ${F_{\mathrm{p}r}}$ denote the longitudinal tire forces at the front and rear tires, respectively, ${R_{\mathrm{p}f}}$ and ${R_{\mathrm{p}r}}$ denote the rolling resistance at the front and rear tires, respectively, ${F_{\text{aero}}}$ represents the longitudinal aerodynamic drag force, ${F_{\mathrm{y}f}}$ and ${F_{\mathrm{y}r}}$ are the lateral tire forces of the front and rear wheels, respectively. With the notations of lateral velocity $v_{\mathrm{y}} \triangleq \dot {\mathrm{y}}$ and yaw velocity $v_{\psi} \triangleq \dot{\psi}$, the following state-space model \cite{rajamani2011vehicle} is derived from the governing equations above, without imposing additional assumptions.
\begin{align}
\frac{\mathrm{d}}{{\mathrm{d}t}} \left[ \begin{gathered}
\mathrm{p} \hfill \\
\mathrm{y} \hfill \\
\psi \hfill \\
{v_{\mathrm{p}}} \hfill \\
{v_{\mathrm{y}}} \hfill \\
{v_\psi } \hfill \\
\end{gathered} \right] = \left[ {\begin{array}{*{20}{c}}
0&0&0&1&0&0 \\
0&0&0&0&1&0 \\
0&0&0&0&0&1 \\
0&0&0&*&0&0 \\
0&0&0&0&*&* \\
0&0&0&0&*&*
\end{array}} \right]\underbrace{\left[ \begin{gathered}
\mathrm{p} \hfill \\
\mathrm{y} \hfill \\
\psi \hfill \\
{v_{\mathrm{p}}} \hfill \\
{v_{\mathrm{y}}} \hfill \\
{v_\psi } \hfill \\
\end{gathered} \right]}_{\triangleq {\mathbf{x}}} + \left[ \begin{gathered}
0 \hfill \\
0 \hfill \\
0 \hfill \\
* \hfill \\
0 \hfill \\
0 \hfill \\
\end{gathered} \right]\theta + \left[ \begin{gathered}
0 \hfill \\
0 \hfill \\
0 \hfill \\
0 \hfill \\
* \hfill \\
* \hfill \\
\end{gathered} \right]\delta, \label{statmodel}
\end{align}
where `*' can represent a state-dependent function, a time-dependent function, a mixture of the two, or simply a scalar, \underline{but is unknown to us}, and $\theta$ and $\delta$ denote the throttle and steering, respectively. Given the practical physics knowledge that \textit{the throttle computation depends only on the longitudinal velocity and position, while the dependencies of the steering are unknown}, the state-space model \eqref{statmodel} is updated to
\begin{align}
\dot {{\mathbf{x}}} = \left[ {\begin{array}{*{20}{c}}
0&0&0&1&0&0 \\
0&0&0&0&1&0 \\
0&0&0&0&0&1 \\
*&0&0&*&0&0 \\
*&*&*&*&*&* \\
*&*&*&*&*&*
\end{array}} \right]{\mathbf{x}}. \nonumber
\end{align}
The sampling technique, with sampling period denoted by $T$, transforms the continuous-time state-space model above to the discrete-time one:
\begin{align}
{\mathbf{x}}\left( {k + 1} \right) = \left[ {\begin{array}{*{20}{c}}
1&0&0&T&0&0 \\
0&1&0&0&T&0 \\
0&0&1&0&0&T \\
*&0&0&*&0&0 \\
*&*&*&*&*&* \\
*&*&*&*&*&*
\end{array}} \right]{\mathbf{x}}\left( k \right).\label{discar}
\end{align}
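To make the subsequent NN editing concrete, below is a minimal sketch (our illustration, not the paper's code) of how the partially known discrete-time system matrix in \eqref{discar} can be encoded as a template: the known entries (the $1$'s and the sampling-period entries $T$) are frozen, while the `*' entries remain trainable. The sampling-period value, the mask layout, and the helper name \texttt{edited\_system\_matrix} are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

T = 0.05  # sampling period (assumed value, for illustration only)

# 1 where the entry of Eq. (discar) is a known model substructure parameter,
# 0 where it is an unknown '*' entry.
known_mask = np.array([
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0]], dtype=float)

# Values of the known entries (entries under mask 0 are irrelevant placeholders).
known_values = np.array([
    [1, 0, 0, T, 0, 0],
    [0, 1, 0, 0, T, 0],
    [0, 0, 1, 0, 0, T],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0]], dtype=float)

def edited_system_matrix(trainable):
    """Freeze the known substructure; keep the '*' entries trainable."""
    return known_mask * known_values + (1.0 - known_mask) * trainable

A_hat = edited_system_matrix(0.01 * np.random.randn(6, 6))  # example use
\end{verbatim}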
\begin{wrapfigure}{r}{0.50\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.50\textwidth]{moon1.pdf}
\end{center}
\vspace{-0.5cm}
\caption{(a): DPhN has a PhN with large order $r = 4$. (b): DPhN has cascading PhNs with relatively small orders satisfying $r_{\left\langle 1 \right\rangle} \cdot r_{\left\langle 2 \right\rangle} = 2 \cdot 2 = 4 = r$.}
\vspace{-0.5cm}
\label{capspoo}
\end{wrapfigure}
We first consider two Phy-Taylor models, named `Phy-Taylor$_{\text{large order}}$' and `Phy-Taylor$_{\text{small order}}$', which embed the available knowledge (i.e., the known parameters included in the system matrix of model \eqref{discar}). Their architectures are shown in Figure \ref{capspoo} (a) and (b), from which we observe that (i) Phy-Taylor$_{\text{large order}}$ has one PhN with a large augmentation order while Phy-Taylor$_{\text{small order}}$ has two cascading PhNs with relatively small augmentation orders, and (ii) the three orders satisfy the condition \eqref{mcc} for having the same monomials of the Taylor series. An example of how Phy-Taylor$_{\text{small order}}$ implements NN editing is presented in Supplementary Information \ref{SI07}. We also consider the corresponding models without NN editing (i.e., without physics-knowledge embedding), which degrades Phy-Taylor to a deep PhN (DPhN). The two DPhN models are named `DPhN$_{\text{large order}}$' and `DPhN$_{\text{small order}}$'. The final model we consider is the seminal Deep Koopman \cite{lusch2018deep}, following the same model configuration therein. The configurations of the five models are summarized in Table \ref{taboo}.
\begin{table*}\footnotesize{
\caption{Model Configurations}
\centering
\begin{tabular}{l cc cc cc c c}
\toprule
& \multicolumn{2}{c}{Layer 1} & \multicolumn{2}{c}{Layer 2} & \multicolumn{2}{c}{Layer 3}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
Model ID & \#weights & \#bias & \#weights & \#bias & \#weights & \#bias & \#parameter sum & prediction error\\
\midrule
DPhN$_{\text{large order}}$ & $3135$ & $15$ & $90$ & $6$ & $-$ & $-$ & $3246$ & nan\\
DPhN$_{\text{small order}}$ & $270$ & $10$ & $520$ & $8$ & $48$ & $6$ & $862$ & $45.57277$ \\
Phy-Taylor$_{\text{large order}}$ & $2313$ & $12$ & $30$ & $3$ & $-$ & $-$ & $2358$ & $0.047758$ \\
Phy-Taylor$_{\text{small order}}$ & $167$ & $7$ & $265$ & $5$ & $18$ & $3$ & $465$ & $0.003605$\\
\end{tabular}
\begin{tabular}{l cc cc cc c c}
\toprule
& \multicolumn{2}{c}{Encoder} & \multicolumn{2}{c}{Decoder} & \multicolumn{2}{c}{Auxiliary Networks}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
Model ID & \#weights & \#bias & \#weights & \#bias & \#weights & \#bias & \#parameter sum & prediction error\\
\midrule
Deep Koopman & $2040$ & $176$ & $2040$ & $176$ & $19920$ & $486$ & $24838$ & $0.232236$\\
\bottomrule
\end{tabular}\label{taboo}}
\end{table*}
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.162]{ok1.pdf}}
\subfigure{\includegraphics[scale=0.162]{ok3.pdf}}
\subfigure{\includegraphics[scale=0.162]{ok2.pdf}}
\subfigure{\includegraphics[scale=0.225]{om1.pdf}}
\subfigure{\includegraphics[scale=0.225]{om2.pdf}}
\subfigure{\includegraphics[scale=0.225]{om3.pdf}}
\subfigure{\includegraphics[scale=0.225]{om4.pdf}}
\subfigure{\includegraphics[scale=0.225]{om5.pdf}}
\subfigure{\includegraphics[scale=0.225]{om6.pdf}}
\caption{\textbf{Training and Testing}. (a)-(c): The average of training loss of models given in Table \ref{taboo}. (i)-(vi): Ground truth and predicted trajectories under different trained models.}
\label{trvala}
\vspace{-0.4cm}
\end{figure*}
The trajectories of the training loss and validation loss of the Phy-Taylor models and DPhN models are presented in Figure \ref{trvala} (a)--(c). The (training loss, validation loss) of the trained DPhN$_{\text{large order}}$, DPhN$_{\text{small order}}$, Phy-Taylor$_{\text{large order}}$ and Phy-Taylor$_{\text{small order}}$ are (0.00389, 0.00375), (0.000344, 0.000351), (0.000222, 0.000238) and (0.000915, 0.000916), respectively. To perform the testing, we consider the long-horizon prediction of system trajectories, given the same initial conditions. The prediction error is measured by $e = \frac{1}{\kappa }\sum\limits_{t = k + 1}^{k + \kappa } {\frac{1}{6}} \left\| {\underline{\bf{x}} \left( t \right) - {\bf{x}}\left( t \right)} \right\|~\text{with}~\underline {\bf{x}} \left( k \right) = {\bf{x}}\left( k \right)$, where $\underline {\bf{x}} \left( t \right) $ is the prediction of the ground truth ${\bf{x}} \left( t \right)$ at time $t$. The prediction errors over the horizon $\kappa = 300$ with initial time $k = 950$ are summarized in Table \ref{taboo}. The ground-truth trajectories and the ones predicted via the trained Deep Koopman and Phy-Taylor models are shown in Figure \ref{trvala} (i)--(vi). Observing Table \ref{taboo} and Figure \ref{trvala}, we conclude the following: \\
-- Figure \ref{trvala} (i)--(vi) and Figure \ref{trvala} (a)--(b): physics-guided NN editing strikingly accelerates the training process, reduces the validation and training losses, and increases model accuracy. \\
-- Figure \ref{trvala} (i)--(vi) and Figure \ref{trvala} (c): with physics-guided NN editing, the cascading PhNs with small orders of input augmentation further significantly reduce the validation and training losses and increase model accuracy, which can be attributed to the additional contradicting connections removed by the cascading architecture. \\
-- Figure \ref{trvala} (i)--(vi) and Table \ref{taboo}: compared with the fully-connected DPhN models, i.e., DPhN$_{\text{large order}}$ and DPhN$_{\text{small order}}$, the seminal Deep Koopman strikingly increases model accuracy, viewed from the perspective of long-horizon trajectory prediction. Compared with Deep Koopman, the Phy-Taylor models (both Phy-Taylor$_{\text{large order}}$ and Phy-Taylor$_{\text{small order}}$) remarkably reduce the number of model parameters and further notably increase model accuracy.
\subsubsection{Self-Correcting Phy-Taylor}
\begin{wrapfigure}{r}{0.80\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.80\textwidth]{moo2.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Self-correcting Phy-Taylor for safe control of autonomous vehicle.}
\vspace{-0.3cm}
\label{capsp}
\end{wrapfigure}
This experiment demonstrates the effectiveness of the self-correcting Phy-Taylor in guaranteeing the vehicle's safe driving in the environment shown in Figure \ref{env}. The architecture of the self-correcting Phy-Taylor is described in Figure \ref{capsp}. Its real-time input vector is
${\mathbf{x}}(k) = [w(k); ~\mathrm{p}(k); ~\mathrm{y}(k); ~\psi(k); ~v_{\mathrm{p}}(k); ~v_{\mathrm{y}}(k); v_{\psi}(k)]$,
where $w(k)$ is the average of the four wheels' velocities. The intermediate output $\mathbf{u}(k) = \left[\theta(k); \gamma(k) \right]$ denotes the vector of control commands, where $\theta(k) \in [-0.156, 0.156]$ is the throttle command and $\gamma(k) \in [-0.6, 0.6]$ is the steering command. The considered safety metric in \eqref{mkbzkm} is
\begin{align}
\mathbf{s}({\mathbf{x}}(k),\mathbf{u}(k),\tau) = \sum\limits_{t = k + 1}^{k + \tau } {\left[ {{{\left( {{v_{\mathrm{p}}}(t) - \mathrm{v}} \right)}^2};~~{{\left( {{v_{\mathrm{p}}}(t) - r \cdot w(k)} \right)}^2}} \right]} \in {\mathbb{R}^2}, \label{hho}
\end{align}
where $\mathrm{v}$ and $r$ denote the longitudinal-velocity reference and the wheel radius, respectively. The safety metric \eqref{hho}, together with the driving scenario in Figure \ref{env}, indicates that the objective of a safe control command is to simultaneously steer the vehicle's longitudinal velocity to the reference $\mathrm{v}$ and constrain the slip (i.e., $(v_{\mathrm{p}}(t) - r \cdot w(k))^2$) within the safety set to prevent slipping and sliding.
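For clarity, the safety metric \eqref{hho} can be evaluated from a rolled-out velocity trajectory as in the short sketch below (our illustration; the reference $\mathrm{v}$, the wheel radius $r$, and all numbers in the example call are assumed values):
\begin{verbatim}
import numpy as np

def safety_metric(v_p_future, w_k, v_ref, wheel_radius):
    """Two-entry safety metric of Eq. (hho).
    v_p_future : predicted v_p(k+1), ..., v_p(k+tau)
    w_k        : average wheel velocity w(k) at the current step
    """
    tracking_error = np.sum((v_p_future - v_ref) ** 2)       # first entry
    slip = np.sum((v_p_future - wheel_radius * w_k) ** 2)    # second entry
    return np.array([tracking_error, slip])

s = safety_metric(np.array([4.9, 5.0, 5.1]), w_k=25.0,
                  v_ref=5.0, wheel_radius=0.19)
\end{verbatim}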
The testing results of the trained model are presented in Figure \ref{inff} (blue curves), which, in conjunction with the ground truth (orange curves), demonstrate the trustworthiness of the trained model. We next output the learned safety relationships for off-line verification and, if necessary, revision.
\begin{subequations}
\begin{align}
{\left[ {\bf{s}}(\bf{u}) \right]_1} &= 0.00111007 + {\left[\! \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \!\right]^\top} \left[ {\begin{array}{*{20}{c}}
{-0.04581441}&{0.00100625}\\
{0.00100625}&{0.00342825}
\end{array}} \right] \left[\! \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \!\right], \label{choo1}\\
{\left[ {\bf{s}}(\bf{u}) \right]_2} &= 0.14376973 - {\left[\! \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \!\right]^\top}\left[ {\begin{array}{*{20}{c}}
{6.06750536}&{ 0.02701398}\\
{0.02701398}&{0.00601609}
\end{array}} \right]\left[\! \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \!\right].\label{choo2}
\end{align}\label{choo}
\end{subequations}
\begin{figure*}
\centering
\subfigure{\includegraphics[scale=0.35]{vk1.pdf}}
\subfigure{\includegraphics[scale=0.35]{vk2.pdf}}
\subfigure{\includegraphics[scale=0.35]{vk3.pdf}}
\subfigure{\includegraphics[scale=0.35]{vk4.pdf}}
\caption{(a)--(b): Ground truth and inference of control commands; (c)--(d): ground truth, original inference and inference (according to the revised safety relationship) of the safety metrics: tracking error: ${\left( {{v_{\mathrm{p}}}(t) - \mathrm{v}} \right)}^2$, slip: $(v_{\mathrm{p}}(t) - r \cdot w(k))^2$.}
\label{inff}
\end{figure*}
The ground-truth safety metrics \eqref{hho} are always non-negative. We thus need to verify whether, given the ranges of the control commands (i.e., $\theta(k) \in [-0.156, 0.156]$, $\gamma(k) \in [-0.6, 0.6]$, $\forall k \in \mathbb{N}$), both $\eqref{choo1}$ and $\eqref{choo2}$ are always non-negative. If a violation occurs, we revise the relationships. We can straightforwardly verify from \eqref{choo} that $\left[\bf{s}(\bf{u}(k)) \right]_1 \ge 0$ and $\left[\bf{s}(\bf{u}(k)) \right]_2 \ge 0$ do not always hold for $\theta(k) \in [-0.156, 0.156]$ and $\gamma(k) \in [-0.6, 0.6]$; for example, they fail at $\theta(k) = 0.156$ and $\gamma(k) = 0$. The blue curves of the inferred safety metrics in Figure \ref{inff} (c) and (d) also show the occurrence of such violations. Therefore, the safety relationships need to be revised before performing the self-correcting procedure.
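One simple way to carry out this off-line verification is to evaluate the learned quadratics \eqref{choo} on a grid over the admissible command box and check for negative values; the sketch below (our illustration, not the paper's verification procedure) uses the coefficients of \eqref{choo} directly:
\begin{verbatim}
import numpy as np

b1 = 0.00111007
P1 = np.array([[-0.04581441, 0.00100625],
               [ 0.00100625, 0.00342825]])
b2 = 0.14376973
P2 = np.array([[6.06750536, 0.02701398],
               [0.02701398, 0.00601609]])

min_s1, min_s2 = np.inf, np.inf
for th in np.linspace(-0.156, 0.156, 101):
    for ga in np.linspace(-0.6, 0.6, 101):
        u = np.array([th, ga])
        min_s1 = min(min_s1, b1 + u @ P1 @ u)   # Eq. (choo1)
        min_s2 = min(min_s2, b2 - u @ P2 @ u)   # Eq. (choo2)
print(min_s1, min_s2)  # a negative minimum flags a violation
\end{verbatim}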
Based on \eqref{choo}, the regulated safety relationships with minor revisions (highlighted in \textcolor[rgb]{1.00,0.00,0.00}{red}) are presented below.
\begin{subequations}
\begin{align}
{\left[ {\bf{s}}(\left. {{\bf{u}}(k)} \right|{\widehat{\mathbf{x}}}(k)) \right]_1} &= \underbrace{0.00\textcolor[rgb]{1.00,0.00,0.00}{02}1007}_{{[\mathbf{b}]_1}} + {\left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right]^\top} \underbrace{\left[ {\begin{array}{*{20}{c}}
{\textcolor[rgb]{1.00,0.00,0.00}{0.0018}1441}&{0.00100625}\\
{0.00100625}&{0.00342825}
\end{array}} \right]}_{{\triangleq {\bf{P}}_1}} \left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right], \label{rchoo1}\\
{\left[ {\bf{s}}(\left. {{\bf{u}}(k)} \right|{\widehat{\mathbf{x}}}(k)) \right]_2} &= \underbrace{0.14376973}_{{[\mathbf{b}]_2}} - {\left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right]^\top}\underbrace{\left[ {\begin{array}{*{20}{c}}
{\textcolor[rgb]{1.00,0.00,0.00}{5.90769724}}&{ \textcolor[rgb]{1.00,0.00,0.00}{0.01201398}}\\
{\textcolor[rgb]{1.00,0.00,0.00}{0.01201398}}&{0.00601609}
\end{array}} \right]}_{\triangleq {\bf{P}}_2}\left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right],\label{rchoo2}
\end{align}\label{rchoo}
\end{subequations}
\!\!for which it can be verified that $\left[ {\bf{s}}({\bf{u}}(k)) \right]_1 \ge 0$ and $\left[ {\bf{s}}({\bf{u}}(k)) \right]_2 \ge 0$ for any $\theta(k) \in [-0.156, 0.156]$ and $\gamma(k) \in [-0.6, 0.6]$. This is also demonstrated by the green curves of the inferred safety metrics in Figure \ref{inff} (c) and (d).
With the revised safety relationships \eqref{rchoo} at hand, we are ready to develop the self-correcting procedure. Considering the two matrices ${\bf{P}}_1$ and ${\bf{P}}_2$ defined in \eqref{rchoo} are symmetric, we have
\begin{align}
{\bf{P}}_1 &= \underbrace{\left[ {\begin{array}{*{20}{c}}
{ - 0.934}&{ - 0.3572} \\
{ - 0.3572}&{0.934}
\end{array}} \right]}_{\triangleq {\bf{Q}}_1} \left[ {\begin{array}{*{20}{c}}
{\underbrace{0.0008}_{\triangleq {\lambda}_{1}}}&0 \\
0&{\underbrace{0.0038}_{\triangleq {\lambda}_{2}}}
\end{array}} \right] \underbrace{\left[ {\begin{array}{*{20}{c}}
{ - 0.934}&{ - 0.3572} \\
{ - 0.3572}&{0.934}
\end{array}} \right]}_{= {\bf{Q}}^{\top}_1}, \label{rch1}\\
{\bf{P}}_2 &= {\bf{Q}}_1 \cdot {\bf{Q}}_1 \cdot {\bf{P}}_2 \cdot {\bf{Q}}_1 \cdot {\bf{Q}}_1, \label{rch2}
\end{align}
based on which, we further define:
\begin{align}
\widehat{\mathbf{u}}(k) \triangleq {\bf{Q}}_1 \left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right], ~~~~~~~~~~\bf{S} \triangleq {\bf{Q}}_1 \cdot {\bf{P}}_2 \cdot {\bf{Q}}_1 = \left[ {\begin{array}{*{20}{c}}
{{s_{11}}}&{{s_{12}}}\\
{{s_{12}}}&{{s_{22}}}
\end{array}} \right]. \label{rch3}
\end{align}
We let $[\widehat{\mathbf{c}}]_1$ and $[\widehat{\mathbf{c}}]_2$ denote the two assigned safety metrics. In view of \eqref{rchoo}--\eqref{rch3}, the equations $\left[ {\bf{s}}({\bf{u}}(k)) \right]_1 = [\widehat{\mathbf{c}}]_1$ and $\left[ {\bf{s}}({\bf{u}}(k)) \right]_2 = [\widehat{\mathbf{c}}]_2$ can be rewritten as
\begin{align}
{\lambda _1}[\widehat{\mathbf{u}}(k)]_1^2 + {\lambda _2}[\widehat{\mathbf{u}}(k)]_2^2 &= {[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}, \label{rch41}\\
{\left[\begin{array}{l}
\theta (k)\\
\gamma (k)
\end{array} \right]^\top}
{\mathbf{P}}_2 \left[ \begin{array}{l}
\theta (k)\\
\gamma (k)
\end{array} \right] &= {s_{11}}[\widehat{\mathbf{u}}(k)]_1^2 + 2{s_{12}}{[\widehat{\mathbf{u}}}(k)]_1{[\widehat{\mathbf{u}}}(k)]_2 + {s_{22}}[\widehat{\mathbf{u}}(k)]_2^2 = [\mathbf{b}]_2 - {[\widehat{\mathbf{c}}]_2}, \label{rch42}
\end{align}
whose solution (the derivation appears in Supplementary Information \ref{SI10}) is obtained as
\begin{align}
\left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right] = {\mathbf{Q}_1}\left[ \begin{array}{l}
\pm \sqrt {\frac{{{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}}}{{{\lambda _1}}} - \frac{{{\lambda _2}}}{{{\lambda _1}}}\frac{{\sqrt {\varpi _2^2 - 4{\varpi _1}{\varpi _3}} - {\varpi _2}}}{{2{\varpi _1}}}} \\
\pm \sqrt {\frac{{\sqrt {\varpi _2^2 - 4{\varpi _1}{\varpi _3}} - {\varpi _2}}}{{2{\varpi _1}}}}
\end{array} \right] \triangleq \left[ \begin{array}{l}
\pm \widehat{\theta}(k)\\
\pm \widehat{\gamma}(k)
\end{array} \right], \label{rch43}
\end{align}
where
\begin{align}
{\varpi _1} &\triangleq {\left( {\frac{{{\lambda _2}}}{{{\lambda _1}}}} \right)^2}s_{11}^2 + s_{22}^2 + \frac{{\left( {4s_{12}^2 - 2{s_{11}}{s_{22}}} \right){\lambda _2}}}{{{\lambda _1}}}, \label{pko1}\\
{\varpi _2} &\triangleq \frac{{2( {{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}}){s_{11}}{s_{22}} - 4( {{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}})s_{12}^2 + 2{\lambda _2}( {{[\mathbf{b}]_2} - {[\widehat{\mathbf{c}}]_2}}){s_{11}}}}{{{\lambda _1}}} \nonumber\\
&\hspace{7.5cm} - \frac{{2( {{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}}){\lambda _2}s_{11}^2}}{{\lambda _1^2}} - 2( {{[\mathbf{b}]_2} - {[\widehat{\mathbf{c}}]_2}}){s_{22}},\label{pko2}\\
{\varpi _3} &\triangleq {\left( {{[\mathbf{b}]_2} - {[\widehat{\mathbf{c}}]_2}} \right)^2} + {\left( {\frac{{{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}}}{{{\lambda _1}}}} \right)^2}s_{11}^2 - \frac{{2\left( {{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}} \right)\left( {{[\mathbf{b}]_2} - {[\widehat{\mathbf{c}}]_2}} \right){s_{11}}}}{{{\lambda _1}}}.\label{pko3}
\end{align}
\begin{figure*}
\centering
\subfigure{\includegraphics[scale=0.22]{sl1.pdf}}
\subfigure{\includegraphics[scale=0.22]{sl2.pdf}}
\subfigure{\includegraphics[scale=0.22]{sl3.pdf}}
\subfigure{\includegraphics[scale=0.22]{sl4.pdf}}
\caption{\textbf{Performance comparisons of Phy-Taylor and self-correcting Phy-Taylor.} \textbf{(a)}: Safety metric: tracking error. \textbf{(b)}: Safety metric: wheel slip. \textbf{(c)}: Average of wheel velocities. \textbf{(d)}: Vehicle's driving curve.}
\label{sysp}
\end{figure*}
The solution \eqref{rch43} paves the way for the self-correcting procedure, Algorithm \ref{ALG4}. The algorithm can be summarized as follows: if the real-time safety metric $[{\bf{s}}(\bf{u}(k))]_1$
or $[{\bf{s}}(\bf{u}(k))]_2$ is larger than or equal to the corresponding safety bound $[\mathbf{c}]_1$ or $[\mathbf{c}]_2$, the real-time safety metric is replaced with the corresponding safety bound (Line \ref{ALG4-4} of Algorithm \ref{ALG4}). The corrected control commands are then computed according to \eqref{rch43} (Lines \ref{ALG4-8}--\ref{ALG4-10}). The solutions, however, are not unique. To address this, Line \ref{ALG4-12} of Algorithm \ref{ALG4} picks the control commands closest to the current ones.
\begin{algorithm} \small
\caption{Self-correcting Procedure for Safe Control Commands} \label{ALG4}
\KwIn{Real-time control commands $\theta(k)$ and $\gamma(k)$, safety bounds $[\mathbf{c}]_1$ and $[\mathbf{c}]_2$, and learned matrices $\mathbf{P}_1$ and $\mathbf{P}_2$ and bias $[\mathbf{b}]_1$ and $[\mathbf{b}]_2$ defined in \eqref{rchoo}.}
Update with off-line verified and revised safety relationship: ${\bf{s}}(\left. {{\bf{u}}(k)} \right|{\widehat{\mathbf{x}}}(k)) \leftarrow \eqref{rchoo}$; \label{ALG4-1}\\
\eIf{$[{\bf{s}}({\bf{u}}(k))]_1 \ge [\mathbf{c}]_1$ or $[{\bf{s}}({\bf{u}}(k))]_2 \ge [\mathbf{c}]_2$ \label{ALG4-2}}
{\eIf{$[{\bf{s}}({\bf{u}}(k))]_{i} \ge [\mathbf{c}]_i$, $i \in \{1,2\}$ \label{ALG4-3}}
{Regulate safety metric: $[\widehat{\mathbf{c}}]_{i} \leftarrow [\mathbf{c}]_i, i \in \{1,2\}$; \label{ALG4-4}}
{Maintain safety metric: $[\widehat{\mathbf{c}}]_{i} \leftarrow [{\bf{s}}({\bf{u}}(k))]_{i}, i \in \{1,2\}$; \label{ALG4-6}}
Compute the orthogonal matrix $\mathbf{Q}_1$ and the eigenvalues $\lambda_1$ and $\lambda_2$ of $\mathbf{P}_1$ according to \eqref{rch1}; \label{ALG4-8}\\
Compute matrix: $\mathbf{S} \leftarrow \mathbf{Q}_1 \cdot \mathbf{P}_2 \cdot \mathbf{Q}_1$; \label{ALG4-9}\\
Compute $\widehat{\theta}(k)$ and $\widehat{\gamma}(k)$ according to \eqref{rch43}; \label{ALG4-10}\\
Correct real-time control commands: \begin{align}
\hspace{-1cm}\theta(k) \leftarrow \mathop {\arg \min }\limits_{\left\{ {\widehat \theta(k), - \widehat \theta(k)} \right\}} \left\{ {|{\theta(k) - \widehat \theta (k)}|,| {\theta(k) + \widehat \theta(k)}|} \right\}, ~~\gamma(k) \leftarrow \mathop {\arg \min }\limits_{\left\{ {\widehat \gamma(k), - \widehat \gamma(k)} \right\}} \left\{ {| {\gamma(k) - \widehat \gamma(k)}|, | {\gamma(k) + \widehat \gamma(k)}|} \right\}.\nonumber
\end{align}\label{ALG4-12}}
{Maintain real-time control commands: $\theta(k) \leftarrow \theta(k)$ and $\gamma(k) \leftarrow \gamma(k)$.}
\end{algorithm}
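For concreteness, the following is a compact numerical sketch of Algorithm \ref{ALG4} built directly on \eqref{rchoo}--\eqref{pko3} (our illustration, not the deployed implementation); it takes only the ``$+$'' branch of \eqref{rch43} before the sign selection of Line \ref{ALG4-12}, and it does not handle degenerate cases such as a negative discriminant:
\begin{verbatim}
import numpy as np

def self_correct(theta, gamma, c, P1, P2, b):
    """c=[c1,c2]: safety bounds; P1, P2, b: revised relationship (rchoo)."""
    u = np.array([theta, gamma])
    s = np.array([b[0] + u @ P1 @ u,      # Eq. (rchoo1)
                  b[1] - u @ P2 @ u])     # Eq. (rchoo2)
    if not (s >= c).any():
        return theta, gamma               # no violation: keep the commands

    c_hat = np.where(s >= c, c, s)        # Lines 3-6 of Algorithm 4
    lam, Q1 = np.linalg.eigh(P1)          # P1 = Q1 diag(lam) Q1^T, Eq. (rch1)
    lam1, lam2 = lam
    S = Q1.T @ P2 @ Q1                    # Eq. (rch3); Q1 is symmetric in (rch1)
    s11, s12, s22 = S[0, 0], S[0, 1], S[1, 1]

    d1 = c_hat[0] - b[0]
    d2 = b[1] - c_hat[1]
    w1 = (lam2/lam1)**2 * s11**2 + s22**2 \
         + (4*s12**2 - 2*s11*s22) * lam2/lam1                     # Eq. (pko1)
    w2 = (2*d1*s11*s22 - 4*d1*s12**2 + 2*lam2*d2*s11) / lam1 \
         - 2*d1*lam2*s11**2 / lam1**2 - 2*d2*s22                  # Eq. (pko2)
    w3 = d2**2 + (d1/lam1)**2 * s11**2 - 2*d1*d2*s11 / lam1       # Eq. (pko3)

    u2_sq = (np.sqrt(w2**2 - 4*w1*w3) - w2) / (2*w1)
    u1_sq = d1/lam1 - (lam2/lam1) * u2_sq
    theta_hat, gamma_hat = Q1 @ np.array([np.sqrt(u1_sq),
                                          np.sqrt(u2_sq)])        # Eq. (rch43)

    # Line 12 of Algorithm 4: pick the signs closest to the current commands.
    t = theta_hat if abs(theta - theta_hat) <= abs(theta + theta_hat) else -theta_hat
    g = gamma_hat if abs(gamma - gamma_hat) <= abs(gamma + gamma_hat) else -gamma_hat
    return t, g
\end{verbatim}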
Under the control of Phy-Taylor with and without the self-correcting procedure, the system performances are presented in Figure \ref{sysp}. Observing the figure, we conclude that the self-correcting procedure strikingly enhances the safety assurance of velocity regulation. The demonstration video of the self-correcting Phy-Taylor running in AutoRally is available at
\textcolor[rgb]{1.00,0.00,1.00}{\url{https://ymao578.github.io/pubs/taylorcontrol2.mp4}}.
\subsection{Coupled Pendulums}
\begin{table} \scriptsize{
\caption{Models with Different Degrees of Embedded Physics Knowledge}
\centering
\begin{tabular}{l cccc c c}
\toprule
& \multicolumn{4}{c}{Available Physics Knowledge} \\
\cmidrule(lr){2-5}
Model ID & \makecell{Physics Law: \\ $\upsilon = \dot \theta$} & \makecell{Sampling Period: \\$T$} & \makecell{Coupling Topology: \\$1 \leftrightsquigarrow 2 \leftrightsquigarrow 3$} & \makecell{Force Dependency} & Training Loss & \makecell{Out-of-Distribution:\\Prediction Error}\\
\midrule
Pin-Taylor-1 & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $1.75146\!\cdot\!10^{-6}$ & $0.06486147$\\
Pin-Taylor-2 & $\surd$ & $\times$ & $\surd$ & $\times$ & $2.42426\!\cdot\!10^{-6}$ & $1.46647886$\\
FDNN & $\times$ & $\times$ & $\times$ & $\times$ & $6.63564\cdot10^{-7}$ & $3.65772883$\\
\bottomrule
\end{tabular}
\label{tab11}}
\end{table}
This experiment focuses on learning the dynamics of three coupled pendulums (shown in Figure \ref{df}). The ground-truth dynamics of the coupled pendulums is derived in Chapter 8.4 of \cite{idema2018mechanics} as
\begin{align}
l{\ddot \theta _i} = - g\sin({{\theta _i}}) - \kappa l\sum\limits_{j = 1}^3 {{a_{ij}}}( {\sin( {{\theta _i}}) - \sin( {{\theta _j}})}), ~~~~i \in \left\{1,2,3 \right\}, \label{pendya}
\end{align}
where each pendulum is characterized by its phase angle $\theta _i$ and angular velocity $\dot \theta _i$. The rod length, spring constant, gravitational acceleration, and connection links are denoted by $l$, $\kappa$, $g$ and $a_{ij}$ ($a_{12} = a_{21} = a_{23} = a_{32} = 1,~a_{13} = a_{31} = 0$), respectively. We let ${\upsilon _i} \triangleq {\dot \theta _i}$. Referring to Figure \ref{df}, the available physics knowledge for NN editing is summarized in Table \ref{tab11}.
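As a side note, the ground-truth trajectories used for training can be generated by directly integrating \eqref{pendya}; the sketch below (our illustration) uses forward-Euler integration with assumed parameter values ($l$, $\kappa$, $g$, and the step size are illustrative only):
\begin{verbatim}
import numpy as np

l, kappa, g, T = 1.0, 1.0, 9.81, 0.01
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])                  # coupling links a_ij

def step(theta, v):
    """One forward-Euler step of Eq. (pendya)."""
    s = np.sin(theta)
    coupling = A.sum(axis=1) * s - A @ s   # sum_j a_ij (sin th_i - sin th_j)
    accel = (-g * s - kappa * l * coupling) / l
    return theta + T * v, v + T * accel

theta = np.random.uniform(-1.0, 1.0, size=3)   # in-distribution initial angles
v = np.zeros(3)
trajectory = []
for _ in range(1000):
    trajectory.append(np.concatenate([theta, v]))   # x(k) = [theta; v]
    theta, v = step(theta, v)
trajectory = np.asarray(trajectory)
\end{verbatim}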
We consider the dynamics learning via Phy-Taylor and a fully-connected deep neural network (FDNN), whose architectures are shown in Figure \ref{gfd} (i) and (ii). The inputs of both networks are identical: $\mathbf{x}(k)$ $=$ $[$ ${\theta _1}( {k})$; ${\theta _2}( {k})$; ${\theta _3}( {k})$; ${\upsilon _1}( {k})$; ${\upsilon _2}( {k})$; ${\upsilon _3}( {k})$ $]$. For a fair comparison, we let each layer of the FDNN have the same number of nodes as the Phy-Taylor, which yields the network architecture in Figure \ref{gfd} (ii).
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.48]{jjjk.pdf}}
\subfigure{\includegraphics[scale=0.24]{ca.pdf}}
\subfigure{\includegraphics[scale=0.24]{cb.pdf}}
\subfigure{\includegraphics[scale=0.24]{cc.pdf}}
\subfigure{\includegraphics[scale=0.24]{cd.pdf}}
\subfigure{\includegraphics[scale=0.24]{ce.pdf}}
\subfigure{\includegraphics[scale=0.24]{cf.pdf}}
\caption{(i): Architecture of Pin-Taylor. (ii): Architecture of two fully-connected FNNs. (a)--(f): Ground truth and predicted trajectories.}
\label{gfd}
\end{figure*}
We aim to demonstrate the robustness of Phy-Taylor owing to NN editing. To achieve this, we consider testing data that is out-of-distribution. Specifically, the initial conditions of the phase angles for generating training data are ${\theta _1}(0), {\theta _2}(0), {\theta _3}(0) \in [-1,1]$, while the initial conditions for generating testing data are outside this range: ${\theta _1}(0), {\theta _2}(0), {\theta _3}(0) \in [-1.5,-1)$.
The training losses (mean squared error) of the considered models with different degrees of embedded physics knowledge are summarized in Table \ref{tab11}. To assess the robustness of the trained models, we consider long-horizon trajectory prediction, given the same initial input. The prediction error is measured by $e = \frac{1}{6}\sum\limits_{i = 1}^3 {\frac{1}{\tau }\left( {\sum\limits_{t = k + 1}^{k + \tau } {\left( {{{\widehat \theta }_i}(t) - {\theta _i}(t)} \right)^2} + \sum\limits_{t = k + 1}^{k + \tau } {\left( {{{\widehat \upsilon }_i}(t) - {\upsilon _i}(t)} \right)^2} } \right)}$, where ${{\widehat \theta }_i}(t)$ and ${{\widehat \upsilon }_i}(t)$ denote the predicted angle and angular speed of the $i$-th pendulum at time $t$, respectively. The prediction errors are summarized in Table \ref{tab11}, which, in conjunction with the ground-truth and predicted trajectories in Figure \ref{gfd} (a)--(f), demonstrates that 1) NN editing significantly enhances model robustness, and 2) embedding more physics knowledge leads to stronger robustness, viewed from the perspective of long-horizon (out-of-distribution) trajectory prediction.
\subsection{US Illinois Climate}
We consider a climate system without any available physics knowledge for NN editing, which degrades the Phy-Taylor to a deep PhN. The considered dataset is the hourly climate normals of Illinois. The data is available from NOAA's Climate Data Online. The Illinois data includes five stations\footnote{\tiny{\hspace{-0.0cm} NOAA: \hspace{-0.0cm} \url{https://www.ncdc.noaa.gov/cdo-web/datasets/NORMAL_HLY/locations/FIPS:17/detail}}}, whose periods of record are identical: 01/01/2010--12/31/2010. The locations of the considered stations are indicated in Figure \ref{location}, where $S_{1}$--$S_{4}$ denote the station IDs GHCND:USW00094846, GHCND:USW00094822, GHCND:USW00014842, and GHCND:USW00093822. The deep PhN input is
\begin{align}
\hspace{-1cm}\mathbf{x}(k) \!=\! \left[ {{x_1}(k);{x_2}(k);{x_3}(k);{x_4}(k);{{\dot x}_1}(k);{{\dot x}_2}(k);} \right. {{\dot x}_3}(k); {{\dot x}_4}(k);{{\ddot x}_1}(k);{{\ddot x}_2}(k);{{\ddot x}_3}(k);\left. {{{\ddot x}_4}(k)} \right] \nonumber
\end{align}
\begin{wrapfigure}{r}{0.29\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.29\textwidth]{location.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small{Station locations.}}
\vspace{-0.2cm}
\label{location}
\end{wrapfigure}
where ${{\dot x}_i}\left( k \right) = {{x}_i}\left( k \right) - {{x}_i}\left( k-1 \right)$ and ${{\ddot x}_i}\left( k \right) = {\dot{x}_i}\left( k \right) - {\dot{x}_i}\left( k-1 \right) =
({{x}_i}\left( k \right) - {{x}_i}\left( k-1 \right)) - ({{x}_i}\left( k-1 \right) - {{x}_i}\left( k-2 \right))$, $i = 1,2,3,4$, and $x_{1}$, $x_{2}$, $x_{3}$ and $x_{4}$ denote the dew point mean, the heat index mean, the wind chill mean and the average wind speed, respectively. In this example, the data over the final 100 hours is preserved for testing, while the remaining data is used for training. The constructed network architectures are shown in Figure \ref{et}. The considered training loss function is
\begin{align}
\mathcal{L} = {\left\| {{[\widehat{\mathbf{y}}]_1} - {x_2}\left( {k + 1} \right)} \right\|} + {\left\| {{[\widehat{\mathbf{y}}]_2} - {x_4}\left( {k + 1} \right)} \right\|}, \nonumber
\end{align}
which indicates that the network models are to predict the heat index mean and the average wind speed for the next hour.
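The finite-difference features and the loss above can be assembled as in the following sketch (our illustration; the tensor shapes and the batch reduction are assumptions):
\begin{verbatim}
import numpy as np
import tensorflow as tf

def build_inputs(X):
    """X: (T, 4) array of hourly x1..x4. Returns (T-2, 12) inputs x(k), k >= 2."""
    d1 = X[1:] - X[:-1]        # first differences:  x(k) - x(k-1)
    d2 = d1[1:] - d1[:-1]      # second differences: (x(k)-x(k-1)) - (x(k-1)-x(k-2))
    return np.concatenate([X[2:], d1[1:], d2], axis=1)

def loss_fn(y_hat, x2_next, x4_next):
    """|[y]_1 - x2(k+1)| + |[y]_2 - x4(k+1)|, averaged over the batch."""
    return tf.reduce_mean(tf.abs(y_hat[:, 0] - x2_next)
                          + tf.abs(y_hat[:, 1] - x4_next))
\end{verbatim}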
\subsubsection{Phy-Augmentation}
\begin{wrapfigure}{r}{0.61\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{co2.pdf}
\end{center}
\vspace{-0.3cm}
\caption{Architectures of deep PhN and classical DNN.}
\vspace{0.4cm}
\label{et}
\end{wrapfigure}
We use the data of station GHCND:USW00094846 to study the influence of augmentation order $r$ from the perspective of training loss.
The trajectories of the training loss under different augmentation orders are presented in Figure \ref{ghj} (a). It is straightforward to observe from the figure that, compared with the classical DNN (i.e., $r = 1$), the PhNs with input augmentation ($r \ge 2$) have a much faster convergence speed and a smaller training loss. In other words, the input augmentation significantly speeds up the training process and reduces the training loss. An intuitive explanation is that the input augmentation enlarges the number of nodes. To examine this explanation, we compare with the DNN whose structure is shown in Figure \ref{et} (b). To guarantee a fair comparison, we let the input dimension (i.e., $n$) of the final layer of the DNN in Figure \ref{et} (d) equal the output dimension (i.e., $\text{len}(\mathfrak{m}(\mathbf{x},r))$) of the input augmentation. According to \eqref{cpeq1}, we let $n = {\rm{len}}(\mathfrak{m}(\mathbf{x},r = 5)) = 253$. The comparison results are presented in Figure \ref{ghj} (b), which indicates that even with the same number of nodes, the deep PhN still has a much faster convergence speed and a smaller mean training loss. This phenomenon can be explained by the fact that Phy-Augmentation captures the physics features well.
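For reference, a generic way to enumerate the augmented features is sketched below (our illustration; the paper's Algorithm \ref{ALG1} and \eqref{cpeq1} may order the terms or handle the bias term differently, so the printed length need not coincide with the value used above):
\begin{verbatim}
import itertools
import numpy as np

def phy_augment(x, r):
    """All monomials of x of total degree 0..r (constant term included)."""
    feats = [1.0]
    for degree in range(1, r + 1):
        for idx in itertools.combinations_with_replacement(range(len(x)), degree):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

x = np.random.randn(12)              # the 12-dimensional climate input
print(len(phy_augment(x, 2)))        # augmented dimension for r = 2
\end{verbatim}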
\begin{figure*}
\centering
\subfigure{\includegraphics[scale=0.285]{fk1.pdf}}
\subfigure{\includegraphics[scale=0.285]{fk2.pdf}}
\caption{Deep PhN vs. classical DNN: log of training loss.}
\label{ghj}
\end{figure*}
\subsubsection{Large Noise}
To test the robustness of the deep PhN in the face of large noise, we consider the network structure in Figure \ref{et} (b). To introduce noisy training datasets, we use the inputs $\mathbf{x}(k)$ from stations $S_{2}$--$S_{4}$, while the dataset of outputs $[\widehat{\mathbf{y}}]_1$ and $[\widehat{\mathbf{y}}]_2$ is from station $S_{1}$. For the suppressor of the deep PhN in \eqref{compb}, we let $\beta = 90$ and $\alpha = -1$. The trajectories of the training loss in Figure \ref{ddo}, together with the station locations in Figure \ref{location}, show that the training loss decreases as the distance from station $S_1$ increases. The result demonstrates the function of the suppressor in mitigating the influence of large noise in the augmented input features.
\begin{figure*}
\centering
\subfigure{\includegraphics[scale=0.175]{fa.pdf}}
\subfigure{\includegraphics[scale=0.175]{fc.pdf}}
\subfigure{\includegraphics[scale=0.175]{fb.pdf}}
\caption{Training loss with and without suppressor function.}
\label{ddo}
\end{figure*}
\section{Introduction}
The paper proposes a novel physics-model-based deep neural network framework, called Phy-Taylor, that addresses a critical flaw in purely data-driven neural networks, when used to model aspects of physical engineering systems. Namely, it addresses the potential lack of agreement between learned latent neural network representations and prior physical knowledge -- a flaw that sometimes leads to catastrophic consequences~\cite{brief2021ai}. The Phy-Taylor framework introduces two contributions: the Taylor neural operator and a physics-guided neural network editing mechanism, aiming at ensuring compliance with prior physical knowledge.
The work contributes to emerging research on physics-enhanced deep neural networks. Current approaches include physics-informed neural networks~\cite{wang2021physics,willard2021integrating,jia2021physics,jia2019physics,wang2021deep,lu2021physics,chen2021theory,wang2020deep,xu2022physics,karniadakis2021physics,wang2020towards,daw2017physics,cranmer2020lagrangian,finzi2020simplifying,greydanus2019hamiltonian}, physics-guided neural-network architectures~\cite{muralidhar2020phynet,masci2015geodesic,monti2017geometric,horie2020isometric,wang2021incorporating,li2019learning} and physics-inspired neural operators~\cite{lusch2018deep,li2020fourier}. The physics-informed networks and physics-guided architectures use compact partial differential equations (PDEs) for formulating loss functions and/or architectural components. Physics-inspired neural operators, such as the Koopman neural operator~\cite{lusch2018deep} and the Fourier neural operator~\cite{li2020fourier}, on the other hand, map nonlinear functions into alternative domains, where it is easier to train their parameters from observational data and reason about convergence.
These frameworks improve consistency with prior analytical knowledge, but remain problematic in several respects. For example, (i) due to incomplete knowledge, the compact or precise PDEs may not always be available, and (ii) fully-connected neural networks can introduce spurious correlations that deviate from strict compliance with available well-validated physics knowledge. Instead, through the use of a Taylor-series expansion, the Taylor neural operator is able to leverage partial knowledge. Moreover, thanks to the neural editing mechanism, the framework removes links and reshapes activation functions not consistent with physics-based representations. These operations are notionally illustrated by the example in Figure \ref{df}.
The Phy-Taylor framework leverages the intuition that most physical relations live in low-dimensional manifolds, shaped by applicable physical laws. However, estimating key physical variables from high-dimensional system observations is often challenging. By expressing known knowledge as relations between yet-to-be-computed latent variables, we force representation learning to converge to a space where these variables represent the desired physical quantities, shaped by the applicable (expressed) physical laws. In effect, by shaping non-linear terms and relations in the latent space, we arrive at the desired physics-compliant latent representation. More specifically, Phy-Taylor offers the following two advantages:
\begin{itemize}
\vspace{-0.1in}
\item \textbf{\textit{Non-linear Physics Term Representation:}} Classical neural networks can learn arbitrary non-linear relations by unfolding them into layers of linear weighting functions and switch-like activations. This mechanism is akin to constructing nonlinearities by stitching together piecewise linear behaviors. Instead, by directly exploiting non-linear terms of the Taylor series expansion, we offer a set of features that express physical nonlinearities much more succinctly, thereby reducing the number of needed parameters and improving the accuracy of the representation. Monomials of the Taylor series can capture common nonlinearities present in physics equations, such as kinetic energy, potential energy, rolling resistance and aerodynamic drag force. The model error of the series drops significantly as the series order increases~\cite{konigsberger2013analysis}. As shown in Figures~\ref{PhyArt} and \ref{phyaug}, the approach constructs input features that represent monomials of the Taylor series and adds a suppressor for mitigating the influence of noise on the augmented inputs.
\vspace{-0.1in}
\item
\textbf{\textit{Removing Spurious Correlations:}} The general topology of neural networks allows for models that capture spurious correlations in training samples (overfitting)~\cite{yang2022understanding, sagawa2020investigation}. In contrast, we develop a neural network (topology) editing mechanism in the latent space that removes links among certain latent variables, when these links contradict their intended physical behaviors, thereby forcing the latent representation to converge to variables with the desired semantic interpretation that obey the desired physical relations.
\vspace{-0.1in}
\end{itemize}
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.31]{apen.pdf}}
\subfigure{\includegraphics[scale=0.35]{bpen.pdf}}
\subfigure{\includegraphics[scale=0.22]{fpen3.pdf}}
\caption{\textbf{Neural network (NN) editing for dynamics learning}. \textbf{(a)}: Mechanical analog of three coupled pendulums. \textbf{(b)}: NN editing example: i) remove the red links contradicting the physics law: ${\theta _3}( {k + 1}) = {\theta _3}(k) + T{v_3}(k)$ (${v_3} \buildrel \Delta \over = \dot \theta_3$ and $T$ is the sampling period), ii) remove the blue links contradicting the physical coupling topology: pendulum 1 $\leftrightsquigarrow$ pendulum 2 $\leftrightsquigarrow$ pendulum 3, iii) preserve the green links and assign weights according to the physics law: ${\theta _1}( {k + 1}) = {\theta _1}(k) + T{v_1}(k)$, iv) partially remove the activation for the ${v_3}( {k + 1})$ computation, i.e., ${v_3}({k + 1}) = {v_3}(k) + \text{act}( f( {{\theta _3}(k),{\theta _2}(k),{v_2}(k)}))$. \textbf{(c)}: Given out-of-distribution initial conditions, trajectory predictions via a fully-connected deep neural network (FDNN) and Phy-Taylor.}
\label{df}
\end{figure*}
\noindent
Through experiments with learning the dynamics of autonomous vehicles and other non-linear physical systems, we show that Phy-Taylor exhibits a considerable reduction in learning parameters, a remarkably accelerated training process, and greatly enhanced model robustness and accuracy (viewed from the perspective of long-horizon prediction of a trajectory). Experiments with safe velocity regulation in autonomous vehicles further demonstrate that the self-correcting Phy-Taylor successfully addresses the dilemma of prediction horizon versus computation time that nonlinear model-predictive control and control barrier functions face in safety-critical control.
\section{Problem Formulation} \label{sec:problem}
\begin{table}[ht] \footnotesize{
\centering
\caption{Table of Notation}
\begin{tabular}{|l|l|}
\hline
$\mathbb{R}^{n}$:~set of $\emph{n}$-dimensional real vectors & $\mathbb{R}_{\ge 0}$:~ set of non-negative real numbers \\ \hline
$\mathbb{N}$:~set of natural numbers & $[\mathbf{x}]_{i}$:~$i$-th entry of vector $\mathbf{x}$ \\ \hline
$[\mathbf{x}]_{i:j}$:~a sub-vector formed by the $i$-th to $j$-th entries of vector $\mathbf{x}$ & $[\mathbf{W}]_{i,j}$:~ element at row $i$ and column $j$ of matrix $\mathbf{W}$ \\ \hline
$[\mathbf{W}]_{i,:}$:~ $i$-th row of matrix $\mathbf{W}$ & $\left[\mathbf{x}~;~\mathbf{y}\right]$:~stacked (tall column) vector of vectors $\mathbf{x}$ and $\mathbf{y}$ \\ \hline
$\mathbf{0}_{n}$:~ $n$-dimensional vector of all zeros & $\mathbf{1}_{n}$:~ $n$-dimensional vector of all ones \\ \hline
$\mathbf{O}_{m \times n}$:~ $m \times n$-dimensional zero matrix & $\mathbf{I}_{n}$:~ $n \times n$-dimensional identity matrix \\ \hline
$|| \cdot ||$:~ Euclidean norm of a vector or absolute value of a number & $\odot$:~ Hadamard product \\ \hline
$\bigcdot$:~ multiplication operator & $\mathrm{len}(\mathbf{x})$:~ length of vector $\mathbf{x}$ \\ \hline
$\text{act}$:~ activation function & $\text{sus}$:~ suppressor function \\ \hline
$\top$:~matrix or vector transposition & \text{ina}: a function that is inactive\\ \hline
$\boxplus$:~a known model substructure parameter & $*$:~an unknown model substructure parameter \\ \hline
\end{tabular}\label{notation}}
\end{table}
\noindent
Consider the problem of computing some output vectors, $\mathbf{y}$, from a set of observations, $\mathbf{x}$. The relation between $\mathbf{x}$ and $\mathbf{y}$ is partially determined by physical models of known structure (but possibly unknown parameter values) and partially unknown, thus calling for representation learning of the missing substructures using neural network observables.
For example, $\mathbf{y}$ might denote the estimated momentum and future position of a target as a function of a vector of observations, $\mathbf{x}$, that includes its position, velocity, and type. In this formulation, position and velocity might be directly related to output quantities via known physical relations, but type is represented only indirectly by an image that requires some representation learning in order to translate it into relevant parameters (such as mass and maneuverability) from which the outputs can be computed.
We express the overall input/output relation by the function:
\begin{align}
\mathbf{y} = \underbrace{\mathbf{A}}_{\text{weight matrix}} \cdot \underbrace{\mathfrak{m}(\mathbf{x},r)}_{\text{node-representation vector}} + \underbrace{\mathbf{f}(\mathbf{x})}_{\text{model mismatch}},
\label{eq:lobja}
\end{align}
\noindent
where $\mathbf{y}$ and $\mathbf{x}$ are the output and input vectors of overall system model, respectively, and the parameter $r$ controls model size. For convenience, Table~\ref{notation} summarizes the remaining notations used throughout the paper.
Since Equation~\eqref{eq:lobja} combines known and unknown model substructures, we distinguish them according to the definition below.
\begin{defa}
For all $i \in \{1, 2, \ldots, \mathrm{len}(\mathbf{y})\}$, $j \in \{1, 2, \ldots, \mathrm{len}(\mathfrak{m}(\mathbf{x},r))\}$, element $[\mathbf{A}]_{i,j}$ is said to be a {\em known model substructure parameter\/} in Equation~\eqref{eq:lobja} if and only if $\frac{{\partial [\mathbf{f}(\mathbf{x})]_i}}{{\partial {[\mathfrak{m}}(\mathbf{x},r)]_{j}}} \equiv 0$. Otherwise, it is called an {\em unknown model substructure parameter\/}.
\label{defj}
\end{defa}
\noindent
Definition~\ref{defj} indicates that a known model substructure covers two practical cases:
\begin{itemize}
\vspace{-0.1in}
\item \textbf{\textit{Known Parameter Values But Unknown Model Formula:}} this describes the scenario in which the model formula is unknown but the values of some involved parameters are known. An example is the dynamics learning in Figure \ref{df}. Here we assume the only available physics knowledge is the physical topology described in Figure \ref{df} (a). Without loss of generality, the dynamics of the third pendulum's velocity can be expressed as ${v_3}( {k + 1}) = {g}( {{\theta _3}(k),{\theta _2}(k),{v_2}(k)})$, which complies with the physical topology in which the first and third pendulums have no direct interaction. For simplicity, letting $\mathfrak{m}(\mathbf{x},r) = \mathbf{x}( k )$, we obtain the ground-truth model in the form of \eqref{eq:lobja}:
\begin{align}
{v_3}( {k + 1}) = \underbrace{\left[ {\begin{array}{*{20}{c}}
0&*&*&0&*&*
\end{array}} \right]}_{\mathbf{A}} \cdot \underbrace{\left[ \begin{array}{l}
{\theta _1}( k)\\
{\theta _2}( k)\\
{\theta _3}( k)\\
{v_1}( k)\\
{v_2}( k)\\
{v_3}( k)
\end{array} \right]}_{\mathfrak{m}(\mathbf{x},r)} + \underbrace{{g}( {{\theta _3}(k),{\theta _2}(k),{v_2}(k)}) - \mathbf{A} \cdot \mathfrak{m}(\mathbf{x},r)}_{\mathbf{f}(\mathbf{x})}. \nonumber
\end{align}
Since $\mathbf{f}(\mathbf{x})$ is independent of ${\theta _1}( k)$ and ${v _1}( k)$, i.e., $\frac{{\partial [\mathbf{f}(\mathbf{x})]_1}}{{\partial {[\mathfrak{m}}(\mathbf{x},r)]_{1}}} = \frac{{\partial [\mathbf{f}(\mathbf{x})]_1}}{{\partial {[\mathfrak{m}}(\mathbf{x},r)]_{4}}} \equiv 0$, according to Definition \ref{defj}, $[\mathbf{A}]_{1,1} = 0$ and $[\mathbf{A}]_{1,4} = 0$ are known model substructure parameters, even though the model formula itself is entirely unknown.
\item \textbf{\textit{Known Model Formula But Unknown Parameter Values:}} it merely means that the model formula that governs this substructure is known but the parameter values may be unknown. One example is the learning of a Lyapunov function, which appears in Supplementary Information (Section~\ref{toy}).
\end{itemize}
\noindent Considering this definition, the problem addressed in this paper is formally stated below.
\begin{prom}
Given a time-series of inputs, $\mathbf{x}$, the corresponding outputs, $\mathbf{y}$, and the known model substructures in Equation~\eqref{eq:lobja}, it is desired to develop an end-to-end neural network that directly estimates $\mathbf{y}$ (denoted by $\widehat{\mathbf{y}}$), given $\mathbf{x}$, consistently with all known model substructures. In other words, the model must satisfy the property that for each known model substructure parameter, $[\mathbf{A}]_{i,j}$, the end-to-end model must ensure that $\frac{{\partial {{[\widehat{\mathbf{y}}}]_i}}}{{\partial {[\mathfrak{m}}\left( {\mathbf{x},r} \right)]_{j}}} \equiv [\mathbf{A}]_{i,j}$ for any $\mathfrak{m}(\mathbf{x}, r)$.
\label{problem}
\end{prom}
\noindent
The above definition allows the system described by Equation~\eqref{eq:lobja} to have an end-to-end model that intertwines well-known substructure properties with high-order unmodeled correlations of unknown nonlinear structure. In this respect, our problem differs from past seminal frameworks of physics-enhanced DNNs~\cite{wang2021physics,willard2021integrating,jia2021physics,jia2019physics,wang2021deep,lu2021physics,chen2021theory,wang2020deep,xu2022physics,karniadakis2021physics,wang2020towards,daw2017physics,cranmer2020lagrangian,finzi2020simplifying,greydanus2019hamiltonian, muralidhar2020phynet,masci2015geodesic,monti2017geometric,horie2020isometric,wang2021incorporating,li2019learning,lusch2018deep,kani2017dr,belbute2020combining,wu2021deepgleam,guen2020disentangling,garcia2019combining,long2018hybridnet,yin2021augmenting}, which use compact partial differential equations (PDEs) to formulate PDE-regularized loss functions and/or DNN architectures that account for the degree of consistency with the PDEs. The proposed solution to Problem~\ref{problem} is the Phy-Taylor framework, which relies on two building blocks: a physics-compatible neural network layer (PhN) and a physics-guided neural network editing mechanism, presented in the next section.
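The compliance requirement of Problem~\ref{problem} can also be checked numerically: for a known model substructure parameter $[\mathbf{A}]_{i,j}$, the gradient of the learned output with respect to the $j$-th augmented feature should equal $[\mathbf{A}]_{i,j}$. Below is a small sketch of such a check (our illustration; \texttt{model\_on\_m} is a hypothetical callable that maps the node-representation vector to $\widehat{\mathbf{y}}$):
\begin{verbatim}
import tensorflow as tf

def check_compliance(model_on_m, m_vec, i, j, A_ij, tol=1e-6):
    """True if d[y_hat]_i / d[m]_j matches the known parameter A_ij."""
    m = tf.Variable(m_vec, dtype=tf.float32)
    with tf.GradientTape() as tape:
        y_i = model_on_m(m)[i]
    grad = tape.gradient(y_i, m)
    return abs(float(grad[j]) - A_ij) < tol
\end{verbatim}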
\section{Phy-Taylor Framework}
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.52]{cpn.pdf}}\\
\vspace{-0.4cm}
\subfigure{\includegraphics[scale=0.52]{tnaa.pdf}}
\caption{Architectures of Phy-Taylor and physics-compatible neural network (PhN).}
\label{PhyArt}
\end{figure*}
The proposed Phy-Taylor for addressing Problem~\ref{problem} is shown in Figure \ref{PhyArt} (b); it is built on the combination of a deep physics-compatible neural network (PhN) and physics-guided neural network (NN) editing. In other words, implementing NN editing according to Taylor's theorem, so as to embed the available physics knowledge into the deep PhN, yields the Phy-Taylor.
As shown in Figure \ref{PhyArt} (a), the PhN is a neural network layer with two key components: (i)
a physics-inspired augmentation (called Phy-Augmentation) for generating the monomials (i.e., $\mathfrak{m}(\mathbf{x},r)$ in Equation~\eqref{eq:lobja}) of the Taylor series expansion of nonlinear functions capturing physical knowledge, and (ii) a noise suppressor for mitigating the influence of noise. The physics-guided NN editing -- including link editing and activation editing -- further modifies network links and activation functions consistently with the physical knowledge. Specifically, link editing removes or preserves links according to their consistency with the physics knowledge (encoded in the system matrix $\mathbf{A}$ in Equation~\eqref{eq:lobja}). Link editing thus transforms the original (fully-connected) weight matrices of the PhNs into edited ones (e.g., ${\mathbf{W}_{\left\langle t \right\rangle }}$ in Figure \ref{PhyArt} (b)). Meanwhile, activation editing performs physics-knowledge-preserving computation in the output channel of each PhN. Concretely, activation editing first separates the edited weight matrix (e.g., ${\mathbf{W}_{\left\langle t \right\rangle }}$) into a knowledge matrix (e.g., ${\mathbf{K}_{\left\langle t \right\rangle }}$) and an uncertainty matrix (e.g., ${\mathbf{U}_{\left\langle t \right\rangle }}$). The knowledge matrix includes all the known model substructure parameters relevant to model~\eqref{eq:lobja}, while the uncertainty matrix excludes all such known parameters. The activation function is then applied only to the computation involving the uncertainty matrix. Through the combination of link and activation editing, the input-output behavior of the Phy-Taylor strictly complies with the available physics knowledge, which is the desired solution to Problem~\ref{problem}. Next, we detail the two components in addressing Problem~\ref{problem}.
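Under one plausible reading of the description above (an assumption on our part, not a specification from the paper), a single edited PhN layer can be sketched as follows, with the knowledge matrix bypassing the activation and the activation applied only to the uncertainty part:
\begin{verbatim}
import numpy as np

def edited_phn_layer(m, K, U, mask, act=np.tanh):
    """m   : augmented node-representation vector m(x, r)
    K   : knowledge matrix (known substructure parameters, frozen)
    U   : uncertainty matrix (trainable)
    mask: 0/1 matrix, 0 where a link is removed by link editing"""
    U_edited = U * mask                  # link editing
    return K @ m + act(U_edited @ m)     # activation only on the uncertainty part
\end{verbatim}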
\subsection{The Physics-compatible Neural Network (PhN)}\label{sec:TNO}
In order to capture non-linear features of physical functions, we introduce a new type of network layer that is augmented with terms derived from Taylor series expansion.
Taylor's theorem offers a series expansion of arbitrary nonlinear functions, as shown below.
\definecolor{cccolor}{rgb}{.67,.7,.67}
\begin{mdframed}
\textbf{Taylor's Theorem (Chapter 2.4 \cite{konigsberger2013analysis}):} Let $\mathbf{g}\!:~ \mathbb{R}^{n} \to \mathbb{R}$ be a $r$-times continuously differentiable function at the point $\mathbf{o} \in \mathbb{R}^{n}$. Then there exists $\mathbf{h}_{\alpha}\!:~ \mathbb{R}^{n} \to \mathbb{R}$, where $\left| \alpha \right| = r$, such that
\begin{align}
&\mathbf{g}( \mathbf{x} ) = \sum\limits_{\left| \alpha \right| \le r} {\frac{{{\partial^\alpha }\mathbf{g}( \mathbf{o} )}}{{\alpha !}}} {\left( {\mathbf{x} - \mathbf{o}} \right)^\alpha } + \sum\limits_{\left| \alpha \right| = r} {{\mathbf{h}_\alpha }( \mathbf{x} ){{( {\mathbf{x} - \mathbf{o}} )^\alpha} }}, \hspace{0.2cm}\text{and}~\mathop {\lim }\limits_{\mathbf{x} \to \mathbf{o}} {\mathbf{h}_\alpha}\left( \mathbf{x} \right) = \mathbf{0}, \label{taylortheorem}
\end{align}
where $\alpha = \left[ {{\alpha _1};{\alpha _2}; \ldots ;{\alpha _n}} \right]$, $\left| \alpha \right| = \sum\limits_{i = 1}^n {{\alpha _i}}$, ~$\alpha ! = \prod\limits_{i = 1}^n {{\alpha _i}}!$, ~${\mathbf{x}^\alpha } = \prod\limits_{i = 1}^n {\mathbf{x}_i^{{\alpha _i}}}$, ~and ${\partial^\alpha }\mathbf{g} = \frac{{{\partial ^{\left| \alpha \right|}} \mathbf{g} }}{{\partial \mathbf{x}_1^{{\alpha _1}} \cdot \ldots \cdot \partial \mathbf{x}_n^{{\alpha _n}}}}$.
\end{mdframed}
\noindent
Taylor's theorem has several desirable properties:
\begin{itemize}
\item \textit{Non-linear Physics Term Representation:} The high-order monomials (i.e., the ones included in $\left( {\mathbf{x} - \mathbf{o}} \right)^\alpha$ with $|\alpha| \ge 2$) of the Taylor series (i.e., $\sum\limits_{\left| \alpha \right| \le r} {\frac{{{\partial^\alpha }\mathbf{g}( \mathbf{o} )}}{{\alpha !}}} {\left( {\mathbf{x} - \mathbf{o}} \right)^\alpha }$) capture core nonlinearities of physical quantities such as kinetic energy ($\triangleq \frac{1}{2}m{v^2}$), potential energy ($\triangleq \frac{1}{2}k{x^2}$), electrical power ($\triangleq V \cdot I$) and aerodynamic drag force ($\triangleq \frac{1}{2}\rho {v^2}{C_D}A$), that drive the state dynamics of physical systems.
\item \textit{Controllable Model Accuracy:} Given ${\mathbf{h}_\alpha}( \mathbf{x} )$ is finite and $\left\| {\mathbf{x} - \mathbf{o}} \right\| < 1$, the error $\sum\limits_{\left| \alpha \right| = r} {{\mathbf{h}_\alpha }( \mathbf{x} ){{( {\mathbf{x} - \mathbf{o}} )^\alpha} }}$ for approximating the ground truth $\mathbf{g}(\mathbf{x})$ will drop significantly as the order $r = |\alpha|$ increases and $\mathop {\lim }\limits_{|\alpha| = r \to \infty } {\mathbf{h}_\alpha}(\mathbf{x}){\left( {\mathbf{x} - \mathbf{o}} \right)^\alpha} = \mathbf{0}$. This allows for controllable model accuracy via controllable order $r$.
\item \textit{Knowledge Embedding:} The Taylor series can directly project the known model-substructure parameters of the ground-truth model \eqref{eq:lobja} into neural network parameters, namely the weight matrix (${\frac{{{\partial^\alpha }\mathbf{g}( \mathbf{o} )}}{{\alpha !}}}$ with $|\alpha| > 0$) and the bias (${\frac{{{\partial^\alpha }\mathbf{g}( \mathbf{o} )}}{{\alpha !}}}$ with $|\alpha| = 0$), thus paving the way to embedding the available physics knowledge in the form of an appropriately weighted neural network layer.
\end{itemize}
\noindent
We note that Taylor's theorem relies on the assumption that the ground truth $\mathbf{g}(\mathbf{x})$ is an $r$-times continuously differentiable function at the point $\mathbf{o}$. If the assumption does not hold, the Taylor series instead approximates an $r$-times continuously differentiable proxy of the ground truth. For continuous functions, this is often a sufficient approximation. Next, we describe how PhNs embed the Taylor series expansion into neural network layers.
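To make the controllable-accuracy property above concrete, the following minimal Python sketch (illustrative only, and not part of the Phy-Taylor implementation) approximates $g(x)=e^{x}$ around the point $o=0$ and reports how the truncation error shrinks as the order $r$ grows.
\begin{verbatim}
# Illustrative only: truncation error of the Taylor series of exp(x) around o = 0.
import math

def taylor_exp(x, o, r):
    """Order-r Taylor expansion of g(x) = exp(x) around the point o."""
    return sum(math.exp(o) / math.factorial(k) * (x - o) ** k for k in range(r + 1))

x, o = 0.7, 0.0
for r in (1, 2, 4, 8):
    err = abs(math.exp(x) - taylor_exp(x, o, r))
    print(f"r = {r}:  |g(x) - Taylor_r(x)| = {err:.2e}")   # error drops rapidly with r
\end{verbatim}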
The resulting architecture (of a single PhN layer) is shown in Figure \ref{PhyArt} (a). Compared with a classical neural network layer, we introduce two additional components: (i) augmented inputs that represent monomials of a Taylor series expansion, and (ii) a suppressor for mitigating the influence of noise on such augmented inputs (i.e., high-order monomials). Next, we detail the two components, respectively.
\subsubsection{Phy-Augmentation with Taylor Series Monomials}
\begin{wrapfigure}{r}{0.57\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.57\textwidth]{pop.pdf}
\end{center}
\vspace{-0.5cm}
\caption{An example of Algorithm~\ref{ALG1} in TensorFlow framework, where input is $\tilde{\mathbf{x}} = \chi(\mathbf{x}-\mathbf{o}) \in \mathbb{R}^3$ and augmentation order $r=3$.}
\vspace{-0.5cm}
\label{phyaug}
\end{wrapfigure}
The function of the physics-inspired augmentation (Phy-Augmentation) is to generate vectors of node representations in the form of Taylor series monomials; it is formally described by Algorithm \ref{ALG1}. Lines \ref{alg1-5}--\ref{alg1-12} of Algorithm \ref{ALG1} guarantee that the generated node-representation vector contains all the monomials of the Taylor series, with none missing and none repeated. Line \ref{alg1-16} shows that Algorithm \ref{ALG1} finally stacks the vectors with a one. This means one PhN node is fixed to one, so the bias of a classical NN (corresponding to ${\frac{{{\partial^\alpha }\mathbf{g}( \mathbf{o} )}}{{\alpha !}}}$ with $|\alpha| = 0$ in the Taylor series) is treated as a link weight of the PhN layer. As the example in Figure \ref{phyaug} shows, Phy-Augmentation empowers each PhN layer to capture the core nonlinearities of physical quantities (e.g., kinetic energy, potential energy, electrical power and aerodynamic drag force) that drive the state dynamics of physical systems, and hence to represent or approximate physics knowledge in the form of a Taylor series.
We note that Algorithm \ref{ALG1} generates two vectors of node representations, denoted by $\mathfrak{m}(\mathbf{x}, r)$ and $\breve{\mathfrak{m}}(\mathbf{x}, r)$. As shown by the PhN architecture in Figure \ref{PhyArt} (a), the only difference between $\breve{\mathfrak{m}}(\mathbf{x},r)$ and $\mathfrak{m}(\mathbf{x},r)$ is their inputs. Specifically, the input of Algorithm \ref{ALG1} for generating $\mathfrak{m}(\mathbf{x},r)$ is the raw input $\mathbf{x}-\mathbf{o}$, while the input for $\breve{\mathfrak{m}}(\mathbf{x},r)$ is the output of the suppressor, i.e., $\chi(\mathbf{x}-\mathbf{o})$. The reason for generating two vectors is that the suppressor introduces an additional mapping on the inputs, which can destroy compliance with the physics knowledge, especially when the knowledge does not account for the suppressor mapping. Therefore,
$\breve{\mathfrak{m}}(\mathbf{x}, r)$ (with the suppressor applied) should be used only in computations involving unknown, uncertain or unverified physics knowledge. We next present the function of the suppressor.
\begin{algorithm} \footnotesize{
\caption{Phy-Augmentation Procedure} \label{ALG1}
\KwIn{augmentation order $r$, center point $\mathbf{o}$, output dimension $\text{len}(\mathbf{y})$, and inputs $\tilde{\mathbf{x}} \triangleq \chi(\mathbf{x}-\mathbf{o})$ and $\bar{\mathbf{x}} \triangleq \mathbf{x}-\mathbf{o}$.}
Generate index vector of input entries: $\mathbf{i} \leftarrow [1;~2;~\ldots~;~\mathrm{len}(\tilde{\mathbf{x}})]$;\label{ALG1-1}\\
Generate augmentations: $\widetilde{\mathfrak{m}}(\mathbf{x},r) \leftarrow \tilde{\mathbf{x}}$ and $\overline{\mathfrak{m}}(\mathbf{x},r) \leftarrow \bar{\mathbf{x}}$; \label{ALG1-1}\\
\For{$ \_\ = 2$ to $r$}{ \label{alg1-3}
\For{$i=1$ to $\mathrm{len}(\tilde{\mathbf{x}})$ \label{alg1-4}}
{Compute temporaries: $\widetilde{\mathbf{{t}}}_{a}$ $\leftarrow [\tilde{\mathbf{x}}]_{i} \cdot [\tilde{\mathbf{x}}]_{\left[ {[\mathbf{i}]_{i}~:~\mathrm{len}( \tilde{\mathbf{x}})} \right]}$ and $\overline{\mathbf{{t}}}_{a}$ $\leftarrow [\bar{\mathbf{x}}]_{i} \cdot [\bar{\mathbf{x}}]_{\left[ {[\mathbf{i}]_{i}~:~\mathrm{len}( \bar{\mathbf{x}})} \right]}$; \label{alg1-5}\\
\eIf{$i==1$ \label{alg1-6}}
{Generate temporaries: $\widetilde{\mathbf{{t}}}_{b} \leftarrow \widetilde{\mathbf{{t}}}_{a}$ and $\overline{\mathbf{{t}}}_{b} \leftarrow \overline{\mathbf{{t}}}_{a}$; \label{alg1-7}\\}
{Generate temporaries: $\widetilde{\mathbf{{t}}}_{b} \leftarrow \left[ \widetilde{\mathbf{{t}}}_{b};~\widetilde{\mathbf{{t}}}_{a} \right]$ and $\overline{\mathbf{{t}}}_{b} \leftarrow \left[ \overline{\mathbf{{t}}}_{b};~\overline{\mathbf{{t}}}_{a} \right]$; \label{alg1-9}\\}
Update index entry: $[\mathbf{i}]_i \leftarrow \mathrm{len}(\tilde{\mathbf{x}})$; \label{alg1-11}\\
Update augmentations: $\widetilde{\mathfrak{m}}(\mathbf{x},r) \leftarrow \left[ \widetilde{\mathfrak{m}}(\mathbf{x},r);~{\widetilde{\mathbf{{t}}}_{b}} \right]$ and $\overline{\mathfrak{m}}(\mathbf{x},r) \leftarrow \left[ \overline{\mathfrak{m}}(\mathbf{x},r);~{\overline{\mathbf{{t}}}_{b}} \right]$; \label{alg1-12}\\
}
\label{alg1-13} }\label{alg1-14}
Output vectors of augmented monomials: $\breve{\mathfrak{m}}(\mathbf{x},r) \leftarrow \left[1; ~\widetilde{\mathfrak{m}}(\mathbf{x},r) \right]$ and $\mathfrak{m}(\mathbf{x},r) \leftarrow \left[1; ~ \overline{\mathfrak{m}}(\mathbf{x},r)\right]$. \label{alg1-16}}
\end{algorithm}
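For concreteness, the following NumPy sketch (an illustrative re-implementation, not the TensorFlow code of Figure \ref{phyaug}) produces the same set of non-missing and non-redundant monomials as Algorithm \ref{ALG1}, with the incremental index bookkeeping of Lines \ref{alg1-5}--\ref{alg1-12} replaced by combinations with replacement; the suppressed input $\chi(\mathbf{x}-\mathbf{o})$ would be processed identically. Its output length agrees with the complexity formula \eqref{cpeq1} derived later.
\begin{verbatim}
import itertools
import numpy as np

def phy_augmentation(x, r):
    """Illustrative re-implementation of the Phy-Augmentation of Algorithm 1.

    Returns [1; monomials]: the constant node followed by every distinct
    monomial of the entries of x with total degree 1, ..., r.
    """
    x = np.asarray(x, dtype=float)
    feats = [1.0]                     # constant node: the bias becomes a link weight
    for degree in range(1, r + 1):
        for idx in itertools.combinations_with_replacement(range(len(x)), degree):
            feats.append(float(np.prod(x[list(idx)])))
    return np.array(feats)

# x in R^3 with augmentation order r = 3: 3 + 6 + 10 monomials plus the
# constant node, i.e. 20 entries in total.
print(len(phy_augmentation([1.0, 2.0, 3.0], 3)))    # -> 20
\end{verbatim}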
\subsubsection{The Noise Suppressor}
The suppressor mitigates the influence of large noise on the augmented high-order monomials. Before describing the working mechanism of the suppressor, we present a definition and a metric pertaining to noise.
\begin{defa}
Consider the noisy data and define the data-to-noise ratio ($\mathrm{DNR}$):
\begin{align}
[\bar{\mathbf{x}}]_i = \underbrace{[\mathbf{h}]_i}_{\text{true data}} + \underbrace{[{\mathbf{w}}]_i}_{\text{noise}} \in \mathbb{R}, ~~~~~~~~~~~~~~~~~~~~\mathrm{DNR}_i \triangleq \frac{[\mathbf{h}]_i}{[\mathbf{w}]_i}.
\label{nd}
\end{align}
Noise $[{\mathbf{w}}]_i$ is said to be large with respect to true data $[{\mathbf{h}}]_i$ if $|\mathrm{DNR}_i| < 1$, otherwise, it is said to be small. \label{def1}
\end{defa}
In view of Definition \ref{def1}, the true data can be equivalently expressed as $[{\mathbf{h}}]_i = \mathrm{DNR}_i \cdot [{\mathbf{w}}]_i$, according to which we have $[\bar{\mathbf{x}}]_i = (1 + \mathrm{DNR}_i){[{\mathbf{w}}]_i}$ such that
\begin{align}
[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q = {( {1 + \mathrm{DNR}_i})^p}{( {1 + \mathrm{DNR}_j} )^q}[\mathbf{w}]_i^p[\mathbf{w}]_j^q, ~~~~~~\text{and}~~~[\mathbf{h}]_i^p[\mathbf{h}]_j^q = \mathrm{DNR}_i^p \cdot \mathrm{DNR}_j^q \cdot [\mathbf{w}]_i^p [\mathbf{w}]_j^q. \label{reformula}
\end{align}
We note that the true data of the high-order monomial $[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q$ is $[{\mathbf{h}}]_i^p[{\mathbf{h}}]_j^q$, while the corresponding noise can be derived from formula~\eqref{reformula} as
\begin{align}
[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q - [\mathbf{h}]_i^p[\mathbf{h}]_j^q = \left[{( {1 + \mathrm{DNR}_i})^p}{( {1 + \mathrm{DNR}_j} )^q} - \mathrm{DNR}_i^p \cdot \mathrm{DNR}_j^q\right] [\mathbf{w}]_i^p[\mathbf{w}]_j^q, \nonumber
\end{align}
which, in conjunction with \eqref{nd}, leads to
\begin{align}
\left| \mathrm{DNR}^{p+q}_{ij} \right| \triangleq \left|\frac{{[\mathbf{h}]_i^p[\mathbf{h}]_j^q}}{{[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q - [\mathbf{h}]_i^p[\mathbf{h}]_j^q}}\right| = \left|\frac{1}{{{{\left( {1 + \frac{1}{{\mathrm{DNR}_i}}} \right)}^p}{{\left( {1 + \frac{1}{{\mathrm{DNR}_j}}} \right)}^q} - 1}}\right|, \label{dnrfinal}
\end{align}
from which we conclude the influence of a high-order monomial on the data-to-noise ratio, presented in the following theorem. Its proof is given in Supplementary Information \ref{SIAAS}.
\begin{thm}
The magnitude of the data-to-noise ratio of the high-order monomial $[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q$, i.e., $|\mathrm{DNR}^{p+q}_{ij}|$ given in Equation \eqref{dnrfinal}, is strictly increasing with respect to $|\mathrm{DNR}_i|$ and $|\mathrm{DNR}_j|$, if
\begin{align}
\mathrm{DNR}_i, ~\mathrm{DNR}_j \in (-\infty, -1] ~~~~~\text{or}~~~~~ \mathrm{DNR}_i, ~\mathrm{DNR}_j \in [-\frac{1}{2}, 0) ~~~~~\text{or}~~~~~ \mathrm{DNR}_i, ~\mathrm{DNR}_j \in (0, \infty). \label{concon}
\end{align}\label{colthm}
\end{thm}
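The monotonicity asserted in Theorem \ref{colthm} can be checked numerically. The short Python sketch below (illustrative only) evaluates \eqref{dnrfinal} for a third-order monomial ($p = 2$, $q = 1$) on two branches of condition \eqref{concon}.
\begin{verbatim}
def dnr_monomial(dnr_i, dnr_j, p, q):
    """|DNR| of the monomial x_i^p * x_j^q, evaluated from the raw DNRs."""
    return abs(1.0 / ((1.0 + 1.0 / dnr_i) ** p * (1.0 + 1.0 / dnr_j) ** q - 1.0))

p, q = 2, 1
for dnr in (-2.0, -4.0, -8.0):    # both raw DNRs in (-inf, -1]
    print(dnr, round(dnr_monomial(dnr, dnr, p, q), 3))
for dnr in (2.0, 4.0, 8.0):       # both raw DNRs in (0, inf)
    print(dnr, round(dnr_monomial(dnr, dnr, p, q), 3))
# In both branches the printed |DNR| of the monomial grows as |DNR_i| = |DNR_j| grows.
\end{verbatim}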
Theorem \ref{colthm} implies that the high-order (i.e., $p + q \ge 2$) monomial $[\bar{\mathbf{x}}]_i^p[\bar{\mathbf{x}}]_j^q$ enlarges its data-to-noise ratio (DNR) if the DNRs of the raw data $[\bar{\mathbf{x}}]_i$ and $[\bar{\mathbf{x}}]_j$ satisfy condition \eqref{concon}. However, it can be verified from equation \eqref{dnrfinal} that if the DNR condition \eqref{concon} does not hold, the high-order monomial can shrink the data-to-noise ratio, which is undesired. This analysis indicates that the PhN can be vulnerable to inputs whose noise violates condition \eqref{concon}, owing to the Phy-Augmentation component generating high-order monomials. Hence, mitigating the influence of noise on the augmented monomials is vital for the robustness of the PhNs and, consequently, of the Phy-Taylor. As shown in Figure \ref{PhyArt} (a), we incorporate a suppressor into the PhN to process the raw input data $\mathbf{x}$, such that the DNRs of the processed data satisfy one of the conditions in \eqref{concon}. The Phy-Augmentation can then further enlarge the DNRs. The working mechanism of the suppressor is stated in the following theorem, whose proof is presented in Supplementary Information \ref{SI02}.
\begin{thm}
Consider noisy input data $[\bar{\mathbf{x}}]_i$ as described in Equation \eqref{nd}, and assume its noise $[\mathbf{w}]_i \ne 0$. If the suppressor mapping, $\chi(\cdot): ~\mathbb{R} \rightarrow \mathbb{R}$, satisfies
\begin{align}
\chi([\bar{\mathbf{x}}]_i) &= \chi([\mathbf{h}]_i + [\mathbf{w}]_i) = \begin{cases}
0, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i < 0\\
[\mathbf{h}]_i + [\mathbf{w}]_i, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i < 0 \\
([\mathbf{h}]_i + [\mathbf{w}]_i) \cdot \kappa_i + \rho_i, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i > 0
\end{cases}, \label{compbadd} \\
|\rho_i| &\ge |[\mathbf{h}]_i + [\mathbf{w}]_i | \cdot |\kappa_i|,
\label{compb}
\end{align}
the suppressor output has the properties:
\begin{align}
&\chi([\bar{\mathbf{x}}]_i) = [\widetilde{\mathbf{h}}]_i + [\widetilde{\mathbf{w}}]_i, ~~~~~~\text{and}~~~~~~\mathrm{DNR}_{i} = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} \in (-\infty, -1], \label{cohgq1} \\
&\left| [\widetilde{\mathbf{h}}]_i - [{\mathbf{h}}]_i\right| = \begin{cases}
\left| {[\mathbf{h}]_i \cdot \left( {\kappa_{i} - 1} \right) + \rho_{i} } \right|, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i > 0\\
0, & \text{otherwise} \\
\end{cases} \label{cohgq2}
\end{align}
where $[\widetilde{\mathbf{h}}]_i$ and $[\widetilde{\mathbf{w}}]_i$ denote the true data and the noise of the suppressor output, respectively, which are computed according to
\begin{align}
\hspace{-0.90cm}[\widetilde{\mathbf{h}}]_i = \begin{cases}
\![\mathbf{h}]_i \cdot \kappa_i + \rho_i, \!\!&[\mathbf{h}]_i \!+\! [\mathbf{w}]_i \!\ge\! 0 ~\text{and}~[\mathbf{w}]_i \!>\! 0 \\
\![\mathbf{h}]_i, \!\!& \text{otherwise}\\
\end{cases}, ~~~~[\widetilde{\mathbf{w}}]_i = \begin{cases}
\!-[\mathbf{h}]_i, \!\!&[\mathbf{h}]_i \!+\! [\mathbf{w}]_i \!<\! 0\\
\![\mathbf{w}]_i, \!\!&[\mathbf{h}]_i \!+\! [\mathbf{w}]_i \!\ge\! 0 ~\text{and}~[\mathbf{w}]_i \!<\! 0 \\
\![\mathbf{w}]_i \cdot \kappa_i, \!\!&[\mathbf{h}]_i \!+\! [\mathbf{w}]_i \!\ge\! 0 ~\text{and}~[\mathbf{w}]_i \!>\! 0
\end{cases}.
\label{cohgq3}
\end{align}
\label{th2}
\end{thm}
Property \eqref{cohgq1} implies that one function of the suppressor is to constrain the DNR of the processed data to the range $(-\infty, -1]$, by processing the raw input data via the mapping \eqref{compbadd}. According to Theorem \ref{colthm}, the high-order monomials can then have larger DNRs than those in the scenario where the suppressor is inactive. The property also indicates that a second function of the suppressor is to compress the noise to be small. The mapping parameters $\rho_i$ and $\kappa_i$ are critical in achieving these functions. However, property \eqref{cohgq2} implies that, even under the imposed condition $|\rho_i| \ge |[\mathbf{h}]_i + [\mathbf{w}]_i | \cdot |\kappa_i|$, the parameters $\rho_i$ and $\kappa_i$ cannot be arbitrarily small or large; otherwise, the suppressor leads to a large distortion of the raw true data $[\mathbf{h}]_i$.
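As an illustration of Theorem \ref{th2}, the following Python sketch (with arbitrarily chosen values of $[\mathbf{h}]_i$, $[\mathbf{w}]_i$, $\kappa_i$ and $\rho_i$ satisfying \eqref{compb}) evaluates the mapping \eqref{compbadd}, recovers the decomposition \eqref{cohgq3}, and confirms the DNR property \eqref{cohgq1}.
\begin{verbatim}
def suppressor(h, w, kappa, rho):
    """Suppressor mapping chi of Theorem 2, returned with its decomposition (h~, w~)."""
    x = h + w
    if x < 0:
        return 0.0, h, -h            # chi = 0,              h~ = h,            w~ = -h
    if w < 0:
        return x, h, w               # chi = h + w,          h~ = h,            w~ = w
    return x * kappa + rho, h * kappa + rho, w * kappa  # h~ = h*kappa + rho, w~ = w*kappa

kappa = 0.1
for h, w in [(1.0, -3.0), (2.0, -0.5), (1.5, 2.5)]:
    rho = -(abs(h + w) * abs(kappa) + 1.0)      # satisfies |rho| >= |h + w| * |kappa|
    chi, h_t, w_t = suppressor(h, w, kappa, rho)
    assert abs(chi - (h_t + w_t)) < 1e-12       # chi = h~ + w~
    print(f"h={h}, w={w}:  chi={chi:.2f},  DNR = h~/w~ = {h_t / w_t:.2f}")  # DNR <= -1
\end{verbatim}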
\subsection{Physics-guided Neural Network Editing}\label{sec:Neu}
\begin{algorithm} \footnotesize{\caption{Physics-guided NN Editing} \label{ALG2}
\KwIn{Available knowledge included in system matrix $\mathbf{A}$ of ground-truth model \eqref{eq:lobja}, number $p$ of Taylor layers, activation functions $\text{act}(\cdot)$, weight matrices ${\mathbf{W}_{\left\langle t \right\rangle }}$, node-representation vectors $\mathfrak{m}(\mathbf{y}_{\left\langle t-1 \right\rangle}, r_{\left\langle t \right\rangle})$ and $\breve{\mathfrak{m}}(\mathbf{y}_{\left\langle t-1 \right\rangle}, r_{\left\langle t \right\rangle})$ generated by Algorithm \ref{ALG1}, where $t = 1,2,\ldots,p$, $\mathbf{y}_{\left\langle 0 \right\rangle} = \mathbf{x}$ and $r_{\left\langle 1 \right\rangle} = r$.}
\SetKwInOut{Output}{Output}
\For{$t = 1$ to $p$}
{
\eIf{$t==1$}
{Generate knowledge matrix $\mathbf{K}_{\left\langle t \right\rangle }$: ~$[\mathbf{K}_{\left\langle t \right\rangle }]_{i,j} \leftarrow \begin{cases}
[\mathbf{A}_{\left\langle t \right\rangle }]_{i,j}, &[\mathbf{A}_{\left\langle t \right\rangle }]_{i,j} = \boxplus\\
0, &\text{otherwise}
\end{cases}$; \label{alg2-3}\\
Generate weight-masking matrix $\mathbf{M}_{\left\langle t \right\rangle }$: ~$[\mathbf{M}_{\left\langle t \right\rangle }]_{i,j} \leftarrow \begin{cases}
0, &[\mathbf{A}_{\left\langle t \right\rangle }]_{i,j} = \boxplus\\
1, &\text{otherwise}
\end{cases}$; \label{alg2-4}\\
Generate activation-masking vector $\mathbf{a}_{\left\langle t \right\rangle }$: \!~$[\mathbf{a}_{\left\langle t \right\rangle }]_{i} \!\leftarrow\! \begin{cases}
0, \!\!\!&[\mathbf{M}_{\left\langle t \right\rangle }]_{i,j} \!=\! 0, \forall j \!\in\! \{1,\ldots, \text{len}(\mathfrak{m}(\mathbf{x}, r_{\left\langle t \right\rangle})) \}\\
1, \!\!\!&\text{otherwise}
\end{cases}$; \label{alg2-5}\\
}
{
Generate knowledge matrix:
\begin{align}
\mathbf{K}_{\left\langle t \right\rangle } \leftarrow \mymatrix; \nonumber
\end{align} \label{alg2-7}\\
Generate weight-masking matrix $\mathbf{M}_{\left\langle t \right\rangle }$: ~$[\mathbf{M}_{\left\langle t \right\rangle }]_{i,j} \leftarrow \begin{cases}
0, &\frac{{\partial [\mathfrak{m}(\mathbf{y}_{\left\langle t \right\rangle}, r_{\left\langle t \right\rangle})]_{j}}}{{\partial [\mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})]_{v}}} \ne 0 ~\text{and}~[\mathbf{M}_{\left\langle 1 \right\rangle }]_{i,v} = 0, ~~v \in \{1,2, \dots, \text{len}(\mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle}))\} \\
1, &\text{otherwise}
\end{cases}$; \label{alg2-8}\\
Generate activation-masking vector $\mathbf{a}_{\left\langle t \right\rangle } \leftarrow \left[ \mathbf{a}_{\left\langle 1 \right\rangle };~ \mathbf{1}_{\text{len}(\mathbf{y}_{\left\langle t \right\rangle }) - \text{len}(\mathbf{y}_{\left\langle 1 \right\rangle })} \right]$; \label{alg2-9}\\
}
Generate uncertainty matrix $\mathbf{U}_{\left\langle t \right\rangle } \leftarrow \mathbf{M}_{\left\langle t \right\rangle } \odot {\mathbf{W}_{\left\langle t \right\rangle }}$; \label{alg2-11}\\
Compute output: $\mathbf{y}_{\left\langle t \right\rangle } \leftarrow \mathbf{K}_{\left\langle t \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle t \right\rangle}) + \mathbf{a}_{\left\langle t \right\rangle } \odot \text{act}\left( {\mathbf{U}_{\left\langle t
\right\rangle } \cdot \breve{\mathfrak{m}} \left( {\mathbf{x}, r_{\left\langle t \right\rangle } } \right)} \right)$ \label{alg2-12};
}
\Output{terminal output: $\widehat{\mathbf{y}} \leftarrow \mathbf{y}_{\left\langle p \right\rangle }$}}
\end{algorithm}
Building on the deep PhNs, this section presents the neural network editing for embedding and preserving the available physics knowledge, through the physics-guided link and activation editing. Specifically, the link editing centers around removing and preserving the links according to the consistency with physics knowledge. Meanwhile, the activation editing performs the physics-knowledge-preserving computing in the output channels of PhNs. Thanks to the concurrent link and activation editing, the input-output of Phy-Taylor can strictly comply with the available physics knowledge.
Using the notation `$\boxplus$' defined in Table \ref{notation}, the procedure of physics-guided NN editing is described by Algorithm \ref{ALG2}, where the knowledge matrix $\mathbf{K}_{\left\langle t \right\rangle}$ (generated in Lines \ref{alg2-3} and \ref{alg2-7}) and the weight-masking matrix $\mathbf{M}_{\left\langle t \right\rangle }$ (generated in Lines \ref{alg2-4} and \ref{alg2-8}) result from link editing according to the available physics knowledge. The matrices $\mathbf{K}_{\left\langle t \right\rangle}$ and $\mathbf{M}_{\left\langle t \right\rangle }$ are then used for activation editing. Concretely, $\mathbf{K}_{\left\langle t \right\rangle}$ includes all the known model-substructure parameters. The matrix $\mathbf{M}_{\left\langle t \right\rangle }$ is used to generate the uncertainty matrix $\mathbf{U}_{\left\langle t \right\rangle }$ (see Line \ref{alg2-11}), which includes all the unknown model-substructure parameters, by freezing to zero all the known model-substructure parameters of the weight matrix $\mathbf{W}_{\left\langle t \right\rangle}$. In other words, through NN editing, the original weight matrix $\mathbf{W}_{\left\langle t \right\rangle}$ is edited to embed the available physics knowledge; the edited matrix is then separated into the knowledge matrix $\mathbf{K}_{\left\langle t \right\rangle}$ and the uncertainty matrix $\mathbf{U}_{\left\langle t \right\rangle }$ for the physics-knowledge-preserving computation of the output.
\begin{wrapfigure}{r}{0.37\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.37\textwidth]{asdexp.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Example of Algorithm~\ref{ALG2}.}
\vspace{-0.2cm}
\label{explanexample}
\end{wrapfigure}
For the edited weight matrix (equivalently, the knowledge matrix), if all the entries in a given row are known model-substructure parameters, the associated activation must be inactive. Otherwise, the Phy-Taylor cannot strictly preserve the available physics knowledge, owing to the extra nonlinear mappings induced by the activation functions. This motivates the physics-knowledge-preserving computation in Line \ref{alg2-12} of Algorithm \ref{ALG2}.
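As a minimal sketch of this computation (using NumPy, with a synthetic mask convention in which \texttt{np.nan} marks the entries of $\mathbf{A}$ that are not known; the suppressor is assumed inactive so that $\breve{\mathfrak{m}} = \mathfrak{m}$), one edited PhN layer can be written as follows.
\begin{verbatim}
import numpy as np

def edited_phn_layer(m, A_known, W, act=np.tanh):
    """y = K . m + a * act(U . m): knowledge-preserving output of one edited layer."""
    known = ~np.isnan(A_known)                # True where a model-substructure
                                              # parameter is known
    K = np.where(known, A_known, 0.0)         # knowledge matrix
    U = np.where(known, 0.0, W)               # uncertainty matrix (masked weights)
    a = np.any(~known, axis=1).astype(float)  # activation off for fully-known rows
    return K @ m + a * act(U @ m)

m = np.array([1.0, 0.5, -0.2, -0.1])              # e.g. [1; x1; x2; x1*x2]
A_known = np.array([[0.0, 2.0, np.nan, np.nan],   # row 0: partly known
                    [0.0, 1.0, -3.0, 0.0]])       # row 1: fully known -> no activation
W = np.random.default_rng(0).normal(size=A_known.shape)
print(edited_phn_layer(m, A_known, W))
\end{verbatim}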
We next use a simplified example (with the suppressor inactive) to explain how Algorithm \ref{ALG2} is implemented. Consider the ground-truth model
\begin{align}
{y_1} = T {x_2} + {g_1}\left( {{x_3}} \right), ~~~~~~~{y_2} = {g_2}\left( {{x_2},{x_3}} \right). \label{ground-truth-model}
\end{align}
where the formulas of ${g_1}\left( \cdot \right)$ and ${g_2}\left( \cdot \right)$ and the parameter $T$ are assumed unknown, while the remaining knowledge is known; for example, it is known that $y_2$ does not depend on $x_1$. We use two PhNs to learn the model. The edited network structure is shown in Figure~\ref{explanexample}. With the available knowledge, we obtain from Algorithm \ref{ALG2} that
\begin{align}
&\hspace{-0.8cm}{\mathbf{K}_{\left\langle 1 \right\rangle }} \!=\! \left[\!\! {\begin{array}{*{20}{c}}
0&T&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0
\end{array}} \!\!\right]\!,\hspace{1.28cm}{\mathbf{K}_{\left\langle 2 \right\rangle }} \!=\! \left[\!\! {\begin{array}{*{20}{c}}
0&1&0&0&0&0\\
0&0&1&0&0&0
\end{array}} \!\!\right]\!\!, ~~~{\mathbf{a}_{\left\langle 1 \right\rangle }} = {\mathbf{a}_{\left\langle 2 \right\rangle }} = \left[ {1;1} \right]\!,\nonumber\\
&\hspace{-0.8cm}{\mathbf{U}_{\left\langle 1 \right\rangle }} \!=\! \left[\!\! {\begin{array}{*{20}{c}}
{{w_1}}&0&0&{{w_2}}&0&0&0&0&0&{{w_3}}\\
{{w_4}}&0&{{w_5}}&{{w_6}}&0&0&0&{{w_7}}&{{w_8}}&{{w_9}}
\end{array}} \!\!\right]\!\!,{\mathbf{U}_{\left\langle 2 \right\rangle }} \!=\! \left[\! {\begin{array}{*{20}{c}}
{{w_{10}}}&0&0&0&0&0\\
{{w_{11}}}&0&{{w_{12}}}&0&0&{{w_{13}}}
\end{array}} \!\right]\!. \nonumber
\end{align}
The terminal output is then computed according to Line \ref{alg2-12} of Algorithm \ref{ALG2} as
\begin{align}
&\hspace{-0.9cm}{{\hat y}_1} = {v_1} + \text{act}( {{w_{10}}}) = T{x_2} + \text{act}( {{w_1} + {w_2}{x_3} + {w_3}x_3^2}) + \text{act}( {{w_{10}}}), \nonumber\\
&\hspace{-0.9cm}{{\hat y}_2} = {v_2} + \text{act}( {{w_{11}} + {w_{12}}{v_2} + {w_{13}}v_2^2}) = \text{act}( {{w_4} + {w_5}{x_2} + {w_6}{x_3} + {w_7}x_2^2 + {w_8}{x_2}{x_3} + {w_9}x_3^2}) \nonumber\\
&\hspace{5.5cm} + \text{act}\!\left( {{w_{11}} + {w_{12}}\text{act}\!\left( {{w_4} + {w_5}{x_2} + {w_6}{x_3} + {w_7}x_2^2 + {w_8}{x_2}{x_3} + {w_9}x_3^2} \right)} \right. \nonumber\\
&\hspace{5.5cm} + {w_{13}}{\text{act}^2}\!\!\left. {\left( {{w_4} + {w_5}{x_2} + {w_6}{x_3} + {w_7}x_2^2 + {w_8}{x_2}{x_3} + {w_9}x_3^2} \right)} \right), \nonumber
\end{align}
which strictly complies with the available knowledge pertaining to the ground truth model \eqref{ground-truth-model}.
\subsection{The Solution to Problem \ref{problem}: Phy-Taylor} \label{sec:pintaylor}
As described by Figure \ref{PhyArt} (b), implementing the physics-guided NN editing on the deep PhN yields the \underline{Phy-Taylor}. The Phy-Taylor embeds the available physics knowledge directly into the PhNs, such that its input-output behavior strictly complies with the physics knowledge, as formally stated in the following theorem. The proof of the theorem appears in Supplementary Information \ref{SI03}.
\begin{thm}
Consider the Phy-Taylor described by Figure \ref{PhyArt} (b), where $r_{\left\langle 1 \right\rangle} = r$. The input--output (i.e., ${\mathbf{x}}$--$\widehat{\mathbf{y}}$) of Phy-Taylor strictly complies with the available knowledge pertaining to the physics model of ground truth \eqref{eq:lobja}, i.e., if the $[\mathbf{A}]_{i,j}$ is a known model-substructure parameter, then $\frac{{\partial {{[\widehat{\mathbf{y}}}]_i}}}{{\partial {[\mathfrak{m}}\left( {\mathbf{x},r} \right)]_{j}}} \equiv \frac{{\partial {[\mathbf{y}]_i}}}{{\partial {[\mathfrak{m}}\left( {\mathbf{x},r} \right)]_{j}}} \equiv [\mathbf{A}]_{i,j}$ always holds. \label{thmmm2}
\end{thm}
Moving forward, this section analyzes the properties of the Phy-Taylor in comparison with fully-connected DNNs.
\subsection{Phy-Taylor Properties} \label{sec:analysis}
\subsubsection{Parameter Reduction}
Figure \ref{phyaug} shows that Phy-Augmentation, i.e., Algorithm~\ref{ALG1}, expands the input without involving weight-matrix multiplication. This trait can be leveraged to significantly reduce the number of model parameters (weights and biases). For the demonstration, we consider the two network models in Figure \ref{pint} (a) and (b): (a) describes a fully-connected two-layer network, while (b) describes a single PhN layer. Given the same dimensions of input and terminal output, the number of learning parameters of Figure \ref{pint} (a) is $(m+1)n + (n+1)p$ (comprising $(m + p)n$ weights and $n+p$ biases), while the number of learning parameters of Figure \ref{pint} (b) is $(n+1)p$ (comprising $n \times p$ weights and $p$ biases). The difference in the number of learning parameters is thus
\begin{align}
(m+1)n + (n+1)p - (n+1)p = (m+1)n. \label{rs}
\end{align}
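As a quick sanity check of \eqref{rs} with arbitrary layer sizes (illustrative only):
\begin{verbatim}
def two_layer_params(m, n, p):     # fully-connected network: m -> n -> p
    return (m + 1) * n + (n + 1) * p

def single_phn_params(n, p):       # one PhN layer: n augmented inputs -> p outputs
    return (n + 1) * p

m, n, p = 10, 20, 5
print(two_layer_params(m, n, p) - single_phn_params(n, p))   # (m+1)*n = 220
\end{verbatim}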
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.52]{fnbc.pdf}}
\caption{(a): Two fully-connected NN layers. (b): A single PhN layer.}
\label{pint}
\end{figure*}
We note that the number of reduced parameters in \eqref{rs} is a lower bound for the Phy-Taylor, since \eqref{rs} is obtained without considering the physics-guided NN editing that removes and freezes links and biases according to the available physics knowledge. In other words, the number of parameters removed by the Phy-Taylor is at least $(m+1)n$.
\subsubsection{Single PhN Layer vs. Cascading PhN Layers: Complexity}
We next investigate whether the complexity (i.e., the number of augmented monomials) of a single PhN layer with a large order can be reduced by cascading PhNs with relatively small orders. For brevity, a single PhN layer is denoted by
\begin{align}
\widehat{\mathbf{y}} = \mathrm{PhN}( {\left. \mathbf{x} \in \mathbb{R}^{n} \right| r}) \in \mathbb{R}^{m},\label{sTNO}
\end{align}
whose complexity of monomial augmentation is formally stated in the following theorem, with its proof appearing in Supplementary Information \ref{SI04}.
\begin{thm}
Consider a single PhN layer \eqref{sTNO}. The complexity of its Phy-Augmentation, i.e., the dimension of terminal output of Algorithm~\ref{ALG1}, is
\begin{align}
\mathrm{len}(\mathfrak{m}(\mathbf{x},r)) = \mathrm{len}(\breve{\mathfrak{m}}(\mathbf{x},r)) = \sum\limits_{s = 1}^r {\frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}} + 1. \label{cpeq1}
\end{align}\label{thm3}
\end{thm}
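The formula \eqref{cpeq1} can be cross-checked by direct enumeration; the following Python sketch (illustrative only) compares the closed form with a brute-force monomial count for a few small pairs $(n, r)$.
\begin{verbatim}
import itertools
from math import comb

def len_formula(n, r):             # sum_{s=1}^{r} C(n+s-1, s) + 1
    return sum(comb(n + s - 1, s) for s in range(1, r + 1)) + 1

def len_enumeration(n, r):         # count the distinct monomials directly, plus the 1
    count = 1
    for degree in range(1, r + 1):
        count += sum(1 for _ in
                     itertools.combinations_with_replacement(range(n), degree))
    return count

for n, r in [(2, 2), (2, 4), (3, 3)]:
    assert len_formula(n, r) == len_enumeration(n, r)
    print(n, r, len_formula(n, r))   # e.g. (2, 4) -> 15: the 14 monomials of a
                                     # two-input, order-4 PhN plus the constant node
\end{verbatim}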
Theorem \ref{thm3} indicates that the complexity of input augmentation of a single PhN strictly increases with the augmentation order $r$. We next show that the complexity can be reduced by an architecture of cascading PhNs, denoted by
\begin{align}
&\mathbf{x} \in {{\mathbb{R}}^n} ~~\longmapsto~~ {\mathbf{y}_{\left\langle 1\right\rangle }} = \mathrm{PhN}( {\left. \mathbf{x} \right|{r_{\left\langle 1 \right\rangle }}}) \in {{\mathbb{R}}^{{m_{\left\langle 1 \right\rangle }}}} ~~\longmapsto~~ \ldots ~~\longmapsto~~ {\mathbf{y}_{{\left\langle d-1\right\rangle }}} = \mathrm{PhN}( {\left. {{\mathbf{y}_{{\left\langle d-2 \right\rangle }}}} \right|{r_{{\left\langle d-1 \right\rangle }}}}) \in {{\mathbb{R}}^{{m_{{\left\langle d-1 \right\rangle }}}}} \nonumber\\
&\hspace{9.00cm} \longmapsto~~ \widehat{\mathbf{y}} = \mathrm{PhN}( {\left. {{\mathbf{y}_{{\left\langle d-1 \right\rangle }}}} \right|{r_{\left\langle d \right\rangle }}}) \in \mathbb{R}^{m},\label{cko}
\end{align}
which consists of $d$ PhNs. To guarantee that the cascading PhNs \eqref{cko} and the single PhN \eqref{sTNO} generate the same monomials, their augmentation orders are required to satisfy
\begin{align}
\prod\limits_{v = 1}^d {{r_{{\left\langle v \right\rangle}}}} = r, ~~~~~~ \forall r_{{\left\langle v \right\rangle }}, ~~r \in \mathbb{N}. \label{mcc}
\end{align}
The complexity difference between single PhN and cascading PhNs is formally presented in the following corollary of Theorem \ref{thm3}, whose proof is presented in Supplementary Information \ref{SI06}.
\begin{cor}
Consider the single PhN \eqref{sTNO} and the cascading PhNs \eqref{cko}. Under the constraint condition \eqref{mcc}, the complexity difference is
\begin{align}
\hspace{-0.5cm}\mathrm{len}(\mathfrak{m}(\mathbf{x},r)) - \sum\limits_{p = 1}^d {\mathrm{len}(\mathfrak{m}(\mathbf{x},r_{{\left\langle p \right\rangle }}))} &= \mathrm{len}(\breve{\mathfrak{m}}(\mathbf{x},r)) - \sum\limits_{p = 1}^d {\mathrm{len}(\breve{\mathfrak{m}}(\mathbf{x},r_{{\left\langle p \right\rangle }}))} \nonumber\\
&= \sum\limits_{s = {r_{\left\langle 1 \right\rangle }} + 1}^r {\frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}} - \sum\limits_{v = 1}^{d - 1} {\sum\limits_{s = 1}^{{r_{{\left\langle v+1 \right\rangle }}}} {\frac{{\left( {{n_{\left\langle v \right\rangle }} + s - 1} \right)!}}{{\left( {{n_{\left\langle v \right\rangle }} - 1} \right)!s!}}} } + 1 - d. \label{cffc}
\end{align}\label{cor}
\end{cor}
Corollary \ref{cor} implies that the output dimensions and the augmentation orders of the intermediate PhNs are critical to achieving a significant reduction of complexity via cascading PhNs, as demonstrated by the toy Example \ref{exd1} in the Supplementary Information.
\subsubsection{Single PhN Layer vs. Cascading PhN Layers: Model Accuracy}
The previous subsection concluded that, given a reasonable setting of the output dimensions and augmentation orders of the intermediate layers, cascading PhNs can significantly reduce the complexity of input augmentation. An intuitive question then arises: \textit{Do the cascading PhNs reduce the complexity at the cost of model accuracy?} Without loss of generality, we use the following example to answer this question.
For simplicity of explanation, we ignore the bias and consider the scenario in which both the activation and the suppressor are inactive. For the single PhN \eqref{sTNO}, we let $\mathbf{x} \in \mathbb{R}^2$, $\widehat{\mathbf{y}} \in \mathbb{R}$ and $r= 4$. Its output is then computed according to
\begin{align}
&\widehat{\mathbf{y}} = {w_1}{[\mathbf{x}]_1} + {w_2}{[\mathbf{x}]_2} + {w_3}[\mathbf{x}]_1^2 + {w_4}{[\mathbf{x}]_1}{[\mathbf{x}]_2} + {w_5}[\mathbf{x}]_2^2 + {w_6}[\mathbf{x}]_1^3 + {w_7}[\mathbf{x}]_1^2{[\mathbf{x}]_2} + {w_8}{[\mathbf{x}]_1}[\mathbf{x}]_2^2 + {w_9}[\mathbf{x}]_2^3 \nonumber\\
&\hspace{1.80cm}+ {w_{10}}[\mathbf{x}]_1^4 + {w_{11}}[\mathbf{x}]_1^3{[\mathbf{x}]_2} + {w_{12}}[\mathbf{x}]_1^2[\mathbf{x}]_2^2 + {w_{13}}{[\mathbf{x}]_1}[\mathbf{x}]_2^3 + {w_{14}}[\mathbf{x}]_2^4, \label{cx1}
\end{align}
where ${w_1},{w_2}, \ldots ,{w_{14}}$ denote link weights (learning parameters). For the corresponding cascading PhNs \eqref{cko}, we let $n = 2$, $r_{{\left\langle 1 \right\rangle }} = r_{{\left\langle 2 \right\rangle }} = 2$ (so that \eqref{mcc} holds with $r = 4$) and $m = 1$. Its terminal output is computed as
\begin{align}
\widehat{\mathbf{z}} &= {{\hat w}_1}{\mathbf{y}} + {{\hat w}_2}\mathbf{y}^2 = {{\hat w}_1}( {{{\tilde w}_1}{[\mathbf{x}]_1} + {{\tilde w}_2}{[\mathbf{x}]_2} + {{\tilde w}_3}[\mathbf{x}]_1^2 + {{\tilde w}_4}{[\mathbf{x}]_1}{[\mathbf{x}]_2} + {{\tilde w}_5}[\mathbf{x}]_2^2}) \nonumber\\
&\hspace{6.2cm} + {{\hat w}_2}{\left( {{{\tilde w}_1}{[\mathbf{x}]_1} + {{\tilde w}_2}{[\mathbf{x}]_2} + {{\tilde w}_3}[\mathbf{x}]_1^2 + {{\tilde w}_4}{[\mathbf{x}]_1}{[\mathbf{x}]_2} + {{\tilde w}_5}[\mathbf{x}]_2^2} \right)^2} \nonumber\\
& = \textcolor[rgb]{1.00,0.00,0.00}{{{\hat w}_1}{{\tilde w}_1}}{[\mathbf{x}]_1} + \textcolor[rgb]{1.00,0.00,0.00}{{{\hat w}_1}{{\tilde w}_2}}{[\mathbf{x}]_2} + ( {{{\hat w}_1}{{\tilde w}_4} + 2{{\hat w}_2}{{\tilde w}_1}{{\tilde w}_2}}){[\mathbf{x}]_1}{[\mathbf{x}]_2} + \textcolor[rgb]{1.00,0.00,0.00}{( {{{\hat w}_1}{{\tilde w}_3} + {{\hat w}_2}\tilde w_1^2})}[\mathbf{x}]_1^2 + ( {{{\hat w}_1}{{\tilde w}_5} + {{\hat w}_2}\tilde w_2^2})[\mathbf{x}]_2^2 \nonumber\\
&~~~~ + \textcolor[rgb]{1.00,0.00,0.00}{2{{\hat w}_2}{{\tilde w}_1}{{\tilde w}_3}}[\mathbf{x}]_1^3 + 2{\hat w}_2( {{{\tilde w}_1}{{\tilde w}_4} + {{\tilde w}_2}{{\tilde w}_3}})[\mathbf{x}]_1^2{[\mathbf{x}]_2} + 2{\hat w}_2( {{{\tilde w}_1}{{\tilde w}_5} + {{\tilde w}_2}{{\tilde w}_4}}){[\mathbf{x}]_1}[\mathbf{x}]_2^2 + 2{{\hat w}_2}{{\tilde w}_2}{{\tilde w}_5}[\mathbf{x}]_2^3\nonumber\\
&~~~~ + \textcolor[rgb]{1.00,0.00,0.00}{{{\hat w}_2}\tilde w_3^2}[\mathbf{x}]_1^4 + 2{{\hat w}_2}{{\tilde w}_4}{{\tilde w}_3}[\mathbf{x}]_1^3{[\mathbf{x}]_2} + {{\hat w}_2}\tilde w_4^2[\mathbf{x}]_1^2[\mathbf{x}]_2^2 + 2{{\hat w}_2}{{\tilde w}_4}{{\tilde w}_5}{[\mathbf{x}]_1}[\mathbf{x}]_2^3 + {{\hat w}_2}\tilde w_5^2[\mathbf{x}]_2^4, \label{cx2}
\end{align}
where ${\tilde w}_1, {\tilde w}_2, \ldots, {\tilde w}_5$ are the learning parameters of the first PhN, ${\hat w}_1$ and ${\hat w}_2$ are the learning parameters of the second PhN.
Observing \eqref{cx2}, we discover that if ${\hat w}_1$ and ${\hat w}_2$ are non-zero, the parameters ${\tilde w}_1$, ${\tilde w}_2$ and ${\tilde w}_3$ can be inferred directly from the coefficients associated with the monomials ${[\mathbf{x}]_1}$, ${[\mathbf{x}]_2}$ and ${[\mathbf{x}]^3_1}$, i.e., ${{\hat w}_1}{{\tilde w}_1}$, ${{\hat w}_1}{{\tilde w}_2}$ and $2{{\hat w}_2}{{\tilde w}_1}{{\tilde w}_3}$; the coefficients associated with the monomials ${[\mathbf{x}]^2_1}$ and ${[\mathbf{x}]^4_1}$, i.e., ${{{\hat w}_1}{{\tilde w}_3} + {{\hat w}_2}\tilde w_1^2}$ and ${{\hat w}_2}\tilde w_3^2$, are then determined. In other words, the coefficients associated with the monomials ${[\mathbf{x}]_1}$, ${[\mathbf{x}]_2}$ and ${[\mathbf{x}]^3_1}$ constrain the freedom of the coefficients associated with the monomials ${[\mathbf{x}]^2_1}$ and ${[\mathbf{x}]^4_1}$, which reduces the coefficient space of the polynomial and can consequently influence the accuracy of the trained model. We can thus conclude that \textit{if the reduced coefficient space is associated with node connections that contradict the physics knowledge, the cascading PhNs can further increase model accuracy; otherwise, they reduce the complexity at the cost of model accuracy}.
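The constrained coefficient space can also be inspected symbolically. The following sketch (using SymPy, illustrative only) expands the cascaded model of \eqref{cx2} and reads off the coefficients discussed above; all 14 monomials of the single-PhN model \eqref{cx1} appear, but they are governed by only 7 free parameters.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
t1, t2, t3, t4, t5 = sp.symbols('t1 t2 t3 t4 t5')   # first-PhN weights (w-tilde)
h1, h2 = sp.symbols('h1 h2')                        # second-PhN weights (w-hat)

y = t1*x1 + t2*x2 + t3*x1**2 + t4*x1*x2 + t5*x2**2  # first PhN, order 2
z = sp.expand(h1*y + h2*y**2)                       # second PhN, order 2

poly = sp.Poly(z, x1, x2)
print(poly.coeff_monomial(x1))        # h1*t1
print(poly.coeff_monomial(x1**2))     # h1*t3 + h2*t1**2
print(poly.coeff_monomial(x1**3))     # 2*h2*t1*t3
print(poly.coeff_monomial(x1**4))     # h2*t3**2
print(len(poly.monoms()))             # 14 distinct monomials
\end{verbatim}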
\subsection{Extension: Self-Correcting Phy-Taylor}
Recent incidents due to the deployment of DNNs overshadow the revolutionizing potential of AI in physical engineering systems \cite{arxr3, brief2021ai}, especially in safety-critical systems, whose unintended behavior can result in death or serious injury to people, or severe damage to equipment or the environment. For instance, Tesla's automated driving technology has been involved in fatal crashes, with problems in the car's Autopilot blamed in several cases \cite{arxr1,arxr2}. Consequently, according to the 2018 Public Policy Polling/Consumer Watchdog poll \cite{asdefg}, 75\% of US voters wish Congress would apply the brakes to self-driving cars, reflecting that safety concerns are the primary impediment to adoption. Safe control and planning is a fundamental approach for enhancing the safety assurance of AI-assisted physical systems, which often operate in environments where time and safety are critical, such as airplanes, medical drones and autonomous vehicles.
\begin{wrapfigure}{r}{0.70\textwidth}
\vspace{-0.9cm}
\begin{center}
\includegraphics[width=0.70\textwidth]{ghf.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Self-correcting Phy-Taylor Architecture: $\mathbf{u}(k)$ denotes the vector of real-time decisions, ${\bf{s}}(u(k))$ denotes the vector of real-time safety metrics, $\bf{c}$ is the vector of safety bounds.}
\vspace{-0.5cm}
\label{ghf}
\end{wrapfigure}
In the field of safety-critical control, to comply with safety constraints in the face of potential conflicts with control objectives, the framework of control barrier functions (CBF) has been proposed for computing real-time safe control commands \cite{ames2019control,singletary2022onboard}. CBFs, however, use only current state information without prediction; the resulting control policy is therefore greedy and challenging to use for proactive safe control. It is well known that model predictive control yields a less greedy safe control policy, since it takes future state information into account \cite{williams2018information,falcone2007predictive}. Motivated by these observations, model predictive control with an incorporated control barrier function (MPC-CBF) was proposed \cite{zeng2021safety}. Owing to the nonlinear dynamics, the nonlinear MPC-CBF however faces a dilemma between the prediction horizon and the computation time of safe control commands, which induces considerable feedback delays and thus leads to failures in time- and safety-critical operating environments. To address this dilemma, we propose the self-correcting Phy-Taylor, whose architecture is shown in Figure \ref{ghf}. One mission of the self-correcting Phy-Taylor is to learn the safety relationship between the real-time decisions and the safety metrics, with consideration of future information:
\begin{align}
\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau) = \sum\limits_{t = k}^{k + \tau - 1} {\tilde{\mathbf{f}}(\mathbf{x}(t),\mathbf{u}(t))} , \label{mkbzkm}
\end{align}
where $\tilde{\mathbf{f}}(\mathbf{x}(t),\mathbf{u}(t))$ is the predefined vector of safety metrics at time $t$, and $\tau$ denotes the horizon of future information of safety metrics.
Inside the self-correcting Phy-Taylor, the learned safety relationship approximating \eqref{mkbzkm} is first subjected to off-line verification against the available constraints, physics knowledge or properties, based on which necessary revisions are made. According to the off-line verified and (if needed) revised safety relationship, the correction of the real-time decision $\mathbf{u}(k)$ is triggered if any safety metric $[\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau)]_{i}$, $i \in \{1,2,\ldots,h\}$, exceeds (or leaves) the preset safety bounds (or safety envelopes). However, a formula learned directly in the form of \eqref{mkbzkm} is not ready (if not impossible) to deliver this procedure, owing to the complicated dependence of $[\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau)]_{i}$ on both the system state and the decision. To address this problem, as shown in Figure \ref{ghf}, we decouple the real-time decisions from the real-time system states. Specifically, \\
-- Given the real-time system state $\mathbf{x}(k)$ as the original input, the first Phy-Taylor outputs the real-time decision $\mathbf{u}(k)$, motivated by the fact that state-feedback control is the most commonly used scheme in physical engineering systems \cite{doyle2013feedback}. In other words, the computation of $\mathbf{u}(k)$ directly depends on the real-time system state $\mathbf{x}(k)$. \\
-- Given the real-time decision $\mathbf{u}(k)$ (i.e., the output of the first Phy-Taylor) as the input of the second Phy-Taylor, the terminal output is the real-time safety metric $\mathbf{s}(\mathbf{u}(k))$, which is motivated by the fact that the decision $\mathbf{u}(k)$ manipulates system state. In other words, the safety metric $\mathbf{s}(\mathbf{u}(k))$ directly depends on decision $\mathbf{u}(k)$ and indirectly depends on system state $\mathbf{x}(k)$. \\
-- The two Phy-Taylors are trained simultaneously according to the training loss function:
\begin{align}
\mathcal{L} = \alpha \left\| {{\bf{s}}(\mathbf{u}(k)) - {\bf{s}}({\bf{x}}(k),{\bf{u}}(k),\tau )} \right\| + \beta \left\| {{\bf{u}}(k) - {\bf{\hat u}}(k)} \right\|, \label{tranloss}
\end{align}
where ${\bf{s}}({\bf{x}}(k),{\bf{u}}(k),\tau )$ given in \eqref{mkbzkm} is the ground truth of the safety-metric vector, ${\bf{\hat u}}(k)$ is the ground truth of the decision vector, and $\alpha$ and $\beta$ are hyperparameters (a minimal sketch of this loss appears after this list). The two cascading Phy-Taylors thus depend on each other.\\
-- To render the learned safety relationship ${\bf{s}}(\mathbf{u}(k))$ tractable, the activation and the suppressor inside the second Phy-Taylor are inactive, such that ${\bf{s}}(\mathbf{u}(k))$ is expressed in the form of a Taylor series.
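A minimal sketch of the joint loss \eqref{tranloss} (illustrative; the two Phy-Taylor networks producing $\mathbf{u}(k)$ and ${\bf{s}}(\mathbf{u}(k))$ are assumed given) is shown below.
\begin{verbatim}
import numpy as np

def self_correcting_loss(s_pred, s_true, u_pred, u_true, alpha=1.0, beta=1.0):
    """alpha*||s(u(k)) - s(x(k),u(k),tau)|| + beta*||u(k) - u_hat(k)||."""
    return (alpha * np.linalg.norm(s_pred - s_true)
            + beta * np.linalg.norm(u_pred - u_true))

# synthetic values, for illustration only
s_pred, s_true = np.array([0.8, 1.1]), np.array([1.0, 1.0])
u_pred, u_true = np.array([0.2]), np.array([0.25])
print(self_correcting_loss(s_pred, s_true, u_pred, u_true, alpha=1.0, beta=0.5))
\end{verbatim}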
For the learned safety relationship, we first perform verification against the available knowledge and make necessary revisions. Given the verified and (if needed) revised relationship, the self-correcting procedure is triggered whenever a violation of the safety bound $\mathbf{c}$ occurs, correcting the decision according to
\begin{align}
\mathbf{u}(k) = \mathop {\arg \min }\limits_{\widetilde{\mathbf{u}}(k) \in {\mathbb{R}^m}} \left\{ {\left. {\left\| {\widetilde{\mathbf{u}}(k) - \mathbf{u}(k)} \right\|} \right|[{\bf{s}}(\left. {{\bf{u}}(k)} \right|{\bf{x}}(k))]_i < [\mathbf{c}]_i, ~~i \in \{1, 2, \ldots, \text{len}({\bf{s}}(\mathbf{u}(k)))\}} \right\}. \label{correctdecision}
\end{align}
We note that the self-correcting mechanism and the safety revision of the relationship between ${\bf{s}}(\mathbf{u}(k))$ and $\mathbf{u}(k)$ for guaranteeing \eqref{correctdecision} vary with the safety problem and the physical system. An example in this paper is the safe control of autonomous vehicles: Algorithm~\ref{ALG4} in the Experiments section.
\section{Supplementary Information} \label{sec:SI}
\subsection{Toy Examples} \label{toy}
\begin{ex}
A typical example having model-known parameters is the learning of common Lyapunov function of a 2-dimensional system:
\begin{align}
\mathbf{y} = {w_{11}}\left[ \mathbf{x} \right]_1^2 + {w_{12}}{\left[ \mathbf{x} \right]_1}{\left[ \mathbf{x} \right]_2} + {w_{22}}\left[ \mathbf{x} \right]_2^2 = \underbrace{\left[ {\begin{array}{*{20}{c}}
0 \!&\! 0 \!&\! 0 \!&\! {{w_{11}}} \!&\! {{w_{12}}} \!&\! {{w_{22}}}
\end{array}} \right]}_{= \mathbf{A}}\underbrace{\left[ \begin{array}{l}
1\\
{\left[ \mathbf{x} \right]_1}\\
{\left[ \mathbf{x} \right]_2}\\
\left[ \mathbf{x} \right]_1^2\\
{\left[ \mathbf{x} \right]_1}{\left[ \mathbf{x} \right]_2}\\
\left[ \mathbf{x} \right]_2^2
\end{array} \right]}_{= \mathfrak{m}(\mathbf{x},r)} + \underbrace{0}_{ = \mathbf{f}(\mathbf{x})}. \label{defexp}
\end{align}
In the control community, the form of the Lyapunov function \eqref{defexp} is well known \cite{khalil2002nonlinear}, which means the knowledge $\mathbf{f}(\mathbf{x}) \equiv 0$ is available. According to Definition \ref{defj}, all the entries of the matrix $\mathbf{A}$ are model-known parameters, including the known zeros and the unknown learning parameters $w_{11}$, $w_{12}$ and $w_{22}$. \label{ex1}
\end{ex}
\begin{ex}
For the single PhN \eqref{sTNO}, we consider $\mathbf{x} \in {{\mathbb{R}}^4}$, $\mathbf{y} \in {{\mathbb{R}}^3}$ and $r= 4$. According to \eqref{cpeq1}, we obtain the complexity of input augmentation as $\mathrm{len}(\mathfrak{m}(\mathbf{x}, 4)) = 69 + 1 = 70$. For the cascading (two) PhNs \eqref{cko}, we consider
\begin{align}
\mathbf{x} \in {{\mathbb{R}}^4} ~~\longmapsto~~ {\mathbf{y}_{\left\langle 1 \right\rangle }} = \mathrm{PhN}\left( {\left. \mathbf{x} \right|{2}} \right) \in {{\mathbb{R}}^{{m_{{\left\langle 1 \right\rangle }}}}} ~~\longmapsto~~ \mathbf{y} = \mathrm{PhN}\left( {\left. {{\mathbf{y}_{{\left\langle 1 \right\rangle }}}} \right|{2}} \right) \in {{\mathbb{R}}^3}.\nonumber
\end{align}
We let $r_{{\left\langle 1 \right\rangle }} = r_{{\left\langle 2 \right\rangle }}=2$, such that \eqref{mcc} holds. According to \eqref{cffc}, we then have
\begin{align}
\sum\limits_{p = 1}^2{\mathrm{len}(\mathfrak{m}(\mathbf{x},r_{{\left\langle p \right\rangle }}))} = \begin{cases}
10, &m_{\left\langle 1 \right\rangle } = 1\\
23, & m_{\left\langle 1 \right\rangle } = 2 \\
27, & m_{\left\langle 1 \right\rangle } = 3 \\
32, & m_{\left\langle 1 \right\rangle } = 4
\end{cases} \ll \mathrm{len}(\mathfrak{m}(\mathbf{x}, 4)) = 70, \nonumber
\end{align}
which means the complexity of input augmentation of the cascading PhNs, i.e., $\sum\limits_{p = 1}^2{\mathrm{len}(\mathfrak{m}(\mathbf{x},r_{{\left\langle p \right\rangle }}))}$, is significantly reduced compared with that of the single PhN, i.e., $\mathrm{len}(\mathfrak{m}(\mathbf{x}, 4))$. \label{exd1}
\end{ex}
\subsection{Auxiliary Theorem} \label{Aux}
\begin{thm}\cite{stanley1986enumerative} For any pair of positive integers $n$ and $r$, the number of $n$-tuples of non-negative integers whose sum is $r$ is equal to the number of multisets of cardinality $r$ taken from a set of size $n$, i.e., $\binom{n + r - 1}{n - 1} = \frac{{\left( {n + r - 1} \right)!}}{{\left( {n - 1} \right)!r!}}$. \label{ath3}
\end{thm}
\subsection{Proof of Theorem \ref{colthm}} \label{SIAAS}
We can straightforwardly verify from formula \eqref{dnrfinal} that if $\mathrm{DNR}_i, ~\mathrm{DNR}_j \in (0, \infty)$, we have
\begin{align}
\left| \mathrm{DNR}^{p+q}_{ij} \right| = \frac{1}{{{{\left( {1 + \frac{1}{{|\mathrm{DNR}_i|}}} \right)}^p}{{\left( {1 + \frac{1}{{|\mathrm{DNR}_j|}}} \right)}^q} - 1}}, \label{dnrfinal0}
\end{align}
which implies $|\mathrm{DNR}^{p+q}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_i|$ and $|\mathrm{DNR}_j|$ under this condition.
The condition $\mathrm{DNR}_i, ~\mathrm{DNR}_j \in (-\infty, -1]$ means that
\begin{align}
\frac{{1}}{{\mathrm{DNR}_i}} \in [-1, 0),~~~~~~~~\frac{{1}}{{\mathrm{DNR}_j}} \in [-1, 0),~~~~~~~~1 + \frac{{1}}{{\mathrm{DNR}_i}} \in [0, 1), ~~~~~~~~1 + \frac{{1}}{{\mathrm{DNR}_j}} \in [0, 1), \label{dnrpp1}
\end{align}
considering which, the formula \eqref{dnrfinal} equivalently transforms to
\begin{align}
\left| \mathrm{DNR}^{p+q}_{ij} \right| = \frac{1}{{1 - \left( {1 + \frac{1}{{\mathrm{DNR}_i}}} \right)^p\left( {1 + \frac{1}{{\mathrm{DNR}_j}}} \right)^q}} = \frac{1}{{1 - \left( {1 - \frac{1}{{|\mathrm{DNR}_i|}}} \right)^p\left( {1 - \frac{1}{{|\mathrm{DNR}_j|}}} \right)^q}}, \label{dnrfina2}
\end{align}
which reveals that $|\mathrm{DNR}^{p+q}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_i|$ and $|\mathrm{DNR}_j|$.
The condition $\mathrm{DNR}_i, ~\mathrm{DNR}_j \in [-\frac{1}{2}, 0)$ means
\begin{align}
\frac{{1}}{{\mathrm{DNR}_i}} \in (-\infty, -2], ~~\frac{{1}}{{\mathrm{DNR}_j}} \in (-\infty, -2], ~~1 + \frac{{1}}{{\mathrm{DNR}_i}} \in (-\infty, -1], ~~1 + \frac{{1}}{{\mathrm{DNR}_j}} \in (-\infty, -1], \label{dnrpp2}
\end{align}
in light of which, the formula \eqref{dnrfinal} can equivalently express as
\begin{itemize}
\item if $p + q$ is even,
\begin{align}
\left| \mathrm{DNR}^{p+q}_{ij} \right| = \frac{1}{{\left| {\frac{1}{{|\mathrm{DNR}_i|}}-1} \right|^p\left| {\frac{1}{{|\mathrm{DNR}_j|}}-1} \right|^q - 1}}, \label{dnrfina3}
\end{align}
\item if $p + q$ is odd,
\begin{align}
\left| \mathrm{DNR}^{p+q}_{ij} \right| = \frac{1}{{1 + \left| {\frac{1}{{|\mathrm{DNR}_i|}}-1} \right|^p\left| {\frac{1}{{|\mathrm{DNR}_j|}}-1} \right|^q}}. \label{dnrfina4}
\end{align}
\end{itemize}
We note that both \eqref{dnrfina3} and \eqref{dnrfina4} imply that $|\mathrm{DNR}^{p+q}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_i|$ and $|\mathrm{DNR}_j|$, which completes the proof.
\subsection{Proof of Theorem \ref{th2}} \label{SI02}
We first note that the $\left| [\widetilde{\mathbf{h}}]_i - [{\mathbf{h}}]_i\right|$ in \eqref{cohgq2} and the $[\widetilde{\mathbf{h}}]_i$ in \eqref{cohgq3} can be rewritten as
\begin{align}
\left| [\widetilde{\mathbf{h}}]_i - [{\mathbf{h}}]_i\right| &= \begin{cases}
0, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i < 0\\
0, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i < 0\\
\left| {[\mathbf{h}]_i \cdot \left( {\kappa_i - 1} \right) + \rho_i } \right|, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i > 0 \\
\end{cases}, \label{pth31}\\
[\widetilde{\mathbf{h}}]_i &= \begin{cases}
[\mathbf{h}]_i, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i < 0\\
[\mathbf{h}]_i, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i < 0 \\
[\mathbf{h}]_i \cdot \kappa_i + \rho_i, & \text{if}~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0 ~\text{and}~[\mathbf{w}]_i > 0
\end{cases}. \label{pth32}
\end{align}
The result \eqref{pth31}, equivalently \eqref{cohgq2}, is obtained by subtracting $[{\mathbf{h}}]_i$ from the $[\widetilde{\mathbf{h}}]_i$ given in \eqref{pth32}.
Substituting the $[\widetilde{\mathbf{w}}]_i$ given in \eqref{cohgq3} and the $[\widetilde{\mathbf{h}}]_i$ given in \eqref{pth32} into
$\chi([\bar{\bf{x}}]_i) = [\widetilde{\mathbf{h}}]_i + [\widetilde{\mathbf{w}}]_i$ yields the mapping \eqref{compbadd}. We next prove that $\mathrm{DNR}_i = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} \in (-\infty, -1]$ by considering three cases.
\begin{itemize}
\item If $[\mathbf{h}]_i + [\mathbf{w}]_i < 0$, we obtain from the first items of $[\widetilde{\mathbf{h}}]_i$ in \eqref{pth32} and $[\widetilde{\mathbf{w}}]_i$ in \eqref{cohgq3} that $\mathrm{DNR}_{i} = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} = -1$.
\item If $[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0$ and $[\mathbf{w}]_i < 0$, we have $[\mathbf{h}]_i \ge -[\mathbf{w}]_i = |[\mathbf{w}]_i| > 0$. We then obtain from the second items of $[\widetilde{\mathbf{h}}]_i$ in \eqref{pth32} and $[\widetilde{\mathbf{w}}]_i$ in \eqref{cohgq3} that $\mathrm{DNR}_{i} = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} = \frac{[{\mathbf{h}}]_i}{[{\mathbf{w}}]_i} \le -1$.
\item If $[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0$ and $[\mathbf{w}]_i > 0$, we obtain from the third items of $[\widetilde{\mathbf{h}}]_i$ in \eqref{pth32} and $[\widetilde{\mathbf{w}}]_i$ in \eqref{cohgq3} that $\mathrm{DNR}_{i} = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} = \frac{{[\mathbf{h}]_i \cdot \kappa_i + \rho_i}}{{[\mathbf{w}]_i \cdot \kappa_i}}$. Recalling $[\mathbf{w}]_i > 0$: if $\kappa_i > 0$, then $[\widetilde{\mathbf{w}}]_i > 0$ and $\frac{{[\mathbf{h}]_i \cdot \kappa_i + \rho_i}}{{[\mathbf{w}]_i \cdot \kappa_i}} \le -1$ is equivalent to
\begin{align}
\rho_i \le - ([\mathbf{h}]_i + [\mathbf{w}]_i)\kappa_i < 0, ~~~\text{with}~~\kappa_i > 0,~~[\mathbf{h}]_i + [\mathbf{w}]_i \ge 0. \label{pu1}
\end{align}
If $\kappa_i < 0$, the $\frac{{[\mathbf{h}]_i \cdot \kappa_i + \rho_i}}{{[\mathbf{w}]_i \cdot \kappa_i}} \le -1$ is equivalent to
\begin{align}
\rho_i \ge - \left( {[\mathbf{w}]_i + [\mathbf{h}]_i} \right) \kappa_i \ge 0, ~~~\text{with}~ \kappa_i < 0,~~ [\mathbf{h}]_i + [\mathbf{w}]_i \ge 0. \label{pu2}
\end{align}
We finally conclude from \eqref{pu1} and \eqref{pu2} that $\mathrm{DNR}_{i} = \frac{[\widetilde{\mathbf{h}}]_i}{[\widetilde{\mathbf{w}}]_i} \in (-\infty, -1]$ under the condition $|\rho_i| \ge |[\mathbf{h}]_i + [\mathbf{w}]_i | \cdot |\kappa_i|$.
\end{itemize}
\subsection{Proof of Theorem \ref{thmmm2}} \label{SI03}
Let us first consider the first PhN layer, i.e., the case $t = 1$. We can verify from the knowledge and weight-masking matrices generated in Lines \ref{alg2-3} and \ref{alg2-4} of Algorithm~\ref{ALG2} that $\mathbf{A} = \mathbf{K}_{\left\langle 1 \right\rangle} + \mathbf{M}_{\left\langle 1
\right\rangle } \odot \mathbf{A}$. Substituting this, together with the fact $r_{\left\langle 1 \right\rangle} = r$, into the ground-truth model \eqref{eq:lobja}, we arrive at
\begin{align}
\!\!\!\mathbf{y} = (\mathbf{K}_{\left\langle 1 \right\rangle} + \mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{A}) \cdot \mathfrak{m}(\mathbf{x},r) + \mathbf{f}(\mathbf{x})
= \mathbf{K}_{\left\langle 1 \right\rangle} \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle}) + \mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{A} \cdot \mathfrak{m}(\mathbf{x},r_{\left\langle 1 \right\rangle}) + \mathbf{f}(\mathbf{x}). \label{pf1}
\end{align}
Let us denote $\mathbf{y}_{\left\langle 0 \right\rangle} = \mathbf{x}$. With the consideration of uncertainty matrix generated in the Line \ref{alg2-11} of Algorithm \ref{ALG2}, the output computing, i.e., the Line \ref{alg2-12} of Algorithm \ref{ALG2}, in the case of $t =1$ is rewritten as
\begin{align}
\mathbf{y}_{\left\langle 1 \right\rangle } &= \mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{y}_{\left\langle 0 \right\rangle}, r_{\left\langle 1 \right\rangle}) + \mathbf{a}_{\left\langle 1 \right\rangle } \odot \text{act}\!\left( {\mathbf{U}_{\left\langle 1 \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle 0 \right\rangle}, r_{\left\langle 1 \right\rangle } })} \right) \nonumber\\
&= \mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle}) + \mathbf{a}_{\left\langle 1 \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle 1
\right\rangle } \odot \mathbf{W}_{\left\langle 1 \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{x}, r_{\left\langle 1 \right\rangle } })} \right). \label{pf2}
\end{align}
Line \ref{alg2-3} of Algorithm~\ref{ALG2} means that the knowledge matrix $\mathbf{K}_{\left\langle 1 \right\rangle }$ includes all the known model-substructure parameters, whose corresponding entries in the weight-masking matrix $\mathbf{M}_{\left\langle 1 \right\rangle }$ generated in Line \ref{alg2-4} of Algorithm~\ref{ALG2} are frozen to zero. Consequently, both $\mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{A}$ and $\mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{W}_{\left\langle 1 \right\rangle }$ exclude all the known model-substructure parameters (included in $\mathbf{K}_{\left\langle 1 \right\rangle }$). With consideration of Definition \ref{defj}, we thus conclude that $\mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{A} \cdot \mathfrak{m}(\mathbf{x},r_{\left\langle 1 \right\rangle}) + \mathbf{f}(\mathbf{x})$ in the ground-truth model \eqref{pf1} and $\mathbf{a}_{\left\langle 1 \right\rangle } \odot \text{act}\left( {\mathbf{M}_{\left\langle 1 \right\rangle } \odot \mathbf{W}_{\left\langle 1 \right\rangle } \cdot \breve{\mathfrak{m}} \left( {\mathbf{x}, r_{\left\langle 1 \right\rangle } } \right)} \right)$ in the output computation \eqref{pf2} are independent of the term $\mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})$. Moreover, the activation-masking vector generated in Line \ref{alg2-5} of Algorithm \ref{ALG2} indicates that the activation function corresponding to the output's $i$-th entry is inactive if all the entries in the $i$-th row of the weight-masking matrix are zeros (implying that all the entries in the $i$-th row of the weight matrix are known). Finally, we arrive at the conclusion that the input-output (i.e., $\mathbf{x}$--$\mathbf{y}_{\left\langle 1 \right\rangle }$) of the first PhN layer strictly complies with the available physics knowledge pertaining to the ground truth \eqref{pf1}, i.e., i) if the parameter $[\mathbf{A}]_{i,j}$ is known, then $\frac{{\partial {[\mathbf{y}_{\left\langle 1 \right\rangle }]_{i}}}}{{\partial [{\mathfrak{m}}\left( {\mathbf{x},r} \right)]_j}} \equiv \frac{{\partial {[\mathbf{y}]_i}}}{{\partial {[\mathfrak{m}}( {\mathbf{x},r})]_j}} \equiv {[\mathbf{A}]_{i,j}}$ always holds, and ii) if $[{\mathbf{f}}(\mathbf{x})]_{i} \equiv 0$ is known, then $[\mathbf{y}_{\left\langle 1 \right\rangle }]_{i} \equiv {[\mathbf{y}]_i} \equiv {\left[ \mathbf{A} \right]_{ {i,:}}} \cdot \mathfrak{m}\left( {\mathbf{x},r} \right)$ always holds.
We next consider the remaining PhN layers. Considering Line \ref{alg2-12} of Algorithm \ref{ALG2}, we have
\begin{subequations}
\begin{align}
\hspace{-0.05cm}[\mathbf{y}_{\left\langle p \right\rangle }]_{1:\text{len}(\mathbf{y})} &= [\mathbf{K}_{\left\langle p \right\rangle } \cdot \mathfrak{m}(\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle})]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= \mbox{\normalfont\small\bfseries I}_{\text{len}(\mathbf{y})} \cdot [\mathfrak{m}(\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle})]_{(\text{len}(\mathbf{y}_{\left\langle p \right\rangle})+1):(\text{len}(\mathbf{y}_{\left\langle p \right\rangle})
+ \text{len}(\mathbf{y}))} \nonumber\\
&\hspace{4.55cm} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \label{pf41}\\
& = \mbox{\normalfont\small\bfseries I}_{\text{len}(\mathbf{y})} \cdot [\mathbf{y}_{\left\langle p-1 \right\rangle }]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \label{pf42}\\
& = [\mathbf{y}_{\left\langle p-1 \right\rangle }]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= [\mathbf{K}_{\left\langle p-1 \right\rangle } \cdot \mathfrak{m}(\mathbf{y}_{\left\langle p-2 \right\rangle }, r_{\left\langle p-1 \right\rangle})]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= \mbox{\normalfont\small\bfseries I}_{\text{len}(\mathbf{y})} \cdot [\mathfrak{m}(\mathbf{y}_{\left\langle p-2 \right\rangle }, r_{\left\langle p-1 \right\rangle})]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= \mbox{\normalfont\small\bfseries I}_{\text{len}(\mathbf{y})} \cdot [\mathbf{y}_{\left\langle p-2 \right\rangle }]_{(\text{len}(\mathbf{y}_{\left\langle p-1 \right\rangle})+1):(\text{len}(\mathbf{y}_{\left\langle p-1 \right\rangle})+ \text{len}(\mathbf{y}))} \nonumber\\
&\hspace{6.00cm} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= [\mathbf{y}_{\left\langle p-2 \right\rangle }]_{1 :\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
& = \ldots \nonumber\\
& = [\mathbf{y}_{\left\langle 1 \right\rangle }]_{1:\text{len}(\mathbf{y})} + [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
& = [\mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})]_{1:\text{len}(\mathbf{y})}+ [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})}, \label{pf46}
\end{align}
\end{subequations}
where \eqref{pf41} and \eqref{pf42} are obtained from their previous steps by considering the block matrix in Line \ref{alg2-7} of Algorithm \ref{ALG2} and the formula of augmented monomials, $\mathfrak{m}(\mathbf{x},r) = \left[\mathbf{b};~\mathbf{x};~[\mathfrak{m}( {\mathbf{x},r})]_{(\text{len}(\mathbf{b})+\text{len}(\mathbf{x})+1) : \text{len}(\mathfrak{m}(\mathbf{x},r))} \right]$, from Algorithm \ref{ALG1}. The remaining iterative steps follow the same pattern.
The training loss pushes the terminal output of Algorithm \ref{ALG2} (i.e., $\widehat{\mathbf{y}} = \mathbf{y}_{\left\langle p \right\rangle }$) to approximate the real output $\mathbf{y}$, which in light of \eqref{pf46} yields
\begin{align}
\widehat{\mathbf{y}} &= [\mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})]_{1:\text{len}(\mathbf{y})}+ [\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)]_{1:\text{len}(\mathbf{y})} \nonumber\\
&= [\mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})]_{1:\text{len}(\mathbf{y})} + \mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p
\right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right), \label{pf5}
\end{align}
where \eqref{pf5} is obtained from the previous step by noting that $\text{len}(\widehat{\mathbf{y}}) = \text{len}(\mathbf{y}) = \text{len}(\mathbf{y}_{\left\langle p \right\rangle })$. Meanwhile, the condition for generating the freezing weight matrix in Line \ref{alg2-8} of Algorithm \ref{ALG2} removes all connections between the node representations and the known and model-known parameters included in $\mathbf{K}_{\left\langle 1 \right\rangle }$. Therefore, we can conclude that in the terminal output computation \eqref{pf5}, the term $\mathbf{b}_{\left\langle p \right\rangle } \odot \text{act}\!\left( {\mathbf{M}_{\left\langle p \right\rangle } \odot \mathbf{W}_{\left\langle p \right\rangle } \cdot \breve{\mathfrak{m}}( {\mathbf{y}_{\left\langle p-1 \right\rangle }, r_{\left\langle p \right\rangle } })} \right)$ does not have any influence on the knowledge term $[\mathbf{K}_{\left\langle 1 \right\rangle } \cdot \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle})]_{1:\text{len}(\mathbf{y})}$. Thus, Algorithm \ref{ALG2} strictly embeds and preserves the available knowledge pertaining to the physics model of the ground truth \eqref{eq:lobja}, or equivalently \eqref{pf1}.
\subsection{Proof of Theorem \ref{thm3}} \label{SI04}
Let us first consider the case $r = 1$. In this case, Algorithm~\ref{ALG1} skips Lines \ref{alg1-3}--\ref{alg1-14} and arrives at $\mathfrak{m}(\mathbf{x},r) = [\mathbf{b}; ~ \mathbf{x}-\mathbf{o}]$ and $\breve{\mathfrak{m}}(\mathbf{x},r) = [\mathbf{b}; ~\chi(\mathbf{x}-\mathbf{o})]$. Noticing $\mathbf{x}-\mathbf{o} \in \mathbb{R}^{n}$ and $\chi(\mathbf{x}-\mathbf{o}) \in \mathbb{R}^{n}$, we obtain $\text{len}(\mathfrak{m}(\mathbf{x},r)) = \text{len}(\breve{\mathfrak{m}}(\mathbf{x},r)) = n + 1$, which verifies the correctness of \eqref{cpeq1} with $r = 1$.
We next consider the case $r \ge 2$. Given the input dimension $n$ and an order $s \in \{2, \ldots, r-1, r\}$, Lines \ref{alg1-4}--\ref{alg1-13} of Algorithm \ref{ALG1} generate all the non-missing and non-redundant monomials included in ${\left( {\sum\limits_{i = 1}^n {{[\bar{\mathbf{x}}]_i}} } \right)^s}$ and ${\left( {\sum\limits_{i = 1}^n {{[\tilde{\mathbf{x}}]_i}} } \right)^s}$. Counting the monomials generated by Algorithm \ref{ALG1} is equivalent to the following problem: for any pair of positive integers $n$ and $s$, the number of $n$-tuples of non-negative integers whose sum is $s$ is equal to the number of multisets of cardinality $s$ taken from a set of size $n$. Additionally, we note that the vectors $\widetilde{\mathfrak{m}}(\mathbf{x},s)$ and $\overline{\mathfrak{m}}(\mathbf{x},s)$
in Line \ref{alg1-12} of Algorithm \ref{ALG1} stack all the generated monomials. According to Theorem \ref{ath3} in Supplementary Information \ref{Aux}, we then have $\text{len}(\widetilde{\mathfrak{m}}(\mathbf{x},s)) = \text{len}(\overline{\mathfrak{m}}(\mathbf{x},s)) = \frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}$. Finally, we note that Lines \ref{alg1-3} and \ref{alg1-16} of Algorithm \ref{ALG1} imply that the generated vectors $\mathfrak{m}(\mathbf{x},r)$ and $\breve{\mathfrak{m}}(\mathbf{x},r)$ stack the $1$ with $\overline{\mathfrak{m}}(\mathbf{x},s)$ and $\widetilde{\mathfrak{m}}(\mathbf{x},s)$ over $s \in \{1, \ldots, r-1, r\}$, respectively. We thus obtain \eqref{cpeq1}.
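A minimal Python sketch of the resulting count in \eqref{cpeq1} (counting the stacked $1$ as a single entry, as in the proof above) is:
\begin{verbatim}
from math import comb

def len_augmented_monomials(n, r):
    # (cpeq1): 1 (the stacked bias) + sum over s = 1..r of (n+s-1)!/((n-1)! s!)
    return 1 + sum(comb(n + s - 1, s) for s in range(1, r + 1))

print(len_augmented_monomials(5, 1))  # 6 = n + 1, the r = 1 case above
print(len_augmented_monomials(6, 2))  # 28 = 1 + 6 first-order + 21 second-order terms
\end{verbatim}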
\subsection{Proof of Corollary \ref{cor}} \label{SI06}
By Theorem \ref{thm3}, the number of augmented monomials of the $d$ cascading TNOs \eqref{cko} is obtained as
\begin{align}
\sum\limits_{p = 1}^d {\text{len}(\mathfrak{m}(\mathbf{x},r_{{\left\langle p \right\rangle }}))} &= \sum\limits_{p = 1}^d {\text{len}(\breve{\mathfrak{m}}(\mathbf{x},r_{{\left\langle p \right\rangle }}))} \nonumber\\
& =
\underbrace{\sum\limits_{s = 1}^{{r_{\left\langle 1 \right\rangle }}} {\frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}}}_{\text{the first TNO}} + \underbrace{\sum\limits_{v = 1}^{d - 1} {\sum\limits_{s = 1}^{{r_{{\left\langle v+1 \right\rangle }}}} {\frac{{\left( {{n_{\left\langle v \right\rangle }} + s - 1} \right)!}}{{\left( {{n_{\left\langle v \right\rangle }} - 1} \right)!s!}}} }}_{\text{the remaining TNOs}} + 1 + d-1. \label{ccff1}
\end{align}
The condition \eqref{mcc} implies that $r > r_{{\left\langle 1 \right\rangle}}$, which, in conjunction with \eqref{cpeq1}, leads to
\begin{align}
\text{len}(\mathfrak{m}(\mathbf{x},r)) = \text{len}(\breve{\mathfrak{m}}(\mathbf{x},r)) = \sum\limits_{s = 1}^{{r_{\left\langle 1 \right\rangle }}} {\frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}} + \sum\limits_{s = {r_{\left\langle 1 \right\rangle }} + 1}^r {\frac{{\left( {n + s - 1} \right)!}}{{\left( {n - 1} \right)!s!}}} + 1. \label{ccff2}
\end{align}
Subtracting \eqref{ccff1} from \eqref{ccff2} yields \eqref{cffc}.
\subsection{Neuron Editing Example} \label{SI07}
We now use the `Emd-Phy-Cas' case in Section \ref{expppgf} to explain how neuron editing is implemented in the deep TNOs, whose architecture is described in Figure \ref{capsp}(c).
\subsubsection*{The First TNO Layer} The output of the monomial augmentation of the first TNO layer is
\begin{align}
&\mathfrak{m}(\widehat{\mathbf{x}}, r_{\left\langle 1 \right\rangle} = 2) = [\mathbf{1}_{10}; \mathrm{x}; \mathrm{y}; \psi; {v_{\mathrm{x}}}; {v_{\mathrm{y}}}; {v_\psi }; \mathrm{x}^{2}; \mathrm{x}\mathrm{y}; \mathrm{x}\psi; \mathrm{x}{v_{\mathrm{x}}}; \mathrm{x}{v_{\mathrm{y}}}; \mathrm{x}{v_\psi }; \mathrm{y}^{2}; \mathrm{y}\psi; \mathrm{y}{v_{\mathrm{x}}}; \mathrm{y}{v_{\mathrm{y}}}; \mathrm{y}{v_\psi }; \psi^{2}; \psi{v_{\mathrm{x}}}; \psi{v_{\mathrm{y}}}; \nonumber\\
&\hspace{9.3cm}\psi{v_\psi }; {v_{\mathrm{x}}}^{2}; {v_{\mathrm{x}}}{v_{\mathrm{y}}}; {v_{\mathrm{x}}}{v_\psi }; {v_{\mathrm{y}}}^{2}; {v_{\mathrm{y}}}{v_\psi };{v_\psi }^{2}]. \label{newv}
\end{align}
According to \eqref{discar}, the edited weight matrix of the first layer is
\begin{align}
\hspace{-1cm}\mathbf{A} \!=\! \!\left[\!\! {\begin{array}{*{28}{c}}
\mathbf{0}^\top_{10} \!&\! 1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\! 0 & \!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
{*} & {*}\!&\!{0}\!&\!{0}\!&\!{*}\!&\!{0}\!&\!{0}\!&\!{*}\!&\!{0}\!&\!{0}\!&\!{*}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{*}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\!&\!{0}\\
* & *\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\\
\vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots\!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \\
* & *\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*\!&\!*
\end{array}} \!\!\right]\!, \label{newv1}
\end{align}
according to which, Lines \ref{alg2-3}--\ref{alg2-5} of Algorithm~\ref{ALG2}, respectively, generate
\begin{align}
&\hspace{-0.5cm}\mathbf{K}_{\left\langle 1 \right\rangle} = \left[\!\! {\begin{array}{*{28}{c}}
\mathbf{0}^\top_{10} \!&\! 1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\! 0 & \!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!T\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} & 0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots\!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \\
\mathbf{0}^\top_{10} & 0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0
\end{array}} \!\!\right], \nonumber\\
&\hspace{-0.5cm}\mathbf{F}_{\left\langle 1 \right\rangle} = \left[\!\! {\begin{array}{*{28}{c}}
\mathbf{0}^\top_{10} \!&\! 0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\! 0 & \!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{0}^\top_{10} \!&\! 0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{1}^\top_{10} & 1\!&\!0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\!&\!1\!&\!0\!&\!0\!&\!0\!&\!0\!&\!0\\
\mathbf{1}^\top_{10} & 1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\\
\vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots\!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \!&\! \vdots \\
\mathbf{1}^\top_{10} & 1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1\!&\!1
\end{array}} \!\!\right], \nonumber\\
&\hspace{-0.5cm}\mathbf{a}_{\left\langle 1 \right\rangle } = \left[ {0;~0;~0;~1;~1;~1;~1;~1;~1;~1} \right],\nonumber
\end{align}
considering which, together with Lines \ref{alg2-11} and \ref{alg2-12} of Algorithm~\ref{ALG2}, the output of the first TNO is
\begin{align}
\hspace{-1cm}\mathbf{y}_{\left\langle 1 \right\rangle} \!=\! \mathbf{K}_{\left\langle 1 \right\rangle } \!\cdot\! \mathfrak{m}(\mathbf{x}, r_{\left\langle 1 \right\rangle}) \!+\! \mathbf{a}_{\left\langle 1 \right\rangle } \!\odot\! \text{act}( \mathbf{F}_{\left\langle 1 \right\rangle } \!\odot\! {\mathbf{W}_{\left\langle 1 \right\rangle }} \!\cdot\! {\mathfrak{m}} ( {\mathbf{x}, r_{\left\langle 1 \right\rangle } })) \!=\! [
\mathrm{x}(k\!+\!1); \mathrm{y}(k\!+\!1); \psi(k\!+\!1); [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_4;
\ldots; {{{[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}} ]}_{10}}}].\nonumber
\end{align}
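The editing rules used above can be summarized by a small sketch (Python/NumPy; the $3\times 3$ matrix and the value of the sampling period are hypothetical and only illustrate the mapping from an edited weight matrix to the generated $\mathbf{K}$, $\mathbf{F}$ and $\mathbf{a}$, with \texttt{nan} marking the unknown `$*$' entries):
\begin{verbatim}
import numpy as np

T = 0.1                                        # hypothetical sampling period
A = np.array([[1.0,    0.0,    T     ],        # fully known row
              [np.nan, 0.0,    np.nan],        # partially known row
              [np.nan, np.nan, np.nan]])       # fully unknown row

K = np.where(np.isnan(A), 0.0, A)              # knowledge matrix: known entries kept
F = np.isnan(A).astype(float)                  # freezing/masking matrix: 1 at '*'
a = (F.sum(axis=1) > 0).astype(float)          # activation mask: [0., 1., 1.]
print(K, F, a, sep="\n")
\end{verbatim}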
\subsubsection*{The Second TNO Layer} For the second TNO, its output of monomial augmentation is
\begin{align}
&\hspace{-0.8cm}\mathfrak{m}(\mathbf{y}_{\left\langle 1 \right\rangle}, r_{\left\langle 2 \right\rangle} = 2) \nonumber\\
&\hspace{-0.8cm}= [\mathbf{1}_{8}; ~\mathrm{x}(k\!+\!1); ~\mathrm{y}(k\!+\!1); ~\psi(k\!+\!1); ~[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_4; ~\ldots; ~[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~\mathrm{x}^2(k\!+\!1); ~\mathrm{x}(k\!+\!1)\mathrm{y}(k\!+\!1);~ \mathrm{x}(k\!+\!1)\psi(k\!+\!1); \nonumber\\
&\hspace{-0.1cm}\mathrm{x}(k\!+\!1)[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_4; ~\ldots;~\mathrm{x}(k\!+\!1)[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~\mathrm{y}^2(k\!+\!1); ~\mathrm{y}(k\!+\!1)\psi(k\!+\!1); ~\mathrm{y}(k\!+\!1)[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_4; ~\ldots; ~\nonumber\\
&\hspace{-0.1cm}\psi(k\!+\!1)[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{4};~
[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{4}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{5}; ~\ldots;~ [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{4}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~
[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{5}; ~
[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{5}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{6}; ~\ldots; ~[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{5}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~
\nonumber\\
&\hspace{-0.1cm}[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{6};~ [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{6}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{7}; ~\ldots;~ [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{6}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~
[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{7}; ~
[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{7}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{8}; ~\ldots;~[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{7}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~
[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{8}; \nonumber\\
&\hspace{-0.1cm}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{8}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{9}; ~\ldots;~ [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{8}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10}; ~
[{{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{9}; ~
[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{9}[ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]_{10};~ [ {{\mathbf{y}_{\left\langle 1 \right\rangle }}}]^2_{10}]. \label{newva}
\end{align}
Referring to \eqref{newv1}, Lines \ref{alg2-7}--\ref{alg2-9} of Algorithm~\ref{ALG2}, respectively, generate
\begin{align}
&\mathbf{K}_{\left\langle 2 \right\rangle} = \left[ {\begin{array}{*{20}{c}}
\mathbf{0}^\top_{8} \!&\!1&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{0}^\top_{8} \!&\!0&1&0&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{0}^\top_{8} \!&\!0&0&1&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{0}^\top_{8} \!&\!0&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{\mathbf{0}^\top}&0&{\mathbf{0}^\top}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\mathbf{0}^\top_{8} \!&\!0&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{\mathbf{0}^\top}&0&{\mathbf{0}^\top}
\end{array}} \right], \nonumber\\
&\mathbf{F}_{\left\langle 2 \right\rangle} = \left[ {\begin{array}{*{20}{c}}
\mathbf{0}^\top_{8} \!&\!0&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{0}^\top_{8} \!&\!0&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{0}^\top_{8} \!&\!0&0&0&0&{\mathbf{0}^\top}&0&0&0&0&{{\mathbf{0}^\top}}&0&{{\mathbf{0}^\top}}\\
\mathbf{1}^\top_{8} \!&\!1&0&0&1&{\mathbf{0}^\top}&1&0&0&1&{\mathbf{0}^\top}&1&{\mathbf{0}^\top}\\
\mathbf{1}^\top_{8} \!&\!1&1&1&1&{\mathbf{0}^\top}&1&1&1&1&{\mathbf{0}^\top}&1&{\mathbf{1}^\top}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\mathbf{1}^\top_{8} \!&\!1&1&1&1&{\mathbf{1}^\top}&1&1&1&1&{\mathbf{1}^\top}&1&{\mathbf{1}^\top}
\end{array}} \right], \nonumber\\
&\mathbf{a}_{\left\langle 2 \right\rangle } = \left[ {0;~0;~0;~1;~1;~1;~1;~1} \right],\nonumber
\end{align}
such that the second TNO's output is computed as
\begin{align}
\hspace{-1cm} \mathbf{y}_{\left\langle 2 \right\rangle} \!=\! \mathbf{K}_{\left\langle 2 \right\rangle } \!\cdot\! \mathfrak{m}(\mathbf{y}_{\left\langle 1 \right\rangle}, r_{\left\langle 2 \right\rangle}) \!+\! \mathbf{a}_{\left\langle 2 \right\rangle } \!\odot\! \text{act}( \mathbf{F}_{\left\langle 2 \right\rangle } \!\odot\! {\mathbf{W}_{\left\langle 2 \right\rangle }} \!\cdot\! {\mathfrak{m}}( {\mathbf{y}_{\left\langle 1 \right\rangle}, r_{\left\langle 2 \right\rangle } })) \!=\! [
\mathrm{x}(k\!+\!1); \mathrm{y}(k\!+\!1); \psi(k\!+\!1); [ {{\mathbf{y}_{\left\langle 2 \right\rangle }}}]_4;
\ldots; {{{[ {{\mathbf{y}_{\left\langle 2 \right\rangle }}} ]}_{8}}} ].\nonumber
\end{align}
\subsubsection*{The Third TNO Layer} The order of monomial augmentation of the third TNO is $r = 1$; we thus have $\mathfrak{m}(\mathbf{y}_{\left\langle 2 \right\rangle}, r = 1) = [\mathbf{1}_{6};~\mathbf{y}_{\left\langle 2 \right\rangle}]$. Referring to this and \eqref{newv1}, Lines \ref{alg2-7}--\ref{alg2-9} of Algorithm~\ref{ALG2}, respectively, generate
\begin{align}
&\mathbf{K}_{\left\langle 3 \right\rangle} = \left[ {\begin{array}{*{20}{c}}
\mathbf{0}^\top_{6} \!&\!1&0&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&1&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&1&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0
\end{array}} \right], \mathbf{F}_{\left\langle 3 \right\rangle} = \left[ {\begin{array}{*{20}{c}}
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0\\
\mathbf{0}^\top_{6} \!&\!0&0&0&0&{0}&0\\
\mathbf{1}^\top_{6} \!&\!1&0&0&1&{0}&0\\
\mathbf{1}^\top_{6} \!&\!1&1&1&1&{1}&1\\
\mathbf{1}^\top_{6} \!&\!1&1&1&1&{1}&1
\end{array}} \right], \mathbf{a}_{\left\langle 3 \right\rangle } = \left[ {0;~0;~0;~1;~1;~1} \right],\nonumber
\end{align}
such that the terminal output is
\begin{align}
\hspace{-1cm} \mathbf{y}_{\left\langle 3 \right\rangle} \!=\! \mathbf{K}_{\left\langle 3 \right\rangle } \!\cdot\! \mathfrak{m}(\mathbf{y}_{\left\langle 2 \right\rangle}, r_{\left\langle 3 \right\rangle}) \!+\! \mathbf{a}_{\left\langle 3 \right\rangle } \!\odot\! \text{act}( \mathbf{F}_{\left\langle 3 \right\rangle } \!\odot\! {\mathbf{W}_{\left\langle 3 \right\rangle }} \!\cdot\! {\mathfrak{m}}( {\mathbf{y}_{\left\langle 2 \right\rangle}, r_{\left\langle 3 \right\rangle } })) \!=\! [
\mathrm{x}(k\!+\!1); \mathrm{y}(k\!+\!1); \psi(k\!+\!1); [ {{\mathbf{y}_{\left\langle 3 \right\rangle }}}]_4;
\ldots; {{{[ {{\mathbf{y}_{\left\langle 3 \right\rangle }}} ]}_{6}}} ].\nonumber
\end{align}
\subsection{Derivation of Solution (\ref{rch43})} \label{SI10}
We first define:
\begin{align}
\overline{\mu}_1 \triangleq \frac{{{[\widehat{\mathbf{c}}]_1} - {[\mathbf{b}]_1}}}{{{\lambda _1}}}, ~~~~~\overline{\lambda} \triangleq \frac{{{\lambda _2}}}{{{\lambda _1}}}, ~~~~~\bar b \triangleq {[\mathbf{b}]_2} - {[\widehat{\mathbf{c}}]_2}. \label{defop}
\end{align}
Leveraging these definitions, \eqref{rch41} is rewritten as
\begin{align}
[\widehat{\mathbf{u}}(k)]_1^2 = \overline{\mu}_1 - \overline{\lambda}[\widehat{\mathbf{u}}(k)]_2^2, \label{ppko1}
\end{align}
and we can obtain from \eqref{rch42} that
\begin{align}
\hspace{-0.7cm} 4s_{12}^2[\widehat{\mathbf{u}}(k)]_2^2[\widehat{\mathbf{u}}(k)]_1^2 = {{\overline{b}}^2} + s_{11}^2[\widehat{\mathbf{u}}(k)]_1^4 + s_{22}^2[\widehat{\mathbf{u}}(k)]_2^4 - 2\overline{b}{s_{11}}[\widehat{\mathbf{u}}(k)]_1^2 - 2\overline{b}{s_{22}}[\widehat{\mathbf{u}}(k)]_2^2 + 2{s_{11}}{s_{22}}[\widehat{\mathbf{u}}( k)]_1^2[\widehat{\mathbf{u}}(k)]_2^2, \nonumber
\end{align}
substituting \eqref{ppko1} into which yields
\begin{align}
{\varpi _1}[\widehat{\mathbf{u}}]_2^4\left( k \right) + {\varpi_2}[\widehat{\mathbf{u}}]_2^2\left( k \right) + {\varpi _3}\left( k \right) = 0, \label{ppko2}
\end{align}
where
\begin{align}
{\varpi _1} &\triangleq s_{11}^2{\overline{\lambda}^2} + s_{22}^2 - 2{s_{11}}{s_{22}}\overline{\lambda} + 4s_{12}^2\overline{\lambda}, \label{ppko21}\\
{\varpi _2} &\triangleq 2\bar b {s_{11}}\overline{\lambda} - 2s_{11}^2{\overline{\mu}_1}\overline{\lambda} - 2\bar b{s_{22}} + 2{s_{11}}{s_{22}}{\overline{\mu} _1} - 4s_{12}^2{\overline{\mu}_1}, \label{ppko22}\\
{\varpi _3} &\triangleq {\bar b^2} + s_{11}^2\overline{\mu}_1^2 - 2\bar b{s_{11}}{\overline{\mu}_1}. \label{ppko23}
\end{align}
Considering $[\widehat{\mathbf{u}}(k)]_2^2 \ge 0$, the solution of \eqref{ppko2} is
\begin{align}
[\widehat{\mathbf{u}}]_2^2(k) = \frac{{\sqrt {\varpi_2^2 - 4{\varpi _1}{\varpi _3}} - {\varpi_2}}}{{2{\varpi_1}}}, \label{ppko3}
\end{align}
substituting which into \eqref{ppko1} yields
\begin{align}
[\widehat{\mathbf{u}}]_1^2(k) = {{\overline{\mu} }_1} - \overline{\lambda} \frac{{\sqrt {\varpi _2^2 - 4{\varpi_1}{\varpi _3}} - {\varpi_2}}}{{2{\varpi _1}}}. \label{ppko4}
\end{align}
$\widehat{\mathbf{u}}(k)$ is then straightforwardly obtained from \eqref{ppko3} and \eqref{ppko4}:
\begin{align}
\widehat{\mathbf{u}}(k) = \left[ { \pm \sqrt {{{\overline{\mu}}_1} - \overline{\lambda} \frac{{\sqrt {\varpi_2^2 - 4{\varpi_1}{\varpi_3}} - {\varpi_2}}}{{2{\varpi_1}}}}; \pm \sqrt {\frac{{\sqrt {\varpi_2^2 - 4{\varpi _1}{\varpi _3}} - {\varpi_2}}}{{2{\varpi _1}}}} } \right], \nonumber
\end{align}
which, in conjunction with \eqref{rch3} and $Q^{-1} = Q = Q^\top$, leads to
\begin{align}
\left[ \begin{array}{l}
\theta \left( k \right)\\
\gamma \left( k \right)
\end{array} \right] = {\mathbf{Q}_1}\left[ \begin{array}{l}
\pm \sqrt {{{\overline{\mu} }_1} - \overline{\lambda} \frac{{\sqrt {\varpi _2^2 - 4{\varpi_1}{\varpi _3}} - {\varpi_2}}}{{2{\varpi_1}}}} \\
\pm \sqrt {\frac{{\sqrt {\varpi _2^2 - 4{\varpi _1}{\varpi_3}} - {\varpi _2}}}{{2{\varpi_1}}}}
\end{array} \right]. \label{ppko5}
\end{align}
Substituting the notations defined in \eqref{defop} into \eqref{ppko5} and \eqref{ppko21}--\eqref{ppko23} results in \eqref{rch43} and \eqref{pko1}--\eqref{pko3}, respectively.
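A direct numerical check of this derivation (Python/NumPy, with hypothetical values of $\lambda_1$, $\lambda_2$, $\widehat{\mathbf{c}}$, $\mathbf{b}$ and $s_{11}$, $s_{12}$, $s_{22}$ chosen only so that the discriminant is nonnegative) is:
\begin{verbatim}
import numpy as np

lam1, lam2 = 2.0, 1.0
c_hat, b = np.array([0.81, 0.0]), np.array([0.0, 0.324])
s11, s12, s22 = 1.2, 0.5, 0.8

mu1 = (c_hat[0] - b[0]) / lam1      # (defop)
lam = lam2 / lam1
bb  = b[1] - c_hat[1]

w1 = s11**2*lam**2 + s22**2 - 2*s11*s22*lam + 4*s12**2*lam                      # (ppko21)
w2 = 2*bb*s11*lam - 2*s11**2*mu1*lam - 2*bb*s22 + 2*s11*s22*mu1 - 4*s12**2*mu1  # (ppko22)
w3 = bb**2 + s11**2*mu1**2 - 2*bb*s11*mu1                                       # (ppko23)

u2sq = (np.sqrt(w2**2 - 4*w1*w3) - w2) / (2*w1)                                 # (ppko3)
u1sq = mu1 - lam*u2sq                                                           # (ppko4)

# The pair (u1sq, u2sq) satisfies (ppko1) by construction and the expanded
# identity used above in the derivation.
lhs = 4*s12**2*u1sq*u2sq
rhs = (bb**2 + s11**2*u1sq**2 + s22**2*u2sq**2
       - 2*bb*s11*u1sq - 2*bb*s22*u2sq + 2*s11*s22*u1sq*u2sq)
print(np.isclose(lhs, rhs), u1sq >= 0, u2sq >= 0)   # True True True
\end{verbatim}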
\section{Introduction}
X-waves are diffraction-free and dispersion-free wavepackets localized in space and time travelling at arbitrary superluminal group velocities in free space \cite{LU,SAARI1}. As particular members of the broader family of localized waves \cite{SAARI1,SAARI2,BOOK1,BOOK2}, their spatiotemporal characteristics and experimental generation have deserved decades of research. Most of this research is focused on localized waves without orbital angular momentum (OAM) \cite{SAARI1,SAARI2,BOOK1,BOOK2}, and more recently on synthesizing one-dimensional localized waves, or space-time light sheets \cite{FLORIDA1,FLORIDA2}, which cannot carry OAM. This fact could seem surprising since the interest in OAM of light started and developed over the same decades, at first for monochromatic light\cite{ALLEN,YAO}, and later for ultrashort pulses \cite{SHVEDOV,YAMANE,PORRAS1,PORRAS2,PORRAS3}. Only recently vortex-carrying X-waves and other diffraction-free pulses with OAM have started to gain attention \cite{CONTI1,CONTI2,PORRAS4,ORNI,PANG}.
Localized waves with cylindrical symmetry are built as coherent superpositions of diffraction-free Bessel beams of order $m=0$ (without OAM) or $m\neq 0$ (with OAM) of different frequencies $\omega$ and weights $f(\omega)$. The integer $m$ is the topological charge of the vortex in their center. Localized waves are dispersion-free because the $\omega$-dependent cone angle of the Bessel beams, $\theta(\omega)$, is such that the axial wave number (projection of the wave vector onto the propagation direction, say $z$) follows the linear variation law with frequency $k_z(\omega)=(\omega/c)\cos{\theta(\omega)} =a +\omega /v_g$, where $a$ is an arbitrary constant, and $v_g$ the group velocity. X-waves are characterized by $v_g>c$ and $a=0$, and generic superluminal localized waves \cite{SAARI2}, called here generic X-waves (GX-waves for short) are characterized by $v_g>c$ and $a\neq 0$.
The recent theoretical studies \cite{CONTI1,CONTI2,PORRAS4} on X-waves with OAM have unveiled a rather complex OAM-temporal coupled structure obeying certain universal constraints. In \cite{CONTI1,CONTI2} the OAM-temporal couplings in the immediate vicinity of the vortex singularity are examined. With the same spectrum $f(\omega)$, the number of oscillations and their frequency increase with the magnitude of the topological charge, $|m|$. On the other hand, the bright ring surrounding the vortex is studied in \cite{PORRAS4}. X-waves built with the same $f(\omega)$ have increasing duration at this ring when $|m|$ is increased, while the local frequency of the oscillations does not experience appreciable change with $|m|$, but is solely determined by the particular spectrum $f(\omega)$ of Bessel beams, e. g., the central frequency $\omega_f$ for bell-shaped spectra. The dependence of the duration of X-waves on $m$ originates from the lower bound $\Delta t \gtrsim |m|/\omega_f$ satisfied by all X-waves at their bright ring.
\begin{figure}[!b]
\begin{center}
\includegraphics*[height=4.4cm]{Fig1.eps}
\end{center}
\caption{\label{Fig1} (a) Spectral density of the broadband Bessel beam spectrum $f(\omega)=\exp(-\epsilon \omega)$ with $\epsilon=0.2$ fs, spanning the visible spectrum and beyond. (b) Transversal wave numbers of the X-wave with $v_g=0.0004$ mm/fs (solid line), of the GX-wave of the same velocity and $\omega_z=2.5$ rad/fs ($a=2.083\times 10^3$ mm$^{-1}$) (dashed curve), and $\omega/c$ (dotted line).}
\end{figure}
Thus, to date, there is no complete picture of the spatiotemporal structure of OAM-carrying X-waves except in the vicinity of the vortex and the bright ring. Here we describe the spatiotemporal structure of X-waves and GX-waves with OAM and with the broadband spectrum $f(\omega)=e^{-\epsilon \omega}$, where $\epsilon$ is a small quantity such that $f(\omega)$ covers from a dc component to the optical spectrum and beyond, as in Fig. \ref{Fig1}(a). Without OAM, the fundamental X-wave is non-oscillatory in time, and therefore is of little interest in optics; indeed only X-waves with narrower spectra about microwave \cite{MUGNAI} and optical frequencies \cite{SAARI1}, called Bessel-X waves \cite{SONAJALG}, have been generated. However, the introduction of OAM eliminates any component about $\omega=0$ and induces what can be qualified as intrinsic temporal oscillations associated with OAM, whose frequency can be tuned in any range of the electromagnetic spectrum. The previously described OAM-temporal couplings \cite{CONTI1,CONTI2,PORRAS4} arise here naturally as a result of the whole spatiotemporal coupled structure of these X-waves with OAM.
The OAM-induced oscillations feature a number of zeros approximately equal to $|m|$, and hence $|m|/2$ oscillations. Since the same number of oscillations fills the inside of the X arms at any radial distance from the vortex singularity, their frequency is red shifted radially outwards down to zero at infinity (with zero amplitude). With increasing $|m|$, the entire X-wave is blue shifted, at the same time that the bright ring is displaced outwards, resulting in a frequency at this ring independent of the topological charge. The structure of GX-waves closely resembles that of X-waves in their inner part, but the number of oscillations becomes increasingly larger than that of X-waves of the same OAM towards the periphery. In addition, the oscillations spread temporally out of the X arms in the outer radial part, and their frequency approaches a non-zero constant value.
The duration of the broadband X-waves at their bright ring is found to coincide with the lower bound $|m|/\omega_f$ described in \cite{PORRAS4}. Therefore broadband X-waves are the minimal X-waves capable of carrying a given amount of OAM. Being diffraction-free, dispersion-free, and minimal OAM carriers (and having other properties such as self-healing behavior and turbulence resistance \cite{LI}), broadband X-waves appear as optimum wave modes in many applications, particularly in superdense, multichannel free-space communications \cite{HUANG}, and free-space quantum communications systems \cite{PATERSON,ZHANG}, commonly based on Laguerre-Gauss-type modes.
\begin{figure*}[!]
\begin{center}
\includegraphics*[height=4.8cm]{Fig2.eps}
\end{center}
\caption{\label{Fig2} Real electric field of X-waves with $v_g=0.0004$ mm/fs, $\epsilon =0.2$ fs and the indicated topological charges. All fields are normalized to their respective peak values.}
\end{figure*}
\section{Superluminal localized waves with OAM}
A quite general expression of cylindrically symmetric localized waves with OAM is $E(r,\varphi,t,z)=E(r,t,z)e^{im\varphi}$, with
\begin{equation}\label{LW}
E(r,z,t) = \frac{e^{iaz}}{\pi}\int_{0,\,k_\perp\mbox{\small real}}^\infty \!\!\!d\omega f(\omega) J_m[k_\perp(\omega) r] e^{-i\omega t'}\, ,
\end{equation}
where $(r,\varphi,z)$ are cylindrical coordinates, $t'=t-z/v_g$ is the local time for the group velocity $v_g$, and $k_\perp(\omega)= (\omega/c)\sin\theta(\omega) = \sqrt{(\omega/c)^2 - k_z^2(\omega)}$ is the transversal wave number (modulus of the transversal projection of the wave vector), or, given the linear variation of $k_z(\omega)$,
\begin{equation}\label{KPERP}
k_\perp(\omega) = \sqrt{\left(\frac{\omega}{c}\right)^2 - \left(a+\frac{\omega}{v_g}\right)^2}\,.
\end{equation}
The integral has been limited to nonnegative frequencies to yield the analytical signal complex representation of the electric field, whose real part is the real field. The limitation to real $k_\perp(\omega)$ expresses the restriction that the axial wave number $a+\omega/v_g$ cannot be higher than $\omega/c$. The electric field of X-waves ($a=0$, $v_g>c$) does not depend on $z$. The transversal wave number is the straight line $k_\perp(\omega)=(\sin\theta/c)\omega$ crossing the origin $\omega=0$ with slope $\sin\theta/c$ [Fig.\ref{Fig1}(b)], corresponding to a constant cone angle $\theta=\sin^{-1}\left(c \sqrt{1/c^2-1/v_g^2}\right)$. For GX-waves ($a \neq 0$, $v_g>c$) the electric field oscillates with axial period $2\pi/a$. The transversal wave number $k_\perp(\omega)$ is a branch of a hyperbola starting at some positive frequency $\omega_z$ (the other branch lies entirely in $\omega<0$), with the same asymptotic slope $\sin\theta/c$ as the X-wave of the same $v_g$ [Fig. 1(b)], and an $\omega$-dependent cone angle approaching $\theta$ at large $\omega$. We will fix the frequency $\omega_z$ as an important frequency of GX-waves by setting
\begin{equation}
a=\omega_z\left(\frac{1}{c}-\frac{1}{v_g}\right)\,,
\end{equation}
so that $\omega_z=0$ specifies an X-wave, and $\omega_z>0$ a GX-wave, see Fig.~\ref{Fig1}(b).
\subsection{Broadband X-waves with OAM}
Taking, as in \cite{LU}, the broadband exponential spectrum $f(\omega)=\exp(-\epsilon \omega)$, and setting $a=0$ ($\omega_z=0$) and $k_\perp(\omega) =(\sin\theta/c)\omega$, the integral in (\ref{LW}) can be carried out to yield
\begin{eqnarray}\label{XW}
E(r,z,t) &=&\frac{1}{\pi \sqrt{(\epsilon+it')^2 + \left(\frac{\sin\theta}{c}{r}\right)^2}} \times \nonumber\\
&\times& \frac{\left(\frac{\sin\theta}{c}{r}\right)^{|m|}}{\left[\sqrt{(\epsilon+it')^2 + \left(\frac{\sin\theta}{c}{r}\right)^2}+\epsilon +i t'\right]^{|m|}}
\end{eqnarray}
(and multiplied by $(-1)^{|m|}$ if $m<0$). Equation (\ref{XW}) was already derived in the pioneering work of Lu and Greenleaf \cite{LU}: The integrals involved in obtaining the expression for the fundamental, OAM-free X-wave were performed with Bessel functions of arbitrary order $m$. The expression with $m\neq 0$, however, received no attention either in that work or subsequently, to the best of our knowledge.
The electric field approaches zero as $r^{|m|}$ close to the vortex singularity, and as $1/r$ at large enough distances, thus carrying infinite energy, as does the OAM-free X-wave. In Figs. \ref{Fig2}(a-d) the real electric field of X-waves without and with OAM can be compared. There is no light at times immediately outside the X arms, so that the duration at each radial distance is the time between the X arms, $2\Delta t= 2(\sin\theta/c)r$.
The most evident and relevant difference is that with $m=0$ the electric field is a unipolar pulse that splits into two unipolar pulses, while with $m\neq 0$ the temporal pulse shape at any radial distance has approximately $|m|$ zeros, or more precisely, the smallest even number greater than or equal to $|m|$, e.g., 2 for $|m|=1,2$; 4 for $|m|=3, 4$, and so on. The number of oscillations may slightly differ depending on the particular criterion. According to the instantaneous frequency analysis below, the number of oscillations is $|m|/2$ irrespective of whether $m$ is even or odd. These oscillations are more clearly seen in Figs. \ref{Fig3}(a-c) for $|m|=8$ at different radial distances. Thus, the increase of the number of oscillations with the magnitude of the topological charge does not only pertain to the vicinity of the vortex, as reported in \cite{CONTI1}, but to the whole X-wave. These are intrinsic oscillations associated with OAM, and result from the fact that the inverse Fourier transform $\int_0^\infty J_m\left[(\sin\theta/c)\omega r \right]e^{-i\omega t}d\omega$ has these zeros and oscillations. The exponential $e^{-\epsilon \omega}$ does not remove them, but only makes those at the trailing and leading parts of the pulse have smaller and smaller amplitude towards the vortex center, as seen in the temporal shapes in Figs. \ref{Fig3} from (a) to (c). This softening is the result of the increasing apodization of the Bessel function by the exponential spectrum towards the vortex center, as observed in the respective spectral densities in Figs. \ref{Fig3} from (d) to (f).
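The temporal shapes discussed here can be reproduced directly from (\ref{XW}); a minimal Python/NumPy sketch (the principal branch of the complex square root is assumed, and the numerical values follow those of the figures) is:
\begin{verbatim}
import numpy as np

c, vg, eps, m = 0.0003, 0.0004, 0.2, 8      # mm/fs, mm/fs, fs, topological charge
sin_th = c * np.sqrt(1/c**2 - 1/vg**2)      # sin(theta)

def x_wave(r, tp):
    # Analytic X-wave (XW); its real part is the real electric field.
    rho = (sin_th / c) * r
    s = np.sqrt((eps + 1j*tp)**2 + rho**2)  # principal branch assumed
    return rho**abs(m) / (np.pi * s * (s + eps + 1j*tp)**abs(m))

tp = np.linspace(-40.0, 40.0, 801)          # local time t' in fs
field = np.real(x_wave(5e-3, tp))           # temporal slice at r = 5 micrometers
\end{verbatim}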
\begin{figure}[!]
\begin{center}
\includegraphics*[height=11.3cm]{Fig3.eps}
\end{center}
\caption{\label{Fig3} (a-c) Real electric field of X-waves with $v_g=0.0004$ mm/fs, $\epsilon =0.2$ fs and $|m|=8$ at (a) $r=0.5 r_M$, (b) $r=r_M$ and (c) $r=2r_M$, where $r_M$ is the radial distance of maximum fluence. $\Delta t =(\sin\theta/c) r$ indicates the location of the X arms, where the X-wave terminates. All fields are normalized to their peak values. (d-f) Respective power spectra (solid curves), the Bessel factor $J_m(k_\perp r)$ (dashed curves), and the broadband spectrum factor $f(\omega)= e^{-\epsilon \omega}$ (dotted curves). The vertical lines indicate the central frequency about $t'=0$, $\omega_c(r)$, coinciding with the first rise of the Bessel functions in each case.}
\end{figure}
With a fixed number of oscillations at all radii within a linearly increasing time interval $2\Delta t = 2(\sin\theta/c) r$ between the X arms, their frequency must decrease approximately in inverse proportion to $r$. Also, with a number of oscillations proportional to $|m|$, their frequency at any particular radius $r$ must increase proportionally to $|m|$.
For a more quantitative analysis, we consider the instantaneous frequency, defined as $\omega_i =- d\,\mbox{arg} E/dt'$, which can also be evaluated from
\begin{equation}
\omega_i(r,t') = {\rm Re}\left\{\frac{\int_0^\infty e^{-\omega(\epsilon+it')}J_m [k_\perp(\omega) r]\omega d\omega}{\int_0^\infty e^{-\omega(\epsilon+it')}J_m [k_\perp(\omega) r] d\omega}\right\}\,,
\end{equation}
yielding
\begin{equation}\label{WCE}
\omega_i(r,t') = {\rm Re}\left\{\frac{|m|\sqrt{(\epsilon+it')^2 + \left(\frac{\sin\theta}{c}r\right)^2}+(\epsilon+it')}{(\epsilon+it')^2+\left(\frac{\sin\theta}{c}r\right)^2}\right\}\,.
\end{equation}
Simple inspection shows that $\omega_i(r,t')$ takes a minimum value
\begin{equation}\label{WC}
\omega_c(r)\equiv \omega_i(r,t'=0) = \frac{|m|\sqrt{\epsilon^2 + \left(\frac{\sin\theta}{c}r\right)^2}+\epsilon}{\epsilon^2+\left(\frac{\sin\theta}{c}r\right)^2}\,,
\end{equation}
at $t'=0$, or central instantaneous frequency, that remains almost constant in time except in the vicinity of the X arms, and is
plotted as a function of $r$ for several values of $|m|$ in Fig. \ref{Fig4}(a). As the minimum frequency at each radial distance, it coincides with the first rise of the Bessel function in the spectrum, indicated by vertical lines in Figs. \ref{Fig3}(d-f). In fact a good approximation to (\ref{WC}) can be derived by equating the argument $x=k_\perp r= (\sin\theta/c)\omega r$ of the Bessel function $J_m(x)$ to that of the first rise of the Bessel function, $x \simeq |m|$, which yields
\begin{equation}\label{WC2}
\omega_c(r)\simeq \frac{|m|}{(\sin\theta/c) r}\,.
\end{equation}
This approximate equality is seen in Fig. \ref{Fig4}(a) to fit (\ref{WC}) accurately except in a tiny radial region [see inset in Fig. \ref{Fig4}(a)] about the vortex (compared to the radius of maximum X-wave energy, $r_M$). Thus, except in that region, the frequency $\omega_c(r)$ is independent of the particular broadband spectrum defined by $\epsilon$, and is inversely proportional to $r$, as expected. For $r\rightarrow 0$, the exact formula (\ref{WC}) yields the finite value
\begin{equation}\label{WC0}
\omega_c (0) = \frac{|m|+1}{\epsilon}\,,
\end{equation}
which is similar to the result in Refs. \cite{CONTI1,CONTI2}. From the latter relation and (\ref{WC2}), we also conclude that the blue shift proportional to the magnitude of the topological charge does not only take place in the vicinity of the vortex, but affects the whole X-wave.
Equation (\ref{WCE}) for the instantaneous frequency at any time allows us to evaluate the number of oscillations within the X arms. In the same way as $\omega_c(r)$, $\omega_i(r,t')$ in (\ref{WCE}) turns out to be almost independent of $\epsilon$, except in the vicinity of the X-arms, and to be approximately given by
\begin{equation}
\omega_i(r,t') \simeq \frac{|m|}{\sqrt{\left(\frac{\sin\theta}{c}r\right)^2- t^{\prime 2}}}\,
\end{equation}
provided that $|t'|<\Delta t = (\sin\theta/c)r$, i. e., within the X arms. Averaging over this time interval (integrating and dividing by $2\Delta t$) yields an average instantaneous frequency at each radius as
\begin{equation}
\bar \omega_i(r) = \frac{\pi}{2}\frac{|m|}{(\sin\theta/c)r}\,.
\end{equation}
One can then evaluate the number of oscillations as the full duration $2\Delta t$ over the average period $2\pi/\bar\omega_i(r)$, resulting in a number of oscillations equal to $|m|/2$.
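These relations, (\ref{WCE})--(\ref{WC2}) and the estimate of $|m|/2$ oscillations, are easy to check numerically (minimal Python/NumPy sketch with the parameters of Fig. \ref{Fig3}):
\begin{verbatim}
import numpy as np

c, vg, eps, m = 0.0003, 0.0004, 0.2, 8
s = np.sqrt(1/c**2 - 1/vg**2)                # sin(theta)/c, in fs/mm

def omega_inst(r, tp):
    # Instantaneous frequency (WCE).
    z = eps + 1j*tp
    return np.real((abs(m)*np.sqrt(z**2 + (s*r)**2) + z) / (z**2 + (s*r)**2))

r = 5e-3                                     # mm
dt = s*r                                     # half duration between the X arms
print(omega_inst(r, 0.0), abs(m)/(s*r))      # exact (WC) vs approximation (WC2)
print(2*dt * (np.pi*abs(m)/(2*s*r)) / (2*np.pi))   # number of oscillations = |m|/2
\end{verbatim}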
\begin{figure}[!]
\begin{center}
\includegraphics*[height=4.1cm]{Fig4.eps}
\end{center}
\caption{\label{Fig4} (a) Central instantaneous frequency of the oscillations, $\omega_c(r)$, as a function of the radius $r$ for the indicated values of $|m|$ of X-waves with $v_g=0.0004$ mm/fs and $\epsilon =0.2$ fs, as given by the exact expression (\ref{WC}) (solid curves) and the approximate expression (\ref{WC2}) (dashed curves). They are almost indistinguishable except in the immediate vicinity of the vortex singularity. This region is enlarged in the inset. (b) Radial profiles of fluence of the same X-waves, numerically evaluated from (\ref{F}), and normalized to their peak values. In (a) and (b) the vertical lines are $r_M$ given by (\ref{RM}), locating approximately the radii of maximum fluence. The horizontal line in (a) helps to visualize that the central instantaneous frequency $\omega_f=\ln(1.7)/2\epsilon$ is the same at the respective radii of maximum fluence, i. e., independent of $m$.}
\end{figure}
The global blue shift of X-waves with $|m|$ might lead one to think that the whole X-wave becomes bluer and bluer with increasing magnitude of the topological charge. However this is only the case in the immediate vicinity of the vortex, as seen in (\ref{WC0}). According to (\ref{WC2}), valid out of this region, any given frequency is displaced radially outwards as $|m|$ is increased. As shown below, the radius of maximum energy density (fluence), or bright ring to a time-integrating detector, is also displaced with increasing $|m|$ in such a way that the frequency of the bright ring is independent of $m$, and in this sense it can be said that X-waves are of the same color irrespective of their OAM.
The fluence is given by ${\cal E}(r)=\int_{-\infty}^\infty (\mbox{Re}E)^2dt' =\frac{1}{2}\int_{-\infty}^\infty|E|^2 dt'$, or, in terms of the spectral density, by
\begin{equation}\label{F}
{\cal E}(r)=\frac{1}{\pi}\int_0^\infty e^{-2\epsilon\omega}|J_m(k_\perp(\omega) r)|^2 d\omega \,,
\end{equation}
which is plotted in Fig. \ref{Fig4}(b) for several values of $|m|$ for illustration. Although the above integral does not admit analytical integration, detailed numerical inspection shows that the area under the product of $e^{-2\epsilon\omega}$ and $|J_m(k_\perp r)|^2$, and hence the fluence, is maximum at the radius $r_M$ where the frequency of the first rise of $|J_m(k_\perp r)|^2$, i. e. $\omega_c(r_M)$, coincides with the frequency $\omega_f$ at which the broadband spectral density $e^{-2\epsilon\omega}$ has decayed to about $1/1.7 \simeq 0.588$ of its value at $\omega=0$, i. e., $\omega_f \simeq \ln (1.7)/2\epsilon$. With this relative position, the Bessel function is not too damped and not too oscillatory, as exemplified in Figs. \ref{Fig3} from (d) to (f). The frequency at the bright ring, $\omega_c(r_M)\simeq \omega_f$, is then independent of $m$, as illustrated in Fig. \ref{Fig4}(a).
The fact that the frequency at the bright ring is solely determined by $f(\omega)$ was also reported in \cite{PORRAS1} for X-waves with bell-shaped spectra, and for ultrashort Laguerre-Gauss pulses in \cite{PORRAS1,PORRAS2,PORRAS3}. From (\ref{WC2}) equated to $\omega_f$, we obtain the radius of the bright ring as
\begin{equation}\label{RM}
r_M\simeq \frac{|m|}{\omega_f (\sin \theta/c)}\,,
\end{equation}
which is proportional to $|m|$, and provides a good approximation to the exact radius, see Fig. \ref{Fig4}(b).
Also in Ref. \cite{PORRAS1}, the lower bound to the duration at the bright ring of X-waves carrying $m$ units of OAM is established as $\Delta t \gtrsim |m|/\omega_f$ (half duration). Broadband X-waves have just the minimum duration $\Delta t=(\sin\theta/c) r_M = |m|/\omega_f$, and are therefore the minimal X-waves capable of carrying $m$ units of OAM.
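The location of the bright ring can be checked numerically from (\ref{F}) and (\ref{RM}); a short Python sketch (assuming SciPy for the Bessel function; the frequency grid and its upper cutoff are illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.special import jv

c, vg, eps, m = 0.0003, 0.0004, 0.2, 8
s = np.sqrt(1/c**2 - 1/vg**2)                  # sin(theta)/c
w_f = np.log(1.7) / (2*eps)
r_M = abs(m) / (w_f * s)                       # approximate bright-ring radius (RM)

w = np.linspace(0.0, 40.0, 20001)              # rad/fs; exp(-2*eps*40) ~ 1e-7
dw = w[1] - w[0]

def fluence(r):
    # Fluence integral (F), simple Riemann sum.
    return np.sum(np.exp(-2*eps*w) * jv(m, s*w*r)**2) * dw / np.pi

rs = np.linspace(0.2, 3.0, 150) * r_M
print(r_M, rs[np.argmax([fluence(r) for r in rs])])   # (RM) vs numerical maximum
\end{verbatim}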
\subsection{Broadband GX-waves with OAM}
Integral (\ref{LW}) with the exponentially decaying Bessel beam spectrum and $k_\perp(\omega)$ in (\ref{KPERP}) with $\omega_z\neq 0$ ($a\neq 0$) cannot be performed analytically, but the spatiotemporal structure of these GX-waves can be easily understood from that of the X-wave with $a=0$ of the same group velocity and vorticity. Without OAM, GX-waves and X-waves differ substantially [compare Fig. \ref{Fig2}(a) with Fig. \ref{Fig5}(a)] because the spectrum $f(\omega)J_m(k_\perp r)$ of GX-waves with $m=0$ is highly peaked at the positive cut-off frequency $\omega_z$ (where $k_\perp r=0$), which is responsible for the infinite temporal oscillations observed at any radial distance. For GX-waves with OAM, however, the spectrum $f(\omega)J_m(k_\perp r)$ vanishes at $\omega_z$, which removes these oscillations and makes the GX-wave with OAM resemble much more closely the X-wave with OAM [compare Fig. \ref{Fig2}(c) with Fig. \ref{Fig5}(b)].
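Since no closed form is available, the GX-wave field can be evaluated by direct quadrature of (\ref{LW}) with (\ref{KPERP}); a minimal Python/SciPy sketch (with an illustrative frequency grid and cutoff) is:
\begin{verbatim}
import numpy as np
from scipy.special import jv

c, vg, eps, m, w_z = 0.0003, 0.0004, 0.2, 8, 2.5   # units as in the figures
a = w_z * (1/c - 1/vg)

def k_perp(w):
    # Transversal wave number (KPERP), real branch only.
    return np.sqrt(np.maximum((w/c)**2 - (a + w/vg)**2, 0.0))

w = np.linspace(w_z, 60.0, 30001)                   # rad/fs; exp(-eps*60) ~ 6e-6
dw = w[1] - w[0]

def gx_wave(r, tp, z=0.0):
    # Quadrature of the localized-wave integral (LW).
    integrand = np.exp(-eps*w) * jv(m, k_perp(w)*r) * np.exp(-1j*w*tp)
    return np.exp(1j*a*z) * np.sum(integrand) * dw / np.pi

field = np.real(gx_wave(2e-3, 0.0))    # real field at r = 2 micrometers, t' = 0
\end{verbatim}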
\begin{figure}[!]
\begin{center}
\includegraphics[height=4.6cm]{Fig5.eps}
\end{center}
\caption{\label{Fig5} Real electric field of GX-waves with $v_g=0.0004$ mm/fs, $\omega_z=2.5$ rad/fs, and $\epsilon =0.2$ fs (a) without OAM and (b) with OAM.}
\end{figure}
\begin{figure}[!]
\begin{center}
\includegraphics*[height=10.7cm]{Fig6.eps}
\end{center}
\caption{\label{Fig6} (a-c) Real electric field of GX-waves with $v_g=0.0004$ mm/fs, $\omega_z=2.5$ rad/fs, $\epsilon =0.2$ fs and $|m|=8$ at $r=0.3 r_M$, $r=r_M$ and $r=3r_M$, where the radius of maximum fluence is given by (\ref{RM2}). $\Delta t =(\sin\theta/c) r$ indicates the location of the X arms. (d-f) Respective power spectra (solid curves), the Bessel factor $J_m(k_\perp r)$ (dashed curves), and the broadband spectrum factor $f(\omega)= e^{-\epsilon \omega}$ (dotted curves). The vertical lines indicate the central frequency $\omega_c(r)$ given by (\ref{WC3}), and coinciding with the rise of the Bessel functions in each case.}
\end{figure}
In the vicinity of the vortex the number of oscillations is indeed the same as that of the X-wave of the same vorticity [Fig. \ref{Fig6}(a)]. This feature can be understood from the fact that the slope of $k_\perp(\omega)$ at high frequencies is the same as for the X-wave [Fig. \ref{Fig1}(b)], and that these high frequencies are located in the vicinity of the vortex. The cut-off frequency $\omega_z$ plays a negligible role in the spectrum at these distances [Fig. \ref{Fig6}(d)]. Moving towards the periphery, the number of oscillations becomes gradually larger than that of the X-wave, growing without bound, with the oscillation frequency approaching the constant value $\omega_z$ [Fig. \ref{Fig6}(b)], because the spectrum approaches $\omega_z$ [Fig. \ref{Fig6}(e)] with increasing radius. At large enough radius [Fig. \ref{Fig6}(c)] the oscillations go beyond the X arms, as for the GX-wave without OAM, because the spectrum becomes dominated at these large distances by the cut-off frequency [Fig. \ref{Fig6}(f)].
An approximate expression for the radial distribution of frequencies at the GX-wave temporal center, $t'=0$, can be obtained as for X-waves. Equating the argument of $J_m(x)$, with $x=k_\perp(\omega)r$ and $k_\perp(\omega)$ given by (\ref{KPERP}), to the location of the first rise of $J_m(x)$, $x\simeq |m|$, we obtain a quadratic equation in $\omega$ whose positive solution is
\begin{equation}\label{WC3}
\omega_c(r) \simeq \omega_z + \frac{c^2}{\sin^2\theta} \left( -\frac{a}{c} +\sqrt{\frac{a^2}{c^2}+ \frac{\sin^2\theta}{c^2}\frac{m^2}{r^2}} \right),
\end{equation}
which is independent of $\epsilon$, as for X-waves, and approaches $\omega_z$ at large radius, as expected [Fig. \ref{Fig7}(a)]. Expression (\ref{WC3}) only fails in the close vicinity of the vortex, where the frequency becomes $\epsilon$-dependent and reaches approximately the same value $\omega_c(0)\simeq (|m|+1)/\epsilon$ as for X-waves.
We also observe that the central frequency $\omega_c(r_M)$ at the ring of maximum fluence is substantially independent of $|m|$ [Fig. \ref{Fig7}(a)] and determined solely by the frequency at which $|f(\omega)|^2= e^{-2\epsilon \omega}$ has decayed to approximately the same fraction, $1/1.7\simeq 0.588$, of its value at the cut-off frequency $\omega_z$ as for X-waves, i. e., $\omega'_f =\omega_z + \ln(1.7)/2\epsilon$. Equating $\omega'_f$ to $\omega_c(r)$ in (\ref{WC3}), we obtain, after some algebra, the radius of maximum fluence as
\begin{equation}\label{RM2}
r_M\simeq \frac{|m|}{\sqrt{\omega_f^2\frac{\sin^2\theta}{c^2} + 2\omega_f\frac{a}{c}}}\,.
\end{equation}
Expression (\ref{RM2}) provides a reasonably good approximation to the radius of maximum fluence [Fig. \ref{Fig7}(b)]. This radius continues to be proportional to $|m|$, but is smaller than for X-waves.
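As a consistency check, (\ref{WC3}) is precisely the positive root of $k_\perp(\omega)r=|m|$ with $k_\perp(\omega)$ given by (\ref{KPERP}); a short Python sketch (illustrative numerical values) is:
\begin{verbatim}
import numpy as np

c, vg, eps, m, w_z = 0.0003, 0.0004, 0.2, 8, 2.5
a = w_z * (1/c - 1/vg)
s2 = 1/c**2 - 1/vg**2                              # sin^2(theta)/c^2

def omega_c(r):
    # Central instantaneous frequency of GX-waves (WC3).
    return w_z + (-a/c + np.sqrt(a**2/c**2 + s2*m**2/r**2)) / s2

r = 1e-3
w = omega_c(r)
k_perp = np.sqrt((w/c)**2 - (a + w/vg)**2)
print(np.isclose(k_perp*r, abs(m)))                # True
w_f = np.log(1.7) / (2*eps)
print(abs(m) / np.sqrt(w_f**2*s2 + 2*w_f*a/c))     # radius of maximum fluence (RM2), in mm
\end{verbatim}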
\begin{figure}[b!]
\begin{center}
\includegraphics*[height=4.2cm]{Fig7.eps}
\end{center}
\caption{\label{Fig7} (a) Central instantaneous frequency of the oscillations, $\omega_c(r)$, as a function of the radius $r$ for the indicated values of $|m|$ of GX-waves with $v_g=0.0004$ mm/fs, $\epsilon =0.2$ fs, and $\omega_z=2.5$ rad/fs as given by the approximate expression (\ref{WC3}). In the immediate vicinity of the vortex singularity $\omega_c(r)$ does not diverge but approaches $\omega_c(0)\simeq (|m|+1)/\epsilon$. (b) Radial profiles of fluence of the same GX-waves, numerically evaluated from (\ref{F}), and normalized to their peak values. In (a) and (b) the vertical lines are $r_M$ given by (\ref{RM2}), locating approximately the radii of maximum fluence. The horizontal lines in (a) help to visualize that the central instantaneous frequency $\omega'_f=\omega_z + \ln(1.7)/2\epsilon$ is the same at the respective radii of maximum fluence, i. e., independent of $m$, and that the frequency approaches $\omega_z$ at large radius.}
\end{figure}
\section{Conclusions}
We have described the strongly coupled spatiotemporal structure of broadband superluminal localized waves with OAM. Temporal oscillations at all the frequencies in the broadband spectrum are displayed at different radii between the X arms, with a fixed number of oscillations dictated by OAM. A steep red shift with radial distance in conjunction with a pronounced blue shift of the whole X-wave or GX-wave with the magnitude of the topological charge results in an invariant color at the ring of maximum energy density, whose frequency is only determined by the spectrum of Bessel beams (which would directly be related to the spectrum of the laser source).
The practical generation of these diffraction-free, dispersion-free, self-healing, and minimal OAM-carrier wave modes would have an impact on obvious applications such as free-space, classical and quantum communication systems \cite{HUANG,PATERSON,ZHANG} currently using Laguerre-Gauss modes as OAM carriers. In strong-field, nonperturbative light-matter interactions, such as the generation of high harmonics and attosecond pulses with OAM \cite{HERNANDEZ,GARIEPY,REGO}, replacing the standard Laguerre-Gauss modes with OAM-carrying X-waves would enormously lengthen the depth of focus where light and matter can interact with a more uniform axial field.
We have deliberately left subluminal localized waves with OAM aside since they are expected to follow completely different rules, as the transversal wave number dispersion is not hyperbolic but elliptical. This analysis is deferred to future work.
M.A.P. acknowledges support from Projects of the Spanish Ministerio de Econom\'{\i}a y Competitividad No. MTM2015-63914-P, and No. FIS2017-87360-P.
\section{Introduction}
\label{sec:intro}
Training an automatic speech recognition (ASR) system without the need to collect a large amount of transcribed data has been a long-lasting problem, especially for low-resource domains/languages. Previous efforts include model transfer learning, domain adaptation, knowledge distillation/teacher-student learning, and semi-supervised training \cite[\emph{inter alia}]{wang2015transfer,das2015cross,asami2017domain,li2017large,wang2020improving,manohar2018semi}. Recently, self-supervised learning (SSL) has emerged as a promising paradigm to tackle this problem. SSL for speech tasks leverages unlabeled data with a self-supervised loss in a pre-training stage, where it is capable of learning good contextualized representations from input speech. Then, after fine-tuning the pre-trained model with a small amount of transcribed speech in a conventional supervised manner, the performance can match that of models trained directly with a much larger amount of labeled data \cite{oord2018representation,schneider2019wav2vec,chung2020generative,liu2020mockingjay,ling2020deep,wang2021unispeech,hsu2021hubert}. Existing SSL methods for speech include contrastive predictive coding (CPC) \cite{oord2018representation,schneider2019wav2vec}, auto-regressive predictive coding \cite{chung2020generative}, and masked predictive encoding \cite{liu2020mockingjay,baevski2020vq,hsu2021hubert}. Certain others may fit in more than one category above \cite{baevski2020wav2vec,chung2021w2v}. Moreover, a recent work \cite{zhang2020pushing} iteratively performed pseudo-labeling in a more traditional way and fine-tuning on refined pre-trained models to further push the limits of SSL.
Noise robustness is another challenge for ASR in real-world applications \cite{li2015robust}. Speech recordings from real-world scenarios usually contain background noise and noise caused by recording imperfections, resulting in deteriorated ASR performance. Prevailing strategies dealing with this challenge are to plug a dedicated enhancement/denoising module into the pipeline of an ASR system as a front-end to suppress the noise, either by training that module separately \cite{vincent2017analysis,kinoshita2020improving,wang2020complex} or jointly \cite{subramanian2019speech,chang2019mino} with acoustic models. The motivation of joint training is to alleviate the problem that optimizing the enhancement objective independently does not necessarily lead to an optimal solution for the ASR task, even if it improves the intelligibility of speech. Either way, it adds complexity to the neural network models.
In this paper we focus on strengthening the noise robustness of the pre-trained model during SSL for ASR. Existing work to this end includes PASE+ \cite{ravanelli2020multi} where a variety of speech transformations were estimated from contaminated speech, and wav2vec-C \cite{sadhu2021wav2vec} where a reconstruction module was added on top of the quantized output of the wav2vec 2.0 network \cite{baevski2020wav2vec} and the reconstruction loss was jointly trained with the existing contrastive loss. Same as wav2vec-C, our work, wav2vec-Switch, is also based on wav2vec 2.0. However, instead of a reconstruction loss, we add another contrastive loss as an auxiliary task to achieve noise robustness. Specifically, we batch both the original speech\footnote{We refer to ``original speech'' rather than ``clean speech'' to avoid any possible confusion, as the original speech in our case is not necessarily clean.} and its noisy version together and feed them to the network. Then in the contrastive learning the quantized targets in each original-noisy pair are switched, so that both the targets are treated as positive in their respective loss calculation. The motivation is that, if we want the contextualized representation robust to noise, the representation of an original speech should also be able to predict the target of its noisy version and vice versa. Different from the prior work, ours does not involve a process of transforming from one representation to another, but enforces the prediction consistency constraint in the contrastive loss without adding any complexity to networks. Experiments on synthesized noisy data and real-world noisy data from the CHiME-4 challenge \cite{vincent2017analysis} indicate the efficacy of our approach. In particular, compared to the baseline with data augmentation applied only, we observe 7.1--11.0\% relative word error rate (WER) reduction on synthesized noisy speech and 7.8\% reduction on real-world noisy speech when decoding without language model (LM), while still maintaining the performance on the original speech. Even in the presence of a strong neural LM, the WER reduction is 2.9--4.9\% and 5.7\% respectively. Moreover, our results on the CHiME-4 data, with only the unlabeled 960-hour LibriSpeech audio for pre-training, are comparable to, or even better than, other work with a complicated speech enhancement module.
\section{Models}
\vspace{-1mm}
\subsection{Wav2vec 2.0}
\vspace{-1mm}
\label{sec:wav2vec2}
We recapitulate the wav2vec 2.0 model on which our method is based. Compared to its predecessors \cite{schneider2019wav2vec,baevski2020vq}, wav2vec 2.0 combines masked prediction and contrastive learning into a unified model during pre-training. It has a feature encoder $f: \mathcal{X} \mapsto \mathcal{Z}$ with a raw audio waveform $\mathbf{x} \in \mathbb{R}^T$ as its input, and a latent representation $Z=[\mathbf{z}_1,\ldots,\mathbf{z}_{T'}]$, obtained by time-domain down-sampling through a set of convolutional blocks, as its output; a context network $g: \mathcal{Z} \mapsto \mathcal{C}$ that takes the masked $Z$ and outputs a contextualized representation $\mathbf{c}_t$ at each masked position $t$ through several blocks of Transformers \cite{vaswani2017attention}; and a quantization module $h: \mathcal{Z} \mapsto \mathcal{Q}$ discretizing the unmasked $Z$ to $Q$ using a finite codebook via Gumbel Softmax \cite{jang2017categorical} and product quantization \cite{jegou2011product}. The contrastive loss is applied at each masked position $t$, discriminating the true quantized representation $\mathbf{q}_t$ (the positive sample) from $K$ distractors $\tilde{Q}_t=\{\tilde{\mathbf{q}}_1,\ldots,\tilde{\mathbf{q}}_K\}$ (the negative samples) drawn from other masked positions within the same training example:
\begin{eqnarray}
\label{eq:contrastive_loss}
\mathcal{L}^\mathrm{C}(C,Q) &=& \sum_{t=1}^N \mathcal{L}_t^\mathrm{C}(C,Q)/N \\
\mathcal{L}_t^\mathrm{C}(C,Q) &=& - \log \frac{\exp(\mathrm{sim}(\mathbf{c}_t,\mathbf{q}_t))}{\sum_{\tilde{\mathbf{q}} \in \tilde{Q}_t} \exp(\mathrm{sim}(\mathbf{c}_t,\tilde{\mathbf{q}}))}
\end{eqnarray}
where $N$ is the number of masked positions at which the loss is computed, and $\mathrm{sim}(\cdot,\cdot)$ is implemented as the \emph{cosine similarity} function. In addition, a diversity loss $\mathcal{L}^\mathrm{D}$ encouraging codebook utilization is implemented as the negative perplexity of the Gumbel Softmax output. The total loss to optimize is:
\begin{equation}
\label{eq:total_loss}
\mathcal{L} = \mathcal{L}^\mathrm{C} + \alpha \mathcal{L}^\mathrm{D}
\end{equation}
where $\alpha$ is a coefficient. During fine-tuning, the quantization module is discarded and the parameters inside the feature encoder are frozen. All the other network parameters are updated with the CTC loss \cite{graves2006connectionist}.
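For concreteness, the contrastive loss in Eq. (\ref{eq:contrastive_loss}) can be sketched in a few lines of PyTorch-style Python. This is an illustrative sketch rather than the fairseq implementation; the tensor shapes, the temperature-free formulation, and the convention of placing the positive in the first logit column are assumptions made for readability, and, as is common for InfoNCE-style losses, the positive candidate is included among the logits over which the softmax is taken.
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_loss(c, q_pos, q_neg):
    """Sketch of the contrastive loss at the masked positions.

    c:     (N, D) contextualized representations at N masked positions
    q_pos: (N, D) quantized targets at the same positions (positives)
    q_neg: (N, K, D) K distractors per position, drawn from other
           masked positions of the same utterance
    """
    pos = F.cosine_similarity(c, q_pos, dim=-1)                  # (N,)
    neg = F.cosine_similarity(c.unsqueeze(1).expand_as(q_neg),
                              q_neg, dim=-1)                     # (N, K)
    # Positive logit in column 0, distractors after it.
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)           # (N, K+1)
    targets = torch.zeros(logits.size(0), dtype=torch.long,
                          device=logits.device)
    return F.cross_entropy(logits, targets)
\end{verbatim}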
\vspace{-1mm}
\subsection{Wav2vec-Switch}
In the pre-training stage of wav2vec 2.0, the decision to sample distractors only from masked positions within the same training example, rather than from arbitrary examples, is critical; otherwise ASR performance suffers \cite{baevski2020wav2vec}. The reason is that sampling only within the same training example avoids learning features irrelevant to ASR, e.g., speaker or environmental characteristics. However, no mechanism is designed to achieve noise robustness during pre-training: when a noisy utterance is given, both the positive and negative samples used by the contrastive loss contain noise, and there is no explicit way to differentiate the noise from the speech or to learn a contextualized representation invariant to noise. Our intuition is that, if the contextualized representation is robust to noise, the representation of the original/noisy speech should be capable of predicting the target of the noisy/original speech as well. Motivated by this, we propose wav2vec-Switch as follows.
For each mini-batch of original waveforms $X \in \mathbb{R}^{B \times T}$, where $B$ is the batch size, we duplicate $X$ and apply independently sampled noise to each row (example), yielding a noisy version $\hat{X}$ of $X$. Then both $X$ and $\hat{X}$ are fed into the wav2vec 2.0 network in parallel and forwarded through the feature encoder $f$, the context network $g$, and the quantization module $h$. At this point we have 4 quantities\footnote{Here we slightly abuse the notation $C,Q,\hat{C},\hat{Q}$ to denote the batched versions of their respective quantities described in Section \ref{sec:wav2vec2}.}:
\begin{eqnarray}
\label{eq:four_quantities}
C=g(f(X)), \qquad Q=h(f(X)) \\\nonumber
\hat{C}=g(f(\hat{X})), \qquad \hat{Q}=h(f(\hat{X}))
\end{eqnarray}
In addition to the standard contrastive loss described in Eq. (\ref{eq:contrastive_loss}) where the loss takes $(C,Q)$ or $(\hat{C},\hat{Q})$ as its input arguments, we also switch the quantized targets $Q$ and $\hat{Q}$, and form two more tuples for the loss: $(C,\hat{Q})$ and $(\hat{C},Q)$. Therefore we obtain 4 contrastive loss quantities: $\mathcal{L}^\mathrm{C}(C,Q)$, $\mathcal{L}^\mathrm{C}(\hat{C},\hat{Q})$, $\mathcal{L}^\mathrm{C}(C,\hat{Q})$, and $\mathcal{L}^\mathrm{C}(\hat{C},Q)$. The new loss would be:
\begin{eqnarray}
\label{eq:new_contrastive_loss}
\mathcal{L}_{\mathrm{switch}}^\mathrm{C}(C,Q,\hat{C},\hat{Q})&=&\mathcal{L}^\mathrm{C}(C,Q)+\mathcal{L}^\mathrm{C}(\hat{C},\hat{Q}) + \\\nonumber &&\lambda\left(\mathcal{L}^\mathrm{C}(C,\hat{Q})+\mathcal{L}^\mathrm{C}(\hat{C},Q)\right)
\end{eqnarray}
where the coefficient $\lambda$ controls the weight of the term calculated from the switched targets, relative to the one obtained from the original targets. Fig. \ref{fig:model} also illustrates how the 4 contrastive loss quantities are calculated. Finally $\mathcal{L}^\mathrm{D}$ is added to the total loss the same way as in Eq. (\ref{eq:total_loss}). Note that when $\lambda=0$, the method reduces to the ``wav2vec 2.0 with data augmentation'' case, which will be compared against as the baselines in Section \ref{sec:exp}.
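A minimal sketch of Eq. (\ref{eq:new_contrastive_loss}) is given below, reusing the \texttt{contrastive\_loss} helper sketched in Sec.~\ref{sec:wav2vec2}; the \texttt{negatives} callable and the exact calling convention are illustrative assumptions rather than the actual fairseq code.
\begin{verbatim}
def switched_contrastive_loss(C, Q, C_hat, Q_hat, negatives, lam=0.3):
    """Sketch of the switched contrastive loss: standard plus
    target-switched terms.

    C, Q:         context / quantized representations of the original speech
    C_hat, Q_hat: the same quantities for the noisy copies
    negatives:    callable returning the K distractors for a given
                  set of quantized targets
    lam:          weight of the switched terms (0.3 in our experiments)
    """
    loss  = contrastive_loss(C,     Q,     negatives(Q))
    loss += contrastive_loss(C_hat, Q_hat, negatives(Q_hat))
    loss += lam * (contrastive_loss(C,     Q_hat, negatives(Q_hat))
                   + contrastive_loss(C_hat, Q,   negatives(Q)))
    return loss
\end{verbatim}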
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{figs/model.pdf}
\caption{This figure illustrates how the 4 contrastive losses are calculated in the proposed wav2vec-Switch model. The original-noisy pair of speech is fed into the network with identical internal states simultaneously. The loss quantities $\mathcal{L}^\mathrm{C}(C,\hat{Q})$ and $\mathcal{L}^\mathrm{C}(\hat{C},Q)$ in the middle are obtained by switching the quantized representations of the original and noisy speech as prediction targets.}
\label{fig:model}
\end{figure*}
We also need to keep in mind that the network's internal states for any specific input pair $(X,\hat{X})$ should be \emph{identical}. \emph{Identical} means that not only the architecture and parameters but also anything inside the network that relies on random states, including masked positions for the context network and dropout masks of all the dropout layers, should be the same for $X$ and $\hat{X}$. Otherwise the representations of the original speech and its noisy version will not track each other, and our approach will not behave as we expect or learn representations with a meaningful interpretation. The ablation study in Section \ref{sec:ablation} also verifies the importance of enforcing such a constraint.
In practice, we batch $X$ and $\hat{X}$ together and feed the resulting ``large'' mini-batch to the network. Every time before a random function is invoked, we save the current random state. We restore that random state immediately after the random function has been invoked for the first half of the mini-batch, and then execute the same function for the second half. In this way, we ensure that the network's internal states for $X$ and $\hat{X}$ are always identical.
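The following sketch illustrates the state-restoring trick. It shows the simpler case of two separate forward passes and only the CPU random state; in the actual implementation the two halves live in one mini-batch and the state is restored around each individual random operation (and the corresponding GPU states), but the idea is the same.
\begin{verbatim}
import torch

def paired_forward(module, x, x_hat):
    """Run a module on an original-noisy pair with identical random draws.

    Saving the RNG state before the first half and restoring it before the
    second half makes dropout masks and masked positions identical for
    x and x_hat.
    """
    state = torch.get_rng_state()
    y = module(x)
    torch.set_rng_state(state)
    y_hat = module(x_hat)
    return y, y_hat
\end{verbatim}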
\section{Experiments}
\label{sec:exp}
\vspace{-1mm}
\subsection{Datasets}
\label{sec:datasets}
In order to validate the noise robustness of our proposed approach for ASR, we evaluate it on both synthesized and real-world noisy data. In this section, we first introduce how our data was prepared for these experiments. For the SSL paradigm, there are normally 3 stages: 1) self-supervised pre-training with a large unlabeled dataset; 2) fine-tuning with a relatively small set of labeled data; and 3) testing on data in the target domain. We explain the corresponding setup for each of these 3 stages.
In the synthesized data experiments, both the pre-training and test sets were formed by mixing the LibriSpeech corpus \cite{panayotov2015librispeech} with noise randomly drawn from the MUSAN corpus \cite{snyder2015musan}, and the signal-to-noise ratio (SNR) was uniformly sampled from the range between 5 and 10 dB. Note that the MUSAN corpus contains 3 categories of noise: \emph{music}, \emph{noise} and \emph{speech}. To avoid introducing potential confusion with actual speech when learning speech representations, we only used the first 2 categories of the noise from MUSAN and added them on-the-fly to the original 960-hour LibriSpeech training set for pre-training. For the test sets, we prepared the original \texttt{test-clean} and \texttt{test-other}, and several synthesized versions with different levels of SNR or different subsets of the MUSAN corpus being added, in order to evaluate the model on test sets with a variety of mismatched conditions. We used the original LibriSpeech \texttt{train-clean-100} labeled data for fine-tuning, and the reason is that we would like the final model not to degrade its performance on the original sets.
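The on-the-fly mixing at a target SNR can be sketched as follows; the function is a simplified stand-in for our data-loading code, and the variable names are illustrative.
\begin{verbatim}
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + noise has the requested SNR in dB."""
    if len(noise) < len(speech):                  # tile short noise clips
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    p_speech = np.mean(speech.astype(np.float64) ** 2)
    p_noise = np.mean(noise.astype(np.float64) ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

# During pre-training the SNR is drawn uniformly from [5, 10] dB:
# mixture = mix_at_snr(speech, noise, np.random.uniform(5, 10))
\end{verbatim}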
In the real noisy data experiments, we used the data from the CHiME-4 challenge\footnote{\url{http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/index.html}} \cite{vincent2017analysis}. The data in CHiME-4 is based on the ``WSJ0'' portion of the Wall Street Journal corpus \cite{paul1992design} (WSJ), either obtained by recording the actual talker's speech in real noisy environments (on a bus, in a cafe, in a pedestrian area, and at a street junction) (denoted as ``real data''), or by artificially mixing the clean WSJ speech with background noise and recording it using a 6-channel distant microphone array and a close-talk microphone (denoted as ``simulated data''). Specifically, we chose the real 1-channel track for testing. All the channels but channel 2 from both the ``real data'' and ``simulated data'' were used as independent utterances for pre-training and fine-tuning. Channel 2 was excluded because it corresponds to the ``behind'' microphone, which is of low recording quality. Since the CHiME-4 corpus is too small (less than 100 hours) to pre-train from scratch, we continued the pre-training with the CHiME-4 corpus on top of a model already pre-trained with LibriSpeech. Detailed comparisons with and without continual pre-training are available in Table \ref{tab:chime}.
\vspace{-2mm}
\subsection{Model Pre-training}
Our implementation was made upon the official release of wav2vec 2.0 from \textsc{fairseq}\xspace \cite{ott2019fairseq}, and the network architecture used throughout this paper was identical to the LibriSpeech \textsc{Base}\xspace configuration specified in \cite{baevski2020wav2vec}: 12 transformer blocks with hidden dimension 768 and 8 heads. Most of the training settings and strategies for the \textsc{Base}\xspace model were also carried over into ours, except that the batch sizes were doubled to accommodate the pairs of original-noisy examples. The coefficient $\lambda$ in Eq. (\ref{eq:new_contrastive_loss}) was set to 0.3 empirically for all the wav2vec-Switch experiments. We also found that a smaller learning rate (e.g., 1/5 of the one used in the pre-training with LibriSpeech) for the continual training led to better ASR performance.
All the models were trained with 32 NVIDIA Tesla V100 GPUs, each with 32GB memory (required for the double-sized batches). We picked the best checkpoint (in terms of the validation loss), rather than the last one, for continual pre-training or fine-tuning after that, so that we do not need to worry about the risk of over-training.
\vspace{-2mm}
\subsection{Model Fine-tuning}
We followed the \texttt{base\_100h} setup in the wav2vec 2.0 code except the specific data being used for the CHiME-4 experiments. All the fine-tuning models were trained with 2 GPUs. We chose the best checkpoint according to the validation WER for final evaluations.
\vspace{-2mm}
\subsection{Decoding and Language Models}
As we employed the CTC loss for fine-tuning, incorporating a language model during decoding is crucial for the best performance. For the LibriSpeech experiments, we downloaded an existing Transformer word-level language model\footnote{\url{https://dl.fbaipublicfiles.com/wav2letter/sota/2019/lm/lm_librispeech_word_transformer.pt}} and adopted the same decoding configurations as those corresponding to the \textsc{Base}\xspace 100-hour experiment introduced in the wav2vec 2.0 paper. For the CHiME-4 experiments, an LSTM-based word-level language model with a vocabulary size of 65,000 was trained on the text portion of the WSJ corpus (see \cite{wang2019espresso} for model details), and then the same decoding strategy was performed. The LM weight was tuned separately for each model.
\vspace{-1mm}
\subsection{Results on Synthesized Noisy Data}
\label{sec:synthesized}
We first show the WER (\%) results on both the original and synthesized noisy sets under the matched condition in Table \ref{tab:librispeech}. Besides wav2vec-Switch (the third row), we also include the results from the wav2vec 2.0 model pre-trained on the original 960-hour LibriSpeech corpus (the first row), and the model trained on the synthesized noisy data (the second row, a.k.a. the data augmentation baseline). It is not surprising that without ``seeing'' the noisy data in training, the performance on the noisy test sets is much worse than that on the original ones: WERs increase by 2.5--3.2 times. After adding noise to the training data, the performance on the noisy data is greatly improved, while it does not change significantly on the original sets. When further replacing the wav2vec 2.0 model with wav2vec-Switch, the performance on the noisy sets is improved relatively by 7.1--11.0\% without LM, or 2.9--4.9\% with LM. In addition, the WERs on the original sets are even slightly better than those in the baseline in the ``no LM'' case, and remain almost the same if the LM is used. These results are promising, as they clearly show that the model trained with wav2vec-Switch achieves noise robustness on the synthesized noisy data without hurting the performance on the original data.
\vspace{-1mm}
\begin{table}[ht]
\caption{Results on the original and synthesized noisy LibriSpeech test sets under the matched condition.}
\vspace{-4mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{lccccc}
\toprule
& \multirow{2}{*}{\textbf{LM}} & \multicolumn{2}{c}{\textbf{Original}} & \multicolumn{2}{c}{\textbf{Noisy} (5--10 dB)} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6}
& & test-clean & test-other & test-clean & test-other \\ \midrule
\multirow{2}{*}{wav2vec 2.0} & N & 5.9 & \textbf{13.4} & 15.6 & 33.1 \\
& Y & \textbf{2.6} & \textbf{6.6} & 8.0 & 21.3 \\ \midrule
\multirow{2}{*}{\makecell{\quad+ MUSAN \emph{music}+\emph{noise} (5--10 dB)\\ (Baseline)}} & N & 6.1 & 14.1 & 8.2 & 19.8 \\
& Y & \textbf{2.6} & 6.7 & 3.4 & 10.2 \\ \midrule
\multirow{2}{*}{wav2vec-Switch} & N & \textbf{5.8} & 13.6 & \textbf{7.3} & \textbf{18.4} \\
& Y & 2.7 & 6.7 & \textbf{3.3} & \textbf{9.7} \\ \bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:librispeech}
\vspace{-2mm}
\end{table}
\vspace{-3mm}
Next we present the results under mismatched conditions. \emph{Mismatched conditions} refers to the cases where the noise conditions for testing differ from those for training. Given that the noisy condition for training was \emph{music}+\emph{noise} (5--10 dB), we created 3 versions of noisy test sets with mismatched conditions: 1) \emph{music}+\emph{noise} (0--5 dB) was the version where the SNR range in the test sets has no overlap with that in the training set; 2) \emph{speech} (5--10 dB) was the one where the noise type being added to the test sets is different; and 3) \emph{speech} (0--5 dB) was when both the SNR range and noise type are different. Table \ref{tab:librispeech_mismatch} demonstrates the results without LM along with the gains obtained by using the proposed wav2vec-Switch instead of wav2vec 2.0. We can see that the improvements hold in all the mismatched conditions. However, the relative gain becomes smaller as the mismatch between the test and training noise conditions grows.
\begin{table}[ht]
\caption{Results on the synthesized noisy sets under different mismatched noisy conditions (without LM).}
\vspace{-4mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{\emph{music}+\emph{noise} (0--5 dB)} & \multicolumn{2}{c}{\emph{speech} (5--10 dB)} & \multicolumn{2}{c}{\emph{speech} (0--5 dB)}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& test-clean & test-other & test-clean & test-other & test-clean & test-other \\ \midrule
\makecell{wav2vec 2.0 + MUSAN \\\emph{music}+\emph{noise} (5--10 dB)} & 11.0 & 26.1 & 25.7 & 52.9 & 54.7 & 82.4 \\
wav2vec-Switch & \textbf{9.7} & \textbf{24.5} & \textbf{23.8} & \textbf{50.4} & \textbf{52.7} & \textbf{80.7} \\ \midrule
Gain (\%) & 11.8 & 6.1 & 7.4 & 4.7 & 3.7 & 2.1 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:librispeech_mismatch}
\end{table}
\vspace{-7mm}
\subsection{Results on Real Noisy Data}
\label{sec:real}
We now evaluate wav2vec-Switch on the CHiME-4 corpus. The 1-channel track real noisy data was used for model validation and evaluation. We also include the best results reported in the challenge \cite{du2016ustc} and other more recent ones \cite{chen2018building,wang2020complex}. All of them adopted a supervised training paradigm and had a dedicated speech enhancement module to preprocess the noisy speech input before an acoustic modeling module, and some of them even leveraged model ensembles. As presented in Table \ref{tab:chime}, the best results are from wav2vec-Switch with the continual pre-training. The relative improvement from the corresponding baseline is 7.8\% without LM (16.5 vs. 17.9), or 5.7\% with LM (6.6 vs. 7.0). It is also worth noting that, without any speech enhancement, our self-supervised approach followed by a simple CTC fine-tuning achieves better results than those using carefully designed enhancement algorithms. The only additional data we used were the unlabeled 960-hour LibriSpeech audio and the MUSAN corpus.
\begin{table}[ht]
\caption{Results on the CHiME-4 real 1-channel dev/eval sets.}
\vspace{-5mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{lcccc}
\toprule
& \textbf{continual pre-training} & \textbf{LM} & dev & eval \\
\midrule
Chen et al. (Kaldi Baseline) \cite{chen2018building} (2018) & & Y & 5.6 & 11.4 \\
Du et al. \cite{du2016ustc} (2016) & & Y & 4.6 & 9.2 \\
Wang et al. \cite{wang2020complex} (2020) & & Y & \textbf{3.5} & 6.8 \\
\midrule
\multirow{4}{*}{\makecell{wav2vec 2.0 + MUSAN \emph{music}+\emph{noise} (5--10 dB)\\ (Baseline)}} & \multirow{2}{*}{N} & N & 10.6 & 17.6 \\
& & Y & 3.7 & 7.2 \\ \cmidrule(lr){2-5}
& \multirow{2}{*}{Y} & N & 10.7 & 17.9 \\
& & Y & 4.6 & 7.0 \\\midrule
\multirow{4}{*}{wav2vec-Switch} & \multirow{2}{*}{N} & N & 10.2 & 16.8 \\
& & Y & 3.6 & 7.1 \\ \cmidrule(lr){2-5}
& \multirow{2}{*}{Y} & N & \textbf{10.0} & \textbf{16.5} \\
& & Y & \textbf{3.5} & \textbf{6.6} \\\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:chime}
\vspace{-6mm}
\end{table}
\vspace{-3mm}
\subsection{Ablation Study}
\label{sec:ablation}
To investigate the impact of not keeping the dropout masks or the masked positions the same within each original-noisy speech pair, we conducted one experiment on LibriSpeech where the dropout masks within the context network were different between the examples within each pair, and another experiment where the masked positions were different\footnote{We still keep the number of masked positions the same to ensure dimension match after switching the targets.}. We report the results without LM in Table \ref{tab:ablation}. For easy comparisons, we also copy 2 relevant rows from Table \ref{tab:librispeech}. It shows that despite a clear degradation from the one with identical dropout masks, the one with non-identical dropout masks is still better than the baseline on all the test sets except the original \texttt{test-clean}. However, having different masked positions resulted in a significant deterioration in WER (15.8--27.6\% increase), which is expected, since we cannot force representations to make consistent predictions when they are actually predicting different positions before and after the switch; attempting to do so ends up with a sub-optimal solution. These two experiments attest to the importance of maintaining identical dropout masks and masked positions for input pairs in wav2vec-Switch.
\begin{table}[ht]
\caption{Results on the original and synthesized noisy LibriSpeech sets related to whether to keep identical dropout masks or masked positions between speech pairs. The models were evaluated without LM. Row 1 and 2 are from Table \ref{tab:librispeech} for clearer comparisons.}
\vspace{-4mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{lccccc}
\toprule
& \multicolumn{2}{c}{\textbf{Original}} & \multicolumn{2}{c}{\textbf{Noisy} (5--10 dB)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& test-clean & test-other & test-clean & test-other \\ \midrule
\makecell{wav2vec 2.0 + MUSAN \emph{music}+\emph{noise} (5--10 dB) \\ (Baseline)} & 6.1 & 14.1 & 8.2 & 19.8 \\ \midrule
wav2vec-Switch w/ identical dropout masks & \textbf{5.8} & \textbf{13.6} & \textbf{7.3} & \textbf{18.4} \\\midrule
wav2vec-Switch w/o identical dropout masks & 6.4 & 13.8 & 7.8 & \textbf{18.4} \\ \midrule
wav2vec-Switch w/o identical masked positions & 7.4 & 16.5 & 8.6 & 21.3 \\ \bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:ablation}
\vspace{-2mm}
\end{table}
\vspace{-5mm}
\section{Conclusions}
\vspace{-1mm}
We present wav2vec-Switch, a self-supervised learning model based on wav2vec 2.0 that infuses noise robustness into the contrastive representation learning for ASR without introducing extra components to neural networks. Robustness to noise is achieved by feeding pairs of original and noisy speech into the network and enforcing prediction consistency between the original and noisy speech, i.e., treating the corresponding quantized targets of the original and noisy speech as positives in the contrastive learning. Experiments on both synthesized and real-world noisy speech exhibit the power of our proposed method for robust ASR. Future work includes pre-training on even larger unlabeled data, exploring other ways of getting contrastive samples, and extending our method beyond the contrastive learning.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\fontsize{8.7}{10.1}\selectfont
\section{Introduction}
Since the end of the 19th century, the diagnosis of many diseases - cancer in particular - has involved human inspection of stained tissue sections using a simple light microscope~\cite{chapman_scope_2020}. Histopathology in both research and clinical settings still involves microscopy-based inspection of physical slides, but a rapid shift to digital instruments and computational analysis (\textit{scope to screen}) is now underway~\cite{chapman_scope_2020}. Digital pathology~\cite{weinstein_prospects_1986} in a clinical setting focuses on the analysis of tissues stained with colorimetric dyes (primarily hematoxylin and eosin, H\&E~\cite{titford_long_2005}) supplemented by single-color immunohistochemistry methods that use antibodies to detect molecular features of interest~\cite{pallua_future_2020}. In research settings, recently developed high-plex imaging methods such as CyCIF~\cite{lin_cyclic_2016, lin_highly_nodate}, CODEX~\cite{goltsev_deep_2018}, and mxIF~\cite{gerdes_highly_2013} are used to measure the levels and sub-cellular localization of 20-60 proteins, providing single-cell information on cell identities and states in a preserved tissue environment. The resulting data are complex, involving multi-channel gigapixel images having $10^6$ or more cells. Underdevelopment of analytical and visualization methods is a barrier to progress in digital pathology, explaining the continuing dominance of physical slides.
Machine learning on high-plex tissue images has shown promise, particularly with respect to automated classification of cell types~\cite{liu_comparison_2019,campanella_clinical-grade_2019}, tissue morphologies~\cite{stoltzfus_cytomap_2020}, and cellular neighborhoods~\cite{madabhushi_image_2016}. However, such data-driven approaches do not leverage hard-won knowledge, known primarily to anatomic pathologists, about which cell and tissue morphologies are significantly associated with disease outcome or response to therapy. A hundred years of clinical pathology has also documented many striking and recurrent image features whose significance still remains unknown. A critical need therefore exists for new software tools that optimally leverage human-machine collaboration in ways that are not supported by existing interfaces~\cite{nicholas_sofroniew_naparinapari_2021, allan_omero_2012}.
Pathologists are very efficient at extracting actionable information from physical slides, frequently panning across a specimen while switching between low and high magnifications. They record key observations in notes and by placing dots on slides next to the key features. Digital software needs to reproduce this efficiency and functionality (including a ‘dotting’ function) while using visual metaphors to present associated data and using machine learning to find similar and dissimilar visual fields. Designing scalable visual interfaces that work in the context of high-volume clinical workflows~\cite{molin_diagnostic_2016} and scale to high-dimensional research data represents a substantial challenge.
We addressed these challenges as a team of visualization researchers, pathologists, and cell biologists via a process of goal specification, iterative testing and design, and real-world implementation in a biomedical research laboratory. We make three primary contributions. \textbf{(1)} We demonstrate a task-tailored, lens-centric focus+context technique, which enables intuitive interaction with large (ca. 100 GB) multi-channel images and linked multivariate data (Fig.~\ref{fig:teaser}). The lensing technique allows users to focus on different aspects of a region for close-up analysis while maintaining the surrounding context. We design novel domain-specific encodings in which features computed from the image (spatial cross-correlation or cell identity) can be accessed in conjunction with the image. \textbf{(2)} We integrate interactive real-time spatial histogram similarity search algorithms able to identify recurrent patterns across gigapixel multi-channel images at different resolutions. Integrated into the lens, this search guides analysts to regions similar to the one in focus, enabling exploratory analysis at scale. \textbf{(3)} We present a scalable system that combines lens and search features with interactive annotation tools, enabling a smooth transition from exploration to knowledge externalization. Analysts can save, filter, and restore regions of interest (ROIs) within the image space (along with underlying statistics of the filtered single-cell data, channel identities, and color settings) and export them for continued study. Two use-cases demonstrate the applicability of our approach to patient-facing (translational) cancer research and point to future applications in diagnosis and patient care.
\section{Related Work}
The related work is three-fold.
We first discuss large-scale image viewers as an enabler for our approach. We then summarize focus+context techniques in comparison to overview+detail and pan\&zoom, with a focus on image data. Lastly, we compare ROI annotation approaches.
\subsection{Scalable Image Viewers For Digital Pathology}
Many biomedical visualization systems focus on the display of large 2D imaging data and apply multi-resolution techniques such as image pyramids~\cite{in_e_1984} to handle large data sizes at interactive rates. DeepZoom~\cite{microsoft_silverlight_2021} hierarchically divides images into tile pyramids and delivers pieces as required by the viewer. Zarr~\cite{alistair_miles_zarr-developerszarr-python_2020, noauthor_zarr_nodate}, a file format and library, abstracts this concept by providing storage of chunked, compressed, N-dimensional arrays. Viewers such as OpenSeadragon~\cite{noauthor_openseadragon_nodate} and Viv~\cite{manz_viv_2020} leverage these libraries and add GPU-accelerated rendering capabilities. On top of that, many solutions offer data-management, atlas, and analysis capabilities. OMERO PathViewer~\cite{allan_omero_2012} is a widely used web-based viewer for multiplexed image data. As an extension to the data management platform OMERO, it supports a variety of microscope file formats.
Online cancer atlases such as Pancreatlas~\cite{saunders_pancreatlas_2020} and Pan-Cancer~\cite{weinstein_cancer_2013} support data exploration with storytelling capabilities. Minerva Story~\cite{hoffer_minerva_2020, rashid_interpretative_2020} is a new tool used to create atlases for the Human Tumor Atlas Network~\cite{rozenblatt-rosen_human_2020}.
Other solutions focus on combining image visualization with analytics. Napari~\cite{nicholas_sofroniew_naparinapari_2021} is a fast and light-weight multi-dimensional viewer designed for browsing, annotating, and analyzing large multi-dimensional images. Written in Python, it can be extended with analytic functionality, e.g., in combination with SciMap~\cite{nirmal_et_al_scimap-_nodate}. Other analytical tools focus on end-users, such as the open-source solutions histoCAT~\cite{schapiro_histocat_2017} and Facetto~\cite{krueger_facetto_2020}, and commercial tools such as Halo~\cite{indica_labs_halo_nodate} and Visiopharm's TissueAlign~\cite{tissue_align} supporting split-screen comparison for serial sections. Screenit~\cite{dinkla_screenit_2017-1} presents a design to analyze smaller histology images at multiple hierarchy levels. Similarly, ParaGlyder~\cite{morth_paraglyder_2020} is an analysis approach for multiparametric medical images that permits analysis of associated feature values and comparisons of volumetric ROIs by voxel subtraction.
Somarakis et al.~\cite{somarakis_visual_2021} offer comparison views with a focus on spatially-resolved omics data in a standard viewer. These tools feature multiple linked views for overview+detail exploration. In comparison, our solution focuses on interactive focus+context and rich annotation with contextual details displayed near the ROI and supports a neighborhood-aware similarity search on top of local image pixel and feature value comparison.
Most viewers operate on much smaller datasets. Our viewer builds on Facetto~\cite{krueger_facetto_2020} and Minerva~\cite{hoffer_minerva_2020, rashid_interpretative_2020} and supports multi-channel and cell-based rendering with linked data at a scale few other solutions support.
The main contribution of this paper, however, is the embedded interactive lensing technique for multivariate image data and its task-tailored features supporting the digital pathology workflow.
\vspace{-0.4em}
\subsection{Focus+Context-based Image Exploration}
Cockburn et al.\cite{cockburn_review_2009} categorize interaction techniques to work at multiple levels of detail into focus+context (F+C), overview+detail (O+D), zooming, and cue-based views.
F+C minimizes the seam between views by displaying the focus within the context, O+D uses spatial separation, zooming uses temporal separation~\cite{van_wijk_smooth_2003}, and cue-based methods selectively highlight or suppress items.
a generic see-through interface that lies between the application and the cursor.
Tominski et al.~\cite{tominski_survey_2014,tominski_interactive_2017} present a conceptual pipeline for lensing consisting of selection (what data), the lens-function (filters, analysis), and a join operation with the underlying visualization (mapping, rendering). They further categorize into lens properties (shape, position, size, orientation), and into data tasks, e.g., (geo)spatial analysis.
Different lenses to magnify, select, filter, color, and analyze image data were proposed:
Carpendale et al.~\cite{carpendale_extending_1997} present a categorization of 1-3D distortion techniques to magnify in 2D uniform grids. Focusing on lens-based selection,
MoleView~\cite{hurter_moleview_2011} selects spatial and attribute-related data ranges in spatial embeddings, and Trapp et al. present a technique for filtering multi-layer GIS data for city planning~\cite{trapp_3d_2008}. Similarly, Vollmer et al. propose a lens to aggregate ROIs in a geospatial scene to reduce information overload~\cite{vollmer_hierarchical_2018}.
Flowlens~\cite{gasteiger_flowlens_2011} features a lens for biomedical application: to minimize visual clutter and occlusions in cerebral aneurysms.
There are a few tools in digital histopathology with lensing capabilities. Vitessce~\cite{gehlenborg_vitessce_2021} positions linked views around an image viewer~\cite{manz_viv_2020} and includes a lens to show a predefined set of channels. However, by design, it does not focus on supporting a specific pathology process, nor does the lens support magnification, feature augmentation, comparison, or search.
\subsection{Handling and Visualization of ROI Annotations}
Different techniques exist to mark, visualize, and extract ROIs in images, but only a small subset is used in the digital pathology domain. QuPath~\cite{bankhead_qupath_2017}, an extensible software platform, allows annotating histology images with free-form selection tools and more advanced selection options such as pixel-based nearest neighbors and a magic wand that extends from the clicked pixel to neighboring areas within a threshold. Visiopharm's viewer~\cite{visopharm} similarly provides different geometric shapes to annotate images. In Orbit~\cite{stritt_orbit_2020} and Halo~\cite{indica_labs_halo_nodate}, users can define inclusion and exclusion annotations, organize them in groups, and train a classifier.
Going beyond manual annotation, Quick-Annotator~\cite{miao_quick_nodate} leverages a deep-learning approach to search for and suggest regions similar to a given example. Similarly, Ip and Varshney~\cite{ip_saliency-assisted_2011}
narrow down and cull out ROIs of high conformity and allow users to interactively identify the exceptional ROIs that merit further attention on multiple scales. We incorporate these ideas but instead apply a fast neighborhood-based histogram search running on multiple image channels in real-time to guide the user to similar areas in the viewport.
When large images are annotated on different scales it becomes challenging to navigate in an increasingly cluttered space. Some features might even be too small to be identifiable at certain zoom levels. Scalable Insets~\cite{lekschas_pattern-driven_2020} is a cue-based technique that lays out regions of interest as magnified thumbnail views and clusters them by location and type. TrailMaps~\cite{zhao_trailmap_2013} proposes an algorithm to automatically create such insets (here bookmarks) based on user interaction and previously viewed locations. They also offer timeline- and category-based groupings for a better overview and faster navigation.
We choose a more familiar design to cater to the application domain needs and conventions but enhance our approach by supporting \textit{rich} annotations that store not only geometry but also linked single-cell data and descriptive statistics. Closely related to our approach is the work by Mindek et al.~\cite{mindek_managing_2014} that proposes annotations linked to contextual information so that they remain meaningful during the analysis and possible state changes. We extend this idea with overview, search, and restoring capabilities integrated into focus+context navigation in large-scale multivariate images.
\section{Background: Multiplex Tissue Imaging}
\label{sec:process}
We analyze multiplexed tissue imaging data generated with CyCIF~\cite{lin_cyclic_2016} but our visualization approach can be applied to images acquired using other technologies such as CODEX~\cite{goltsev_deep_2018}. Images are segmented and signal intensity is measured at a single-cell level. Here we provide a brief overview of the process and data (Fig.~\ref{fig:data}).
\noindent
\textbf{Acquisition.}
\label{sec:ImagingWorkflow}
Multiplexed tissue imaging allows analysis of human tissue specimens obtained from patients for pathologic diagnosis. The approach used by the investigators, as described in our previous work~\cite{krueger_facetto_2020}, involves iterative immunofluorescence labeling with 3-4 antibodies to specific proteins followed by imaging with a high-resolution optical microscope in successive cycles. This results in 16-bit four-channel image datasets for up to 60 proteins of interest (60 images), 30k x 30k in resolution, and often greater than 100GB in size, allowing for extensive characterization and correlation of markers of interest in large tissue areas at sub-cellular resolution.
\noindent
\textbf{Processing.}
\label{sec:ImageWorkflowProcessing}
High-resolution optical microscopes have limited fields of view, so large samples are imaged using a series of individual fields which are then stitched together computationally to form a complete mosaic image using software such as ASHLAR~\cite{muhlich_jeremy_et_al_labsyspharmashlar_2021, muhlich2021stitching}. A nonrigid (B-spline) method~\cite{klein_elastix_2010,marstal_simpleelastix_2016} is applied to register microscopy histology mosaics from different imaging processes~\cite{borovec_anhir_2020}, e.g., CyCIF and H\&E.
CyCIF mosaics can be up to 50,000 pixels in each dimension and contain as many as 60 channels, each depicting a different marker. Mosaic images are then classified pixel-by-pixel to discriminate cells using, e.g., a random forest~\cite{sommer_ilastik_2011}, and individual cells are then segmented~\cite{lsp_labsyspharms3segmenter_221}.
Segmentation information is stored in 32-bit masks that define the cell ID for each pixel in a multi-channel image stack.
Next, per-cell mean intensities are extracted for the $10^{6}$ or more individual cells in a specimen. The processing steps are combined in an end-to-end processing pipeline named MCMICRO~\cite{schapiro_mcmicro_2021}. The resulting 16bit multi-channel images ($\approx$100GB), 32bit segmentation ($\approx$5GB), and high-dimensional feature data ($\approx$2GB) are then ready for interactive analysis.
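As an illustration of this quantification step (not the MCMICRO code itself), per-cell mean intensities can be derived from the segmentation mask with a few lines of Python; the column names and the use of scikit-image are assumptions of the sketch.
\begin{verbatim}
import pandas as pd
from skimage.measure import regionprops_table

def extract_single_cell_features(channels, mask, channel_names):
    """Mean intensity per cell and channel from a labeled segmentation mask.

    channels:      (C, H, W) stack of registered marker images
    mask:          (H, W) integer array; 0 = background, otherwise cell ID
    channel_names: list of C marker names
    """
    props = regionprops_table(mask, properties=("label", "centroid"))
    table = {"CellID": props["label"],
             "Y": props["centroid-0"],
             "X": props["centroid-1"]}
    for name, img in zip(channel_names, channels):
        stats = regionprops_table(mask, intensity_image=img,
                                  properties=("mean_intensity",))
        table[name] = stats["mean_intensity"]
    return pd.DataFrame(table)
\end{verbatim}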
\noindent
\textbf{Terminology and Data Characteristics.}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/data.pdf}
\vspace{-2em}
\caption{Our histological tissue image data consists of a multi-channel image stack, a segmentation mask, and extracted tabular marker intensity values (arithmetic mean) for each cell. The tabular data is linked via cell ID and X,Y position.}
\label{fig:data}
\vspace{-2em}
\end{figure}
Our datasets contain (1) a multi-channel tissue image stack with 1-60 channels in OME-TIFF format~\cite{noauthor_ome-tiff_nodate}, (2) a segmentation mask also in TIFF format, and (3) a table of extracted image features in CSV format (Fig.~\ref{fig:data}).
Each \emph{image channel} in the \textbf{multi-channel image stack} represents data from a distinct antibody stain and is stored as an image pyramid (in the OME-TIFF) for efficient multi-resolution access. These channels can result from different imaging processes (e.g., CyCIF and H\&E).
A \textbf{segmentation mask} labels individual \emph{cells} in each tissue specimen with a unique \emph{cell ID}. Similar to each image channel, the mask is stored in pyramid form.
A CSV file stores \textbf{single-cell features} (columns) for each cell (row). These features consist of extracted mean intensity values per image channel for that cell, x and y position of the cell in image space, and its cell ID.
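The three artifacts are linked through the cell ID: the mask maps every pixel to a cell, and the cell ID indexes the feature table. A minimal sketch of this lookup is given below; the file and column names are illustrative.
\begin{verbatim}
import pandas as pd
import tifffile

features = pd.read_csv("features.csv").set_index("CellID")  # illustrative names
mask = tifffile.imread("segmentation.ome.tif")

def cell_features_at(x, y):
    """Return the single-cell feature row under an image coordinate."""
    cell_id = int(mask[y, x])   # mask pixels store the cell ID, 0 = background
    if cell_id == 0:
        return None
    return features.loc[cell_id]
\end{verbatim}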
\section{Domain Goals and Tasks}
\label{sec:goalsAndTasks}
\begin{figure*}[t]
\centering
\includegraphics[width=0.97\linewidth]{figures/workflow_enhanced7.pdf}
\vspace{-1em}
\caption{The pathological workflow
starts with exploratory navigation in the image (T1). ROIs are magnified, measured, and analyzed (T2) by switching and combining image channels (T3) and investigating single-cell marker statistics (T4). Identified regions often appear in patterns across the image. Finding such similar regions (T5) can ease manual search. ROIs are then annotated (T6). These steps build an iterative process where annotations are refined, and further areas are explored. The ROIs are stored or exported to discuss with colleagues or for examination.}
\vspace{-1em}
\label{fig:workflow}
\end{figure*}
This project is rooted in a collaboration with physicians and researchers in the Department of
Pathology and the Laboratory of Systems Pharmacology (LSP) at Harvard Medical School. Four experts in the domain of digital histopathology participated in the project.
The team consists of two pathologists, two computational biologists, and four computer scientists. The \textbf{overall goal} of our collaborators is to characterize the features of tumors including cell types \& states, their interactions, and their morphological organization in the tumor microenvironment.
\noindent\textbf{Pathologists} are physicians who diagnose diseases by analyzing samples acquired from patients. Anatomic pathologists specialize in the gross and microscopic examination of tissue specimens. They characterize cell and tissue morphology using light microscopy, and molecular features using immunohistochemistry and immunofluorescence.
The pathologists involved in this project engage in research and have expertise in imaging, computational biology, and defining the role of diverse cell states in shaping and regulating the tumor microenvironment.
\noindent\textbf{Computational and Cell Biologists} complement expertise in biomedical science with skills in technical fields including mathematics, computer science, and physics. Multiplexed immunofluorescence experiments involve collection of primary imaging data which is used for a wide variety of complex computational tasks including image registration and segmentation, and extraction of numerical feature data, as well as downstream analyses of cell states, spatial statistics, and other phenotypes. Biologists interpret aspects of cell morphology and marker expression, but pathologists complement these analyses with greater depth of experience with human tissue morphology and disease states.
\noindent\textbf{Visualization Experts.} By contrast, the computer scientists provide expertise in
visualization and visual data analysis. They work in close collaboration with the aforementioned investigators to provide novel analytics prototypes that perform a variety of analysis tasks and can be integrated into research studies and laboratory IT infrastructure. To understand domain goals, the visualization experts in this study participated in weekly meetings focused on image processing, biomedical topics, and on iterative goal-and-task analyses for the proposed approach. The collaboration with the LSP started in 2018 with Facetto~\cite{krueger_facetto_2020}.
\smallskip
\noindent In Fall 2020, this team began working together
to develop advanced tools that cater to visual exploration, inspection, and annotation.
We followed the design
study methodology by Sedlmair et al.~\cite{sedlmair_design_2012}.
This methodology describes a setting in which visualization experts analyze a domain-specific problem, design a visualization approach to solving it, validate their design, and reflect on lessons learned.
\subsection{Tasks and Challenges}
In weekly sessions,
we identified a workflow (Fig.~\ref{fig:workflow}) of consecutive tasks \textbf{(T1-T6)} leading from image exploration and close-up inspection of regions to annotation and extraction of patterns.
\noindent\textbf{T1. Explore Multimodal, Highplexed Image and Feature Data in Combined Setting:}
A pivotal task is rapid navigation and
visualization of multi-channel images. Pathologists normally operate by moving slides physically on a microscope stage and switching between view magnification levels. They depend on a seamless visual experience to diagnose diseases or conditions. \textit{Challenge:} Image analysis must not only provide seamless pan \& zoom but also allow switching between channels of different image modalities.
Existing solutions do not scale beyond 4 to 5 channels. They also lack on-demand rendering, blending of multiple channels, and ways to highlight and recall ROIs.
\noindent\textbf{T2. Close-up ROI Analysis:}
Once a region of interest is found in the tissue specimen, experts focus and zoom in on the area for close-up inspection and measurement, without losing the spatial context of e.g., a tumor region's surrounding immune cells. \textit{Challenge:} In addition to interactive rendering of different resolution levels in a combined space, experts need to focus and measure without losing proportions and larger context. Panning and zooming between overview+detail and individual marker channels requires a large amount of mental effort as either context or details are lost~\cite{lekschas_pattern-driven_2020}.
\noindent\textbf{T3. Regional Comparison of Image Markers:} For a region in focus, cell biologists need to relate and compare between different marker expressions (e.g., DNA, CD45, Keratin) of different image modalities (CyCIF~\cite{lin_highly_nodate, lin_cyclic_2016}, H\&E~\cite{titford_long_2005}, CODEX~\cite{goltsev_deep_2018}, mxIF~\cite{gerdes_highly_2013}, etc.), encoded in individual image channels. \textit{Challenge:}
Whole-slide switching between channels can lead to loss of focus and change blindness because the visible morphological structures differ between channels.
\noindent\textbf{T4. Relate to Spatially Referenced Single-Cell Expressions:}
Besides looking at the raw image, experts analyze extracted single-cell marker values and their spatial statistics in (A) image and (B) high-dimensional feature space. Of special interest are cell density in the tissue, counts and spatial arrangements of cell types, and distributions of marker intensities. For each of these descriptive statistics, it is important to relate regional phenomena to statistics of the whole image. \textit{Challenge:} Providing complex spatially referenced information
in proximity while dealing with a dense cellular image space and catering to highly specific domain conventions.
\noindent\textbf{T5. Find Similar Regions:}
Analyzing a whole slide image is time-consuming. Often the cancer micro-environment consists of repetitive patterns of cell-cell interactions and morphological structures across channels that pathologists annotate and compare to each other. \textit{Challenge:} Finding and guiding users to such structures in an interactive fashion on different spatial scales and across image dimensions.
\noindent\textbf{T6. ROI Annotation and Summaries:}
A common pathology task concerns the manual delineation of tumor mass and other structures on the digitized tissue slide, known as region annotation. These annotated regions need to be extracted, semantically grouped, and summarized in a structured way for collaboration and examination.
\textit{Challenge:} The annotation process must be integrated seamlessly with the analysis so that experts can extract, group, and refine patterns along the way.
\section{Approach}
We used the tasks of Section 4 to guide the design and implementation of Scope2Screen, playing the translator role put forth in the design study methodology by Sedlmair et al.~\cite{sedlmair_design_2012}.
Fig.~\ref{fig:teaser} offers an overview of the user interface. After importing the data, analysts can explore the image (Fig.~\ref{fig:teaser}) by activating a set of different channels (A) with distinct color configurations and adjustable intensity ranges. The selections are then rendered in a combined view (B) for interactive panning and zooming \textbf{(T1)}. Using a virtual lens (C), users can focus, magnify, and measure regions of interest for close-up analysis \textbf{(T2)}.
By toggling image channel combinations inside and outside of the lenses, one can regionally compare different combinations of marker expressions \textbf{(T3)}. Other lens filters link to underlying single-cell data, offering descriptive statistics about marker distributions and cell counts and types \textbf{(T4)}. Using a search-by-example approach, the tool guides users to regions with similar patterns as those in scope \textbf{(T5)}. To save and extract a region, analysts can take snapshots that save the ROI together with relevant notes (Fig.~\ref{fig:teaser} D), current settings, and interior single-cell data for the region \textbf{(T6)}. Annotated areas can be filtered and exported to share with collaborators and to recall for further examination.
In the following sections, we introduce the corresponding techniques and features \textbf{(F1-F6)} and discuss design decisions to enable these tasks.
\begin{figure*}[t]
\centering
\includegraphics[width=0.90\linewidth]{figures/features_all.pdf}
\vspace{-0.8em}
\caption{Top: Settings for channel analysis: (A) Single channel option, out of three in the context. (B) Multi-channel lens. (C) Split-screen lens enabling juxtaposed comparison of the same area with different multi-channel settings (here CyCIF-DNA and H\&E-RGB). Bottom: Feature augmentation: (D) Single-cell histograms for detailed vertical comparison of selected cell marker distribution (channel-based rendering); (E) Radial single-cell plot for a compact summary of cell marker distribution; (F) Segmentation, cell types and counts showing classification results.}
\label{fig:featureAugmentation}
\vspace{-1em}
\end{figure*}
\begin{figure}[t]
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/lensing_mag_normal.png}
\caption{Magnification}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/lensing_mag_fisheye.png}
\caption{Fisheye}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/lensing_mag_plateau.png}
\caption{Plateau}
\label{fig:sfig2}
\end{subfigure}
\vspace{-1em}
\caption{Magnification options: (A) normal magnifier; (B) fisheye, introducing distortion with an interpolated spherical shape; (C) plateau with 75\% preserved resolution and 25\% compressed interpolation. High-resolution image quality within the zoom area is achieved by accessing image data from more detailed layers in the image pyramid.}
\label{fig:magnifiers}
\vspace{-2.0em}
\end{figure}
\subsection{Scalable Image Exploration (F1)}
\label{sec:imageExploration}
We designed Scope2Screen for interactive exploration and multi-scale visualization~\textbf{(T1)} of resection specimens
(Sec.~\ref{sec:process}). To make the viewer scalable for high-resolution data, we decided to leverage image pyramids~\cite{in_e_1984} to load only sections of the image for a given viewport and zoom level.
To enable flexible exploration of image channels and to visualize data in the viewport, the viewer operates in two multi-resolution rendering modes: \textbf{channel-based} and \textbf{cell-based}. Channel-based rendering maps intensity values of selected image channels to color. It then computes a mixed color value for each pixel in the viewport.
Cell-based rendering leverages a layered segmentation mask that indexes each pixel to a cell. This way, each cell can be colored individually to visually express selections, cell types, etc. Both rendering modes operate at interactive rates, allowing users to pan \& zoom, select regions, and color and mix channels in real-time. These rendering modes cater to our experts' needs to analyze at both the tissue and the single-cell level.
Sec.~\ref{sec:implementation} gives details on the implementation of our system.
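To illustrate the channel-based mode, the sketch below color-mixes one pyramid tile on the CPU; the actual system performs this step during rendering, and the assumed Zarr layout (one array per pyramid level, channels-first, 1024-pixel chunks) is a simplification made for the sketch.
\begin{verbatim}
import numpy as np
import zarr

def render_tile(store_path, level, ty, tx, channels, colors, ranges,
                tile=1024):
    """Additively color-mix the selected channels of one pyramid tile.

    channels: indices of active image channels
    colors:   per-channel RGB colors in [0, 1]
    ranges:   per-channel (min, max) intensity windows
    """
    pyramid = zarr.open(store_path, mode="r")
    rgb = None
    for ch, color, (lo, hi) in zip(channels, colors, ranges):
        data = pyramid[str(level)][ch,
                                   ty * tile:(ty + 1) * tile,
                                   tx * tile:(tx + 1) * tile]
        if rgb is None:
            rgb = np.zeros(data.shape + (3,), dtype=np.float32)
        norm = np.clip((data.astype(np.float32) - lo) / (hi - lo), 0, 1)
        rgb += norm[..., None] * np.asarray(color, dtype=np.float32)
    return np.clip(rgb, 0, 1)
\end{verbatim}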
\subsection{Lensing Features for Multivariate Image Data}
\label{sec:Lensing}
Conveniently, the principal technology of the microscope, the lens, is highly adaptable. To enable close-up analysis \textbf{(T2)} of ROIs and to connect the optical and digital experiences for users, we introduced a digital lens designed to imitate the familiar experience of inspecting through an eyepiece.
To support the requirements of our collaborators, we equip the lens with features (Fig.~\ref{fig:featureAugmentation},\ref{fig:magnifiers}), ranging from magnification and channel filters \textbf{(F3)} to descriptive single-cell statistics \textbf{(F4)}.
\subsubsection{Magnification and Measuring (F2)}
Our users rely on being able to toggle quickly between high and low magnification powers as part of an established workflow for considering a region of interest up-close, as a localized arrangement of cells, and in-context, as part of a larger tissue sample \textbf{(T2)}. Within the virtual viewer, this zoom-level interchange can be challenging to control. Constraining magnification to the lens's boundary (Fig.~\ref{fig:magnifiers}) while maintaining the contextual overview then becomes a convenient strategy for handling simultaneous focal analysis.
Because the magnifying lens (Fig.~\ref{fig:magnifiers} A), when active, occludes part of the image space, we experimented with a common spatial manipulation to create a faux-spherical representation: the fisheye (B). However, distortion is problematic for experts who make evaluations based on morphology, leading us to introduce a hybrid plateau model (C) that maintains the original composition within the central area of the lens using a standard zoom and only compresses the periphery to transition seamlessly into the context without occlusion.
The scale of an ROI (in microns) is closely tied to magnification interpretability. Users consider area-based standards for clinical validation as part of their inspection methodology. Visible axes around the lens allow for a quick understanding of scale (Fig.~\ref{fig:teaser} C). Additionally, this embedded conversion capability between digital and physical units is a useful tool for extended functionalities that emulate related user tasks (e.g., cell prevalence counting and density analysis).
\subsubsection{Channel Analysis (F3)}
\label{sec:ChannelAnaysis}
To address regional comparison of channels \textbf{(T3)} that represent data from the same modality (e.g., CyCIF), other modalities (e.g., H\&E~\cite{titford_long_2005}), and across different planes (e.g., slide sectioning), we iterated over different designs and finally settled on the following features.
The first lens feature allows users to quickly isolate each selected channel individually for improved views of a distinct channel (Fig.~\ref{fig:featureAugmentation} A) in focus while keeping the multi-channel setting in the context.
The second feature combines multiple channels in the scope (Fig.~\ref{fig:featureAugmentation} B, Fig. \ref{fig:multimodal_lung}) while the context can keep a distinct setting (Fig.~\ref{fig:teaser}). We specifically designed this setting for semantically dependent channels such as RGB images for H\&E staining. It also addresses our experts' needs to analyze spatial relations of a set of independent channels, for example, from specific immune markers in a region of interest, while keeping globally a different set of more general cancer and stromal markers. Addressing early feedback from our experts, we added the capability to equip the lens with multiple sets of such channel combinations (Fig.~\ref{fig:multimodal_lung}) in advance or add them during analysis. These sets can be quickly toggled during exploration to investigate different biological questions, e.g., focusing on immune reactions, or tissue architecture.
To overcome occlusion, we chose to offer two solutions leveraging temporal and spatial separation. Firstly, we introduce adjustable interpolation controls allowing users to \textit{blend} seamlessly from the overlaying lens-image to the underlying global channel combination. This transition helps to visually align and keep track of often very different structures in different channel combinations.
To further reduce change blindness, a split-screen lens juxtaposes a second lens instance in proximity, which displays (copies) the occluded part in the original global channel setting (Fig.~\ref{fig:featureAugmentation} C). This allows for side-by-side comparison.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/histoCombinedSmall.pdf}
\vspace{-1.50em}
\caption{HistoSearch allows users to find regions similar to those covered by the lens, taking into account activated channels. Top: HistoSearch is applied at different scales to find mucosal regions. The search works in two settings, for the current viewport (computation time $\approx$ 1 second for Full HD) and for the whole image in the highest resolution. Bottom: The spatial histogram similarity search consists of four steps (see Sec.~\ref{sec:similarRegions} for details).
}
\label{fig:histoSearchResults}
\vspace{-2.0em}
\end{figure*}
\subsubsection{Feature Augmentation (F4)}
\label{sec:featureAugmentation}
Our multiplex image data (Sec.~\ref{sec:process}) often comprises up to 60 channels. While three channels can guarantee non-overlapping (RGB) colors, adding more channels makes visual decoding mentally challenging for analysts and increasingly inaccurate. Additionally, color encodings of quantitative data are often hard to gauge~\cite{mackinlay_automating_1986}. Instead, to enable quantitative analysis of selected regions,
we chose to augment the image space with descriptive statistics of the extracted single-cell data~\textbf{(T4)} using more abstract visual encodings (Fig.~\ref{fig:featureAugmentation} D-F). We developed three task-tailored lens settings showing marker distributions, density reports, and cell types and counts. With every update of the lens position on screen, the back-end queries the in-memory CSV table (Sec.~\ref{sec:process}) for cells in the lens's area and returns cell Id values along with the requested statistics, which are then processed and rendered into different charts.
To speed this up, we execute spatial range queries with a ball-tree index structure~\cite{dolatshah_ball-tree_2015}. This yields a run time complexity of $\mathcal{O}(n\log{}n)$.
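As an illustration, a minimal sketch of such a spatial range query is shown below; it assumes the in-memory table exposes X/Y centroid coordinates per cell and uses scikit-learn's BallTree as one possible implementation. Function names are hypothetical, and the production back-end differs in detail.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import BallTree

def build_index(centroids):
    # centroids: (n_cells, 2) array of X/Y positions; built once per dataset.
    return BallTree(centroids)

def cells_in_lens(tree, cx, cy, radius):
    # Indices of cells whose centroids fall inside the circular lens.
    return tree.query_radius(np.array([[cx, cy]]), r=radius)[0]

# usage: tree = build_index(centroids)
#        ids  = cells_in_lens(tree, 512.0, 768.0, 100.0)
\end{verbatim}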
\noindent\textbf{Single-Cell Histograms.} To analyze marker distribution in a selected region, we compute binned histograms of the single-cell aggregates (see CSV table, Sec.~\ref{sec:process}) for selected channels. We present these histograms (Fig.~\ref{fig:featureAugmentation} D) in proximity to the focus area for quick look-ups. We decided to arrange the histograms in a vertical layout to ease comparison. According to our domain collaborators, absolute comparison of individual markers is statistically not meaningful as the signal-to-noise ratio changes per cycle and stain in the imaging process (Sec.~\ref{sec:process}). Instead, they are interested in a relative comparison of the distributions. We use a $\log_{2}$ transformation to make these skewed marker distributions comparable, followed by a cut-off at the 1st and 99th percentiles to remove outliers. Fig.~\ref{fig:teaser} (C) shows the lens operating in this setting. To further analyze where cells lie in that spectrum, we chose to offer a brush functionality. The user can filter a range (min-max) in the histogram, which highlights cells in the lens matching the updated single-cell marker values.
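The preprocessing described above can be sketched as follows (a minimal NumPy version; the pseudo-count added before the $\log_{2}$ transform and the number of bins are assumptions):
\begin{verbatim}
import numpy as np

def marker_histogram(values, n_bins=20):
    # Log2-transform marker intensities, trim 1st/99th percentile outliers,
    # and bin the remainder for display next to the lens.
    logged = np.log2(values + 1.0)            # +1 pseudo-count (assumption)
    lo, hi = np.percentile(logged, [1, 99])   # outlier cut-off
    clipped = logged[(logged >= lo) & (logged <= hi)]
    return np.histogram(clipped, bins=n_bins, range=(lo, hi))
\end{verbatim}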
\noindent\textbf{Radial Chart.}
In addition to histograms, we provide an overview of all markers by arranging their mean values in a radial layout in proximity to the lens (Fig.~\ref{fig:featureAugmentation} E). This decision allows for a more compact representation of the multivariate single-cell data.
When we first showed this to our collaborators, they missed a reference to compare the region of interest to global image statistics. In a second design iteration, we thus encoded arithmetic means for the whole tissue. The histograms show these whole-slide references as orange bins (background) and the radial plot encodes the information as a polyline.
\noindent\textbf{Cell Types and Counts.}
Our collaborators want to validate results of cell-type classification and clustering~\cite{liu_comparison_2019,campanella_clinical-grade_2019} and set them into spatial context to expressions in the image channels.
We color-code cell boundaries by detected cell types, using the cell-based rendering mode. This mode utilizes the segmentation mask to retrieve which pixel is linked to which cellId (Sec.~\ref{sec:imageExploration}). The colored boundaries (Fig.~\ref{fig:featureAugmentation}, right) allow the data to be displayed at its spatial image position while the channel colors remain visible. As classification often strongly depends on the expressions in a few channels, users can pick matching channel colors and check if the pattern spatially aligns with the cell types.
To ease quantification in the localized region, we also provide an ordered list of cell types and counts near the lens. Counts are encoded as bars that dynamically adapt while hovering over the data. Users can switch between two orderings: Locked cell type positions allow users to observe increasing or decreasing presence of specific types; ranking by count is preferred for spotting the most prominent cell types in the focus region.
\subsection{Finding Similar Regions (F5)}
\label{sec:similarRegions}
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\linewidth]{figures/lensing_dotting.png}
\vspace{-0.5em}
\caption{The rich snapshot and annotation process. (A) During close-up analysis, the user focuses on an ROI and takes a snapshot. (B) The snapshot is annotated with title and description. (C) The Dotter panel links snapshots to the image space (left). Lens-settings such as channel combination and colors are preserved. (D) Annotated regions can be \textit{reactivated} as lenses to explore further or fine-tune.}
\vspace{-1em}
\label{fig:annotaton}
\end{figure*}
Once identified, a region of interest often serves as a reference point to find similar areas within the image \textbf{(T5)}. Examples of such repetitive patterns are tumor-immune boundaries or germinal centers where mature B cells proliferate, differentiate, and mutate their antibody genes.
To find similar regions, we chose to provide a method operating directly on the image to align as closely as possible with visual perception (see Fig.~\ref{fig:histoSearchResults}).
We consider regions similar if they have a similar intensity distribution. To compare a region to all possible areas in an image, we employ a sliding-window strategy that compares histograms of regional marker intensity distributions across the image. To trigger the search, the HistoSearch lens can be placed over the pattern of interest in the image. HistoSearch is equipped with a slider to set a contour threshold, allowing for fine-tuning of what is considered similar.
The applied integral histogram method~\cite{porikli_integral_2005,perreault_median_2007} makes it possible to employ even an exhaustive search process in real-time. We adapt and extend a Python implementation~\cite{noauthor_sliding_nodate}. Our method works in four steps (Fig.~\ref{fig:histoSearchResults}, 1-4):
\smallskip
\noindent\textbf{Step 1:} A box- or circle-shaped region (the lens area) is extracted, and a histogram of its greyscale values is computed (Fig.~\ref{fig:histoSearchResults}, Step 1).
\smallskip
\noindent\textbf{Step 2:} For each pixel in the whole channel image, a histogram of the greyscale values surrounding (lens radius) the pixel is computed. Semantically, this means that we take into account spatial neighborhood information and not simply compare pixel by pixel (Fig.~\ref{fig:histoSearchResults}, Step 2).
\smallskip
\noindent\textbf{Step 3:} The local histogram for the region surrounding each pixel in the image is then compared to the lens histogram using the Chi-square distance (see Eq.~\ref{eq:chi}) for two histograms $X$ and $Y$ with $N$ bins each, averaged across all channels $C$. To avoid repetitively executing the full histogram computation, we apply Porikli's integral histogram~\cite{porikli_integral_2005}, which recursively propagates histogram bins of previously visited image areas using values from neighboring data points. This leads to a similarity map with a similarity value for each pixel in the image (Fig.~\ref{fig:histoSearchResults}, Step 3).
\vspace{-0.5em}
\begin{equation}
\label{eq:chi}
\frac{1}{C}\sum_{c=1}^{C}\sum_{i=1}^{N}\frac{(x_i-y_i)^2}{x_i+y_i}
\end{equation}
\noindent\textbf{Step 4:} We use marching squares~\cite{perreault_median_2007} to detect contours in the similarity map and extract these contours as polygons that we render on screen (Fig.~\ref{fig:histoSearchResults}, Step 4).
\smallskip
\noindent Steps 2 and 3 can be computed for multiple channels. To avoid losing information, we compute each channel's similarity map separately and then aggregate the maps into a combined similarity map.
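A simplified reference implementation of Steps 1--3 for a single channel is sketched below; it replaces the integral histogram with one box filter per bin, which is less efficient but yields the same local histograms. Function and parameter names are ours, not the system's, and the lens is assumed to be given as a Boolean mask.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def similarity_map(channel, lens_mask, n_bins=32, radius=50):
    # Chi-square distance between the lens histogram and the local histogram
    # around every pixel (smaller values = more similar).
    bins = np.linspace(channel.min(), channel.max(), n_bins + 1)

    # Step 1: histogram of greyscale values under the lens.
    lens_counts, _ = np.histogram(channel[lens_mask], bins=bins)
    lens_hist = lens_counts / max(lens_counts.sum(), 1)

    # Step 2: local histograms, one box filter per bin over a lens-sized window.
    idx = np.digitize(channel, bins[1:-1])
    local = np.stack([uniform_filter((idx == b).astype(np.float32),
                                     size=2 * radius + 1)
                      for b in range(n_bins)], axis=-1)
    local /= local.sum(axis=-1, keepdims=True) + 1e-12

    # Step 3: chi-square distance to the lens histogram (single channel, C = 1).
    return np.sum((local - lens_hist) ** 2 / (local + lens_hist + 1e-12),
                  axis=-1)
\end{verbatim}
For multiple channels, one such map per channel would be computed and averaged; contours of the aggregated map (Step 4) can then be extracted, e.g., with a marching-squares routine.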
Similar to our multi-resolution rendering strategy (Sec.~\ref{sec:imageExploration}), we execute the histogram search algorithm on the fly on the respective TIFF-file layer (Sec.~\ref{sec:process}) that aligns with the current zoom level. Fully zoomed out, the aggregation level is higher; while zoomed in, we process the highest resolution but for a smaller image area. The approach can also be employed in full resolution to the whole image (Sec.~\ref{sec:ROIsummaries}).
\subsection{Descriptive ROI Summaries (F6)}
\label{sec:ROIsummaries}
When using a light microscope, a pathologist often `dots' whole slides with a pen to indicate ROIs for later examination. We introduced a digital ``dotter'' to replicate this approach with extended support for annotation \textbf{(T6)} during exploration (Fig.~\ref{fig:annotaton}). The lens functions like a camera lens that can take snapshots whenever an ROI is in focus.
Our collaborators currently assemble image annotations into data stories for examination, teaching, and outreach of their research~\cite{rashid_interpretative_2020}. The link to analysis results is often lost, as annotations must be manually recreated in the tools used.
To maintain analytic provenance, we developed a novel \textit{rich snapshot} method that not only saves the image area under the lens but also stores all associated data: a list of active image channels in focus and in the peripheral context, channel colors and range settings, the linked single-cell data in scope, and textual annotations such as a title and a more detailed description that pathologists and cell biologists can add.
These rich snapshots are available as thumbnails in the Dotter panel (Fig.~\ref{fig:annotaton} C) and are interactively linked to overlays within the viewport. Inside the panel, the user can save, delete and load ROIs from a database. Names and descriptions can be edited and referenced as tags for filtering results. It is important to our users to be able to quickly recall and restore the snapshots for further analysis and fine-tuning, but also to return to ROIs in their original context. Thus, by clicking on thumbnails, the viewer navigates back to the coordinates and zoom level of the overlay. By clicking on the overlay's marker icon, we fully restore the lens, along with its global view setting, i.e., active channels, range and color mappings for the context (Fig.~\ref{fig:annotaton} D). This workflow facilitates hand-offs between colleagues who benefit from shared evaluation.
To extend the search for an ROI in the Dotter panel, users can use HistoSearch (Sec.~\ref{sec:similarRegions}) to find similar regions in the whole image space (rather than only the viewport, as during interactive analysis). We render these annotations as image overlays, but as soon as the user picks up a region, we restore the lens, and the user can alter the region as needed.
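For illustration, a rich snapshot could be serialized along the following lines; the field names and values below are hypothetical and do not reflect the actual database schema:
\begin{verbatim}
rich_snapshot = {
    "title": "PD-L1 rich region",
    "description": "Budding region with PD-L1 positive tumor and immune cells.",
    "viewport": {"x": 13210.0, "y": 9050.0, "zoom": 4},
    "lens": {"shape": "circle", "radius_microns": 250.0},
    "focus_channels": [                 # channel set rendered inside the lens
        {"name": "CD8a", "color": "#00ff00", "range": [0.05, 0.80]}],
    "context_channels": [               # global setting restored on reload
        {"name": "DNA", "color": "#0000ff", "range": [0.10, 0.90]}],
    "cell_ids": [10231, 10240, 10377],  # linked single-cell data in scope
}
\end{verbatim}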
\section{Architecture and Implementation}
\label{sec:implementation}
Scope2Screen is an open-source web-application (available here:~\cite{scope2screen}) with a back-end Python server built on Flask and a Javascript frontend. The server's RESTful API allows clients to retrieve image and feature data and to steer analytics, and it is easily extensible with new methods and corresponding API endpoints. The frontend is built using Bootstrap, D3.js, and OpenSeadragon (OSD)~\cite{noauthor_openseadragon_nodate}, a web-based zoomable image viewer, which we extend significantly.
We developed a lensing library that supports a subset of the interactive features of Scope2Screen as a generic plugin~\cite{jared_lensing} for OSD. The library relies on a hidden viewer that provides access to both in-view and out-of-view image data to support controlled rendering within the lens, along with other complementary features. Our application utilizes and extends the lensing library's logic with additional features (Sec.~\ref{sec:Lensing}).
Scope2Screen also builds on Facetto~\cite{krueger_facetto_2020} but makes improvements to the architecture. Instead of preprocessing OME-TIFF~\cite{noauthor_ome-tiff_nodate} image stacks to DeepZoom~\cite{microsoft_silverlight_2021}, we now read chunks (cropped 2D arrays of the respective layer in the image channel/mask) on-the-fly from layered OME-TIFFs to render them in the viewport at multiple resolution levels, depending on the current zoom setting. We rely on Zarr~\cite{alistair_miles_zarr-developerszarr-python_2020}, a format for the storage and handling of chunked, compressed, N-dimensional arrays. The client-side rendering is hardware accelerated. It relies on WebGL~\cite{noauthor_webgl_nodate} shaders from Minerva~\cite{hoffer_minerva_2020,rashid_interpretative_2020} and supports Facetto's channel and cell-based rendering modes (Sec.~\ref{sec:imageExploration}) for both the lens (focus) and the whole viewport (context). To access and filter the linked single-cell feature (CSV) data more dynamically and at scale, we moved data processing and ball-tree~\cite{dolatshah_ball-tree_2015} indexed querying (Section~\ref{sec:featureAugmentation}) to the back-end so that the client only loads requested pieces (in lens or viewport), aligning with our multi-resolution rendering strategy.
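A minimal sketch of this chunked access pattern is shown below, assuming a pyramidal OME-TIFF whose levels are exposed as Zarr datasets and a channel-first axis order; the file path, level, and indices are placeholders:
\begin{verbatim}
import tifffile
import zarr

store = tifffile.imread("specimen.ome.tif", aszarr=True)  # no full load
pyramid = zarr.open(store, mode="r")                      # levels "0", "1", ...

# Read one tile of one channel at the pyramid level matching the zoom setting.
level, channel = "2", 5
tile = pyramid[level][channel, 2048:4096, 2048:4096]
\end{verbatim}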
\section{Use Cases}
\label{sec:useCases}
We applied Scope2Screen to study two cancer datasets that we acquired from sections of lung and colon cancer using CyCIF~\cite{lin_cyclic_2016}.
Immediately adjacent sections were H\&E stained~\cite{titford_long_2005} and used to evaluate tissue morphology. We carried out the analysis together with our collaborators over Zoom using a Pair-Analytics approach~\cite{arias-hernandez_pair_2011}; we steered the tool guided by the domain collaborators. This method is advantageous because it pairs Subject Matter Experts (SMEs), who have domain expertise, with Visual Analytics Experts (VAEs), who have the technical expertise to operate VA tools but lack contextual knowledge.
\subsection{Use Case 1: Colorectal Cancer}
\label{sec:UseCase1}
Tumors are complex ecosystems of numerous cell types arranged into recurrent 3D structures. However, the patterning of specific tumor and immune cell-states is poorly understood due to the difficulty of mapping high-dimensional data onto large tissue sections. In use case 1, two pathologists and one cell biologist analyzed a human colorectal carcinoma (CRC1) from the Human Tumor Atlas Network (HTAN) (PMID 32302568) to explore tumor and immune cell interactions.
\noindent\textbf{Data:}
CRC1 is a human colorectal adenocarcinoma that was serially sectioned at 5\,$\mu$m intervals into 106 sections. 24-marker CyCIF was performed on every 5th section. Every adjacent section was stained with H\&E. CyCIF and H\&E images were registered and stitched, and single-cell segmentation and fluorescence intensity quantification were performed using MCMICRO~\cite{schapiro_mcmicro_2021}. Cell types were defined by expression of lineage- \& state-specific markers. Here, we analyze one CyCIF and an adjacent H\&E image. The data included 40 CyCIF channels and 3 (RGB) H\&E channels encompassing 1.28 million cells in a field 26,139 x 27,120 pixels (8,495 x 8,814 microns) in size (87.66 GB).
\noindent\textbf{Analysis: }
We began the analysis with a low magnification overview of the CyCIF images using DAPI (blue), keratin (white), and aSMA (red) channels in the whole viewport. In combination, these channels illuminated pathologically-relevant structures of the tissue, including the morphology of nuclei (DAPI), abnormal epithelial cells (keratin), and muscular layers (aSMA) (Fig.~\ref{fig:useCase1} A, B). A second channel combination defined immune populations including CD4+ helper T cells (red), CD8+ cytotoxic T cells (green), and CD20+ B cells (white) to analyze immune interactions with tumor and adjacent normal regions (Fig.~\ref{fig:useCase1} C).
For each marker, we defined an upper and lower color mapping range using the channel range sliders (Fig.~\ref{fig:teaser} A). The low-magnification view revealed a small region of tumor budding cells ($\le$ 1mm$^2$), a phenomenon in which infiltrating single tumor cells or small clusters of cells ($\le$ 4) appear to ``bud'' from the tumor-stromal interface at the invasive margin, correlating with poor clinical outcomes (Fig.~\ref{fig:useCase1} D). We used the standard lens magnifier to focus analysis on the budding region while maintaining spatial context.
To further explore spatial patterns of marker expression, we activated the ``single-cell radial chart''. It provides a dynamic display of the mean single-cell expression levels of all CyCIF markers within the lens and the global mean of the markers across the entire image for comparison. This enabled the experts to see that tumor and immune cells in several regions, including the budding region, were positive for PD-L1, a protein that suppresses the activity of cytotoxic CD8 T cells, which is often clinically targeted by
antibody therapies. To capture images and associated single-cell data of these ROIs for subsequent review of immune interactions and tumor features, we took snapshots and annotated the areas 'PD-L1 rich region' (Fig.~\ref{fig:useCase1} E) for later analysis.
We next used the split-screen lens (Fig.~\ref{fig:featureAugmentation} C) to view H\&E and CyCIF images side-by-side. Using this tool, pathologists validated the alignment of H\&E and CyCIF channels acquired from adjacent sections to compare histologic and molecular features. They identified areas in the H\&E images with marked chronic inflammation (lymphocytes and macrophages) in the peri-tumoral stroma for further evaluation. Direct comparison of the H\&E and CyCIF data in these tissue regions using the split-screen lens allowed the pathologists to characterize lymphocyte aggregates with predominantly cytotoxic CD8+ T cells in peri-tumoral stroma (Fig.~\ref{fig:useCase1} E), as well as clusters of CD20+ B cells and CD4+ helper T cells forming `tertiary lymphoid structures' (TLS). Direct comparison of H\&E and CyCIF data in this manner allowed for targeted molecular characterization of immune populations such as lymphocytes, which is not possible with H\&E alone (Fig.~\ref{fig:useCase1} F). To compare the marker intensity value distributions more precisely, we switched to the `lens histogram' (Fig.~\ref{fig:featureAugmentation} D) and compared the results with the `cell-type' lens within each region to assess marker expression with the results of the automated cell type classifier.
Based on review of the images, the pathologists used the Dotter’s `snapshot' function to annotate three immunologically distinct regions (described above), three tumor regions with distinctive histomorphology (glandular, solid, and mucinous regions), and adjacent normal colonic mucosa. Using the `similarity search' on the normal mucosal region (Fig.~\ref{fig:histoSearchResults}, top), we identified areas with similar histologic features, confirmed by pathologist inspection, embedded within the tumor mass which were not readily apparent on initial low-magnification review of the H\&E images, validating the utility of the search method. We saved these findings to our database for subsequent retrieval.
\begin{figure}[t]
\centering
\includegraphics[width=0.90\linewidth]{figures/lensing_sardana_annos.png}
\vspace{-0.75em}
\caption{
Use Case 1. Rich snapshots capture ROIs and important insights: (A) Broad population; (B) Healthy tissue; (C) Immune cell rich; (D) Tumor budding; (E) Tumor suppression; (F) H\&E - lymphocyte.}
\label{fig:useCase1}
\vspace{-2em}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/lensing_channel_combos.png}
\vspace{-0.5em}
\caption{Use Case 2, Multi-channel lenses in 4 settings: (A) `Basic Cell Typing' shows tissue composition - stromal, immune, and cancer cells. The dense structure is a result of tumor growth in the lung; (B) `Immune Cell Typing' distinguishes between immune and non-immune cells for a broad overview of immune regions (orange); (C) `Lymphocytes and TLS' combines CD channels to reveal distinct immune types, e.g., cytotoxic T cells attacking the cancer; (D) `Lymphocyte Phenotyping' for finer distinction, showing proliferating B-cells for antibody production (in blue).}
\label{fig:multimodal_lung}
\vspace{-1.5em}
\end{figure*}
\noindent\textbf{Feedback:}
Although we worked with our collaborators in weekly sessions over several months, we received additional comments on design improvements and future features within the 2-hour analysis session.
They found the normal lens magnification the most useful. The fisheye lens was problematic for review of tissue images due to distortion of cell morphology and tissue architecture, which may complicate the interpretation of important pathologic features such as nuclear shape. This aligns with earlier feedback motivating the design of the plateau lens, which they confirmed as a helpful improvement.
One pathologist also suggested equipping the lens with different predefined filter settings that can be quickly toggled for different analysis tasks.
While both the radial chart and histograms were well received, they asked to highlight under- or over-represented marker values more prominently. The histograms were described as easier to read and should be extended to represent non-active image channels as well. They also asked for a) a heatmap that color codes marker correlation, and b) the ability to define channel combinations inside the lens on-the-fly, not only in pre-configuration.
A recurring piece of feedback during the session was to add more descriptive statistics such as bin sizes in the histogram, ratios of cell populations, and ways to precisely define intensity ranges by value.
The proximity of single-cell data to the inspected area and the H\&E channel comparison, which is limited or non-existent in other tools, were described as particularly useful. The ability to simultaneously inspect and mark different regions of the tumor was perceived as a promising area for further development.
\subsection{Use Case 2: Lung Adenocarcinoma}
Lung adenocarcinoma is a common subset of lung cancer that, in later stages, often does not respond to chemotherapy. Immunotherapy has shown great promise, but patient response varies according to each tumor’s specific microenvironment. To assess why only certain patients respond, one needs an in-depth understanding of the tumor and immune landscape. Together with two biologists, we applied Scope2Screen to explore the immune reactions in lymphocyte structures.
\noindent\textbf{Data: }Using t-CyCIF, we have prepared a dataset of human lung adenocarcinoma wedge resection. The image data consists of 38 channels, each with a resolution of 39,843 x 29,227 pixels (12,949 x 9,499 microns); it is 118.43 GB in size and contains 534,677 segmented cells. The data is enriched with cell-type classification from a deep immune profiling of the tumor microenvironment.
\noindent\textbf{Analysis:}
We began the analysis with the whole tissue in the viewport and activated the DNA channel for a general overview of the tissue architecture. Addressing the feedback from Use Case 1, we equipped the lens with a set of four predefined channel combinations, designed to investigate the tumor's immune response from a high level down to fine detail.
The first channel combination (Fig.~\ref{fig:multimodal_lung} A) consisted of basic cell markers and was designed to assess overall tissue composition. Lung cells are marked by TTF-1,
which enabled the detection of the aberrant tumor morphology in the upper region of the sample. The lens magnifier allowed the biologists to see that the tumor is heavily infiltrated with immune cells (marked by CD45 and Vimentin).
We then switched to a second channel combination for immune cell markers (Fig.~\ref{fig:multimodal_lung} B) to further investigate the types of immune cells infiltrating the tumor. This revealed dense aggregates of immune cells, mainly composed of a core of B cells, surrounded by T cells, which are known tumor-associated tertiary lymphoid structures (TLS). Lens magnification allowed our experts to quickly detect, mark and further characterize the large number (10-12) of TLSs in this lung specimen. By switching to a third channel composition customized for TLS markers (Fig.~\ref{fig:multimodal_lung} C), we inspected each TLS more closely (see Sec.~\ref{sec:UseCase1} for a closer description).
We used the Dotter’s snapshot capability and placed landmarks as we reviewed the TLS's. We measured the TLS size using the ruler functionality (1,200 microns on average). We activated the cell type lens equipped with immune and cancer type classifications which showed B cells in the center (Fig.~\ref{fig:featureAugmentation} F, Fig.~\ref{fig:multimodal_lung} C)
surrounded by epithelial lung cells. The cell biologists also used this setting to check the dataset segmentation, which they rated as very accurate. While focusing on one of these TLS, we activated HistoSearch to find areas with similar marker patterns, successfully identifying the other TLS areas. Surprisingly, in some TLS the HistoSearch detected the structure’s outer rim but not its center. Using the magnifier to zoom into these regions, we noted a distinctly higher level of B cell marker CD20. We focused the lens on this area and magnified for close-up inspection. To better understand the marker distribution in this region, we switched the lens function to the radial distribution chart (Fig.~\ref{fig:featureAugmentation} E), revealing a high Ki67 and PCNA concentration, markers for cell proliferation and growth.
Subsequently, we switched to the fourth channel group to compare lymphocyte markers across a TLS (Fig.~\ref{fig:multimodal_lung} D). We activated the histograms and moved the lens outside of a TLS towards the lung epithelial cells, recognizing TTF-1 positive tumor cells. Some of these were MHC-II positive.
By shrinking the lens scope to single-cell size, we compared the TTF-1 cells, finding that cells with MHC-II expression are non-proliferative. This suggests that transient MHC-II expression is coupled with entry into a non-proliferative state.
\noindent\textbf{Feedback:}
While users were able to make meaningful observations throughout the evaluation, their commentary indicated two categories of improvements.
Channel views and statistical views could be merged so users do not have to rely on rapid memory recall required for toggling filters. For example, simultaneous access to the single-cell marker intensity distribution would have been helpful to monitor non-selected markers for early exploration of broad immune cell lineages.
Acknowledging that our tools do not support a wide range of unanticipated biological questions, the absence of tools for spatial analysis stalled certain leading lines of inquiry. Measuring the degree of cell proliferation around immune cell clusters would have been an important next step for our users, who recommended that we prioritize the introduction of supporting algorithms and visualizations for spatial correlation.
\vspace{-0.5em}
\section{Hands-On User Study and Questionnaire}
\label{sec:handson}
To further evaluate the intuitiveness and usefulness of Scope2Screen's interactive interfaces and lens features, we conducted hands-on user studies in which three of our four domain collaborators (who were involved in goal specification, iterative testing and design, and use-cases), two biologists and one pathologist, steered the tool, gave think-aloud feedback, and additionally filled out a questionnaire.
\noindent\textbf{Study Setup.} The sessions were conducted via Zoom with one subject matter expert (SME) at a time and two members from our visualization team. Scope2Screen was installed on a remote machine to which the experts had access. We used a ``think aloud'' approach~\cite{carpendale2008evaluating} as an opportunity for users to share feedback from their own interaction with the application.
We recorded video and audio. The users worked sequentially through a list of Scope2Screen's features before freely exploring with the features practiced, following regions of biological interest.
Sessions
took between 70 and 90 minutes.
They then filled out a questionnaire rating the usefulness of individual features with a 5-point Likert scale (strongly agree to strongly disagree).
\noindent\textbf{Study and Feedback.} At first, the experts were asked to make use of global viewer features such as toggling image channels, panning \& zooming, and setting color ranges. Overall, they rated
the application interface as intuitive and accessible (strongly agree, agree, and neutral). All agreed that focus+context
improved exploration over a pure O+D approach and strongly agreed that the lens magnifier is helpful for observing local regions.
Aligning with use-case feedback, the experts preferred the normal magnifier and found the plateau lens helpful to overcome distortion of the fish-eye but less intuitive. The pathologist preferred the circular shape for exploration and the rectangular lens for snapshots.
All experts agreed that the snapshot capabilities were extremely helpful, but in some situations the overlay can occlude areas underneath, hence functionality to show or hide them is needed. The dotting panel and annotation capabilities were used intermittently during evaluation and were rated helpful (2x strongly agree, 1x agree). The selection and combination of distinct channels (Section~\ref{sec:ChannelAnaysis}) within the lens achieved the same rating. One expert mentioned that these were especially beneficial for checking biases in assigned cell types, and two experts suggested providing means to store channel combinations and color settings of the lens for repetitive analysis tasks. The most liked of these features was the split-screen lens to relate, e.g., CyCIF to H\&E data, and to validate alignment (3x strongly agree). The feature-augmentation lenses (Section~\ref{sec:featureAugmentation}) were also rated to be helpful (1x agree, 2x strongly agree). The linear histograms were preferred over the radial chart as they were easier to comprehend, but one expert mentioned that the radial chart provided a good relative overview of the distribution and might lead to unexpected discoveries. Another comment proposed that showing numbers, in addition to the visual encoding, would be helpful.
Using the cell type lens, one expert said that the tooltip, similar to the one provided for the histogram and radial chart, should supplement the local cell type counts with cell type counts from the whole tissue in order to aid comparison and give context. It should further be possible to hide cell types. Lastly, users agreed that similarity search results (Section~\ref{sec:similarRegions}) align with a visual similarity impression, with slightly more conservative feedback (2x agree, 1x neutral), mostly due to the nature of the sensitivity threshold, which requires repetitive fine-tuning depending on the underlying image area. This could be improved with parameter optimization. One expert commented that ``what is considered similar'' depends on the morphological or molecular questions in focus, and hence the lens could be equipped with additional similarity methods. Overall, the experts found the tool ``easy to learn and use'' (2x agree, 1x strongly agree).
\vspace{-0.5em}
\section{Conclusion and Future Work}
We present a design study aimed at supporting single-cell research into the composition, molecular states, and phenotypes of normal and diseased tissues, a rapidly growing area of basic and translational biomedical research, as well as supporting pathologists studying human tissues for the purpose of diagnosis and disease management. Our Scope2Screen system supports fluid interactivity based on familiar microscopy metaphors while enabling multivariate exploration of quantitative and image data using a lens.
Throughout the design process and expert feedback, we identified three key areas in which current work could be most usefully extended.
\noindent\textbf{Combining Vision and Statistics: }
According to our experts, visual analysis needs tools for presenting numerical data alongside image data. In many cases, generation of the numerical data is not the problem: computational biologists are familiar with scripting and statistical tools for deriving single-cell data from images (via segmentation) and linking it to external sources (e.g., genomic data), but the information is most useful alongside the original images. Pathologists in particular need to combine deep knowledge of tissue architecture with quantitative data. However, most visual tools do not offer sufficient flexibility, and scripts or notebooks (e.g., Jupyter) lack interactive visual exploration. We intend to extend Scope2Screen to support scripted statistical queries integrated with lensing.
\noindent\textbf{High-dimensional Features on the Horizon: }
Recent developments in digital imaging, such as the ability to measure the spatial distribution of RNA expression, will result in data with thousands, not dozens, of dimensions. Mapping such data into the image space while extracting relevant information will require dimensionality reduction techniques and suitable visual representations of found features so that only the most relevant or explanatory data are presented. Very high-resolution 3D microscopy of tissues is also being integrated with the high-plex 2D data described here, and this will require appropriate visual metaphors for moving between resolutions and data modalities.
\noindent\textbf{Scalability across Datasets: }
Our use cases demonstrate uses for Scope2Screen in the analysis of a single dataset stored locally. However, digital histology is expected to transition to the analysis of multiple datasets accessed via the cloud. While Scope2Screen scales to a large set of images, it does not yet support analysis and annotation of image collections or work interactively with Docker-based image analysis pipelines.
Adding this functionality will close the gap from data exploration and analysis to generation of machine-assisted interpretative data reports for research and clinical applications including interactive publication via tools such as Minerva~\cite{rashid_interpretative_2020,hoffer_minerva_2020}.
\section{Acknowledgements}
This work is supported by the Ludwig
Center at Harvard Medical School, and by NIH/NCI grants NCI U2C-CA233262, NCI U2C-CA233280 and NCI U54-CA225088.
\section{Outside (Competing) Interests}
PKS is a member of the SAB or Board of Directors of Glencoe Software, Applied Biomath, and RareCyte Inc. and has equity in these companies; PKS is also a member of the SAB of NanoString. Sorger declares that none of these relationships have influenced the content of this manuscript. Other authors declare no competing interests.
\bibliographystyle{abbrv-doi}
\section{Introduction}
Let $M$ be a compact closed 3-manifold and $G \rightarrow P
\rightarrow M$ a trivial principal $G$ bundle over $M$. We denote by
$\mathcal{A}_{3}$ the space of connections on $P$ and $\mathcal{G}_{3}$ the
gauge group. Let $Z_{k,G}[M]$ denote the Chern-Simons path integral at level $k$
and for group $G$ (taken as above),
\be
Z_{k,G}[M]= \frac{1}{\mathrm{Vol}(\mathcal{G}_{3})} \int_{\mathcal{A}_{3}}\,
\exp{\left( I(\mathscr{A})\right)} \nonumber
\ee
where the Chern-Simons action, for $\mathscr{A} \in \mathcal{A}_{3}$, is
\be
I(\mathscr{A})= i\frac{k}{4\pi}\int_{M} \Tr{\left(\mathscr{A} \wedge
d\mathscr{A} +
\frac{2}{3}\mathscr{A} \wedge \mathscr{A} \wedge \mathscr{A}
\right)} \nonumber
\ee
and $\Tr$ is normalized so that under large gauge transformations
(recall that $\pi_{0}(\mathcal{G}_{3}) = \mathbb{Z}$)
$I(\mathscr{A}^{\mathscr{G}})=I(\mathscr{A}) + 2\pi i n$, $\mathscr{G}
\in \mathcal{G}_{3}$ and $n \in \mathbb{Z}$ so that the exponential is
invariant.
One can also consider the inclusion of `Wilson lines' in the path
integral. A Wilson line is a
combination of a knot $K$ and an irreducible representation $R$ of the group
$G$, and is defined to be the holonomy
\be
W_{R}(K) = \Tr_{R}{P\exp{\oint_{K}\mathscr{A}}} \nonumber
\ee
The relevant path integral is
\be
Z_{k,G}[M,(K_{i},R_{i})] = \frac{1}{\mathrm{Vol}(\mathcal{G}_{3})}
\int_{\mathcal{A}_{3}} \exp{\left(I(\mathscr{A}) \right)}\,
\prod_{i=1} W_{R_{i}}(K_{i}) \nonumber
\ee
Let $\Sigma$ be a smooth genus $g$ curve and $\omega$ a unit
volume K\"{a}hler form on $\Sigma$. Let $\mathfrak{M}$ be the
moduli space of flat $G$ connections on $\Sigma$. The moduli
space has a natural symplectic, in fact K\"{a}hler, structure and we
let $\mathfrak{L}
\rightarrow \mathfrak{M}$ be the fundamental line bundle
whose first Chern class agrees with the natural symplectic
form on $\mathfrak{M}$ (for $G=SU(n)$ this will be the
determinant line bundle). E. Witten \cite{Witten-CS} has
shown that the quantum Hilbert space of
states of Chern-Simons theory on $\Sigma$ is
$\mathrm{H}^{0}(\mathfrak{M},
\mathfrak{L}^{k})$. Quite generally the
Chern-Simons invariant, $Z_{k,G}[M]$, for a 3-manifold $M$, by Heegaard
splitting, will be the inner product of two vectors in the Hilbert
space. However, the dimension of the Hilbert space is the Chern-Simons
invariant of the 3-manifold
$\Sigma \times S^{1}$.
If one includes Wilson lines, then the
Hilbert space is $\mathrm{H}^{0}(\mathfrak{M}, \mathfrak{L}^{k})$ where now
$\mathfrak{M}$ is the moduli space of parabolic bundles on
$\Sigma$, which is still naturally K\"{a}hler, and $\mathfrak{L}$ is
the associated fundamental line bundle.
The
Hirzebruch-Riemann-Roch theorem tells us that
\be
\sum_{q=0} (-1)^{q}\mathrm{dim}\,\mathrm{H}^{q}(\mathfrak{M},
\mathfrak{L}^{k})
= \int_{\mathfrak{M}} \mathrm{Todd}(\mathfrak{M}) \wedge
\mathrm{Ch}(\mathfrak{L}^{k} ) \nonumber
\ee
If the canonical bundle of $\mathfrak{M}$ is negative, as will be
the case in the examples we discuss, then the higher cohomology groups
are trivial by Kodaira vanishing and we have
\be
\mathrm{dim}\,\mathrm{H}^{0}(\mathfrak{M}, \mathfrak{L}^{k})
= \int_{\mathfrak{M}} \mathrm{Todd}(\mathfrak{M}) \wedge
\mathrm{Ch}(\mathfrak{L}^{k} ) \nonumber
\ee
E. Verlinde \cite{Verlinde} provides us with a
concrete formula for the dimension of $\mathrm{H}^{0}(\mathfrak{M},
\mathfrak{L}^{k})$. However, as we have seen, Chern-Simons theory
provides us with another formulation of
E. Verlinde's dimension count, namely
\be
Z_{k,G}[\Sigma \times S^{1},(K_{i},R_{i})] = \int_{\mathfrak{M}}
\mathrm{Todd}(\mathfrak{M}) \wedge
\mathrm{Ch}(\mathfrak{L}^{k} ) \nonumber
\ee
The moduli spaces, which thus far have been generically denoted by
$\mathfrak{M}$, are singular at the points where there are reducible
connections. Nevertheless, suitably interpreted, the index theorem
still yields a topological expression for the invariants $Z_{k,G}[\Sigma \times
S^{1},(K_{i},R_{i})]$. This raises a
\vspace{0.3cm}
{\bf Question:}
Are there other
3-manifolds whose Chern-Simons invariants, or parts of them, can be expressed as
intersection pairings on an appropriate moduli space $\mathfrak{M}$?
\\
The question has been partially answered in the affirmative by Beasley and
Witten \cite{Beasley-Witten}. From now on set $M \equiv M_{(g,p)}$ to
denote the Seifert manifold
that is presented as a degree $-p$, $U(1)$ bundle over a Riemann
surface $\Sigma$ of genus $g$.
Using non-Abelian localization for
the Seifert 3-manifolds $M$ Beasley and Witten are
able to show that (equation (5.176) in their paper with $n=-p$ due to a
different choice of orientation and noting that there is a slightly
different normalization of $\Theta$),
\begin{proposition}\label{BW}{{\rm (Beasley-Witten) Let $M$ be as above. The
portion of the Chern-Simons invariant which is localized on the
smooth part, $\mathfrak{M}$, of the moduli space of Yang-Mills
connections is
\be
\left. Z_{k, \, G}[M]\right|_{\mathfrak{M}}
=
\frac{1}{|\Gamma|}\exp{\left(i\frac{\pi}{2} \eta_{0}\right)}\,
\int_{\mathfrak{M}} \mathrm{Todd}(\mathfrak{M})\wedge
\mathrm{Ch}(\mathfrak{L}^{k} ) \wedge \exp{\left(
- i \frac{p}{2\pi}(k+c_{\lg})
\Theta(\mathfrak{M}) \right) } \nonumber
\ee
In this formula, $\Theta(\mathfrak{M})$ is a certain degree 4
cohomology class on $\mathfrak{M}$, $c_{\lg}$ and $\Gamma$ are the dual Coxeter
number and centre of $G$ respectively while $\eta_{0}$ is the framing
of $M$. Notice that this formula is quite analogous to the
Hirzebruch-Riemann-Roch formula and reduces
to it when $p=0$.
}}
\end{proposition}
Let $\mathcal{A}$ be the space of connections on the trivial $G$ bundle on
$\Sigma$ and $\mathcal{G}$ the associated group of gauge
transformations. I will show that
\begin{proposition}\label{onS}{{\rm The Chern-Simons path integral on
$M$, $Z_{k, \, G}[M]$, is equal to a path integral on the space of
connections over $\Sigma$, namely
\be
Z_{k, \, G}[M] = \exp{\left(i\frac{\pi}{2}
\eta_{0}\right)}\frac{1}{\mathrm{Vol}(\mathcal{G})}
\int_{\mathcal{A}} \mathrm{Todd}(\mathcal{A})\wedge
\mathrm{Ch}(\mathfrak{L}^{k} ) \wedge \exp{\left(
- i \frac{p}{2\pi}(k+c_{\lg})
\Theta(\mathcal{A}) \right) } \nonumber
\ee
In this formula all the classes have the same form as those in
Proposition \ref{BW}.
}}
\end{proposition}
As there is a striking resemblance between the formulae presented in
Propositions \ref{BW} and \ref{onS} it is worthwhile, at this point, to
make some remarks.
\begin{remark}{{\rm The formula in Proposition \ref{onS} is an exact
expression for $Z_{k, \, G}[M]$ unlike that in Proposition
\ref{BW} which is a part of the answer.
}}
\end{remark}
\begin{remark}{{\rm It is quite straightforward to evaluate the path
integral in Proposition \ref{onS}. This has been done for
general $\Sigma$ in
\cite{Blau-Thompson-CS} by Abelianization and for $\Sigma=S^{2}$
in \cite{Beasley-Witten} by non-Abelian localization. The
Reshetikhin-Turaev-Witten
invariants for Seifert 3-manifolds may be
obtained by surgery prescriptions
as in \cite{Lawrence-Rozansky} and \cite{Marinio} and agree with the
path integral results. However, I leave the path integral
`un-integrated' as it brings the geometry to the fore.
}}
\end{remark}
\begin{remark}{{\rm I am not at all implying that Proposition \ref{BW}
follows easily from Proposition \ref{onS}, though one would reasonably
expect that it does. Presumably an application of non-Abelian
localization to the path integral appearing in Proposition
\ref{onS} is what is required.
}}
\end{remark}
\begin{remark}{{\rm Proposition \ref{onS} has already been
established, somewhat indirectly in \cite{AOSV} and more
directly in \cite{Blau-Thompson-CS}, though the language is somewhat
different and the classes were not identified in either of these
works and so the geometric significance of the right hand side
of Proposition \ref{onS} was not appreciated.
}}
\end{remark}
I will prove a rather more general result involving knots in the fibre
direction which are located at points $x_{i}\in \Sigma$ on the base of
the fibration and which run along the fibre. The point of view adopted here
is that to the parabolic points $x_{i}$ one `attaches' a co-adjoint
orbit $M_{R_{i}}$ defined by the representation $R_{i}$.
\begin{proposition}\label{onSKnot}{{\rm The Chern-Simons path integral on
$M$, with knots in the fibre direction at the points $x_{i} \in
\Sigma$ and associated representations $R_{i}$, $Z_{k, \, G}[M,
(x_{i},R_{i})]$, is given by
\bea
Z_{k,G}[M,(x_{i},R_{i})]
& = &
\exp{\left(i\frac{\pi}{2}\eta_{0}\right)}\frac{1}{\mathrm{Vol}(\mathcal{G})}\,
\int_{\mathcal{A}\times \prod_{i} M_{R_{i}}} \widehat{A}(\mathcal{A}\times
\prod_{i}M_{R_{i}}) \, \wedge \nonumber\\
& &\;\;\;\;\; .\, \exp{\left( (k+c_{\lg})\Omega(\mathcal{A})
+\sum_{i}\omega(M_{R_{i}})
-i \frac{p}{2\pi}(k+c_{\lg}) \Theta(\mathcal{A}) \right)
}\nonumber
\eea
where $\widehat{A}$ is the $A$ hat genus.
}}
\end{proposition}
{\bf Acknowledgements} This paper is an outcome of a line of research
on abelianization in low dimensions that Matthias Blau and I have been
pursuing over many years. It is a pleasure to thank him for this very
enjoyable collaboration. It is also a pleasure to thank
M.S. Narasimhan and T. Ramadas for helping me to formulate the results
presented here. Finally a thank you to the organizers
of the Bonn workshop ``Chern-Simons Gauge Theory: 20 Years After'' and
to Chris Beasley for discussions on the relationship between his work
and mine.
\section{Background and Strategy}
While it is clear from the description of the Hilbert space of
Chern-Simons theory that the moduli spaces are built in, what is not
at all obvious is that Chern-Simons should, in general, have knowledge of the
cohomology ring of the moduli space. This paragraph is intended to
motivate such a connection. Recall that Witten \cite{Witten-YM2Rev}
had established that the topological field theory analogue of
Donaldson theory on a curve can be mapped to Yang-Mills theory on the
curve. Since the topological field theory is designed to probe the
cohomology ring of the moduli space we learn that Yang-Mills theory
will do just that. The
action of Yang-Mills theory is
\be
S(F_{A},\psi,\phi) = \frac{1}{4\pi^{2}}\int_{\Sigma} \Tr{\left(i\phi
F_{A} + \frac{1}{2} \psi \wedge \psi \right)} +
\frac{\eps}{8\pi^{2}} \int_{\Sigma} \omega
\,\Tr{\phi^{2}} \label{YM2Action}
\ee
where $A$ is a connection on $P$, $\phi \in \Gamma(\Sigma, \ad \, P)$
and $\psi$ is interpreted as a one form on the space
$\mathcal{A}/\mathcal{G}$ (in terms of the universal
bundle construction described in the
next section the elements of the action are all cohomology classes on
$\mathcal{A}/\mathcal{G}$).
In \cite{Beasley-Witten} Beasley and Witten recall that every
3-manifold has a contact structure $\kappa$. Here we use the $U(1)$
bundle structure and the associated nowhere vanishing vector
field. (This has the advantage that it corresponds to the obvious
structure on $\Sigma \times S^{1}$.) One may, therefore,
decompose connections as
\be
\mathscr{A}= A + \kappa\,\frac{1}{2\pi} \phi , \;\;\; \iota_{\kappa}A=0 \nonumber
\ee
The Chern-Simons action, on choice of the contact structure, is
\be
I(A,\phi) = i\frac{k}{4\pi^{2}}\int_{M}\left(\pi \kappa \Tr{A \iota_{\kappa} dA}
+\kappa \Tr{\phi F_{A}} +
\kappa d\kappa \, \frac{1}{4\pi}\Tr{\phi^{2}} \right) \label{CS-Action}
\ee
and this has some resemblance to (\ref{YM2Action}). The first
term in (\ref{CS-Action}) is the `time' derivative (in the direction
of the vector field dual to $\kappa$). There are some differences and
perhaps the most glaring is that the term proportional to $\psi \wedge
\psi$ is absent. However, a variation on the theme is now
available. The structure group of a
general 3-manifold is $SO(3)$; however, on fixing the bundle structure
the compatible structure group reduces to $SO(2)$ (the structure group
of $\Sigma$). There is a minimal, so-called $N=1$, supersymmetric version
of Chern-Simons theory which is not obviously topological. However,
when the structure group is reduced to $SO(2)$ one may `twist' the
$N=1$ supersymmetric Chern-Simons theory to obtain a manifestly topological
theory (up to choice of contact structure). The twisted version has
action (\ref{CS-Action}) augmented with
\be
\frac{k}{8\pi^{2}}\int_{M} \kappa\, \Tr{ \psi \wedge \psi}, \;\;\;
\iota_{\kappa}\psi =0 \nonumber
\ee
The $N=1$ action becomes
\be
I(A, \phi, \psi) = \frac{k}{4\pi^{2}}\int_{M}\left(i\pi \kappa
\Tr{A\wedge \iota_{\kappa}dA } + \kappa \Tr{\left(i \phi F_{A} +
\frac{1}{2} \psi \wedge \psi \right)} + \frac{i}{4\pi} \kappa
d\kappa \Tr{\phi^{2}}\right) \nonumber
\ee
so that now the resemblance of the two theories is rather
remarkable. The supersymmetry transformations are closely linked to
those in \cite{Beasley-Witten}, equation (3.48),
\be
QA= i\psi, \;\;\; Q\psi = - d_{A} \phi - 2 \pi \iota_{\kappa}dA \label{susy}
\ee
The fact that the fields $\psi$ are free means that the partition function
and knot invariants of the $N=1$ and the usual Chern-Simons theory agree.
One possible novelty, namely a new set of observables in the theory,
those that are 3-forms on the moduli space,
\be
\int_{M} J_{C} \wedge \Tr{\phi.\psi} \nonumber
\ee
with $J_{C}$ a de Rham current with delta function support on the
1-dimensional cycle $C \subset M$, is, unfortunately, spoilt by the
second part of the transformation of $\psi$ in (\ref{susy}).
In any case it would appear that Chern-Simons theory is some
supersymmetric quantum mechanics on the moduli spaces of
interest. Since supersymmetric quantum mechanics is related to index
theory one may reasonably expect a positive response to the question.
The strategy we will follow here is to
\begin{enumerate}
\item Identify the objects that appear in the path integral in terms
of cohomology classes on the space of connections modulo the gauge group. This
is done in Section \ref{UniversalBundle}.
\item Next we relate the classes that one obtains from the universal bundle
construction to classes associated with the tangent bundle of
$\mathcal{A}/\mathcal{G}$. These are denoted by
$\mathrm{Todd}(\mathcal{A})$ and
$\widehat{A}(\mathcal{A})$ in Section \ref{ToddAhat} for reasons
that will become apparent as we go along.
\item One can now express Witten's localization of the Yang-Mills path
integral on $\Sigma$ (Proposition
\ref{wittenloc} below) in terms of classes that come from the
universal bundle on $\mathcal{A}/\mathcal{G}$ on the one hand and
the same classes restricted to $\mathfrak{M}$ on the other.
\item Having established the relationship with the cohomology ring in
$\mathcal{A}/\mathcal{G}$ our final task is to show that the
integral over $\mathcal{A}_{3}/\mathcal{G}_{3}$ reduces to that of certain
classes on $\mathcal{A}/\mathcal{G}$ on integrating out sections
which are not $U(1)$ invariant.
\end{enumerate}
\section{ A Universal Bundle}\label{UniversalBundle}
Let $P$ be a principal $G$ bundle over some smooth manifold $X$,
$\mathcal{A}$ the space of
connections on $P$ and $\mathcal{G}$ the group of gauge
transformations (bundle automorphisms). We have the following
universal bundle construction due to
Atiyah and Singer \cite{Atiyah-Singer}. There is an action of
$\mathcal{G}$ on $P$ so that
we may form the space (this is not smooth unless one makes further
assumptions and as it stands it is a stack)
\be
\mathcal{Q} = \mathcal{A} \times_{\mathcal{G}} P \nonumber
\ee
Now $G$ operates on $\mathcal{Q}$ and
in fact $\mathcal{Q}$ is itself the total space of a principal bundle
\be
\mathcal{Q}\rightarrow \mathcal{Q}/G = \mathcal{A}/\mathcal{G}
\times X \nonumber
\ee
There is a natural connection on $\mathcal{Q}$ and from it we can
define a curvature 2-form and then Chern classes, via Chern-Weil
theory, for an associated rank $n$ vector bundle $\mathcal{E} = \mathcal{Q}
\times_{G} \mathbb{C}^{n}\rightarrow \mathcal{A}/\mathcal{G} \times
X$. Finally we restrict to some (hopefully smooth) moduli space
$\mathfrak{M} \subset \mathcal{A}/\mathcal{G}$, for which the above
construction makes sense.
\begin{remark}{{\rm A detailed description of this bundle and
its relationship to topological field theories can be found in
\cite{Birmingham-Blau-Rokowski-Thompson} chapter 5, especially
sections 5.1 and 5.3.
}}
\end{remark}
\begin{remark}{{\rm The path integral we need to deal with is the one over all
of $\mathcal{A}/\mathcal{G}$ and not some smooth finite
dimensional subspace $\mathfrak{M}$. Consequently, we will be
integrating over the stack.
}}
\end{remark}
From now on we take $X$ to be $\Sigma$. Decompose the curvature
2-form on $\mathcal{E}$ into its Kunneth components as
\be
1\otimes F_{A} + \Psi +\Phi \otimes 1 \in
\mathrm{H}^{2}(\mathcal{A}
\times \Sigma, \mathrm{Lie}\, G) \nonumber
\ee
(If $\mathfrak{M}$ is simply connected we have that $\Tr{\Psi}$,
restricted to $\mathfrak{M}$, is
cohomologically trivial, in which case $
c_{1}(\mathcal{E}) = 1 \otimes c_{1}(E) +\frac{i}{2\pi}
\Tr{\Phi}\otimes 1$
where $\Tr \Phi \in \mathrm{H}^{2}(\mathfrak{M})$.) If we fix on
$G=SU(n)$ then $c_{1}(\mathcal{E})$ vanishes and
the second Chern class decomposes as
\bea
c_{2}(\mathcal{E}) & = &
\frac{1}{4\pi^{2}}\Tr{\left( \Phi \otimes
F_{A} +\frac{1}{2} \Psi \wedge \Psi \right) } + \frac{1}{4\pi^{2}}\Tr{
\Phi \wedge \Psi}+
\frac{1}{8\pi^{2}}\Tr{\Phi^{2}} \otimes 1 \nonumber\\
&= & \Omega(\mathcal{E}) + \gamma(\mathcal{E}) +
\Theta(\mathcal{E}) \label{c2kunneth}
\eea
We have a differential $Q$ on the space $\mathcal{A} \times \Sigma$
which satisfies
\be
QA = \Psi, \;\;\; Q\Psi = d_{A}\Phi, \;\;\; Q\Phi =0 , \;\;\; Q^{2} =
\mathcal{L}_{\Phi} \nonumber
\ee
so that $\Psi$ is a (basis) one form on $\mathcal{A}$ and
\be
(Q-d_{A})(1\otimes F_{A} + \Psi +\Phi \otimes 1) = 0 .\nonumber
\ee
Hence associated to $Q$ we have equivariant cohomology on
$\mathcal{A}/\mathcal{G} \times \Sigma$ and the Chern classes
$c_{n}(\mathcal{E})$ are $Q$ closed.
Note the second Chern class appears as the
action of Yang-Mills theory (\ref{YM2Action}) if we make the
identifications $\Phi = i \phi$ and $\Psi = \psi$, and we consider
these as forms on $\mathcal{A}/\mathcal{G} \times \Sigma$
\be
S(F_{A},\psi,\phi) = \pi_{*} \left(\Omega(\mathcal{E}) - \eps
\Theta(\mathcal{E})\otimes \omega \right)\simeq
\Omega(\mathcal{A})-\eps \Theta(\mathcal{A}) \label{c2isym}
\ee
where $\pi:
\mathcal{A}/\mathcal{G} \times \Sigma
\rightarrow \mathcal{A}/\mathcal{G}$ is projection
onto the first factor.
\begin{remark}{{\rm The identification (\ref{c2isym}) shows us that the
Yang-Mills action is a $Q$ closed form (of mixed degree). It is
also quite clearly not $Q$ exact.
}}
\end{remark}
In general, for a rank $r$ vector
bundle $V$, $c_{2}(\mathrm{End}\, V) = 2rc_{2}(V) - (r-1)c_{1}(V)^{2}$, and
consequently $c_{2}(\mathrm{End}\, \mathcal{E})
=2rc_{2}(\mathcal{E})$. The main interest here will be on the trace
free part $\mathrm{End}_{0}\, \mathcal{E}$ (which in any case has the
same second Chern class as $\mathrm{End}\, \mathcal{E}$). Note that
the classes on $\mathcal{E}$ are in the `fundamental' representation
while they are taken to be in the `adjoint' representation on
$\mathrm{End}_{0}\,\mathcal{E}$.
In the rank 2 case, Newstead \cite{Newstead}
writes the second Chern class of his universal bundle as
\be
c_{2}(\mathrm{End}\, U)=2\alpha \otimes \omega + 4 \gamma -
\beta\otimes 1
\ee
Thus to make contact with that work restrict to
$\mathfrak{M}\subset \mathcal{A}/\mathcal{G}$ and set,
\be
\alpha = \frac{1}{4\pi^{2}} \int_{\Sigma}\Tr{ \Psi \wedge \Psi}, \;\;\;
\beta = -\frac{1}{2\pi^{2}}\Tr{\Phi^{2}} , \;\;\;
\gamma = \frac{1}{4\pi^{2}}\Tr{
\Phi \wedge \Psi} \nonumber
\ee
\section{The Todd and the $\widehat{A}$ Genera}\label{ToddAhat}
In order to express the Todd genus in terms of the classes arising from
the universal bundle we follow the approach of Newstead
\cite{Newstead} for determining the Pontrjagin class in the rank 2
case. Then we use an observation of Thaddeus \cite{Thaddeus} to give
the $\widehat{A}$ class in terms of the Pontrjagin roots and from
there Todd.
The tangent bundle,
$\mathrm{T}_{\mathfrak{M}}$, of $\mathfrak{M}$ is given by
\be
\mathrm{T}_{\mathfrak{M}} \simeq R^{1} \pi_{*} \mathrm{End}_{0}\,\mathcal{E}
\nonumber
\ee
where $R^{i}$ denotes the $i$-th direct image sheaf under the map $\pi:
\mathfrak{M} \times \Sigma \rightarrow \mathfrak{M}$ onto
the first factor. The Grothendieck-Riemann-Roch theorem states that
\be
\mathrm{Ch}(\mathrm{T}_{\mathfrak{M}})-
\mathrm{Ch}(\pi_{*}\mathrm{End}_{0}\, \mathcal{E}) = -
\pi_{*}\left(\mathrm{Ch}(\mathrm{End}_{0}\,
\mathcal{E}) (1-(g-1) \omega) \right) \nonumber
\ee
For the spaces that we are interested in the direct image sheaf
$R^{0}\pi_{*}\mathrm{End}_{0}\, \mathcal{E}$ is trivial so
\be
\mathrm{Ch}(\mathrm{T}_{\mathfrak{M}}) = -
\pi_{*}\left(\mathrm{Ch}(\mathrm{End}_{0}\,
\mathcal{E}) (1-(g-1) \omega) \right)\label{Groth-Riemann-Roch}
\ee
Denote the complexification of the Lie algebra of $G$ by
$\lg_{\mathbb{C}}$, the complexification of the Cartan subalgebra
by $\lt_{\mathbb{C}}$ and the space of roots by $\mathbf{k}$ then
\be
\lg_{\mathbb{C}}=\lt_{\mathbb{C}} \oplus \mathbf{k}\nonumber
\ee
We also set $\mathbf{k}_{+}$ to be the space of positive roots.
\begin{lemma}\label{Pontjagin}{\rm{The Pontrjagin class of the tangent bundle
$P(T_{\mathfrak{M}})$
is given by
\be
P(T_{\mathfrak{M}})= \mathrm{det}_{\mathbf{k}}( 1 + \mathrm{ad}\,
\Phi/2\pi)^{2g-2} = \prod_{\mathbf{k}_{+}} \left(1 -
\left(\frac{\alpha(\Phi)}{2\pi}\right)^{2}\right)^{2g-2} \nonumber
\ee
}}
\end{lemma}
{\bf Proof:} Quite generally the only classes that
contribute to $\mathrm{Ch}_{2m}(\mathrm{T}_{\mathfrak{M}})$ from
(\ref{Groth-Riemann-Roch}) are powers of $\Phi$, so we have
$
\mathrm{Ch}_{2m}(\mathrm{T}_{\mathfrak{M}})
=(g-1)\Tr_{\mathrm{Ad}}{\left(\exp{\left(i\Phi/2\pi\right)}\right)_{2m}}$,
and consequently $
\mathrm{Ch}(\mathrm{T}_{\mathfrak{M}} \oplus
\mathrm{T}_{\mathfrak{M}}^{*}) = 2(g-1)
\Tr_{\mathrm{Ad}}{\left(\exp{\left(i\Phi/2\pi\right)}\right)}$. This is the same as
having the direct sum of $2g-2$ copies of a vector bundle with curvature 2-form
$\mathrm{ad}\,\Phi$. The Chern class for one copy is $\mathrm{det}_{\lg}( 1
+i \mathrm{ad}\, \Phi/2\pi)= \mathrm{det}_{\mathbf{k}}( 1
+ i\mathrm{ad}\, \Phi/2\pi)$, the direct sum formula for Chern classes
gives us that $c(\mathrm{T}_{\mathfrak{M}}\oplus
\mathrm{T}_{\mathfrak{M}}^{*})= \prod_{\alpha \in \mathbf{k}_{+}}(1+
(\alpha(\Phi)/2\pi)^{2})^{2g-2}$. On the other hand we have that the
Pontrjagin classes are related to Chern classes by $p_{m}(E)=
(-1)^{m}c_{2m}(E\oplus E^{*})$ and this completes the proof. \qed
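As a check on conventions, take $G=SU(2)$ and write (formally)
$\Phi = \mathrm{diag}(a,-a)$ for the Cartan part of $\Phi$. The single
positive root gives $\alpha(\Phi) = 2a$, so the Pontrjagin root is
\be
-\left(\frac{\alpha(\Phi)}{2\pi}\right)^{2} = -\frac{a^{2}}{\pi^{2}}
= -\frac{1}{2\pi^{2}}\Tr{\Phi^{2}} = \beta \nonumber
\ee
in the notation of Section~\ref{UniversalBundle}, and the Lemma gives
$P(T_{\mathfrak{M}}) = (1+\beta)^{2g-2}$, consistent with the rank 2 case
treated in \cite{Newstead}.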
\begin{lemma}{\rm{The Todd class of the tangent bundle of
$\mathfrak{M}$ is
\be
\mathrm{Todd}(\mathfrak{M}) = \exp{\frac{1}{2}c_{1}(T_{\mathfrak{M}})} \,
. \, \left(
\frac{\det_{\mathbf{k}}{\sin{(\ad{\Phi}/4\pi)}}}{\det_{\mathbf{k}}{(\ad{
\Phi}/4\pi})}
\right)^{1-g} \nonumber
\ee
}}
\end{lemma}
{\bf Proof:} Thaddeus (p147 in \cite{Thaddeus}) notes that, on writing
$P = \prod_{i}(1+y_{i})$ where the $y_{i}$ are the Pontrjagin roots,
one has $\widehat{A} = \prod_{i} \left( \frac{\sqrt{y_{i}}/2}{\sinh{(\sqrt{y_{i}}/2)}} \right)$.
From Lemma \ref{Pontjagin} the roots are $-(\alpha(\Phi)/2\pi)^{2}$ and
they come with a multiplicity of $2g-2$ so that
\be
\widehat{A}(\mathfrak{M}) = \prod_{\alpha \in \mathbf{k}_{+}}
\left(\frac{\sinh{(i\alpha(\Phi)/4\pi)}}{
i\alpha(\Phi)/4\pi}\right)^{2-2g} \nonumber
\ee
and the standard relation between $\mathrm{Todd}$ and $\widehat{A}$
completes the proof. \qed
We can also make use of (\ref{Groth-Riemann-Roch}) to determine the
first Chern class of the moduli space. In this case the term
proportional to $(g-1)$ cannot contribute since, as we have seen, it
contributes to even classes. We have
\be
\mathrm{Ch}(\mathrm{End}_{0}\mathcal{E})=
-c_{2}(\mathrm{End}_{0}\mathcal{E}) + \dots = -2r c_{2}(\mathcal{E}) +
\dots \label{myGRR}
\ee
where the ellipses represent higher degree forms.
\begin{theorem}{\rm{(J-M. Drezet and M.S. Narasimhan \cite{Drezet-Narasimhan})
The
first Chern class of the tangent bundle of $\mathfrak{M}(r,d)$ the
moduli space of holomorphic vector bundles of rank $r$ and
determinant of degree $d$ is
\be
c_{1}(T\mathfrak{M}(r,d))= 2\, \mathrm{g.c.d}(r,d)\,
\Omega(\mathfrak{M}(r,d)) \nonumber
\ee
where $\Omega(\mathfrak{M}(r,d)) $ is the class of the basic line bundle.
}}
\end{theorem}
In our case $d=0$ so that
$c_{1}(T\mathfrak{M})=2r\Omega(\mathfrak{M})$ and we set, on comparing
with (\ref{myGRR}),
$\Omega(\mathfrak{M}) =
\pi_{*}c_{2}(\mathcal{E})$ that is
\be
\Omega(\mathfrak{M}) = \frac{1}{4\pi^{2}}\int_{\Sigma} \left(i
\Tr{\phi F_{A} } + \frac{1}{2}\Tr{\Psi \wedge
\Psi}\right) \label{symp2form}
\ee
which is the form one would expect on $F_{A}=0$.
\section{Intersection Pairings on Moduli Spaces}
By comparing to the second Chern class of the universal bundle we see
that, quite generally, one should think of $S(F_{A},\psi,\phi)$ as a
form on the space of connections $\mathcal{A}$. With abuse of
notation we denote those classes on $\mathcal{A}$ by the same
symbols as those on $\mathfrak{M}$,
\be
\frac{1}{\mathrm{Vol}(\mathcal{G})} \int_{\mathcal{A} \times \Omega^{0}(\Sigma, \lg)}\,
\exp{\left(S(F_{A},\psi,\phi)\right) } \equiv
\int_{\mathcal{A}/\mathcal{G}} \,
\exp{\left(\Omega(\mathcal{A})- \eps
\Theta(\mathcal{A}) \right)}
\ee
Expressing the path integral in this way hides certain things: the
fact that there is a Gaussian integration over the degree four
class, and the fact that, as the space $\mathcal{A}/\mathcal{G}$ is infinite
dimensional, one cannot simply expand the exponential out to pick off the
top form term. Indeed a detailed analysis of the path integral shows that
some care needs to be exercised in the interpretation of the right
hand side (and on the left). Nevertheless expressing the path integral
in this way is also very suggestive.
E. Witten \cite{{Witten-YM2Rev}} shows that this path integral
essentially devolves to one on the moduli space,
\begin{proposition}\label{wittenloc}{\rm{(E. Witten
\cite{{Witten-YM2Rev}}) The path
integral localises onto the moduli space of flat connections,
\be
\int_{\mathcal{A}/\mathcal{G}}
\ex{\left(\Omega(\mathcal{A})- \eps
\Theta(\mathcal{A}) \right)} = \int_{\mathfrak{M}}
\ex{\left(\Omega(\mathfrak{M})- \eps
\Theta(\mathfrak{M}) \right)}\, + \; \mathrm{terms}\;
\mathrm{non-analytic} \;\mathrm{in}\; \eps \nonumber
\ee
and the non-analytic terms vanish as $\eps \rightarrow 0$ (provided
$\mathfrak{M}$ is not singular). The
non-analytic terms arise from higher fixed points of the
action, that is, from non-flat solutions to $d_{A}*F_{A}=0$.
}}
\end{proposition}
The intersection pairings on the moduli space of flat $G$ connections on
$\Sigma$ as presented by Witten \cite{{Witten-YM2Rev}} agree with
those derived by Thaddeus \cite{Thaddeus} in the rank 2 case and
predict those for higher rank.
\section{Supersymmetric Quantum Mechanics and Passing from
Chern-Simons to Yang-Mills}
In \cite{Blau-Thompson-CS} one begins with the Chern-Simons theory,
then integrates out modes in the bundle direction, to be left with a
theory on the Riemann surface. This Abelian theory is then solved to
finally provide one with the invariant for the Seifert
manifold. However, it is also pointed out that the same result could
be obtained by considering instead Yang-Mills theory on $\Sigma$ with the
inclusion of some observables, one of which (equation (6.8) there),
written in the current notation, is
\be
j_{\lg}(\phi)^{(1-g)} = \widehat{A}(\mathcal{A}) \nonumber
\ee
This observation was critical for the present study.
In this section the approach of \cite{Blau-Thompson-CS} is followed
`half-way' to the point where we have non-Abelian Yang-Mills theory on
$\Sigma$. In this way we are able to obtain the classes on
$\mathcal{A}/\mathcal{G}$ that one must integrate. I will not repeat
the entire calculation but, rather, explain the essential ingredients
especially those which go beyond \cite{Blau-Thompson-CS}. I should
point out that, when needed, I will consider the section $\phi$ to be
momentarily constant on $\Sigma$ and this will simplify the calculation of the
determinants that we will come across presently. After the
determinants are calculated $\phi$ will be allowed to be non-constant
once more. The justification for
this simplification comes from knowing that, had we abelianized,
$\phi$ would be forced to be constant on $\Sigma$, and so we
lose nothing in making this assumption.
\subsection{Fourier Modes and a Gauge Choice}
In order to begin the calculation we note that, as there is a
non-degenerate $S^{1}$ action on $M$, we may
decompose all the sections in terms of characters of that action (a
Fourier series). We may therefore write
\bea
\mathcal{A}_{3} & = & \mathcal{A} \oplus \Omega^{0}(\Sigma, \ad{P})
\oplus_{n \neq 0} \Omega^{1}(\Sigma, L^{\otimes -np}\otimes
\ad{P})\oplus_{n \neq 0}
\Omega^{0}(\Sigma, L^{\otimes -np}\otimes \ad{P})\nonumber \\
\varphi &=& \sum_{n= -\infty}^{\infty} \varphi_{n} , \;\;
\iota_{\kappa}d \varphi_{n} =
-2\pi i n \varphi_{n}, \;\;\;\varphi_{n} \in \Omega^{*}(\Sigma, L^{\otimes -np}
\otimes \ad{P}) \nonumber
\eea
where $L$ is the line bundle that defines $M$.
Now note that there is enough gauge
symmetry to impose the condition
that the section $\phi$ is constant in the fibre direction, that is
$\iota_{\kappa}d \phi=0$, and we do this. Alternatively put, we make
the identification,
\be
\mathcal{A}_{3}/\mathcal{G}_{3} \simeq\left( \mathcal{A} \oplus
\Omega^{0}(\Sigma, \ad{P})
\oplus_{n \neq 0} \Omega^{1}(\Sigma, L^{\otimes -np}\otimes \ad{P})
\right)/\mathcal{G}
\nonumber
\ee
There is a caveat
here as the `natural' measures do not coincide since the non-constant
components of the section $\phi$, in
$\Omega^{0}(\Sigma, L^{\otimes -np} \otimes \ad{P})$ are tangent
vectors to the orbit of $\mathcal{G}_{3}$. To correct for this
mismatch one introduces the Faddeev-Popov ghost determinant,
$\Delta_{FP}(\phi)$, which is essentially the ratio of the volume of
the orbit to that of the group.
In any case the choice of gauge simplifies the path integral immensely.
\subsection{Integrating over non-trivial characters}
Our aim is to integrate out all those Fourier modes of fields such
that $n\neq0$.
As, by the gauge condition, $\phi$ has no such modes
and the integral over $\psi$ is Gaussian, we concentrate on the
integral of the $A_{n}$ for $n\neq 0$. Note that (with $A_{0}$ denoted
by $A$ again)
\be
I(A,\phi,\psi) = k S(F_{A},\psi, \phi) + \Delta I
\ee
where $S(F_{A},\psi, \phi)$ is the Yang-Mills action with $\eps = ip/2\pi$
and
\be
\Delta I = \frac{k}{4\pi^{2}}\int_{\Sigma} \sum_{n\neq 0} \Tr{\left(
A_{-n}\wedge (2\pi n +
\ad{\phi})A_{-n} + \psi_{n} \wedge\psi_{-n} \right)}
\ee
Let
\be
\exp{i\Gamma(A, \phi)} = \int \prod_{n\neq
0}dA_{n}\,d\psi_{n}\;\Delta_{\mathrm{FP}}(\phi) \,
\exp{i\Delta I}\nonumber
\ee
where the Faddeev-Popov determinant $\Delta_{\mathrm{FP}}$, as I
mentioned above, takes into account the gauge condition on $\phi$.
\begin{definition}{\rm{We will say that two gauge invariant functions
on $\mathcal{A}$ are
equivalent if their integrals over $\mathcal{A}/\mathcal{G}$ agree and
we will denote that equivalence by $\simeq$.}}
\end{definition}
\begin{proposition}\label{propdet}{\rm{The supersymmetric
quantum mechanics path integral gives, for $\phi$ valued in the Cartan
subalgebra,
\be
\exp{i\Gamma(A,\phi)} \simeq \exp{\left(i\frac{\pi}{2} \eta_{0} \right)}\,
\widehat{A}(i\phi) \wedge \exp{\left(i \frac{c_{\mathbf{g}}}{4\pi^{2}}
\int_{\Sigma} \Tr{( \phi.F_{A} + \frac{p}{4\pi} \phi^{2}\omega)}
\right)} \nonumber
\ee
where $\eta_{0}$ is the framing correction.
}}
\end{proposition}
{\bf Proof:} As one can see, the action is such that the part of the
connection $A^{\mathbf{k}}$
only enters in a Gaussian fashion and so
may easily be integrated out. The integration gives rise to a
determinant which requires regularization (a definition). This
calculation has been performed in \cite{Blau-Thompson-CS} but includes
the constant mode (see (B.23) there) and the $\phi$ there should be
rescaled to $\phi/2\pi$ to agree with the definition here. So on putting
back the constant
mode contribution in that work and changing the normalization of
$\phi$ we obtain
the (square root of the) determinant as
\be
\prod_{\alpha \in \mathbf{k}} \left(\frac{\prod_{n}(2\pi n + i
\alpha(\phi)/2\pi)}{i\alpha(\phi)/2\pi} \right)^{1-g} = \prod_{\alpha \in
\mathbf{k}_{+}}
\left(\frac{\sin^{2}{i\alpha(\phi)}/4\pi}{(i\alpha(\phi)/4\pi)^{2}}
\right)^{1-g} \equiv j_{\lg}(\phi)^{1-g}
\nonumber
\ee
together with the phase (B.31)-(B.34) in \cite{Blau-Thompson-CS}.
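Though it is not needed in what follows, it is worth recalling where the
sine comes from: formally
\be
\prod_{n \in \mathbb{Z}}\left(2\pi n + x\right) = x \prod_{n \geq 1}
\left(-4\pi^{2}n^{2}\right) \prod_{n \geq 1}\left(1 -
\frac{x^{2}}{4\pi^{2}n^{2}}\right) , \nonumber
\ee
and $x \prod_{n \geq 1}(1 - x^{2}/4\pi^{2}n^{2}) = 2 \sin{(x/2)}$, while the
$x$-independent constant is absorbed into the regularization. Setting
$x = i\alpha(\phi)/2\pi$, dividing by the zero mode $i\alpha(\phi)/2\pi$ and
pairing the roots $\alpha$ and $-\alpha$ (the combination
$\sin{(x/2)}/(x/2)$ is even in $x$) produces the right hand side above.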
The integral over those $A^{\lt}$ which are non-constant leads to a
simple overall factor in front of the path integral. The integral over
the non-constant parts of the symplectic volume gives rise to a
normalization which compensates that of the connections. The
calculation presumes $\phi$ constant (not just along the fibre) as in
the path integral over $\mathcal{A}$ it may be taken to be so.\qed
There are still two issues that we need to deal with:
\begin{enumerate}
\item Extend Proposition \ref{propdet} to general
sections $\phi \in \Gamma(\Sigma, \mathrm{ad}\, P)$.
\item Make sure that supersymmetry is preserved.
\end{enumerate}
\begin{remark}{{\rm Both issues are resolved by recalling a basic
tenet of renormalizable
field theory: upon regularising a theory it may be necessary to add
to the Lagrangian local counterterms in order to restore symmetries
broken by the choice of regularization.}}
\end{remark}
We begin with the first issue. We know that $\Gamma(\phi)$ is
formally gauge invariant,
\be
\Gamma(g^{-1}\,\phi\, g) = \Gamma(\phi), \;\;\;\; g \in \mathcal{G} \nonumber
\ee
However, the curvature 2-form $F_{A}$ that appears in the formula in
Proposition \ref{propdet} lies in the Cartan direction,
$F_{A}^{\lt}=dA^{\lt}$, and as it
stands the result is only gauge invariant under gauge transformations
in the maximal torus, $g \in \mathrm{Map}(\Sigma,T)$. The source
for this is that the regularization adopted in \cite{Blau-Thompson-CS}
was only designed to preserve invariance under the maximal torus.
It is straightforward to check that, in general, the absolute value of the
determinant is a function of $\phi^{2}$ and indeed agrees with the
function $\widehat{A}(\sqrt{\phi^{2}})$ so that this is invariant
under $\mathcal{G}$. Our difficulty, therefore,
rests with the phase and we perform a finite renormalization to put
in the complete non-Abelian curvature 2-form which re-instates $\mathcal{G}$
invariance.
This is still not quite the end of the story. The one loop correction
is not supersymmetric. Put another way, we have not maintained the
original supersymmetry (\ref{susy}) at the level of the zero modes,
which now reads
\be
QA=i\psi, \;\;\; Q\psi=-d_{A}\phi, \;\;\; Q\phi=0 , \label{susy2}
\ee
We can add another finite renormalization
\be
\exp{\frac{c_{\mathbf{g}}}{8\pi^{2}} \int_{\Sigma} \Tr{\psi \wedge
\psi} }\nonumber
\ee
to correct this.
\begin{proposition}\label{1loopexact}{\rm{The gauge invariant and supersymmetric
evaluation of the path integral along the fibres of $M$ is
\be
\exp{i\Gamma(A,\phi, \psi)} \simeq \exp{\left(i\frac{\pi}{2}\eta_{0}\right)}\,
\widehat{A}(i\phi) \wedge \exp{\left( \frac{c_{\mathbf{g}}}{4\pi^{2}}
\int_{\Sigma} \Tr{(i \phi.F_{A}+ \frac{1}{2} \psi \wedge \psi +
i \frac{p}{4\pi} \phi^{2}\omega)}
\right)} \nonumber
\ee
}}
\end{proposition}
\begin{remark}{\rm{With
the identifications that $\psi \simeq \Psi$ and $\phi
\simeq -i\Phi$ and as $c_{\lg}=r$ we have
\be
\exp{i\Gamma(A,\phi, \psi)} \simeq\exp{\left(i\frac{\pi}{2}\eta_{0}\right)}
\,\mathrm{Todd}(\mathcal{A})\, \wedge \, \exp{\left(
-i \frac{p}{2\pi}c_{\lg} \Theta(\mathcal{A}) \right) } \nonumber
\ee
}}
\end{remark}
The path integral now becomes one over objects defined on the Riemann
surface directly and we have established Proposition \ref{onS}. \qed
\section{Wilson Lines and Parabolic Points}
How is the picture that we have obtained in the previous sections
affected by the inclusion of Wilson lines? Since our manifold is an
$S^{1}$ fibration there is a special
class of knots which are located at point $x\in \Sigma$ on the base of
the fibration and which run along the fibre. To such a fibre knot we
associate
\be
W_{R}(x) = \Tr_{R}{P\exp{\left(\int_{S^{1}}\kappa \phi/2\pi\right)}} =
\Tr_{R}{\exp{\left(\phi(x)/2\pi\right)} } \nonumber
\ee
the second equality following from the condition that
$\iota_{\kappa}d\phi =0$. As the only addition to the path integral
involves functions without
dependence on the fibre, the calculation of the previous section goes
through unchanged.
A geometric way in which to add such traces is through the introduction
of co-adjoint orbits. Let $\lambda \in \lg^{*}$ (here $\lg^{*}$ is the dual of $\lg$;
we identify the two by means of an invariant inner product
$<f,\phi>\equiv \Tr{f\phi}$, $f \in \lg^{*}$, $\phi \in \lg$);
then the orbit through $\lambda$ is
\be
M_{\lambda}=\left\{ g^{-1}\lambda g;\; \forall g \in G \right\} \nonumber
\ee
while the stabilizer of $\lambda$ is
\be
G(\lambda) = \left\{ g \in G :\; g^{-1}\lambda g =\lambda\right\}\nonumber
\ee
If $\lambda$ is regular ($\det_{\mathbf{k}}{(\ad{\lambda})} \neq 0$) then
$G(\lambda)=T$ and we consider this case for now so that
$M_{\lambda}=G/G(\lambda)=G/T$.
The homogeneous space $G/T$ comes equipped with a natural $G$
invariant symplectic 2-form (the Kirillov-Kostant form)
$\Omega_{\lambda}$ given by
\be
\Omega_{\lambda}(X,Y) = <\lambda, \, [X,Y]> = \Tr{(\lambda\,[X,Y])
}\;\;\; X,\,Y\in \lg \nonumber
\ee
Kirillov tells us
that for $\lambda= \Lambda + \rho$ regular, with $\Lambda$ an element of
the weight lattice and $\rho$ the Weyl vector,
\be
\Tr_{\lambda}{(\exp{\phi/2\pi})} = j_{\lg}^{-1/2}(\phi/2\pi)\,
\int_{M_{\lambda}} \exp{\left( i\frac{1}{2\pi}<\lambda, \phi> +
\Omega_{\lambda} \right)} \nonumber
\ee
Now we see that, geometrically, we should take a product with the co-adjoint
orbits, so we consider the space $\mathcal{A}_{3} \times \prod_{i} M_{R_{i}}$,
and we have
\be
Z_{k,G}[M,(x_{i},R_{i})] = \frac{1}{\mathrm{Vol}(\mathcal{G}_{3})}
\int_{\mathcal{A}_{3}\times \prod_{i}
M_{R_{i}}} \exp{\left(I(\mathscr{A}) \right)}\, \prod_{i=1}
j_{\mathbf{g}}^{-1/2}(\phi(x_{i})/2\pi)\, \exp{\omega(M_{R_{i}})}\nonumber
\ee
where, in analogy with $\Omega(\mathcal{A})$,
\be
\omega(M_{R_{i}})= \frac{i}{2\pi}\Tr \lambda_{i}\phi(x_{i}) +
\Omega_{R_{i}}\nonumber
\ee
We have the following:
\begin{lemma}{\rm{(Lemma 8.5 \cite{BGV}) The equivariant
$\widehat{A}$-genus, $\widehat{A}_{\mathbf{g}}(X,G/T)$, of the
Riemannian manifold $G/T$ and $j_{\mathbf{g}}^{-1/2}(X)$
represent the same class in equivariant de Rham cohomology.
}}
\end{lemma}
Consequently Proposition \ref{onSKnot} is proved. \qed
\begin{remark}{{\rm C. Beasley \cite{Beasley-Knot} has computed, in
the spirit of \cite{Beasley-Witten}, the
localization formula for
$\left. Z_{k,G}[M,(x_{i},R_{i})]\right|_{\mathfrak{M}}$. This formula
agrees with that in Proposition \ref{onSKnot} when restricted to
$\mathfrak{M}$.
}}
\end{remark}
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
The technique of interference alignment has expanded what is known
about achievable rates for wireless interference channels. First
proposed by Maddah-Ali \textit{et
al.}~\cite{maddah-ali:mimo_x_channels} and then applied to wireless
interference channels by Cadambe and
Jafar~\cite{cadambe:int_align_K_user_int_channel}, interference
alignment employs a transmission strategy that compensates for the
interference channel between transmitters and receivers. At each
receiver, the interference components can then be consolidated into a
part of the channel that is orthogonal to the signal component. In
fact, the interference is isolated to half of the received signal
space, while the desired signal is located in the other half---leading
to the statement that every receiver can have ``half the cake.'' This
is a significant improvement over every receiver receiving only $1/K$
of the cake, which is the case if standard orthogonalization
techniques are used (where $K$ is the number of transmitter-receiver
pairs).
Interference alignment in an ergodic setting is studied in Nazer
\textit{et al.}~\cite{nazer:ergodic_int_align}, and provides the basis
for our analysis. Using their Gaussian achievable scheme, we delve
deeper into the associated decoding delays and consider how delays may
be reduced, although at the cost of decreased rate. Even though the
analysis in \cite{nazer:ergodic_int_align} additionally considers a
scheme for finite field channels (also similar to the method in
\cite{jeon:capacity_class_multisource_relay_networks}), we leave to
the reader the extension of our analysis to the finite field case.
Our approach for reducing delays is to consider interference alignment
where alignment may require more than one additional instance of
channel fading. In \cite{nazer:ergodic_int_align}, interference is
aligned by transmitting the same message symbol during complementary
channel realizations. In contrast, our approach will utilize multiple
channel realizations (potentially more than two), which when summed
together yield cancelled interference (and amplified signal). We call
such a set of channel matrices an \emph{alignment set}---which will be
more formally defined later. Using multiple channel realizations to
align interference has also been studied in
\cite{nazer:int_align_general_message_sets} for different cases of
receiver message requirements; however, we instead consider how to
utilize these many channel realizations to reduce the delay of
individual messages at each receiver. At first glance, it may seem
that using alignment sets of larger sizes will only increase the
delay; but if we allow alignment using alignment sets of multiple
sizes simultaneously, then we can decrease the time required for a
message symbol to be decoded.
We now give a simple example of an alignment set and show the concept
of ergodic interference alignment.
\begin{example}
\label{ex:align_set_4}
Consider a $3$-user Gaussian interference channel with channel
response given by $\vect{Y} = \mat{H} \vect{X} + \vect{Z}$, where
$\vect{X}$ denotes the transmitted symbols (with power constraint
$E[|X_k|^2] \leq P$ for each user $k = 1,2,3$), $\mat{H}$ is the
channel matrix, $\vect{Z}$ is independently and identically
distributed zero-mean unit-variance additive white Gaussian noise, and
$\vect{Y}$ gives the received symbols. Suppose the following channel
matrices occur at time steps $t_0$, $t_1$, $t_2$, and $t_3$,
respectively:
\[
\small
\begin{array}{ll}
\mat{H}^{(0)} = \left[ \begin{array}{rrr}
1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1
\end{array} \right] &
\mat{H}^{(1)} = \left[ \begin{array}{rrr}
1 & -1 & -1 \\ -1 & 1 & 1 \\ 1 & 1 & 1
\end{array} \right] \\ \\
\mat{H}^{(2)} = \left[ \begin{array}{rrr}
1 & 1 & -1 \\ -1 & 1 & 1 \\ -1 & -1 & 1
\end{array} \right] &
\mat{H}^{(3)} = \left[ \begin{array}{rrr}
1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1
\end{array} \right]
\end{array}
\mbox{.}
\]
If the same [complex] vector~$\vect{X}$ is sent at all these times,
then the sum of the non-noise terms is given by $\sum_{i=0}^3
\mat{H}^{(i)} \vect{X} = 4 [X_1, X_2, X_3]^T$ because $\sum_{i=0}^3
\mat{H}^{(i)} = 4 \mat{I}$. By utilizing all four channel
realizations together, the signals (diagonal entries) are amplified,
while the interference terms (off-diagonal entries) are cancelled, so
this collection of matrices is an alignment set. As long as a
receiver knows when an alignment set occurs, then in order to decode
his own message, he does not need to know the channel fades to the
other receivers.
\end{example}
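As a quick numerical sanity check of the example (an illustrative sketch
in Python with NumPy; the variable names are ours), one can verify that
the four channel matrices sum to $4\mat{I}$:
\begin{verbatim}
import numpy as np

H0 = np.array([[ 1, -1,  1], [ 1,  1, -1], [-1,  1,  1]])
H1 = np.array([[ 1, -1, -1], [-1,  1,  1], [ 1,  1,  1]])
H2 = np.array([[ 1,  1, -1], [-1,  1,  1], [-1, -1,  1]])
H3 = np.array([[ 1,  1,  1], [ 1,  1, -1], [ 1, -1,  1]])

# Signals add coherently and interference cancels: the sum is 4*I.
assert np.array_equal(H0 + H1 + H2 + H3, 4 * np.eye(3))
\end{verbatim}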
Inferring from \cite{cadambe:multiple_access_outerbounds} or
\cite{cadambe:parallel_int_channels_not_always_separable}, the astute
reader may notice that in the example, the sum capacity when sending
across each channel matrix separately is actually greater than the
alignment rate---a capacity of $4 \log (1 + 3P)$ for separate coding,
compared to a rate of $3 \log (1 + 4 P)$ by using the indicated
interference alignment scheme. However, when the number of
transmitters (and receivers) exceeds the number of alignment channel
realizations, then the rate benefits of using alignment sets start to
become evident. Aligning across $4$ channel realizations with $K$
transmitter-receiver pairs, a rate of $K \log (1 + 4P)$ is achievable,
which can quickly eclipse the separate-coding sum capacity of $4 \log
(1 + K P)$. Moreover, as we will discuss, the benefit of using larger
alignment sets is not in the rate, but rather in the reduction of
decoding delay.
In the next section, we will formally describe the interference
alignment setup, and define our notions of rate and delay. In
Section~\ref{sec:align_complement}, we will take a brief look at the
conventional ergodic interference alignment scheme, by considering the
rate and delay inherent in aligning interference using complementary
channel realizations. Section~\ref{sec:align_multiple} will give the
main result of this work, which is the analysis of rate and delay when
aligning interference by utilizing multiple channel realizations. We
will also give a scheme for trading off the rate and the delay. We
conclude in Section~\ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:prelim}
The setup is the same as the $K$-user interference channel of
\cite{nazer:ergodic_int_align} and
\cite{nazer:int_align_general_message_sets}, where there are $K$
transmitter-receiver pairs. The number of channel uses is $n$. For
the $k$-th transmitter, $k = 1,\ldots,K$, each message~$w_k$ is chosen
independently and uniformly from the set $\{1, 2, \ldots, 2^{n
\tilde{R}_k}\}$ for some $\tilde{R}_k \geq 0$. Only transmitter~$k$
knows message~$w_k$. Let $\mathcal{X}$ be the channel input and
output alphabet. The message~$w_k$ is encoded into the $n$ channel
uses using the encoder $\mathcal{E}_k : \{1, 2, \ldots, 2^{n
\tilde{R}_k}\} \to \mathcal{X}^n$. The output of the encoding
function is the transmitted symbol $X_k(t) = [\mathcal{E}_k(w_k)]_t$
at time~$t$, for $t = 1,\ldots,n$.
The communication channel undergoes fast fading, so the channel fades
change at every time step. At time~$t$, the channel
matrix~$\mat{H}(t)$ has complex entries $[\mat{H}(t)]_{kl} =
h_{kl}(t)$ for $k,l = 1,\ldots,K$. In this model, all transmitters
and receivers are given perfect knowledge of $\mat{H}(t)$ for all
times~$t$. We let $\mathcal{H}$ denote the set of all possible channel
fading matrices.
The message symbol~$X_k(t)$ is transmitted at time~$t$. We assume
zero delay across the channel, so the channel output seen by
receiver~$k$ at time~$t$ is the received symbol
\begin{equation}
Y_k(t) = \sum_{l=1}^K h_{kl}(t) X_l(t) + Z_k(t)
\mbox{,}
\label{eq:channel_model}
\end{equation}
where $Z_k(t)$ is an additive noise term. Each receiver~$k$ then
decodes the received message symbols according to $\mathcal{D}_k :
\mathcal{X}^n \to \{1, 2, \ldots, 2^{n \tilde{R}_k}\}$, to produce an
estimate~$\hat{w}_k$ of $w_k$.
\begin{definition}
The ergodic rate tuple $(R_1, R_2, \ldots, R_K)$ is \emph{achievable}
if for all $\epsilon > 0$ and $n$ large enough, there exist channel
encoding and decoding functions $\mathcal{E}_1, \ldots,
\mathcal{E}_K$, $\mathcal{D}_1, \ldots, \mathcal{D}_K$ such that
$\tilde{R}_k > R_k - \epsilon$ for all $k = 1,2,\ldots,K$, and $P
\left( \bigcup_{k=1}^K \{\hat{w}_k \ne w_k\} \right) < \epsilon$.
\end{definition}
We assume a Gaussian channel with complex channel inputs and outputs,
so $\mathcal{X} = {\mathbb{C}}$. Each transmitter~$k$ has power
constraint
\[
E[ |X_k(t)|^2] \leq \mathrm{SNR}_k
\mbox{,}
\]
where $\mathrm{SNR}_k \geq 0$ is the signal-to-noise ratio. The channel
coefficients~$h_{kl}(t)$, $k,l = 1,\ldots,K$, are independently and
identically distributed both in space and time. We require also that
$h_{kl}$ be drawn from a distribution which is symmetric about zero,
so $P(h_{kl}) = P(-h_{kl})$. The noise terms~$Z_k(t)$ are drawn
independently and identically from a circularly-symmetric complex
Gaussian distribution; thus, $Z_k(t) \sim \mathcal{CN}(0,1)$.
\subsection{Channel Quantization}
\label{subsec:channel_quant}
In this exposition, we consider quantized versions of the channel
matrix. For some quantization parameter $\gamma > 0$, let
$Q_\gamma(h_{kl})$ be the closest point in $({\mathbb{Z}} + j {\mathbb{Z}})
\gamma$ to $h_{kl}$ in Euclidean distance. The $\gamma$-quantized
version of the channel matrix $\mat{H} \in {\mathbb{C}}^{K \times K}$ is
given by the entries $[\mat{H}_\gamma]_{kl} = Q_\gamma(h_{kl})$.
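Concretely (an illustrative sketch in Python with NumPy; the function name
is ours), the quantizer simply rounds the real and imaginary parts to the
nearest multiple of $\gamma$:
\begin{verbatim}
import numpy as np

def quantize(H, gamma):
    """Entrywise map to the nearest point of (Z + jZ)*gamma."""
    return gamma * (np.round(H.real / gamma)
                    + 1j * np.round(H.imag / gamma))
\end{verbatim}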
Our scheme uses typical realizations of the channel matrices. For any
$\epsilon > 0$, choose a magnitude threshold $\tau > 0$ such that
$P(\bigcup_{k,l} \{|h_{kl}| > \tau \}) < \frac{\epsilon}{3}$. Throw
out all time indices with any channel coefficient magnitude larger
than $\tau$. Let $\gamma$ and $\delta$ be small positive constants.
Then choose $n$ large enough so that the typical set of sequences
$A_\delta^n$ of channel matrices has probability $P(A_\delta^n) \geq
1-\frac{\epsilon}{3}$ (see \cite{nazer:ergodic_int_align} for
details). Because this sequence of $\gamma$-quantized channel
matrices is $\delta$-typical, the corresponding rate decrease is no
more than a fraction of $\delta$.
In the remainder of this paper, we will only deal with the
$\gamma$-quantized channel matrices~$\mat{H}_\gamma$, so we drop the
subscript~$\gamma$; all further occurrences of $\mat{H}$ refer to the
quantized channel realization~$\mat{H}_\gamma$. We also redefine the
channel alphabet~$\mathcal{H}$ to only include the typical set of quantized
channel matrices, which has cardinality $|\mathcal{H}| = (2 \tau /
\gamma)^{2K^2}$.
\subsection{Aligning Interference}
\label{subsec:align_int}
In the standard interference alignment approach, the interference is
aligned by considering the channel matrix~$\mat{H}$ in tandem with its
complementary matrix~$\mat{H}^c$, where
\[
\mat{H}^c = \left[ \begin{array}{r@{}lr@{}lcr@{}l}
& h_{11} & - & h_{12} & \cdots & - & h_{1K} \\
- & h_{21} & & h_{22} & \cdots & - & h_{2K} \\
& \; \vdots & & \; \vdots & \ddots & & \; \vdots \\
- & h_{K1} & - & h_{K2} & \cdots & & h_{KK} \\
\end{array} \right]
\mbox{.}
\]
That is, $\mat{H}^c$ has entries $h_{kl}$ for $k = l$ and $-h_{kl}$
for $k \ne l$.
For alignment using more channel realizations, we define the concept
of an alignment set.
\begin{definition}
\label{def:align_set}
An \emph{alignment set} of size $m \in 2 {\mathbb{Z}}^+$ is a collection
of matrices $\mathcal{A} = \{\mat{H}^{(0)}, \mat{H}^{(1)}, \ldots,
\mat{H}^{(m-1)}\}$ such that the diagonal entries (signal terms) are
the same:
\begin{equation}
h^{(0)}_{kk} = h^{(1)}_{kk} = \cdots = h^{(m-1)}_{kk}
\label{eq:align_set_def_signal}
\end{equation}
for $k = 1,\ldots,K$, and the sum of interference terms cancel:
\begin{equation}
\left| h^{(0)}_{kl} \right| = \left| h^{(1)}_{kl} \right| = \cdots =
\left| h^{(m-1)}_{kl} \right|
\label{eq:align_set_def_interference_magnitude}
\end{equation}
and
\begin{eqnarray}
\left| \{h^{(i)}_{kl} = h^{(0)}_{kl} \; | \; i=1,\ldots,m-1\} \right|
& = & \frac{m}{2} - 1 \label{eq:align_set_def_interference_pos} \\
\left| \{h^{(i)}_{kl} = -h^{(0)}_{kl} \; | \; i=1,\ldots,m-1\} \right|
& = & \frac{m}{2} \label{eq:align_set_def_interference_neg}
\end{eqnarray}
for $k = 1,\ldots,K$, $l = 1,\ldots,K$, $k \ne l$. Within an
alignment set, the sum of channel matrices, $\mat{B} =
\sum_{i=0}^{m-1} \mat{H}^{(i)}$, will have entries $b_{kk} = m
h_{kk}^{(0)}$ and $b_{kl} = 0$, for $k,l = 1,\ldots,K$, $k \ne l$.
We denote $\mathcal{A}_{\mat{H}}$ to be an alignment set of which
$\mat{H}$ is a member.
\end{definition}
We have seen some examples of alignment sets already. Any channel
realization~$\mat{H}$ and its complement~$\mat{H}^c$ together form an
alignment set of size~$2$. Additionally, the set of matrices given in
Example~\ref{ex:align_set_4} is an alignment set of size~$4$.
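For concreteness, the conditions of Definition~\ref{def:align_set} can be
checked mechanically. The sketch below (Python with NumPy; the function
name is ours, and exact arithmetic on $\pm 1$ entries is assumed) tests
that the signal entries agree, that the interference magnitudes agree, and
that the interference entries sum to zero, which for nonzero entries of
equal magnitude is equivalent to the counting conditions of the definition:
\begin{verbatim}
import numpy as np

def is_alignment_set(mats):
    m = len(mats)
    if m < 2 or m % 2 != 0:
        return False
    K = mats[0].shape[0]
    for k in range(K):
        for l in range(K):
            vals = [H[k, l] for H in mats]
            if k == l:
                # signal terms must all be equal
                if any(v != vals[0] for v in vals):
                    return False
            else:
                # interference terms: equal magnitude, zero sum
                if any(abs(v) != abs(vals[0]) for v in vals):
                    return False
                if sum(vals) != 0:
                    return False
    return True

# The four matrices of the earlier example form an alignment set of
# size 4, and {H, Hc} with Hc = 2*np.diag(np.diag(H)) - H one of size 2.
\end{verbatim}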
Since channel transmission is instantaneous, the only delay considered
is due to waiting for the appropriate channel realizations before a
message symbol can be decoded.
\begin{definition}
The \emph{average delay} of an ergodic interference alignment scheme
is the expected number of time steps between the first instance a
message symbol~$\vect{X}$ is sent and the time until $\vect{X}$ is
recovered at the receiver.
\end{definition}
If $\vect{X}(t_0)$ is sent at time~$t_0$ but can not be decoded until
the appropriate interference alignment occurs at time~$t_1$, then the
delay is $t_1 - t_0$. Note that the delay does not consider the
decoding of the entire message~$w_k$---just the symbols transmitted at
each individual time, $X_k(t)$, $k = 1,\ldots,K$.
\section{Interference Alignment using Complementary Channel Realization}
\label{sec:align_complement}
The method of interference alignment via sending the same channel
input vector when a complementary channel realization occurs is given
in \cite{nazer:ergodic_int_align}. Call $R_k^{(2)}$ the achievable
rate for interference alignment using complements ({\it i.e.}, requiring two
channel realizations before decoding each message symbol).
\begin{lemma}[{\cite[Theorem~3]{nazer:ergodic_int_align}}]
\label{thm:rate_align_2}
An achievable rate tuple by aligning using complementary channel
realizations is
\[
\textstyle
R_k^{(2)} = \frac{1}{2} E[\log (1 + 2 |h_{kk}|^2 \mathrm{SNR}_k)]
\]
for $k = 1,\ldots,K$, where the expectation is over the distribution
of channel fades~$h_{kk}$ drawn from the matrices in $\mathcal{H}$.
\end{lemma}
When a channel realization~$\mat{H}$ occurs, the sent message
symbol is decoded once the complementary channel
realization~$\mat{H}^c$ occurs. Let $d^{(2)}$ denote the average
delay between channel realizations $\mat{H}$ and $\mat{H}^c$.
\begin{lemma}
\label{thm:delay_align_2}
When all channel realizations are equally likely, the average delay
incurred by interference alignment with complementary channel
realizations is $d^{(2)} = |\mathcal{H}|$.
\end{lemma}
\begin{proof}
Each channel realization is equally likely at each time. The time
until $\mat{H}^c$ occurs is a geometric random variable with parameter
$P(\mat{H}^c) = 1/|\mathcal{H}|$. The average delay is $|\mathcal{H}|$.
\end{proof}
Note that the delay~$d^{(2)}$ can be quite large. Using our
quantization scheme, $d^{(2)} = |\mathcal{H}| = (2 \tau / \gamma)^{2K^2}$.
\section{Interference Alignment using Multiple Channel Realizations}
\label{sec:align_multiple}
This section will focus on using alignment sets of sizes $m = 2$ and
$m = 4$. Extensions for larger alignment sets will be discussed in
Section~\ref{subsec:align_multiple_larger_align_sets}.
For ease of analysis, we assume that each channel
realization~$\mat{H}$ is equally likely, although the ideas presented
may be readily extended to the cases where the distribution of channel
realizations is non-uniform. However, for this particular
interference alignment scheme to work, all channel realizations within
the same alignment set must be equiprobable: for an alignment set
$\mathcal{A}_{\mat{H}} = \{\mat{H}, \mat{H}^{(1)}, \mat{H}^{(2)},
\ldots, \mat{H}^{(m-1)}\}$, we require that $P(\mat{H}) =
P(\mat{H}^{(1)}) = P(\mat{H}^{(2)}) = \cdots = P(\mat{H}^{(m-1)})$.
Fortunately, this holds since we assume that channel entries are drawn
from distributions that are symmetric about zero.
\subsection{First-to-Complete Alignment}
\label{subsec:align_multiple_first_to_complete}
We call the following scheme for achieving lower delay the
\emph{first-to-complete} scheme, which is essentially a
coupon-collecting race between an alignment set of size~$2$ and an
alignment set of size~$4$. For some channel realization $\mat{H} \in
\mathcal{H}$ (occurring at a time~$t_0$)---since the entire future of
channel realizations is known---we can collect the realizations
occurring at future times $t > t_0$. Now we say that an alignment
set~$\mathcal{A}_{\mat{H}}$ of size~$4$ has been \emph{completed} once
all matrices $\tilde{\mat{H}} \in \mathcal{A}_{\mat{H}}$ have been
realized. If $\mat{H}^c$ occurs before $\mathcal{A}_{\mat{H}}$ is
completed, then pair up $\mat{H}$ with that realization of
$\mat{H}^c$. Otherwise, group together $\mat{H}$ with the other
members of the alignment set $\mathcal{A}_{\mat{H}}$.
We derive the achievable rate by separately finding the rates when
decoding using alignment sets of different sizes, and then weighting
these rates by the probabilities that a particular-sized set is
completed before the other. From \cite{nazer:ergodic_int_align}, if
$\mat{H}$ at time~$t_0$ is paired with $\mat{H}^c$ at time~$t_1$, then
the same symbol vector~$\vect{X}(t_0)$ is transmitted at both times
$t_0$ and $t_1$. Since this is alignment with channel complements,
the rate $R_k = \frac{1}{2} E[\log (1 + 2|h_{kk}|^2 \mathrm{SNR}_k)] -
\epsilon$ is achievable with probability $1 - \epsilon$.
Now we find the rate when $\mat{H}$ at time $\hat{t}_0 = t_0$ is
instead grouped with the members of its size-$4$ alignment
set~$\mathcal{A}_{\mat{H}}$. Assume that the channel realizations of
the other members of the alignment set occur at times $\hat{t}_1$,
$\hat{t}_2$, and $\hat{t}_3$, respectively. In the scheme, we send
the same message symbol~$X_k(\hat{t}_0)$ at times $\hat{t}_0$,
$\hat{t}_1$, $\hat{t}_2$, and $\hat{t}_3$. The channel outputs are
\begin{equation}
Y_k(t) = h_{kk}(t) X_k(\hat{t}_0) + \sum_{l \ne k} h_{kl}(t)
X_l(\hat{t}_0) + Z_k(t)
\label{eq:channel_output_align_4}
\end{equation}
for $t = \hat{t}_0, \hat{t}_1, \hat{t}_2, \hat{t}_3$. From the
alignment set definition, we know $h_{kk}(\hat{t}_0) =
h_{kk}(\hat{t}_1) = h_{kk}(\hat{t}_2) = h_{kk}(\hat{t}_3)$ and
$h_{kl}(\hat{t}_0) + h_{kl}(\hat{t}_1) + h_{kl}(\hat{t}_2) +
h_{kl}(\hat{t}_3) = 0$ for $k = 1,\ldots,K$ and $l \ne k$. Thus, the
signal-to-interference-plus-noise ratio of the channel from
$X_k(\hat{t}_0)$ to $Y_k(\hat{t}_0) + Y_k(\hat{t}_1) + Y_k(\hat{t}_2)
+ Y_k(\hat{t}_3)$ is at least
\[
\frac{\mathrm{SNR}_k ((4 |\Re(h_{kk})| - 2 \gamma)^2 + (4 |\Im(h_{kk})| - 2
\gamma)^2)}{4 + (2 \gamma)^2 \sum_{l \ne k} \mathrm{SNR}_l}
\mbox{.}
\]
Taking the channel quantization parameter $\gamma \to 0$, the SINR is
$4 |h_{kk}|^2 \mathrm{SNR}_k$, which gives the rate (as $\tau \to \infty$):
\begin{equation}
\textstyle R_k = \frac{1}{4} E[\log (1 + 4|h_{kk}|^2 \mathrm{SNR}_k)] -
\frac{2 \epsilon}{3}
\mbox{.}
\label{eq:rate_limit_align_4}
\end{equation}
Thus there exist $\gamma$ and $\tau$ such that we achieve $R_k >
\frac{1}{4} E[\log (1 + 4|h_{kk}|^2 \mathrm{SNR}_k)] - \epsilon$ with
probability $1 - \epsilon$ when aligning using an alignment set of
size $4$.\footnotemark
\footnotetext{Higher rates may be possible by optimizing power
allocations, for example via water-filling. Here we only consider
rates achievable using equal-power allocations.}
Recall that $\mat{H}$ at time~$t_0$ is only grouped with the channel
realizations of the alignment set which completes first, so that the
realizations corresponding to the other alignment sets are \emph{not}
associated with $\mat{H}$ and can be used for some other
transmissions. For example, if $\mat{H}^c$ occurs between times
$\hat{t}_1$ and $\hat{t}_2$ ({\it i.e.}, $t_0 < \hat{t}_1 < t_1 < \hat{t}_2 <
\hat{t}_3$), then since the transmitter knows the sequence of channel
realizations in advance, it may avoid utilizing $\mat{H}(\hat{t}_1)$
to send $\vect{X}(t_0)$, which would become a wasted transmission when
$\mat{H}^c$ occurs at time $t_1$. In this example, decoding is via
channel complements, so $\vect{X}(t_0)$ is sent during times $t_0$ and
$t_1$, but never during times $\hat{t}_1$, $\hat{t}_2$, and
$\hat{t}_3$.
We now determine the probability that the first-to-complete scheme
decodes using the alignment set of size~$4$ rather than the alignment
set of size~$2$. This can be computed by considering a Markov chain
with the following states:
\begin{center}
\footnotesize
\begin{tabular}{ll}
$s_{-1}$: & Decode using $\mat{H}$ and its complement, $\mat{H}^c$ \\
$s_0$: & No matches yet to any alignment set \\
$s_1$: & First match with size-$4$ alignment set \\
$s_2$: & Second match with size-$4$ alignment set \\
$s_3$: & Third match with size-$4$ alignment set, so decode using
$\mathcal{A}_{\mat{H}}$
\end{tabular}
\end{center}
The Markov chain is shown in Figure~\ref{fig:markov_chain_2_4}.
States $s_{-1}$ and $s_3$ are absorbing. Because this is a success
runs Markov chain~\cite{taylor:stochastic_modeling}, its absorption
probabilities and hitting times are known. The probability of
decoding via the alignment set of size~$4$ is the probability of
absorption at state~$s_3$ starting from state~$s_0$, and is computed
to be $\beta_4 = 1/4$. Note that $\beta_4$ does not depend on the
number of possible channel realizations, $|\mathcal{H}|$. This is intuitive
since matrices not belonging to an alignment set do not affect the
probability that one set completes before another.
\begin{figure}[tbp]
\centering
\psfrag{s0}[c][c]{$s_{-1}$}
\psfrag{s1}[c][c]{$s_0$}
\psfrag{s2}[c][c]{$s_1$}
\psfrag{s3}[c][c]{$s_2$}
\psfrag{s4}[c][c]{$s_3$}
\psfrag{p00}[c][c]{\small $1$}
\psfrag{p10}[c][c]{\small $\frac{1}{|\mathcal{H}|}$}
\psfrag{p11}[c][c]{\small $1 - \frac{4}{|\mathcal{H}|} $}
\psfrag{p12}[c][c]{\small $\frac{3}{|\mathcal{H}|} $}
\psfrag{p20}[c][c]{\small $\frac{1}{|\mathcal{H}|}$}
\psfrag{p22}[c][c]{\small $1 - \frac{3}{|\mathcal{H}|}$}
\psfrag{p23}[c][c]{\small $\frac{2}{|\mathcal{H}|}$}
\psfrag{p30}[c][c]{\small $\frac{1}{|\mathcal{H}|}$}
\psfrag{p33}[c][c]{\small $1 - \frac{2}{|\mathcal{H}|}$}
\psfrag{p34}[c][c]{\small $\frac{1}{|\mathcal{H}|}$}
\psfrag{p44}[c][c]{\small $1$}
\includegraphics[width=3.0in]{markov_chain_2_4.eps}
\caption{Success runs Markov chain associated with first-to-complete
alignment. States indicate progress towards completion of the
alignment sets. Quantities above the arrows indicate transition
probabilities.}
\label{fig:markov_chain_2_4}
\end{figure}
\begin{lemma}
\label{thm:rate_first_compl_2_4}
An achievable rate tuple for the first-to-complete scheme has rates
(for all $k = 1,\ldots,K$):
\begin{eqnarray*}
R_k^{(2,4)}
& = & \textstyle \frac{3}{8} E[ \log (1 + 2 |h_{kk}|^2 \mathrm{SNR}_k)] \nonumber \\
& & \textstyle {+} \: \frac{1}{16} E[ \log (1 + 4 |h_{kk}|^2 \mathrm{SNR}_k)]
\mbox{.}
\end{eqnarray*}
\end{lemma}
\begin{proof}
Because decoding via the size-$2$ alignment set occurs $1 - \beta_4$
of the time, and decoding via the size-$4$ alignment set occurs
$\beta_4$ of the time, an achievable rate is $R_k^{(2,4)} =
\frac{1}{2} (1-\beta_4) E[ \log (1 + 2 |h_{kk}|^2 \mathrm{SNR}_k)] +
\frac{1}{4} \beta_4 E[ \log (1 + 4 |h_{kk}|^2 \mathrm{SNR}_k)]$. Plugging in
$\beta_4 = 1/4$ gives the result.
\end{proof}
\begin{lemma}
\label{thm:delay_first_compl_2_4}
For the first-to-complete scheme, the average decoding delay is
$d^{(2,4)} = (3/4) |\mathcal{H}| = (3/4) d^{(2)}$.
\end{lemma}
\begin{proof}
The delay until either alignment set is completed is the mean hitting
time until one of the corresponding absorption states is reached in
the Markov chain of Figure~\ref{fig:markov_chain_2_4}. A simple
computation for the hitting time yields $d^{(2,4)} = (3/4) |\mathcal{H}|$.
\end{proof}
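Both values can be reproduced numerically from the chain of
Figure~\ref{fig:markov_chain_2_4} (an illustrative sketch in Python with
NumPy; \texttt{N} stands in for $|\mathcal{H}|$ and the variable names are
ours):
\begin{verbatim}
import numpy as np

N = 1000.0  # plays the role of |H|; any N >= 4 gives the same answers
# Transient states s0, s1, s2; absorbing states s_{-1} and s3.
Q = np.array([[1 - 4/N, 3/N,     0      ],
              [0,       1 - 3/N, 2/N    ],
              [0,       0,       1 - 2/N]])
R = np.array([[1/N, 0  ],   # columns: absorb at s_{-1}, absorb at s3
              [1/N, 0  ],
              [1/N, 1/N]])
F = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
B = F @ R                          # absorption probabilities
t = F @ np.ones(3)                 # expected steps to absorption
print(B[0, 1])    # -> 0.25, i.e. beta_4 = 1/4
print(t[0] / N)   # -> 0.75, i.e. d^(2,4) = (3/4)|H|
\end{verbatim}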
\subsection{Delay-Rate Tradeoff}
\label{subsec:align_multiple_tradeoff}
Although the first-to-complete scheme achieves lower delay than
interference alignment using only complements, it has the
drawback of having lower rate. By using time-sharing, we can achieve
any delay~$d$ such that $(3/4) |\mathcal{H}| = d^{(2,4)} \leq d \leq
d^{(2)} = |\mathcal{H}|$, and every user $k \in \{1,\ldots,K\}$ will still
have increased data rate over that of $R_k^{(2,4)}$.
In the time-sharing scheme, with probability $1 - \alpha$ where $0
\leq \alpha \leq 1$, pair up $\mat{H}$ with the first instance of
$\mat{H}^c$ which occurs later in time; this is alignment using only
complements. With probability~$\alpha$, however, perform the
first-to-complete scheme: pair up $\mat{H}$ with $\mat{H}^c$ only if
$\mat{H}^c$ occurs before any alignment set of size~$4$ is completed;
otherwise, group $\mat{H}$ with the size-$4$ alignment set which
completes first.
\begin{theorem}
\label{thm:rate_time_sharing}
The achievable rate when time-sharing with probability~$\alpha$ of
using the first-to-complete scheme is
\begin{eqnarray*}
R_k(\alpha)
& = & (1-\alpha) R_k^{(2)} + \alpha R_k^{(2,4)} \\
& = & \textstyle \frac{1}{2} \left( 1 - \frac{\alpha}{4} \right) E[
\log (1 + 2 |h_{kk}|^2 \mathrm{SNR}_k)] \nonumber \\
& & \textstyle {+} \: \frac{\alpha}{16} E[ \log (1 + 4 |h_{kk}|^2 \mathrm{SNR}_k)]
\mbox{.}
\end{eqnarray*}
\end{theorem}
\begin{proof}
Evident.
\end{proof}
\begin{theorem}
\label{thm:delay_time_sharing}
The average delay when time-sharing is
\[
d(\alpha) = (1-\alpha) d^{(2)} + \alpha d^{(2,4)} = (1 - \alpha/4)
|\mathcal{H}|
\mbox{.}
\]
\end{theorem}
\begin{proof}
Evident.
\end{proof}
\begin{corollary}
\label{thm:time_sharing_better}
The average delay, when time-sharing between the first-to-complete
scheme (using alignment sets of both sizes $2$ and $4$) and
channel-complement alignment, is lower than the average delay when
using only complements.
\end{corollary}
\begin{proof}
By choosing any $\alpha > 0$, we get delay $d(\alpha)$ strictly less
than $|\mathcal{H}| = d^{(2)}$.
\end{proof}
The reduced delay is an intuitive result since the first-to-complete
scheme allows additional opportunities to align, without disallowing
existing opportunities.
\subsection{Extension to Larger Alignment Sets}
\label{subsec:align_multiple_larger_align_sets}
We now extend our analysis to more general collections of alignment
sets. Consider a finite tuple of positive even numbers $I =
(m_1,m_2,\ldots,m_{|I|})$, possibly with repetitions. We generalize
first-to-complete alignment by using non-overlapping alignment sets
with sizes dictated by the entries of $I$. As soon as all members of
any particular alignment set have been seen, we say that that
alignment set has been completed; we transmit and decode using the
particular alignment set. As an example, the first-to-complete
alignment scheme given in the first part of this section corresponds
to $I = (2,4)$. For the case of a general tuple $I$, the process is
identical to the multiple subset coupon collecting problem of Chang
and Ross~\cite{chang:multiple_subset_coupon_collecting}, in which
coupons are repeatedly drawn with replacement until any one of several
preordained subsets of coupons has been collected.
To compute the achievable rates $(R_1^{I}, R_2^{I}, \ldots, R_K^{I})$
and delay~$d^I$ associated with running first-to-complete alignment
among $I$-sized alignment sets, we construct the associated Markov
chain. The state vector $\vect{s} = (s_1, s_2, \ldots, s_{|I|})$ is
defined so that element $s_i$ counts how many members of the $i$-th
alignment set have already occurred, excluding the initial
matrix~$\mat{H}$. Initially, the Markov chain is at state $\vect{s} =
\vect{0}$, since no alignment set member aside from $\mat{H}$ has yet
been realized. At each time~$t$, if $\mat{H}(t)$ is a member of the
${\hat{\imath}}$-th alignment set and has not yet been realized, then
increment $s_{\hat{\imath}} := s_{\hat{\imath}} + 1$. When
$s_{\hat{\imath}} = m_{\hat{\imath}} - 1$ for some $\hat{\imath}$,
this means that the ${\hat{\imath}}$-th alignment set (of
size~$m_{\hat{\imath}}$) has been completed. The Markov chain enters
an absorbing state, and the receiver decodes. Let $V$ denote the set
of absorbing states. The state transition probabilities are
\[
\small
P_{\vect{s}, \vect{s}'} =
\left\{ \begin{array}{ll}
\frac{m_{\hat{\imath}} - 1 - s_{\hat{\imath}}}{|\mathcal{H}|}
& s'_{\hat{\imath}} = s_{\hat{\imath}} + 1 ~\mbox{for some}
~\hat{\imath},\ldots \\
& \quad s'_i = s_i ~\mbox{for all} ~i \ne \hat{\imath}, ~\vect{s}
\not\in V \\
1 - \sum_i \frac{m_i - 1 - s_i}{|\mathcal{H}|}
& \vect{s}' = \vect{s}, ~\vect{s} \not\in V \\
1 & \vect{s}' = \vect{s}, ~\vect{s} \in V ~\mbox{(absorption)}\\
0 & \mbox{otherwise}
\end{array} \right.
\mbox{.}
\]
Let $\beta_{m}^{I}$ be the probability that the first completed
alignment set is the alignment set of size $m \in I$. Equivalently,
$\beta_m^{I}$ is the probability that the Markov chain reaches the
absorption state corresponding to the completion of a specific
size-$m$ alignment set. These absorption probabilities can be
computed via matrix inversion (see the Appendix or Taylor and
Karlin~\cite{taylor:stochastic_modeling} for more details).
Table~\ref{table:sample_absorp_prob_delays} gives example values for
$\beta_m^{I}$.
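Here the relevant formulas are the standard ones for an absorbing Markov
chain: writing $\mat{F}$ for the matrix of one-step transition
probabilities among the transient states and $\mat{G}$ for the matrix of
one-step probabilities from transient to absorbing states, the absorption
probabilities are the entries of $(\mat{I}-\mat{F})^{-1}\mat{G}$ and the
mean times to absorption are the entries of
$(\mat{I}-\mat{F})^{-1}\vect{1}$, read off in the row corresponding to the
initial state $\vect{s} = \vect{0}$.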
\begin{table}[tbp]
\centering
\begin{threeparttable}
\caption{Absorption Probabilities and Delays\tnote{$\dagger$}}
\label{table:sample_absorp_prob_delays}
\begin{tabular*}{0.9\columnwidth}{@{\extracolsep{\fill}} lllll @{}}
\toprule
\multicolumn{1}{c}{Set sizes} & \multicolumn{3}{c}{Absorption
probability} & \multicolumn{1}{c}{Delay} \\
\cmidrule{1-1} \cmidrule{2-4} \cmidrule{5-5}
\multicolumn{1}{c}{$I$} & \multicolumn{1}{c}{$\beta_{m_1}^I$} &
\multicolumn{1}{c}{$\beta_{m_2}^I$} &
\multicolumn{1}{c}{$\beta_{m_3}^I$} & \multicolumn{1}{c}{$d^I$} \\
\midrule
$(2,4)$ & $0.75$ & $0.25$ & & $0.75 |\mathcal{H}|$ \\
$(2,6)$ & $0.8333$ & $0.1667$ & & $0.8333 |\mathcal{H}|$ \\
$(2,4,4)$ & $0.6429$ & $0.1786$ & $0.1786$ & $0.6429 |\mathcal{H}|$ \\
$(2,4,6)$ & $0.6944$ & $0.2083$ & $0.0972$ & $0.6944 |\mathcal{H}|$ \\
$(4,4)$ & $0.5$ & $0.5$ & & $1.2167 |\mathcal{H}|$ \\
$(4,6)$ & $0.625$ & $0.375$ & & $1.3988 |\mathcal{H}|$ \\
$(4,8)$ & $0.7$ & $0.3$ & & $1.4972 |\mathcal{H}|$ \\
$(4,4,4)$ & $0.3333$ & $0.3333$ & $0.3333$ & $0.9790 |\mathcal{H}|$ \\
$(6,10)$ & $0.6429$ & $0.3571$ & & $1.8607 |\mathcal{H}|$ \\
\bottomrule
\end{tabular*}
\begin{tablenotes}
\item[$\dagger$] For values to be valid, $|\mathcal{H}| \geq 1 +
\sum_{i=1}^{|I|} (m_i - 1)$ must hold.
\end{tablenotes}
\end{threeparttable}
\end{table}
Following a similar argument as in
Lemma~\ref{thm:rate_first_compl_2_4}, the rate for receiver $k \in
\{1,\ldots,K\}$ by using a first-to-complete scheme with specific
alignment sets of sizes drawn from $I$ is
\[
R_k^{I} = \sum_{m \in I} \frac{1}{m} \beta_m^{I} E[ \log (1 + m
|h_{kk}|^2 \mathrm{SNR}_k)]
\mbox{.}
\]
We now incorporate time-sharing and describe the delay-rate tradeoff.
Let $\mathcal{I}$ be a finite collection of these tuples~$I$; that is,
$\mathcal{I} \subseteq \{I = (m_1,\ldots,m_{|I|}) \; | \; m_i \in 2
{\mathbb{Z}}^+\}$. We can do time-sharing between first-to-complete
schemes, with sizes drawn from $I \in \mathcal{I}$, according to the
vector $\vect{\alpha} = (\alpha_{I_1}, \alpha_{I_2}, \ldots,
\alpha_{I_{|\mathcal{I}|}})$ where $\sum_{I \in \mathcal{I}} \alpha_I
= 1$ and $\alpha_I \geq 0$ for all $I \in \mathcal{I}$. The rate will
be
\begin{equation}
R_k(\vect{\alpha}) = \sum_{I \in \mathcal{I}} \alpha_I R_k^{I}
\mbox{.}
\label{eq:rate_larger_align_sets_time_sharing}
\end{equation}
Alternatively, to be explicit about the rates due to alignment sets of
particular sizes, the rate can also be written as
\[
\small
R_k(\vect{\alpha}) = \sum_{m \in 2 {\mathbb{Z}}^+} \left( \sum_{I \in
\mathcal{I} \, : \, m \in I} \alpha_I \beta_m^{I} \right) \frac{1}{m}
E[ \log(1 + m |h_{kk}|^2 \mathrm{SNR}_k)]
\mbox{.}
\]
The average delay using alignment sets of sizes $I = (m_1, m_2,
\ldots, m_{|I|})$ is equal to the mean absorption time for the Markov
chain. From \cite{chang:multiple_subset_coupon_collecting}, by using
Poisson embedding, this delay can be computed as\footnotemark
\begin{equation}
d^I = |\mathcal{H}| \int_0^1 \frac{1}{1-u} \prod_{i=1}^{|I|} (1 - u^{m_i -1})
\; du
\mbox{.}
\label{eq:delay_larger_align_sets}
\end{equation}
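This integral is straightforward to evaluate numerically (an illustrative
sketch in Python using SciPy's \texttt{quad}; the function name is ours):
\begin{verbatim}
from scipy.integrate import quad

def delay_factor(I):
    # d^I / |H|: the factor (1-u^(m1-1)) is divided by (1-u) analytically
    m1, rest = I[0], I[1:]
    def integrand(u):
        val = sum(u ** j for j in range(m1 - 1))
        for m in rest:
            val *= 1.0 - u ** (m - 1)
        return val
    return quad(integrand, 0.0, 1.0)[0]

print(delay_factor((2, 4)))     # -> 0.75
print(delay_factor((4, 4)))     # -> 1.2167
print(delay_factor((4, 4, 4)))  # -> 0.9790
\end{verbatim}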
Table~\ref{table:sample_absorp_prob_delays} gives average delays for
some representative collections of alignment sets. Then the delay
using time-sharing is
\begin{equation}
d(\vect{\alpha}) = \sum_{I \in \mathcal{I}} \alpha_I d^{I}
\mbox{,}
\label{eq:delay_larger_align_sets_time_sharing}
\end{equation}
which is linear in the number of possible channel realizations,
$|\mathcal{H}|$.
\footnotetext{This evaluates to an inclusion-exclusion sum of harmonic
numbers $H_n$:
\[
d^I = |\mathcal{H}| \left[ \sum_{U \subseteq I, U \neq \emptyset}
(-1)^{1-|U|} H_{\scriptstyle \left( -|U| + \sum_{m \in U} m \right)}
\right]
\mbox{.}
\]
The delay can also be expressed analytically using the digamma
function $\Psi$, giving
\begin{eqnarray*}
d^I
& = & |\mathcal{H}| \sum_{U \subseteq I} (-1)^{1-|U|} \Psi \left(1-|U| +
\sum_{m \in U} m \right) \\
& = & |\mathcal{H}| \left[
\gamma + \sum_{m_1 \in I} \Psi (m_1) - \sum_{m_1,m_2 \in I} \Psi
(-1+m_1+m_2) \right. \nonumber \\
& & \left. {+} \: \sum_{m_1,m_2,m_3 \in I} \Psi (-2+m_1+m_2+m_3) -
\cdots \right]
\mbox{,}
\end{eqnarray*}
where $\gamma$ is the Euler-Mascheroni constant. Also, from
\cite{chang:multiple_subset_coupon_collecting}, we can find the
variance of this delay, as well as the average delay when alignment
sets overlap.}
From Table~\ref{table:sample_absorp_prob_delays}, we can make an
observation regarding the computed absorption probabilities and
associated delays. When the first alignment set has size~$2$, notice
that $d^I = \beta_2^I |\mathcal{H}|$. This holds for any tuple $I$ which
contains an alignment set of size~$2$ (see Appendix).
\subsection{Further Considerations}
\label{subsec:align_multiple_further}
In this analysis, we only consider alignment sets that do not share
any common matrices. However, as the number of allowable sizes,
$|I|$, grows larger, this condition will become harder to fulfill
since there will be greater potential for collisions. Finding tuples
of alignment sets such that there are no overlapping channels is an
avenue for future work. One thing to note is that because only
$2^{K(K-1)}$ matrices satisfy $h_{kk}^{(i)} = h_{kk}$ and
$|h_{kl}^{(i)}| = |h_{kl}|$ for $k = 1,\ldots,K$ and $l \ne k$, an
alignment set of size $m = 2^{K(K-1)}$ would consist of all possible
channel matrices which might align with $\mat{H}$, and so necessarily
must collide with any other alignment set.
A related issue is that of allowing decoding using \emph{all}
alignment sets of a particular size~$m$, of which there are
$\binom{m-1}{m/2}^{K(K-1)}$ such alignment sets. For example, a
system could choose to perform first-to-complete alignment among
\emph{any} alignment set of sizes $2$ and $4$. Because
non-intersection between different alignment sets may no longer be
guaranteed, the analysis will be more complicated.
From Table~\ref{table:sample_absorp_prob_delays}, we can start to
notice the potential for delay reduction via using multiple alignment
sets of the same size. Although the delay will still scale linearly
in $|\mathcal{H}|$, it is possible to significantly reduce the delay below
$d^{(2)} = |\mathcal{H}|$. As an example, from
Figure~\ref{fig:multiple_4_delay} we can observe the behavior of the
linear scaling factor, in the case of allowing alignment using more
and more size-$4$ alignment sets.\footnotemark ~Thus a deeper
consideration of alignment with multiple same-size alignment sets may
be a fruitful area for further inquiry.
\footnotetext{Of course, the trend shown in
Figure~\ref{fig:multiple_4_delay} only holds for scenarios where the
number of users~$K$ is large enough that there exist enough distinct
alignment sets of size~$4$ for alignment.}
\begin{figure}[tbp]
\centering
\psfrag{num4}[c][c]{\scriptsize $n$}
\psfrag{delay}[c][c]{\scriptsize $d^I / |\mathcal{H}|$}
\includegraphics[width=2.5in]{multiple_4_delay.eps}
\caption{Plot showing the decrease of the delay linear scaling factor,
for multiple disjoint alignment sets of size~$4$. The number of
alignment sets of size~$4$ is $n$. Thus each point represents the
delay associated with the tuple $I = (4,4,4,\ldots)$, where the tuple
has $n$ elements.}
\label{fig:multiple_4_delay}
\end{figure}
There are myriad other ways in which channel matrices may align.
Definition~\ref{def:align_set} gives one set of sufficient conditions
for channel realizations to align, in order to keep the analysis
tractable---and the benefits which arise by considering larger
alignment sets are already evident. An obvious extension to this
would be to consider alignment sets in which arbitrary linear
combinations add up to multiples of the identity, and to only consider
alignment among subsets of users. Subsequent work by
\cite{johnson:drt} takes a step in this direction.
The moral of this story, however, is that delay can always be reduced
by allowing alignment using a greater number of possible choices of
alignment sets. The data rate may decrease correspondingly, so the
tradeoff needs to be appropriately chosen according to the needs of
the communication system.
\section{Conclusion}
\label{sec:conclusion}
In our analysis, we have not considered the delays between when a
message symbol is available and when it is first transmitted. We have
only defined delay as the time between when the symbol is first
transmitted and when it is able to be recovered by the receiver. We
believe this is a reasonable metric of delay, as long as message
symbols are not all generated at one time. However, an analysis using
queueing theory may be necessary to verify this claim.
In this work, we have proposed an interference alignment scheme which
reduces delay, although with potentially decreased data rate. Delay
is mitigated by allowing more ways to align interference---through the
utilization of larger alignment sets. We have also introduced a
scheme to trade off the delay and rate. In the end, even though the
rate may be reduced, we can still say, in the parlance of interference
aligners, that each person gets $\kappa$ of the cake, where $1/K \leq
\kappa \leq 1/2$---so our scheme can still be an improvement over
non-aligning channel-sharing strategies in terms of data rate.
\addtolength{\textheight}{-7cm}
\section{Introduction.}
Given a real Lie algebra $\mathfrak{g},$ the determination up to equivalence of
zero torsion linear maps from $\mathfrak{g}$ to $\mathfrak{g}$ plays an important
role in the computation of complex structures on direct products involving
$\mathfrak{g}$
(\cite{u2revisited}). In the present note, we consider the question of
whether or not
any such zero torsion linear map
for non-abelian $\mathfrak{g}$ is necessarily an extension of some
$CR$-structure.
We answer the question in the negative by computing (up to equivalence) all zero torsion linear maps
from the real 3-dimensional Heisenberg Lie algebra $\mathfrak{n}$ into itself. The result is then used to exhibit
a complete set of
representatives of equivalence classes of complex structures on $\mathfrak{n}\times \mathfrak{n}$.
We also compute
all zero torsion linear maps
on
$\mathfrak{sl}(2,\mathbb{R}).$ In that case they are extensions of $CR$-structures. We deduce
a complete set of
representatives of equivalence classes of complex structures on
$\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R}).$
\section{Preliminaries.}
Let $G_0$ be a connected finite dimensional real Lie group, with
Lie algebra $\mathfrak{g}.$
A linear map $J \, : \, \mathfrak{g} \rightarrow \mathfrak{g}$
is said to have \textit{zero torsion} if it satisfies the condition
\begin{equation}
\label{Jnotorsion}
[{J} X, {J}Y]-[X,Y]-{J}[{J}X,Y]-J[X,{J}Y] =0 \quad \forall X,Y \in \mathfrak{g}.
\end{equation}
If $J$ has zero torsion and satisfies in addition $J^2=-1,$
$J$ is an (integrable) complex structure on
$\mathfrak{g}.$
That means that $G_0$
can be given the structure of a complex manifold with the same underlying
real structure and such that the canonical complex structure on $G_0$ is the left invariant almost
complex structure
$\hat{J}$ associated to $J$
(For more details, see \cite{artmagnin1}).
To any (integrable) complex structure $J$ is associated
the complex subalgebra
$\mathfrak{m}=\left\{ \tilde{X} := X - i J X \, ; X \in \mathfrak{g} \right\}$
of the complexification $\mathfrak{g}_\mathbb{C}$ of $\mathfrak{g}.$
In that way, (integrable) complex structures can be identified with
complex subalgebras $\mathfrak{m} $ of
$\mathfrak{g}_\mathbb{C}$ such that
$\mathfrak{g}_\mathbb{C} =\mathfrak{m} \oplus \bar{\mathfrak{m}}$, bar denoting conjugation.
$J$ is said to be abelian if $\mathfrak{m}$ is.
When computing the matrices of the zero torsion maps in some
fixed basis
$(x_j)_{ 1 \leqslant j \leqslant n}$
of $\mathfrak{g}$,
we will denote by
$ij|k$ ($1 \leqslant i,j,k \leqslant n$)
the torsion equation
obtained by projecting on $x_k$ the equation
(\ref{Jnotorsion}) with $X=x_i, Y=x_j.$
The automorphism group
$\text{Aut } \mathfrak{g}$
of $\mathfrak{g}$ acts on the set
of all zero torsion linear maps and on the set
of all complex structures on $\mathfrak{g}$ by
$J \mapsto \Phi \circ J \circ \Phi^{-1} \quad \forall \Phi \in
\text{Aut } \mathfrak{g}.$
Two $J,J^{\prime}$
on $\mathfrak{g}$ are said to be \textit{equivalent}
(notation: $J \equiv J^{\prime}$)
if they are on the same
$\text{Aut } \mathfrak{g}$ orbit.
For complex structures and simply connected $G_0,$
this amounts to the existence of an $f \in \text{Aut } G_0$ such that
$f : (G_0,J) \rightarrow (G_0,J^{\prime})$ is biholomorphic.
\section{Case of
$\mathfrak{sl}(2,\mathbb{R}).$}
Let $G=SL(2,\mathbb{R})$ denote the Lie group of
real $2\times 2$ matrices with determinant 1
\begin{equation}
\label{sigma}
\sigma = \begin{pmatrix} a&b \\ c&d \end{pmatrix} \quad , \quad ad-bc=1.
\end{equation}
Its Lie algebra
$\mathfrak{g} =\mathfrak{sl}(2,\mathbb{R})$
consists of the zero trace
real $2\times 2$ matrices
$$X = \begin{pmatrix} x&y \\ z&-x \end{pmatrix}= xH+yX_+ +zX_- $$
with basis
$H = \left(\begin{smallmatrix} 1&0 \\ 0&-1 \end{smallmatrix}\right)$,
$X_+ = \left(\begin{smallmatrix} 0&1 \\ 0&0 \end{smallmatrix}\right)$,
$X_- = \left(\begin{smallmatrix} 0&0 \\ 1&0 \end{smallmatrix}\right)$
and commutation relations
\begin{equation}
\label{relationssl2}
[H,X_+]=2X_+, \; [H,X_-]=-2X_-, \; [X_+,X_-]=H.
\end{equation}
Beside the basis $(H,X_+,X_-),$ we shall also make use of the basis
$(Y_1,Y_2,Y_3)$
where
$Y_1=\frac{1}{2} H,$
$Y_2=\frac{1}{2} (X_+ -X_-),$
$Y_3=\frac{1}{2} (X_+ + X_-),$ with
commutation relations
\begin{equation}
\label{relationssl2bis}
[Y_1,Y_2]=Y_3,
[Y_1,Y_3]=Y_2,
[Y_2,Y_3]=Y_1.
\end{equation}
The adjoint representation of $G$ on $\mathfrak{g}$ is given by
$\text{Ad}(\sigma) X = \sigma X \sigma^{-1}.$
The matrix $\Phi$ of $\text{Ad}(\sigma)$
($\sigma$ as in (\ref{sigma}))
in the basis $(H,X_+,X_-)$ is
\begin{equation}
\Phi =
\begin{pmatrix}
1+2bc&-ac&bd\\
-2ab&a^2&-b^2\\
2cd&-c^2&d^2
\end{pmatrix}.
\end{equation}
The adjoint group $\text{Ad}(G)$ is the identity component of
$\text{Aut } \mathfrak{g}$
and one has
\begin{equation}
\label{Aut(sl2)}
\text{Aut } \mathfrak{g}
= \text{Ad}(G) \cup \Psi_0 \text{Ad}(G) \quad , \quad
\Psi_0= \text{diag}(1,-1,-1).
\end{equation}
The adjoint action of $G$ on $\mathfrak{g}$ preserves the form $x^2 +yz.$
The orbits are :
\\ (i) the trivial orbit $\{0\};$
\\ (ii) the upper sheet $z>0$ of the cone $x^2+yz=0$ (orbit of $X_-$);
\\ (iii) the lower sheet $z<0$ of the cone $x^2+yz=0$ (orbit of $-X_-$);
\\ (iv) for all $s >0$ the one-sheet hyperboloid $x^2+yz=s^2$ (orbit of $s H$);
\\ (v)
for all $s >0$
the upper sheet $z>0$ of the hyperboloid $x^2+yz=-s^2$
(orbit of $s(-X_+ +X_-$));
\\ (vi)
for all $s >0$
the lower sheet $z<0$ of the hyperboloid $x^2+yz=-s^2$ (orbit of $s(X_+ -X_-$)).
\\
The orbits of $\mathfrak{g}$ under the whole
$\text{Aut } \mathfrak{g}$
are, beside $\{0\}$:
\\ (I) the cone $x^2+yz=0$ (orbit of $X_-$);
\\ (II) the one-sheet hyperboloid $x^2+yz=s^2$ (orbit of $s H$)
($s >0$);
\\ (III) the two-sheet hyperboloid $x^2+yz=-s^2$ (orbit of $s(X_+ -X_-$))
($s >0$).
\begin{lemma}
\label{sl2notorsion}
Let $\mathfrak{g}=
\mathfrak{sl}(2,\mathbb{R}),$ and
$J \, : \,
\mathfrak{g}
\rightarrow
\mathfrak{g}
$ be any linear map.
$J$ has zero torsion
if and only if it is equivalent to
the endomorphism defined in the basis
$(Y_1,Y_2,Y_3)$
(resp.
$(H,X_+,X_-)$)
by
\begin{equation}
\label{Jsl2notorsionbis}
J_*(\lambda)=\begin{pmatrix}
0&0&-1\\
0&\lambda&0\\
1&0&0
\end{pmatrix}
\quad , \quad \lambda \in \mathbb{R} \quad ,
\end{equation}
$J_*(\lambda) \not \equiv J_*(\mu)$ for $\lambda \neq \mu$
(resp.
\begin{equation}
\label{Jsl2notorsion}
J(\alpha)=\begin{pmatrix}
0&-\frac{1}{2}&-\frac{1}{2}\\
1&\alpha&-\alpha\\
1&-\alpha&\alpha
\end{pmatrix}
\quad , \quad \alpha \in \mathbb{R} \quad ,
\end{equation}
$J(\alpha) \not \equiv J(\beta)$ for $\alpha \neq \beta$).
\end{lemma}
\begin{proof}
Let $J = (\xi^i_j)_{1 \leqslant i,j \leqslant 3}$
in the basis $(H,X_+,X_-).$
The 9 torsion equations are
in the basis $(H,X_+,X_-)$:
\begin{eqnarray*}
12|1
&\quad
2 (\xi^2_2 + \xi^1_1) \xi^1_2 + (\xi^2_2 - \xi^1_1) \xi^3_1 - (\xi^2_1
+ 2 \xi^1_3) \xi^3_2=0,\\
12|2
&\quad
2 (\xi^2_1 \xi^1_2 + 1 + (\xi^2_2)^2) - \xi^3_1 \xi^2_1 - 2 \xi^3_2 \xi^2
_3=0,\\
12|3
&\quad
(\xi^3_1 + 2 \xi^1_2) \xi^3_1 - 2 (\xi^2_2 + 2 \xi^1_1) \xi^3_2 + 2 \xi^3_3 \xi^3_2=0,\\
13|1
&\quad
(\xi^2_1 - 2 \xi^1_3) \xi^1_1 + 2 \xi^2_3 \xi^1_2 + \xi^3_1 \xi^2_3 - (
\xi^2_1 + 2 \xi^1_3) \xi^3_3=0,\\
13|2
&\quad
2 (\xi^2_2 - 2 \xi^1_1) \xi^2_3 + (\xi^2_1 + 2 \xi^1_3) \xi^2_1 - 2 \xi^
3_3 \xi^2_3=0,\\
13|3
&\quad
\xi^3_1 \xi^2_1 - 2 \xi^3_1 \xi^1_3 - 2 + 2 \xi^3_2 \xi^2_3 - 2 (\xi^3_3)^2=0,\\
23|1
&\quad
4 \xi^1_3 \xi^1_2 - 1 - \xi^2_2 \xi^1_1 - \xi^3_2 \xi^2_3 + (\xi^2_2 - \xi^1_1) \xi^3_3=0,\\
23|2
&\quad
4 \xi^2_3 \xi^1_2 - (\xi^2_2 + \xi^3_3) \xi^2_1=0,\\
23|3
&\quad
4 \xi^3_2 \xi^1_3 - (\xi^2_2 + \xi^3_3) \xi^3_1=0.
\end{eqnarray*}
$J$ has at least one real eigenvalue $\lambda.$ Let $v \in \mathfrak{g},$ $v \neq 0,$ be an
eigenvector associated to $\lambda.$
From the classification of the
$\text{Aut } \mathfrak{g}$ orbits of
$\mathfrak{g},$ we then get 3 cases according to whether $v$ is on the orbit (I),(II),(III)
(in the cases (II), (III) one may choose $v$ so that $s=1$).
\\Case 1. There exists
$\varphi \in \text{Aut } \mathfrak{g}$ such that $v= \varphi(X_-).$ Then,
replacing $J$ by $\varphi^{-1} J \varphi,$
we may suppose
$\xi^1_3=\xi^2_3=0.$
That case is impossible from $13|2$ and $13|3.$
\\Case 2. There exists
$\varphi \in \text{Aut } \mathfrak{g}$ such that $v= \varphi(H).$ Then we may suppose
$\xi^2_1=\xi^3_1=0.$ Then from $12|2$, $\xi^2_3\xi^3_2 \neq 0,$ and $23|2$, $23|3$
yield $\xi^1_2=\xi^1_3=0.$
Then $12|3$ and $13|2$ successively give $\xi^3_3=\xi^2_2+2\xi^1_1$ and
$\xi^1_1=0.$ Now $12|2$ and $23|1$ read resp. $-\xi^2_3\xi^3_2 + (\xi^2_2)^2 +1 =0,$
and
$\xi^2_3\xi^3_2 - (\xi^2_2)^2 +1 =0.$
Hence that case is impossible.
\\Case 3. There exists
$\varphi \in \text{Aut } \mathfrak{g}$ such that $v= \varphi(X_+ -X_-).$
Then we may suppose that $v=X_+ -X_-.$
Now instead of the basis $(H,X_+,X_-),$ we consider the basis
$(Y_1,Y_2,Y_3).$
The matrix of $J$ in the basis
$(Y_1,Y_2,Y_3)$
has the form
\begin{equation*}
J_* =
\begin{pmatrix}
\eta^1_1&0&\eta_3^1\\
\eta^2_1&\lambda&\eta_3^2\\
\eta^3_1&0&\eta_3^3
\end{pmatrix}.
\end{equation*}
Then the 9 torsion equations
$*ij|k$ (the star is to underline that the new basis is in use)
for $J$ in that basis are:
\begin{eqnarray*}
*12|1
&\quad
(\eta^3_1+ \eta^1_3 )\lambda - (\eta^3_1-\eta^1_3) \eta^1_1
=0,\\
*12|2
&\quad
(\eta^1_1+\lambda) \eta^2_3 - \eta^2_1 \eta^3_1
=0,\\
*12|3
&\quad
\eta^1_1 \lambda-1 +(\eta^3_1)^2 -(\eta^1_1 + \lambda)\eta^3_3
=0,\\
*13|1
&\quad
\eta^2_3 \eta^1_3 +\eta^2_1 \eta^1_1 + \eta^2_3 \eta^3_1 - \eta^2_1 \eta^3_3
=0,\\
*13|2
&\quad
\eta^1_1 \lambda +1 + (\eta^2_1)^2 + (\eta^2_3)^2+ \eta^3_1 \eta^1_3 - (\eta^1_1
-\lambda)\eta^3_3
=0,\\
*13|3
&\quad
\eta^2_3 \eta^1_1 -\eta^2_1 (\eta^1_3 + \eta^3_1) - \eta^2_3 \eta^3_3
=0,\\
*23|1
&\quad
\eta^1_1 \lambda +1 - (\eta^1_3)^2 + (\eta^1_1 -\lambda)\eta^3_3
=0,\\
*23|2
&\quad
\eta^2_3 \eta^1_3 - (\eta^3_3 +\lambda)\eta^2_1
=0,\\
*23|3
&\quad
(\eta^3_1 +\eta^1_3)\lambda + (\eta^3_1 -\eta^1_3)\eta^3_3
=0.
\end{eqnarray*}
From $*12|1$ and $*23|3$,
\begin{equation}
\label{III}
\eta^1_1(\eta^3_1-\eta^1_3)
=-\eta^3_3(\eta^3_1-\eta^1_3).
\end{equation}

1) Suppose first that $\eta^3_1=\eta^1_3.$
Then $\lambda \eta^3_1=0.$

1.1) Consider the subcase $\eta^3_1 = 0.$
$*13|1$ and $*13|3$ read resp.
$(\eta^3_3-\eta^1_1)\eta^2_1=0,
(\eta^3_3-\eta^1_1)\eta^2_3=0.$
Suppose $\eta^3_3\neq \eta^1_1.$ Then
$\eta^2_1= \eta^2_3 =0,$ and
$*13|2$ gives $\eta^1_1 \lambda +1 = (\eta^1_1 -\lambda ) \eta^3_3,$
which implies $\eta^3_3=0$ by $*23|1.$
As $*12|3$ then reads $1=0,$
this case $\eta^3_3\neq \eta^1_1$ is not possible.
Now, the case
$\eta^3_3 = \eta^1_1$ is not possible either since then
$*23|1$ would read $(\eta^1_1)^2 + 1 =0.$
We conclude that the subcase 1.1) is not possible.

1.2) Hence we are in the subcase $\eta^3_1 \neq 0.$ Then $\lambda=0.$
From $*13|2$, $\eta^3_3\eta^1_1 \neq 0.$
Then $*23|1$ yields $\eta^3_3= \frac{-1+(\eta^3_1)^2}{\eta^1_1}$ and $*13|2$
reads $(\eta^2_1)^2 +(\eta^2_3)^2+2=0.$ This subcase 1.2) is not possible either.
Hence case 1) is not possible.

2) We are necessarily in the case $\eta^3_1 \neq \eta^1_3.$
From (\ref{III}), $\eta^3_3=-\eta^1_1.$
Then $*13|2$ reads
$ (\eta^1_1)^2 + (\eta^2_1)^2+ (\eta^2_3)^2 +1 + \eta^3_1 \eta^1_3 =0$
hence $\eta^3_1 \neq 0$ and
$\eta^1_3= -\frac{
(\eta^1_1)^2 + (\eta^2_1)^2+ (\eta^2_3)^2 +1}{ \eta^3_1}.$
From $*12|2,$
$\eta^2_1=\frac{\eta^2_3(\eta^1_1+\lambda)}{\eta^3_1}.$
Then $*23|2$ reads
$\eta^2_3(((\eta^2_3)^2 +\lambda^2+1)(\eta^3_1)^2+(\eta^1_1+\lambda)^2(\eta^2_3)^2)=0,$
i.e. $\eta^2_3=0,$
which implies
$\eta^2_1=0.$
Now $*12|1$ reads
$\lambda(1+(\eta^1_1)^2 -(\eta^3_1)^2)=-\eta^1_1(1+(\eta^1_1)^2 +(\eta^3_1)^2).$
The subcase $\eta^1_1 \neq 0$ is not possible
since then
$*12|3$ would yield
$\lambda = -\frac{(\eta^1_1)^2+(\eta^3_1)^2-1}{2\eta^1_1}$ and
$*12|1$ would read
$((\eta^1_1)^2+(\eta^3_1+1)^2)
((\eta^1_1)^2+(\eta^3_1-1)^2)=0.$
Hence
$\eta^1_1 = 0.$ Then $*12|3$ reads $(\eta^3_1)^2=1.$
The condition $(\eta^3_1)^2=1$ now implies the vanishing of all the torsion equations.
In that case
\begin{equation*}
J_* =
\begin{pmatrix}
0&0&-\varepsilon\\
0&\lambda&0\\
\varepsilon&0&0
\end{pmatrix}
\quad,\quad \varepsilon = \pm 1.
\end{equation*}
Then
in the basis $(H,X_+,X_-)$
\begin{equation*}
J =
\begin{pmatrix}
0&-\frac{\varepsilon}{2}&-\frac{\varepsilon}{2}\\
\varepsilon&\frac{\lambda}{2}&-\frac{\lambda}{2}\\
\varepsilon&-\frac{\lambda}{2}&\frac{\lambda}{2}
\end{pmatrix}
\end{equation*}
The cases $\varepsilon =\pm 1$ are equivalent under $\Psi_0.$
\end{proof}
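The case analysis above can also be checked independently by computer algebra. The following Python/sympy sketch (an illustration, separate from the computations of this proof) verifies that $J(\alpha)$ of (\ref{Jsl2notorsion}) satisfies the zero torsion condition (\ref{Jnotorsion}) for the commutation relations (\ref{relationssl2}), with $\alpha$ kept symbolic.
\begin{verbatim}
# Sketch of a symbolic check: J(alpha) has zero torsion on sl(2,R),
# working in the basis (H, X_+, X_-) with [H,X+]=2X+, [H,X-]=-2X-,
# [X+,X-]=H.  (Illustrative code, not one of the cited REDUCE scripts.)
import sympy as sp

alpha = sp.symbols('alpha')
# nonzero brackets of basis vectors, written as coordinate columns
C = {(0, 1): sp.Matrix([0, 2, 0]),     # [H, X+] = 2 X+
     (0, 2): sp.Matrix([0, 0, -2]),    # [H, X-] = -2 X-
     (1, 2): sp.Matrix([1, 0, 0])}     # [X+, X-] = H

def bracket(x, y):
    out = sp.zeros(3, 1)
    for (i, j), v in C.items():
        out += (x[i] * y[j] - x[j] * y[i]) * v
    return out

def torsion(J, x, y):
    return (bracket(J * x, J * y) - bracket(x, y)
            - J * bracket(J * x, y) - J * bracket(x, J * y))

J = sp.Matrix([[0, -sp.Rational(1, 2), -sp.Rational(1, 2)],
               [1, alpha, -alpha],
               [1, -alpha, alpha]])
basis = [sp.eye(3)[:, k] for k in range(3)]
assert all(sp.simplify(torsion(J, x, y)) == sp.zeros(3, 1)
           for x in basis for y in basis)
print("J(alpha) has zero torsion for all alpha")
\end{verbatim}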
\begin{remark}
\rm
Recall that a
rank $r$ ($r\geqslant 1$)
$CR$-structure on a real Lie algebra
$\mathfrak{g}$ can be defined (\cite{gigante})
as $(\mathfrak{p},J_\mathfrak{p})$ where
$\mathfrak{p}$ is some
$2r$-dimensional vector subspace of
$\mathfrak{g}$ and
$J_\mathfrak{p}\, : \, \mathfrak{p} \rightarrow \mathfrak{p} $ is a linear map such that
(a): $J_\mathfrak{p}^2=-1,$
(b): $[X,Y] -[ J_\mathfrak{p}X, J_\mathfrak{p}Y] \in \mathfrak{p} \quad \forall X,Y \in
\mathfrak{p},$
(c):
(\ref{Jnotorsion}) holds for
$J_\mathfrak{p}$ for all $X,Y \in \mathfrak{p}.$
Then clearly
$J_*(\lambda)$
is an extension of a $CR$-structure.
\end{remark}
\section{Case of
$\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R}).$}
We consider the basis $(Y_1^{(1)}, Y_2^{(1)}, Y_3^{(1)}, Y_1^{(2)}, Y_2^{(2)}, Y_3^{(2)})$
of $\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R}),$
with the upper index referring to the first or second factor.
The automorphisms
of $\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R})$
fall into 2 kinds: the first kind is comprised of
the $\text{diag}(\Phi_1,\Phi_2)$,
$\Phi_1,\Phi_2 \in \text{Aut} \,\mathfrak{sl}(2,\mathbb{R}),$ and the second kind
is comprised of
the $\Gamma \circ \text{diag}(\Phi_1,\Phi_2)$, with $\Gamma$ the switch between the two factors
of $\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R}).$
\begin{lemma}
Any integrable complex structure $J$ on
$\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R})$
is equivalent under some first kind automorphism
to the endomorphism given in the basis
$(Y_1^{(1)}, Y_2^{(1)}, Y_3^{(1)}, Y_1^{(2)}, Y_2^{(2)}, Y_3^{(2)})$
by the matrix
\begin{equation}
\label{sl2xsl2bis}
\tilde{J}_*(\xi^2_2,\xi^2_5)=
\begin{pmatrix}
0&0&-1& 0&0&0\\
0&\xi^2_2&0& 0&\xi^2_5&0\\
1&0&0&0&0&0 \\
0&0&0&0&0&-1 \\
0&-\frac{(\xi^2_2)^2+1}{\xi^2_5}&0&0&-\xi^2_2&0 \\
0&0&0&1&0&0
\end{pmatrix}
, \;
\xi^2_2, \xi^2_5 \in \mathbb{R}
\, , \,
\xi^2_5 \neq 0 .
\end{equation}
$\tilde{J}_*(\xi^2_2,\xi^2_5)$
is equivalent to
$\tilde{J}_*({\xi^{\prime}}^2_2,{\xi^{\prime}}^2_5)$
under some first (resp. second) kind automorphism if and only if
$ {\xi^{\prime}}^2_2= \xi^2_2,$
$ {\xi^{\prime}}^2_5= \xi^2_5$
(resp.
$ {\xi^{\prime}}^2_2= - \xi^2_2,$
$ {\xi^{\prime}}^2_5=
-\frac{(\xi^2_2)^2+1}{\xi^2_5}$).
\end{lemma}
\begin{proof}
Let $J = (\xi^i_j)_{1 \leqslant i,j \leqslant 6}=
\begin{pmatrix} J_1&J_2\\J_3&J_4 \end{pmatrix},$ ($J_1,J_2,J_3,J_4$ $3\times 3$ blocks),
an integrable complex structure
in the basis $(Y^{(k)}_\ell).$
From lemma \ref{sl2notorsion}, with some first kind automorphism, one may suppose
$J_1=\begin{pmatrix} 0&0&-1\\ 0&\xi^2_2&0\\ 1&0&0 \end{pmatrix},$
$J_4=\begin{pmatrix} 0&0&-1\\0&\xi^5_5&0\\ 1&0&0 \end{pmatrix}.$
As $Tr(J)=0,$ $\xi^5_5=-\xi^2_2.$
Then one is led to
(\ref{sl2xsl2bis}) and the result follows (see \cite{companionarchive}, CSsl22.red and its output).
\end{proof}
\begin{remark}
\rm
The complex subalgebra $\mathfrak{m}$ associated to
$\tilde{J}_*(\xi^2_2,\xi^2_5)$
has basis $\tilde{Y}^{(1)}_1 ={Y}^{(1)}_1 - i {Y}^{(1)}_3,$
$\tilde{Y}^{(2)}_1 ={Y}^{(2)}_1 - i {Y}^{(2)}_3,$
$\tilde{Y}^{(2)}_2 =-i\xi^2_5 {Y}^{(1)}_2 +(1+ i\xi^2_2) {Y}^{(2)}_2.$
The complexification
$\mathfrak{sl}(2) \times \mathfrak{sl}(2)$
of $\mathfrak{sl}(2,\mathbb{R}) \times \mathfrak{sl}(2,\mathbb{R})$
has the weight space decomposition with respect to the Cartan subalgebra
$\mathfrak{h}=\mathbb{C} {Y}^{(1)}_2 \oplus \mathbb{C} {Y}^{(2)}_2 :$
$$\mathfrak{h}\oplus \mathbb{C} ({Y}^{(1)}_1 + i {Y}^{(1)}_3)
\oplus \mathbb{C} ({Y}^{(2)}_1 + i {Y}^{(2)}_3)
\oplus \mathbb{C} \tilde{Y}^{(1)}_1
\oplus \mathbb{C} \tilde{Y}^{(2)}_1.$$
Then
$\mathfrak{m}= (\mathfrak{h} \cap \mathfrak{m}) \oplus \mathbb{C} \tilde{Y}^{(1)}_1
\oplus \mathbb{C} \tilde{Y}^{(2)}_1$ with
$\mathfrak{h} \cap \mathfrak{m} =
\mathbb{C} \tilde{Y}^{(2)}_2,$ which is a special case of the general fact proved in \cite{snow}
that any complex (integrable) structure on a reductive Lie group of class I is regular.
\end{remark}
\section{Case of $\mathfrak{n}$.}
Let $\mathfrak{n}$ be the real 3-dimensional Heisenberg Lie algebra with
basis $(x_1,x_2,x_3)$ and
commutation relations $[x_1,x_2]=x_3.$
\begin{lemma}
\label{lemma2}
Let $J \, : \,
\mathfrak{n}
\rightarrow
\mathfrak{n}
$ be any linear map.
$J$ has zero torsion
if and only if it is equivalent to
one of the endomorphisms defined in the basis $(x_1,x_2,x_3)$ by:
\begin{eqnarray}
\label{S}
(i) & &
S(\xi^3_3)=
\begin{pmatrix}
0&-1&0\\
1&0&0\\
0&0&\xi^3_3
\end{pmatrix}
\quad, \quad \xi^3_3 \in \mathbb{R}
\\
\label{D}
(ii) & &
D(\xi^1_1)=
\begin{pmatrix}
\xi^1_1&0&0\\
0&\xi^1_1&0\\
0&0&\frac{(\xi^1_1)^2-1}{2\xi^1_1}
\end{pmatrix} \quad , \quad \xi^1_1 \in \mathbb{R}, \; \xi^1_1 \neq 0
\\
\label{T}
(iii) & &
T(a,b)=
\begin{pmatrix}
0&-ab&0\\
1&b&0\\
0&0&\frac{ab-1}{b}
\end{pmatrix} \quad , \quad a, b \in \mathbb{R}, \; b \neq 0
\end{eqnarray}
No two distinct endomorphisms in the preceding list are equivalent.
$T(a,b)$ is equivalent to
\begin{equation}
T^{\prime}(a,b)
=\begin{pmatrix}
b&-b&0\\
a&0&0\\
0&0&\frac{ab-1}{b}
\end{pmatrix}
\end{equation}
\end{lemma}
\begin{proof}
Let $J = (\xi^i_j)_{1 \leqslant i,j \leqslant 3}$
in the basis $(x_1,x_2,x_3).$
The 9 torsion equations are:
\begin{eqnarray*}
12|1
&\quad
\xi^1_3 (\xi^2_2 + \xi^1_1) =0,\\
12|2
&\quad
\xi^2_3 (\xi^2_2 + \xi^1_1) =0,\\
12|3
&\quad
\xi^3_3 (\xi^2_2 + \xi^1_1)- \xi^2_2 \xi^1_1 + \xi^2_1 \xi^1_2 +1 =0,\\
13|1
&\quad
\xi^2_3 \xi^1_3=0,\\
13|2
&\quad
(\xi^2_3)^2=0,\\
13|3
&\quad
\xi^2_3 ( \xi^3_3 - \xi^1_1) + \xi^1_2 \xi^1_3 =0,\\
23|1
&\quad
(\xi^1_3)^2=0,\\
23|2
&\quad
\xi^2_3 \xi^1_3=0,\\
23|3
&\quad
\xi^1_3 ( \xi^2_2 - \xi^3_3) - \xi^2_3 \xi^1_2 =0.
\end{eqnarray*}
Hence $\xi^1_3=\xi^2_3=0$, and we are left only with equation
$12|3$ which reads
\begin{equation}
\xi^3_3\; Tr{(A)} =\det{(A)} -1
\end{equation}
where $A = \left(\begin{smallmatrix} \xi^1_1&\xi^1_2\\\xi^2_1&\xi^2_2 \end{smallmatrix}\right) .$
Suppose first $Tr{(A)}=0.$ Then $A^2=-I,$ so that $A$ is similar over $\mathbb{C}$, hence over $\mathbb{R}$, to
$ \left(\begin{smallmatrix} 0&-1 \\ 1&0 \end{smallmatrix}\right).$
Hence $J \equiv \left(
\begin{smallmatrix}
0&-1&0\\
1&0&0\\
*&*&\xi^3_3
\end{smallmatrix} \right)$. Now, since $\xi^3_3$ does not belong to the spectrum of
$ \left(\begin{smallmatrix} 0&-1 \\ 1&0 \end{smallmatrix}\right),$
taking the automorphism
$ \left(\begin{smallmatrix} 1&0&0\\ 0&1&0\\ \alpha&\beta&1 \end{smallmatrix}\right) $
of $\mathfrak{n}$
for suitable $\alpha, \beta \in \mathbb{R},$ one gets
$J \equiv S( \xi^3_3 ).$
Suppose now $Tr{(A)}\neq 0.$ Then $\xi^3_3= \frac{\det{(A)}-1}{Tr{(A)}}.$ If $A$ is a scalar matrix,
i.e.
$A = \xi^1_1 I,$ then
$J = \left(
\begin{smallmatrix}
\xi^1_1&0&0\\
0&\xi^1_1&0\\
*&*&\frac{(\xi^1_1)^2-1}{2\xi^1_1}
\end{smallmatrix} \right)
\equiv D(\xi^1_1).$
If $A$ is not a scalar matrix, then $A$ is similar to
$ \left(\begin{smallmatrix} 0&-ab \\ 1&b \end{smallmatrix}\right)$
for some $a,b \in \mathbb{R},$ and $b \neq 0$ from the trace.
Then
$J \equiv T(a,b).$
Finally, $T'(a,b) \equiv T(a,b)$
since the matrices
$\left(\begin{smallmatrix} 0&-ab\\1&b \end{smallmatrix}\right)$
and
$\left(\begin{smallmatrix} b&-b\\a&0 \end{smallmatrix}\right)$
are similar, for they have the same spectrum and neither is a scalar matrix.
\end{proof}
\begin{remark}
\rm
$S(\xi^3_3)$ is an extension of a rank 1 $CR$-structure, however
\linebreak[4]
$D(\xi^1_1),T(a,b)$ are not.
\end{remark}
\section{$CR$-structures on $\mathfrak{n}$.}
\begin{lemma}
(i)
Any linear map $J \, : \, \mathfrak{n} \rightarrow \mathfrak{n}$ which has zero torsion and is an extension of a rank 1 $CR$-structure
on $\mathfrak{n}$ such that $\mathfrak{p}$ is nonabelian is equivalent to a unique
\begin{equation}
\begin{pmatrix}
0&-1&0\\
1&0&0\\
0&0&\xi^3_3
\end{pmatrix}
\; ,\; \xi^3_3 \in \mathbb{R}.
\end{equation}
\\
(ii)
Any linear map $J \, : \, \mathfrak{n} \rightarrow \mathfrak{n}$ which is an extension of a rank 1 $CR$-structure
on $\mathfrak{n}$ such that $\mathfrak{p}$ is abelian is equivalent to a unique
\begin{equation}
\begin{pmatrix}
\xi^1_1&0&0\\
0&0&1\\
0&-1&0
\end{pmatrix}
\; ,\; \xi^1_1 \in \mathbb{R}.
\end{equation}
$J$ has nonzero torsion.
\end{lemma}
\begin{proof}
For any nonzero $X \in \mathfrak{p},$ $(X,J_\mathfrak{p}X)$ is a basis of $\mathfrak{p}$.
In case (i),
$[X,J_\mathfrak{p}X]\neq 0,$ since $\mathfrak{p}$ is non abelian. Then
$[X,J_\mathfrak{p}X]= \mu x_3$ , $\mu \neq 0$, and $x_3 \not \in \mathfrak{p}$
since otherwise
$\mathfrak{p}$ would be abelian. One may extend $J_\mathfrak{p}$ to
$\mathfrak{n}$ in the basis
$(X,J_\mathfrak{p}X, \mu x_3)$
as
\begin{equation}
J=\begin{pmatrix}
0&-1&\xi^1_3\\
1&0&\xi^2_3\\
0&0&\xi^3_3
\end{pmatrix}
\end{equation}
and $J$ has zero torsion only if $\xi^1_3=\xi^2_3=0.$
In case (ii), necessarily $x_3 \in \mathfrak{p}$ since $\mathfrak{p}$ is abelian.
Hence $(x_3, J_\mathfrak{p}x_3)$ is a basis for $\mathfrak{p}.$
Take any linear extension $J$ of $J_\mathfrak{p}$ to $\mathfrak{n}$.
There exists some eigenvector
$y_1\neq 0$ of $J$ associated to some eigenvalue $\xi^1_1 \in \mathbb{R}.$
Then $y_1 \not \in \mathfrak{p},$
which implies $[y_1,Jx_3] \neq 0,$ for otherwise
$y_1$ would commute with the whole of $\mathfrak{n}$ and then be some multiple of $x_3 \in \mathfrak{p}.$
Hence
$[y_1,Jx_3]= \lambda x_3 $, $\lambda \neq 0,$ and dividing $y_1$ by $\lambda$ one may suppose
$\lambda=1.$ In the basis $y_1, y_2= Jx_3, y_3=x_3$
one has
\begin{equation}
J=\begin{pmatrix}
\xi^1_1&0&0\\
0&0&1\\
0&-1&0
\end{pmatrix}
\end{equation}
and (ii) follows.
\end{proof}
\section{Complex structures on $\mathfrak{n}\times \mathfrak{n}$.}
We will use for commutation relations
$[x_1,x_2]=x_5,
[x_3,x_4]=x_6.$
The automorphisms of
$\mathfrak{n}\times \mathfrak{n}$ fall into 2 kinds. The first kind is comprised
of the matrices
\begin{multline}
\Phi = \left(\begin{array}{cccc|cc}
b^1_1&b^1_2&0&0&0&0\\
b^2_1&b^2_2&0&0&0&0\\
0&0&b^3_3&b^3_4&0&0\\
0&0&b^4_3&b^4_4&0&0\\
\hline
b^5_1&b^5_2&b^5_3&b^5_4 & b^1_1 b^2_2 -b^1_2 b^2_1& 0\\
b^6_1&b^6_2&b^6_3&b^6_4 & 0 &b^3_3 b^4_4 -b^3_4 b^4_3
\end{array}
\right)
\quad , \quad \\
(b^1_1 b^2_2 -b^1_2 b^2_1)
(b^3_3 b^4_4 -b^3_4 b^4_3) \neq 0.
\end{multline}
The second kind ones are $\Psi = \Theta \Phi $ where $\Phi$ is
first kind and
\begin{equation}
\Theta = \left(\begin{array}{cccc|cc}
0&0&1&0&0&0\\
0&0&0&1&0&0\\
1&0&0&0&0&0\\
0&1&0&0&0&0\\
\hline
0&0&0&0&0&1\\
0&0&0&0&1&0
\end{array}
\right).
\end{equation}
\begin{lemma}
Any integrable complex structure $J$ on
$\mathfrak{n}\times \mathfrak{n}$
is equivalent under some first kind automorphism to one of the following:
\begin{equation}
\label{case1}
(i) \quad
\tilde{S}_\varepsilon(\xi^5_5)=
\begin{pmatrix}
0&-1&0& 0&0&0\\
1&0&0& 0&0&0\\
0&0&0&-1&0&0 \\
0&0&1&0&0&0 \\
0&0&0&0&\xi^5_5&-\varepsilon((\xi^5_5)^2+1) \\
0&0&0&0&\varepsilon&-\xi^5_5
\end{pmatrix}
, \;
\varepsilon =\pm 1
\, , \,
\xi^5_5 \in \mathbb{R}
.
\end{equation}
$\tilde{S}_{\varepsilon^{\prime}} ({\xi^{\prime}}^5_5)$
is equivalent to
$\tilde{S}_\varepsilon(\xi^5_5)$
($\varepsilon,\varepsilon^{\prime}= \pm1; {\xi^{\prime}}^5_5, \xi^5_5 \in \mathbb{R}$)
under some first (resp. second) kind automorphism if and only if
$\varepsilon^{\prime}= \varepsilon , \, {\xi^{\prime}}^5_5= \xi^5_5$
(resp. $\varepsilon^{\prime}= -\varepsilon ,\, {\xi^{\prime}}^5_5= - \xi^5_5$ ).
\begin{multline}
(ii) \quad
\label{case2}
\tilde{D}(\xi^1_1)=
\begin{pmatrix}
\xi^1_1&0&-((\xi^1_1)^2+1)& 0&0&0\\
0&\xi^1_1&0&-((\xi^1_1)^2+1)& 0&0\\
1&0&-\xi^1_1&0&0&0 \\
0&1&0&-\xi^1_1&0&0 \\
0&0&0&0&\frac{(\xi^1_1)^2-1}{2\xi^1_1}&-\frac{((\xi^1_1)^2+1)^2}{2\xi^1_1} \\
0&0&0&0&\frac{1}{2\xi^1_1}&\frac{1-(\xi^1_1)^2}{2\xi^1_1}
\end{pmatrix}
,\\
\; \xi^1_1 \in \mathbb{R}\setminus \{0\}.
\end{multline}
$\tilde{D}({\xi^{\prime}}^1_1)$
is equivalent to
$\tilde{D}(\xi^1_1)$
($ {\xi^{\prime}}^1_1, \xi^1_1 \in \mathbb{R}$)
under some first (resp. second) kind automorphism if and only if
$ {\xi^{\prime}}^1_1= \xi^1_1$
(resp. $ {\xi^{\prime}}^1_1= - \xi^1_1$ ).
\begin{multline}
(iii) \quad
\label{case3}
\tilde{T}(\xi^3_3,\xi^4_3)= \\
\begin{pmatrix}
0&-\xi^4_3\xi^3_3&-\xi^4_3\xi^3_3&\xi^4_3\xi^3_3-1&0&0\\
1&-\xi^3_3&-\frac{(\xi^3_3)^2+1-\xi^4_3 \xi^3_3}{\xi^3_3}&\xi^3_3& 0&0\\
0&\xi^3_3&\xi^3_3&-\xi^3_3& 0&0\\
1&0&\xi^4_3&0&0&0 \\
0&0&0&0&-\frac{\xi^4_3\xi^3_3-1}{\xi^3_3}&-\frac{(\xi^4_3\xi^3_3-2)\xi^4_3\xi^3_3+(\xi^3_3)^2+1}{(\xi^3_3)^2} \\
0&0&0&0&1&\frac{\xi^4_3\xi^3_3-1}{\xi^3_3}
\end{pmatrix},
\\
\xi^3_3 \in \mathbb{R}\setminus \{0\} , \;
\xi^4_3 \in \mathbb{R}.
\end{multline}
$\tilde{T}({\xi^{\prime}}^3_3,{\xi^{\prime}}^4_3)$
is equivalent to
$\tilde{T}(\xi^3_3,\xi^4_3)$
($
{\xi^{\prime}}^3_3,
\xi^3_3 \in \mathbb{R}\setminus \{0\} , \;
{\xi^{\prime}}^4_3,
\xi^4_3 \in \mathbb{R}.$)
under some first (resp. second) kind automorphism if and only if
$ {\xi^{\prime}}^3_3= \xi^3_3, \, {\xi^{\prime}}^4_3= \xi^4_3$
(resp.
$ {\xi^{\prime}}^3_3= - \xi^3_3, \, {\xi^{\prime}}^4_3= - \xi^4_3$).
\par
Finally, the cases (i),(ii), (iii) are mutually non equivalent, either under first
or second kind automorphism.
\end{lemma}
\begin{proof}
Let $J = (\xi^i_j)_{1 \leqslant i,j \leqslant 6}$ an integrable complex structure
in the basis $(x_k)_{1\leqslant k \leqslant 6}.$
Denote
$J_1=\left(\begin{smallmatrix}
\xi^1_1&\xi^1_2\\
\xi^2_1&\xi^2_2
\end{smallmatrix} \right)
$,
$J_2=\left(\begin{smallmatrix}
\xi^1_3&\xi^1_4\\
\xi^2_3&\xi^2_4
\end{smallmatrix} \right)
$,
$J_3=\left(\begin{smallmatrix}
\xi^3_1&\xi^3_2\\
\xi^4_1&\xi^4_2
\end{smallmatrix} \right)
$,
$J_4=\left(\begin{smallmatrix}
\xi^3_3&\xi^3_4\\
\xi^4_3&\xi^4_4
\end{smallmatrix} \right)
$.
Then
$J_1^*=\left(\begin{smallmatrix}
\xi^1_1&\xi^1_2&\xi^1_5\\
\xi^2_1&\xi^2_2&\xi^2_5\\
\xi^5_1&\xi^5_2&\xi^5_5
\end{smallmatrix} \right)
$
and
$J_3^*=\left(\begin{smallmatrix}
\xi^3_1&\xi^3_4&\xi^3_6\\
\xi^4_3&\xi^4_4&\xi^4_6\\
\xi^6_3&\xi^6_4&\xi^6_6
\end{smallmatrix} \right) $
are zero torsion linear maps from $\mathfrak{n}$ to
$\mathfrak{n},$ hence equivalent to type
(\ref{S}), (\ref{D}) or (\ref{T}) in lemma \ref{lemma2}.
It can be checked that their being of different types would contradict
$J^2=-1.$
Hence, modulo equivalence
under some first kind automorphism,
we get 3 cases:
case 1:
$J_1^*=
\left(\begin{smallmatrix}
0&-1&0\\
1&0&0\\
0&0&\xi^5_5
\end{smallmatrix} \right)
$,
$J_3^*=
\left(\begin{smallmatrix}
0&-1&0\\
1&0&0\\
0&0&\xi^6_6
\end{smallmatrix} \right) ;
$
case 2: $J_1^*=D(\xi^1_1), J_3^* =D(\xi^3_3)$,
($\xi^1_1, \,\xi^3_3 \neq 0$);
case 3:
$J_1^*=\left(\begin{smallmatrix}
0&\xi^1_2&0\\
1&\xi^2_2&0\\
0&0&\xi^5_5
\end{smallmatrix} \right)
$,
$J_3^*=\left(\begin{smallmatrix}
\xi^3_3&-\xi^3_3&0\\
\xi^4_3&0&0\\
0&0&\xi^6_6
\end{smallmatrix} \right)
$,
($\xi^2_2, \,\xi^3_3 \neq 0$).
Case 1 (resp. 2, 3) leads to
(\ref{case1}) (resp.
(\ref{case2}),
(\ref{case3}))
(see \cite{companionarchive}, programs
"n2case1.red", "n2case2.red", "n2case3.red", and their outputs.)
The assertions about equivalence in cases 1 and 2 are readily proved, as is equivalence under
some first kind automorphism in case 3 and the nonequivalence of the 3 types.
Consider now $\Theta \tilde{T}(\xi^3_3,\xi^4_3) \Theta^{-1}.$
It is equivalent under some first kind automorphism to some
$\tilde{T}(\eta^3_3,\eta^4_3).$ That implies that the matrices
$\left(\begin{smallmatrix}
\xi^3_3&-\xi^3_3\\
\xi^4_3&0
\end{smallmatrix} \right) $,
$\left(\begin{smallmatrix}
0&-\eta^4_3\eta^3_3\\
1&-\eta^3_3
\end{smallmatrix} \right) $
are similar, which amounts to their having same trace and same determinant, i.e.
$\eta ^3_3= -\xi^3_3, \eta^4_3=-\xi^4_3.$
As
$\tilde{T}({\xi^{\prime}}^3_3, {\xi^{\prime}}^4_3)$ is equivalent
to
$ \tilde{T}(\xi^3_3,\xi^4_3)$
under some second kind automorphism if and only if
it is equivalent to
$\Theta \tilde{T}(\xi^3_3,\xi^4_3) \Theta^{-1}$ under some first kind automorphism,
the assertion about second kind equivalence in case 3 follows.
\end{proof}
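As an independent sanity check, one can verify symbolically that $\tilde{S}_\varepsilon(\xi^5_5)$ of (\ref{case1}) squares to $-1$ and has zero torsion for the relations $[x_1,x_2]=x_5$, $[x_3,x_4]=x_6$. The following Python/sympy sketch (illustrative, separate from the cited REDUCE programs) does this for $\varepsilon=\pm1$ with $\xi^5_5$ symbolic.
\begin{verbatim}
# Sketch: S~_eps(xi) of (case1) is an integrable complex structure
# on n x n, for the brackets [x1,x2] = x5 and [x3,x4] = x6.
import sympy as sp

xi = sp.symbols('xi')
C = {(0, 1): 4, (2, 3): 5}          # [x1,x2] = x5, [x3,x4] = x6

def bracket(x, y):
    out = sp.zeros(6, 1)
    for (i, j), k in C.items():
        out[k] += x[i] * y[j] - x[j] * y[i]
    return out

def torsion(J, x, y):
    return (bracket(J * x, J * y) - bracket(x, y)
            - J * bracket(J * x, y) - J * bracket(x, J * y))

basis = [sp.eye(6)[:, k] for k in range(6)]
for eps in (1, -1):
    J = sp.zeros(6, 6)
    J[0, 1], J[1, 0], J[2, 3], J[3, 2] = -1, 1, -1, 1
    J[4, 4], J[4, 5] = xi, -eps * (xi ** 2 + 1)
    J[5, 4], J[5, 5] = eps, -xi
    assert sp.simplify(J * J + sp.eye(6)) == sp.zeros(6, 6)
    assert all(sp.simplify(torsion(J, x, y)) == sp.zeros(6, 1)
               for x in basis for y in basis)
print("S~_eps(xi) is integrable for eps = 1 and eps = -1")
\end{verbatim}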
\begin{remark}
\rm
In case 3, had we used
$J_3^*=\left(\begin{smallmatrix}
0&\xi^3_4&0\\
1&\xi^4_4&0\\
0&0&\xi^6_6
\end{smallmatrix} \right)
$,
then we would have to separate further into 2 subcases:
subcase $\xi^1_2 \neq 0$:
\begin{multline*}
\tilde{T}(\xi^1_2,\xi^2_2)=
\begin{pmatrix}
0&\xi^1_2&-\frac{\xi^2_2}{\xi^1_2}&-(\xi^1_2+1)&0&0\\
1&\xi^2_2& \frac{\xi^1_2+1}{\xi^1_2}&-\xi^2_2& 0&0\\
0&-\xi^1_2&0&\xi^1_2& 0&0\\
1&\xi^2_2&1&-\xi^2_2& 0&0\\
0&0&0&0&-\frac{\xi^1_2+1}{\xi^2_2}&-\frac{(\xi^2_2)^2+(\xi^1_2+1)^2}{\xi^2_2\xi^1_2}\\
0&0&0&0&\frac{\xi^1_2}{\xi^2_2}&\frac{\xi^1_2+1}{\xi^2_2}
\end{pmatrix}, \quad
\xi^1_2 \xi^2_2 \neq 0 \, ;
\end{multline*}
subcase $\xi^1_2 = 0$:
\begin{multline*}
\tilde{T}(\xi^2_2)=
\begin{pmatrix}
0&0&-1&0&0&0\\
1&\xi^2_2&0&1& 0&0\\
1&0&0&0& 0&0\\
-\xi^2_2&-((\xi^2_2)^2+1)&1&-\xi^2_2& 0&0\\
0&0&0&0&-\frac{1}{\xi^2_2}&\frac{1}{\xi^2_2}\\
0&0&0&0&-\frac{(\xi^2_2)^2+1}{\xi^2_2}&\frac{1}{\xi^2_2}
\end{pmatrix}, \quad
\xi^2_2 \neq 0 \, .
\end{multline*}
\end{remark}
\begin{remark}
\rm
$\tilde{S}_\varepsilon(\xi^5_5)$
is abelian.
\end{remark}
\begin{remark}
\rm
If one looks for zero torsion linear maps instead of complex structures, then
$J_1^*$ and $J_3^*$ may be of different types.
\end{remark}
\section{Introduction}
A minimal surface $M$ is called {\em doubly periodic} if it is invariant under two linearly independent orientation-preserving translations in euclidean space, which we can assume to be horizontal. The first such example was discovered by Scherk \cite{sche1}.
We denote the 2-dimensional lattice generated by the maximal group of such translations by $\Lambda$.
If the quotient $M/\Lambda$ is complete, properly embedded, and of finite topology, Meeks and Rosenberg \cite{mr3} have shown that the quotient has a finite number of annular top and bottom
ends which are asymptotic to flat annuli.
There are two cases to consider: either the top and bottom ends are parallel, or not. By results of Hauswirth and Traizet \cite{hatr1}, a {\em non-degenerate} such surface is a smooth point of a moduli space of dimension 1 in the non-parallel case and 3 in the parallel case.
Moreover, Meeks and Rosenberg \cite{mr3} have shown that in the parallel case, the numbers of top and bottom ends are equal and even.
Lazard-Holly and Meeks \cite{lm1} have shown that the doubly
periodic Scherk surfaces are the only embedded doubly periodic surfaces of genus 0.
In particular, the case of parallel ends doesn't occur for this genus.
For genus 1, there is an example of Karcher with orthogonal ends as well as a 3-dimensional family of such surfaces with parallel ends by Karcher \cite{ka4} and Meeks-Rosenberg \cite{mr4}. Moreover, P\'{e}rez, Rodriguez and Traizet \cite{prt1} have shown that any doubly periodic minimal surface of genus one with parallel ends belongs to this family.
\begin{figure}[H]
\centerline{
\includegraphics[width=2.2in]{Scherk}
\hspace{.5in}
\includegraphics[width=2.2in]{kmr}
}
\caption{Scherk's surface and a Karcher-Meeks-Rosenberg surface
}
\label{figure:scherk}
\end{figure}
Douglas \cite{dou1} and independently Baginsky and Batista \cite{brb1} have shown that the Karcher example can be deformed to a 1-parameter family by changing the angle between the ends. The family limits in the translation-invariant helicoid with handles \cite{howeka4, whw1}.
For higher genus, only a few examples and families have been known so far:
In the non-parallel case, Weber and Wolf \cite{ww3} have constructed examples of arbitrary genus, generalizing Karcher's example of genus 1.
Wei found a 1-parameter family of examples of genus 2 with parallel ends \cite{wei2}. This family has been generalized considerably by Rossman, Thayer and Wohlgemuth \cite{rtw1} to allow for more ends. Rossman, Thayer, and Wohlgemuth also constructed an example of genus 3.
\begin{figure}[H]
\centerline{
\includegraphics[width=2in]{Wei12}
\hspace{.5in}
\includegraphics[width=2in]{RTW}
}
\caption{Genus two Wei surface and genus two RTW surface
}
\label{figure:wei}
\end{figure}
Our goal is to prove
\begin{theorem}
\label{thm:main}
For any genus $g\ge1$ and any even number $N\ge 2$, there are
3-dimensional families of complete, embedded, doubly periodic minimal surfaces in euclidean space
of genus $g$ with $N$ top and $N$ bottom ends in the quotient.
\end{theorem}
Thus all topological types permitted by the results of Meeks and Rosenberg actually occur.
Figure \ref{figure:ex4} shows two translational copies in each direction of an example of genus 7.
\begin{figure}[H]
\centerline{ \includegraphics[width=2in]{Wei17}\hspace{1cm}\includegraphics[width=1.8in]{Wei17alt}}
\caption{Two views of a genus 7 surface}
\label{figure:ex4}
\end{figure}
The methods used in this paper are an adaptation of Traizet's techniques developed in \cite{tr2}. There, Traizet constructs singly periodic minimal surfaces akin to Riemann's examples which limit in a foliation of euclidean space by horizontal planes. Near the limit, the surfaces look like a collection of parallel planes joined by catenoidal necks. In the limit, these necks develop into nodes so that the quotient surface becomes a noded Riemann surface. The components of the smooth part are punctured spheres, where the punctures have to satisfy Traizet's balance equations. Conversely, given a finite collection of punctured spheres where the punctures satisfy the balance equations and are non-degenerate in a suitable sense, Traizet constructs a moduli space of Riemann surfaces which forms an open neighborhood of the noded surface. On these Riemann surfaces, he constructs Weierstrass data and solves the period problem using the implicit function theorem.
We will closely follow Traizet's paper, indicating all differences.
The paper is organized as follows: In section 2, we state the results. In section 3, we give examples. The main theorem is proven in sections 4 through 8. We prove the embeddedness of our surfaces and show they satisfy certain properties in section 8.
\section{Results}
In this section, we will state precise formulations of our main theorems and introduce the relevant notation.
\subsection{Description of the surfaces and its properties}
Our goal is to construct three-dimensional families of embedded doubly periodic minimal surfaces $M$ of arbitrary genus and with an even number $N$ pairs of annular ends in the quotient. The surfaces will depend on a small real parameter $t$ (produced by the implicit function theorem) and a complex parameter $T$ explained below.
In contrast to the introduction, we will choose the ends to be horizontal: This allows us to follow the notation and set-up of \cite{tr2} more closely.
Denote the maximal group of orientation preserving translations of $M$ by $\Gamma$. This group will contain a cyclic subgroup of horizontal translations. Denote one of its generators by ${\cal T}$.
By rotating and scaling the surface,
we can assume that ${\cal T}=(0,2\pi,0)$. We will identify the horizontal $(x_1,x_2)$-plane with the complex plane ${\hbox{\bb C}}$ using $z=x_1+i x_2$. Note that the horizontal planar ends become flat annular ends in the quotient. Label a non-horizontal generator of $\Lambda$ by ${\cal T}_t$. For $t\to 0$, ${\cal T}_t$ will converge to a horizontal vector $\bar T$, where $T$ is an arbitrary complex parameter. The conjugation is due to orientation issues that will become clear later on.
Also, order the ends by height and label them $0_k$ and $\infty_k$, with $k\in\mathbb{Z}$. Most of our work takes place on the quotient surfaces.
There, the ends will be labeled $0_k$ and $\infty_k$ as well, with $k=1,\ldots,N$ for some even integer $N$.
Our surfaces will have two additional properties.
\begin{property}
The quotient surface $\tilde{M}_t=M_t/\Lambda$ is a union of the following types of domains: for each pair of ends $E_k=\{0_k,\infty_k\}$, $k=1,\ldots,N$, there is an unbounded domain $E_{k,t}\subset \tilde{M}_t$ containing the ends $0_k$ and $\infty_k$ that is a graph over a domain in ${\hbox{\bb C}}^*={\hbox{\bb C}}\setminus\{0\}$ with $n_k+n_{k-1}$ topological disks removed.
$\tilde{M}_t-( E_{k,t}\cup E_{k+1,t})$ consists of $n_k$ bounded annular components $C_{k,i,t}$ on which the Gauss map is one-to-one, called \emph{catenoidal necks}.
\label{property1}
\end{property}
\begin{figure}[H]
\centerline{
\includegraphics[width=2.5in]{domains.pdf}
}
\caption{Annular regions and catenoid-shaped necks.
}
\label{figure:domains}
\end{figure}
\begin{property}
There is a non-horizontal period ${\cal T}_t$ such that as $t\rightarrow 0$:
\begin{enumerate}
\item
The nonhorizontal period ${\cal T}_t$ converges to a (possibly $0$) horizontal vector $\bar T$.
\item
The surfaces limit in a foliation of $\hbox{\bb R}^3$ by parallel planes.
\item
The necksize of each annular component $C_{k,i,t}$ shrinks to $0$, and the center of the neck $C_{k,i,t}$ converges to a point $p_{k,i}$.
\item
The underlying Riemann surfaces limit in a noded Riemann surface consisting of $N$ copies of ${\hbox{\bb C}}^*={\hbox{\bb C}}\setminus \{0\}$, with nodes at the points $p_{k,i}$.
\end{enumerate}
\label{property2}
\end{property}
Note that when we draw a model of $\tilde{M}_t$, the $E_{k,t}$ components should have the shape of an infinite annulus. As this is impossible to draw, we model the $E_{k,t}$ components with infinite flat cylinders.
After rotating the KMR and Wei surfaces so that the ends are horizontal, the behavior of both families near one of their limits fits the description given above.
\subsection{Forces and Balance Equations}
The location of the nodes introduced above is not arbitrary but governed by a system of algebraic equations.
Consider $N$ copies of ${\hbox{\bb C}}^*$, labeled ${\hbox{\bb C}}_k^*$ for $k=1,\ldots,N$. On each ${\hbox{\bb C}}_k^*$, place $n_k$ points $p_{k,1},\ldots,p_{k,n_k}$. Extend this definition of $p_{k,i}$ for any integer $k$ by making it periodic with respect to a horizontal vector $T$ in the sense that $p_{k+N,i}=p_{k,i}e^T$ for $k=1,\ldots,N$ and $i=1,\ldots,n_k$, with $n_{k+N}=n_k$. The difference between our $p_{k,i}$ terms and the ones in \cite{tr2} is that the periodic condition in \cite{tr2} is given by $p_{k+N,i}=p_{k,i}+T$. The reason for this is that the quotient map for us is given by $\text{exp}:{\hbox{\bb C}}\mapsto{\hbox{\bb C}}^*$. Thus, when we look at pictures of our surfaces, the nodes are really located at $\log{p_{k,i}}$ and are subject to the period vector $\log{e^T}=T$.
This set of points must satisfy a balancing condition given in terms of the following force equations.\\
\begin{definition}
The force exerted on $p_{k,i}$ by the other points in $\{p_{k,i}\}$ is defined by
{\footnotesize \[
F_{k,i}:=\sum_{j \neq i}\frac{p_{k,i}+p_{k,j}}{n_k^2(p_{k,i}-p_{k,j})}+(-1)^k\left(\sum_{j=1}^{n_{k+1}}\frac{p_{k+1,j}^{(-1)^k}}{n_kn_{k+1}\left(p_{k+1,j}^{(-1)^k}-p_{k,i}^{(-1)^k}\right)}-\sum_{j=1}^{n_{k-1}}\frac{p_{k,i}^{(-1)^k}}{n_kn_{k-1}\left(p_{k,i}^{(-1)^k}-p_{k-1,j}^{(-1)^k}\right)}\right).
\]}
\end{definition}
\begin{definition}
The configuration $\{p_{k,i}\}$ is called a \textit{balanced configuration} if $F_{k,i}=0$ for $k=1,\ldots,N$ and $i=1,\ldots,n_k$.
\end{definition}
Note that while the force equations don't seem to contain the parameter $T$, it enters the picture implicitly as the $p_{k,i}$ are assumed to form a $T$-periodic set.
\begin{definition}
Let $m=\sum_{k=1}^Nn_k$ and $F$ and $p$ be the vectors in ${\hbox{\bb C}}^m=\hbox{\bb R}^{2m}$ whose components are made up of the $F_{k,i}$ and $p_{k,i}$ respectively. The balanced configuration $\{p_{k,i}\}$ is said to be {\it non-degenerate} if the differential of the map $p\mapsto F$ has rank $2(m-1)$.
\end{definition}
The differential of the map $p\mapsto F$ can't have full rank $2m$ because $$\sum_{k=1}^N\sum_{i=1}^{n_k}F_{k,i}=0.$$ This holds whether or not the configuration $\{p_{k,i}\}$ is balanced.
Observe also that whenever we have a solution $p$ for the balance equations, $\lambda p$ will also be a solution for any $\lambda\in{\hbox{\bb C}}^*$.
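For later reference, the forces are straightforward to evaluate numerically. The following Python sketch (an illustration, not part of any proof) implements the definition above for an arbitrary configuration, taking the $T$-periodic extension into account; a configuration is balanced precisely when all returned values vanish. The example at the end is the alternating configuration $p_{k,1}=(-1)^{k+1}$, $n_k=1$, $N=4$, $T=0$ that reappears in Proposition \ref{prop:crucial}.
\begin{verbatim}
# Sketch of a balance checker: evaluate the forces F_{k,i} of the
# definition above for a configuration p (levels k = 1,...,N stored as
# p[0],...,p[N-1]) with period parameter T.
import cmath

def forces(p, T):
    N = len(p)
    n = [len(level) for level in p]
    shift = cmath.exp(T)

    def pt(k, j):                # p_{k,j} with the T-periodic extension
        if k == 0:               # level 0 = level N shifted by e^{-T}
            return p[N - 1][j] / shift
        if k == N + 1:           # level N+1 = level 1 shifted by e^{T}
            return p[0][j] * shift
        return p[k - 1][j]

    F = []
    for k in range(1, N + 1):
        s = (-1) ** k            # the exponent (-1)^k in the definition
        row = []
        for i in range(n[k - 1]):
            q = pt(k, i)
            f = sum((q + pt(k, j)) / (n[k - 1] ** 2 * (q - pt(k, j)))
                    for j in range(n[k - 1]) if j != i)
            nk_next = n[k % N]           # n_{k+1}, periodic in k
            nk_prev = n[(k - 2) % N]     # n_{k-1}, periodic in k
            f += s * sum(pt(k + 1, j) ** s
                         / (n[k - 1] * nk_next * (pt(k + 1, j) ** s - q ** s))
                         for j in range(nk_next))
            f -= s * sum(q ** s
                         / (n[k - 1] * nk_prev * (q ** s - pt(k - 1, j) ** s))
                         for j in range(nk_prev))
            row.append(f)
        F.append(row)
    return F

# alternating configuration p_{k,1} = (-1)^{k+1}, N = 4, T = 0:
print(forces([[1], [-1], [1], [-1]], 0))   # all entries should be ~ 0
\end{verbatim}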
Now, we can state our main result.
\begin{theorem}
If $\{p_{k,i}\}$ is a non-degenerate balanced configuration then there exists a corresponding three-dimensional family of embedded doubly periodic minimal surfaces with genus
$$g=1+\sum_{k=1}^N(n_k-1),$$
$2N$ horizontal ends and properties \ref{property1} and \ref{property2}.
\label{main theorem}
\end{theorem}
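For instance, the balanced configuration of Proposition \ref{handles} below, with $N=2$, $n_1=1$ and $n_2=n$, produces surfaces of genus $g=n$ with four ends in the quotient.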
Our Main Theorem \ref{thm:main} will follow from this theorem and the non-degeneracy of the balance configurations of Proposition \ref{prop:crucial}.
\section{Examples}
In this section, we will discuss examples of non-degenerate balanced configurations.
\subsection{Adding handles to Wei's genus two examples}
In all known instances of Traizet's regeneration technique, the simplest non-trivial configurations are given as the roots of
special polynomials that satisfy a hypergeometric differential equation. So far, there is no explanation of this phenomenon, nor a general understanding of the more complicated solutions of the balance equations. In the case at hand, we have the following:
\begin{proposition}
Let $n \in \hbox{\bb N}$ and $a_1,a_2, \cdots, a_n$ be the roots of the polynomial
\[
p_n(z)=\sum_{k=0}^n{n \choose k}^2z^k.
\]
The following configuration is balanced and non-degenerate: $N=2$, $n_1=1$, $n_2=n$, $p_{1,1}=1$, $p_{2,i}=a_i$ for $i=1,\cdots,n$, and $T=0$.
\label{handles}
\end{proposition}
\begin{figure}[H]
\centerline{
\includegraphics[width=3in]{Wei18side}
\includegraphics[width=3in]{Wei18top}
}
\caption{Genus 8 surface. The locations of the six small necks correspond to the roots of the polynomial $p_8(z)=z^8+64z^7+784z^6+3136z^5+4900z^4+3136z^3+784z^2+64z+1$.}
\label{figure:Wei(1,8)}
\end{figure}
\begin{proof}
In this case, the balance equations are given by the following equations.
\[
\begin{split}
F_{1,1} &=\sum_{j=1}^n\frac{1+a_j}{n(1-a_j)} \\
F_{2,i} &=\sum_{j \neq i}\frac{a_i+a_j}{n^2(a_i-a_j)}+\frac{1+a_i}{n(1-a_i)} \\
\end{split}
\]
Observe first that the polynomials $p_n$ satisfy the
hypergeometric differential equation
\[
z(1-z)p_n''(z)+(1+(2n-1)z)p_n'(z)-n^2p_n(z)=0.
\]
In particular, all roots are simple. Furthermore,
\[
p_n(z)=z^n p_n(1/z).
\]
Thus, for $n=2k$, the roots will be $a_1,\cdots,a_k,1/a_1,\cdots,1/a_k$ and for $n=2k+1$, the roots will be $a_1,\cdots,a_k,1/a_1,\cdots,1/a_k,-1$. Hence, $F_{1,1}=0$ by symmetry.
Since $p_n$ only has simple zeroes, for each zero $a_k$ we get the following equation.
\[
p_n''(a_k)=2p_n'(a_k)\sum_{j \neq k}\frac{1}{a_k-a_j}
\]
Plugging this into the hypergeometric differential equation for $p_n$, we get that
\[
0=2a_k(1-a_k)\sum_{j \neq k}\frac{1}{a_k-a_j}+1+(2n-1)a_k.
\]
This implies that $F_{2,k}=0$ for $1 \leq k \leq n$: indeed, writing $\frac{a_k+a_j}{a_k-a_j}=\frac{2a_k}{a_k-a_j}-1$ and using the previous identity,
\[
n^2F_{2,k}=2a_k\sum_{j \neq k}\frac{1}{a_k-a_j}-(n-1)+\frac{n(1+a_k)}{1-a_k}
=\frac{-(1+(2n-1)a_k)+n(1+a_k)}{1-a_k}-(n-1)=0.
\]
Hence the given configuration is balanced for all $n \in \hbox{\bb N}$.
To show that the configuration is non-degenerate, let M be the matrix with entries
\[
M_{i,j}=\frac{\partial F_{2,i}}{\partial p_{2,j}}.
\]
Then
\[
M_{i,i}=\sum_{k \neq i}\frac{-2a_k}{n(a_i-a_k)^2}+\frac{2}{(1-a_i)^2}
\]
and, if $i \neq j$,
\[
M_{i,j}=\frac{2a_i}{n(a_i-a_j)^2}.
\]
Thus,
\[
\begin{split}
\sum_{i \neq j}|M_{i,j}| &=\sum_{i \neq j}\frac{-2a_i}{n(a_i-a_j)^2} \\
&=\sum_{i \neq j}\frac{-2a_i}{n(a_j-a_i)^2} \\
&=M_{j,j}-\frac{2}{(1-a_j)^2} \\
&\leq M_{j,j} \\
\end{split}
\]
for $j=1,\cdots,n$. Hence, M is invertible and the differential of F has rank n. Thus, this configuration is non-degenerate.
\end{proof}
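The balance of this configuration can also be confirmed numerically. The following short Python sketch (illustrative only; it reuses the \texttt{forces} routine from the earlier sketch) computes the roots of $p_n$ with numpy for the sample value $n=8$ and evaluates the forces, which should vanish up to rounding error.
\begin{verbatim}
# Sketch of a numerical check of Proposition (handles): feed the roots of
# p_n into the configuration N = 2, n_1 = 1, n_2 = n and evaluate the
# forces with the forces() routine from the earlier sketch.
import numpy as np
from math import comb

n = 8
coeffs = [comb(n, k) ** 2 for k in range(n, -1, -1)]   # highest power first
roots = np.roots(coeffs)

config = [[1.0], [complex(r) for r in roots]]          # p_{1,1}=1, p_{2,i}=a_i
F = forces(config, 0)
print(max(abs(f) for row in F for f in row))           # ~ 0 up to rounding
\end{verbatim}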
\subsection{Combining non-degenerate balanced configurations}
The next proposition requires two new definitions. They are adjustments on similar terms from \cite{tr2}. Let $F_{k,i}^+$ be the sum of the forces exerted by the $p_{k+1,j}$ terms on $p_{k,i}$ and $F_{k,i}^-$ be the sum of the forces exerted by the $p_{k-1,j}$ terms on $p_{k,i}$, i.e.
\[
F_{k,i}^+=(-1)^k\sum_{j=1}^{n_{k+1}}\frac{p_{k+1,j}^{(-1)^k}}{n_kn_{k+1}\left(p_{k+1,j}^{(-1)^k}-p_{k,i}^{(-1)^k}\right)}
\]
and
\[
F_{k,i}^-=(-1)^{k+1}\sum_{j=1}^{n_{k-1}}\frac{p_{k,i}^{(-1)^k}}{n_kn_{k-1}\left(p_{k,i}^{(-1)^k}-p_{k-1,j}^{(-1)^k}\right)}\quad.
\]
\begin{proposition}
Let $p_{k,i}$ and $p_{k,i}'$ be two balanced configurations. Assume that:
\begin{enumerate}
\item $n_1=n_1'=1$,
\item $p_{1,1}=p_{1,1}'=1$,
\item $F_{1,1}^+=F_{1,1}'^+\neq 0$.
\end{enumerate}
Define $p_{k,i}''$ as follows:
\[
\begin{split}
& \forall k \in\{1,\cdots,N\},\,n_k''=n_k \text{ and } p_{k,i}''=p_{k,i} \\
& \forall k \in\{1,\cdots,N'\},\,n_{k+N}''=n_k' \text{ and } p_{k+N,i}''=p_{k,i}'e^T \\
& \forall k \in \mathbb{Z},\,p_{k+N+N',i}''=p_{k,i}''e^{T+T'} \\
\end{split}
\]
The configuration $p_{k,i}''$ is periodic with $N''=N+N'$ and $T''=T+T'$. Then the configuration $p_{k,i}''$ is balanced.
\label{combining}
\end{proposition}
\begin{proof}
The proof of this proposition is exactly the same as the proof of part one of proposition $3$ in \cite{tr2}.
\end{proof}
\begin{remark}
Assuming that $p_{k,i}$ and $p_{k,i}'$ are non-degenerate balanced configurations satisfying the hypotheses of proposition \ref{combining} then, we would like to prove that $p_{k,i}''$ is also non-degenerate. Combining this with propositions \ref{handles} and \ref{combining} would then show the existence of surfaces with an arbitrary number of ends that satisfy properties \ref{property1} and \ref{property2}. This is quite technical, however, and we omit the proof. We will treat a special case in Proposition \ref{prop:crucial} that allows us
to establish the existence of surfaces with arbitrarily many ends and arbitrary genus.
\end{remark}
\begin{proposition}
Let $N=2,n_1=1,n_2=n,T=0, N'=2, n_1'=1, n_2'=m$, and $T'=0$. Also, let $a_1,\ldots,a_n$ be the roots of the polynomial $p_n(z)=\sum_{k=0}^n{n \choose k}^2z^k$ and $b_1,\ldots,b_m$ be the roots of the polynomial $p_m(z)=\sum_{k=0}^m{m \choose k}^2z^k.$ Then there exists a non-degenerate balanced configuration $\{p_{k,i}''\}$ with $p_{1,1}''=1$, $p_{2,i}''=a_i$ for $i=1,\ldots,n$, $p_{3,1}''=1$, and $p_{4,i}''=b_i$ for $i=1,\ldots,m$.
\label{8ends}
\end{proposition}
\begin{figure}[H]
\centerline{
\includegraphics[width=2in]{1,2,1,3setup.pdf}
}
\caption{Surface corresponding to non-degenerate balanced configuration with $n=2$, and $m=3$.
}
\label{(1,2,1,3)setup}
\end{figure}
\begin{proof}
Let $p_{1,1}=p_{1,1}'',p_{2,i}=p_{2,i}'',p_{1,1}'=p_{3,1}''$, and $p_{2,i}'=p_{4,i}''$. Then $n_1=n_1'=1$ and $p_{1,1}=p_{1,1}'=1$. Also,
\[
F_{1,1}^+=-\sum_{j=1}^n\frac{1}{n(1-a_j)}
\]
and
\[
F_{1,1}'^+=-\sum_{j=1}^n\frac{1}{n(1-b_j)}.
\]
If $n$ is even then order the roots of $p_n$ such that $a_{n/2+k}=1/a_k$ for $k=1,\ldots,n/2$. Then, after a brief computation,
\[
F_{1,1}^+=-\frac{1}{2}.
\]
If $n$ is odd then order the roots of $p_n$ such that $a_{(n-1)/2+k}=1/a_k$ for $k=1,\ldots,(n-1)/2$ and $a_n=-1$. Then
\[
F_{1,1}^+=-\frac{1}{2}
\]
Thus, $F_{1,1}^+=-\frac{1}{2}$ and, similarly, $F_{1,1}'^+=-\frac{1}{2}.$ Hence, the hypotheses of proposition \ref{combining} are met. Therefore, $\{p_{k,i}''\}$ is a balanced configuration. Since the non-degeneracy portion of proposition \ref{combining} was not proved, we verify non-degeneracy directly for this configuration.
Let $N$ be the matrix with entries $$N_{i,j}=\frac{\partial F_{2,i}''}{\partial p_{2,j}''}$$ and $M$ be the matrix with entries $$M_{i,j}=\frac{\partial F_{4,i}''}{\partial p_{4,j}''}.$$ As shown in proposition \ref{handles}, $M$ and $N$ are invertible. Also, let
\[
\alpha=\sum_{j=1}^m\frac{p_{4,j}}{m(p_{4,j}-1)^2}+\sum_{j=1}^n\frac{p_{2,j}}{n(p_{2,j}-1)^2},
\]
\[
\beta=\left(\frac{-1}{n(p_{2,1}-1)^2},\ldots,\frac{-1}{n(p_{2,n}-1)^2}\right),
\]
and
\[
\gamma=\left(\frac{-1}{m(p_{4,1}-1)^2},\ldots,\frac{-1}{m(p_{4,m}-1)^2}\right).
\]
Then $DF_{1,1}''=\left(\alpha,\beta,0,\gamma\right)$ and $DF_{3,1}''=\left(0,\beta,\alpha,\gamma\right)$. Therefore,
\[
DF''=\begin{bmatrix}
\alpha & \beta & 0 & \gamma \\ \cdot & N & \cdot & 0 \\ 0 & \beta & \alpha & \gamma \\ \cdot & 0 & \cdot & M
\end{bmatrix}
\]
and $\text{rank}\left(DF''\right)\geq n+m+1$. Since the sum of forces is always zero, $DF''$ can't have full rank. Thus, $\text{rank}\left(DF''\right)=n+m+1$, and so $\{p_{k,i}''\}$ is a non-degenerate balanced configuration.
\end{proof}
\newpage
\begin{proposition}\label{prop:crucial}
Let $N\in\mathbb{N}$ be even, $n_k=1,p_{k,1}=(-1)^{k+1}$ for $k=1,\ldots,N$, $T=0$, $N'=2,n_1'=1,n_2'=n\in\mathbb{N},p_{1,1}'=1$, $p_{2,i}'=a_i$ where $a_1,\ldots,a_n$ are the distinct real roots of the polynomial $p_n(z)=\sum_{k=0}^n{n \choose k}^2z^k$, and $T'=0$. Also, let $N''=N+N'=N+2$, $n_k''=1$ for $k=1,\ldots,N+1$, $n_{N+2}''=n$, $p_{k,1}''=(-1)^{k+1}$ for $k=1,\ldots,N+1$, $p_{N+2,i}''=a_i$ for $i=1,\ldots,n$, and $T''=T+T'=0$. Then $\{p_{k,i}''\}$ is a non-degenerate balanced configuration.
\label{many ends}
\end{proposition}
\begin{figure}[H]
\centerline{
\includegraphics[width=2in]{1,1,1,1,1,1,1,3.pdf}
}
\caption{Surface corresponding to non-degenerate balanced configuration with $n_k''=1$ for $k=1,\ldots,7$ and $n_8''=3$.
}
\label{(1,1,1,1,1,1,1,3)}
\end{figure}
\begin{proof}
First, we need to show that $\{p_{k,i}\}$ is a balanced configuration:
\[
F_{2k,1}=\frac{p_{2k+1,1}}{p_{2k+1,1}-p_{2k,1}}-\frac{p_{2k,1}}{p_{2k,1}-p_{2k-1,1}}=\frac{1}{2}-\frac{-1}{-2}=0
\]
and
\[
F_{2k+1,1}=\frac{p_{2k,1}}{p_{2k,1}-p_{2k+1,1}}-\frac{p_{2k+1,1}}{p_{2k+1,1}-p_{2k+2,1}}=\frac{-1}{-2}-\frac{1}{2}=0.
\]
That $\{p_{k,i}'\}$ is a balanced configuration follows from proposition \ref{handles}. Now,
\[
F_{1,1}^+=-\frac{p_{1,1}}{p_{1,1}-p_{2,1}}=-\frac{1}{2}
\]
and, similar to the proof of proposition \ref{8ends}, $F_{1,1}'^+=-\frac{1}{2}$. Thus, by proposition \ref{combining}, $\{p_{k,i}''\}$ is a balanced configuration.
As for non-degeneracy, we first write out the forces $F_{k,i}''$:
\[
F_{1,1}''=\sum_{j=1}^n\frac{p_{N+2,j}''}{n\left(p_{N+2,j}''-p_{1,1}''\right)}-\frac{p_{1,1}''}{p_{1,1}''-p_{2,1}''};
\]
\[
F_{k,1}''=
\begin{cases}
\frac{p_{k+1,1}''}{p_{k+1,1}''-p_{k,1}''}-\frac{p_{k,1}''}{p_{k,1}''-p_{k-1,1}''}\text{, $k$ even},
\\
\frac{p_{k-1,1}''}{p_{k-1,1}''-p_{k,1}''}-\frac{p_{k,1}''}{p_{k,1}''-p_{k+1,1}''}\text{, $k$ odd}.
\end{cases}
\]
for $k=2,\ldots,N$;
\[
F_{N+1,1}''=\frac{p_{N,1}''}{p_{N,1}''-p_{N+1,1}''}-\sum_{j=1}^n\frac{p_{N+1,1}''}{n\left(p_{N+1,1}''-p_{N+2,j}''\right)};
\]
and
\[
F_{N+2,i}''=\sum_{j\neq i}\frac{p_{N+2,i}''+p_{N+2,j}''}{n^2\left(p_{N+2,i}''-p_{N+2,j}''\right)}+\frac{p_{1,1}''}{n\left(p_{1,1}''-p_{N+2,i}''\right)}-\frac{p_{N+2,i}''}{n\left(p_{N+2,i}''-p_{N+1,1}''\right)}.
\]
Let $M$ be the $n\times n$ matrix with entries
\[
M_{i,j}=\frac{\partial F_{N+2,i}''}{\partial p_{N+2,j}''}.
\]
As shown in the proof of proposition \ref{handles}, $M$ is invertible.
Let $N$ be the $N\times N$ matrix with entries
\[
N_{i,j}=\frac{\partial F_{i+1,1}''}{\partial p_{j,1}''}.
\]
If $k\in\{2,\ldots,N\}$ then
\[
\frac{\partial F_{k,1}''}{\partial p_{k-1,1}''}=\frac{-1}{4}, \frac{\partial F_{k,1}''}{\partial p_{k+1,1}''}=\frac{1}{4}, \text{ and }\frac{\partial F_{k,1}''}{\partial p_{j,1}''}=0 \text{ if }j\neq k-1,k+1.
\]
Also,
\[
\frac{\partial F_{N+1,1}''}{\partial p_{k,1}''}=0 \text{ if }k<N+1 \text{ and }\frac{\partial F_{N+1,1}''}{\partial p_{N,1}''}=\frac{-1}{4}.
\]
Hence, $N_{i,i}=-\frac{1}{4}$ for $i=1,\ldots,N$ and $N_{i,j}=0$ if $j<i$, and so $N$ is an invertible matrix. Therefore,
\[
DF''=\begin{bmatrix}
\cdot & \cdot & \cdot \\ N & \cdot & P \\ Q & \cdot & M
\end{bmatrix}
\]
where $P$ is the $N\times n$ matrix with $P_{k,i}=0$ if $k=1,\ldots,N-1$ and $P_{N,i}=\frac{1}{n(a_i-1)^2}$ and $Q$ is the $n\times N$ matrix with $Q_{k,1}=-\frac{a_k}{n(a_k-1)^2}$ for $k=1,\ldots,n$ and $Q_{k,i}=0$ if $k=2,\ldots,N$. Thus, $DF''$ has rank $N+n$ and $\{p_{k,i}''\}$ is non-degenerate.
\end{proof}
\subsection{Other examples and non-examples}
\begin{proposition}
There does not exist a balanced configuration $\{p_{k,i}\}$ with $N=2,n_1=n_2=2$, and $T=0$.
\end{proposition}
\begin{proof}
Using Mathematica to solve the balance equations, we found that the only possible solution in which the $p_{k,i}$ are distinct is $\{p_{1,1},p_{1,2},p_{2,1},p_{2,2}\}=\{1,-1,I,-I\}$. However, this is the same as the balanced configuration with $N=4$, $n_1=n_2=n_3=n_4=1$, and $T=0$.
\end{proof}
\begin{proposition}
There exists a non-degenerate balanced configuration $\{p_{k,i}\}$ with $N=2,n_1=2,n_2=3$, and $T=0$.
\label{Wei(2,3)}
\end{proposition}
\begin{figure}[H]
\centerline{
\includegraphics[width=3.3in]{Wei23side}
\includegraphics[width=2.7in]{Wei23top}
}
\caption{Side and top views of a genus four surface with $N=2,n_1=2,n_2=3$.
}
\label{figure:Wei(2,3)}
\end{figure}
\begin{proof}
The force equations corresponding to this setup are
\[
F_{1,i}=(-1)^i\frac{p_{1,1}+p_{1,2}}{4\left(p_{1,2}-p_{1,1}\right)}-\sum_{j=1}^3\frac{p_{1,i}+p_{2,j}}{6\left(p_{1,i}-p_{2,j}\right)}
\]
for $i=1,2$ and
\[
F_{2,i}=\sum_{j\neq i}\frac{p_{2,i}+p_{2,j}}{9\left(p_{2,i}-p_{2,j}\right)}+\sum_{j=1}^2\frac{p_{1,j}+p_{2,i}}{6\left(p_{1,j}-p_{2,i}\right)}
\]
for $i=1,2,3$.
Let $a_1=4+2\sqrt{5}+\sqrt{35+16\sqrt{5}}$ and $a_2=\frac{1}{2}\left(-17-9\sqrt{5}-\sqrt{690+306\sqrt{5}}\right)$, and let $p_{1,1}=a_1,p_{1,2}=1/a_1,p_{2,1}=a_2,p_{2,2}=-1$, and $p_{2,3}=1/a_2$. Then $F_{k,i}=0$ for $k=1,2$ and $i=1,\ldots,n_k$, and so $\{p_{k,i}\}$ is a balanced configuration.
Elementary row operations show that $DF$ row reduces to
\[
\begin{bmatrix}
1 & 0 & 0 & 0 & \cdot \\ 0 & 1 & 0 & 0 & \cdot \\ 0 & 0 & 1 & 0 & \cdot \\ 0 & 0 & 0 & 1 & \cdot \\ 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\approx
\begin{bmatrix}
1 & 0 & 0 & 0 & 626.396 \\ 0 & 1 & 0 & 0 & 2.19707 \\ 0 & 0 & 1 & 0 & -1376.24 \\ 0 & 0 & 0 & 1 & -37.0977 \\ 0 & 0 & 0 & 0 & 0
\end{bmatrix}.
\]
Therefore, $\{p_{k,i}\}$ is a non-degenerate balanced configuration.
\end{proof}
Numerical evidence suggests:
\begin{conjecture}
There exists a non-degenerate balanced configuration $\{p_{k,i}\}$ with $N=2$, $n_1=2$, $n_2=2m-1$, and $T=0$ for every $m\in\mathbb{N}$.
\end{conjecture}
\section{Weierstrass Data}
We begin the proof of Theorem $1$ by parametrizing a set of Riemann surfaces and Weierstrass data that are candidates for the minimal surfaces we want to construct. The construction is almost exactly the same as in \cite{tr2}. The main difference is our definition of the Gauss map $G$. We repeat the details for the convenience of the reader.
Let ${\bar {\hbox{\bb C}}}_k=\overline{\mathbb{C}}$ for $k=1,\ldots,N$, and for each $k\in\{1,\ldots,N\}$ let $G_k:{\bar {\hbox{\bb C}}}_k\to{\bar {\hbox{\bb C}}}$ be the meromorphic function defined by
\[
G_k(z)=\delta_k z\left(
\sum_{i=1}^{n_k} \frac{\alpha_{k,i}}{z-a_{k,i}} -
\sum_{i=1}^{n_{k-1}} \frac{\beta_{k,i}}{z-b_{k,i}}
\right)
\]
where $\delta_k\in(0,\infty)$, the poles $a_{k,i}$ and $b_{k,i}$ are distinct non-zero complex numbers, and the $\alpha_{k,i}$ and $\beta_{k,i}$ are non-zero complex numbers such that
\[
\sum_{i=1}^{n_k} \alpha_{k,i}=
\sum_{i=1}^{n_{k-1}} \beta_{k,i}
=1.
\]
The first equality ensures that $G_k(z)$ has a zero at $\infty$. The zeroes at $0$ and $\infty$ are needed to ensure that the Gauss map is vertical at the annular ends. The $\delta_k$ terms will be used to ensure that the periods at the ends are the same. In \cite{tr2}, the corresponding map is $g_k(z)=\frac{G_k(z)}{\delta_k z}$.
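Indeed, expanding $G_k$ near $z=\infty$ (a routine computation, valid for $|z|$ larger than all the poles) gives
\[
G_k(z)=\delta_k\left(\sum_{i=1}^{n_k}\alpha_{k,i}-\sum_{i=1}^{n_{k-1}}\beta_{k,i}\right)
+\frac{\delta_k}{z}\left(\sum_{i=1}^{n_k}\alpha_{k,i}a_{k,i}-\sum_{i=1}^{n_{k-1}}\beta_{k,i}b_{k,i}\right)
+O\!\left(\frac{1}{z^2}\right),
\]
so the constant term vanishes exactly when the two sums agree, which is the case here since both are normalized to $1$.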
Let $\alpha_k=(\alpha_{k,1},\ldots,\alpha_{k,n_k})$ and $\alpha=(\alpha_1,\ldots,\alpha_N)$, and define $\beta,\gamma,a$ and $b$ in the same way. Let $\delta=(\delta_1,\ldots,\delta_N)$ and $X=(\alpha,\beta,\delta,\gamma,a,b)$. The vector $X$ collects the parameters used to construct the Riemann surfaces and Weierstrass data; within this parameter space we will solve the period problem.
The surfaces we are constructing have $n_k$ catenoid-shaped necks between the $k$ and $k+1$ levels. In order to achieve this, we use the $G_k$ functions to create coordinates near each pole and identify an annulus centered at $a_{k,i}\in{\bar {\hbox{\bb C}}}_k$ with an annulus centered at $b_{k+1,i}\in{\bar {\hbox{\bb C}}}_{k+1}$ for $k=1,\ldots,N$ and $i=1,\ldots,n_k$ using the following procedure.
The function $v_{k,i} = 1/G_k$ has a simple zero at $a_{k,i}$. Thus, there exists $\epsilon>0$ such that $v_{k,i}$ is a biholomorphic map from a neighborhood of $a_{k,i}\in{\bar {\hbox{\bb C}}}_k$ to the disk $D_\epsilon(0)$. In this manner, $v=v_{k,i}$ is a complex coordinate in a neighborhood of $a_{k,i}$. Similarly, $w=w_{k+1,i}=1/G_{k+1}$ is a biholomorphic map from a neighborhood of $b_{k+1,i}\in{\bar {\hbox{\bb C}}}_{k+1}$ to the disk $D_\epsilon(0)$. Thus, for each pair $a_{k,i}$ and $b_{k+1,i}$ we get the pair of coordinates $v=v_{k,i}$ and $w=w_{k+1,i}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{gluing}
\caption{Gluing construction.}
\end{center}
\end{figure}
Choose a complex gluing parameter $r$ with $|r|\in (0,\epsilon^2)$ and remove the disks $|v|\le \frac{|r|}\epsilon$ and $|w|\le \frac{|r|}\epsilon$ from ${\bar {\hbox{\bb C}}}_k$ and ${\bar {\hbox{\bb C}}}_{k+1}$, respectively. Then, we create a conformal model of the catenoid-shaped neck by identifying the points in ${\bar {\hbox{\bb C}}}_{k}$
satisfying
\[
\frac {|r|} \epsilon<|v|<\epsilon
\]
with points in ${\bar {\hbox{\bb C}}}_{k+1}$ satisfying
\[
\frac {|r|} \epsilon<|w|<\epsilon
\]
by the equation
\[
v w = r.
\]
Let $\Sigma$ be the compact Riemann surface created by repeating this procedure for each $k=1,\ldots,N$ and $i=1,\ldots,n_k$. Denote by $\Sigma^*$ the surface obtained by removing the points $0_k$ and $\infty_k$ from $\Sigma$ for all $k$. When $r=0$, define $\Sigma$ as the disjoint union ${\bar {\hbox{\bb C}}}_1\cup{\bar {\hbox{\bb C}}}_2\cup\ldots\cup{\bar {\hbox{\bb C}}}_N$. This is the underlying Riemann surface for our minimal surface candidates.
Next, the Gauss map $G:\Sigma\to\bar {{\hbox{\bb C}}}$ is defined by
\begin{equation}
G(z)=
\begin{cases}
\sqrt r G_k(z) & \text{if $z\in {\bar {\hbox{\bb C}}}_k$, $k$ even},
\\
\frac1{\sqrt{r} G_k(z)} & \text{if $z\in {\bar {\hbox{\bb C}}}_k$, $k$ odd}.
\end{cases}
\end{equation}
If $k$ is even, then $G=\sqrt r/v$ on ${\bar{\hbox{\bb C}}}_k$ and
$G=w/\sqrt r$ on ${\bar{\hbox{\bb C}}}_{k+1}$. If $k$ is odd, then $G=v/\sqrt{r}$ on ${\bar {\hbox{\bb C}}}_k$ and $G=\sqrt{r}/w$ on ${\bar {\hbox{\bb C}}}_{k+1}$. Therefore, the relation $vw=r$ implies that $G$ is well-defined on $\Sigma$.
Before defining our height differential $\eta$, we need to choose a basis of the homology of $\Sigma$. Define $A_{k,i}$ to be the circle $|v_{k,i}|=\epsilon$ in ${\bar {\hbox{\bb C}}}_k$ oriented positively. The construction of $\Sigma$ implies that this is homotopic to the circle $\abs{w_{k+1,i}}=\epsilon$ oriented negatively.
Choose $B_{k,i}$, $i\geq2$, to be a closed curve in $\Sigma$ such that $A_{k,1}\cdot B_{k,i}=-1$, $A_{k,i}\cdot B_{k,i}=1$, $A_{m,n}\cdot B_{k,i}=0$ if $m\neq k$, and $B_{k,i}\cdot B_{m,n}=0$ if $(m,n)\neq(k,i)$. Finally, choose $B_{1,1}$ to be a closed curve such that $A_{k,1}\cdot B_{1,1}=1$ for $k=1,\ldots,N$ and it doesn't intersect any of the above curves. Then a basis of $H_1(\Sigma)$ is given by the curves $A_{1,1}, B_{1,1}, A_{k,i}$, and $B_{k,i}$, with $k=1,\ldots,N$ and $i=2,\ldots,n_k$. Note that if we replace the $B_{1,i}$ curves by $B_{1,i}'=B_{1,i}+B_{1,1}$ then we get a canonical basis of $H_1(\Sigma)$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3in]{cycles}
\caption{Labeling the cycles.}
\end{center}
\end{figure}
\begin{proposition}[\cite{tr2}]
Consider numbers $\gamma_{k,i}$, $k=1,\ldots,N$, $i=1,\ldots,n_k$ such that for any $k$,
\[
\sum_{i=1}^{n_k} \gamma_{k,i} = 1.
\]
Then there exists a unique holomorphic 1-form $\eta$ on $\Sigma$ such that for any $k=1,\ldots,N$, $i=1,\ldots,n_k$,
\[
\int_{A_{k,i}} \eta = 2\pi i \gamma_{k,i}
\]
\end{proposition}
The proof is the same as that of proposition $5$ from section $3.3$ in \cite{tr2}.
We now have a space of Riemann surfaces and Weierstrass data that are candidates for the surfaces we want to construct. The parameters are given by $(r,X)$, and we will look at what happens when $r\rightarrow 0$.
\section{Constraints on the Weierstrass data and period conditions}
We express the Weierstrass data using the notation $$\psi(z)=\text{Re}\int_{z_0}^z\left(\phi_1,\phi_2,\phi_3\right)$$ where $z_0\in\Sigma$ is a base point, $\phi_1=\frac{1}{2}\left(\frac{1}{G}-G\right)\eta$, $\phi_2=\frac{i}{2}\left(\frac{1}{G}+G\right)\eta$, and $\phi_3=\eta$. In order for $(\Sigma,G,\eta)$ to be the Weierstrass data of a complete, doubly periodic minimal surface with horizontal embedded ends, we need:
\begin{enumerate}
\item
For any $p\in\Sigma^*$, $\eta$ has a zero at $p$ if and only if $G$ has either a zero or pole at $p$, with the same
multiplicity. At each puncture $0_k$ and $\infty_k$, $G$ has a zero or pole of order $n\geq1$ and $\eta$ has a zero of order $n-1$.
\item
For any closed curve $c$ on $\Sigma^*$, $\mathop{\rm Re}\nolimits \int_c \phi_j$ is an integral linear combination of two linearly independent vectors of $\hbox{\bb R}^3$. We denote the set of these linear combinations by $\Lambda$.
\end{enumerate}
As the zeroes and poles of $G$ are the zeroes of the $G_k$, we can write condition (1) equivalently as
\begin{enumerate}
\item[1'.]
The zeroes of $\eta$ are the zeroes of $G_k dz/z$, $k=1,\ldots,N$, with the same multiplicity.
\end{enumerate}
If condition $(1)$ is satisfied then the 1-forms $\phi_1$ and $\phi_2$ have poles only at the punctures of $\Sigma^*$, and so condition $(2)$ needs to be checked only for a canonical basis of the homology of $\Sigma$ and for small loops around the punctures. Therefore we can rewrite the condition (2) as follows:
Write $\phi=(\phi_1,\phi_2,\phi_3)$.
\begin{enumerate}
\item[2'.1] For any $k=1,\ldots,N$ and $i=1,\ldots,n_k$,
\[
\mathop{\rm Re}\nolimits \int_{A_{k,i}} \phi= 0
\]
\item[2'.2]
For any $k=1,\ldots,N$ and $i=2,\ldots,n_k$,
\[
\mathop{\rm Re}\nolimits \int_{B_{k,i}} \phi = 0
\]
\item[2'.3]
\[
\mathop{\rm Re}\nolimits \int_{B_{1,1}} \phi \in \Lambda
\]
\item[2'.4] For any $k=1,\ldots,N$,
\[
\mathop{\rm Re}\nolimits \int_{\partial D_\epsilon(0_k)} \phi \in \Lambda
\]
\item[2'.5] For any $k=1,\ldots,N$,
\[
\mathop{\rm Re}\nolimits \int_{\partial D_\epsilon(\infty_k)} \phi \in \Lambda
\]
\end{enumerate}
If $\mathop{\rm Re}\nolimits \int_{A_{k,i}} \phi= 0$ and $\mathop{\rm Re}\nolimits \int_{\partial D_\epsilon(0_k)} \phi \in \Lambda$ for each $k,i$ then the period condition at $\infty_k$ is automatically satisfied by Cauchy's theorem.
Observe that the period vectors $\mathop{\rm Re}\nolimits \int_{\partial D_\epsilon(0_k)} \phi $ and $\mathop{\rm Re}\nolimits \int_{\partial D_\epsilon(\infty_k)} \phi $ are necessarily horizontal, as $\eta=\phi_3$ is holomorphic at $0_k$ and $\infty_k$.
\section{Height differential extends holomorphically to $r=0$}
This section follows directly from \cite{tr2}. Recall that when $r=0$, we defined $\Sigma$ as the disjoint union ${\bar {\hbox{\bb C}}}_1\cup
{\bar {\hbox{\bb C}}}_2\cup\ldots\cup{\bar {\hbox{\bb C}}}_N$. The Gauss map is defined when $r=0$ and depends holomorphically on $r$. We need the same to be true for the height differential. When $r=0$, define $\eta$ by $\eta=\eta_k$ on ${\bar {\hbox{\bb C}}}_k$, where $\eta_k$ is the unique meromorphic 1-form on ${\bar {\hbox{\bb C}}}_k$ with simple poles at $a_{k,i}$ and $b_{k,i}$ with residues $\gamma_{k,i}$ and $-\gamma_{k-1,i}$, i.e.
\[
\eta_k=
\left(\sum_{i=1}^{n_k}\frac{\gamma_{k,i}}{z-a_{k,i}} -
\sum_{i=1}^{n_{k-1}}\frac{\gamma_{k-1,i}}{z-b_{k,i}}
\right)\, dz.
\]
Observe that our conditions ensure that $\eta$ is holomorphic at $0_k$ and $\infty_k$ for each $k$.
The next two propositions are from section $4$ in \cite{tr2}. As our height differential is defined in the same way as in \cite{tr2}, the proofs of these propositions are the same.
\begin{proposition}[\cite{tr2}]
Let $z\in\overline{{\hbox{\bb C}}}_k,\,z\neq a_{k,i},\,z\neq b_{k,i}.$ Then $r\mapsto \eta(z)$ is holomorphic in a neighborhood of $0$.
\label{prop holo1}
\end{proposition}
\begin{proposition}[\cite{tr2}]
Let $v=v_{k,i}$. On the domain $\frac{\abs{r}}{\epsilon}<\abs{v}<\epsilon$ of $\Sigma$, we have the formula
\[
\eta=f\left(v,\frac{r}{v}\right)\frac{dv}{v}=-f\left(\frac{r}{w},w\right)\frac{dw}{w}
\]
where $f$ is a holomorphic function of two complex variables defined in a neighborhood of $(0,0)$.
\label{prop holo2}
\end{proposition}
We can use propositions \ref{prop holo1} and \ref{prop holo2} to estimate the integrals of $\eta,G\eta$, and $1/G\eta$ on the homology cycles and on cycles around the punctures. These are necessary to solve the period problem when $r=0$. As in \cite{tr2}, we will use the notation $\text{holo}(r,X)$ for a holomorphic function of $(r,X)$ in a neighborhood of $(0,X_0)$.
\begin{proposition}[\cite{tr2}]
\[
\begin{split}
\int_{A_{k,i}}G^{(-1)^k}\eta &=\sqrt{r}\left(2\pi i \text{ res}_{a_{k,i}}G_k\eta_k+\text{$r$ holo}(r,X)\right) \\
\int_{A_{k,i}}G^{(-1)^{k+1}}\eta &=\sqrt{r}\left(-2\pi i \text{ res}_{b_{k+1,i}}G_{k+1}\eta_{k+1}+\text{$r$ holo}(r,X)\right) \\
\int_{B_{k,i}}\eta &=\left(\gamma_{k,i}-\gamma_{k,1}\right)\text{log$(r)$}+\text{holo}(r,X) \\
\int_{B_{k,i}}G^{(-1)^k}\eta &=\frac{1}{\sqrt{r}}\left(\int_{b_{k+1,i}}^{b_{k+1,1}}G_{k+1}^{-1}\eta_{k+1}+r\text{ log$(r)$ holo$(r,X)+r$ holo$(r,X)$}\right) \\
\int_{B_{k,i}}G^{(-1)^{k+1}}\eta &=\frac{1}{\sqrt{r}}\left(\int_{a_{k,1}}^{a_{k,i}}G_{k}^{-1}\eta_{k}+r\text{ log$(r)$ holo$(r,X)+r$ holo$(r,X)$}\right) \\
\end{split}
\]
The proofs are the same as in \cite{tr2}, section 5. The following proposition takes care of the different nature of our annular ends compared to the planar ends in \cite{tr2}.
\label{prop expansion1}
\end{proposition}
\begin{proposition}
\[
\begin{split}
\int_{\partial D_\epsilon(0_k)}G^{(-1)^k}\eta &=0 \\
\int_{\partial D_\epsilon(0_k)}G^{(-1)^{k+1}}\eta &=\frac{1}{\sqrt{r}}\left(2\pi i \text{ res}_{0}\frac{1}{G_k}\eta_{k}+\text{$r$ holo}(r,X)\right) \\
\end{split}
\]
\label{prop expansion2}
\end{proposition}
\begin{proof}
First,
\[
\int_{\partial D_\epsilon(0_k)}G^{(-1)^k}\eta=\sqrt{r}\int_{\partial D_\epsilon(0_k)}G_k\eta=0
\]
because $G_k\eta$ has no poles in a neighborhood of $0_k$. Using proposition \ref{prop holo1},
\[
\begin{split}
\int_{\partial D_\epsilon(0_k)}G^{(-1)^{k+1}}\eta=&\frac{1}{\sqrt{r}}\int_{\partial D_\epsilon(0_k)}\frac{1}{G_k}\left(\eta_k+r\text{ holo}(r,X)dz\right)\\
=&\frac{1}{\sqrt{r}}\left(2\pi i\text{ res}_{0_k}\frac{1}{G_k}\eta_k+r\text{ holo}(r,X)\right)
\end{split}
\]
\end{proof}
\section{Solving the period problem}
We can attempt to solve the constraints on the Weierstrass data and the period problem by adjusting the variables $(r,X)$, and we will express this with a map $\mathcal{F}$. In fact, we will find solutions when $r=0$. This allows us to take advantage of the asymptotic expansion of each of the periods at $r=0$.
Let $\zeta_{k,i}$ be the zeroes of $\frac{1}{\delta_k z}G_kdz$ in $\overline{{\hbox{\bb C}}}_k,\; i=1,\cdots,n_k+n_{k-1}-2$. Define
\[ \mathcal{F}_{1,k,i}=\eta\left(\zeta_{k,i}\right)\text{.}\]
Abbreviate $\mathcal{F}_{1,k}=\left(\mathcal{F}_{1,k,1},\cdots,\mathcal{F}_{1,k,n_k+n_{k-1}-2}\right)$ and $\mathcal{ F}_1=\left(\mathcal{F}_{1,1},\cdots,\mathcal{F}_{1,N}\right)$. The zeroes of $\frac{1}{\delta_k z}G_kdz$ can be thought of as the zeroes of a polynomial, and for now let's assume that they are all simple zeroes. Section 9 in \cite{tr2} takes care of the case where $\frac{1}{\delta_kz}G_kdz$ may have zeroes that are not simple, and applies here as well. As argued in \cite{tr2}, the simple zeroes of a polynomial depend analytically on its coefficients and, by proposition \ref{prop holo1}, $\mathcal{F}_1$ depends analytically on $\left(r,X\right)$.
If $\mathcal{F}_1=0$ then $\eta$ has at least a simple zero at each zero of $G_k$. All the zeroes of $\frac{1}{\delta_k z}G_kdz$ are assumed to be simple, and so $G$ has $$\sum_{k=1}^N(n_k+n_{k-1}-2)=2\sum_{k=1}^Nn_k-2N$$ zeroes and poles, counting multiplicity.
The number of zeroes of $\eta$ is $$2\text{ genus}(\Sigma)-2=2\left(1+\sum_{k=1}^N(n_k-1)\right)-2=2+2\sum_{k=1}^Nn_k-2N-2=2\sum_{k=1}^Nn_k-2N.$$ Thus, the zeroes of $\eta$ are precisely the $\zeta_{k,i}$.
The remaining components of the map $\mathcal{F}$ deal with the period problem. The period condition $\text{Re}\int_{A_{k,i}}\eta=0$ is taken care of by letting $\gamma_{k,i}\in\mathbb{R}$. This is simply due to how we defined $\eta$. From this moment on, assume that $\gamma_{k,i}\in\mathbb{R}$. Recall that
\[
\text{Re}\int\phi_1+i\text{Re}\int\phi_2=\frac{1}{2}\left(\overline{\int G^{-1}\eta}-\int G\eta\right).
\]
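For completeness, this identity follows directly from the definitions of $\phi_1$ and $\phi_2$: writing $A=\int G^{-1}\eta$ and $B=\int G\eta$, we have $\int\phi_1=\frac{1}{2}(A-B)$ and $\int\phi_2=\frac{i}{2}(A+B)$, hence
\[
\text{Re}\int\phi_1+i\,\text{Re}\int\phi_2
=\frac{1}{2}\,\text{Re}(A-B)-\frac{i}{2}\,\text{Im}(A+B)
=\frac{1}{2}\left(\overline{A}-B\right).
\]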
With this equivalence in mind, we define
\[
\begin{split}
\mathcal{F}_{2,k,i} &=\frac{1}{\log{r}}\text{Re}\int_{B_{k,i}}\eta,\;\; i=2,\cdots,n_k. \\
\mathcal{F}_{3,k,i} &=\sqrt{r}\overline{\left(\overline{\int_{B_{k,i}}G^{-1}\eta}-\int_{B_{k,i}}G\eta\right)},\;\; i=2,\cdots,n_k. \\
\mathcal{F}_{4,k,i} &=\frac{(-1)^k}{\sqrt{r}}\left(\overline{\int_{A_{k,i}}G^{-1}\eta}-\int_{A_{k,i}}G\eta\right),\;\; i=1,\cdots,n_k. \\
\mathcal{F}_{5,k} &=(-1)^{k+1}\sqrt{r}\text{ \hbox{\rm conj}}^k\left(\overline{\int_{\partial D_{\epsilon}(0_k)}G^{-1}\eta}-\int_{\partial D_{\epsilon}(0_k)}G\eta\right)\\
\end{split}
\]
Here, $\hbox{\rm conj}$ denotes the conjugation in ${\hbox{\bb C}}$.
Define the vectors $\mathcal{ F}_2$, $\mathcal{F}_3$, and $\mathcal{F}_4$ as we defined $\mathcal{F}_1$. Let $\mathcal{F}_5=\left(\mathcal{F}_{5,1},\mathcal{F}_{5,2},...,\mathcal{F}_{5,N}\right)$ and $\mathcal{F}=\left(\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3,\mathcal{F}_4,\mathcal{F}_5\right)$. Note that the constraints of the Weierstrass data and the period problem listed in section 3.2 are equivalent to $\mathcal{F}=0$. Also, there is no need for $\mathcal{F}_5$ in \cite{tr2}.
The $\log{r}$ terms that show up in $\mathcal{F}$ require us to express the variable $r$ in terms of the variable $t$ using the equation $r(t)=e^{-1/t^2}$ if $t\in\mathbb{R}\setminus\{0\}$ and $r(0)=0$. Otherwise, the map $\mathcal{F}$ won't be differentiable at $r=0$. Propositions \ref{prop expansion1} and \ref{prop expansion2} imply that $\mathcal{F}$ is differentiable at $t=0$.
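Concretely, with this substitution $\log r(t)=-1/t^2$, so that, for example,
\[
\frac{1}{\log r}=-t^2,
\qquad
\sqrt{r}\,\log r=-\frac{e^{-1/(2t^2)}}{t^2},
\qquad
r\log r=-\frac{e^{-1/t^2}}{t^2},
\]
all of which extend smoothly by $0$ at $t=0$.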
The next proposition is essentially the same as proposition 9 in \cite{tr2}. The key difference is in the definition of $a_{k,i}$ and $b_{k,i}$. This difference plays out in the rest of the calculations of this section, which lead to the proof of the proposition.
Recall also that the $p_{k,i}$ form a periodic set of points with $p_{k+N,i}= p_{k,i} e^T$. This introduces a similar, though less transparent, periodicity of the $a_{k,i}$ and $b_{k,i}$ below.
\begin{proposition}
Let $\{p_{k,i}\}$ be a balanced configuration. Define $X_o$ by:
\[\alpha_{k,i}=\gamma_{k,i}=\beta_{k+1,i}=1/n_k,\]
\[a_{k,i}=\left(\hbox{\rm conj}^{k}(p_{k,i})\right)^{(-1)^k},\]
\[b_{k,i}=\left(\hbox{\rm conj}^k(p_{k-1,i})\right)^{(-1)^k}.\]
Then $\mathcal{F}(0,X_o)=0$. Also, if $X$ is a solution to $\mathcal{F}(0,X)=0$ then, up to some identifications, $X=X_o$ for some balanced configuration $\{p_{k,i}\}$. In addition, if $\{p_{k,i}\}$ is a non-degenerate balanced configuration then, up to some identifications, $D_2\mathcal{F}(0,X_o)$ is an isomorphism. By the implicit function theorem, for t in a neighborhood of 0, there exists a unique $X(t)$ in a neighborhood of $X_o$ such that $\mathcal{F}(t,X(t))=0$.
\label{main prop}
\end{proposition}
The Weierstrass data given by each unique $X(t)$ define an immersed doubly periodic minimal surface with embedded annular ends. The rest of this section contains the proof of Proposition \ref{main prop}.
\subsection{Solving the equation $\mathcal{F}_1=0$}
Assume $r=0$. $\mathcal{F}_{1,k}=0$ is equivalent to: $\frac{1}{\delta_k z}G_k\,dz$ and $\eta_k$ have the same zeroes on ${\bar {\hbox{\bb C}}}_k$. Since they already have the same poles, they are proportional. By normalization, $\eta_k=\frac{1}{\delta_k z}G_k dz$. Thus, $\mathcal{F}_1=0$ is equivalent to $\alpha_{k,i}=\gamma_{k,i}$ and $\beta_{k,i}=\gamma_{k-1,i}$.
From this moment on, assume that $\mathcal{F}_1=0$ so that $r=0\Rightarrow\eta_k=\frac{1}{\delta_k z}G_kdz$.
\subsection{Solving the equation $\mathcal{F}_2=0$}
Using proposition \ref{prop expansion1},
\[
\begin{split}
\mathcal{F}_{2,k,i} &=\frac{1}{\text{log}(r)}\text{Re}\int_{B_{k,i}}\eta \\
&=\frac{1}{\text{log}(r)}\text{Re}\left(\left(\gamma_{k,i}-\gamma_{k,1}\right)\text{log}(r)+\text{hol}(r,X)\right) \\
&=\gamma_{k,i}-\gamma_{k,1}+\frac{\text{Re}(\text{hol}(r,X))}{\text{log}(r)}
\end{split}
\]
When $r=0$, $\mathcal{F}_{2,k,i}=\gamma_{k,i}-\gamma_{k,1}$. Thus, $\mathcal{F}_2=0 \Rightarrow \gamma_{k,i}=\gamma_{k,1} \forall i \Rightarrow \gamma_{k,i}=\frac{1}{n_k} \forall i$.
\subsection{Solving the equation $\mathcal{F}_3=0$}
Using proposition \ref{prop expansion1},
\[
\begin{split}
\mathcal{F}_{3,k,i} =& \sqrt{r}\overline{\left(\overline{\int_{B_{k,i}}G^{-1}\eta}-\int_{B_{k,i}}G\eta\right)} \\
=& (-1)^k\text{\hbox{\rm conj}}^{k}\left(\int_{a_{k,1}}^{a_{k,i}}G_k^{-1}\eta_k+r\text{ log}(r)\text{hol}(r,X)+r\text{ hol}(r,X)\right) \\
&+(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(\int_{b_{k+1,1}}^{b_{k+1,i}}G_{k+1}^{-1}\eta_{k+1}+r\text{ log}(r)\text{hol}(r,X)+r\text{ hol}(r,X)\right)
\end{split}
\]
When $r=0$,
\[
\begin{split}
\mathcal{F}_{3,k,i} &=(-1)^k\text{\hbox{\rm conj}}^{k}\left(\int_{a_{k,1}}^{a_{k,i}}G_k^{-1}\eta_k\right)+(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(\int_{b_{k+1,1}}^{b_{k+1,i}}G_{k+1}^{-1}\eta_{k+1}\right) \\
&=(-1)^k\text{\hbox{\rm conj}}^{k}\left(\int_{a_{k,1}}^{a_{k,i}}\frac{1}{\delta_k z}dz\right)+(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(\int_{b_{k+1,1}}^{b_{k+1,i}}\frac{1}{\delta_{k+1}z}dz\right) \\
&=(-1)^k\text{\hbox{\rm conj}}^{k}\left(\delta_k^{-1}\log\left(\frac{a_{k,i}}{a_{k,1}}\right)\right)+(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(\delta_{k+1}^{-1}\log\left(\frac{b_{k+1,i}}{b_{k+1,1}}\right)\right)
\end{split}
\]
Thus, $\mathcal{F}_3=0$ and $\delta_k=1$ for $k=1,2,\ldots,N \Rightarrow \frac{b_{k+1,i}}{b_{k+1,1}}=\text{\hbox{\rm conj}}\left(\frac{a_{k,1}}{a_{k,i}}\right)$.
\subsection{Solving the equation $\mathcal{F}_4=0$}
Using proposition \ref{prop expansion1},
\[
\begin{split}
\mathcal{F}_{4,k,i} =&\frac{(-1)^k}{\sqrt{r}}\left(\overline{\int_{A_{k,i}}G^{-1}\eta}-\int_{A_{k,i}}G\eta\right)\\\\
=&\frac{1}{\sqrt{r}}\left[\text{\hbox{\rm conj}}^{k+1}\int_{A_{k,i}}G^{(-1)^{k+1}}\eta-\text{\hbox{\rm conj}}^k\int_{A_{k,i}}G^{(-1)^k}\eta\right]\\\\
=&\frac{1}{\sqrt{r}}\text{\hbox{\rm conj}}^{k+1}\left[\sqrt{r}\left(-2\pi i \text{res}_{b_{k+1,i}}G_{k+1}\eta_{k+1}+r\text{ holo}(r,X)\right)\right]\\
&-\frac{1}{\sqrt{r}}\text{\hbox{\rm conj}}^k\left[\sqrt{r}\left(2\pi i\text{ res}_{a_{k,i}}G_k\eta_k+r\text{ holo}(r,X)\right)\right]\\\\
=&\text{\hbox{\rm conj}}^{k+1}\left[-2\pi i\text{ res}_{b_{k+1,i}}G_{k+1}\eta_{k+1}+r\text{ holo}(r,X)\right]\\
&-\text{\hbox{\rm conj}}^k\left[2\pi i\text{ res}_{a_{k,i}}G_k\eta_k+r\text{ holo}(r,X)\right].\\
\end{split}
\]
Thus, when $r=0$,
\[
\begin{split}
\mathcal{F}_{4,k,i} =&\text{\hbox{\rm conj}}^{k+1}\left[-2\pi i\text{ res}_{b_{k+1,i}}G_{k+1}\eta_{k+1}\right]-\text{\hbox{\rm conj}}^k\left[2\pi i\text{ res}_{a_{k,i}}G_k\eta_k\right]\\
=&2\pi i(-1)^{k}\left[-\text{ \hbox{\rm conj}}^{k}\left(\text{res}_{a_{k,i}}G_{k}\eta_{k}\right)+\text{ \hbox{\rm conj}}^{k+1}\left(\text{res}_{b_{k+1,i}}G_{k+1}\eta_{k+1}\right)\right]\\
=&-4\pi \delta_k i(-1)^k\text{\hbox{\rm conj}}^k\left(\sum_{j=1,\neq i}^{n_k}\frac{a_{k,i}}{n_k^2\left(a_{k,i}-a_{k,j}\right)}-\sum_{j=1}^{n_{k-1}}\frac{a_{k,i}}{n_k n_{k-1}\left(a_{k,i}-b_{k,j}\right)}\right) \\
&+4\pi\delta_{k+1} i(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(-\sum_{j=1}^{n_{k+1}}\frac{b_{k+1,i}}{n_k n_{k+1}\left(b_{k+1,i}-a_{k+1,j}\right)}+\sum_{j=1,\neq i}^{n_k}\frac{b_{k+1,i}}{n_k^2\left(b_{k+1,i}-b_{k+1,j}\right)}\right). \\
\end{split}
\]
We will deal with this equation further in section 7.6 below.
\subsection{Solving the Equation $\mathcal{F}_5=0$}
Using proposition \ref{prop expansion2},
{\footnotesize
\[
\begin{split}
\overline{\int_{\partial D_{\epsilon}(0_k)}\frac{1}{G}\eta}-\int_{\partial D_{\epsilon}(0_k)}G\eta =&\left((-1)^k\text{\hbox{\rm conj}}^{k+1}\int_{\partial D_{\epsilon}(0_k)}G^{(-1)^k}\eta+(-1)^{k+1}\text{\hbox{\rm conj}}^k\int_{\partial D_{\epsilon}(0_k)}G^{(-1)^{k+1}}\eta\right)\\
=&\frac{1}{\sqrt{r}}(-1)^{k+1}\text{\hbox{\rm conj}}^k\left(2\pi i \text{ res}_{0}\frac{1}{G_k}\eta_{k}+\text{$r$ hol}(r,X)\right). \\
\end{split}
\]}
Thus,
\[
\begin{split}
\mathcal{F}_{5,k} =& 2\pi i \text{ res}_{0}\frac{1}{G_k}\eta_{k}+\text{$r$ hol}(r,X)\\
=&2\pi\delta_k^{-1} i+\text{$r$ hol}(r,X).\\
\end{split}
\]
When $r=0$,
\[
\mathcal{F}_{5,k}=2\pi\delta_k^{-1} i,\;k=1,...,N.
\]
We want this period to be $2 \pi i$. Thus, we need $\delta_k=1$ for $k=1,2,\ldots,N$.
\subsection{Uncovering the force equations and the non-horizontal period $\mathcal{T}_t$}
Our force equations could just be given by $\mathcal{F}_{4,k,i}$ for $k=1,\ldots,N$ and $i=1,\ldots,n_k$. However, the non-horizontal period $\mathcal{T}_t$ whose limit is $\mathcal{T}_0=\overline{T}$ doesn't have a clear relationship to the points $(a,b)$. Therefore, as done in \cite{tr2}, we will construct an isomorphism $(a,b)\mapsto(T,p,q)$.
Let $m=n_1+\cdots+n_N$. Given $p_{k,i}\in {\hbox{\bb C}}$, $k=1,\cdots,N$, $i=1,\cdots,n_k$, let $p\in{\hbox{\bb C}}^m$ be the vector whose components are $p_{k,i}$. Given $(T,p,q)\in {\hbox{\bb C}} \times {\hbox{\bb C}}^m \times {\hbox{\bb C}}^m$, define $(a,b)$ by
\[
a_{k,i}=\left(\text{\hbox{\rm conj}}^{k}p_{k,i}q_{k,1}\right)^{(-1)^k}
\]
\[
b_{k,i}=\left(\text{\hbox{\rm conj}}^{k}p_{k-1,i}q_{k,i}\right)^{(-1)^k}
\]
where $p_{k+N,i}=p_{k,i}e^T$ and $q_{k+N,i}=q_{k,i}$.\\
Note that the way $(a,b)$ are defined is similar to how they were defined in proposition \ref{main prop}. We get the $(a,b)$ in proposition \ref{main prop} if we let $q_{k,i}=1$ for $k=1,\ldots,N$ and $i=1,\ldots,n_k$. Also, our $(a,b)$ is a multiplicative version of the $(a,b)$ in \cite{tr2}.
If $\delta_k=1$ for $k=1,\ldots,N$ then
\[
\begin{split}
\mathcal{F}_{3,k,i}&=(-1)^k\text{\hbox{\rm conj}}^{k}\left(\log\left(\frac{a_{k,i}}{a_{k,1}}\right)\right)+(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(\log\left(\frac{b_{k+1,i}}{b_{k+1,1}}\right)\right)\\
&=\log{\left(\frac{p_{k,i}}{p_{k,1}}\right)}-\log{\left(\frac{p_{k,i}q_{k+1,i}}{p_{k,1}q_{k+1,1}}\right)}\\
&=\log{q_{k+1,i}}-\log{q_{k+1,1}}.
\end{split}
\]
If $\mathcal{F}_{3,k,i}=0$ then $\log{q_{k+1,i}}=\log{q_{k+1,1}}$. Hence, $q_{k,i}=q_{k,1}$ for $k=1,\cdots,N$ and $i=1,\cdots,n_k$. Thus, let $q_k=q_{k,1}$.\\
We finally deal with $\mathcal{F}_{4,k,i}$. Assume $\mathcal{F}_2=0,\mathcal{F}_3=0$, and $\delta_k=1, k=1,2,\ldots N$. Then,
{\footnotesize
\[
\begin{split}
\frac{\mathcal{F}_{4,k,i}}{-4\pi i}=&(-1)^k\text{\hbox{\rm conj}}^k\left(\sum_{j=1,\neq i}^{n_k}\frac{a_{k,i}}{n_k^2\left(a_{k,i}-a_{k,j}\right)}-\sum_{j=1}^{n_{k-1}}\frac{a_{k,i}}{n_k n_{k-1}\left(a_{k,i}-b_{k,j}\right)}\right) \\
&-(-1)^k\text{\hbox{\rm conj}}^{k+1}\left(-\sum_{j=1}^{n_{k+1}}\frac{b_{k+1,i}}{n_k n_{k+1}\left(b_{k+1,i}-a_{k+1,j}\right)}+\sum_{j=1,\neq i}^{n_k}\frac{b_{k+1,i}}{n_k^2\left(b_{k+1,i}-b_{k+1,j}\right)}\right) \\
=&(-1)^k\sum_{j \neq i}^{n_k}\frac{(p_{k,i}q_k)^{(-1)^k}}{n_k^2\left((p_{k,i}q_k)^{(-1)^k}-(p_{k,j}q_k)^{(-1)^k}\right)}\\
&-(-1)^k\sum_{j=1}^{n_{k-1}}\frac{(p_{k,i}q_k)^{(-1)^k}}{n_k n_{k-1}\left((p_{k,i}q_k)^{(-1)^k}-(p_{k-1,j}q_k)^{(-1)^k}\right)} \\
&+(-1)^{k}\sum_{j=1}^{n_{k+1}}\frac{(p_{k,i}q_{k+1})^{(-1)^{k+1}}}{n_k n_{k+1}\left((p_{k,i}q_{k+1})^{(-1)^{k+1}}-(p_{k+1,j}q_{k+1})^{(-1)^{k+1}}\right)}\\
&+(-1)^{k+1}\sum_{j \neq i}^{n_k}\frac{(p_{k,i}q_{k+1})^{(-1)^{k+1}}}{n_{k}^2\left((p_{k,i}q_{k+1})^{(-1)^{k+1}}-(p_{k,j}q_{k+1})^{(-1)^{k+1}}\right)} \\
=&\sum_{j \neq i}\frac{p_{k,i}+p_{k,j}}{n_k^2(p_{k,i}-p_{k,j})}+(-1)^k\left(\sum_{j=1}^{n_{k+1}}\frac{p_{k+1,j}^{(-1)^k}}{n_kn_{k+1}\left(p_{k+1,j}^{(-1)^k}-p_{k,i}^{(-1)^k}\right)}-\sum_{j=1}^{n_{k-1}}\frac{p_{k,i}^{(-1)^k}}{n_kn_{k-1}\left(p_{k,i}^{(-1)^k}-p_{k-1,j}^{(-1)^k}\right)}\right).
\end{split}
\]}
Thus, assuming $\mathcal{F}_2=0$ and $\mathcal{F}_3=0$, we get $\mathcal{F}_{4,k,i}=-4\pi i(-1)^k F_{k,i}$. Now, if $\{p_{k,i}\}$ is a balanced configuration then define $X_o$ as in the statement of Proposition \ref{main prop}. Because $q_{k,i}=1$, we get $\mathcal{F}(0,X_o)=0$, proving the first statement of Proposition \ref{main prop}.
In order to prove the converse, it is necessary to make some identifications since $\mathcal{F}_3=0$ only implies that $q_{k,i}=q_{k,1}$. We need $q_{k,1}=1$. Our identifications are multiplicative versions of the similar identifications in section $6.5$ of \cite{tr2}. Given complex numbers $\lambda_k$, let $a_{k,i}'=a_{k,i}\lambda_k$ and $b_{k,i}'=b_{k,i}\lambda_k$. Then $\mathcal{ F}_3'=\mathcal{F}_3$ and $\mathcal{F}_4'=\mathcal{F}_4$. Let $\left(\Sigma',G',\eta'\right)$ be the Weierstrass data corresponding to $a_{k,i}'$ and $b_{k,i}'$. Then the map $\phi:\Sigma \rightarrow \Sigma',z \in \overline{{\hbox{\bb C}}}_k \mapsto z \lambda_k$ is an isomorphism with $\phi^*G'dz=Gdz$ and $\phi^*\eta'=\eta$. Thus, the Weierstrass data $\left(\Sigma,G,\eta\right)$ and $\left(\Sigma',G',\eta'\right)$ are isomorphic and define equivalent minimal surfaces. Hence, the above identification makes sense:
\[(a,b) \sim (a',b') \Longleftrightarrow \forall k \; \exists \lambda_k \text{ such that } \forall i,\; a_{k,i}'=a_{k,i}\lambda_k,\; b_{k,i}'=b_{k,i}\lambda_k.\]
We can create similar identifications for $p$ and $q$:
\[p' \sim p \Longleftrightarrow \exists \lambda \text{ such that } \forall k,i, \; p_{k,i}'=p_{k,i}\lambda\]
\[ q' \sim q \Longleftrightarrow \forall k \; \exists \lambda_k \text{ such that } \forall i,\; q_{k,i}'=q_{k,i}\lambda_k.\]
A simple computation yields
\begin{lemma}
The map $(T,p,q) \mapsto (a,b)$ is an isomorphism. $\Box$
\end{lemma}
Using the identifications on $(a,b)$, $p$, and $q$, we get that $\mathcal{F}_3=0 \Rightarrow q_{k,1} \sim 1$. This proves the second part of Proposition \ref{main prop}.
\subsection{$D_2\mathcal{F}(0,X_0)$ is an isomorphism}
The next three lemmas are from \cite{tr2}. Lemmas \ref{lemma2} and \ref{lemma3} are the same as propositions $10$ and $11$ in \cite{tr2}. Our lemma \ref{lemma4} is partly proven in section 6.5 of \cite{tr2}.
\begin{lemma}[\cite{tr2}]
Let $E=\{(\alpha_k',\beta_k')\in\mathbb{C}^{n_k+n_{k-1}} | \sum\alpha_{k,i}'=\sum\beta_{k,i}'=0\}$. The partial differential of $\mathcal{F}_{1,k}$ with respect to $(\alpha_k,\beta_k)$ is an isomorphism from $E$ onto $\mathbb{C}^{n_k+n_{k-1}-2}$.
\label{lemma2}
\end{lemma}
\begin{proof}
See proposition $10$ in section $6.2$ of \cite{tr2}.
\end{proof}
\begin{lemma}[\cite{tr2}]
$$\sum_{k=1}^N\sum_{i=1}^{n_k}\mathcal{F}_{4,k,i}(t,X)=0\;\forall(t,X).$$
\label{lemma3}
\end{lemma}
\begin{proof}
See proposition $11$ in section $6.5$ of \cite{tr2}.
\end{proof}
\begin{lemma}[\cite{tr2}]
The partial differential of $\mathcal{F}$ evaluated at $(0,X_0)$ with respect to the variables $(\alpha,\beta),\gamma,q,p,\delta$ has the form
\[
\begin{bmatrix}\mathcal{ I}_1 & \cdot & 0 & 0 & 0 \\ 0 & \mathcal{I}_2 & 0 & 0 & 0 \\ \cdot & \cdot &\mathcal{I}_3 & 0 & \cdot \\ \cdot & \cdot & \cdot &\mathcal{I}_4 & \cdot \\ \cdot & \cdot & 0 & 0 & \mathcal{I}_5 \end{bmatrix}
\]
with $\mathcal{I}_k$ an invertible linear operator for $k=1,2,3,4,5$, and so it is invertible.
\label{lemma4}
\end{lemma}
\begin{proof}
The arguments for the first four entries of the top four rows are given in section $6.5$ of \cite{tr2}. We repeat those arguments. The key difference is that there is no fifth row or column in \cite{tr2}.
In the first row, $\mathcal{I}_1$ is invertible by lemma \ref{lemma2}. If $\alpha_k=\gamma_k$ and $\beta_k=\gamma_{k-1}$ then $\eta_k=\frac{1}{\delta_k z}G_kdz$, and so $\mathcal{F}_1=0$ independent of $q,p$, and $\delta$. Hence, there are zeroes in the last three entries of the first row.
The second row is clear because $\mathcal{F}_{2,k,i}=\gamma_{k,i}-\gamma_{k,1}$ when $r=0$ and is independent of $\alpha ,\beta ,q,p$, and $\delta$.
The identification on $q$ makes $\mathcal{I}_3$ invertible. The zero in the third row is because $\mathcal{F}_3$ is independent of $p$.
By lemma \ref{lemma3}, we can think of $\mathcal{ F}_4$ as a map into the subspace $\sum\mathcal{F}_{4,k,i}=0$. Also, $$\mathcal{I}_4=4\pi i(-1)^{k+1}\frac{\partial}{\partial p}F.$$
Thus, the non-degeneracy of the force equations implies that $\mathcal{I}_4$ is onto. The identification on $p$ implies that $\mathcal{I}_4$ is invertible.
When $r=0,\alpha_k=\gamma_k$, and $\beta_k=\gamma_{k-1}$, we get $\mathcal{F}_{5,k}=\frac{2\pi i}{\delta_k}$. Thus, $\mathcal{I}_5$ is invertible. The zeroes in row five are due to the fact that $\mathcal{F}_5$ is independent of $p$ and $q$ when $r=0,\alpha_k=\gamma_k$, and $\beta_k=\gamma_{k-1}$.
\end{proof}
Finally, we have shown that $D_2 \mathcal{F}(0,X_0)$ is an isomorphism, completing the proof of proposition \ref{main prop}. There are two free parameters $t\in\mathbb{R}$ and $T\in\mathbb{C}$. Thus, the implicit function theorem provides a three-dimensional space of solutions to the equation $\mathcal{F}(t,X)=0$. As discussed in \cite{hatr1}, this is the expected size of our space of minimal surfaces. Note that in \cite{tr2}, the surfaces are made up of domains ${\hbox{\bb C}}_k$.
The balanced configurations can be changed by complex linear transformations that do not affect the minimal surface. In our case, the domains are
punctured planes ${\hbox{\bb C}}_k^*$, and the balanced configurations can only be changed by complex multiplications. This explains the difference in the dimensions of the moduli spaces.
\section{Embeddedness and properties \ref{property1} and \ref{property2}}
We can use the technique in \cite{tr2} to prove that our surfaces are embedded. The only variation is that our surfaces have pairs of ends at each level. However, it turns out this is a minor difference when it comes to proving embeddedness. In the process of proving embeddedness, we also show that the surfaces satisfy properties \ref{property1} and \ref{property2}.
Let $\left(\Sigma,G,\eta\right)$ be the Weierstrass data given by proposition \ref{main prop} for some small positive $t$. In this section, it is convenient to express $\psi$ as
\[
\psi(z)=\left(\text{horiz}(z),\text{height}(z)\right)\in \mathbb{C}\times\mathbb{R}.
\]
The following proposition is essentially the same as proposition $12$ in section $7$ of \cite{tr2}. Parts $4,5$, and $6$ have slight differences. We include a calculation of the location of the ends at each level.
\begin{proposition}
There exists a constant $C$, not depending on $t$, such that:
\begin{enumerate}
\item
For any point $z\in {\bar {\hbox{\bb C}}}_k$ such that $\forall i$, $\abs{v_{k,i}}>\epsilon$, $\abs{w_{k,i}}>\epsilon$,
\[
\abs{\text{height}(z)-\text{height}(\infty_k)}\leq C.
\]
\item
For any point $z\in {\bar {\hbox{\bb C}}}_k$ such that $\frac{r}{\epsilon}<\abs{v_{k,i}}<\epsilon$,
\[
\abs{\text{height}(z)-\text{height}(\infty_k)-\frac{1}{n_k}\log{\abs{v_{k,i}(z)}}}\leq C.
\]
\item
\[
\abs{\text{height}(\infty_{k+1})-\text{height}(\infty_k)-\frac{1}{n_k}\log{r}}\leq C.
\]
\item
Choose $P_{k,i}\in\Sigma$ such that $v_{k,i}(P_{k,i})=\sqrt{r}$. Note that $G(P_{k,i})=1$. Then
\[
2\sqrt{r}\left(\text{horiz}(P_{k,j})-\text{horiz}(P_{k,i})\right)\rightarrow(-1)^k\text{ \hbox{\rm conj}}^{k+1}(a_{k,j}-a_{k,i})=\log{\overline{p_{k,j}}}-\log{\overline{p_{k,i}}}
\]
and
\[
2\sqrt{r}\left(\text{horiz}(P_{k,j})-\text{horiz}(P_{k-1,i})\right)\rightarrow(-1)^k\text{ \hbox{\rm conj}}^{k+1}(a_{k,j}-b_{k,i})=\log{\overline{p_{k,j}}}-\log{\overline{p_{k-1,i}}}.
\]
Thus, we can translate the surface such that $2\sqrt{r}\text{ horiz}(P_{k,i})\rightarrow \log{\overline{p_{k,i}}}\;\forall k,i$.
\item
Let $0<\sigma<\frac{1}{2}$. The image of the domain $r^{1-\sigma}<\abs{v_{k,i}}<r^\sigma$ converges to a catenoid with necksize $\frac{2\pi}{n_k}$, and it is contained in a vertical cylinder with radius $\frac{5r^{\sigma-1/2}}{n_k}$.
\item
The non-horizontal period of $\psi$ is
\[
\mathcal{T}=\text{Re}\int_{B_{1,1}}\phi\simeq \left(\frac{\overline{T}}{2\sqrt{r}},\left(\sum_{k=1}^N\frac{1}{n_k}\right)\log{r}\right).
\]
\item
For each $k=1,\ldots,N$,
\[
2\sqrt{r}\text{Re}(\text{horiz}(0_k))\rightarrow (-1)^{k+1}\infty
\]
and
\[
2\sqrt{r}\text{Re}(\text{horiz}(\infty_k))\rightarrow (-1)^k\infty.
\]
\end{enumerate}
\label{embed prop1}
\end{proposition}
\begin{proof}
The proof of this proposition uses the same techniques used in the proof of proposition $8$ in section $5$ of \cite{tr2}. We show the details of the proof of part $7$.
Let $z_k\in{\bar {\hbox{\bb C}}}_k$ be the point such that $v_{k,1}(z_k)=\epsilon$ for $k=1,\cdots,N$, and let $z_1$ be the base point for $\psi$. Suppose $z\in\overline{\mathbb{C}}_k$. Since $\text{Re}(\text{horiz}(z))=\text{Re}\int_{z_1}^{z_k}\phi_1+\text{Re}\int_{z_k}^z\phi_1$ and $\text{Re}\int_{z_1}^{z_k}\phi_1$ is bounded, we only need to consider $\text{Re}\int_{z_k}^z\phi_1$. In that case,
\[
\begin{split}
2\sqrt{r}\text{Re}(\text{horiz}(z)) &=2\sqrt{r}\text{Re}\int_{z_k}^z\frac{1}{2}\left(\frac{1}{G}-G\right)\eta\\
&=\text{Re}\sqrt{r}(-1)^k\int_{z_k}^z\left(\frac{1}{\sqrt{r}G_k}-\sqrt{r}G_k\right)\eta\\
&=\text{Re}\sqrt{r}(-1)^k\int_{z_k}^z\frac{1}{\sqrt{r}G_k}\left(\eta_k+r\text{ holo}(r,X)dz\right)+\text{Re}\sqrt{r}(-1)^{k+1}\int_{z_k}^z\sqrt{r}G_k\eta\\
&=\text{Re}\left((-1)^k\int_{z_k}^z\frac{1}{\delta_k z}dz+r\text{ holo}(r,X)\right)\\
&=(-1)^k\left(\log{\abs{z}}-\log{\abs{z_k}}\right)+\text{Re}\left(r\text{ holo}(r,X)\right)\\
&\rightarrow(-1)^k\left(\log{\abs{z}}-\log{\abs{z_k}}\right).
\end{split}
\]
Thus,
\[
2\sqrt{r}\text{Re}(\text{horiz}(\infty_k))\rightarrow(-1)^k\infty
\]
and
\[
2\sqrt{r}\text{Re}(\text{horiz}(0_k))\rightarrow(-1)^{k+1}\infty.
\]
\end{proof}
\begin{proposition}[\cite{tr2}]
For small $t>0$, the minimal surface given by proposition $\ref{main prop}$ is embedded.
\label{embed prop2}
\end{proposition}
The same proposition is proven in section $7$ of \cite{tr2}. The only difference in the proof is due to the fact that we have two ends at each level instead of one. In \cite{tr2}, Traizet splits $\mathbb{R}^3$ into the horizontal slabs
\[
\text{height}(\infty_{k+1})+\frac{\sigma}{n_{k+1}}\left|\log{r}\right|\leq x_3 \leq \text{height}(\infty_k)-\frac{\sigma}{n_k}\left|\log{r}\right|
\]
and
\[
\text{height}(\infty_k)-\frac{\sigma}{n_k}\abs{\log{r}}\leq x_3 \leq \text{height}(\infty_k)+\frac{\sigma}{n_k}\abs{\log{r}}.
\]
Traizet shows that the intersection of the first slab with $\psi(\Sigma)$ consists of the $n_k$ disjoint components $C_{k,i,t}$, each one converging to a catenoid. Therefore, this portion of the surface is embedded.
Then, he shows that the intersection of the second slab with $\psi(\Sigma)$ is the region $E_{k,t}$, which is a graph over the plane and hence embedded. The difference here is that we have two embedded ends in $E_{k,t}$. However, by part $7$ of proposition \ref{embed prop1}, $2\sqrt{r}\,\text{Re}(\text{horiz}(0_k))\rightarrow(-1)^{k+1}\infty$ and $2\sqrt{r}\,\text{Re}(\text{horiz}(\infty_k))\rightarrow(-1)^k\infty$. Hence, the ends in each level are disjoint. Thus, we get that $E_{k,t}$ is embedded. Therefore, $\psi(\Sigma)$ is embedded. Proposition \ref{embed prop1} together with the proof of proposition \ref{embed prop2} shows that our surfaces satisfy properties \ref{property1} and \ref{property2}.
\addcontentsline{toc}{section}{References}
\bibliographystyle{plain}
\def\section{\@startsection{section}{0}{\z@}{5.5ex plus .5ex minus
1.5ex}{2.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{1}{\z@}{3.5ex plus .5ex minus
1.5ex}{1.3ex plus .2ex}{\normalsize\bf}}
\def\subsubsection{\@startsection{subsubsection}{2}{\z@}{-3.5ex plus
-1ex minus -.2ex}{2.3ex plus .2ex}{\normalsize\sl}}
\renewcommand{\@makecaption}[2]{%
\vskip 10pt
\setbox\@tempboxa\hbox{\small #1: #2}
\ifdim \wd\@tempboxa >\hsize
\small #1: #2\par
\else
\hbox to\hsize{\hfil\box\@tempboxa\hfil}
\fi}
\def\citenum#1{{\def\@cite##1##2{##1}\cite{#1}}}
\def\citea#1{\@cite{#1}{}}
\newcount\@tempcntc
\def\@citex[#1]#2{\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi
\@tempcnta\z@\@tempcntb\m@ne\def\@citea{}\@cite{\@for\@citeb:=#2\do
{\@ifundefined
{b@\@citeb}{\@citeo\@tempcntb\m@ne\@citea\def\@citea{,}{\bf }\@warning
{Citation `\@citeb' on page \thepage \space undefined}}%
{\setbox\z@\hbox{\global\@tempcntc0\csname b@\@citeb\endcsname\relax}%
\ifnum\@tempcntc=\z@ \@citeo\@tempcntb\m@ne
\@citea\def\@citea{,}\hbox{\csname b@\@citeb\endcsname}%
\else
\advance\@tempcntb\@ne
\ifnum\@tempcntb=\@tempcntc
\else\advance\@tempcntb\m@ne\@citeo
\@tempcnta\@tempcntc\@tempcntb\@tempcntc\fi\fi}}\@citeo}{#1}}
\def\@citeo{\ifnum\@tempcnta>\@tempcntb\else\@citea\def\@citea{,}%
\ifnum\@tempcnta=\@tempcntb\the\@tempcnta\else
{\advance\@tempcnta\@ne\ifnum\@tempcnta=\@tempcntb \else\def\@citea{--}\fi
\advance\@tempcnta\m@ne\the\@tempcnta\@citea\the\@tempcntb}\fi\fi}
\makeatother
\input sbookp_rosetta.tex
\begin{document}
\begin{titlepage}
\pubblock
\vfill
\def\thefootnote{\fnsymbol{footnote}}
\Title{Higgs Pseudo-Observables,\\[0.3cm]
Second Riemann Sheet and All That
\footnote[9]{This work is supported by the European Community's Marie Curie Research
Training Network {\it Tools and Precision Calculations for
Physics Discoveries at Colliders} under contract
MRTN-CT-2006-035505, by the U.S. Department of Energy under
contract No. DE-AC02-98CH10886 and by the Deutsche
Forschungsgemeinschaft through Sonderforschungsbereich/Transregio 9
{\it Computergest\"utzte Theoretische Teilchenphysik}.}}
\vfill
\Author{Giampiero Passarino
\email{[email protected]}}
\Address{\csumb}
\Author{Christian Sturm
\email{[email protected]}}
\Address{\csumc}
\Author{Sandro Uccirati
\email{[email protected]}}
\Address{\csumd}
\vfill
\vfill
\begin{Abstract}
\noindent
The relation between physical observables measured at LHC and Tevatron and
standard model Higgs pseudo-observables (production cross section and partial
decay widths) is revised by extensively using the notion of the Higgs complex pole
on the second Riemann sheet of the $S\,$-matrix. The extension of their definition
to higher orders is considered, confronting the problems that arise when QED(QCD)
corrections are included in computing realistic observables. Numerical results
are presented for pseudo-observables related to the standard model Higgs boson
decay and production. The relevance of the result for exclusion plots of the
standard model Higgs boson for high masses (up to $600\,$GeV) is discussed.
Furthermore, a recipe for the analytical continuation of Feynman loop integrals
from real to complex internal masses and complex Mandelstam invariants is
thoroughly discussed.
\end{Abstract}
\vfill
\begin{center}
Keywords: Feynman diagrams, Loop calculations, Radiative corrections,
Higgs physics \\[5mm]
PACS classification: 11.15.Bt, 12.38.Bx, 13.85.Lg, 14.80.Bn, 14.80.Cp
\end{center}
\end{titlepage}
\def\thefootnote{\arabic{footnote}}
\setcounter{footnote}{0}
\small
\thispagestyle{empty}
\tableofcontents
\normalsize
\clearpage
\setcounter{page}{1}
\section{Introduction \label{intro}}
The search for a mechanism explaining electroweak symmetry breaking has been
a major goal for many years, in particular the search for a standard model (SM)
Higgs boson, see for instance Ref.~\cite{Phenomena:2009pt} and
Ref.~\cite{Fernandez:2009ac}. As a result of this an intense effort in
the theoretical community has been made to produce the most accurate NLO
and NNLO predictions, see
Refs.~\cite{Anastasiou:2008tj,Grazzini:2008zz,Actis:2008uh,Harlander:2009my}.
There is, however, a point that has been ignored in all these calculations: the
Higgs boson is an unstable particle and should be removed from the in/out bases in the
Hilbert space, without destroying the unitarity of the theory. Therefore,
concepts as the {\em production} of an unstable particle or its {\em partial decay
widths} do not have a precise meaning and should be replaced by a
conventionalized definition which respects first principles of quantum field theory
(QFT).
The quest for a proper treatment of a QFT of unstable particles dates back to
the sixties and to the work of Veltman~\cite{Veltman:1963th} (for earlier
attempts see Ref.~\cite{peierls}); more recently the question has been readdressed
by Sirlin and collaborators~\cite{Grassi:2000dz}.
Alternative approaches, within the framework of an effective theory can be found
in Ref.~\cite{Beneke:2003xh}.
In this paper we discuss the relation between physical observables and Higgs
pseudo-observables by considering the extension of their definition to higher
orders in perturbation theory, confronting the problems that arise when perturbative
corrections in quantum electrodynamics (QED) and quantum chromodynamics
(QCD) are included. Numerical results are also presented.
Our work can be seen as an extension of complex-mass schemes to include complex
external momenta (for previous work see also Ref.~\cite{Valent:1974bd}),
addressing systematically the question of the analytical continuation of Feynman
loop integrals.
This paper is organized as follows. In \sect{theP} we summarize the conceptual
setup. In \sect{CP} we present general arguments on complex poles.
In \sects{PDW}{UNI} we discuss pseudo-observables, on-shell observables and
unitarity. The analytical continuation of Feynman
loop integrals into the second Riemann sheet of the $S\,$-matrix is
examined in \sect{allcmplx}.
In \sect{theQs} we present the inclusion of QED and QCD corrections and
renormalization schemes are highlighted in \sect{schemes}.
Numerical results are given in \sect{Nres} and in \sect{Conclu} we close
with our conclusions.
\section{Formulation of the problem \label{theP}}
There are two old questions in relating measurements to theoretical predictions:
\begin{itemize}
\item[--] Experimenters (should) extract so-called {\em realistic observables} from
raw data, e.g. $\sigma (p p \to \gamma \gamma + X)$ and need to present
results in a form that can be useful for comparing them with theoretical
predictions, i.e. the results should be transformed into pseudo-observables;
during the deconvolution procedure one should also account for the interference
background -- signal;
\item[--] Theorists (should) compute pseudo-observables using the best available
technology and satisfying a list of demands from the self-consistency of
the underlying theory~\cite{Bardin:1999gt}.
\end{itemize}
Almost from the start it is clear that a common language must be established
in order to avoid misunderstandings and confusion. A typical example can be found
in Higgs physics where, frequently, one talks about {\em Higgs production
cross section} or {\em Higgs partial decay widths}.
After the discovery phase, in absence of which the future of high energy
physics cannot be ascertained, one will need to probe the properties
of the discovered resonance, like spin and couplings. In this case
different sources will start talking about the same thing but with
different languages%
. We will indicate a reasonable language within the context of a
perturbative expansion of a gauge-invariant QFT in this paper.
The Higgs boson, like the $W$ and $Z$ bosons, is an unstable
particle; as such it should be removed from the in/out bases in the
Hilbert space, without changing the unitarity of the theory.
As mentioned before, concepts as the production of an unstable particle
or its partial decay widths, not having a precise meaning, are only an
approximation of a more complete description.
The inconsistencies associated with the on-shell LSZ formulation of
unstable external particles become particularly severe starting from two loops,
as described in Ref.~\cite{Actis:2008uh}.
Suppose that we want to combine a Higgs production mechanism, say
gluon-gluon fusion, with the subsequent decay $H \to \gamma \gamma$. The
process to be considered is, therefore, $pp \to \gamma \gamma + X$
and it is made of a part that defines the signal, e.g.
\bq
pp \to g g ( \to H \to \gamma \gamma ) + X,
\end{equation}
and by a non-resonant background. The question is: how to extract
from the data, without ambiguities, a pseudo-observable to be termed
{\em Higgs partial decay width into two photons} which, at the same time,
does not violate first principles? Once again, one should be aware that there
is no Higgs boson in the in-state, therefore the matrix element
$<\gamma \gamma\; {\rm out} | H\; {\rm in}>$ is not definable in QFT and
this ill-defined quantity should be replaced by a pseudo-observable which
closely resembles the intuitive concept of a decay width, can be
unambiguously extracted from the data and respects all fundamental
properties of the theory; in this way we replace a {\em non existing}
observable with a conventional definition.
A proposal in this direction can be found in Ref.\cite{Grassi:2000dz};
here we revise the proposal, improving it by considering the extension
to higher orders in perturbation theory, confronting the problems that
arise when QED(QCD) corrections have to be included and present numerical
results for Higgs physics.
At the parton level the $S\,$-matrix for the process $i \to f$ can be written
as
\bq
S_{fi} = V_i(s)\,\Delta_{{\scriptscriptstyle{H}}}(s)\,V_f(s) + B_{if}(s),
\label{Smat}
\end{equation}
where $V_i$ is the production vertex $i \to H$ (e.g. $gg \to H$), $V_f$
is the decay vertex $H \to f$ (e.g. $H \to \gamma \gamma$), $\Delta_{{\scriptscriptstyle{H}}}$
is the Dyson re-summed Higgs propagator and $B_{if}$ is the non-resonant
background (e.g. $gg \to \gamma \gamma$ boxes). In the next section we
will introduce the notion of complex pole.
A vertex is defined by the following decomposition~\cite{Grassi:2001bz},
\bq
V_f(s) = \sum_a\,V^a_f\lpar s\,,\,\{S\}\rpar\,F^a_f\lpar \{p_f\}\rpar
\end{equation}
where $s = - P_{{\scriptscriptstyle{H}}}^2$ (with $P_{{\scriptscriptstyle{H}}} = \sum_f p_f$), $s\,\oplus\,\{S\}$
is the set of Mandelstam invariants that characterize the process
$H \to f$, $V^a_f$ are scalar form factors and the $F^a_f$ contain spinors,
polarization vectors, etc.
\section{The complex pole \label{CP}}
In this section we introduce the notion of the complex pole~\cite{Stuart:1991xk}
following closely the analysis of Ref.~\cite{Actis:2006rc}.
Let $\Delta_i$ be the lowest order propagator for particle $i$ and ${\overline \Delta}_i$ the
corresponding dressed propagator, i.e.
\bq
{\overline \Delta}_i = -\,\frac{\Delta_i}{1 + \Delta_i\,\Sigma_{ii}}.
\end{equation}
Let us analyze the definition of the dressed propagator in more detail:
to begin with, consider a skeleton expansion of the self-energy
$S= 16\,\pi^4\,i\,\Sigma$ with propagators that are resummed up to $\ord{n}$ and define
\bq
\Delta^{(n)}_i(s) = -\,\Delta^{(0)}_i(s)\,\Bigl[
1 + \Delta^{(0)}_i(s)\,\Sigma^{(n)}_{ii}\lpar s\,,\,
\Delta^{(n-1)}_i(s)\rpar\Bigr]^{-1},
\end{equation}
where, omitting an overall factor $-i/(2\,\pi)^4$, the Born propagator
(tensor structures are easily included) is
\bq
\Delta^{(0)}_i(s) = \frac{1}{s - m^2_i}.
\end{equation}
If it exists, we define a dressed propagator as the formal
limit~\cite{Veltman:1963th}
\bq
{\overline \Delta}_i(s) = \lim_{n \to \infty}\,\Delta^{(n)}_i(s),
\qquad
{\overline \Delta}_i(s) =
-\,\Delta^{(0)}_i(s)\,\Bigl[
1 + \Delta^{(0)}_i(s)\,\Sigma_{ii}\lpar s\,,\,{\overline \Delta}_i(s)\rpar\Bigr]^{-1},
\label{dr}
\end{equation}
which coincides with the Schwinger-Dyson solution for the propagator.
The Higgs boson complex pole ($s_{\ssH}$) is the solution of the equation
\bq
s_{\ssH} - M^2_{{\scriptscriptstyle{H}}} + \Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH}) = 0,
\label{eqCP}
\end{equation}
where $M^2_{{\scriptscriptstyle{H}}}$ is the renormalized Higgs boson mass; here we assume that all
counter-terms have been introduced to make the off-shell Green's function
ultraviolet finite, respecting locality of the counter-terms.
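In practical calculations \eqn{eqCP} is solved numerically in the complex $s\,$-plane. Purely as an illustration of the procedure (the self-energy used below is a toy placeholder with a single two-particle threshold, not the SM expression, and all numbers are arbitrary), a fixed-point iteration $s^{(n+1)}=M^2_{{\scriptscriptstyle{H}}}-\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s^{(n)})$ can be sketched as follows:
\begin{verbatim}
import cmath

MH2 = 500.0**2     # renormalized mass squared (illustrative, GeV^2)
g2  = 0.05         # toy coupling
m2  = 80.4**2      # toy internal mass squared (W-like)

def sigma(s):
    # toy self-energy with a branch point at s = 4*m2; a mere
    # placeholder for Sigma_HH continued to complex s
    return g2 * s * cmath.log((4.0*m2 - s)/MH2)

s = complex(MH2)                 # start on the real axis
for _ in range(200):             # fixed point of s = MH2 - sigma(s)
    s = MH2 - sigma(s)

mu = s.real**0.5                 # s_H = mu_H^2 - i mu_H gamma_H
gamma = -s.imag/mu
print(mu, gamma)
\end{verbatim}
The iteration converges as long as the derivative of the self-energy stays small; otherwise a Newton step on \eqn{eqCP} can be used instead. The physical case requires, in addition, the analytical continuation discussed in \sect{allcmplx}.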
We now examine more carefully the self-energy to all orders in perturbation
theory since, often, there is some confusion with statements that are
formulated to all orders and applied to a truncated perturbative expansion.
Consider now the all-orders self-energy,
\bq
\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s,M^2_{{\scriptscriptstyle{H}}},\xi) = \sum_{n=1}^{\infty}\,
\Sigma^{(n)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s,M^2_{{\scriptscriptstyle{H}}},\xi)\,g^{2n},
\end{equation}
where $\xi$ is the gauge parameter (extension to more than one gauge
parameter is straightforward) and $g$ is the renormalized coupling constant.
From arguments based on Nielsen identities, see Ref.~\cite{Grassi:2001bz}, we
know that
\bq
\frac{\partial}{\partial \xi}\,s_{\ssH} = 0,
\qquad
\frac{\partial}{\partial \xi}\,\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi) = 0,
\label{NI}
\end{equation}
i.e. the location of the complex pole is $\xi$ independent; as a
consequence the self-energy is $\xi$ independent too, since the two differ
by a renormalized quantity, obviously $\xi$ independent.
We consider first the one-loop approximation for the self-energy:
from its explicit expression we are able to derive the following relation:
\bq
\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s,M^2_{{\scriptscriptstyle{H}}},\xi) =
\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s,M^2_{{\scriptscriptstyle{H}}}) +
( s - M^2_{{\scriptscriptstyle{H}}})\,\Phi_{{\scriptscriptstyle{H}}}(s,M^2_{{\scriptscriptstyle{H}}},\xi).
\label{olSE}
\end{equation}
where, in a general $R_{\xi}$ gauge, we obtain
\bq
M^2_{_W}\Phi_{{\scriptscriptstyle{H}}} =
-\frac{1}{8}\bigg\{
(s \!+\! M^2_{{\scriptscriptstyle{H}}})\Bigl[
B_d(s,M^2_{_Z},M^2_{_Z}; \xi_{{\scriptscriptstyle{Z}}}) + 2\, B_d(s,M_{_W},M_{_W}; \xi_{{\scriptscriptstyle{W}}})
\Bigr]
+ 2\,A_d(M_{_Z},\xi_{{\scriptscriptstyle{Z}}}) + 4\,A_d(M_{_W},\xi_{{\scriptscriptstyle{W}}})
\bigg\},
\nonumber
\end{equation}
\bq
B_d\lpar s,m,m,\xi\rpar =
B_0\lpar s,\xi\,m,\xi\,m\rpar - B_0\lpar s,m,m\rpar,
\quad
A_d\lpar m,\xi \rpar = A_0(\xi\,m) - A_0(m).
\end{equation}
The symbols $A_0, B_0,\,$ etc. are the usual scalar, one-loop functions.
It needs to be stressed that the splitting between gauge dependent and gauge
independent quantities is only defined modulo a $\xi\,$-independent constant.
Our definition of the invariant part is that it coincides with the
expression in the 't Hooft-Feynman gauge (i.e. $\xi = 1$).
Furthermore, finite renormalization (i.e. replacing renormalized
parameters with a set of {\em experimental} data points after having
removed ultraviolet poles by means of local counter-terms) amounts to
replacing
\bq
M^2_{{\scriptscriptstyle{H}}} = s_{\ssH} + \Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi),
\label{CPmren}
\end{equation}
showing that
\bq
\frac{\partial}{\partial \xi}\,
\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH},s_{\ssH},\xi) = 0,
\end{equation}
so that, at one-loop, the Higgs complex pole is gauge parameter independent
if the self-energy is computed at $M^2_{{\scriptscriptstyle{H}}} = s_{\ssH}$, the basis of the
so-called complex-mass scheme (see Ref.~\cite{Denner:2005fg} and also
Ref.~\cite{Actis:2006rc}).
From \eqn{NI} and from the one-loop result in \eqn{olSE}, we derive the
following
\arraycolsep 0.14em\begin{eqnarray}
\Sigma^{(n)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi) &=&
\Sigma^{(n)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}}) +
\Sigma^{(n)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,\xi}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi),
\nonumber\\
\Sigma^{(n)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,\xi}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi) &=&
\Sigma^{(n-1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}})\,
\Phi_{{\scriptscriptstyle{H}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}},\xi).
\label{niII}
\end{eqnarray}
Using \eqn{niII} we can rewrite \eqn{eqCP} at the two-loop level as
\bq
M^2_{{\scriptscriptstyle{H}}} =
s_{\ssH}
+ g^2\,\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},s_{\ssH})
+ g^4\Bigl[
\Sigma^{(2)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},s_{\ssH}) +
\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},s_{\ssH})\,
\frac{\partial}{\partial M^2_{{\scriptscriptstyle{H}}}}\,
\Sigma^{(1)}_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}\,;\,{\scriptscriptstyle{I}}}(s_{\ssH},M^2_{{\scriptscriptstyle{H}}})\Bigr|_{M^2_{{\scriptscriptstyle{H}}}=s_{\ssH}}
\Bigr].
\label{eqCPtl}
\end{equation}
It is worth noting that \eqn{eqCPtl} can be easily generalized to all
orders in perturbation theory showing that, order-by-order, the gauge
dependent part of the self-energy drops out in the equation for the complex
pole of the particle. The complex pole, sitting on the second Riemann sheet of
the $S\,$-matrix, is usually parametrized as
\bq
s_{\ssH} = \mu^2_{\ssH} - i\,\mu_{\ssH}\,\gamma_{\ssH}.
\end{equation}
It is worth noting that a consistent treatment of external ($s$) and
internal ($M^2_{{\scriptscriptstyle{H}}}$) masses allows the extension of the complex mass
scheme beyond one-loop, without the need of expanding the self-energy
around $s_{\ssH} = \mu^2_{\ssH}$, as frequently done in the literature.
In partial contrast to the traditional complex mass scheme,
Ref.~\cite{Denner:2005fg}, in our approach (described in
Ref.~\cite{Actis:2006rc}) it is the finite renormalization equation and
not the Lagrangian that is modified.
Indeed, calling the scheme {\em complex mass scheme} is somewhat misleading;
to the requested order we replace everywhere the renormalized mass
$M^2_{{\scriptscriptstyle{B}}}$ with $s_{{\scriptscriptstyle{B}}} + \Sigma_{{\scriptscriptstyle{B}}\ssB}(s_{{\scriptscriptstyle{B}}})$ which is real by
construction;
if only one loop is needed then $M^2_{{\scriptscriptstyle{B}}} \to s_{{\scriptscriptstyle{B}}}$ everywhere, which
justifies the name {\em complex mass}.
The quest for gauge invariance and the consequent introduction of a
complex pole instead of an on-shell mass entail a certain degree of
ambiguity in defining the Higgs boson mass (as well as the mass of any
unstable particle).
The most convenient choice, for all practical purposes, is represented
by the square root of the real part of $s_{\ssH}$, although
\bq
{\overline\mu}^2_{\ssH} = \mu_{\ssH}\,\lpar \mu^2_{\ssH} + \gamma^2_{{\scriptscriptstyle{H}}}\rpar^{1/2}
\label{MHdef}
\end{equation}
also has several advantages~\cite{Ghinculov:1996py} and will be used in
our numerical results.
There is a final comment for this section: the complex pole for an unstable
particle, parametrized according to \eqn{eqCP}, must correspond to a negative
imaginary part; otherwise, even the Wick rotation cannot be safely performed.
Consider the case of the Higgs boson: $ii$ channels that do not
satisfy the negativity condition for the imaginary part below the $4\,m^2_i$
(real) threshold are excluded in the evaluation of $s_{\ssH}$.
As we already mentioned, the contribution to the imaginary part of $s_{\ssH}$
from a given channel below the corresponding real threshold ($WW$, $ZZ$
and $\overline t t$) represents an approximation to the corresponding $4f$ and
$6f$ cuts, i.e. $H \to WW, ZZ \to 4f$ etc. This approximation is acceptable
only when the corresponding $\gamma_{\ssH}$ is positive, a condition which
fails at one-loop for $\overline t t$ intermediate states when the top quark
mass is kept real; in this case $\overline t t$ intermediate states never
contribute, in our scheme, to $\gamma_{\ssH}$ below threshold, i.e. they are discarded.
It is interesting to note that this problem completely disappears if we allow
for a top quark complex pole (instead of real on-shell mass).
Numerical examples will be discussed in \sect{Nres}; unfortunately the top
quark total (on-shell) width is poorly known, which induces large
uncertainties on the corrections.
In the numerical analysis we use $\Gamma_t \le 13.1\,$GeV, based on the
experimental upper limit of Ref.~\cite{Aaltonen:2008ir}.
\section{Extracting a partial decay width \label{PDW}}
In this section we examine our options to define a pseudo-observable
which is related, as closely as possible, to a realistic cross section
and shares as many features as possible with the corresponding on-shell
definition of a partial decay width.
If we insist that $|H>$ is an asymptotic state in the Hilbert space
then the observable to consider will be $<f\,{\rm out}\,|\,H\,{\rm in}>$,
otherwise one should realize that for stable particles the proof of the LSZ
reduction formulas depends on the existence of asymptotic states
\bq
|\,p\,{\rm in}\, > = \lim_{t \to -\,\infty}\int\,d^3 x\,H(x)\,i\,
{\partial_t}\!\!\!\!\!^{^{\leftrightarrow}} \,
e^{i\,\spro{p}{x}}\,|\,0\,>,
\end{equation}
(in the weak operator sense).
For unstable particles the energy is complex so that this limit either
diverges or vanishes.
Although a modification of the LSZ reduction formulas has been proposed long
ago for unstable particles, see Ref.~\cite{Weldon:1975gu}, we prefer an
alternative approach where one considers extracting information on the Higgs
boson directly from
\bq
<\,f\;{\rm out}\,|\,H\,>\,<\,H\,|\,i\;{\rm in}\,> +
\sum_{n\,\not=\,H}\,
<\,f\;{\rm out}\,|\,n\,>\,<\,n\,|\,i\;{\rm in}\,>,
\end{equation}
for some initial state $i$ and some final state $f$ and where
$\{n\}\,\oplus\,H$ is a complete set of states (not as in the in/out bases).
As we are about to see, the price to be paid is the necessity of moving into
the complex plane. Define $\Pi_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s)$ as
\bq
\Pi_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s) = \frac{\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s) - \Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH})}{s-s_{\ssH}},
\end{equation}
then the Dyson re-summed Higgs propagator becomes
\bq
\Delta_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s)= (s - s_{\ssH})^{-1}\,\Bigl[ 1 + \Pi_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s)\Bigr]^{-1},
\qquad
Z_{{\scriptscriptstyle{H}}} = 1 + \Pi_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}.
\label{tres}
\end{equation}
Using \eqn{tres} we can write \eqn{Smat} as
\bq
S_{fi} = \Bigl[ Z^{-1/2}_{{\scriptscriptstyle{H}}}(s)\,V_i(s)\Bigr]\,
\frac{1}{s - s_{\ssH}}\,
\Bigl[ Z^{-1/2}_{{\scriptscriptstyle{H}}}(s)\,V_f(s)\Bigr] + B_{if}(s).
\end{equation}
From the $S\,$-matrix element for a physical process $i \to f$ we extract the
relevant pseudo-observable,
\bq
S\lpar H_c \to f\rpar = Z^{-1/2}_{{\scriptscriptstyle{H}}}(s_{\ssH})\,V_f(s_{\ssH}),
\label{PO}
\end{equation}
which is gauge parameter independent -- by construction -- and satisfies the
relation
\bq
S_{fi} = \frac{S\lpar i \to H_c \rpar\,S\lpar H_c \to f \rpar}{s - s_{\ssH}} +
\hbox{non resonant terms}.
\end{equation}
The partial decay width is further defined as
\bq
\mu_{\ssH}\,\Gamma\lpar H_c \to f\rpar = \frac{(2\,\pi)^4}{2}\,\int\,
d\Phi_f\lpar P_{{\scriptscriptstyle{H}}}\,,\,\{p_f\}\rpar\,
\sum_{\rm spins}\,\Bigr| S\lpar H_c \to f\rpar \Bigr|^2,
\label{GPO}
\end{equation}
where the integration is over the phase space spanned by $| f >$, with the
constraint $P_{{\scriptscriptstyle{H}}} = \sum\,p_f$. One should not confuse phase space and
the real value of $s= -P^2_{{\scriptscriptstyle{H}}}$, where the realistic observable is
measured, with the complex value for $s$, where gauge invariant
loop corrections must be computed.
The choice of the $P^2_{{\scriptscriptstyle{H}}}$ (phase space) at which to define the
pseudo-observable is conventional, e.g. one can use the real part
of $s_{\ssH}$. Indeed, the r.h.s. of \eqn{PO} satisfies the property
\bq
\frac{\partial}{\partial \xi}\,Z^{-1/2}_{{\scriptscriptstyle{H}}}(s_{\ssH})\,V_f(s_{\ssH}) = 0
\end{equation}
to all orders in perturbation theory. If we define
\bq
V_f\lpar s,M^2_{{\scriptscriptstyle{H}}}\rpar = \sum_{n=0}^{\infty}\,g^{2 n + 1}\,\Bigl[
V^{(n)}_{f\,;\,{\scriptscriptstyle{I}}}\lpar s,M^2_{{\scriptscriptstyle{H}}}\rpar +
V^{(n)}_{f\,;\,\xi}\lpar s,M^2_{{\scriptscriptstyle{H}}}\rpar \Bigr],
\end{equation}
we obtain, expanding in powers of the coupling constant $g$, that
\bq
V^{(1)}_{f;\,\xi}(s_{\ssH},s_{\ssH}) =
\frac{1}{2}\,V^{(0)}\Phi_{\!{\scriptscriptstyle{H}}}(s_{\ssH},s_{\ssH}),
\qquad
V^{(2)}_{f;\,\xi}( s_{\ssH},s_{\ssH}) =
- \frac{1}{2}\Phi_{\!{\scriptscriptstyle{H}}}(s_{\ssH},s_{\ssH})\Bigl[
V^{(1)}_{f;\,\xi}( s_{\ssH},s_{\ssH})
- \frac{1}{4\,}V^{(0)}\Phi_{\!{\scriptscriptstyle{H}}}(s_{\ssH},s_{\ssH})
\Bigr],
\end{equation}
etc.
One last basic fact: Nielsen identities give the structure
of the gauge-parameter-dependent parts of the vertex and of the self-energy, order by order in perturbation
theory. It is important to stress at this point that the renormalized mass should
be replaced consistently with the use of \eqn{CPmren}.
To summarize, only $s_{\ssH}$ is a meaningful quantity and a definition of
the real mass or of the total width is conventional. From \eqn{eqCP} one has
\bq
\mu_{\ssH}\,\gamma_{\ssH} = {\rm{Im}}\,\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH}),
\end{equation}
and it should be evident, from \eqn{GPO}, that $\gamma_{\ssH} \not= \sum_f\,
\Gamma\lpar H_c \to f\rpar$. The reason can be understood when we consider a
simple example, a toy model with ${\cal L}_{\rm int}=m^2\,\phi\,\sigma^+\,\sigma^-$
(with massless $\sigma\,$-particles). Already at one-loop, we find
\bq
{\rm{Im}}\,\Sigma_{\phi\phi}(s) = \frac{m^2}{16\,\pi^2}\,\pi,
\qquad
{\rm{Im}}\,\Sigma_{\phi\phi}(s_{\phi}) = \frac{m^2}{16\,\pi^2}\,\pi\,
\Bigl( 1 + \frac{1}{\pi}\,\atan{\frac{\gamma_{\phi}}{\mu_{\phi}}} \Bigr).
\label{cut}
\end{equation}
While the first relation in \eqn{cut} (real $s$) satisfies the cutting
equation~\cite{Cutkosky:1960sp} the second (complex $s$) does not.
For a proper perspective it must be recalled that
when we expand, $\Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(s_{\ssH}) = \Sigma_{{\scriptscriptstyle{H}\scriptscriptstyle{H}}}(\mu^2_{\ssH}) + \dots$,
the cutting equation is restored at NLO but it will still be violated at
NNLO, as pointed out in Ref.~\cite{Grassi:2001bz}.
Therefore, our conventional definition of the Higgs total decay width
will be $\Gamma_{\rm tot}(H_c) = \sum_f\,\Gamma\lpar H_c \to f\rpar$.
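The toy-model relation of \eqn{cut} is easily checked numerically. The short
Python sketch below (again based on {\tt mpmath}; the toy normalization is
the one of \eqn{cut} and the numerical values of $\mu_{\phi}$, $\gamma_{\phi}$
and of the constant are arbitrary) evaluates the second-sheet continuation of
the toy self-energy at $s_{\phi} = \mu^2_{\phi} - i\,\mu_{\phi}\,\gamma_{\phi}$
and compares its imaginary part with the two expressions in \eqn{cut}.
\begin{verbatim}
# Check of the toy-model relation: Sigma(s) = m^2/(16 pi^2) (c - ln(-s)),
# continued to the second Riemann sheet for Im s <= 0.  Inputs are arbitrary.
import mpmath as mp

m2, c   = mp.mpf(1.0), mp.mpf(2.0)     # toy normalization and constant
mu, gam = mp.mpf(1.0), mp.mpf(0.3)     # mu_phi, gamma_phi (assumptions)
pref    = m2/(16*mp.pi**2)

def sigma_II(s):                        # second-sheet value, Im s <= 0
    return pref*(c - mp.log(-s)) + 2j*mp.pi*pref

s_phi = mp.mpc(mu**2, -mu*gam)
print(sigma_II(s_phi).imag)                       # Im Sigma(s_phi)
print(pref*mp.pi*(1 + mp.atan(gam/mu)/mp.pi))     # second relation in Eq.(cut)
print(pref*mp.pi)                                 # first relation (real s)
\end{verbatim}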
To set the stage, it is worth recalling that the breakdown of a process into
products of pseudo-observables can be generalized to include unstable particles
in the final state; an example is given in \fig{Multi} where the
(triply-resonant) signal in $g g \to 4\,$f is split into a chain $gg \to H$
(production), $H \to W^+ W^-$ (decay) and $W \to {\bar f} f$ (decays).
\begin{figure}[h]
\vspace{0.3cm}
\begin{picture}(140,30)(-120,0)
\SetScale{0.5}
\SetWidth{1.8}
\GCirc(50.,0.){10.}{0}
\Gluon(10.,40.)(50.,0.){2}{7}
\Gluon(10.,-40.)(50.,0.){2}{7}
\DashLine(50.,0.)(90.,0.){3}
\Line(100.,20)(100.,-20)
\Line(110.,20)(110.,-20)
\DashLine(120.,0.)(160.,0.){3}
\GCirc(160.,0.){10.}{0}
\Line(160.,0.)(200.,40.)
\Line(160.,0.)(200.,-40.)
\Line(210.,60.)(210.,20.)
\Line(220.,60.)(220.,20.)
\Line(210.,-60.)(210.,-20.)
\Line(220.,-60.)(220.,-20.)
\Line(230.,40.)(270.,40.)
\Line(230.,-40.)(270.,-40.)
\GCirc(270.,40.){10.}{0}
\GCirc(270.,-40.){10.}{0}
\ArrowLine(310.,60.)(270.,40.)
\ArrowLine(270.,40.)(310.,20.)
\ArrowLine(310.,-60.)(270.,-40.)
\ArrowLine(270.,-40.)(310.,-20.)
\Text(55,-30)[cb]{\Large $s_{{\scriptscriptstyle{H}}}$}
\Text(110,-50)[cb]{\Large $s_{{\scriptscriptstyle{W}}}$}
\end{picture}
\vspace{1.5cm}
\caption[]{Gauge-invariant breakdown of the triply-resonant $g g \to 4\,$f
signal into $gg \to H$ production, $H \to W^+ W^-$ decay and subsequent
$W \to {\bar f} f$ decays.}
\label{Multi}
\end{figure}
\section{Pseudo-observables, on-shell observables and unitarity
\label{UNI}}
When we consider all the possible decay channels of an {\em on-shell} standard
model Higgs boson we find that, up to an on-shell mass $m_{{\scriptscriptstyle{H}}} \approx
140\,$GeV, the Higgs boson is very narrow, while the width rapidly increases
after the opening of the $WW$ and $ZZ$ channels.
Even this statement should be carefully examined since $W$ and $Z$ bosons are
unstable particles to be removed from the in/out bases of the Hilbert space.
For real $W,Z$ masses the Higgs boson width is related to the cuts of the
self-energy and the statement under examination is based on the (say one-loop)
two-fermion cut, two-boson cut, etc.
Unitarity follows if we add all possible ways in which a diagram with
given topology can be cut in two separating $S$ from $S^{\dagger}$. For
a stable particle the cut line, proportional to the positive energy part
of the propagator, contains a pole term
$2\,i\,\pi\,\theta(p_0)\,\delta(p^2+m^2)$, whereas there is no such
contribution for an unstable particle. We express ${\rm{Im}}\,\Sigma$ in
terms of cut self-energy diagrams and repeat the procedure ad libitum,
therefore proving that cut unstable lines are left with no contribution,
i.e. unstable particles contribute to the unitarity of the $S-$matrix
via their stable decay products~\cite{Veltman:1963th}.
From this point of view the second cut of the Higgs self-energy (after the
two-fermion cut) is the four-fermion cut, not the two-boson one (once again,
the cutting of a line corresponding to an unstable particle contains no pole
term).
How bad is the choice of cutting two, {\em stable}, $W$ boson lines with respect to
cutting four fermion lines and summing over all fermions, i.e. how bad is the
on-shell approach, at least from a numerical point of view?
If one evaluates the ratio
\bq
\Gamma \lpar H \to VV \rpar \, \hbox{BR}\lpar V \to 2f\rpar \,
\hbox{BR}\lpar V \to 2f'\rpar \; / \; \Gamma \lpar H \to 2f + 2f' \rpar
\end{equation}
the results of Ref.~\cite{Bredenstein:2006rh} show that the on-shell phase
space for the $WW$ or $ZZ$ final state introduces an error of the order of
$10\%$ near the threshold, which is still satisfactory. Using the complex
mass scheme, which in turn violates unitarity, will improve upon the
on-shell result since the internal $V$ masses are themselves complex poles.
Remarkably, the complex mass scheme represents a method which is, at the same
time, predictive and gives the best available approximation to the use of
a full (Schwinger-Dyson) re-summed theory, a formal solution of the
problem which, however, poses an insurmountable barrier for the technology
of today.
\section{Loop integrals with complex masses and invariants
\label{allcmplx}}
In this section we analyze the correct definition of Feynman integrals
with complex masses and Mandelstam invariants.
On a more formal basis one should say that unstable states lie in a natural
extension of the usual Hilbert space that corresponds to the second sheet of
the $S\,$-matrix; these states have zero norm and, therefore, escape the
usual prohibition of having an hermitian Hamiltonian with complex
energy~\cite{Weldon:1975gu}.
On a more pragmatic level we use the guiding principle that Green's functions
involving unstable particles should smoothly approach the value for stable
ones (the usual Feynman $-\,i\,0$ prescription) when the couplings of the theory
tend to zero.
The whole problem can be summarized as follows: in the limit of zero
couplings all particles are stable and we define Green's functions in the
cut $s\,$-plane, where $s$ is the selected invariant to be continued into
the complex plane.
For the {\em free} theory of stable particles, according to the Feynman
prescription, the value of the argument of some function lies, say, below
the cut (which coincides, for example, with the negative real axis);
during continuation of $s$ we may cross the cut, which means that we have
to continue the function into its second branch.
For the simple case that we have just described the Green's function is then
defined through its value on the principal branch in all quadrants but the
second, where continuation to the second branch is required.
This {\em new} function will have a cut on the positive imaginary axis and
special problems may occur, especially when we want to do analytical
continuation at the level of integrands and also internal masses in a
given Feynman diagram are complex, as required by any realistic
complex-mass scheme.
Green's functions are given in terms of Feynman parametric integrals and
our main point will be: how to define the same integrals but properly
continued to complex internal masses and complex external invariants?
One of the difficulties of the problem lies in having masses and invariants
complex at the same time, which introduces subtleties in the analytical
continuation that are not present if, say, only masses or only invariants
are made complex.
\subsection{General setup \label{GS}}
To start our analysis, consider a scalar $\phi\,\sigma^2$ theory with
$M_{\phi} > 2\,m_{\sigma}$, i.e. $\phi$ is unstable; the $\phi$ propagator
(with $s = -\,p^2$) is
\bq
\Delta = \Bigl[ s - M^2_{\phi} + \Sigma_{\phi\phi}(s)\Bigr]^{-1},
\end{equation}
where factors $(2\,\pi)^4\,i$ have been omitted.
The inverse function, $\Delta^{-1}(s)$ is analytic in the entire $s\,$-plane
except for a cut from $s = 4\,m^2_{\sigma}$ to infinity along the real
axis. The function is defined above the cut, $\Delta^{-1}(s + i\,0)$ and
the analytical continuation downwards is to the second Riemann sheet, i.e.
\bq
\Delta^{-1}_2(s - i\,0) = \Delta^{-1}(s + i\,0) = \Delta^{-1}(s - i\,0) +
2\,i\,\pi\,\rho(s),
\label{AC}
\end{equation}
where $2\,i\,\pi\,\rho(s)$ is the discontinuity of the function across the cut.
For a complete discussion of the analytical continuation see, e.g.,
Ref.~\cite{Brown:1992db}.
We need a few definitions which will help the understanding of the
procedure for the analytical continuation of functions defined through a
parametric integral representation. The logarithm is defined by
\bq
\ln^{(k)} z = \ln^{(0)} z + 2\,i\,\pi\,k,
\quad
k = 0\,,\,\pm 1\,,\,\pm 2\,,\dots
\end{equation}
where $\ln^{(0)} z$ denotes the principal branch ($ - \pi < \arg(z) \le + \pi$).
From now on we will omit the superscript that denotes the principal branch of
the logarithm. Let $z_{\pm} = z_0 \pm i\,0$ and $z= z_{{\scriptscriptstyle{R}}} + i\,z_{{\scriptscriptstyle{I}}}$,
we define
\bq
\ln^{\pm} \lpar z\,;\,z_{\pm} \rpar =
\Bigl\{
\begin{array}{ll}
{} \quad & \quad
\ln z \pm 2\,i\,\pi\,\theta \lpar - z_0 \rpar\,\theta \lpar \mp z_{{\scriptscriptstyle{I}}} \rpar \\
{} \quad & \quad
\ln z \pm 2\,i\,\pi\,\theta \lpar - z_{{\scriptscriptstyle{R}}} \rpar\,\theta \lpar \mp z_{{\scriptscriptstyle{I}}} \rpar,
\end{array}
\label{variants}
\end{equation}
i.e. the first Riemann sheet for all quadrants but the second where the
function is defined in the second Riemann sheet.
Our first definition of the $\ln^{\pm}\,$-functions in \eqn{variants} is the
most natural in defining analytical continuation of Feynman integrals with a
smooth limit into the theory of stable particles; the reason is simple, in
case some of the particles are taken to be unstable we have to perform
analytical continuation only when the corresponding Feynman diagram, in the
limit of all (internal) stable particles, develops an imaginary part (e.g. above some
normal threshold).
However, in all cases where the analytical expression for the diagram is known,
one can easily see that the result does not change when replacing $z_0$ with
$z_{{\scriptscriptstyle{R}}}$, the second variant in \eqn{variants}.
As we are going to discuss in the following sections there are cases where one
would like to perform an analytical continuation at the level of integrand in the
Feynman parametric representation of a given diagram; often the integration
contour has to be distorted into the complex plane with the consequence that
$z_{{\scriptscriptstyle{R}}} \not= z_0$ and ${\rm sign}(z_{{\scriptscriptstyle{R}}}) \not= {\rm sign}(z_0)$.
In this case we need a more general definition of $\ln^\pm$:
\begin{description}
\item[{\bf Definition:}]
Let $z(\Gamma) \in C$ ($\Gamma \in R$) be an arbitrary complex function
of $\Gamma$; when we want to continue $z_0 = z(0)$ (not in the second
quadrant) to $z_f = z(\Gamma_f)$ we must look for a real $\Gamma_c$ with
$0 < \Gamma_c < \Gamma_f$ such that $z_c = z(\Gamma_c)$ is real and negative
(for simplicity we assume the case of a monotonic $z_{\Gamma} = z(\Gamma)$):
then, $\forall \Gamma\,:\, \Gamma \ge \Gamma_c$ we replace $\ln z$ with
its analytical continuation into the second Riemann sheet,
\bq
\ln^{\pm} \lpar z_{\Gamma}\,;\,z_0 \rpar =
\ln z_{\Gamma} \pm 2\,i\,\pi\,\theta \lpar - {\rm{Re}}\,z_{\Gamma} \rpar\,
\theta \lpar \mp {\rm{Im}}\,z_{\Gamma} \rpar.
\label{cdef}
\end{equation}
For all practical purposes \eqn{cdef} can be replaced with the second variant
of \eqn{variants} (with $z \to z_\Gamma$) which, from now on, will be our definition
of analytical continuation.
\end{description}
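For later reference we also give a one-line computer implementation of the
second variant in \eqn{variants} (a minimal Python sketch using the
{\tt mpmath} library; the function name {\tt ln\_minus} is of course arbitrary
and is only meant to fix the conventions adopted in the numerical
illustrations below). The companion function $\ln^+$ is obtained by flipping
both signs.
\begin{verbatim}
import mpmath as mp

def ln_minus(z):
    """ln^-(z): principal logarithm, shifted by -2*pi*i (second Riemann
       sheet) whenever z lies in the second quadrant."""
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)
\end{verbatim}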
\subsection{Analytical continuation of the Euler dilogarithm \label{ED}}
We now consider the Euler dilogarithm, $\li{2}{z}$; if we denote its
principal branch by $\mbox{Li}^{(0,0)}_2(z)$ ($0 < \arg(z-1) < 2\,\pi$), then for
any branch (see, e.g.~\cite{HTF}) we have
\bq
\mbox{Li}^{(n,m)}_2(z) = \mbox{Li}^{(0,0)}_2(z) + 2\,n\,\pi\,i\,\ln^{(0)} z + 4\,m\,\pi^2,
\quad
n\,,\,m = 0\,,\,\pm 1\,,\,\pm 2\,,\dots
\label{ACLi}
\end{equation}
The question that we want to analyze is the following: given
\bq
\mbox{Li}_2(M^2+ i\,0) =
- \int_0^1\,\frac{dx}{x}\,\ln \lpar 1- M^2\,x - i\,0\rpar,
\qquad
{\rm{Im}}\,\mbox{Li}_2\lpar M^2 + i\,0\rpar = \pi\,\ln M^2\,\theta\lpar M^2 - 1\rpar,
\end{equation}
how do we understand \eqn{ACLi} in terms of an integral representation?
Let us consider the analytical continuation from $z^+ = M^2 + i\,0$ to $z = M^2 -
i\,M\,\Gamma$ and define
\bq
I = - \int_0^1\,\frac{dx}{x}\,\ln^- \lpar 1 - z\,x\,;\, 1 - z^+\,x\rpar.
\end{equation}
With $\chi(x) = 1 - z\,x = 1 - (M^2-i\,M\Gamma)\,x$, we have $\chi(0) = 1$ and
$\chi(1)= \lpar 1 - M^2\,,\,M\,\Gamma\rpar$. If $M^2 > 1$ we have that $\chi$
crosses the positive imaginary axis with ${\rm{Im}}\,\chi = \Gamma/M$. As a
result we obtain
\bq
I = \mbox{Li}^{(0,0)}_2(z) + 2\,i\,\pi\,\ln M^2,
\end{equation}
which is not the expected result since $I$ does not reproduce the correct
continuation of $\mbox{Li}_2$ given in \eqn{ACLi}.
The mismatch can be understood by observing that $\ln^-$ has a cut along
the positive imaginary axis (of $\chi$) and, in the process of continuation,
with $x \in [0,1]$, we have been crossing the cut.
Nevertheless, we insist on defining analytical continuation at the level of
integrand, instead of working directly on the result, because it is the only
practical way of dealing with multi-loop diagrams where an exact result is not
known.
The solution consists in deforming the integration contour, therefore
defining a new integral,
\bq
I_{{\scriptscriptstyle{C}}} =
\int_{{\scriptscriptstyle{C}}}\,\frac{dx}{x}\,\ln^- \lpar 1 - z\,x\,;\, 1 - z^+\,x\rpar,
\end{equation}
where the curve $C$ is given by two straight segments,
$0 \le x \le 1/M^2 - \epsilon$ and $1/M^2 + \epsilon \le x \le 1$
($\epsilon \to 0^+$), plus a curve $C'$ defined by
\bq
C'(u)\; :\; \{ x = u + i\,\frac{1 - M^2\,u}{M\,\Gamma}\},
\quad
\frac{1}{M^2 + \Gamma^2} \le u \le \frac{1}{M^2}.
\end{equation}
The integral over $C'$ runs downwards in the first quadrant and upwards in the
second (along the cut of $\ln^-$). Integration of $\ln^-$ over $C'$ gives
$- 2\,i\,\pi\,( \ln M^2 - \ln z)$, showing that
\bq
\mbox{Li}^{(1,0)}_2(z)= I_{{\scriptscriptstyle{C}}},
\end{equation}
the correct analytical continuation. Therefore we can extend our integral, by
modifying the contour of integration, to reproduce the right analytical
continuation of the dilogarithm.
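The whole discussion can be reproduced numerically with a few lines of code.
In the following sketch (Python with {\tt mpmath}; the test value of $z$ is
arbitrary) the straight-line integral $I$ is seen to return
$\mbox{Li}^{(0,0)}_2(z) + 2\,i\,\pi\,\ln M^2$, whereas the correct continuation is
$\mbox{Li}^{(1,0)}_2(z) = \mbox{Li}^{(0,0)}_2(z) + 2\,i\,\pi\,\ln z$; the
difference between the two is the contribution associated with the detour $C'$
discussed above.
\begin{verbatim}
# Straight-line versus correct continuation of the dilogarithm (see text).
import mpmath as mp

def ln_minus(z):
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

M2, MG = mp.mpf(4.0), mp.mpf(0.8)    # arbitrary test point z = M^2 - i M Gamma
z = mp.mpc(M2, -MG)

# naive integral I on [0,1]; the integrand jumps at x = 1/M^2
f = lambda x: -ln_minus(1 - z*x)/x
I = mp.quad(f, [0, 1/M2, 1])

Li2 = mp.polylog(2, z)               # principal branch Li2^(0,0)(z)
print(I)
print(Li2 + 2j*mp.pi*mp.log(M2))     # naive (straight-line) result
print(Li2 + 2j*mp.pi*mp.log(z))      # Li2^(1,0)(z), correct continuation
\end{verbatim}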
\subsection{Continuation of analytical results \label{contan}}
Having introduced a simple example, we consider now one-loop two-point
functions where both masses and the external invariant are made complex.
Let
\bq
\chi(x) = s_{{\scriptscriptstyle{P}}}\,x^2 + \lpar m^2_2 - m^2_1 - s_{{\scriptscriptstyle{P}}} \rpar\,x + m^2_1,
\end{equation}
\bq
s_{{\scriptscriptstyle{P}}} = M^2 - i\,\Gamma\,M, \qquad
m^2_i = \mu^2_i - i\,\gamma_i\,\mu_i.
\end{equation}
The function $B_0$ is originally defined, for real $s_{{\scriptscriptstyle{P}}}$ and real
(equal for simplicity) internal masses, by
\bq
B_0\lpar M^2\,;\,\mu,\mu\rpar = \frac{1}{{\bar\epsilon}} -
\intfx{x}\,\ln ( \chi - i\,0),
\label{B0integ}
\end{equation}
where ${\bar\epsilon}^{-1}= 2/(4-n) - \gamma_{{\scriptscriptstyle{E}}} - \ln\pi$
($\gamma_{{\scriptscriptstyle{E}}} \approx 0.5772$ being the Euler-Mascheroni constant), and we
need the analytical continuation to arbitrary values of $s_{{\scriptscriptstyle{P}}}$
(i.e. $M^2 \to M^2 - i\,M\,\Gamma$ with $\Gamma > 0$);
we assume, for a moment, real internal masses ($\gamma= 0$) and
$M^2 > 4\,\mu^2$; the analytical result is
\bq
B_0\lpar M^2\,;\,\mu,\mu\rpar =
\frac{1}{{\bar\epsilon}} - \ln\frac{\mu^2}{\mu^2_{{\scriptscriptstyle{R}}}}
+ 2 -\beta\,\ln\frac{\beta+1}{\beta-1},
\label{anB0}
\end{equation}
where $\mu_{{\scriptscriptstyle{R}}}$ is the renormalization scale and $\beta^2 = 1 - 4\,\mu^2/M^2$.
For the continuation to $M^2 \to M^2 - i\,M\,\Gamma$,
$\mu^2 \to \mu^2 - i\,\gamma\,\mu$ we have to compute the logarithm of
$z^{\rm{\scriptscriptstyle{UST}}} =z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} + i\,z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{I}}}$, which is a function of
$\Gamma, \gamma$ (interacting theory of unstable particles).
Let
\bq
z^{\rm{\scriptscriptstyle{ST}}}_{\pm} =
\lim_{\Gamma, \gamma \to 0}\,z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} \pm i\,0 =
z^{\rm{\scriptscriptstyle{ST}}}_{{\scriptscriptstyle{R}}} \pm i\,0,
\end{equation}
where the $\pm i\,0$ follows from Feynman prescription $\mu^2 \to \mu^2 - i\,0$.
We use the second variant of \eqn{variants} and define
\bq
\ln^{\pm} \lpar z^{\rm{\scriptscriptstyle{UST}}}\,;\,z^{\rm{\scriptscriptstyle{ST}}}_{\pm} \rpar =
\ln z^{\rm{\scriptscriptstyle{UST}}} \pm 2\,i\,\pi\,\theta \lpar - z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} \rpar\,
\theta \lpar \mp z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{I}}} \rpar,
\end{equation}
which satisfies
\bq
\lim_{\Gamma, \gamma \to 0}\,\ln^{\pm} \lpar z^{\rm{\scriptscriptstyle{UST}}}\,;\,
z^{\rm{\scriptscriptstyle{ST}}}_{\pm} \rpar = \ln z^{\rm{\scriptscriptstyle{ST}}}_{\pm},
\end{equation}
and it is equivalent to have $\ln\!z$ on the second Riemann sheet, but only
when $z$ is continued into the second quadrant.
There is one awkward possibility; it corresponds to starting from $z^{\rm{\scriptscriptstyle{ST}}}_-$ with
$z^{\rm{\scriptscriptstyle{ST}}}_{{\scriptscriptstyle{R}}} < 0$ and requiring continuation to $z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{I}}} > 0$ and
$z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} > 0$.
Using \eqn{anB0} we derive
\bq
B_0\lpar M^2\,;\,m,m\rpar \to \frac{1}{{\bar\epsilon}} -
\ln\frac{m^2}{\mu^2_{{\scriptscriptstyle{R}}}}
+ 2 -\beta_c\,\ln^- \lpar \frac{\beta_c+1}{\beta_c-1}\,;\,
\frac{\beta+1}{\beta-1}\rpar
\label{anCB0}
\end{equation}
where $\beta^2_c = 1 - 4\,m^2/s_p$.
It is worth noting that there is never a problem when internal masses are
real and we continue to complex $p^2$.
Otherwise we first continue to complex internal masses using the fact that
internal (complex) squared masses have a negative imaginary part.
Consider this continuation for one-loop diagrams: with $L\,$-external legs
we can always fix a parametrization where the coefficient of $m^2_1$ is
$1-x_1$, the one of $m^2_i$ is $x_{i-1}-x_i$, up to $m^2_{{\scriptscriptstyle{L}}}$ which has
coefficient $x_{{\scriptscriptstyle{L}}-1}$; here the parameters satisfy $0 \le x_{{\scriptscriptstyle{L}}-1} \le
\,\dots\,\le x_1 \le 1$ (i.e. all coefficients are non-negative). Less
straightforwardly the same holds for multi-loop diagrams.
Then we continue from $\Gamma = 0$; considering \eqn{anCB0} and denoting by
$\zeta$ the ratio $(\beta_c+1)/(\beta_c-1)$ we have ${\rm{Re}}\,\beta^2_c >0$ (in
the region above real thresholds) and ${\rm{Im}}\,\beta^2_c > 0$ for $\Gamma= 0$.
At $\Gamma = (M/\mu)\,\gamma$ $\beta^2_c$ crosses the positive real axis from
above; this corresponds to $\zeta$ crossing the cut where we move into the
second Riemann sheet of the logarithm of \eqn{anCB0}.
After that one has ${\rm{Im}}\,\zeta > 0$ and the forbidden region is
reached when ${\rm{Re}}\,\zeta > 0$, which corresponds to $\mid \beta_c \mid > 1$.
Once again, for $\gamma = 0$ the latter is never satisfied. In general, the
condition requires solving a cubic equation in $\Gamma$ with only one real,
negative, solution.
The forbidden region requires, therefore, $\Gamma < 0$.
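A direct numerical transcription of \eqn{anCB0} may help at this point. The
sketch below (Python with {\tt mpmath}; arbitrary input values, finite part
only, $\mu_{{\scriptscriptstyle{R}}} = 1$) evaluates $B_0$ for complex $s_{{\scriptscriptstyle{P}}}$ and $m^2$ and
shows that the result smoothly approaches the value of \eqn{anB0}, taken with
the $-\,i\,0$ prescription on $m^2$, when $\Gamma$ and $\gamma$ tend to zero.
\begin{verbatim}
# Finite part of B0(s_P; m, m) from Eq.(anCB0), mu_R = 1.  Inputs arbitrary.
import mpmath as mp

def ln_minus(z):
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

def b0_fin(M, Gam, mu, gam):
    sP = mp.mpc(M**2, -M*Gam)
    m2 = mp.mpc(mu**2, -mu*gam)
    bc = mp.sqrt(1 - 4*m2/sP)
    return -mp.log(m2) + 2 - bc*ln_minus((bc + 1)/(bc - 1))

M, mu = mp.mpf(3.0), mp.mpf(1.0)        # above threshold, M > 2 mu
for scale in (1, 0.1, 0.01, 0.001):     # Gamma, gamma -> 0 (not exactly 0:
    print(b0_fin(M, 0.5*scale, mu, 0.2*scale))  # that needs -i0 by hand)

b = mp.sqrt(1 - 4*mu**2/M**2)           # limit from the real-mass formula
print(-mp.log(mu**2) + 2 - b*mp.log((1 + b)/(1 - b)) + 1j*mp.pi*b)
\end{verbatim}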
To continue our analysis of one-loop functions, where analytical results are
known, we only need to define
\bq
\mbox{Li}^{\mp}_2\lpar z^{\rm{\scriptscriptstyle{UST}}}\,;\,z^{\rm{\scriptscriptstyle{ST}}}_{\mp}\rpar =
\li{2}{z} \mp 2\,i\,\pi\,\theta \lpar z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} - 1 \rpar\,
\theta \lpar \pm z^{\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{I}}} \rpar\,\ln z^{\rm{\scriptscriptstyle{UST}}}.
\label{li2pm}
\end{equation}
For our purposes, namely for the processes that we are considering, we only
need one additional function.
The most general scalar three-point function that is needed will be
\bq
C_0\lpar 0,0,p^2\,;\,m_1,m_2,m_3 \rpar = \frac{1}{p^2}\,\Bigl\{ \sum_{i=1,3}\,
(-1)^{\delta_{i3}}\,\Bigl[
\mbox{Li}_2\lpar\frac{x_0-1}{x_0-x_i}\rpar - \mbox{Li}_2\lpar\frac{x_0}{x_0-x_i}\rpar \Bigr]
+ \ln x_0\,\eta\lpar x_1-x_0\,,\,x_2-x_0\rpar\Bigr\},
\label{etaAC}
\end{equation}
with four different roots
\bq
x_0 = 1 + \frac{m^2_1-m^2_2}{p^2},
\qquad\quad
x_3 = \frac{m^2_3}{m^2_3-m^2_2},
\qquad\quad
x_{1,2} =
\frac{p^2+m^2_1-m^2_3 \mp
\lambda^{1/2}\lpar -p^2,m^2_1,m^2_3\rpar}{2\,p^2},
\end{equation}
where $\lambda$ is the K\"allen function.
Analytical continuation requires the replacement $\mbox{Li}_2 \to \mbox{Li}^-_2$ with limiting
(free theory of stable particles) cases given by
\bq
x_1 \to x_1 -i\,0,
\qquad\quad
x_2 \to x_2 + i\,0,
\qquad\quad
x_3 \to x_3 - i\,{\rm sign}(m^2_1 - m^2_2)\,0.
\end{equation}
As a final observation, there is no need to continue the square root
$\beta_c$ in \eqn{anCB0} below threshold ($\beta^2_c < 0$) since in this
case $\beta_c$ is imaginary and the change of sign when we move from
the principal root is compensated in the product $\beta_c$ times the logarithm.
Finally, in \eqn{etaAC} and for one-loop processes with more scales and more
than three legs one has to introduce a generalization of 't Hooft-Veltman
$\eta\,$-functions~\cite{'tHooft:1978xw} on the second Riemann sheet. The
definition is as follows:
\bq
\ln^- (x\,y) = \ln^- x + \ln^- y + \eta^-(x,y),
\end{equation}
\bq
\eta^-(x,y) = 2\,i\,\pi\,\Bigl\{
\theta(x_{{\scriptscriptstyle{I}}})\,\Bigl[ \theta( - x_{{\scriptscriptstyle{R}}})
- \theta(y_{{\scriptscriptstyle{I}}})\,\theta( - z_{{\scriptscriptstyle{I}}})\Bigr]
+ \theta(y_{{\scriptscriptstyle{I}}})\,\theta( - y_{{\scriptscriptstyle{R}}})
+ \theta(z_{{\scriptscriptstyle{I}}})\,\Bigl[ \theta( - x_{\scriptscriptstyle{I}})\,\theta( - y_{{\scriptscriptstyle{I}}})
- \theta( - z_{{\scriptscriptstyle{R}}}) \Bigr]\Bigr\},
\end{equation}
with $z= x\,y$ and $z = z_{{\scriptscriptstyle{R}}} + i\,z_{{\scriptscriptstyle{I}}}$ etc.
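The function $\eta^-$ can be checked numerically; the short sketch below
(Python with {\tt mpmath}; sampling range and seed are arbitrary) verifies the
defining relation $\ln^-(x\,y) = \ln^- x + \ln^- y + \eta^-(x,y)$ on a sample
of generic complex points (points lying exactly on the axes, where the
$\theta\,$-functions are ambiguous, are essentially never hit).
\begin{verbatim}
# Numerical check of  ln^-(x y) = ln^- x + ln^- y + eta^-(x, y).
import mpmath as mp, random

th = lambda x: 1 if x > 0 else 0                    # theta function

def ln_minus(z):
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

def eta_minus(x, y):
    z = x*y
    return 2j*mp.pi*( th(x.imag)*(th(-x.real) - th(y.imag)*th(-z.imag))
                    + th(y.imag)*th(-y.real)
                    + th(z.imag)*(th(-x.imag)*th(-y.imag) - th(-z.real)) )

random.seed(1)
for _ in range(1000):
    x = mp.mpc(random.uniform(-2, 2), random.uniform(-2, 2))
    y = mp.mpc(random.uniform(-2, 2), random.uniform(-2, 2))
    assert abs(ln_minus(x*y) - ln_minus(x) - ln_minus(y)
               - eta_minus(x, y)) < 1e-10
\end{verbatim}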
\subsection{Continuation at the integrand level \label{CIL}}
We now turn to analytical continuation at the integrand level, according
to our procedure where all Feynman integrals are treated according to their
parametric integral representation.
Let us consider the specific example of the previous section: a scalar
two-point function corresponding to two internal equal masses,
$m^2= \mu^2 - i\,\mu\,\gamma$ and $s_p = M^2 - i\,M\,\Gamma$.
Due to the symmetry of $\chi(x)$, in \eqn{B0integ} the integral
with $0 \le x \le 1$ can be written as twice the same integral with
$0 \le x \le 1/2$; the argument of the logarithm goes from ${\rm{Re}}\,\chi = \mu^2 > 0$
to ${\rm{Re}}\,\chi = \mu^2 - M^2/4 < 0$ (above threshold) with
${\rm{Im}}\,\chi = -\,i\,0$.
We have to define the analytical continuation $M^2 \to M^2 - i\,M\,\Gamma$;
since, for any $x$, $\chi$ cannot cross the cut, it must be analytically
continued into a second Riemann sheet above the cut.
A similar situation occurs for complex internal masses: the integration
with respect to $x$ is
\bq
\chi = \lpar \mu^2\,,\,- i\,\mu\,\gamma \rpar \quad \to \quad
\chi = \lpar \mu^2 - \frac{1}{4}\,M^2\,,\, - i\,\mu\,\gamma +
\frac{i}{4}\,M\,\Gamma \rpar.
\end{equation}
Let $X = x\,(1-x)$ with $0 \le X \le 1/4$; for a given value of $x$, when
$\Gamma \geq (\mu\,\gamma)/(M\,X)$ the continuation is into the second Riemann
sheet.
Of course, for $M^2 < 4\,\mu^2$, $\chi$ remains on the first Riemann sheet for
all values of $\Gamma$. The variable $\chi$ is such that
\arraycolsep 0.14em\begin{eqnarray}
{\rm{Re}}\,\chi &=& 0 \qquad \hbox{for} \quad
x = R_{\pm} = \frac{1}{2}\,\Bigl[ 1 \pm \sqrt{1 - 4\,\frac{\mu^2}{M^2}}\Bigr].
\nonumber\\
{\rm{Im}}\,\chi &=& 0 \qquad \hbox{for} \quad
x= I_{\pm} = \frac{1}{2}\,\Bigl[ 1 \pm
\sqrt{1 - 4\,\frac{\mu \gamma}{M \Gamma}}\Bigr].
\label{RpmIpm}
\end{eqnarray}
The second equation requires $M \Gamma \ge 4\,\mu \gamma$ for $I_{\pm}$
to be real and $\in\,[0\,,\,1]$.
At $x = I_{\pm}$ the condition ${\rm{Re}}\,\chi \le 0$ requires
$\Gamma \mu \le M \gamma$.
Therefore, for those values of $\Gamma$ and $x$ that satisfy the conditions
\bq
4\,\frac{\mu}{M}\,\gamma \le \Gamma \le \frac{M}{\mu}\,\gamma,
\qquad\qquad
I_-\leq x \leq I_+,
\qquad\qquad
R_-\leq x \leq R_+,
\label{xcrossing}
\end{equation}
we have ${\rm{Re}}\chi \leq 0$, ${\rm{Im}}\chi\geq 0$ and $\ln\chi$ must be continued
into the second Riemann sheet.
The new definition of the $B_0\,$-function is as follows:
\bq
B_0 = \frac{1}{{\bar\epsilon}} - \intfx{x}\,\ln^{-}\lpar \chi\,;\,\chi_- \rpar,
\qquad\quad
\chi_- = \chi\Bigr|_{\Gamma,\gamma_i = 0} - i\,0,
\label{safe}
\end{equation}
Different possibilities are illustrated in \fig{chip} where we show $\chi(x)$
for two equal (complex) internal masses.
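The same $B_0$ can be computed directly from the parametric representation of
\eqn{safe}. The sketch below (Python with {\tt mpmath}; arbitrary inputs,
finite part only, $\mu_{{\scriptscriptstyle{R}}} = 1$) integrates $\ln^-\chi$ along the real
$x\,$-axis and compares the result with the closed form of \eqn{anCB0}; the
parameters are chosen such that $\Gamma < (M/\mu)\,\gamma$ (case $1$ in the
notation of \fig{chip}), where no deformation of the contour is required.
\begin{verbatim}
# B0 finite part from the x-integration of ln^- chi(x), case 1 only.
import mpmath as mp

def ln_minus(z):
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

M, Gam, mu, gam = mp.mpf(3.0), mp.mpf(0.2), mp.mpf(1.0), mp.mpf(0.1)
assert Gam < (M/mu)*gam                  # case 1: straight line is enough
sP, m2 = mp.mpc(M**2, -M*Gam), mp.mpc(mu**2, -mu*gam)

chi    = lambda x: -sP*x*(1 - x) + m2
b0_int = -mp.quad(lambda x: ln_minus(chi(x)), [0, mp.mpf(1)/2, 1])

bc    = mp.sqrt(1 - 4*m2/sP)             # closed form for comparison
b0_cf = -mp.log(m2) + 2 - bc*ln_minus((bc + 1)/(bc - 1))
print(b0_int, b0_cf)
\end{verbatim}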
\begin{figure}[th]
\begin{center}
\includegraphics[bb=152 441 476 706,width=7cm]{chip}
\end{center}
\vspace{-0.6cm}
\caption[]{\label{chip}
Analytical continuation from real $p^2$ to complex $p^2$ as seen in the
$\chi\,$-plane with $\chi(x)= - s_{{\scriptscriptstyle{P}}}\,x\,(1-x) +\mu^2 - i\,\mu\,\gamma$,
with $s_{{\scriptscriptstyle{P}}} = M^2 - i\,M\,\Gamma$ and $x\in [0,1]$.
Solid lines represent the continuation for a low value of $M$ with a very small
value for $\Gamma$. With increasing values for $M$ we reach the situation
illustrated by the dot-lines, $\chi$ moving into the second quadrant, i.e.
$\chi$ on the second Riemann sheet. Case $1$ holds for
$\Gamma < (M/\mu)\,\gamma$ whereas case $2$ holds for
$\Gamma > (M/\mu)\,\gamma$.
Black circles correspond to $x=0, x=1$ whereas white circles correspond to
$x= 1/2$.
}
\end{figure}\\
In any realistic application the
complex pole equation returns, for low values of $M$, small values of
$\Gamma$ and ${\rm{Re}}\,\chi$ is always positive, never requiring analytical
continuation into another sheet; when $M$ increases $\Gamma$ increases too and
we find values of $\chi$ that require the continuation $\ln \to \ln^-$. This
will happen for $x \ge I_-$ in case $1$ (which requires
$\Gamma < (M/\mu)\,\gamma$) and for $x > R_-$ in case $2$ (which requires
$\Gamma > (M/\mu)\,\gamma$).
The same example can be discussed in the $x$ complex plane; in this case, when
$M > 2\,\mu$ and $\Gamma= \gamma= 0$, the cut is on the real axis between $R_-$
and $R_+$ (\eqn{RpmIpm}) and the integration is $0 < x < 1/2$. The integral is
originally defined ($\Gamma, \gamma = 0$) above the cut, i.e. for $x + i\,0$.
Analytical continuation means that for increasing imaginary parts we reach a
point where the integration path is continued into the second Riemann sheet
(at $\Gamma = (M/\mu)\,\gamma$, we have $I_- = R_-$ and, for higher values of
$\Gamma$, the continuation to the second Riemann sheet is required as soon as
the cut is reached).
From this point of view the integral is better understood in terms of
a variable $z(x)= u + i\,v$, such that $\chi = - M^2\,z\,(1-z) + \mu^2$ and the
integration is performed along the curve
\bq
v\,(1 \!-\! 2\,u) = \frac{Z}{M^2},
\qquad
Z = \mu\,\gamma - M\Gamma\,x\,(1-x),
\qquad
u = \frac{1\!-\!U}{2},
\qquad
U^4 + \Big[ 4\,x(1\!-\!x) - 1\Big]\,U^2 - 4\,\frac{Z^2}{M^4} = 0.
\label{ztransf}
\end{equation}
Note that $x= I_-$ corresponds to $z= I_-$, real. For real internal masses we
have $z(0)= 0$ and $z(1/2)= (\Gamma/(2 M))^{1/2}\,(1 - i/2)$.
In the $z\,$-plane the logarithm has a cut on the positive real axis between
$R_-$ and $R_+$.
It is worth mentioning that case $2$ of \fig{chip} corresponds to an
integration path that crosses the cut of $\ln^-$ across the positive imaginary
$x\,$-axis, similar to the case of the dilogarithm discussed above.
Therefore, the correct analytical continuation, for case $2$, goes as follows:
the integration path in $z\,$-space (\eqn{ztransf}) is moved into the complex
plane and goes into the lower half-plane instead of reaching the cut of the
logarithm (which is between $R_-$ and $R_+$, see also \fig{xplane}).
In order to ensure that the analytically continued integral has a smooth limit
$\Gamma, \gamma \to 0$ we deform the integration path by insisting that the
cut (of $\ln$) must be crossed at $z= R_-$ (note that for case $2$ we have
$I_- < R_-$) where we perform a continuation into the second Riemann sheet.
In this way we add to $B_0$ (on top of a factor $-\,2\,i\,\pi\,\beta$) a new
contribution which is easily computed in the $x\,$-plane and it is related
to the discontinuity of $\ln\chi$ along a curve $C$ parametrized by
\bq
C(t)\; :\; \{ x = \frac{1-t}{2} + i\,f(t)\}, \qquad
f(t) = \frac{1}{2}\,\Bigl\{ -\,\frac{\Gamma}{M}\,t + \Bigl[ \lpar 1 +
\frac{\Gamma^2}{M^2} \rpar\,t^2 - \beta^2 \Bigr]^{1/2} \Bigr\},
\end{equation}
with $\beta^2 = 1 - 4\,\mu^2/M^2 > 0$ and where $\bar{\beta} < t < \beta$;
here $\bar{\beta}$ is the value of $t$ where ${\rm{Re}}\,\chi(t) = {\rm{Im}}\chi(t) = 0$.
The integral over $C$ is on the segment ${\rm{Re}}\,\chi = \pm \epsilon$ with
$\epsilon \to 0^+$, from $\mu^2 \Gamma/M - \mu \gamma > {\rm{Im}}\chi > 0$ on the first
Riemann sheet and from $\mu^2 \Gamma/M - \mu \gamma < {\rm{Im}}\chi < 0$ on the
second Riemann sheet.
Therefore, we have to add to $B_0$ an additional term $-\,2\,i\,\pi\,\Delta\,A$
with
\bq
\Delta A= A(\bar{\beta}) - A(\beta),
\qquad
A(t) = \lpar 1 + i\,\frac{\Gamma}{M} \rpar\,t - i\,
\Bigl[ \lpar 1 + \frac{\Gamma^2}{M^2} \rpar\,t^2 - \beta^2 \Bigr]^{1/2}.
\end{equation}
Note that in the limit $\Gamma, \gamma \to 0$ we have $\bar{\beta} \to \beta$
and this additional term vanishes.
Furthermore, $A(\beta)= \beta$ and $A(\bar{\beta}) = \beta_c$, with
$\beta^2_c = 1 - 4\,m^2/s_p$.
Therefore, we reproduce the correct result of \eqn{anCB0}.
The recipe is, therefore, replace $\ln$ with $\ln^-$ in the integrand but
deform the integration contour in order to avoid crossing of the positive
imaginary $\chi\,$-axis when this would occur.
In summary, from this simple example we observe that
\bq
\mbox{Li}_n \stackrel{\rm {\tiny{Analyt. Cont.}}}{\longmapsto} \mbox{Li}^-_n,
\quad
\mbox{Li}^-_{n+1}(z) \not= \int_0^z\,\frac{dx}{x}\,\mbox{Li}^-_n(x),
\end{equation}
since deformation of the integration contour is required for the general case.
\begin{figure}[ht!]
\begin{center}
\includegraphics[bb=274 486 485 706,width=5cm]{xplane1}
\hspace{3cm}
\includegraphics[bb=274 486 485 706,width=5cm]{xplane2}
\end{center}
\vspace{-0.4cm}
\caption[]{\label{xplane}Analytical continuation of a $B_0\,$-function
as seen in the $z\,$-plane with a cut along the positive real axis
between $R_-$ and $R_+$ (\eqn{RpmIpm}). In the first part the
integration path reaches the point $I_-$ (\eqn{RpmIpm}) and continuation
after $z = I_-$ is in the second Riemann sheet. In the second part,
where $I_- < R_-$ continuation must be, once again, in the second
Riemann sheet; therefore the integration path which has moved into the
lower half-plane must be deformed to cross the cut before moving once
more into the lower half-plane (but on the second Riemann sheet).}
\end{figure}
\subsection{Narrow width approximation \label{NWA}}
The practical implementation for higher point (or higher loop) functions
presents a formidable technical problem, due to the higher dimension of the
$x\,$-space; more will be explained in \sect{ACCD} but, for this reason, we have
also considered analytical continuation in the narrow-width approximation
(hereafter NWA).
Here we replace $\ln$ with $\ln^-$ (or $\ln^+$) at the integrand level and
do not perform any deformation of the integration (hyper-)contour.
The resulting expression is expected to have a range of validity given by
$\Gamma \ll M$.
Numerical investigation of the Higgs complex pole shows that NWA returns
reliable results when compared with the exact expression.
The rationale for analytical continuation in NWA is based on the fact that,
as we are going to show, all higher-point (higher-loop) functions admit
integral representations with integrand of logarithmic nature (one-loop)
or, at most, of poly-logarithmic nature (multi-loop).
Consider now the extension to complex variables of an arbitrary scalar
three-point function $C_0$ (in NWA), defined by
\bq
C_0 = \int_{\scriptstyle 0}^{\scriptstyle 1}\,dx_1\,
\int_{\scriptstyle 0}^{\scriptstyle x_1}\,dx_2\,V^{-1-\epsilon/2}(x_1,x_2),
\label{eqdefC0}
\end{equation}
where $n= 4 - \epsilon$ and $V$ is a quadratic form
\bq
V(x_1,x_2) = a\,x_1^2 + b\,x_2^2 + c\,x_1\,x_2 + d\,x_1 +
e\,x_2 + f - i\,0 \equiv x^t\,H\,x + 2\,K^t\,x + L ,
\end{equation}
whose coefficients are related to the internal masses and the external
momenta by the relations $H_{ij} = -\,\spro{p_i}{p_j}, L = m^2_1$ and
\arraycolsep 0.14em\begin{eqnarray}
K_1 &=& \frac{1}{2} \, ( \spro{p_1}{p_1} + m_2^2 - m_1^2 ),
\quad
K_2 = \frac{1}{2} \, ( \spro{P}{P} - \spro{p_1}{p_1} + m_3^2 - m_2^2 ),
\label{Konetwo}
\end{eqnarray}
with $P = p_1 + p_2$.
Let us define the usual Bernstein - Sato - Tkachov (hereafter BST) factors
(see Ref.~\cite{Ferroglia:2002mz}) as
$B_3 = L - K^t\,H^{-1}\,K$ and BST co-factors $X = -\,H^{-1}\,K$.
It is convenient to introduce special notations, $X_0 = 1,
\, X_3 = 0$, and $V(\widehat {i\;i+1})$ to denote contractions, i.e.
\bq
V(\widehat{0\;1}) = V(1,x_1), \quad
V(\widehat{1\;2}) = V(x_1,x_1), \quad
V(\widehat{2\;3}) = V(x_1,0).
\end{equation}
In this way we obtain a simple integral representation
\bq
C_0 = \frac{1}{B_3}\,\Bigl\{ \frac{1}{2} +
\int_0^1\,dx_1\,\Bigl[ \int_0^{x_1}\,dx_2\,
\ln\,V(x_1,x_2) -
\frac{1}{2}\,\sum_{i=0}^{2}\,(X_i - X_{i+1})\,\ln V(\widehat{i\;i+1})
\Bigr]\Bigr\}.
\label{sintr}
\end{equation}
When some or all the invariants are complex, $P^2= -\,s_{{\scriptscriptstyle{P}}}$ with
$s_{{\scriptscriptstyle{P}}} = M^2 - i\,\Gamma\,M$ and $m^2_i = \mu^2_i - i\,\gamma_i\,\mu_i$
(in realistic cases, e.g. decay of an unstable particle, $p^2_{1,2}$ are real)
we define
\bq
V_- = V\Bigr|_{\Gamma,\gamma_i = 0},
\end{equation}
which includes the $-\,i\,0$ prescription and write
\arraycolsep 0.14em\begin{eqnarray}
C_0 &=& \frac{1}{B_3}\,\Bigl\{ \frac{1}{2} +
\int_0^1\,dx_1\,\Bigl[ \int_0^{x_1}\,dx_2\,
\ln^{-}\lpar V(x_1,x_2)\,;\,V_-(x_1,x_2)\rpar
\nonumber\\
{}&-& \frac{1}{2}\,\sum_{i=0}^{2}\,(X_i - X_{i+1})\,
\ln^{-}\lpar V(\widehat{i\;i+1})\,;\,V_-(\widehat{i\;i+1})\rpar
\Bigr]\Bigr\}.
\label{sintrC}
\end{eqnarray}
For instance, with $P^2= - M^2 + i\,M\,\Gamma$, $p^2_{1,2}= 0$ and
$m_{1,3} = 0$, $m^2_2 = \mu^2 - i\,\mu\,\gamma$ we find that when
$\Gamma/\gamma \le \mu/M$, $\ln V$ must be continued to the second
Riemann sheet for $0 \le x_2 \le (\mu\gamma)/(M\Gamma)$.
Starting with an integral representation of a three-point function where the
integrand is the logarithm of a polynomial in parametric space is the safest
way of performing analytical continuation; of course, going beyond NWA
requires contour deformation but even the latter admits a consistent numerical
implementation.
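To make the procedure concrete, we give below a literal transcription of
\eqn{sintrC} for the example just discussed,
$C_0\lpar 0,0,P^2\,;\,0,m_2,0\rpar$ (Python with {\tt mpmath}). All numerical
inputs are arbitrary; no attempt is made here to optimize the quadrature (a
jump of $\ln^-$ inside the integration region slows its convergence) or to
cross-check against an independent representation, so the sketch should be
understood only as an illustration of the NWA recipe.
\begin{verbatim}
# Literal transcription of Eq.(sintrC) in NWA (ln -> ln^-, no deformation)
# for C0(0,0,P^2; 0,m_2,0).  Inputs are arbitrary.
import mpmath as mp

def ln_minus(z):
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

M, Gam, mu, gam = mp.mpf(2.0), mp.mpf(0.04), mp.mpf(1.0), mp.mpf(0.1)
sP  = mp.mpc(M**2, -M*Gam)              # P^2 = - s_P, p_1^2 = p_2^2 = 0
m22 = mp.mpc(mu**2, -mu*gam)            # m_2^2; m_1 = m_3 = 0

# quadratic form V = x^t H x + 2 K^t x + L
H11, H22, H12 = 0, 0, sP/2
K1, K2, L     = m22/2, (-sP - m22)/2, 0

det = H11*H22 - H12*H12
X1  = -( H22*K1 - H12*K2)/det           # BST co-factors X = -H^{-1} K
X2  = -(-H12*K1 + H11*K2)/det
B3  = L + K1*X1 + K2*X2                 # BST factor  L - K^t H^{-1} K
X0, X3 = 1, 0

V = lambda x1, x2: (H11*x1**2 + H22*x2**2 + 2*H12*x1*x2
                    + 2*K1*x1 + 2*K2*x2 + L)

bulk = mp.quad(lambda x1, t: x1*ln_minus(V(x1, x1*t)), [0, 1], [0, 1])
surf = mp.quad(lambda x1: (X0 - X1)*ln_minus(V(1, x1))
                        + (X1 - X2)*ln_minus(V(x1, x1))
                        + (X2 - X3)*ln_minus(V(x1, 0)), [0, 1])
print((mp.mpf(1)/2 + bulk - surf/2)/B3)
\end{verbatim}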
Nor should one fail to notice 't Hooft and Veltman's emphasis, in their seminal
work~\cite{'tHooft:1978xw}, on this subject: they put up warning signs about
the continuation of their result to complex momenta.
For higher point functions ($L = D,E,F,\,\dots$) we apply the BST
algorithm~\cite{Passarino:2001wv} as many times as it is needed to produce
logarithms in the integrand and proceed by replacing $\ln$ with $\ln^-$,
\bq
L = \int_{\{x\}} d\{x\}\,\ln \lpar \chi(\{x\}) - i\,0\rpar \to
\int_{\{x\}_{C(\chi)}} d\{x\}\,\ln^-\,\chi(\{x\}),
\end{equation}
where $\{x\}$ is the $x_1,\,\dots\,,x_n$ simplex and $\{x\}_{C(\chi)}$ is
the path that avoids crossing the positive imaginary $\chi\,$-axis.
NWA amounts to the identification $\{x\}_{C(\chi)} \equiv \{x\}$.
For multi-loop integrals other functions must be extended, e.g. we will
use \eqn{li2pm}, with similar results for all generalized Nielsen
polylogarithms~\cite{Kolbig:1983qt}
\bq
S^-_{n,p}(z^{\!\rm{\scriptscriptstyle{UST}}}\!;\!z^{\rm{\scriptscriptstyle{ST}}}_-) =\,
S_{n,p}(z^{\!\rm{\scriptscriptstyle{UST}}})\,
+ \,\sum_{k=1}^p\frac{(- 2i\pi)^{\!k}}{k\,!}\bigg[
S_{n,p-k}(z^{\!\rm{\scriptscriptstyle{UST}}})\,
- \sum_{j=0}^{n-1}\frac{\ln^j\!z^{\!\rm{\scriptscriptstyle{UST}}}\!}{j\,!}
S_{n-j,p-k}(z^{\!\rm{\scriptscriptstyle{UST}}})
\bigg]
\theta(z^{\!\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{R}}} \!-\! 1 )\,\theta(z^{\!\rm{\scriptscriptstyle{UST}}}_{{\scriptscriptstyle{I}}}),
\end{equation}
which is derived by using $\ln^+$ in the integral representation of the
generalized Nielsen polylogarithms.
Since it has been shown that multi-loop diagrams can be written as integrals of
multivariate generalized Nielsen polylogarithms~\cite{Uccirati:2004vy} our recipe
gives the analytical continuation to all orders in perturbation theory,
but one does have to be careful in one respect: using familiar relations such
as splitting of logarithms should be done with a grain of salt.
\subsection{Analytical continuation and contour deformation \label{ACCD}}
Exact analytical continuation at the integrand level can be performed by
deforming the integration contour into the complex parametric space
(for a general treatment see Ref.~\cite{Nagy:2006xy}). In this
case we need the general definition of $\ln^-$ given in \eqn{cdef}.
To illustrate contour deformation we consider, once again, the case of a
$B_0\,$-function with equal (complex) internal masses. If
\bq
M > 2\,\mu
\qquad \mbox{and} \qquad
\mu\,\Gamma - M\,\gamma > 0,
\end{equation}
the function $\chi(x), x \in [0,\frac{1}{2}]$ crosses the positive imaginary axis
(the branch cut of $\ln^-$). To avoid crossing we deform the $x\,$-integration
into
\arraycolsep 0.14em\begin{eqnarray}
1) \quad &{}& \quad x = i\,\frac{\Gamma}{M}\,\beta\,t,
\nonumber\\
2) \quad &{}& \quad x = \frac{1}{2}\,t + i\,\frac{\Gamma}{M}\,\beta\,(1 - t),
\end{eqnarray}
with $t \in [0,1]$ and $\beta$ a free parameter.
For $\chi^{(1)}$ we require ${\rm{Im}}\,\chi^{(1)}(t) < 0, \forall t \in[0,1]$;
this is possible if $\beta < \beta_{\rm max}$, with
\bq
\beta_{\rm max} =
\frac{1}{2}\,\frac{M^2}{\Gamma^2}\,\Bigl[
1 + \sqrt{1 + 4\,\frac{\mu\,\gamma\,\Gamma}{M^3}}\Bigr].
\end{equation}
For $\chi^{(2)}$ we require that ${\rm{Re}}\,\chi^{(2)}(t) = 0$ corresponds to
${\rm{Im}}\,\chi^{(2)}(t) < 0$, which requires $\beta > \beta_{\rm min}$,
where $\beta_{\rm min}$ is the largest, real, solution of
the following equation
\bq
\mu\,\lpar \frac{\Gamma}{M^2} - \frac{1}{4\,\Gamma} \rpar\,
\lpar \Gamma\,\mu - M\,\gamma \rpar\,( \beta^2 - 1 ) +
\Bigl[ \frac{1}{4}\,\lpar M^2 + \Gamma^2 \rpar - \mu\,\lpar
\mu + \frac{\Gamma\,\gamma}{M} \rpar\,\Bigr]\,\beta = 0.
\end{equation}
For $\beta_{\rm min} < \beta < \beta_{\rm max}$ we have that $\chi^{(1,2)}$
never cross the positive imaginary axis. Furthermore, we compare
$\chi^{(1,2)}(0)$ with $\chi^{(1,2)}(\Gamma)$ at fixed $t \in [0,1]$ and
replace $\ln \to \ln^-$ when (always at fixed $t$) $\chi^{(1,2)}(\Gamma)$
crosses the negative real axis for some value $\Gamma_c$. In the example
that we are considering, illustrated in \fig{c_def}, everything is
particularly simple since ${\rm{Im}}\,\chi({\rm{Re}}\,\chi)$ is always a straight
line but our recipe works, as well, in the general case and allows for a
straightforward algorithmic implementation.
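A compact numerical transcription of the two-segment deformation is given
below (Python with {\tt mpmath}). The input values, chosen such that
$M > 2\,\mu$ and $\mu\,\Gamma - M\,\gamma > 0$, as well as the value of
$\beta$, are assumptions made for the sake of illustration; the
deformed-contour result is compared with the closed form of \eqn{anCB0},
which it should reproduce.
\begin{verbatim}
# Two-segment contour deformation for B0 (case 2: mu*Gamma - M*gamma > 0).
import mpmath as mp

def ln_minus(z):
    z = mp.mpc(z)
    return mp.log(z) - (2j*mp.pi if (z.real < 0 and z.imag > 0) else 0)

M, Gam, mu, gam = mp.mpf(3), mp.mpf(0.6), mp.mpf(1), mp.mpf(0.1)
beta = mp.mpf(1)                              # must lie in (beta_min, beta_max)
sP, m2 = mp.mpc(M**2, -M*Gam), mp.mpc(mu**2, -mu*gam)
chi = lambda x: -sP*x*(1 - x) + m2

w = 1j*(Gam/M)*beta                           # corner of the deformed path
x1, dx1 = lambda t: w*t,                 lambda t: w
x2, dx2 = lambda t: t/2 + w*(1 - t),     lambda t: mp.mpf(1)/2 - w

seg = lambda x, dx: mp.quad(lambda t: ln_minus(chi(x(t)))*dx(t), [0, 1])
b0_def = -2*(seg(x1, dx1) + seg(x2, dx2))     # B0 finite part, mu_R = 1

bc = mp.sqrt(1 - 4*m2/sP)                     # closed form, Eq.(anCB0)
print(b0_def, -mp.log(m2) + 2 - bc*ln_minus((bc + 1)/(bc - 1)))
\end{verbatim}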
\begin{figure}[ht!]
\begin{center}
\includegraphics[bb=69 252 505 681,width=9cm]{c_def}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{c_def}Example of contour deformations in computing a
scalar two-point function with equal (complex) internal masses and
complex $p^2$.}
\end{figure}
For a general recipe of contour deformation, we proceed by analyzing the case
where a Feynman diagram can be written as:
\bq
\int_0^1\!\!dx_1 \cdots dx_n\,
\sum_{i}\,A_i(x_1,\dots,x_n)\,\ln V_i(x_1,\dots,x_n),
\label{quad}
\end{equation}
where $V_i$ are multivariate polynomials in $x_1,\dots,x_n$, at most quadratic
in each variable (note that all one-loop diagrams can be written according to
\eqn{quad}, see Ref.~\cite{Ferroglia:2002mz}); actually the procedure works as
well when each $V_i$ is a quadratic form in, at least, one variable.
For each term in the sum, we select one variable $x \equiv x_i$ (among
$x_1,\dots,x_n$) and study the analytical continuation (assuming that
${\rm Im}[V]_{\rm real\;masses} < 0$)
\bq
\ln V \to \ln^- V
\qquad
\mbox{ with }
\qquad
V= a\,x^2 + b\,x + c,
\end{equation}
where $a,b,c$ are polynomials in the remaining Feynman variables.
The idea is to deform only the $x$ integration contour into the complex plane
(when needed) while keeping all other variables
($x_1,\dots,x_{i-1},x_{i+1},\dots,x_n$) on the real axis.
We define
\bq
a=a_r+i\,a_i
\qquad
b=b_r+i\,b_i
\qquad
c=c_r+i\,c_i
\qquad\quad
x=u+i\,v.
\end{equation}
The real and imaginary parts of $V$ are then given by:
\arraycolsep 0.14em\begin{eqnarray}
{\rm Re}V
&=&
a_r\,u^2 - 2\,a_i\,u\,v - a_r\,v^2 + b_r\,u - b_i\,v + c_r
\nonumber\\
&=&
a_r\,(u-u_c)^2 - 2\,a_i\,(u-u_c)\,(v-v_c) - a_r\,(v-v_c)^2 + \delta_r,
\nonumber\\
{\rm Im}V
&=&
a_i\,u^2 + 2\,a_r\,u\,v - a_i\,v^2 + b_i\,u + b_r\,v + c_i
\nonumber\\
&=&
a_i\,(u-u_c)^2 + 2\,a_r\,(u-u_c)\,(v-v_c) - a_i\,(v-v_c)^2 + \delta_i,
\end{eqnarray}
where we introduced the following auxiliary variables:
\bq
x_c= u_c + i\,v_c= -\,\frac{a^*\,b}{2\,|a|^2},
\qquad\qquad
\delta= \delta_r + i\,\delta_i= c - \frac{a^*\,b^2}{4\,|a|^2}.
\end{equation}
The curves ${\rm Re}V=0$ and ${\rm Im}V=0$ are hyperbolas with center
in $x_c$. We also define an auxiliary function,
\bq
U= - a_i\,{\rm Re}V + a_r\,{\rm Im}V=
2\,|a|^2\,(u-u_c)\,(v-v_c) + a_r\,\delta_i - a_i\,\delta_r.
\end{equation}
The curve $U=0$ is again an hyperbola with center in $x_c$ and asymptotes
parallel to the $u$ and $v$ axes.
The branch-cut of $\ln^- V$ in the $x$ complex plane is defined by:
\bq
\mbox{cut}:
\qquad
{\rm Re}V=0 \;\,\&\,\; {\rm Im}V \geq 0
\qquad
\Longleftrightarrow
\qquad
\left\{
\ba{c}
{\rm Re}V=0\;\,\&\,\;U\geq 0, \quad\mbox{ if } a_r>0, \\
{\rm Re}V=0\;\,\&\,\;U\leq 0, \quad\mbox{ if } a_r<0.
\end{array}
\right.
\end{equation}
First we study the intersection of the curve ${\rm Re}V=0$ with the real $u\,$
axis. If $\Delta= b_r^2-4\,a_r\,c_r<0$, the hyperbola ${\rm Re}V= 0$ never crosses
the real axis and there is no need of contour deformation (this is the first case
of \fig{defor}).
\begin{figure}[h]
\begin{tabular}{ccc}
\epsfig{file=h01.eps, scale=0.6}
\hspace{-3cm}
&
\epsfig{file=h02.eps, scale=0.6}
\hspace{-3cm}
&
\epsfig{file=h03.eps, scale=0.6}
\\
\epsfig{file=h21.eps, scale=0.6}
\hspace{-3cm}
&
\epsfig{file=h11.eps, scale=0.6}
\hspace{-3cm}
&
\epsfig{file=h22.eps, scale=0.6}
\end{tabular}
\vspace{-0.5cm}
\caption{Examples of deformation in the $x$-complex plane ($x=u+iv$) of the
integration contour $[0,1]$ for integral of $\ln^-V=\ln^-(ax^2+bx+c)$.}
\vspace{-0.1cm}
\label{defor}
\end{figure}
If $\Delta \geq 0$, the intersections are given by:
\bq
{\rm Re}V=0\;\,\&\,\;v=0
\qquad\Longrightarrow\qquad
u= u_0^\pm= \frac{-b_r\pm\sqrt{\Delta}}{2\,a_r}.
\end{equation}
If neither solution lies in $[0,1]$, a deformation is not needed (second plot
in \fig{defor}).
Even if $u_0^\pm \in [0,1]$, it can happen that the intersection occurs
for ${\rm Im}V < 0$ (as in the third case of \fig{defor}) and there is no need
of deformation to avoid the cut.
In order to understand whether this occurs, we can study where the zeros
of $V$ (${\rm Re}V= {\rm Im}V=0$) lie.
This system of equations has always two and only two solutions
$x_\pm=u_\pm+i\,v_\pm$, whose real and imaginary parts are given by:
\bq
u_\pm= u_c \pm \sqrt{ |\sigma| + \sigma_r },
\qquad\quad
v_\pm= v_c \pm\,\sign{\sigma_i}\sqrt{ |\sigma| - \sigma_r },
\qquad\qquad
\sigma= -\,\frac{a^*\,\delta}{2\,|a|^2}.
\end{equation}
Note that these points are also solutions of the equation $U= 0$ and (because of the
simple form of $U$) we can conclude that:
\bq
\mbox{cut}\!:
\;\;
{\rm Re}V\!=0\;\,\&\,\;{\rm Im}V \geq 0
\;\;
\Leftrightarrow
\,
\left\{
\ba{c}
{\rm Re}V\!=0\;\,\&\,\;U\geq 0
\;\;
\Leftrightarrow
\;\;
{\rm Re}V\!=0\;\,\&
\left\{
\ba{c}
v\geq v_+ \;\mbox{ if }\; u \!>\! u_c \\
v\leq v_- \;\mbox{ if }\; u \!<\! u_c \\
\end{array}
\right.
\;
\mbox{ if } a_r \!>\! 0,
\\
{\rm Re}V\!=0\;\,\&\,\;U\leq 0
\;\;
\Leftrightarrow
\;\;
{\rm Re}V\!=0\;\,\&
\left\{
\ba{c}
v\leq v_+ \;\mbox{ if }\; u \!>\! u_c \\
v\geq v_- \;\mbox{ if }\; u \!<\! u_c \\
\end{array}
\right.
\;
\mbox{ if } a_r \!<\! 0.
\end{array}
\right.
\end{equation}
At this point we have all information to fix the new integration contour,
starting from $x=0$ and ending at $x=1$ without crossing the branch-cut.
Of course, as long as the cut is not crossed, all integration contours are
equivalent and give the same result: it may fairly be said that we have
some freedom in defining the deformation and that, at the same time, we can
control the correctness of the result by using different paths.
The general situation is depicted in the fourth plot of \fig{defor} and
the new integration contour is defined by seven segments:
\arraycolsep 0.14em\begin{eqnarray}
(1) &\quad& x \,=\, - \alpha_1\,t, \nonumber\\
(2) &\quad& x \,=\, - \alpha_1 + i\,\beta_1\,t, \nonumber\\
(3) &\quad& x \,=\, - \alpha_1\,(1-t) + \alpha_c\,t + i\,\beta_1, \nonumber\\
(4) &\quad& x \,=\, \alpha_c + i\,\beta_1\,(1-t) + \beta_2\,t, \nonumber\\
(5) &\quad& x \,=\, \alpha_c\,(1-t) + (1+\alpha_2)\,t + i\,\beta_2, \nonumber\\
(6) &\quad& x \,=\, 1 + \alpha_2 + i\,\beta_2\,(1-t), \nonumber\\
(7) &\quad& x \,=\, (1+\alpha_2)\,(1-t) + t.
\label{newpath}
\end{eqnarray}
The coefficients $\alpha_1$, $\beta_1$, $\alpha_2$, $\beta_2$ and $\alpha_c$
can be fixed according to the principle of the minimal deformation
to avoid crossing the cut. This gives the following conditions:
\arraycolsep 0.14em\begin{eqnarray}
&&
\ba{lccccl}
\;\;\; \alpha_1 > \;\;\; |u_-|
&
\quad {\rm if} \quad
&
0 \leq u_0^- \leq 1
&
\quad\,\&\,\quad
&
u_- \leq 0,
&
\qquad \alpha_1= 0 \quad {\rm otherwise},
\\[+0.2cm]
\left\{
\ba{l}
\beta_1 > \;\;\; |v_-| \\
\beta_1 < - \, |v_-|
\end{array}
\right.
&
\quad{\rm if}\quad
&
0 \leq u_0^- \leq 1
&
\quad\,\&\,\quad
&
\ba{l}
v_- \geq 0 \;\,\&\,\; a_r>0, \\
v_- \leq 0 \;\,\&\,\; a_r<0,
\end{array}
&
\qquad \beta_1= 0 \quad {\rm otherwise},
\\[+0.4cm]
\;\;\; \alpha_2 > \;\;\; |u_+|
&
\quad {\rm if} \quad
&
0 \leq u_0^+ \leq 1
&
\quad\,\&\,\quad
&
u_+ \geq 1,
&
\qquad \alpha_2= 0 \quad {\rm otherwise},
\\[+0.2cm]
\left\{
\ba{ll}
\beta_2 < - \, |v_+| \\
\beta_2 > \;\;\; |v_+|
\end{array}
\right.
&
\quad{\rm if}\quad
&
0 \leq u_0^+ \leq 1
&
\quad \,\&\,\quad
&
\ba{ll}
v_+ \leq 0 \;\,\&\,\; a_r>0, \\
v_+ \geq 0 \;\,\&\,\; a_r<0,
\end{array}
&
\qquad \beta_2= 0 \quad {\rm otherwise},
\\[+0.4cm]
\left\{
\ba{ll}
\alpha_c = \;\;\;\; u_c \\
\alpha_c = \;\;\;\;\; 0 \\
\alpha_c = \;\;\;\;\; 1 \\
\end{array}
\right.
&
\quad{\rm if}\quad
&
\ba{l}
0 \leq u_0^- \leq u_0^+ \leq 1, \\
u_0^- \leq 0 \leq u_0^+ \leq 1, \\
0 \leq u_0^- \leq 1 \leq u_0^+,
\end{array}
&&&
\qquad \alpha_c= 0 \quad {\rm otherwise}.
\end{array}
\label{coeff}
\end{eqnarray}
The case where all coefficients vanish corresponds to non-deformation:
in this case the paths in \eqn{newpath} are
\bq
(1),\,(2),\,(3),\,(4)\;\; x=0,\quad(5)\;\;x=t,\quad(6),\,(7)\;\;x=1,
\end{equation}
and refer to $ 0 \leq x \leq 1$ on the real axis.
Note that the case $\Delta < 0$ (where $u_0^\pm$ are not defined) belongs to
this class.
This general recipe for contour deformation works also in special cases
where not all segments are needed.
For example, in the fifth plot of \fig{defor} the new contour consists of
only four segments: using in \eqn{newpath} the conditions of \eqn{coeff}
($\alpha_2=\beta_2=0,\alpha_c=1$), the last three segments reduce in this
case to the point $x=1$.
We can now consider in this framework the example of a $B_0$ function with two
equal masses. In this case we have:
\bq
V(x) =
-s_{\scriptscriptstyle{P}}\,x\,(1-x)+m^2 =
(-M^2+i\,M\,\Gamma)\,x\,(1-x)+\mu^2-i\,\mu\,\gamma,
\end{equation}
which corresponds to
\arraycolsep 0.14em\begin{eqnarray}
&&
x_c= \frac{1}{2}, \qquad\quad
\delta=
c-\frac{a}{4}=
\mu^2 - \frac{M^2}{4} - i\,\Big[ \mu\,\gamma - \frac{M\,\Gamma}{4} \Big],
\nonumber\\
&&
\sigma =
\frac{1}{8} - \frac{a^*c}{2|a|^2} =
\frac{1}{8} - \frac{\mu}{2M}\frac{\mu M+\gamma\Gamma}{M^2+\Gamma^2}
- i\frac{\mu}{2M}\frac{\mu\Gamma -M\gamma}{M^2+\Gamma^2}.
\end{eqnarray}
Since $a_r > 0$, the cut crosses the segment $[0,1]$ when $\sigma_i\leq 0$
(implying that $v_-\geq 0$, $v_+\leq 0$), a situation which occurs for
$\mu\Gamma-M\gamma\geq 0$.
It can be verified by explicit calculation that, in this case, we always have
$0\leq u_\pm\leq 1$, which corresponds to the situation depicted in the last
diagram of \fig{defor} and the deformation requires five segments
($\alpha_1=\alpha_2=0$, i.e. the first and the last segment in \eqn{newpath}
reduce to a point).
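The diagnostic part of the recipe (location of the intersections $u_0^{\pm}$
and of the zeros $x_{\pm}$ of $V$) is summarized in the following sketch
(Python with {\tt mpmath}); the explicit construction of the path,
\eqn{newpath} and \eqn{coeff}, is not repeated here, and the function name and
inputs are only illustrative. Applied to the $B_0$ example above it confirms,
for the inputs below, the situation just described ($u_0^{\pm} \in [0,1]$,
$v_- \geq 0$, $v_+ \leq 0$).
\begin{verbatim}
# Diagnostics for the deformation recipe: given V = a x^2 + b x + c, locate
# the intersections u0+- of Re V = 0 with the real axis and the zeros
# x+- = u+- + i v+- of V (centre x_c, delta, sigma as defined in the text).
import mpmath as mp

def diagnostics(a, b, c):
    a, b, c = mp.mpc(a), mp.mpc(b), mp.mpc(c)
    xc    = -mp.conj(a)*b/(2*abs(a)**2)
    delta = c - mp.conj(a)*b**2/(4*abs(a)**2)
    sigma = -mp.conj(a)*delta/(2*abs(a)**2)
    up    = xc.real + mp.sqrt(abs(sigma) + sigma.real)
    um    = xc.real - mp.sqrt(abs(sigma) + sigma.real)
    vsign = 1 if sigma.imag >= 0 else -1          # sign(sigma_i), sign(0)=+1
    vp    = xc.imag + vsign*mp.sqrt(abs(sigma) - sigma.real)
    vm    = xc.imag - vsign*mp.sqrt(abs(sigma) - sigma.real)
    Delta = b.real**2 - 4*a.real*c.real
    if Delta < 0:
        u0 = None                      # Re V = 0 never meets the real axis
    else:
        u0 = sorted([(-b.real - mp.sqrt(Delta))/(2*a.real),
                     (-b.real + mp.sqrt(Delta))/(2*a.real)])
    return xc, sigma, u0, (um, vm), (up, vp)

# B0 example of the text: V(x) = -s_P x (1-x) + m^2, i.e. a = s_P, b = -s_P
M, Gam, mu, gam = mp.mpf(3), mp.mpf(0.6), mp.mpf(1), mp.mpf(0.1)
sP, m2 = mp.mpc(M**2, -M*Gam), mp.mpc(mu**2, -mu*gam)
print(diagnostics(sP, -sP, m2))
\end{verbatim}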
\subsection{Differential operators in a complex domain \label{BSTc}}
The procedure of analytical continuation at the basis of \eqn{quad} deserves
an additional comment: how can we apply a differential operator at the
integrand level, in order to get \eqn{quad}? As an example we consider
the integral of \eqn{eqdefC0} on which we want to apply the BST algorithm:
\bq
C_0 = \intfxy{x}{y}\,\chi^{-1+\epsilon/2}(x,y), \qquad \epsilon \to 0^+,
\end{equation}
where $\chi$ is a quadratic form. As we know \eqn{quad} follows from BST
functional relation; in order to apply the BST algorithm in the
complex domain ($\chi \in C[x,y]$) we introduce
\bq
\Bigl[ \chi \Bigr]_{\pm}^{\mu} = \exp \lpar \mu\,\ln^{\pm}\,\chi\rpar,
\end{equation}
and distort the integration path so that it never crosses the positive
imaginary axis of $\chi^{-}$ (or the negative imaginary axis of $\chi^{+}$ ).
The ($-$) analytical continuation of $C_0$ is defined by
\bq
C_0^- = \int_{\Lambda = 0}\,dx dy\,\Bigl[ \chi(x,y) \Bigr]_-^{-1+\epsilon/2},
\end{equation}
where $\Lambda(x,y) = 0$ is the implicit equation for the integration contour.
In practice we change variables, $x= \alpha_i\,t+\beta_i$
($t \in [0,1]$) with $i=1,\dots,n$, $n$ being the number of segments needed to
avoid crossing the cut (e.g. see \eqn{newpath}).
The BST functional relation~\cite{Ferroglia:2002mz} for quadratic forms
and the corresponding linear differential operator are
\bq
\chi^{\mu}\lpar [x] \rpar =
{\cal D}\lpar \mu\,,\,[x]\,,\,[\partial_x]\rpar\,
\chi^{\mu+1}\lpar [x] \rpar, \qquad
{\cal D} = \frac{1}{B}\,\Bigl[ 1 - \frac{1}{2\,(\mu+1)}\,\sum_{i=1}^n\,
\lpar x_i - X_i\rpar\,\partial_{x_i} \Bigr],
\label{BSTexp}
\end{equation}
where $[x] = x_1\,,\dots\,,x_n$.
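For illustration, \eqn{BSTexp} can be verified directly in one dimension.
Writing $\chi(x) = a\,x^2 + b\,x + c$ and assuming the standard identifications
$X = -\,b/(2\,a)$ for the co-factor and $B = c - b^2/(4\,a)$ for the BST factor,
an elementary computation gives
\bq
\Bigl[ 1 - \frac{1}{2\,(\mu+1)}\,\lpar x - X \rpar\,\partial_x \Bigr]\,\chi^{\mu+1}(x) =
\chi^{\mu+1}(x) - \frac{\lpar 2\,a\,x + b \rpar^2}{4\,a}\,\chi^{\mu}(x) =
\lpar c - \frac{b^2}{4\,a} \rpar\,\chi^{\mu}(x) = B\,\chi^{\mu}(x),
\end{equation}
which reproduces $\chi^{\mu} = {\cal D}\,\chi^{\mu+1}$ in this simple case.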
Consider the following integral,
\bq
F_- = \int_{z_i\;\Gamma}^{z_f}\,dz\,\Bigl[\chi(z)\Bigr]_-^{\mu}\;,
\label{Fminus}
\end{equation}
where $\Gamma$ is a curve connecting $z_i$ and $z_f$ which never
crosses the positive imaginary axis of $\chi$ for $z \in \Gamma$.
Let $\chi(z_f)$ lie in the second quadrant of the complex $\chi\,$-plane and $\chi(z_i)$ outside of it;
let $z_0$ be the point where ${\rm{Im}}\,\chi(z_0) = 0$, ${\rm{Re}}\,\chi(z_0) < 0$.
Thanks to the $-\,$ prescription the integrand in \eqn{Fminus} is a continuous
function of $z$ (for $z \in \Gamma$) and we can write
\bq
F_- = F^{(1)}_- + F^{(2)}_- =
\int_{z_i\;\Gamma}^{z_0}\,dz\,\Bigl[\chi(z)\Bigr]_-^{\mu} +
\int_{z_0\;\Gamma}^{z_f}\,dz\,\Bigl[\chi(z)\Bigr]_-^{\mu}.
\end{equation}
In the first integral $\bigl[\chi\bigr]_-^{\mu} = \chi^{\mu}$ and we can apply
the BST relation of \eqn{BSTexp}. In the second one we find
\arraycolsep 0.14em\begin{eqnarray}
F^{(2)}_- &=& \int_{z_0\;\Gamma}^{z_f}\,dz\,\Bigl[\chi(z)\Bigr]_-^{\mu}
=
\int_{z_0\;\Gamma}^{z_f}\,dz\, \exp\{-\,2\,i\,\pi\,(\mu+1)\}\,\chi^{\mu}(z)
\nonumber\\
{}&=&
\int_{z_0\;\Gamma}^{z_f}\,dz\, \exp\{-\,2\,i\,\pi\,(\mu+1)\}\,
{\cal D}\lpar \mu,z,\partial_z\rpar\,\chi^{\mu+1}(z)
=
\int_{z_0\;\Gamma}^{z_f}\,dz\, {\cal D}\lpar \mu,z,\partial_z\rpar\,
\Bigl[ \chi(z)\Bigr]_-^{\mu+1},
\label{BSTcmplx}
\end{eqnarray}
where we have used $\exp\{-2 i \pi \mu\} = \exp\{- 2 i \pi (\mu+1)\}$, showing
the extension of the BST algorithm into the second sheet. Note that the BST
relation in the second equality of \eqn{BSTcmplx} is of a purely algebraic nature
and that the integration over $\Gamma$ will always be parametrized in terms
of a $\Gamma(t)\,:\,t \in R$ so that $\chi \in C[t]$ and ${\cal D} \in
C[t]\,< \partial_t >$.
Each segment of \eqn{newpath} is of the type considered in $F^{(1)}_-$,
$F^{(2)}_-$ or $F_-$ and therefore we can apply the BST algorithm to $C_0^-$.
It goes without saying that this is the correct procedure, as opposed to applying
BST first and performing the analytical continuation only in a second step.
The result reads as follows:
\bq
C_0^-= \sum_{i=1}^n\,\alpha_i\,\frac{J_i}{B_3},
\qquad
J_i =
\intsx{x}\!\int_0^{\alpha_i\,x+\beta_i}\!\!\!\!\!\!dy\,
\ln\!\!^-\!\chi (\alpha_i x\!+\!\beta_i,y)
- \frac{1}{2}\,\sum_{j=1}^{4}\!\int_0^{a_{ij}}\!\!\!dx\,A_{ij}\,\ln^- \chi_{ij}
+ \frac{\alpha_i}{2} + \beta_i,
\end{equation}
\bq
\chi_{i1} = \chi( \alpha_i + \beta_i, x ), \qquad
\chi_{i2} = \chi( \beta_i, x ), \qquad
\chi_{i3} = \chi( \alpha_ix + \beta_i, \alpha_ix + \beta_i ), \qquad
\chi_{i4} = \chi( \alpha_ix + \beta_i, 0 ),
\end{equation}
\bq
A_{i1} = \frac{\alpha_i\!+\!\beta_i\!-\!X}{\alpha_i},\quad
A_{i2} = -\,\frac{\beta_i\!-\!X}{\alpha_i},\quad
A_{i3} = X\!-\!Y,\quad
A_{i4} = Y,\quad
a_{i1}\!= \alpha_i\!+\!\beta_i, \quad
a_{i2}\!= \beta_i, \quad
a_{i3}\!= a_{i4}\!= 1,
\end{equation}
where $B_3$ is the BST factor and $X,Y$ are the BST co-factors.
With this example we have shown how to apply differential operators when
complex momenta are present: first the analytical continuation has to be
performed together with the deformation of the integration contour and just at
the end the differential operator can be correctly applied.
In conclusion we have shown a practical implementation of the concept that
the pole at the mass of a stable particle can move into other Riemann sheets
where it describes an unstable particle.
It is worth noting that the cases encountered here, where both the internal
masses and the Mandelstam invariants are complex, have not been discussed before in
the literature, although this step represents the logical extension of the
complex-mass scheme, allowing for a meaningful introduction of pseudo-observables.
\section{Including QED(QCD) corrections \label{theQs}}
In this section we will consider the inclusion of QED(QCD) corrections, both
virtual and real. The choice $s = s_{\ssH}$ in \eqn{PO} is dictated by the requirement
of a gauge-independent definition of pseudo-observables and follows, once
again, from Nielsen identities.
Consider a final state where the inclusion of real QED(QCD) corrections is
mandatory in order to obtain an infrared-finite quantity, e.g. $i \to H \to
\overline b b$.
Here, at one loop, we have wave-function renormalization factors for the
external fermions and vertex corrections; the QED part generates, in the
so-called $\lpar \epsilon\,,\,m_b \rpar$ regulator scheme (dimensional
regularization for the infrared and masses for the collinear limit), a
simple infrared pole and double as well as simple collinear logarithms.
According to our recipe the QED(QCD) vertex correction should be evaluated
at complex Higgs momentum squared.
Let us define
\bq
s_{\ssH} = x_{{\scriptscriptstyle{H}}}\,\mu^2_{\ssH}, \quad
\beta^2_c = 1 - 4\,\frac{m^2_b}{s_{\ssH}}, \quad
\beta^2 = 1 - 4\,\frac{m^2_b}{\mu^2_{\ssH}}.
\end{equation}
The residue of the infrared (virtual) pole reads as follows
\bq
R_{\rm virt} = {\rm{Re}}\,\frac{\beta}{\beta_c}\,\Bigl[
\frac{\beta^2}{x_{{\scriptscriptstyle{H}}}} + 2 - \frac{1}{x_{{\scriptscriptstyle{H}}}}\Bigr]\,
\ln^{-}\frac{\beta_c - 1}{\beta_c + 1}.
\end{equation}
The infrared pole from real emission originates from the end-point
singularity in the phase space integration, where $P_{{\scriptscriptstyle{H}}} = p_b + p_{\overline b}$
and $P^2_{{\scriptscriptstyle{H}}}$ is arbitrary but real, unless one is willing to extend the
phase space definition into the complex plane where $\delta$ functions are
defined in terms of contour integrals~\cite{Durand}.
Using the most obvious choice, namely $P^2_{{\scriptscriptstyle{H}}} = - \mu^2_{\ssH}$ we obtain for the
(real) infrared residue
\bq
R_{\rm real} = \lpar \beta^2 + 1 \rpar\,\ln\frac{1-\beta}{1+\beta}.
\end{equation}
Therefore, as expected, the cancellation of infrared divergences is spoiled
by the need to define virtual corrections at a complex value of $s$.
However, the fact that $Z^{-1/2}_{{\scriptscriptstyle{H}}}(s)\,V_f(s)$ is gauge-parameter
independent only at the complex pole does not exclude a gauge-independent
subset of corrections that can be evaluated at arbitrary $s$.
Consider the situation at the one-loop level; here, in front of the
$\ord{\alpha}$ QED corrections we use $Z_{{\scriptscriptstyle{H}}} = 1$ and the one-loop vertex,
$V^{\rm{\scriptscriptstyle{QED}}}_{\overline b b}$, is gauge independent for all values of $s$.
If we introduce
\bq
Z_{{\scriptscriptstyle{H}}} = 1 + \frac{g^2}{16\,\pi^2}\,\delta Z_{{\scriptscriptstyle{H}}},
\end{equation}
for the wave-function renormalization factor, we can use
\bq
S\lpar H_c \to \overline b b\rpar =
-\,\frac{g}{2}\,\frac{m_b}{M_{_W}}\,\Bigl\{
1 + \frac{g^2}{16\,\pi^2}\,\Bigl[ V^{\rm{\scriptscriptstyle{EW}}}_{\overline b b}(s_{{\scriptscriptstyle{H}}}) -
\frac{1}{2}\,\delta Z_{{\scriptscriptstyle{H}}}(s_{\ssH}) \Bigr] +
\frac{\alpha}{4\,\pi}\,V^{\rm{\scriptscriptstyle{QED}}}_{\overline b b}(\mu^2_{\ssH}) \Bigr\},
\end{equation}
thus preserving gauge invariance without spoiling infrared safety.
Following a well established convention it is also convenient to define a
{\em deconvoluted} pseudo-observable where QED(QCD) corrections are
subtracted according to theory.
There is an intriguing alternative for the treatment of QED(QCD) corrections;
since the definition of the Higgs boson mass is not unique, we could keep
it as a free parameter, $M_{_H}$.
Then, cancellation of infrared poles at the one-loop level requires
\bq
s_{\ssH} = x_{{\scriptscriptstyle{H}}}\,M^2_{_H}, \quad
\beta^2_c = 1 - 4\,\frac{m^2_b}{s_{\ssH}}, \quad
\beta^2 = 1 - 4\,\frac{m^2_b}{M^2_{_H}},
\end{equation}
and $R_{\rm virt} = R_{\rm real}$. Therefore, there is a value of $M_{_H}$ which is
infrared safe,
\bq
M^2_{_H} =
| s_{\ssH} |\,\Bigl\{ 1 + 2\,\frac{m^2_b}{|s_{\ssH}|}\,
\Bigl[ 1 - \mu_{\ssH}\,\lpar \mu^2_{\ssH} + \gamma^2_{{\scriptscriptstyle{H}}}\rpar^{-1/2}\Bigr] +
\ord{m^4_b} \Bigr\}.
\end{equation}
\begin{figure}[h]
\vspace{0.3cm}
$$
\begin{picture}(140,30)(0,0)
\SetScale{0.8}
\SetWidth{1.8}
\DashLine(0,0)(40,0){3}
\ArrowLine(100,-35)(140,-35)
\ArrowLine(140,35)(100,35)
\Line(70,-17.5)(40,0)
\Line(70,17.5)(40,0)
\ArrowLine(70,17.5)(70,-17.5)
\ArrowLine(70,-17.5)(100,-35)
\Photon(100,-35)(100,35){2}{7}
\ArrowLine(100,35)(70,17.5)
\end{picture}
\qquad\qquad
\begin{picture}(140,30)(0,0)
\SetScale{0.8}
\SetWidth{1.8}
\DashLine(0,0)(40,0){3}
\ArrowLine(100,-35)(140,-35)
\ArrowLine(140,35)(100,35)
\Line(70,-17.5)(40,0)
\Line(70,17.5)(40,0)
\ArrowLine(70,17.5)(100,0)
\Line(70,-17.5)(100,-35)
\Photon(100,35)(100,0){2}{4}
\ArrowLine(100,0)(100,-35)
\ArrowLine(100,35)(70,17.5)
\end{picture}
$$
\vspace{0.5cm}
\caption[]{
Examples of mixed electroweak-QED two-loop diagrams contributing to $H \to \overline b b$.
The solid lines, attached to the Higgs boson (dash-line) represent
$Z/\phi^0$ or $W/\phi$ fields.}
\label{MTL}
\end{figure}
The main question in establishing the consistency of the procedure,
i.e. the definition of QED(QCD) corrections and their subsequent deconvolution,
is whether it can be extended to higher orders.
Using the following expansions
\bq
Z_{{\scriptscriptstyle{H}}} = 1 + \sum_{n=1}^{\infty}\,\frac{g^{2 n}}{16\,\pi^2}\,
\delta Z^{(n)}_{{\scriptscriptstyle{H}}},
\quad
V_{\overline b b} = \sum_{n=1}^{\infty}\,
\frac{g^{2 n -1}}{16\,\pi^2}\,V^{(n-1)}_{\overline b b},
\quad
V^{(0)}_{\overline b b} = -\,\frac{m_b}{2\,M_{_W}}
\end{equation}
and working at $\ord{g^5}$ we will have terms like
\bq
\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\,V^{(1)\,;\,\rm{\scriptscriptstyle{QED}}}_{\overline b b},
\label{mixedZ}
\end{equation}
which are of the mixed type, electroweak-QED, and where $Z_{{\scriptscriptstyle{H}}}$
cannot be evaluated at arbitrary values of $s$.
In \eqn{mixedZ} $V^{(1)\,;\,\rm{\scriptscriptstyle{QED}}}_{\overline b b}$ is the one-loop QED triangle
contributing to $H \to \overline b b$.
However, we also have mixed two-loop diagrams, as given in \fig{MTL}.
There is a well-known identity which allows us to extract the infrared
behavior of these two-loop diagrams, in terms of the product of two
one-loop vertices plus an infrared-finite remainder (see \cite{Passarino:2006gv}
for the explicit decomposition in the scalar case).
The decomposition for the scalar case is illustrated in \fig{IRdec} where
the external lines in both one-loop vertices are on-shell.
By scalar we mean those contributions that do not have powers of the
integration momentum in the numerator.
\begin{figure}[h]
\vspace{0.2cm}
$$
\raisebox{0.1cm}{
\begin{picture}(110,30)(0,0)
\SetScale{0.8}
\SetWidth{1.8}
\DashLine(0,0)(40,0){3}
\Line(140,-35)(100,-35)
\Line(140,35)(100,35)
\Line(70,-17.5)(40,0)
\Line(70,17.5)(40,0)
\Line(70,17.5)(70,-17.5)
\Line(70,-17.5)(100,-35)
\Photon(100,-35)(100,35){2}{7}
\Line(100,35)(70,17.5)
\end{picture}}
=
\quad
\raisebox{0.1cm}{
\begin{picture}(80,30)(0,0)
\SetScale{0.6}
\SetWidth{2}
\DashLine(0,0)(40,0){3}
\Line(120,-35)(80,-35)
\Line(120,35)(80,35)
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\Line(80,-35)(80,35)
\end{picture}}
\otimes
\;
\raisebox{0.1cm}{
\begin{picture}(80,30)(0,0)
\SetScale{0.6}
\SetWidth{2}
\DashLine(0,0)(40,0){3}
\Line(120,-35)(80,-35)
\Line(120,35)(80,35)
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\Photon(80,-35)(80,35){2}{7}
\end{picture}}
\quad
+
\quad
\mbox{IR finite}
$$
\vspace{0.2cm}
\caption[]{Infrared decomposition of a mixed electroweak-QED two-loop diagram. Here the
scalar case is presented, i.e. the spin structure has been completely neglected
and, for instance, the wavy line represents a scalar massless line.}
\label{IRdec}
\end{figure}
As we have seen, the identity holds at the amplitude level, reflecting
the factorization of virtual infrared corrections and the fact that virtual
infrared poles are always coming from $C_0\,$-functions (the scalar ones).
These identities follow from the fact that any diagram with an infrared
photon line of momentum $q_i + K$, where $K$ is a certain combination of
external momenta as well as of the other loop momenta, gives an infrared
divergence equivalent to the same diagram evaluated at $q_i = -K$.
We thus see that the infrared decomposition into products of tensor
integrals times infrared $C_0\,$-functions follows trivially.
The explicit form of infrared factorization, at the amplitude level, is
illustrated in \fig{IRampdec} which shows a class of diagrams contributing to
the two-loop amplitude for $H \to \overline b b$.
\begin{figure}[ht!]
\vspace{0.2cm}
$$
\raisebox{0.1cm}{
\begin{picture}(110,30)(0,0)
\SetScale{0.8}
\SetWidth{1.8}
\DashLine(0,0)(40,0){3}
\ArrowLine(100,-35)(140,-35)
\ArrowLine(140,35)(100,35)
\Line(70,-17.5)(40,0)
\Line(70,17.5)(40,0)
\ArrowLine(70,17.5)(70,-17.5)
\ArrowLine(70,-17.5)(100,-35)
\Photon(100,-35)(100,35){2}{7}
\ArrowLine(100,35)(70,17.5)
\Text(110,35)[cb]{$v$}
\Text(110,-40)[cb]{$\overline u$}
\end{picture}}
\quad
=
\quad
\frac{g^2 \stw^2}{16\,\pi^2}\,Q^2_b\,( M^2_{_H} - 2\,m^2_b )
\;
\raisebox{0.1cm}{
\begin{picture}(80,30)(0,0)
\SetScale{0.6}
\SetWidth{2}
\DashLine(0,0)(40,0){3}
\ArrowLine(80,-35)(120,-35)
\ArrowLine(120,35)(80,35)
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\ArrowLine(80,35)(80,-35)
\Text(70,28)[cb]{$v$}
\Text(70,-33)[cb]{$\overline u$}
\end{picture}}
\!\!
\otimes
\;\;
C^{\rm IR}_0
\qquad + \quad \hbox{IR finite}
$$
\vspace{0.2cm}
\caption[]{Infrared decomposition of a mixed electroweak-QED two-loop amplitude.
Solid lines represent $Z, \phi^0$ or $W, \phi$ particles; Dirac spinors for
the external lines are included and $C^{\rm IR}_0$ is the scalar,
infrared divergent, three-point function.}
\label{IRampdec}
\end{figure}
When added to the contribution coming from
$\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\,V^{(1)}_{\overline b b}$, the combination
\bq
V^{(1) \rm{\scriptscriptstyle{EW}}}_{\overline b b}(s) - \frac{1}{2}\,\delta Z^{(1)}_{{\scriptscriptstyle{H}}}(s)
\label{combi}
\end{equation}
arises naturally in front of an infrared $C_0$.
Therefore, our recipe will be to evaluate \eqn{combi} at $s = s_{\ssH}$ while
keeping the remaining QED-like $C_0\,$functions (the infrared divergent
ones) at $s= \mu^2_{\ssH}$.
The difference with the original diagram is non-resonant and mixes with
infrared-divergent background contributions, e.g. from boxes.
For one-loop real emission we have diagrams as illustrated in
the l.h.s. of \fig{RIRdec} where we use $p^2_{{\scriptscriptstyle{H}}}= -s_{\ssH}$ and where the
infrared singularity arises from the end-point of phase space integration
which is controlled by a real Higgs boson momentum.
The corresponding amplitude is gauge independent by construction, a fact
that can be easily seen in the infrared divergent soft approximation,
the r.h.s. of \fig{RIRdec}, where we have introduced the eikonal
factor
\bq
J_{\rm eik}(p)= -\,Q_b\,\frac{\spro{p}{\epsilon}}{\spro{p}{k}},
\qquad \spro{\epsilon(k)}{k}= 0,
\label{eikf}
\end{equation}
$\epsilon$ being the photon polarization. The vertex correction, first diagram in
the r.h.s. of \fig{RIRdec}, when summed with
$Z^{-1/2}_{{\scriptscriptstyle{H}}}\,\otimes\,{\rm LO}$ gives a gauge invariant contribution if
both are evaluated at the Higgs complex pole.
\begin{figure}[h]
\vspace{0.2cm}
\begin{eqnarray*}
\raisebox{0.1cm}{
\begin{picture}(70,30)(5,0)
\SetScale{0.5}
\SetWidth{2.2}
\DashLine(5,0)(40,0){3}
\ArrowLine(80,-35)(130,-35)
\ArrowLine(130,35)(105,35)
\ArrowLine(105,35)(80,35)
\Photon(105,35)(130,0){2}{4}
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\ArrowLine(80,35)(80,-35)
\LongArrow(135,45)(115,45) \Text(68,28)[cb]{$p_2$}
\LongArrow(135,-45)(115,-45) \Text(68,-33)[cb]{$p_1$}
\end{picture}}
\raisebox{0.1cm}{
\begin{picture}(15,0)(0,0)
\LongArrow(0,0)(15,0)
\Text(7,5)[cb]{soft}
\end{picture}}
\quad
\raisebox{0.1cm}{
\begin{picture}(55,30)(5,0)
\SetScale{0.5}
\SetWidth{2.2}
\DashLine(5,0)(40,0){3}
\ArrowLine(80,-35)(120,-35)
\ArrowLine(120,35)(80,35)
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\ArrowLine(80,35)(80,-35)
\end{picture}}
\otimes\;
J_{\rm eik}(p_2)\,,
&\qquad&
\raisebox{0.1cm}{
\begin{picture}(70,30)(5,0)
\SetScale{0.5}
\SetWidth{2.2}
\DashLine(5,0)(40,0){3}
\ArrowLine(80,35)(130,35)
\ArrowLine(105,-35)(130,-35)
\ArrowLine(80,-35)(105,-35)
\Photon(105,-35)(130,0){2}{4}
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\ArrowLine(80,35)(80,-35)
\LongArrow(135,45)(115,45) \Text(68,28)[cb]{$p_2$}
\LongArrow(135,-45)(115,-45) \Text(68,-33)[cb]{$p_1$}
\end{picture}}
\raisebox{0.1cm}{
\begin{picture}(15,0)(0,0)
\LongArrow(0,0)(15,0)
\Text(7,5)[cb]{soft}
\end{picture}}
\quad
\raisebox{0.1cm}{
\begin{picture}(55,30)(5,0)
\SetScale{0.5}
\SetWidth{2.2}
\DashLine(5,0)(40,0){3}
\ArrowLine(80,-35)(120,-35)
\ArrowLine(120,35)(80,35)
\Line(80,-35)(40,0)
\Line(80,35)(40,0)
\ArrowLine(80,35)(80,-35)
\end{picture}}
\otimes\;
J_{\rm eik}(p_1)\,,
\\[1.4cm]
\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\;
\raisebox{0.1cm}{
\begin{picture}(45,30)(10,0)
\SetScale{0.5}
\SetWidth{2}
\DashLine(10,0)(40,0){3}
\ArrowLine(40,0)(90,-35)
\ArrowLine(90,35)(65,17.5)
\ArrowLine(65,17.5)(40,0)
\Photon(65,17.5)(90,0){2}{4}
\LongArrow(90,48.75)(75,40) \Text(42,28)[cb]{$p_2$}
\LongArrow(90,-48.75)(75,-40) \Text(42,-37)[cb]{$p_1$}
\end{picture}}
\raisebox{0.1cm}{
\begin{picture}(15,0)(0,0)
\LongArrow(0,0)(15,0)
\Text(7,5)[cb]{soft}
\end{picture}}
\quad
\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\;
\raisebox{0.1cm}{
\begin{picture}(25,30)(10,0)
\SetScale{0.5}
\SetWidth{2}
\DashLine(10,0)(40,0){3}
\ArrowLine(40,0)(70,35)
\ArrowLine(70,-35)(40,0)
\end{picture}}
\otimes\,
J_{\rm eik}(p_2)\,,
&\qquad&
\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\;
\raisebox{0.1cm}{
\begin{picture}(45,30)(10,0)
\SetScale{0.5}
\SetWidth{2}
\DashLine(10,0)(40,0){3}
\ArrowLine(40,0)(90,35)
\ArrowLine(90,-35)(65,-17.5)
\ArrowLine(65,-17.5)(40,0)
\Photon(65,-17.5)(90,0){2}{4}
\LongArrow(90,48.75)(75,40) \Text(42,28)[cb]{$p_2$}
\LongArrow(90,-48.75)(75,-40) \Text(42,-37)[cb]{$p_1$}
\end{picture}}
\raisebox{0.1cm}{
\begin{picture}(15,0)(0,0)
\LongArrow(0,0)(15,0)
\Text(7,5)[cb]{soft}
\end{picture}}
\quad
\delta Z^{(1)}_{{\scriptscriptstyle{H}}}\;
\raisebox{0.1cm}{
\begin{picture}(25,30)(10,0)
\SetScale{0.5}
\SetWidth{2}
\DashLine(10,0)(40,0){3}
\ArrowLine(40,0)(70,35)
\ArrowLine(70,-35)(40,0)
\end{picture}}
\otimes\,
J_{\rm eik}(p_1)\,,
\end{eqnarray*}
\vspace{0.2cm}
\caption[]{
Examples of the infrared decomposition of $\ord{g^3}$ electroweak diagrams
with real photon emission. The last term in the r.h.s of the equation is
the corresponding eikonal factor of \eqn{eikf}.
}
\label{RIRdec}
\end{figure}
There is another example where the introduction of QED corrections seems
to be controversial if only internal masses are made complex\footnote{This point
was raised by Thomas Binoth in one of our last conversations.}. Let us consider
the pseudo-observable $\Gamma \lpar H \to W^+ W^-\rpar$, with on-shell
external vector bosons and internal complex masses: the infrared behavior
of the one-loop corrections is reducible to a scalar vertex
\bq
C_0\lpar -M^2_{_W}\,,\,-M^2_{_W}\,,\,-s\,,\,s_{{\scriptscriptstyle{W}}}\,,0\,,s_{{\scriptscriptstyle{W}}}\rpar,
\end{equation}
where the difference $s_{{\scriptscriptstyle{W}}} - M^2_{_W}$ ($s_{{\scriptscriptstyle{W}}}$ being the $W$ complex pole) acts
as an infrared regulator, removing the infrared virtual pole,
\bq
C_0\lpar -M^2_{_W}\,,\,-M^2_{_W}\,,\,-s\,,\,s_{{\scriptscriptstyle{W}}}\,,0\,,s_{{\scriptscriptstyle{W}}}\rpar =
\frac{2}{\beta_{{\scriptscriptstyle{W}}}\,s}\,\ln\frac{\beta_{{\scriptscriptstyle{W}}}+1}{\beta_{{\scriptscriptstyle{W}}}-1}\,
\ln\frac{s_{{\scriptscriptstyle{W}}}-M^2_{_W}}{s} + \dots
\end{equation}
where $\beta^2_{{\scriptscriptstyle{W}}} = 1-4\,M^2_{_W}/s$. If we continue the external masses the
result is instead
\bq
C_0\lpar -s_{{\scriptscriptstyle{W}}}\,,\,-s_{{\scriptscriptstyle{W}}}\,,\,-s\,,\,s_{{\scriptscriptstyle{W}}}\,,0\,,s_{{\scriptscriptstyle{W}}}\rpar =
\frac{1}{\beta_{c{\scriptscriptstyle{W}}}\,s}\,\ln^{-}\frac{\beta_{c{\scriptscriptstyle{W}}}+1}{\beta_{c{\scriptscriptstyle{W}}}-1}\,
\frac{1}{\bar\epsilon} + \hbox{IR finite},
\end{equation}
where $\beta^2_{c{\scriptscriptstyle{W}}} = 1-4\,s_{{\scriptscriptstyle{W}}}/s$. Let us consider a realistic example,
e.g. $gg \to 4\,$f of \fig{Multi}; for the complete process there is no
problem at all because a photon attached to an internal $W$ boson line cannot
be infrared divergent.
However, the goal is a breakdown of the full process into three components,
one of which is the pseudo-observable $\Gamma \lpar H \to W^+ W^-\rpar$;
in order to define $\Gamma\lpar H \to W^+ W^- (\gamma)\rpar$ it is
important to control the cancellation between virtual and real infrared
corrections and, in this case, the extension to external complex masses is
more than an option.
\section{Schemes \label{schemes}}
For processes which are relevant for the LHC and, in particular, for $H \to \overline b b$,
$\gamma\gamma$, $gg$ and $gg \to H$ etc, we define three different schemes
and compare their results. The schemes are:
\begin{itemize}
\item the RMRP scheme which is the usual on-shell scheme where all
masses and all Mandelstam invariants are real;
\item the CMRP scheme~\cite{Actis:2008uh}, the complex mass scheme with complex
internal $W$ and $Z$ poles (extendable to top complex pole) but with real,
external, on-shell Higgs, $W, Z\,$, etc. legs and with the standard LSZ wave-function
renormalization;
\item the CMCP scheme, the (complete) complex mass scheme with complex,
external, Higgs ($W, Z$, etc.) where the LSZ procedure is carried out at the
Higgs complex pole (on the second Riemann sheet).
\end{itemize}
The introduction of three different schemes does not reflect a theoretical
uncertainty; only the CMCP scheme is fully consistent and comparisons only
serve the purpose of quantifying deviations of more familiar schemes from
the CMCP scheme.
\section{Numerical results \label{Nres}}
In this section we examine the numerical impact of computing Higgs
pseudo-observables at the Higgs complex pole (on the second Riemann sheet
of the $S\,$-matrix).
We use the parametrization $s_{\ssH}= \mu^2_{\ssH} - i\,\mu_{\ssH}\,\gamma_{\ssH}$ for the Higgs complex pole,
where $\mu_{\ssH}$ is an input parameter and $\gamma_{\ssH}$ is computed
in the standard model; as the conventional definition
of the Higgs boson {\em mass} we prefer ${\overline\mu}^2_{\ssH}$ of \eqn{MHdef}.
The results are compared with on-shell pseudo-observables.
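As a simple numerical illustration of the difference between the two mass definitions
(using the value $\gamma_{\ssH} = 146.89\,$GeV quoted below for $\mu_{\ssH} = 500\,$GeV), one finds
\bq
{\overline\mu}^2_{\ssH} = \mu_{\ssH}\,\lpar \mu^2_{\ssH} + \gamma^2_{{\scriptscriptstyle{H}}} \rpar^{1/2}
\approx \lpar 510\,\mathrm{GeV} \rpar^2,
\end{equation}
i.e. a shift of about $2\%$ with respect to $\mu_{\ssH}$; for a light Higgs boson the two
definitions are numerically indistinguishable.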
As input parameters for the numerical evaluation we have used the
following values
\[
\begin{array}{llll}
M_{_W} = 80.398\,\mathrm{GeV}, \;\; & \;\;
M_{_Z} = 91.1876\,\mathrm{GeV}, \;\; & \;\;
m_t = 170.9\,\mathrm{GeV}, \;\; & \;\;
\Gamma_{{\scriptscriptstyle{W}}} = 2.093\,\mathrm{GeV},\\ [0.3cm]
G_{{\scriptscriptstyle{F}}} = 1.16637\,\times\,10^{-5}\,\mathrm{GeV}^{-2}, \;\; & \;\;
\alpha(0) = 1/137.0359911, \;\; & \;\;
\alpha_{{\scriptscriptstyle{S}}}\lpar M_{_Z}\rpar= 0.118, \;\; & \;\;
\Gamma_{{\scriptscriptstyle{Z}}} = 2.4952\,\mathrm{GeV}.
\end{array}
\]
For the $W, Z$ complex poles we use
\bq
s_{{\scriptscriptstyle{V}}}= \mu^2_{{\scriptscriptstyle{V}}} - i\,\mu_{{\scriptscriptstyle{V}}}\,\gamma_{{\scriptscriptstyle{V}}},
\quad
\mu^2_{{\scriptscriptstyle{V}}}= M^2_{{\scriptscriptstyle{V}}} - \Gamma^2_{{\scriptscriptstyle{V}}},
\quad
\mu_{{\scriptscriptstyle{V}}}\gamma_{{\scriptscriptstyle{V}}}=\,
M_{{\scriptscriptstyle{V}}}\,\Gamma_{{\scriptscriptstyle{V}}}\,
\lpar 1 - \frac{1}{2}\,\frac{\Gamma^2_{{\scriptscriptstyle{V}}}}{M^2_{{\scriptscriptstyle{V}}}} \rpar.
\end{equation}
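For orientation, with the input values listed above these relations give
$\mu_{{\scriptscriptstyle{W}}} \approx 80.371\,$GeV, $\gamma_{{\scriptscriptstyle{W}}} \approx 2.093\,$GeV and
$\mu_{{\scriptscriptstyle{Z}}} \approx 91.153\,$GeV, $\gamma_{{\scriptscriptstyle{Z}}} \approx 2.495\,$GeV: the real parts of the
complex poles are shifted downwards by roughly $27$ and $34\,$MeV with respect to the
on-shell masses, while the imaginary parts are essentially unchanged.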
In computing $H \to gg(gg \to H)$ we have used a running
$\alpha_{{\scriptscriptstyle{S}}}(\mu_{\ssH})\,$(CMRP) or $\alpha_{{\scriptscriptstyle{S}}}({\overline\mu}_{\ssH})\,$(CMCP).
Results for the computed $\gamma_{\ssH}$ are collected in \tabn{TgammaH}.
\begin{table}[h!]\centering
\setlength{\arraycolsep}{\tabcolsep}
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&& \\[-0.4cm]
$\mu_{\ssH}\;$[GeV] & $100$ & $120$ & $160$ & $170$ & $180$ & $200$ & $250$ & $400$ \\
&&&&&&&& \\[-0.4cm]
\hline
&&&&&&&& \\[-0.4cm]
$\gamma_{\ssH}\,$[GeV] &
$0.051$ & $0.043$ & $0.105$ & $0.391$ & $0.637$ & $1.448$ & $4.296$ & $39.729$ \\
&&&&&&&& \\[-0.4cm]
\hline
&&&&&&&& \\[-0.4cm]
$\gamma_{\ssH}\,$[GeV] &
$0.051$ & $0.043$ & $0.105$ & $0.391$ & $0.637$ & $1.448$ & $4.373$ & $39.829$ \\
&&&&&&&& \\[-0.4cm]
\hline
&&&&&&&& \\[-0.4cm]
$\gamma_{\ssH}\,$[GeV] &
$0.051$ & $0.043$ & $0.105$ & $0.391$ & $0.637$ & $1.498$ & $5.069$ & $40.847$
\\[-0.4cm]
&&&&&&&& \\
\hline
\end{tabular}
\caption[] {Standard model prediction for $\gamma_{\ssH}$ in GeV as a function of
$\mu_{\ssH}$. The first entry corresponds to a real on-shell top quark mass,
the second entry to a top quark complex pole derived from
$\Gamma_t =\Gamma^{\rm{\scriptscriptstyle{NLO}}}_t = 1.31\,$GeV and the last entry to a top quark complex
pole derived from $\Gamma_t$ equal to the experimental upper bound of
$13.1\,$GeV.}
\label{TgammaH}
\end{table}
For the evaluation of all one-loop functions in the CMCP scheme, where sometimes
a continuation to the second Riemann sheet is required, we used both the analytical
results and the exact numerical integration; the two in-house (independent) libraries
return results in excellent agreement (typically at the sixth digit of the one-loop
percentage radiative corrections).
\subsection{Numerical differences between the CMRP and CMCP schemes \label{NDiff}}
For a better understanding of comparisons we define weak corrections to
$H \to \overline b b$ as
\bq
\Delta_{\rm weak} = \sqrt{2}\,\frac{G_{{\scriptscriptstyle{F}}}\,{\overline\mu}^2_{\ssH}}{\pi^2}\,
\lpar C_{\rm part} + B_{\rm part} + R \rpar,
\qquad
{\overline\mu}^2_{\ssH} = \mu_{{\scriptscriptstyle{H}}}\,\lpar \mu^2_{\ssH} + \gamma^2_{{\scriptscriptstyle{H}}} \rpar^{1/2},
\label{parts}
\end{equation}
separating the corrections into a part coming from three-point functions ($C_{\rm part}$),
a part coming from two-point functions ($B_{\rm part}$) and a rational term ($R$).
It is worth noting that there are, in general, strong cancellations among
the three contributions: for instance, at $120\,$GeV we have a $C_{\rm part}$
of $-8.233\,\%$ in the exact CMCP scheme while the bracket in \eqn{parts} is $-0.790\,\%$.
Differences between the two schemes are roughly of $\ord{\gamma_{\ssH}/\mu_{\ssH}}$,
as expected, and become significant above the $\overline t t\,$-threshold
where the Higgs boson width becomes larger and larger.
Since the width of a heavy Higgs boson is large, it is natural to
investigate how well the production stage can be separated
from the decay process.
In general these stages are not independent and may be interconnected
by radiative effects.
Our results confirm the theorem of Ref.~\cite{Fadin:1993kt}:
radiative effects are not enhanced in totally inclusive pseudo-observables with
respect to the naive $\ord{\gamma_{\ssH}/\mu_{\ssH}}$ argument, unless the Higgs boson is
very heavy, in which case this ratio is large (at $\mu_{\ssH} = 500\,$GeV it reaches
$29\%$) and typical cancellations in the total weak correction factor are
disturbed, increasing the effect.
To further understand the differences between the two schemes for high
values of $\mu_{\ssH}$, we recall the well-known fact that the Higgs
wave-function renormalization shows an inverse $\beta\,$-behavior at the
$WW$ and $ZZ$ thresholds. In the two schemes, exactly at threshold, we will have
\bq
\beta^2= 1 - 4\,\frac{\mu^2_{{\scriptscriptstyle{B}}} - i\,\gamma_{{\scriptscriptstyle{B}}}\,\mu_{{\scriptscriptstyle{B}}}}{m^2_{{\scriptscriptstyle{H}}}}
\Bigr|_{\rm thr} = i\,\frac{\gamma_{{\scriptscriptstyle{B}}}}{\mu_{{\scriptscriptstyle{B}}}},
\qquad\quad
\beta^2_c= 1 - 4\,\frac{\mu^2_{{\scriptscriptstyle{B}}} - i\,\gamma_{{\scriptscriptstyle{B}}}\,\mu_{{\scriptscriptstyle{B}}}}
{\mu^2_{\ssH} - i\,\gamma_{\ssH}\,\mu_{\ssH}}
\Bigr|_{\rm thr} \sim i\,\frac{\mid 2\,\gamma_{{\scriptscriptstyle{B}}} - \gamma_{\ssH}\mid}
{2\,\mu_{{\scriptscriptstyle{B}}}},
\end{equation}
where $B= W, Z$. The parameter that regularizes the divergence is therefore
$\gamma_{{\scriptscriptstyle{B}}} - \gamma_{\ssH}/2$ with some sizable effect around the $ZZ$
threshold. To analyze differences between the CMRP and CMCP schemes we fix $\mu_{\ssH}$
and compute $\gamma_{\ssH}$; then we use \eqn{parts} and compare results
with the limit $\gamma_{\ssH} = 0$.
Results are given in \tabn{CMRPvsCMCP} where we see variations induced
by a finite $\gamma_{\ssH}$.
\begin{table}[h!]\centering
\setlength{\arraycolsep}{\tabcolsep}
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
&&&& \\[-0.4cm]
$\gamma_{\ssH}/\mu_{\ssH}$ & $C_{\rm part}$ & $B_{\rm part}$ & $R$ & tot \\
&&&& \\[-0.4cm]
\hline
&&&& \\[-0.4cm]
$0$ & $-3.673$ & $-1.999$ & $+4.514$ & $-1.658$ \\
$0.03 $ & $-3.760$ & $-1.990$ & $+4.009$ & $-1.741$ \\
&&&& \\[-0.4cm]
\hline
&&&& \\[-0.4cm]
$0$ & $-0.308$ & $-0.130$ & $+3.450$ & $+3.058$ \\
$0.29$ & $-0.986$ & $+0.974$ & $+2.714$ & $+2.702$ \\[-0.4cm]
&&&& \\
\hline
\end{tabular}
\caption[] {Variations (in percent) at $\mu_{\ssH} = 300, 500\,$GeV in the components
of the total weak corrections to $H\to\bar{b}b$ according to \eqn{parts}.}
\label{CMRPvsCMCP}
\end{table}
\subsection{Testing the NWA approximation \label{TNWA}}
In order to analyze the quality of the NWA approximation (\sect{NWA}) we have
considered the pure one-loop weak corrections to the decay width $H \to \overline b b$.
It turns out that up to $\mu_{\ssH} = 250\,$GeV (where $\gamma_{\ssH}/\mu_{\ssH} = 0.011$) the
approximation is very good, less than $1\%$ (of a $\approx\,1\%$ correction).
Note that analytical continuation of three-point functions in the
exact CMCP scheme is required above $220\,$GeV.
\subsection{Complete set of results \label {CSR}}
As far as the Higgs boson production cross section in gluon-gluon fusion
is concerned we find that the effect of replacing the on-shell scheme (for the
external Higgs boson) with the complex-pole one is completely negligible (around $3-4$
per mill) for low values of the Higgs mass, a fact that is largely expected. Only
for higher values, say starting from the $\overline t t\,$-threshold, where $\gamma_{\ssH}$ becomes
larger and larger, do we reach sizable differences, above $10\%$ and rapidly
increasing.
\begin{itemize}
\item $\mathbf{H \to \gamma\gamma, gg, \;\; gg \to H}$
\end{itemize}
\noindent
A detailed comparison of predictions in the CMRP and CMCP schemes for
$H \to \gamma \gamma , gg$ is shown in \fig{c_dev}. The partial decay width
$H \to gg$ is shown in \fig{G_hgg} where we compare the CMRP(=RMRP) and CMCP
schemes.
Similarly we compare the partial decay width $H \to \gamma\gamma$
in the RMRP, CMRP and CMCP schemes in \fig{G_hgamgam}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{c_dev}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{c_dev}Comparison of predictions in the CMRP
and CMCP schemes for the decays $H \to \gamma \gamma$ (blue, solid line) and
for $H \to gg$ (red, dashed line). See \sect{schemes} for the scheme definitions.}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{G_hgg}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{G_hgg}Comparison of the decay width $\Gamma(H \to gg)$
in the CMRP (dashed line) and the CMCP (dotted line) scheme in the high mass
region. The effect of a complex top quark pole in CMCP (with a top total,
on-shell, width of $13.1\,$GeV) is given by the blue, solid line. See
\sect{schemes} for the scheme definitions.}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{G_hgamgam}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{G_hgamgam}Comparison of the decay width $\Gamma(H \to
\gamma\gamma)$ in the RMRP (dotted line), the CMRP (dashed line) and the
CMCP (solid line) scheme. The effect of a complex top quark pole in CMCP
(with a top total, on-shell, width of $13.1\,$GeV) is given by the
blue, dash-dotted line. The red, dash-double-dotted line corresponds
to a top width of $1.31\,$GeV. See \sect{schemes} for the scheme
definitions.}
\end{figure}
The relatively large effects in $\Gamma(H \to \gamma\gamma)$ or
$\Gamma(H \to gg)$ at large values of $\mu_{\ssH}$ are still compatible with
the naive $\ord{\gamma_{\ssH}/\mu_{\ssH}}$ argument. Consider \fig{c_dev}
for $\Gamma(H \to \gamma\gamma)$; for instance, at $\mu_{\ssH} = 365\,$GeV
we have $\gamma_{\ssH}/\mu_{\ssH}= 0.065$ with a variation between the CMRP and
CMCP schemes of $13.3\%$ giving a correction factor of $2\,\gamma_{\ssH}/\mu_{\ssH}$.
Clearly, the large increase in the Higgs boson width for increasing values of
$\mu_{\ssH}$ makes it questionable to use a perturbative description for
the Higgs-resonant part of $pp \to X$ when we have a very heavy Higgs boson.
The relevance of this result is clear: if a light Higgs boson is not discovered,
one of the goals of LHC will be to exclude a standard model Higgs up
to $600\,$GeV~\cite{EP600}; already at $500\,$GeV we have
\bq
\frac{\sigma_{\rm{\scriptscriptstyle{CMCP}}}(gg \to H)}{\sigma_{\rm{\scriptscriptstyle{CMRP}}}(gg \to H)} = 1.64,
\qquad \hbox{parton level},
\end{equation}
comparable to the effect of NLO QCD corrections. We have also computed
$\sigma(pp \to H)$ in the two schemes using MSTW 2008 LO parton
distribution functions (PDF)~\cite{Martin:2009iq}.
The ratio is given in \fig{R_ppH}, for different values of $s$; in this
figure we present
\bq
\frac{\sigma_{\rm{\scriptscriptstyle{CMCP}}}(pp \to H)}{\sigma_{\rm{\scriptscriptstyle{CMRP}}}(pp \to H)},
\end{equation}
\begin{figure}[t!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{R_ppH}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{R_ppH}The ratio $\sigma_{\rm{\scriptscriptstyle{CMCP}}}/\sigma_{\rm{\scriptscriptstyle{CMRP}}}$ for
the production cross section $pp \to H$, as a function of $\mu_{\ssH}$, for
different energies, $\sqrt{s} = 3\,$TeV (red, dotted line), $\sqrt{s} =
10\,$TeV (blue, dashed line) and $\sqrt{s} = 14\,$TeV (black, solid
line). The cross sections are computed with MSTW2008 LO PDFs with
factorization scale $\mu_{{\scriptscriptstyle{F}}}=\mu_{\ssH}$ for CMRP and $\mu_{{\scriptscriptstyle{F}}}={\overline\mu}_{\ssH}$
for CMCP.}
\end{figure}
where the numerator is evaluated at ${\overline\mu}_{\ssH}$ (\eqn{MHdef}) while
the denominator corresponds to $\mu_{\ssH}$. Here we use
\bq
\sigma(pp \to H) = \sigma_0\,\tau_{{\scriptscriptstyle{H}}}\,\frac{dL^{gg}}{d \tau_{{\scriptscriptstyle{H}}}},
\qquad
\frac{dL^{gg}}{d \tau} = \int_{\tau}^1\,\frac{dx}{x}\,
g\lpar x,\mu_{{\scriptscriptstyle{F}}}\rpar\,g\lpar \frac{\tau}{x},\mu_{{\scriptscriptstyle{F}}}\rpar,
\end{equation}
where $\tau = \mu^2_{\ssH}({\overline\mu}^2_{\ssH})/s$, $\sigma_0$ is the parton
level cross section and $g$ is the gluon PDF (with factorization scale
$\mu_{{\scriptscriptstyle{F}}}=\mu_{\ssH}$ for CMRP and $\mu_{{\scriptscriptstyle{F}}}={\overline\mu}_{\ssH}$ for CMCP).
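A minimal numerical sketch of this convolution is given below; it is purely
illustrative (not the code used for the numbers in this paper): the gluon density is a
crude toy shape that must be replaced by a real LO set, e.g. MSTW2008 LO, through one's
preferred interface, and `quad' is the adaptive quadrature routine of SciPy.
\begin{verbatim}
# Illustrative sketch only -- not the code used for the results quoted here.
from scipy.integrate import quad

def gluon(x, muF):
    # Toy gluon density g(x, muF): structure only, NOT a real PDF.
    return 2.0 * x**(-1.1) * (1.0 - x)**5

def dLgg_dtau(tau, muF):
    # dL^{gg}/dtau = int_tau^1 dx/x g(x, muF) g(tau/x, muF)
    integrand = lambda x: gluon(x, muF) * gluon(tau / x, muF) / x
    return quad(integrand, tau, 1.0)[0]

def sigma_ppH(sigma0, mH2, s, muF):
    # sigma(pp -> H) = sigma0 * tau_H * dL^{gg}/dtau_H, with tau_H = mH2 / s
    tau = mH2 / s
    return sigma0 * tau * dLgg_dtau(tau, muF)

# CMCP/CMRP ratio: numerator with mH2 = mubar_H^2 and muF = mubar_H,
# denominator with mH2 = mu_H^2 and muF = mu_H.
\end{verbatim}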
Note that this ratio is only an indicator of the (large) size of the effect
since, for a realistic value of the cross section, NLO(NNLO) QCD corrections
should be included, see Ref.~\cite{deFlorian:2009hc} for updated cross sections
at the Tevatron and the LHC.
In \fig{S_ppH_3T} we show the corresponding cross section for
$\sqrt{s}= 3\,$TeV, including an estimate of the uncertainty induced
by varying renormalization and factorization scales (kept equal for simplicity);
$\mu_{\ssH}/2 \le \mu_{{\scriptscriptstyle{R}}} = \mu_{{\scriptscriptstyle{F}}} \le 2\,\mu_{\ssH}$ for
CMRP and ${\overline\mu}_{\ssH}/2 \le \mu_{{\scriptscriptstyle{R}}} = \mu_{{\scriptscriptstyle{F}}} \le 2\,{\overline\mu}_{\ssH}$ for
CMCP (${\overline\mu}_{\ssH}$ is given in \eqn{MHdef}). In \fig{S_ppH_3T} we do not include
the uncertainty associated to PDFs.
\begin{figure}[ht]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{S_ppH_3T}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{S_ppH_3T}The production cross section $pp \to H$ at
$\sqrt{s}= 3\,$TeV for CMRP (red, wide-dashed line) and CMCP (blue, solid
line). The shaded areas surrounded by the dashed lines give the scale
uncertainty obtained by varying $\mu_{\ssH}/2 < \mu_{{\scriptscriptstyle{R}}} = \mu_{{\scriptscriptstyle{F}}} <
2\,\mu_{\ssH}$ in the CMRP scheme and ${\overline\mu}_{\ssH}/2 < \mu_{{\scriptscriptstyle{R}}} = \mu_{{\scriptscriptstyle{F}}} <
2\,{\overline\mu}_{\ssH}$ (\eqn{MHdef}) in the CMCP scheme. We have used MSTW2008 LO
PDFs.}
\end{figure}
The ratio between the two cross sections is stable under these
variations, e.g. it lies between $1.4546$ and $1.4574$ at $\sqrt{s}= 10\,$TeV
and $\mu_{\ssH} = 500\,$GeV. The production cross sections for $\sqrt{s} = 10, 14\,$
TeV are shown in \fig{S_ppH_10a14T}; note that here we use
$\alpha_{{\scriptscriptstyle{S}}}(M_{{\scriptscriptstyle{Z}}}) = 0.13934$ to be consistent with the LO
PDFs~\cite{Martin:2009iq}.
\begin{figure}[ht]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{S_ppH_10a14T}
\end{center}
\vspace{-0.5cm}
\caption[]{The production cross section $pp \to H$ at $\sqrt{s}=
10\,$TeV for CMRP (red, dashed-dotted line) and CMCP (blue, dotted
line); at $\sqrt{s} = 14\,$TeV for CMRP (cyan, dashed line) and CMCP
(black, solid line). We have used MSTW2008 LO PDFs with factorization
scale $\mu_{{\scriptscriptstyle{F}}}=\mu_{\ssH}$ for CMRP and $\mu_{{\scriptscriptstyle{F}}}={\overline\mu}_{\ssH}$ for CMCP.}
\label{S_ppH_10a14T}
\end{figure}
In order to better understand the numerical differences in the three schemes we show in
\fig{B0} a scalar two-point function with two internal $Z$ masses.
\begin{figure}[h!]
\begin{center}
\includegraphics[bb=0 0 567 547,width=9cm]{NB0}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{B0}The scalar two-point function with two real
internal masses (RMRP scheme) $M^2_{{\scriptscriptstyle{Z}}}$ or two complex internal
masses (CMRP - CMCP schemes) $s_{{\scriptscriptstyle{Z}}}$. $B_0$ is computed in the RMRP
(red dash-line), CMRP (blue dot-line) and CMCP (black solid-line) schemes.
The Mandelstam invariant is
$\mu^2_{\ssH}$ in the RMRP - CMRP schemes and $s_{\ssH}$ in the CMCP scheme, with
$100\,\mathrm{GeV} < \mu_{\ssH} < 500\,\mathrm{GeV}$. See \sect{schemes} for the scheme
definitions.}
\end{figure}
In the RMRP scheme both the internal masses and the Mandelstam invariant
$\mu^2_{\ssH}$ are real; in the CMRP scheme we replace $M^2_{{\scriptscriptstyle{Z}}}$ with the
corresponding complex pole, $s_{{\scriptscriptstyle{Z}}}$; finally, in the CMCP scheme,
also the invariant becomes complex and equal to $s_{\ssH}$. In \fig{B0} we
vary $\mu_{\ssH}$ between $100\,$GeV and $500\,$GeV where $\gamma_{\ssH} =
146.89\,$GeV is huge; here deviations between CMRP and CMCP become
sizable. This simple example shows the general features of one-loop
corrections in the three schemes; CMRP - CMCP smoothly interpolate the
RMRP results around normal thresholds and, when $\mu^2_{\ssH}$ becomes larger
and larger, CMCP starts deviating from RMRP - CMRP.
We also consider the expression for the amplitude $A\lpar H \to
\gamma \gamma\rpar$ which can be split into a part containing only
$C_0\,$-functions and a rational term. We write
\bq
\Gamma\lpar H \to \gamma\gamma\rpar =
\frac{\alpha^2\,G_{{\scriptscriptstyle{F}}}}{32\,\sqrt{2}\,\pi^3}\,
\frac{\mid s_{{\scriptscriptstyle{W}}}\mid^2}{{\overline\mu}_{\ssH}}\,\Bigr| A\Bigr|^2.
\end{equation}
If we introduce auxiliary variables,
\bq
{\overline\mu}^2_{\ssH}= \mu_{\ssH}\,\lpar \mu^2_{\ssH} + \gamma^2_{{\scriptscriptstyle{H}}}\rpar^{1/2}, \quad
x_t= \frac{m^2_t}{{\overline\mu}^2_{\ssH}}, \quad
x_{{\scriptscriptstyle{H}}}= \frac{s_{{\scriptscriptstyle{H}}}}{{\overline\mu}^2_{\ssH}}, \quad
x_{{\scriptscriptstyle{W}}}= \frac{s_{{\scriptscriptstyle{W}}}}{{\overline\mu}^2_{\ssH}},
\end{equation}
the amplitude can be written as $A= A_{{\scriptscriptstyle{C}}} + R$ where
\arraycolsep 0.14em\begin{eqnarray}
A_{{\scriptscriptstyle{C}}} &=& -\,\frac{8}{3}\,x_t\,\frac{x_{{\scriptscriptstyle{H}}} - 4\,x_t}{x_{{\scriptscriptstyle{W}}}}\,
C_0\lpar 0\,,\,0\,,\,-s_{{\scriptscriptstyle{H}}}\,,\,m_t\,,\,m_t\,,\,m_t \rpar
+ 6\,\lpar x_{{\scriptscriptstyle{H}}} - 2\,x_{{\scriptscriptstyle{W}}} \rpar\,
C_0\lpar 0\,,\,0\,,\,-s_{{\scriptscriptstyle{H}}}\,,\,s_{{\scriptscriptstyle{W}}}\,,\,s_{{\scriptscriptstyle{W}}}\,,\,s_{{\scriptscriptstyle{W}}} \rpar,
\nonumber\\
R &=& -\,\frac{16}{3}\,\frac{x_t}{x_{{\scriptscriptstyle{W}}}} + \frac{x_{{\scriptscriptstyle{H}}}}{x_{{\scriptscriptstyle{W}}}} + 6.
\label{CpRat}
\end{eqnarray}
A comparison for the real and imaginary parts in the CMRP and CMCP schemes is
shown in \fig{C0rat}.
\begin{figure}[h!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{C0rat}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{C0rat}The $C_0$ part of the amplitude for $H \to
\gamma \gamma$ and the corresponding rational term of \eqn{CpRat} with
$100\,\mathrm{GeV} < \mu_{\ssH} < 300\,\mathrm{GeV}$. The black, solid (CMCP) and red, dashed
(CMRP) lines give the real part whereas blue, dotted (CMRP) and magenta,
dash-dotted (CMCP) lines give the imaginary part. The cyan, wide-dashed
(orange, dash-double-dotted) line gives the rational real(imaginary)
part of the amplitude without appreciable differences between the
schemes. The imaginary part of the rational term is always small and
negligible. See \sect{schemes} for the scheme definitions.}
\end{figure}
\begin{itemize}
\item $\mathbf{H \to \overline b b}$
\end{itemize}
\noindent
Results for $H \to \overline b b$ are shown in \tabn{THbbWwide} for the pure
weak percentage one-loop corrections in the three schemes and for a wide
range of Higgs masses;
\begin{table}[ht]\centering
\setlength{\arraycolsep}{\tabcolsep}
\renewcommand\arraystretch{0.9}
\begin{tabular}{|c|c|c|c|c|}
\hline
&&&& \\
$M_{_H}\;$[GeV] &
$\delta_{{\scriptscriptstyle{W}}}$ RMRP [$\%$] & $\delta_{{\scriptscriptstyle{W}}}$ CMRP [$\%$] &
$\delta_{{\scriptscriptstyle{W}}}$ CMCP [$\%$] & $\delta_{{\scriptscriptstyle{W}}}$ CMCP [$\%$] \\
& massless & massless & massless & massive \\
&&&& \\
\hline
&&&& \\
$ 120 $ & $ -0.7890 $ & $ -0.7904 $ & $ -0.7904$ & $ -0.7837$ \\
$ 130 $ & $ -0.9557 $ & $ -0.9572 $ & $ -0.9573$ & $ -0.9509$ \\
$ 140 $ & $ -1.1978 $ & $ -1.1986 $ & $ -1.1986$ & $ -1.1922$ \\
$ 150 $ & $ -1.6215 $ & $ -1.6146 $ & $ -1.6149$ & $ -1.6078$ \\
$ 160 $ & $ -4.2656 $ & $ -2.6458 $ & $ -2.6690$ & $ -2.6587$ \\
$ 170 $ & $ -1.3987 $ & $ -1.4914 $ & $ -1.4875$ & $ -1.4822$ \\
$ 180 $ & $ -2.1989 $ & $ -1.9435 $ & $ -1.9912$ & $ -1.9858$ \\
$ 190 $ & $ -1.0338 $ & $ -1.1744 $ & $ -1.1590$ & $ -1.1569$ \\
$ 200 $ & $ -1.1547 $ & $ -1.1967 $ & $ -1.1987$ & $ -1.1974$ \\
$ 210 $ & $ -1.2452 $ & $ -1.2621 $ & $ -1.2730$ & $ -1.2723$ \\
$ 220 $ & $ -1.3132 $ & $ -1.3198 $ & $ -1.3379$ & $ -1.3376$ \\
$ 230 $ & $ -1.3647 $ & $ -1.3665 $ & $ -1.3914$ & $ -1.3917$ \\
$ 240 $ & $ -1.4047 $ & $ -1.4044 $ & $ -1.4363$ & $ -1.4370$ \\
$ 250 $ & $ -1.4376 $ & $ -1.4365 $ & $ -1.4759$ & $ -1.4769$ \\
$ 260 $ & $ -1.4674 $ & $ -1.4665 $ & $ -1.5138$ & $ -1.5151$ \\
$ 270 $ & $ -1.4985 $ & $ -1.4981 $ & $ -1.5539$ & $ -1.5555$ \\
$ 280 $ & $ -1.5357 $ & $ -1.5361 $ & $ -1.6008$ & $ -1.6026$ \\
$ 290 $ & $ -1.5851 $ & $ -1.5865 $ & $ -1.6604$ & $ -1.6624$ \\
$ 300 $ & $ -1.6557 $ & $ -1.6582 $ & $ -1.7410$ & $ -1.7431$ \\
$ 400 $ & $ -0.4736 $ & $ -0.4865 $ & $ -0.8589$ & $ -0.8633$ \\
$ 450 $ & $ +1.4855 $ & $ +1.4687 $ & $ +1.1579$ & $ +1.1517$ \\
&&&& \\
\hline
\end{tabular}
\caption[] {Percentage one-loop pure weak corrections for $H \to \overline b
b$; the first entry is the RMRP scheme, the second entry is the CMRP
scheme, the third entry is the CMCP scheme while the fourth entry is the
CMCP scheme with finite $m_b$ (\sect{schemes}).}
\label{THbbWwide}
\end{table}
in \tabn{THbbwide} the electroweak (weak + QED) percentage one-loop
corrections are given.
\begin{table}[ht]\centering
\setlength{\arraycolsep}{\tabcolsep}
\renewcommand\arraystretch{0.9}
\begin{tabular}{|c|c|c|}
\hline
&& \\
$M_{_H}\;$[GeV] &
$\delta_{\rm{\scriptscriptstyle{EW}}}$ CMRP [$\%$] &
$\delta_{\rm{\scriptscriptstyle{EW}}}$ CMCP [$\%$] \\
&& \\
\hline
&& \\
$ 120 $ & $ -0.9728 $ & $ -0.9729$ \\
$ 130 $ & $ -1.1467 $ & $ -1.1468$ \\
$ 140 $ & $ -1.3941 $ & $ -1.3942$ \\
$ 150 $ & $ -1.8151 $ & $ -1.8154$ \\
$ 160 $ & $ -2.8485 $ & $ -2.8716$ \\
$ 170 $ & $ -1.7039 $ & $ -1.7000$ \\
$ 180 $ & $ -2.1606 $ & $ -2.2083$ \\
$ 190 $ & $ -1.3990 $ & $ -1.3837$ \\
$ 200 $ & $ -1.4262 $ & $ -1.4283$ \\
$ 210 $ & $ -1.4961 $ & $ -1.5071$ \\
$ 220 $ & $ -1.5580 $ & $ -1.5761$ \\
$ 230 $ & $ -1.6087 $ & $ -1.6337$ \\
$ 240 $ & $ -1.6503 $ & $ -1.6824$ \\
$ 250 $ & $ -1.6861 $ & $ -1.7256$ \\
$ 260 $ & $ -1.7194 $ & $ -1.7669$ \\
$ 270 $ & $ -1.7543 $ & $ -1.8102$ \\
$ 280 $ & $ -1.7953 $ & $ -1.8602$ \\
$ 290 $ & $ -1.8487 $ & $ -1.9228$ \\
$ 300 $ & $ -1.9232 $ & $ -2.0061$ \\
$ 400 $ & $ -0.7754 $ & $ -1.1488$ \\
$ 450 $ & $ +1.1697 $ & $ +0.8571$ \\
&& \\
\hline
\end{tabular}
\caption[] {Percentage one-loop electroweak (weak + QED) corrections for
$H \to \overline b b$ with a massive b-quark; the first entry is the CMRP
scheme, the second entry is the CMCP scheme (\sect{schemes}).}
\label{THbbwide}
\end{table}
For weak corrections the results for $H \to \overline b b$ are presented
graphically in \fig{hbb_w};
\begin{figure}[h!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{hbb_w}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{hbb_w} The weak one-loop radiative corrections to $H \to
\overline b b$ in the RMRP scheme (red, dotted line), in the CMRP scheme (black,
dashed line) and in the CMCP scheme (black, solid line). The effect of a
complex top quark pole in CMCP (with a top total, on-shell, width of
$13.1\,$GeV) is given by the blue wide-dashed line. See \sect{schemes} for
the scheme definitions. The result corresponding to $\Gamma_t
=\Gamma^{\rm{\scriptscriptstyle{NLO}}}_t = 1.31\,$GeV has no appreciable difference compared to the one
at $\Gamma_t = 0$.}
\end{figure}
the figure shows cusps at the $\overline t t\,$-threshold due to the fact that
the top quark mass is kept real. The origin of the cusps is in a $B_0\,$-function
with $p^2$ fixed at the complex Higgs pole. The size of the cusps can be related
to the large Higgs width at the $\overline t t\,$-threshold as illustrated in
\fig{tt_threshold} where we analyze ${\rm{Re}}\,B_0(-s_{\ssH}\,;\,m_t,m_t)$ around
the $\overline t t$ threshold. Here the solid line corresponds to $\gamma_{\ssH} = 0$,
whereas dashed lines correspond to increasing values of $\gamma_{\ssH}$.
As one can see the limit $\gamma_{\ssH} \to 0$ is continuous and there is no
artifact due to the analytical continuation.
The wide-dashed blue line of \fig{tt_threshold} corresponds to a finite value of
$\gamma_{\ssH}$ and to a complex top pole (with an on-shell width of
$13.1\,$GeV).
\begin{figure}[h!]
\begin{center}
\includegraphics[bb=0 0 567 384,width=12cm]{tt_threshold}
\end{center}
\vspace{-0.5cm}
\caption[]{\label{tt_threshold}The ${\rm{Re}}\,B_0(-s_{\ssH}\,;\,m_t,m_t)$ around the
$\overline t t$ threshold. The solid line corresponds to a real Higgs mass, $\gamma_{\ssH} =
0$, whereas dashed lines correspond to increasing values of $\gamma_{\ssH}$. The
blue, wide-dashed line corresponds to a finite $\gamma_{\ssH}$ and to a complex top quark pole
(with an on-shell width of $13.1\,$GeV).}
\end{figure}
As is evident, the introduction of a complex top quark pole
completely cures the shape of the corrections. This gives further
support to the use of the CMCP scheme, at least from a theoretical point of view (the
top quark total width is, unfortunately, poorly known). It is worth noting that the
numerical effect given by the blue curve should be interpreted as an upper bound
on the effect of a top quark complex pole, since the experimental result is an
upper bound, $\Gamma_t < 13.1\,$GeV at $95\%$ C.L. Note that theory predicts
$\Gamma^{\rm{\scriptscriptstyle{LO}}}_t = 1.47\,$GeV and $\Gamma^{\rm{\scriptscriptstyle{NLO}}}_t= 1.31\,$GeV.
Finally, we observe that the $\ord{\gamma_{\ssH}/\mu_{\ssH}}$ effects, which can
reach several percent at large values of $\mu_{\ssH}$, have a modest impact on
all those processes which start at $\ord{g^2}$ (the effect being confined to the NLO
corrections), whereas the impact is considerably larger for those processes, e.g.
$H \to \gamma \gamma (gg)$, which start directly at NLO ($\ord{g^6}$).
\section{Conclusions \label{Conclu}}
In this paper we have shown how to continue Feynman integrals into the second
Riemann sheet, in a way that can be easily implemented in any program aimed at
computing pseudo-observables related to Higgs physics at the Tevatron and the LHC.
Pseudo-observables give, in a natural way, the possibility of translating
experimental data into a language that has a direct connection to
unambiguous theoretical calculations. Using our framework one can freely
compute quantities (otherwise non-existent) like the Higgs production cross section
and the Higgs partial decay widths.
An unstable particle cannot belong to the in/out basis of the Hilbert space;
nevertheless, concepts like the production or decay of an unstable particle become
aliases for pseudo-observables that have a well-defined meaning and a direct
relation to measured data.
In this paper a new scheme is introduced which is the (complete) complex mass
scheme with complex, external Higgs boson (or, equivalently, any other external
unstable particle) where the LSZ procedure is carried out at the Higgs complex pole
(on the second Riemann sheet).
Pseudo-observables have been a very useful concept at LEP
(e.g. Ref.\cite{Grunewald:2000ju}) and will continue to play an important role
at the LHC, although their extraction will be more difficult if deviations from the SM emerge; in this
case model-independent approaches are required, allowing for the extraction of useful
quantities that can be fitted with different models.
The usual objection against moving standard model Higgs pseudo-observables into
the second Riemann sheet of the $S\,$-matrix is that a light Higgs boson, say
below $140\,$GeV, has a very narrow width and the effects induced are tiny.
Admittedly, this is a well-taken point as far as practical consequences are concerned, but
one should remember that the Higgs boson width rapidly increases after
the opening of the $WW$ and $ZZ$ channels and, because of this, the on-shell
treatment of an external Higgs particle becomes inadequate as a description of data
if the Higgs is not (very) light.
Furthermore, most of the experimental plots concerning Higgs physics extend well
above, say, $200\,$GeV and, if a light Higgs boson is not discovered, one of the
goals of LHC will be to exclude a standard model Higgs up to $600\,$GeV (see
Ref.~\cite{EP600} for an exclusion plot of the SM Higgs boson for the various
channels as well as the combination for masses up to $600\,$GeV). Already at
$500\,$GeV the ratio CMCP/CMRP for the $gg \to H$ cross section is large and
comparable to higher order QCD corrections.
On top of all practical implications one should admit that it is hard to sustain
a wrong theoretical description of experimental data if the correct one is
available, independently of the size of the effect.
Finally, our results show that, above the $\overline t t\,$-threshold, the Higgs-resonant
contribution to $pp \to X$, correctly described in the CMCP scheme, is strongly
influenced by the large imaginary part of the Higgs complex pole and the use
of the conventional on-shell description of Higgs pseudo-observables becomes highly
questionable, even from a numerical point of view.
\bigskip \bigskip \begin{center}{\large\bf Acknowledgments}\end{center}
G.P. is indebted to Stefan Dittmaier for an important discussion
on $H \to 4\,$f.
We gratefully acknowledge several discussions with W.~Hollik and R.~Pittau.
\clearpage
\begin{appendix}
| 2023-04-23T06:41:10.798Z | 2010-01-19T17:49:38.000Z | redpajama/arxiv | arxiv_0001 | 1,848 | 21,337 |
\section{Introduction}
\label{intro}
\subsection{Universal Hypothesis Testing}
In universal hypothesis testing, the problem is to design a test to decide in favor of either of two hypotheses $H0$ and $H1$, under the assumption that we know the probability distribution $\pi^0$ under $H0$, but have uncertainties about the probability distribution $\pi^1$ under $H1$. One of the applications that motivates this paper is the detection of abnormal behavior \cite{den87p222}: In the applications envisioned, the amount of data from abnormal behavior is limited, while there is a relatively large amount of data for normal behavior.
To be more specific, we consider the hypothesis testing problem in which a sequence of observations $Z_1^n:=(Z_1,\dots,Z_n)$ from a finite observation space $\field{Z}$ is given, where $n$ is the number of samples. The sequence $Z_1^n$ is assumed to be i.i.d. with marginal distribution $\pi^i \in {\cal P}(\zstate)$ under hypothesis $Hi$ ($i=0,1$), where ${\cal P}(\zstate)$ is the probability simplex on $\field{Z}$.
Hoeffding~\cite{hoe65p369} introduced a universal test, defined using the empirical distributions and the Kullback-Leibler divergence. The empirical distributions $\{\Gamma^n: n \geq 1 \}$ are defined as elements of ${\cal P}(\zstate)$ via,
\[
\Gamma^n(A) = \frac{1}{n} \sum_{k =1}^n \field{I} \{Z_k \in A \}, \qquad A \subset \field{Z}.
\]
The Kullback-Leibler divergence for two probability distributions $\mu^1, \mu^0 \in {\cal P}(\zstate)$ is defined as,
\begin{equation}
D(\mu^1 \| \mu^0) = \langle \mu^1 , \log(\mu^1/\mu^0) \rangle,\nonumber
\end{equation}
where the notation $\langle \mu, f \rangle$ denotes expectation of $f$ under the distribution $\mu$, i.e., $\langle \mu, f\rangle=\sum_{z}\mu(z)f(z)$. The Hoeffding test is the binary sequence,
\begin{equation}
\phi^{\sf{H}}_n =\field{I}\{D(\Gamma^n\|\pi^0) \geq \eta\},\nonumber
\end{equation}
where $\eta$ is a nonnegative constant. The test decides in favor of $H1$ when $\phi^{\sf{H}}=1$.
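For concreteness, a minimal implementation of the test statistic is sketched below (an
illustration only, not code taken from the references); it assumes that the observations
are encoded as integers $0,\dots,|\field{Z}|-1$ and that $\pi^0$ has full support.
\begin{verbatim}
# Minimal sketch of the Hoeffding test (illustrative only).
import numpy as np

def empirical_distribution(z, alphabet_size):
    # Gamma^n: empirical distribution of samples z over {0, ..., alphabet_size-1}
    return np.bincount(z, minlength=alphabet_size) / len(z)

def kl_divergence(mu, pi):
    # D(mu || pi), with the convention 0 * log 0 = 0; assumes pi > 0 everywhere
    mask = mu > 0
    return float(np.sum(mu[mask] * np.log(mu[mask] / pi[mask])))

def hoeffding_test(z, pi0, eta):
    # Decide H1 (return 1) iff D(Gamma^n || pi0) >= eta
    gamma_n = empirical_distribution(z, len(pi0))
    return int(kl_divergence(gamma_n, pi0) >= eta)
\end{verbatim}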
It was demonstrated in \cite{unnhuameysurvee09} that the performance of the Hoeffding test is characterized by both its error exponent and the variance of the test statistic. We summarize this in \Theorem{performhoe}. The error exponent is defined for a test sequence $\mbox{\protect\boldmath$\phi$}\mathbin{:=} \{\phi_1,\phi_2,\dots\}$ adapted to $Z_1^n$ as
\begin{equation}
\begin{aligned}
J_{\phi}^0 &\mathbin{:=} \mathop{\rm lim{\,}inf}_{n \to \infty} -\frac{1}{n}\log({\pi^0}\{
\phi_n=1\} ),\\
J_{\phi}^1 &\mathbin{:=} \mathop{\rm lim{\,}inf}_{n \to \infty} -\frac{1}{n}\log({\pi^1}\{
\phi_n=0\} ).
\end{aligned}\nonumber
\end{equation}
\begin{theorem}\label{t:performhoe}
\begin{enumerate}
\item The Hoeffding test achieves the optimal error exponent $J_{\phi}^1$ among all tests satisfying a given constant bound $\eta \ge 0$ on the exponent $J_{\phi}^0$, i.e., $J_{\phi^{\sf{H}}}^0\geq \eta$ and
\begin{equation}
J_{\phi^{\sf{H}}}^1= \sup \{ J_{\phi}^1 :
\ \mbox{\it subject to} \ J_{\phi}^0 \ge \eta\}, \nonumber
\end{equation}
\item The asymptotic variance of the Hoeffding test depends on the size of the observation space. When $Z_1^n$ has marginal $\pi^0$, we have
\[
\lim_{n \to \infty} \hbox{\sf Var}\, [n D(\Gamma^n \| \pi^0)] =\half(|\field{Z}| -1).
\]
\end{enumerate}
\end{theorem}
\Theorem{performhoe} is a summary of results from \cite{hoe65p369,unnhuameysurvee09}. The second result can be derived from \cite{wil38p60,clabar90p453,csishi04}. It has been demonstrated in \cite{unnhuameysurvee09} that this variance result reveals a drawback of the Hoeffding test that is hidden in the analysis of the error exponent: although asymptotically optimal, the test is not effective when the size of the observation space is large compared to the number of observations.
\subsection{Mismatched Universal Test}
It was demonstrated in \cite{unnhuameysurvee09} that the potentially large variance in the Hoeffding test can be addressed by using a generalization of the Hoeffding test called the \emph{mismatched universal test}, which is based on the relaxation of KL divergence introduced in \cite{abbmedmeyliz07p284}. The name of the mismatched divergence comes from the literature on mismatched decoding \cite{merkaplapshi94p1953}.
The mismatched universal test enjoys several advantages:
\begin{enumerate}
\item It has smaller variance.
\item It can be designed to be robust to errors in the knowledge of $\pi^0$.
\item It allows us to incorporate into the test partial knowledge about $\pi^1$ (see \Lemma{pisenough}),
as well as other considerations such as the heterogeneous cost of incorrect decisions.
\end{enumerate}
The mismatched universal test is based on the following variational representation of KL divergence,
\begin{equation}
D(\mu\|\pi)=\sup_{f}\bigl(\langle\mu,f\rangle-\log(\langle\pi,e^f\rangle)\bigr)\label{e:klvar}
\end{equation}
where the optimization is taken over all functions $f\colon\field{Z}\to\field{R}$.
The supremum is achieved by the log-likelihood ratio.
The mismatched divergence is defined by restricting the supremum in \eqref{e:klvar} to a function class ${\cal F}$:
\begin{equation}
D^{\hbox{\rm\tiny MM}}_{\cal F}(\mu\| \pi) := \sup_{f\in {\cal F}}\bigl(\langle\mu,f\rangle-\log(\langle\pi,e^f\rangle)\bigr).
\label{e:Dmm}
\end{equation}
The associated mismatched universal test is defined as
\begin{equation}
\phi_n^{\sf{MM}} =\field{I}\{D^{\hbox{\rm\tiny MM}}(\Gamma^n\|\pi^0) \geq \eta\}.\nonumber
\end{equation}
In this paper we restrict to the special case of a
\emph{linear} function class: ${\cal F}=\bigl\{f_r \mathbin{:=} \sum_{i=1}^{d} r_i \psi_i \bigr\}$
where
$\{\psi_i\}$ is a set of \emph{basis} functions, and $r$ ranges over $\field{R}^d$. We assume throughout the paper that $\{\psi_i\}$ is \emph{minimal}, i.e., $\{\mathbf{1},\psi_1,\ldots,\psi_d\}$ are linearly independent. The basis functions can be interpreted as \emph{features} for the universal test.
In this case, the definition \eqref{e:Dmm} reduces to the convex program,
\begin{equation}
D^{\hbox{\rm\tiny MM}}(\mu\|\pi)=\sup_{r \in \field{R}^d}\bigl(\langle\mu,f_r\rangle-\log(\langle\pi,e^{f_r}\rangle)\bigr).\nonumber
\end{equation}
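For a finite observation space this is a smooth concave maximization in $r$ and can be solved with any standard optimizer. The following sketch (an illustration added here; the feature matrix and the distributions are arbitrary placeholders, and this is not the implementation used for the experiments reported below) computes $D^{\hbox{\rm\tiny MM}}$ numerically:
\begin{verbatim}
# Hypothetical sketch: mismatched divergence for a linear function class,
# obtained by maximizing r -> <mu, f_r> - log <pi, exp(f_r)>.
import numpy as np
from scipy.optimize import minimize

def mismatched_divergence(mu, pi, psi):
    """mu, pi: probability vectors of length |Z|; psi: d x |Z| feature matrix."""
    def neg_objective(r):
        f = r @ psi                                   # f_r(z) = sum_i r_i psi_i(z)
        return -(mu @ f - np.log(pi @ np.exp(f)))
    return -minimize(neg_objective, np.zeros(psi.shape[0]), method="BFGS").fun

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Z = 8
    pi0, pi1 = rng.dirichlet(np.ones(Z)), rng.dirichlet(np.ones(Z))
    psi = rng.standard_normal((2, Z))                 # two arbitrary features
    print("D^MM:", mismatched_divergence(pi1, pi0, psi))
    print("D   :", np.sum(pi1 * np.log(pi1 / pi0)))   # never smaller than D^MM
\end{verbatim}
Because the supremum is taken over a restricted class, the value returned is always a lower bound on the KL divergence.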
The asymptotic variance of the mismatched universal test is proportional to the dimension of the function class $d$ instead of $|\field{Z}|-1$ as seen in the Hoeffding test:
\[
\lim_{n \to \infty} \hbox{\sf Var}\, [n D^{\hbox{\rm\tiny MM}}(\Gamma^n \| \pi^0)] =\half d,
\]
when $Z_1^n$ has marginal $\pi^0$ \cite{unnhuameysurvee09}. In this way we can expect substantial variance reduction by choosing a small $d$. The function class also determines how well the mismatched divergence $D^{\hbox{\rm\tiny MM}}(\pi^1 \|\pi^0)$ approximates the KL divergence $D(\pi^1 \|\pi^0)$ for possible alternate distributions $\pi^1$, and thus the error exponent of the mismatched universal test \cite{huaunnmeyveesur09p62}. In sum, the choice of the basis functions $\{\psi_i\}$ is critical for successful implementation of the mismatched universal test. The goal of this paper is to develop algorithms that construct a suitable basis.
\subsection{Contributions of this paper}
In this paper we propose a framework to design the function class ${\cal F}$, which allows us to make the tradeoff between the error exponent and variance. One of the motivations comes from results presented in \Section{disdis} on the maximum number of \emph{$\varepsilon$-distinguishable distributions} in an exponential family, which suggests that it is possible to use approximately $d=\log(p)$ basis functions to design a test that is effective against $p$ different distributions. In \Section{s:featureextract} we cast the feature extraction problem as a rank constrained optimization problem, and propose a gradient-based algorithm with provable local convergence property to solve it.
The construction of a basis studied in this paper is a particular case of the feature extraction problems that have been studied in many other contexts. In particular, the framework in this paper is connected to the exponential family PCA setting of \cite{coldassch01p617}. The most significant difference between this work and the exponential PCA is that our framework finds features that capture the \emph{difference} between distributions, and the latter finds features that are \emph{common} to the distributions considered.
The mismatched divergence using empirical distributions can be interpreted as an estimator of KL divergence. To improve upon the Hoeffding test, we may apply other estimators, such as those using data dependent features \cite{wankulver05p3064, qinkulver09p2392}, or those motivated by source-coding techniques \cite{zivmer93p1270} and others \cite{nguwaijor07}. Our approach is different from them in that we exploit the limited possibilities of alternate distributions.
\section{Distinguishable Distributions}
\label{disdis}
The quality of the approximation of KL divergence using the mismatched divergence depends on the dimension of the function class. The goal of this section is to quantify this statement.
\subsection{Mismatched Divergence and Exponential Family}
We first describe a simple result suggesting how a basis might be chosen given a finite set of alternate distributions, so that the mismatched divergence is equal to the KL divergence for those distributions:
\begin{lemma}
\label{t:pisenough}
For any $p$ possible alternate distributions $\{\pi^1, \pi^2, \ldots, \pi^p\}$, absolutely continuous with respect to $\pi^0$, there exist $d=p$ basis functions $\{\psi_1, \ldots, \psi_d\}$ such that $D^{\hbox{\rm\tiny MM}}(\pi^i\|\pi^0)=D(\pi^i\|\pi^0)$ for each $i$.
These functions can be chosen to be the log-likelihood ratios $\{\psi_i = \log(\pi^i/\pi^0)\}$.
\qed
\end{lemma}
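The equality asserted in \Lemma{pisenough} can be verified directly: plugging $f=\log(\pi^i/\pi^0)$ into \eqref{e:klvar} gives $\langle \pi^i, f\rangle - \log\langle \pi^0, \pi^i/\pi^0\rangle = D(\pi^i\|\pi^0)$, and since the restricted supremum in \eqref{e:Dmm} never exceeds the unrestricted one, it is attained there. A short numerical illustration of this point (added here, with arbitrary randomly generated distributions) is:
\begin{verbatim}
# Hypothetical check: with the log-likelihood-ratio feature, the mismatched
# divergence objective attains the full KL divergence at coefficient r = 1.
import numpy as np

rng = np.random.default_rng(2)
Z = 10
pi0 = rng.dirichlet(np.ones(Z))
pi1 = rng.dirichlet(np.ones(Z))

psi = np.log(pi1 / pi0)                       # log-likelihood ratio feature
objective = pi1 @ psi - np.log(pi0 @ np.exp(psi))
kl = np.sum(pi1 * np.log(pi1 / pi0))
print(objective, kl)                          # the two values coincide
\end{verbatim}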
It is overly pessimistic to say that given $p$ distributions we require $d=p$ basis functions. In fact, \Lemma{perfectapprox} demonstrates that if all $p$ distributions are in the same $d$-dimensional \emph{exponential family}, then $d$ basis functions suffice. We first recall the definition of an exponential family: For a function class ${\cal F}$ and a distribution $\nu$, the exponential family $\mathcal{E}(\nu, {\cal F})$ is defined as:
\[
\mathcal{E}(\nu,{\cal F})=\{\mu: \mu(z)=\frac{\nu(z) e^{f(z)}}{\langle\nu, e^{f}\rangle}, f \in {\cal F}\}.
\]
We will restrict to the case of linear function class, and we say that the exponential family is $d$-dimensional if this is the dimension of the function class ${\cal F}$. The following lemma is a reinterpretation of \Lemma{pisenough} for the exponential family:
\begin{lemma}
\label{t:perfectapprox}
Consider any $p+1$ mutually absolutely continuous distributions $\{\pi^i: 0\leq i \leq p\}$. Then $D^{\hbox{\rm\tiny MM}}_{\cal F}(\pi^i\|\pi^j)=D(\pi^i\|\pi^j)$ for all $i\neq j$ if and only if $\pi^i \in \mathcal{E}(\pi^0, {\cal F})$ for all $i$.
\end{lemma}
\subsection{Distinguishable Distributions}
Except in trivial cases, there are infinitely many distributions in an exponential family. In order to characterize how exponential families of different dimension differ, we consider a subset of distributions which we call $\varepsilon$-distinguishable distributions.
The motivation comes from the fact that KL divergences between two distributions are infinite if neither is absolutely continuous with respect to the other, in which case we say they are \emph{distinguishable}. When the distributions are distinguishable, we can design a test that achieves infinite error exponent. For example, consider two distributions $\pi^0,\pi^1$ on $\field{Z}=\{z_1,z_2,z_3\}$: $\pi^0(z_1)=\pi^0(z_2)=0.5$; $\pi^1(z_2)=\pi^1(z_3)=0.5$. It is easy to see that the two error exponents of the test $\phi_n(Z_1^n)=\field{I}\{\Gamma^n(z_3)>0.2\}$ are both infinite. It is then natural to ask: Given $p$ distributions that are pairwise distinguishable, how many basis functions do we need to design a test that is effective for them?
Distributions in an exponential family must have the same support. We thus consider distributions that are approximately distinguishable, which leads to the definitions listed below: Consider the set-valued function $F^\epsilon$ parametrized by $\epsilon>0$,
\[ F^\epsilon(x):=\{z: x(z) \geq \max_{z'}(x(z'))-\epsilon\}\]
\vspace{-0.2in}
\begin{romannum}
\item[$\bullet$]
Two distributions $\pi^1,\pi^2$ are \textit{$\epsilon$-distinguishable} if $F^\epsilon(\pi^1)\setminus F^\epsilon(\pi^2) \neq \emptyset$ and $F^\epsilon(\pi^2)\setminus F^\epsilon(\pi^1) \neq \emptyset$.
\item[$\bullet$]
A distribution $\pi$ is called \textit{$\epsilon$-extremal} if $\pi(F^\epsilon(\pi)) \geq 1-\epsilon$,
and a set of distributions $\mathcal{A}$ is called $\epsilon$-extremal if every $\pi \in \mathcal{A}$ is $\epsilon$-extremal.
\item[$\bullet$]
For an exponential family $\mathcal{E}$,
the integer $N(\mathcal{E})$ is defined as the maximum $N$ such that there exists an $\epsilon_0>0$ such that for any $0<\varepsilon<\epsilon_0$, there exists an $\varepsilon $-extremal ${\mathcal{A}} \subseteq \mathcal{E}$ such that $|\mathcal{A}| \geq N$ and any two distributions in $\mathcal{A}$ are $\varepsilon $-distinguishable.
\end{romannum}
One interpretation of the final definition is that the test using a function class ${\cal F}$
is effective against $N(\mathcal{E})$ distributions, in the sense that the error exponents for the mismatched universal test are the same as for the Hoeffding test, where $\mathcal{E} =\mathcal{E}(\nu, {\cal F})$:
\begin{lemma}\label{t:expfimplication}
Consider a function class ${\cal F}$ and its associated exponential family $\mathcal{E} =\mathcal{E}(\nu, {\cal F})$, where $\nu$ has full support, and define $N=N(\mathcal{E}(\nu,{\cal F}))$. Then, there exists a sequence $\{A^{(1)}, A^{(2)}, \ldots, A^{(m)}: m\ge 1 \} $, such that for each $k$ the set $A^{(k)}\subset \mathcal{E}$ consists of $N$ distributions,
\[D^{\hbox{\rm\tiny MM}}_{\cal F}(\pi\|\pi')=D(\pi\|\pi') \quad \textrm{for any $\pi,\pi' \in A^{(k)}$}\]
and
\begin{equation}
\lim_{k\to\infty} \min_{\atop{\pi,\pi'\in A^{(k)} }{\pi\neq \pi'}}
D^{\hbox{\rm\tiny MM}}_{\cal F}(\pi\|\pi') = \infty. \nonumber
\end{equation}\vspace{-0.15in}\qed
\end{lemma}
Let $\mathcal{P}(d)$ denote the collection of all $d$-dimensional exponential families. Define $\bar{N}(d)=\max_{\mathcal{E} \in \mathcal{P}(d)} N(\mathcal{E})$. In the next result we give lower and upper bounds on $\bar{N}(d)$, which imply that $\bar{N}(d)$ depends exponentially on $d$:
\begin{proposition}
\label{t:lowerupperexp}
The maximum $\bar{N}(d)=\max_{\mathcal{E} } N(\mathcal{E})$ admits the following lower and upper bounds:
\begin{eqnarray}
\!\!\!
\!\!\!
\bar{N}(d) &\ge& \exp\big(\lfloor \frac{d}{2}\rfloor [\log(|\field{Z}|)-\log \lfloor \frac{d}{2}\rfloor -1]\big)
\label{e:lowerexp}
\\[.25cm]
\!\!\!
\!\!\!
\bar{N}(d) &\le& \exp\big((d+1)(1+\log(|\field{Z}|)-\log(d+1))\big)
\label{e:upperexp}
\end{eqnarray}
\end{proposition}
It is important to point out that $\bar{N}(d)$ is exponential in $d$. This answers the question asked at the beginning of this section: There exist $p$ approximately distinguishable distributions for which we can design an effective mismatched test using approximately $\log(p)$ basis functions.
\section{Feature Extraction via Rank-Constrained Optimization}\label{s:featureextract}
Suppose that it is known that the alternate distributions can take on $p$ possible values, denoted by $\pi^1,\pi^2,\ldots,\pi^p$. Our goal is to choose
the function class ${\cal F}$ of dimension $d$ so that the mismatched divergence approximates the KL divergence for these alternate distributions, while at the same time keeping the variance small in the associated universal test. The choice of $d$ gives the tradeoff between the quality of the approximation and the variance in the mismatched universal test.
We assume that $0<D(\pi^i\|\pi^0)<\infty$ for all $i$.\footnote{In practice the possible alternate distributions will likely take on a continuum of possible values. It is our wishful thinking that we can choose a finite approximation with $p$ distributions, and choose $d$ much smaller than $p$, and the resulting mismatched universal test will be effective against all alternate distributions. Validation of this optimism will be left to future work.}
We propose to use the solution to the following problem as the function class:
\begin{equation}\label{e:optproori}
\max_{{\cal F}} \{\frac{1}{p}\sum_{i=1}^p \gamma^i D^{\hbox{\rm\tiny MM}}_{{\cal F}}(\pi^i\|\pi^0): \dim({\cal F})\leq d\}
\end{equation}
where $\dim({\cal F})$ is the dimension of the function class ${\cal F}$. The weights $\{\gamma^i\}$ can be chosen to reflect the importance of different alternate distributions. This can be rewritten as the following rank-constrained optimization problem:
\begin{equation}\label{e:optpro}
\begin{array}{rl}
\max & \frac{1}{p}\sum_{i=1}^p \gamma^i \bigl(\langle\pi^i,X_i\rangle-\log(\langle\pi^0,e^{X_i}\rangle)\bigr)
\\[.25cm]
\hbox{\rm subject to\ } & \hbox{\sf rank}\,(X)\leq d
\end{array}
\end{equation}
where the optimization variable $X$ is a $p\times |\field{Z}|$ matrix,
and $X_i$ is the $i$th row of $X$, interpreted as a function on $\field{Z}$.
Given an optimizer $X^*$, we choose $\{\psi_i\}$ to be the set of right singular vectors of $X^*$ corresponding to nonzero singular values.
\vspace{-0.03in}
\subsection{Algorithm}\label{subsecalgorithm}
The optimization problem in \eqref{e:optpro} is not a convex problem since it has a rank constraint. It is generally very difficult to design an algorithm that is guaranteed to find a global maximum. The algorithm proposed in this paper is a generalization of the Singular Value Projection (SVP) algorithm of \cite{mekjaidhi09} designed to solve a low-rank matrix completion problem. It is globally convergent under certain conditions valid for matrix completion problems. However, in this prior work the objective function is quadratic; we are not aware of any prior work generalizing these algorithms to the case of a general convex objective function.
Let $h(X)$ denote the objective function of \eqref{e:optpro}. Let $\mathcal{S}$ denote the set of matrices satisfying $\hbox{\sf rank}\,(X)\leq d$. Let $\mathcal{P}_{\mathcal{S}}$ denote the projection onto $\mathcal{S}$:
\[\mathcal{P}_{\mathcal{S}}(Y)=\mathop{\rm arg{\,}min}\{\|Y-X\|: \hbox{\sf rank}\,(X)\leq d\}.\]
where we use $\|\cdot\|$ to denote the Frobenius norm.
The algorithm proposed here is defined as the following iterative gradient projection:
\begin{enumerate}
\item $Y^{k+1}=X^k+\alpha^k \nabla h(X^k)$.
\item $X^{k+1}=\mathcal{P}_{\mathcal{S}}(Y^{k+1})$.
\end{enumerate}
The projection step is solved by keeping only the $d$ largest singular values of $Y^{k+1}$.
The iteration is initialized with some arbitrary $X^0$ and is stopped when $\|X^{k+1}-X^{k}\| \leq \epsilon$ for some small $\epsilon>0$.
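A compact and purely illustrative implementation of this iteration (with equal weights $\gamma^i=1$ and a fixed step size; this is a sketch added here, not the code used for the experiments reported below) could look as follows:
\begin{verbatim}
# Hypothetical sketch of the gradient projection (SVP-style) iteration above.
import numpy as np

def gradient(X, alts, pi0):
    """Gradient of h(X) = (1/p) sum_i ( <pi^i, X_i> - log<pi^0, exp(X_i)> )."""
    p = len(alts)
    G = np.empty_like(X)
    for i, mu in enumerate(alts):
        w = pi0 * np.exp(X[i])
        G[i] = (mu - w / w.sum()) / p
    return G

def project_rank(Y, d):
    """Keep only the d largest singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[d:] = 0.0
    return (U * s) @ Vt

def svp(alts, pi0, d, step=1.0, tol=1e-9, iters=10000):
    X = np.zeros((len(alts), len(pi0)))
    for _ in range(iters):
        X_new = project_rank(X + step * gradient(X, alts, pi0), d)
        if np.linalg.norm(X_new - X) <= tol:
            return X_new
        X = X_new
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Zsize, p, d = 12, 6, 2
    pi0 = rng.dirichlet(np.ones(Zsize))
    alts = [rng.dirichlet(np.ones(Zsize)) for _ in range(p)]
    X = svp(alts, pi0, d)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    psi = Vt[s > 1e-8]        # basis functions from the right singular vectors
    print("recovered", psi.shape[0], "basis functions")
\end{verbatim}
The projection step uses a full SVD, which is adequate for small observation spaces; for large $|\field{Z}|$ a truncated SVD would be preferable.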
\vspace{-0.05in}
\subsection{Convergence Result}
We can establish local convergence:
\begin{proposition}\label{t:localconverge}
Suppose $\bar{X}$ satisfies $\hbox{\sf rank}\,(\bar{X})=d$ and is a local maximum, i.e. there exists $\delta>0$ such that for any matrix $X \in \mathcal{S}$ satisfying $\|X-\bar{X}\| \leq \delta$, we have $h(\bar{X}) > h(X)$. Choose $\alpha^k=\alpha$ for all $k$ where $0<\alpha<2/(\frac{1}{p}\max_i \gamma^i)$. Then there exists a $\delta'>0$ such that if $X^0$ satisfies $\|X^0-\bar{X}\| \leq \delta'$ and $\hbox{\sf rank}\,(X^0)\leq d$, then $X^k \rightarrow \bar{X}$ as $k \rightarrow \infty$. Moreover, the convergence is geometric. \qed
\end{proposition}
Let $\mathcal{H}$ denote the hyperplane $\mathcal{H}=\{\bar{X}W_1+W_2\bar{X}: W_1 \in \field{R}^{|\field{Z}| \times |\field{Z}|}, W_2 \in \field{R}^{p \times p}\}$. The main idea of the proof is that near $\bar{X}$ the set $\mathcal{S}$ can be approximated by this hyperplane $\mathcal{H}$, as demonstrated in \Lemma{approx1}.
\begin{lemma}\label{t:approx1}
There exist $\delta>0$ and $M>0$ such that: 1) for any $X \in \mathcal{S}$ satisfying $\|X-\bar{X}\| \leq \delta$, there exists $Z \in \mathcal{H}$ such that $\|Z-X\|\leq M\|X-\bar{X}\|^2$; 2) for any $Z \in \mathcal{H}$ satisfying $\|Z-\bar{X}\| \leq \delta$,
there exists $X \in \mathcal{S}$ satisfying $\|X-Z\|\leq M\|Z-\bar{X}\|^2$.
\end{lemma}
Let $Z^{k}=\mathcal{P}_{\mathcal{H}}(Y^{k})$, i.e., the projection of $Y^k$ onto $\mathcal{H}$. We obtain from \Lemma{approx1} that $Z^k$ is close to $X^k$ as follows:
\begin{lemma}\label{t:projerror}
Consider any $\bar{X}$ satisfying $\hbox{\sf rank}\,(\bar{X})=d$. There exist $\delta>0$ and $M>0$ such that if $\|Z^k-\bar{X}\|\leq \delta$, then $\|Z^k-X^k\| \leq M\|{Y}^k-\bar{X}\|^\frac{3}{2}$.
\end{lemma}
\begin{lemma}
The gradient of $h(X)$ is Lipschitz with constant $L=\frac{1}{p}\max_i \gamma^i$, i.e., $\|\nabla h(X_1)-\nabla h(X_2)\| \leq L\|X_1-X_2\|$.
\end{lemma}
\begin{lemma}
Suppose $\bar{X}$ is a local maximum in $\mathcal{S}$ and $\hbox{\sf rank}\,(\bar{X})=d$. Then $\bar{X}$ is also a local maximum in $\mathcal{H}$.
\end{lemma}
\begin{proof}[Outline of Proof of \Proposition{localconverge}]
Using standard results from optimization theory, we can prove that for any small enough $\delta>0$, if $\|X^k-\bar{X}\| \leq \delta$ and $\alpha < \frac{2}{L}$, then $\|Z^{k+1}-\bar{X}\|\leq q \|X^k-\bar{X}\|$ for some $q<1$ where $q$ could depend on $\delta$, and $\|Y^{k+1}-\bar{X}\|\leq \|X^k-\bar{X}\|$. Thus, we can choose a $\delta$ small enough so that
$M\delta^\frac{1}{2}\leq \frac{1-q}{2}$. With this choice, we have
\begin{eqnarray}
\|X^{k+1}-\bar{X}\| &\leq& \|Z^{k+1}-\bar{X}\|+\|Z^{k+1}-X^{k+1}\|\nonumber\\
&\leq& \|Z^{k+1}-\bar{X}\|+M\delta^{\half}\|{Y}^{k+1}-\bar{X}\|\nonumber\\
&\leq& (q+\half(1-q)) \|X^k-\bar{X}\|.\nonumber
\end{eqnarray}
\Proposition{localconverge} then follows from induction.
\end{proof}
\section{Simulations}
We consider probability distributions in an exponential family of the form $\pi^i(z)\propto\exp\{\sum_{k=1}^{q} \theta_{i,k} \psi_k(z)+\sum_{k=1}^{q'}\theta'_{i,k} \psi'_k(z)\}$. We first randomly generate $\{\psi_i\}$ and $\{\psi'_i\}$ to fix the model. A distribution is obtained by randomly generating $\{\theta_{i,k}\}$ and $\{\theta'_{i,k}\}$ according to uniform distributions on $[-1,1]$ and $[-0.1,0.1]$, respectively. When applying the algorithm presented in \Section{subsecalgorithm}, the bases $\{\psi_i\}$ and $\{\psi'_i\}$ are not given. This model can be interpreted as a perturbation of a $q$-dimensional exponential family with basis $\{\psi_i\}$.
In the experiment we have two phases: In the feature extraction (training) phase, we randomly generate $p+1$ distributions, taken as $\pi^0, \ldots, \pi^p$. We then use our techniques in \eqref{e:optproori} with the proposed algorithm to find the function class ${\cal F}$. The weights are chosen as $\gamma^i=1/D(\pi^i\|\pi^0)$ so
that the objective value is no larger than $1$. In the testing phase, we randomly generate $t$ distributions, denoted by $\mu^1,\ldots, \mu^t$. We then compute the average of $D^{\hbox{\rm\tiny MM}}(\mu^i\|\pi^0)/D(\mu^i\|\pi^0)$.
For the experimental results shown in \Figure{dmmtraining}, the parameters are chosen as $q=8$, $q'=5$, and $t=500$. Shown in the figure is an average of $D^{\hbox{\rm\tiny MM}}(\pi^i\|\pi^0)/D(\pi^i\|\pi^0)$ (for training) as well as $D^{\hbox{\rm\tiny MM}}(\mu^i\|\pi^0)/D(\mu^i\|\pi^0)$ (for testing) for two cases: $p=50$ and $p=500$.
We observe the following:
\begin{compactenum}
\item The objective value increases gracefully as $d$ increases. For $d\geq7$, the values are close to 1.
\item The curves for training and testing are closer when $p$ is larger, which is expected.
\end{compactenum}
\begin{figure}
\vspace{-.4cm}
\centering
\includegraphics[width=0.4\textwidth]{DMMtraining.eps}
\vspace{-.1cm}
\caption{Dashed curve: average of $D^{\hbox{\rm\tiny MM}}(\mu^i\|\pi^0)/D(\mu^i\|\pi^0)$. Solid curve: average of $D^{\hbox{\rm\tiny MM}}(\pi^i\|\pi^0)/D(\pi^i\|\pi^0)$}
\label{f:dmmtraining}
\vspace{-.5cm}
\end{figure}
\section{Conclusions}
The main contribution of this paper is a framework to address the feature extraction problem for universal hypothesis testing, cast as a rank-constrained optimization problem. This is motivated by results on the number of easily distinguishable distributions, which demonstrate that it is possible to use a small number of features to design effective universal tests for a large number of possible distributions. We propose a gradient-based algorithm to solve the rank-constrained optimization problem, and the algorithm is proved to converge locally. Directions considered in current research include: applying the nuclear-norm heuristic \cite{fazhinboy01p4734} to solve the optimization problem \eqref{e:optproori}, applying this framework to real-world data, and extension of this framework to incorporate other forms of partial information.
\section{The puzzling structure of NGC4650A}
In recent years, our group has devoted considerable effort to the study of the formation and evolution of Polar Ring Galaxies (PRGs). NGC4650A is one of the best-investigated objects of this class in the Southern
hemisphere. Its luminous components, inner spheroid and polar
structure, have been studied with optical and near-infrared (NIR)
photometry and spectroscopy, and in the radio, HI 21 cm line and
continuum. The polar structure in NGC~4650A has been shown to be a
disk, rather than a ring. Its stars and dust can be reliably traced
inward within the stellar spheroid to $\sim 1.2$ kpc radius from the
galaxy nucleus (\citet{Iod02}; \citet{Gal02}). Emission and
absorption line optical spectra are also consistent with an extended
stellar disk rather than a narrow ring (\citet{SwR03}): both rotation
curves show a linear inner gradient and a plateau, as expected for a
disk in differential rotation. Furthermore, the HI 21 cm observations
(\citet{Arn97}) show that the gas is 5 times more extended
than the luminous polar structure, with a position-velocity diagram
very similar to those observed for edge-on gaseous disks. The polar
disk is very massive, since the total HI mass in this component is
about $10^{10} M_{\odot} $, which added to the mass of stars is
comparable with the total mass in the central spheroid
(\citet{Iod02}). New high resolution spectroscopy in NIR, obtained with
FORS2@UT4, along the main axes of the central host galaxy (HG), suggests that it
resembles a nearly-exponential oblate spheroid supported by rotation
(\citet{Iod06}). These new spectroscopic data set important
constraints, both on current models for the formation scenarios and on
the mass models proposed till now for NGC4650A, which were the
starting points for the two ongoing works presented in the following
sections. These works aim i) to test the formation of polar disks by cold
accretion of gas through a "cosmic filament", and ii) to constrain the
dark halo (DH) content and shape.
\section{Chemical abundances in the polar disk: implications for the cold
accretion}
According to simulations of galaxy mergers (e.g. \citet{Bou05}),
polar merging of two disk galaxies fails to form a massive polar disk
around a spheroid with rotation velocities as large as observed along
the HG major axis (\citet{Iod06}); the tidal accretion scenario fails in
reproducing disk-like polar structures as extended as observed in
NGC4650A (\citet{Iod02}). Very recently, a new formation mechanism has
been proposed within the scenario suggested for the build-up of high
redshift disk galaxies: a long-lived polar structure will form through
cold gas accretion along a filament, extended for $\sim 1$ Mpc, into
the virialized DH (\citet{Mac06}; \citet{Bro08}). In this
formation scenario, there is no limit to the mass of the accreted
material, thus a very massive polar disk may develop either around a
stellar disk or a spheroid: the morphology and kinematics of one
simulated object are quite similar to those observed for NGC4650A. In
order to test the cold accretion scenario for this object,
\citet{Spav09} (see also Spavone et al. in this book) have studied
the abundance ratios and metallicities of the HII regions associated
to the polar disk in NGC4650A, by using new deep longslit spectroscopy
obtained with FORS2@ESO-VLT. Main results show {\it i)} that NGC4650A
has metallicity lower than spiral galaxy disks of the same total
luminosity (see Fig.\ref{fig1} left panel), where $Z= 0.35 Z_{\odot}$,
which is consistent with values predicted for the formation of disks
by cold accretion processes; {\it ii)} the absence of any metallicity
gradient along the polar disk (see Fig.\ref{fig1} right panel), which
suggests that the metal enrichment is not influenced by the stellar
evolution of the older central spheroid and, thus, the disk was formed
later by the infall of metal-poor gas from outside which is still
forming the disk.
\section{Dynamical model for NGC~4650A: solving the enigma
of the flattening of its Dark Halo}
The existence of two orthogonal components of the angular momentum makes PRGs an ideal laboratory to put limits on the 3-D shape of the DH.
The question of the DH shape is important because cosmological
simulations (\citet{NFW97}) predict the distribution of
the DH shapes and the universal radial dependence (NFW). NFW density
profiles are shown to describe the DH in many morphological types of
galaxies, but divergences have been found (\citet{dBl03};
\citet{Nap05}). In view of these applications, if one derives
the most likely DH flattening distribution from the PRGs dynamics,
tests can be performed on the likelihood of the different cosmological
models. The latest dynamical models developed for NGC4650A were by
\citet{Sack94}, who found a flattened (E6-E7) DH, whose axes are aligned with those of the central galaxy, and by \citet{CA96}, who
derived a best fit with a DH flattened in the plane of the polar disk
itself. The biggest uncertainties in the mass models proposed until
1996 are due to the low resolution of the HI data and to the
difficulty in measuring the velocity dispersion $\sigma$ along the
spheroid major axis. The new high resolution CaT spectra
(\citet{Iod06}) are better tracers of the kinematics for the
NGC4650A spheroid than the data available before: they allow us to measure
a flat $\sigma$ profile along the spheroid major axis,
while previous measurements were too scattered to reliably
establish any trend with radius. The new $\sigma$ profiles show that
both the linear decreasing fit proposed by \citet{Sack94} and the
exponential empirical law proposed by \citet{CA96} do not reproduce
the observed trend with radius. Previous conclusions that the same
authors drew were based on data that are no longer valid, thus we aim
to develop a new dynamical model for NGC4650A (Iodice, Napolitano,
Arnaboldi et al. in preparation) by using the new spectroscopy
available for the HG (\citet{Iod06}) and rotation curves of stars
and gas in the polar disk (\citet{Arn97}; \citet{SwR03};
\citet{Iod08}). As a first step we fitted the kinematics of the two
components (HG and polar disk) separately: 1) we have assumed a
centrifugal equilibrium along the polar disk, $V^2_c(R) = GM(R)/R$,
where the circular velocity $V^2_c = V^2_{star} + V^2_{gas} +
V^2_{DH}$; 2) we have performed an axisymmetric Jeans analysis on the
equatorial plane of the spheroid. The best fit for the polar disk (see
Fig.\ref{fig2} left panel) was obtained with $M/L=0.25$ (K band) by
adding to an exponential disk a DH with a NFW radial profile, with a
scale radius $r^{PD}_s = 54$ kpc and a DH mass of about $5.9 \times 10^{10}
M_{\odot}$ inside a radius of 25 kpc. To fit the rotation velocity and
velocity dispersion along the spheroid equatorial plane, we assumed a
NFW DH, with virial mass and the concentrations fixed from the polar
axis, and isotropy of the velocity ellipsoid: the best fit was
obtained with a scale radius $r^{HG}_s = 18$ kpc for the DH (see
Fig.\ref{fig2} right panel). By such a simple analysis we have
obtained that the two scale radii are different $r^{HG}_s / r^{PD}_s =
0.3$ which suggests a DH flattened along the polar axis with a
flattening of the potential $(c/a)_{\phi} \sim 0.7$ (E7). This
preliminary result, together with the DH mass, are consistent with the
DH content and flattening given by
\citet{CA96}. We are going to develop a more accurate dynamical model to fit simultaneously the kinematics along the equatorial and meridional planes, by introducing a shape parameter in the DH profile.
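For illustration only, the centrifugal-equilibrium decomposition $V^2_c = V^2_{star} + V^2_{gas} + V^2_{DH}$ adopted above can be sketched numerically as follows; the snippet assumes a thin exponential stellar disk (Freeman formula) plus an NFW halo, omits the gas term, and uses placeholder parameter values rather than the best-fit values quoted in the text:
\begin{verbatim}
# Hypothetical sketch of a rotation-curve decomposition: thin exponential
# disk (Freeman 1970) plus NFW dark halo. All parameter values are
# illustrative placeholders, not the fit results discussed in the text.
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def v2_disk(R, M_disk, R_d):
    """Circular speed squared of a thin exponential disk (Freeman formula)."""
    sigma0 = M_disk / (2.0 * np.pi * R_d**2)
    y = R / (2.0 * R_d)
    return 4.0 * np.pi * G * sigma0 * R_d * y**2 * (i0(y)*k0(y) - i1(y)*k1(y))

def v2_nfw(R, rho_s, r_s):
    """Circular speed squared of an NFW halo, V^2 = G M(<R) / R."""
    x = R / r_s
    M = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
    return G * M / R

R = np.linspace(0.5, 40.0, 80)                     # radii in kpc
Vc = np.sqrt(v2_disk(R, 5.0e9, 3.0)                # disk: 5e9 Msun, R_d = 3 kpc
             + v2_nfw(R, 5.0e6, 20.0))             # halo: rho_s, r_s placeholders
print(Vc[::10])                                    # sample of V_c(R) in km/s
\end{verbatim}
A fit would then adjust the disk mass-to-light ratio and the halo parameters until $V_c(R)$ matches the observed rotation curves.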
\begin{figure}
\includegraphics[height=.3\textheight]{001eiodice_fig1.ps}
\includegraphics[height=.3\textheight]{001eiodice_fig1b.ps}
\caption{Left panel - Oxygen abundance vs absolute blue magnitude
for samples of different morphological type of galaxies, including
Compact Narrow Emission Lines Galaxies (CNELG), Emission Lines
Galaxies (ELG), for NGC4650A (star) and the polar disk galaxy IIZw71
\citep{Per09}. The dashed line indicates the solar oxygen
abundance. The arrow indicates the shift of the value of the oxygen
abundance if we use the direct methods to evaluate it (see Spavone et
al. 2009 for details). Right panel - Oxygen abundance along the polar
disk derived with empirical methods (see Spavone et al. 2009 for
details). The superimposed lines are the linear best fit derived by
\citet{Pil06}; the dotted line represents the best fit to the abundance
of oxygen-rich spirals, while the long-dashed line is the one related to
ordinary spirals.}\label{fig1}
\end{figure}
\begin{figure}
\includegraphics[height=.25\textheight]{001eiodice_fig2.ps}
\includegraphics[height=.25\textheight]{001eiodice_fig2b.ps}
\caption{Left panel - Points are the rotation curves (RC) of stars (open triangles, from \citet{SwR03}) and HI gas (filled squares, see \citet{Iod08}
for details); the continuous line is the best fit to the observed
velocities which accounts for an exponential disk (long-dash line), HI
gas density (short dash - long dash line) and a NFW DH (short-dash
line). Right panel - Points are the observed velocity dispersion along
the HG major axis (see \citet{Iod06}) and the continuous line is
the best fit obtained with a Jeans analysis. }\label{fig2}
\end{figure}
\bibliographystyle{aipproc}
\section{Introduction}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec1}
One of the oldest but still actual problems in nuclear physics is related to
the determination of the interaction between the nucleons. A quantitative understanding
of the nuclear force is crucial in order to describe the properties of nuclei and
nuclear matter in terms of hadronic degrees of freedom. The conventional way to
parametrize the nuclear force utilizes the meson-exchange picture,
which goes back to the seminal work by Yukawa \cite{Yukawa:1935aa}.
His idea, followed by the experimental discovery of $\pi$- and heavier mesons
($\rho$, $\omega$, $\dots$), stimulated the development of boson-exchange models
which still provide a basis for many modern, highly sophisticated phenomenological
nucleon-nucleon (NN) potentials.
According to our present understanding, the nuclear force is due to residual
strong interactions between the color-charge neutral hadrons. A direct derivation
of the nuclear force from QCD, the underlying theory of strong interactions, is not
yet possible, see however Ref.~\cite{Ishii:2006ec} for a recent attempt using lattice QCD.
In order to provide reliable input for few- and many-body calculations,
a (semi-)phenomenological approach has been followed over the past few decades aiming
to achieve the best possible description of the available low-energy NN data.
As will be discussed in section \ref{sec2}, the two-nucleon potential
can be decomposed in only few different spin-space structures, so that
the corresponding radial functions can be parameterized using an extensive set of data.
Although the resulting models provide an excellent description of experimental
data in many cases, there are certain major conceptual deficiencies that cannot
be overcome. In particular, one important concern is related to the problem
of the construction of \emph{consistent} many-body forces. These can
only be meaningfully defined in a consistent scheme with a given two-nucleon
interaction \cite{Gloeckle:1990aa}.
Notice that because of the large variety of different possible structures in the
three-nucleon force, following the
same phenomenological path as in the NN system and parametrizing its most general structure
seems not to be feasible without additional theoretical guidance.
Clearly, the same problem of consistency arises in the context of reactions
with electroweak probes, whose description requires the knowledge of the corresponding
consistent nuclear current operator.
Further, one lacks within phenomenological treatments a method of systematically improving
the theory of the nuclear force in terms of the dominant dynamical contributions.
Finally, and most important, the phenomenological scheme
provides only a loose connection to QCD.
Chiral perturbation theory (ChPT) is an effective field theory (EFT) of QCD which
exploits its symmetries and symmetry-breaking pattern and allows to analyze the
properties of hadronic systems at low energies in a systematic and model independent way.
We will see in section \ref{sec3} that QCD with two flavors of the $u$- and $d$-quarks
and, to a less extent, with three
flavors of the $u$-, $d$- and $s$-quarks, exhibits an approximate chiral symmetry
which is explicitly broken due to non-vanishing (but small) quark masses. In
addition, the chiral symmetry is also spontaneously broken down to its vector
subgroup. These symmetry/symmetry-breaking pattern manifest themselves in the
hadron spectrum leading, in particular, to a natural explanation of the very
small masses (compared to other hadrons) of pions which play the role of the
corresponding Goldstone bosons. Most important, the Goldstone boson nature of the
pions implies that they interact weakly at low energy and allows to calculate
low-energy observables in the pion and single-nucleon sector in perturbation
theory. The situation in the few-nucleon sector is conceptually much
more complicated due to the strong nature of the nuclear force which
manifests itself in the appearance of self-bound atomic nuclei and invalidates
a naive application of perturbation theory. As pointed out by Weinberg, the
breakdown of perturbation theory in the few-nucleon sector can be traced back
to the infrared enhancement of reducible time-ordered diagrams which involve purely
nucleonic intermediate states and can be resummed by iterating the corresponding
dynamical equation \cite{Weinberg:1990rz,Weinberg:1991um}.
These important observations made in Weinberg's seminal papers
opened a new era in nuclear physics and
has triggered an intense research activity along these lines. In these lectures I will
outline the basic concepts of chiral effective field theory and its application to
nucleon-nucleon scattering and the derivation of the nuclear force.
The manuscript is organized as follows. In section \ref{sec2} I discuss
the general structure of the nuclear force and outline the main ingredients of the
conventional NN potentials. Section \ref{sec3} provides an elementary introduction
to chiral perturbation theory. Generalization of EFT to strongly interacting nuclear
systems is discussed in section \ref{sec4}. Derivation of the nuclear forces in chiral
EFT is outlined in section \ref{sec5}.
A brief summary is given in section \ref{sec6}.
\section{Nuclear potentials and nucleon-nucleon scattering}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec2}
The most general structure of a non-relativistic two-nucleon potential is expressible
in terms of just a few operators. The potential can be viewed as an operator
acting in the position, spin and isospin spaces of the nucleons. It is instructive to
discuss its isospin structure separately from the operators acting in the position-spin space.
The isospin structure of the two-nucleon force falls into the four different classes
according to the classification of Ref.~\cite{Henley:1979aa}:
\begin{equation}
\label{2NF_classes}
\begin{array}{lcl}
\mbox{Class I:} & \mbox{\hskip 1 true cm} & V_{\rm I} = \alpha_{\rm I} + \beta_{\rm I} \,\fet \tau_1 \cdot \fet \tau_2 \,,\\ [0.6ex]
\mbox{Class II:} & & V_{\rm II} = \alpha_{\rm II} \, \tau_1^3 \, \tau_2^3 \,,\\[0.6ex]
\mbox{Class III:} & & V_{\rm III} = \alpha_{\rm III} \,( \tau_1^3 + \tau_2^3) \,,\\[0.6ex]
\mbox{Class IV:} & & V_{\rm IV} = \alpha_{\rm IV} \, (\tau_1^3 - \tau_2^3) + \beta_{\rm IV} \, [\fet \tau_1 \times \fet \tau_2]^3 \,.
\end{array}
\end{equation}
Here, $\alpha_{\rm i}$, $\beta_{\rm i}$ are position-spin operators and $\fet \tau_i$ are
Pauli isospin matrices of a nucleon $i$.
The operator $\beta_{\rm IV}$ has to be odd under a time reversal transformation.
While class (I) forces are isospin-invariant, all other classes (II), (III) and (IV) are isospin-breaking.
Class (II) forces, $V_{\rm II}$, maintain charge symmetry but break charge independence. They are usually referred
to as charge independence breaking (CIB) forces. Charge symmetry represents invariance under reflection about the
1-2 plane in charge space.
The charge symmetry operator $P_{cs}$ transforms proton and neutron states into each other and
is given by $P_{cs} = e^{i \pi T_2}$ with $\fet T \equiv \sum_i \fet \tau_i/2$ being the total isospin operator.
Class (III) forces break charge symmetry but do not lead to isospin mixing in the NN system, i.e.~they do not
give rise to transitions between isospin-singlet and isospin-triplet two-nucleon states.
Finally, class (IV) forces break charge symmetry and cause isospin mixing in the NN system.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: show that class-III two-nucleon forces do not lead to isospin mixing in the
two-nucleon system, i.e.~they commute with the operator
$T^2$.
Does this still hold true for systems with three and more nucleons?
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
Let us now discuss the position-spin structure of the potential.
For the sake of simplicity, I restrict myself to the isospin-invariant case.
The available vectors are given by the position,
momentum and spin operators for individual nucleons:
$\vec r_1, \, \vec r_2, \, \vec p_1, \, \vec p_2, \, \vec \sigma_1, \, \vec \sigma_2$.
The translational and Galilean invariance of the potential
implies that it may only depend on the relative distance between the nucleons,
$\vec r \equiv \vec r_1 - \vec r_2$, and the relative momentum,
$\vec p \equiv (\vec p_1 - \vec p_2)/2$.
Further constraints due to (i) rotational invariance, (ii) invariance
under a parity operation, (iii) time reversal invariance, (iv) hermiticity
as well as (v) invariance with respect to interchanging the nucleon labels,
$1 \leftrightarrow 2$, lead to the following operator form of the
potential \cite{Okubo:1958aa}:
\begin{equation}
\label{pot_operat}
\left\{ \fet 1_{\rm spin}, \; \vec \sigma_1 \cdot \vec \sigma_2, \;
S_{12} (\vec r \, ), \; S_{12} (\vec p \, ), \; \vec L \cdot \vec S, \;
(\vec L \cdot \vec S\, )^2 \right\} \times
\left\{ \fet 1_{\rm isospin}, \; \fet \tau_1 \cdot \fet \tau_2 \right\}\,,
\end{equation}
where $\vec L \equiv \vec r \times \vec p$, $\vec S \equiv
(\vec \sigma_1 + \vec \sigma_2 )/2$ and $S_{12} ( \vec x \,) \equiv
3 \vec \sigma_1 \cdot \hat x \, \vec \sigma_2 \cdot \hat x - \vec \sigma_1
\cdot \vec \sigma_2$ with $\hat x \equiv \vec x/| \vec x \, |$.
The operators entering the above equation are multiplied by scalar operator-like
functions that depend on $r^2$, $p^2$ and $L^2$.
Throughout this work, two-nucleon observables will be computed by
solving the Lippmann-Schwinger equation in momentum space. It is, therefore,
instructive to look at the momentum-space representation of the potential,
$V (\vec p \, ', \, \vec p \, ) \equiv \langle \vec p\, ' | V | \vec p \, \rangle$,
with $\vec p$ and $\vec p \, '$ denoting the two-nucleon center of mass momenta before and
after the interaction takes place. Following the same logic as above, the
most general form of the potential in momentum space can be shown to be:
\begin{equation}
\label{pot_mom}
\left\{ \fet 1_{\rm spin}, \; \vec \sigma_1 \cdot \vec \sigma_2, \;
S_{12} (\vec q \, ), \; S_{12} (\vec k \, ), \; i \vec S \cdot \vec q \times \vec k, \;
\vec \sigma_1 \cdot \vec q \times \vec k \, \vec \sigma_2 \cdot \vec q \times \vec k \right\} \times
\left\{ \fet 1_{\rm isospin}, \; \fet \tau_1 \cdot \fet \tau_2 \right\}\,,
\end{equation}
where $\vec q \equiv \vec p \, ' - \vec p$ and $\vec k \equiv (\vec p \, ' + \vec p \, )/2$.
The operators are multiplied with the scalar functions that depend on $p^2$, ${p '}^2$ and
$\vec p \cdot \vec p \, '$.
Notice that contrary to Eq.~(\ref{pot_operat}) which involves the \emph{operator} $\vec p$,
$\vec p$ and $\vec p \, '$ that enter Eq.~(\ref{pot_mom})
denote the corresponding eigenvalues. It should also be emphasized that further
spin-momentum operators contribute in the case of class-IV isospin-breaking interactions.
For low-energy processes I will be focused in here, it is convenient to switch
to the partial wave basis $| \vec p \, \rangle \to | p l m_l \rangle$. A two-nucleon
state $| p (l s) j m_j \rangle$ in the partial-wave basis depends on the orbital
angular momentum $l$, spin $s$, the total angular momentum $j$ and the corresponding
magnetic quantum number $m_j$. The partial wave decomposition of the potential in
Eq.~(\ref{pot_mom}) is given by:
\begin{equation}
\label{partw}
\langle p ' (l' s') j' m_j' | V | p (l s) j m_j \rangle
\equiv \delta_{j' j} \, \delta_{m_j' m_j} \, \delta_{s' s}\,
V^{sj}_{l' l} (p ', \, p) \,,
\end{equation}
with
\begin{eqnarray}
\label{partw1}
V^{sj}_{l' l} (p ', \, p)
&=& \sum_{m_l' , \, m_l} \int d \hat p ' \, d \hat p \,
c(l', s, j; m_l', m_j-m_l ', m_j) \, c(l, s, j; m_l, m_j-m_l , m_j)\nonumber \\
&& {} \times
Y^\star_{l' m_l'} (\hat p ') \, Y_{l m_l} (\hat p ) \,
\langle s \, m_j-m_l ' | V (\vec p \, ', \, \vec p \, ) | s \, m_j-m_l \rangle \,,
\end{eqnarray}
where $c(l, s, j; m_l, m_j - m_l, m_j)$ are Clebsch-Gordan coefficients and
$Y_{l m_l} (\hat p) $ denote the spherical harmonics. The first two Kronecker
$\delta$'s on the right-hand side of the first line in Eq.~(\ref{partw}) reflect
the conservation of the total angular momentum. Rotational invariance of the
potential prevents the dependence of the matrix elements on the magnetic quantum
number $m_j$. The conservation of the total spin of the nucleons can be easily
verified explicitly for all operators entering Eq.~(\ref{pot_mom}). I stress, however,
that transitions between the spin-singlet and spin-triplet channels are possible
in a more general case of the broken isospin symmetry. For each individual operator
entering Eq.~(\ref{pot_mom}), the expression
(\ref{partw1}) can be simplified and finally expressed as an integral
over $\hat p \cdot \hat p \, '$ with the integrand being written
in terms of the corresponding scalar function and Legendre polynomials.
Explicit formulae can be found e.g.~in \cite{Epelbaum:2004fk}, see also
Ref.~\cite{Golak:2009ri} for a
recent work on this topic.
The Lippmann-Schwinger (LS) equation for the half-shell $T$-matrix in the partial wave
basis has the form
\begin{equation}\label{LSeq}
T^{sj}_{l'l} (p',p) = V^{sj}_{l'l} (p',p) + \sum_{l''} \,
\int_0^\infty \frac{dp'' \, {p''}^2}{(2 \pi )^3} \, V^{sj}_{l'l''} (p',p'')
\frac{m}{p^2-{p''}^2 +i\eta} T^{sj}_{l''l} (p'',p)~,
\end{equation}
with $m$ denoting the nucleon mass and $\eta \to 0^+$.
In the uncoupled case, $l$ is conserved.
The relation between the on-shell $S$- and $T$-matrices is given by
\begin{equation}
S_{l' l}^{s j} (p) = \delta_{l' l} - \frac{i}{8 \pi^2}
\, p \, m \, T_{l' l}^{s j} (p)~.
\end{equation}
The phase shifts in the uncoupled cases can be obtained from the
$S$-matrix via
\begin{equation}
S_{jj}^{0j} = \exp{ \left( 2 i \delta_{j}^{0j} \right)} \; , \quad
S_{jj}^{1j} = \exp{ \left( 2 i \delta_{j}^{1j} \right)} \;,
\end{equation}
where I use the notation $\delta^{sj}_l$.
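To illustrate how Eq.~(\ref{LSeq}) is solved in practice, the following sketch (added here for illustration; it is not one of the potentials or codes discussed in the text) discretizes the uncoupled equation with Gauss-Legendre quadrature, treats the pole via the standard principal-value subtraction and extracts the phase shift from the relation between the on-shell $S$- and $T$-matrices given above. The potential is a toy Gaussian-regulated contact term with arbitrary strength:
\begin{verbatim}
# Hypothetical sketch: phase shift for an uncoupled partial wave from the
# Lippmann-Schwinger equation, using the conventions of the text
# (hbar = c = 1, momenta in MeV).
import numpy as np

m   = 938.9      # nucleon mass in MeV
Lam = 4000.0     # upper limit of the momentum grid in MeV
N   = 100        # number of quadrature points

def V(pp, p, C0=-1.0e-4, Lc=500.0):
    """Toy S-wave potential in MeV^-2: Gaussian-regulated contact term."""
    return C0 * np.exp(-(pp**2 + p**2) / Lc**2)

def phase_shift(p0):
    x, w = np.polynomial.legendre.leggauss(N)
    q, wq = 0.5 * Lam * (x + 1.0), 0.5 * Lam * w    # map (-1,1) -> (0, Lam)
    k = np.append(q, p0)                            # grid plus on-shell point
    u = np.empty(N + 1, dtype=complex)              # propagator weights
    u[:N] = wq * q**2 * m / (p0**2 - q**2)
    u[N] = (-p0**2 * m * np.sum(wq / (p0**2 - q**2))
            + 0.5 * m * p0 * np.log((Lam + p0) / (Lam - p0))
            - 1j * np.pi * m * p0 / 2.0)            # principal-value subtraction
    u /= (2.0 * np.pi)**3
    Vmat = V(k[:, None], k[None, :])
    T = np.linalg.solve(np.eye(N + 1) - Vmat * u[None, :], Vmat[:, N])
    S = 1.0 - 1j * p0 * m * T[N] / (8.0 * np.pi**2)
    return np.degrees(0.5 * np.angle(S))

for p0 in (50.0, 100.0, 200.0):
    print(p0, phase_shift(p0))
\end{verbatim}
The same matrix-inversion strategy generalizes straightforwardly to coupled channels.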
The so-called Stapp parametrization
of the $S$-matrix in the coupled channels ($j>0$) is defined as:
\begin{equation}
S = \left( \begin{array}{cc} S_{j-1 \, j-1}^{1j} & S_{j-1 \, j+1}^{1j} \\[3pt]
S_{j+1 \, j-1}^{1j} & S_{j+1 \, j+1}^{1j} \end{array} \right)
=
\left( \begin{array}{cc} \cos{(2 \epsilon)} \exp{(2 i \delta^{1j}_{j-1})} &
i \sin{(2 \epsilon)} \exp{(i \delta^{1j}_{j-1} +i \delta^{1j}_{j+1})} \\[3pt]
i \sin{(2 \epsilon)} \exp{(i \delta^{1j}_{j-1} +i \delta^{1j}_{j+1})} &
\cos{(2 \epsilon)} \exp{(2 i \delta^{1j}_{j+1})} \end{array} \right)~\nonumber ,
\end{equation}
and is related to another frequently used parametrization due to Blatt and Biedenharn
in terms of $\tilde{\delta}$ and $\tilde{\epsilon}$ via
the following equations:
\begin{equation}
\label{blattb}
\delta_{j-1} + \delta_{j+1} = \tilde{\delta}_{j-1} + \tilde{\delta}_{j+1}\,, \quad\quad
\sin ( \delta_{j-1} - \delta_{j+1} ) = \frac{\tan ( 2 \epsilon)}{\tan (2 \tilde{\epsilon})}\,,
\quad\quad
\sin (\tilde{\delta}_{j-1} - \tilde{\delta}_{j+1}) =
\frac{\sin ( 2 \epsilon)}{\sin (2 \tilde{\epsilon})}\,.
\end{equation}
The appearance of the electromagnetic interaction requires special care when calculating
scattering observables due to its long-range nature. In particular, the S-matrix has to be formulated in terms of
asymptotic Coulomb states. The electromagnetic interaction between the nucleons is driven
by the Coulomb force and, to a lesser extent, magnetic moment interactions and vacuum polarization.
It should also be emphasized that the
expansion of the scattering amplitude in partial waves converges very slowly in the presence
of the magnetic moment interactions.
For explicit expressions and a detailed discussion on their implementation when
calculating nucleon-nucleon observables the reader is referred to \cite{Stoks:1993tb}.
The deuteron wave function and binding energy $E_d$ are
obtained from the homogeneous part of Eq.~(\ref{LSeq}):
\begin{equation}\label{LSb}
\phi_l (p) = \frac{1}{E_d -p^2/m} \, \sum_{l'} \, \int_0^\infty \frac{dp'
\, {p'}^2}{(2 \pi )^3} \, V^{sj}_{ll'} (p,p') \, \phi_{l'} (p')~,
\end{equation}
where $s=j=1$, $l=l'=0,2$. Once phase shifts are calculated, nucleon-nucleon
scattering observables can be computed in a standard way, see
\cite{Bystricky:1976jr,Gloeckle:1983aa}.
The appearance of only a few structures in the most general expression for the two-nucleon force,
see Eq.~(\ref{pot_mom}), and the large amount of available
low-energy nucleon-nucleon scattering data motivated and enabled the development of
modern high-precision phenomenological potential models such as e.g.~the
CD-Bonn 2000 \cite{Machleidt:2000ge}, Argonne $V_{18}$
(AV18) \cite{Wiringa:1994wb} and
Nijmegen I, II potentials \cite{Stoks:1994wp}. The general strategy involves incorporating
the proper long-range behavior due to the electromagnetic interaction and the
one-pion exchange potential which is important to correctly describe the low-energy behavior
of the amplitude, cf.~section \ref{sec:analyt},
and parametrizing the medium- and short-range
contributions in a general way. AV18, a local $r$-space potential, can be viewed
as a representative example. It
includes (i) electromagnetic interactions multiplied by short-range functions to account for the
finite size of the nucleon, (ii) regularized one-pion exchange potential including
isospin-breaking corrections due to different masses of the charged and neutral pions, (iii)
some additional phenomenological isospin-breaking terms of a shorter range, (iv)
medium-range (short-range) contributions of Yukawa-type (Woods-Saxon type) multiplying
the operators in Eq.~(\ref{pot_operat}). With about 40
adjustable parameters, it describes the proton-proton and neutron-proton scattering data
with $\chi_{\rm datum}^2 = 1.09$. Other high-precision potentials are constructed in
a similar way and allow to reproduce the data or phase shifts from e.g.~the Nijmegen
partial wave analysis (PWA) with a comparable accuracy. This is visualized in
Fig.\ref{phases_nijm}. I refer the reader to Ref.~\cite{Machleidt:2001rw}
for a recent review article on the modern high-precision potentials.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.32\textwidth]{3S1nijm.pdf} \hfill
\includegraphics[width=0.315\textwidth]{E1nijm.pdf} \hfill
\includegraphics[width=0.32\textwidth]{3D1nijm.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{phases_nijm} $^3S_1$ (left panel) and $^3D_1$ (right panel) phase shifts
and the mixing angle $\epsilon_1$ (middle panel) calculated from several modern high-precision
potentials in comparison with the results of the Nijmegen PWA. The phase shifts and the mixing
angle are shown in degrees. Plots are generated through the NN-Online web site {\sl http://nn-online.org}.
}
\vspace{0.2cm}
\end{figure*}
While various phenomenological potentials
provide an accurate representation of the nucleon-nucleon phase shifts and
most of the deuteron properties, the situation is much less satisfactory
when it comes to the much weaker but necessary three-nucleon forces.
Such three-body forces are needed to describe the nuclear binding
energies and levels, as most systematically shown by the
Urbana-Argonne group~\cite{Pieper:2001mp}. Systematic studies of
the dynamics and reactions of systems with three or four-nucleons
further sharpen the case for the necessity of including three-nucleon
forces, see e.g.~\cite{Gloeckle:1995jg}. A phenomenological path to modeling
the three-nucleon force following the same strategy as in the two-nucleon case
seems to be not feasible (at least, at present). Indeed, in the case of two nucleons,
the potential can be decomposed in only a few different spin-space structures, and
the corresponding radial functions can be adjusted to the extensive set of data.
Such an approach would, however, fail for the three-nucleon force due to the large variety
of different possible structures, a scarcer data base and considerably more time consuming
calculations required.
While the conventional approach based on the high-precision two-nucleon potentials
accompanied with the existing three-nucleon force models enjoyed
many successes and is frequently used in e.g.~nuclear structure and reaction
calculations, it remains incomplete as there are certain deficiencies
that can only be overcome based on EFT approaches. These are: (i) it is
very difficult - if not impossible - to assign a trustworthy theoretical
error, (ii) gauge and chiral symmetries are difficult to implement, (iii)
none of the three-nucleon forces is consistent with the underlying
nucleon-nucleon interaction models/approaches and (iv) the connection
to QCD is not at all obvious.
\section{Chiral perturbation theory: An elementary introduction}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec3}
Effective field theories have proved to be an important and very useful tool in nuclear and particle physics.
An effective (field) theory is understood to be an approximate theory whose scope is to describe
phenomena which occur at a chosen length (or energy) range.
The main idea of this method can be illustrated with the following example from classical electrodynamics.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.6\textwidth]{eft.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig0} A localized charge distribution generates an electrostatic potential which can be
described in terms of the multipole expansion.
}
\vspace{0.2cm}
\end{figure*}
Consider a localized charge distribution in space of a size $a$. The resulting electrostatic potential
at any given position $\vec R$ can be calculated by integrating over the elementary charges and using the
familiar expression for the Coulomb potential generated by a point charge:
\begin{equation}
\label{multipole0}
V (\vec R ) \propto \int d^3 r \, \frac{\rho (\vec r \, ) }{| \vec R - \vec r \,| }
\end{equation}
Expanding $1/| \vec R - \vec r \, |$ for $r \ll R$,
\begin{equation}
\frac{1}{| \vec R - \vec r \,| } = \frac{1}{R} + \sum_i r_i\frac{R_i}{R^3}
+ \frac{1}{2} \sum_{ij} r_i r_j \frac{3 R_i R_j - \delta_{ij} R^2}{R^5} + \ldots\,,
\end{equation}
with $i, \, j$ denoting the Cartesian components
allows to rewrite the integral as
\begin{equation}
\label{multipole}
\int d^3 r \, \frac{\rho (\vec r \, ) }{ | \vec R - \vec r \,| } =
\frac{q}{R} + \frac{1}{R^3} \sum_i R_i P_i +
\frac{1}{6R^5} \sum_{ij} (3 R_i R_j - \delta_{ij} R^2 ) Q_{ij} + \ldots
\end{equation}
where the total charge $q$, dipole moment $P_i$ and quadrupole moment $Q_{ij}$ are
defined via
\begin{equation}
q = \int d^3 r \, \rho (\vec r \, ), \quad \quad
P_i = \int d^3 r \, \rho (\vec r \, ) \, r_i, \quad \quad
Q_{ij} = \int d^3 r \, \rho (\vec r \, ) (3 r_i r_j - \delta_{ij} r^2 ) \,.
\end{equation}
The expression in Eq.~(\ref{multipole})
represents the well-known multipole expansion for the electrostatic potential. When truncated,
it provides an approximation to the ``underlying theory'' given by the exact expression (\ref{multipole0}).
The multipoles entering every term in this expansion contain certain amount of information
about the charge distribution and can, of course, be calculated provided $\rho( \vec r \, )$ is known.
The multipole expansion is, however, particularly useful if $\rho( \vec r \, )$ is unknown
(except for the fact that it is localized). It then allows to describe the electrostatic potential
at every point in space far from the charge distribution with, in principle, an arbitrarily high
accuracy provided one has enough data (e.g.~experimentally measured values of the electrostatic
potential at some points) to determine the desired number of the multipoles.
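As a purely numerical aside (added here for illustration, not part of the original discussion), the accuracy of the truncated expansion (\ref{multipole}) can be checked for a toy distribution of point charges:
\begin{verbatim}
# Hypothetical illustration: exact potential of a cloud of point charges
# versus its monopole + dipole + quadrupole approximation.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-1.0, 1.0, size=(20, 3))   # charges confined to a region ~ 1
q   = rng.uniform(-1.0, 1.0, size=20)

def exact(R):
    return np.sum(q / np.linalg.norm(R - pos, axis=1))

def multipole(R):
    Rn  = np.linalg.norm(R)
    Q   = q.sum()                                        # total charge
    P   = pos.T @ q                                      # dipole moment
    r2  = np.einsum('ai,ai->a', pos, pos)
    Qij = 3.0 * np.einsum('a,ai,aj->ij', q, pos, pos) - np.eye(3) * (q @ r2)
    Tij = 3.0 * np.outer(R, R) - np.eye(3) * Rn**2
    return Q / Rn + P @ R / Rn**3 + np.einsum('ij,ij', Tij, Qij) / (6.0 * Rn**5)

R = np.array([8.0, -5.0, 6.0])   # observation point far outside the cloud
print("exact    :", exact(R))
print("multipole:", multipole(R))
\end{verbatim}
The agreement improves rapidly as the observation point is moved further away, reflecting the suppression of the neglected terms by powers of $a/R$.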
Chiral Perturbation Theory (ChPT) is the effective theory of QCD (more generally,
of the Standard Model) which was formulated by Weinberg \cite{Weinberg:1978kz}
and developed into a systematic tool for analyzing low-energy hadronic observables
by Gasser and Leutwyler \cite{Gasser:1983yg,Gasser:1984gg}. In this section,
I give a brief overview of the foundations of this approach. My main purpose here is to
outline the logical
steps which are needed in order to set up this theoretical framework.
I will also give references to the existing extensive literature on this subject
which is suitable for further reading.
\subsection{Chiral symmetry of QCD}
Symmetries provide powerful constraints on effective interactions and thus
play a crucial role in effective field theories. In the following, I will discuss the
symmetries of QCD which are relevant in the context of ChPT.
Consider the QCD Lagrangian in the two-flavor case of the light up and down quarks
\begin{equation}
\label{LQCD}
\mathcal{L}_{\rm QCD} = \bar q \, ( i \gamma_\mu D^\mu - \mathcal{M} ) q - \frac{1}{4}
G^a_{\mu \nu} G^{a\, \mu \nu}\,,
\end{equation}
where $D_\mu = \partial_\mu - i g_s G_\mu^a T^a$, with $T^a$ ($a = 1, \ldots, 8$) denoting the SU(3)$_{\rm color}$ Gell-Mann matrices and $q$
the quark fields. Further, $G_{\mu \nu}^a$ are the
gluon field strength tensors, and the quark mass matrix is given by $\mathcal{M} = \mbox{diag} (m_u, \, m_d)$.
I do not show in Eq.~(\ref{LQCD}) the
$\theta$- and gauge fixing terms
which are not relevant for our consideration. It is instructive to write the QCD Lagrangian
in terms of the left- and right-handed quark field components defined by
$q_{R} = (1/2) (1 + \gamma_5 ) q$, $q_{L} = (1/2) (1 - \gamma_5 ) q$:
\begin{equation}
\label{qcdumgeschr}
\mathcal{L}_{\rm QCD} = \bar{q}_L i D \hspace{-7.0pt}/ \, q_L + \bar{q}_R i D \hspace{-7.0pt}/ \, q_R - \bar{q}_L \mathcal{M} q_R
- \bar{q}_R \mathcal{M} q_L
- \frac{1}{4} G^\alpha_{\mu \nu} G^{\alpha, \mu \nu}\,.
\end{equation}
We see that the left- and right-handed quark fields are only connected through the mass term.
Given the smallness of the light quark masses \cite{Amsler:2008zzb}
\footnote{The following values correspond to the $\overline{\rm MS}$ scheme at scale $\mu = 2$ GeV.},
\begin{equation}
m_u \simeq 1.5 \ldots 3.3 \mbox{ MeV,} \quad \quad
m_d \simeq 3.5 \ldots 6.0 \mbox{ MeV,}
\end{equation}
as compared to the typical hadron masses of the order of $1$ GeV, the quark mass term can, to
a good approximation, be neglected. The Lagrangian in Eq.~(\ref{qcdumgeschr}) is, therefore,
approximately invariant under independent global flavor rotations of the left- and right-handed
quark fields:
\begin{equation}
\label{trafo_quarks}
q_L \longrightarrow q_L'= L q_L = \exp{\left(- i \fet{\theta}_L \cdot \fet{\tau} /2 \right)} q_L , \quad \quad
q_R \longrightarrow q_R'= R q_R = \exp{\left(- i \fet{\theta}_R \cdot \fet{\tau} /2 \right)} q_R \,,
\end{equation}
where $\fet \tau$ denote the Pauli matrices in the flavor space and $\fet \theta_{L,R}$ are the corresponding
rotation angles. The corresponding symmetry group $SU(2)_L \times SU(2)_R$ is referred to as
the $SU(2)$ chiral group. According to Noether's theorem, there are six conserved currents
\begin{equation}
L_\mu^i = \bar q_L \gamma_\mu \frac{\tau^i}{2} q_L \,, \quad \quad
R_\mu^i = \bar q_R \gamma_\mu \frac{\tau^i}{2} q_R \,,
\end{equation}
which can equally well be expressed in terms of the vector and axial-vector currents
$V_\mu^i = L_\mu^i + R_\mu^i$ and $A_\mu^i = R_\mu^i - L_\mu^i$. The corresponding conserved
charges generate the algebra of the chiral group
\begin{equation}
\label{lee1}
\left[ Q_{I}^i, \; Q_{I}^j \right] = i \epsilon^{ijk} Q_I^k \quad \mbox{with $I=L,R$,}
\quad \quad
\left[ Q_{L}^i, \; Q_{R}^j \right] = 0\,,
\end{equation}
or, equivalently,
\begin{equation}
\label{lee2}
\left[ Q_{V}^i, \; Q_{V}^j \right] = i \epsilon^{ijk} Q_V^k\,, \quad \quad
\left[ Q_{A}^i, \; Q_{A}^j \right] = i \epsilon^{ijk} Q_V^k\,, \quad \quad
\left[ Q_{V}^i, \; Q_{A}^j \right] = i \epsilon^{ijk} Q_A^k\,.
\end{equation}
Application of the above commutation relations to hadronic reactions was at the heart of the
current algebra calculations in the early seventies of the last century.
The Lagrangian for massless u- and d-quarks is, in fact, invariant under an even larger
group of transformations in the flavor space, namely
$SU(2)_L \times SU(2)_R \times U(1)_V \times U(1)_A$. While the vector $U(1)$ corresponds to
quark number conservation, the axial $U(1)_A$ symmetry is known to be broken by quantum
effects (the so-called $U(1)_A$ anomaly) and thus does not represent a symmetry of the quantum
theory.
In spite of the fact that QCD for two light flavors is approximately chiral invariant,
its ground state is not symmetric with respect to $SU(2)_L \times SU(2)_R$ but only with
respect to its vector subgroup $SU(2)_V \subset SU(2)_L \times SU(2)_R$ generated by
the charges $\{Q_V^i \}$. This means that the axial charges do not annihilate the vacuum,
that is $Q_V^i |0 \rangle = 0$ while $Q_A^i |0 \rangle \neq 0$.
Evidence of the spontaneous breakdown of the chiral symmetry comes from various sources.
For example, hadrons occur in nearly degenerate isospin multiplets corresponding to $SU(2)_V$ which
implies that this group is realized in the usual Wigner-Weyl mode. If this were the case
for the chiral group, one would observe larger chiral multiplets containing
particles of opposite parity since the charges $Q_{V}^i$
and $Q_{A}^i$ have opposite parity. Generally, no such parity doubling is observed in the
hadron spectrum. Another strong argument in favor of the spontaneous breakdown of
the chiral symmetry comes from the existence of unnaturally light (in comparison with
other hadrons) pseudoscalar mesons (pions) being natural candidates
for the corresponding Nambu-Goldstone bosons. Pions are not exactly massless but
acquire a small mass due to the explicit chiral symmetry breaking by the nonvanishing quark
masses. These and further arguments coming from both theory and experiment indicate
undoubtedly that the chiral $SU(2)_L \times SU(2)_R$ group is spontaneously broken
down to $SU(2)_V$.
I now pause to summarize the content of this section. The QCD Lagrangian in the two-flavor
case of the up- and down-quarks is approximately invariant under global chiral $SU(2)_L \times SU(2)_R$
transformations. The chiral symmetry is broken explicitly due to the nonvanishing quark masses,
$m_u \neq 0$, $m_d \neq 0$. In addition to this explicit symmetry breaking, $SU(2)_L \times SU(2)_R$
is also broken spontaneously down to the isospin group $SU(2)_V$. The three corresponding
pseudoscalar Goldstone bosons are identified with pions whose small masses emerge due to nonvanishing
quark masses. There exists a large mass gap in the hadron spectrum: $M_\rho \simeq 770\mbox{ MeV} \; \gg
M_\pi \simeq 140\mbox{ MeV}$.
\subsection{Effective Lagrangian for Goldstone bosons}
We now turn to the \emph{effective} description of low-energy QCD dynamics. The simplest possible case
emerges when the energy is chosen so small that only pions need to be treated
as explicit degrees of freedom. All other hadrons are much heavier and can be
integrated out from the theory. The main ingredient of any effective field theory
is the most general effective Lagrangian that involves \emph{all} possible terms which are consistent
with the symmetries of the underlying theory. Let us, for the time being, consider the so-called chiral
limit of QCD, i.e.~the idealized world in which quarks are massless and the chiral symmetry of
$\mathcal{L}_{QCD}$ is exact. The task is then to construct the most general chiral invariant
Lagrangian for pion fields. In order to do that we first need to figure out how pions transform
with respect to chiral rotations. Our knowledge of the pion transformation properties with respect
to $SU(2)_L \times SU(2)_R$ can be summarized by the following two observations:
\begin{itemize}
\item
Pions form an isospin triplet and thus transform linearly
under $SU(2)_V \subset SU(2)_L \times SU(2)_R$
according to the corresponding irreducible representation;
\item
The chiral group must be realized \emph{nonlinearly}. This follows immediately
from the geometrical argument based on the
fact that the Lie algebra of $SU(2)_L \times SU(2)_R$ in Eq.~(\ref{lee1}) is isomorphic to that of $SO(4)$.
We know that one needs three coordinates in order to construct the smallest non-trivial
representation, the so-called fundamental representation, of the three-dimensional rotation group.
Similarly, the smallest nontrivial representation of the four-dimensional rotation group $SO(4)$
is four-dimensional. We have, however, only three ``coordinates'' at our disposal
(the triplet of the pion fields)!
\end{itemize}
To construct a non-linear realization of $SO(4)$ we begin with the usual
representation describing four-dimensional rotations of a vector
$( \fet \pi , \, \sigma ) \equiv ( \pi_1 , \, \pi_2 , \, \pi_3 , \, \sigma )$.
For an infinitesimal rotation parametrized by six angles $\{ \theta_i^{V,A} \}$, with $i =1, 2, 3$, we have:
\begin{equation}
\label{linear}
\left(
\begin{array}{c}
\fet \pi \\ \sigma
\end{array}
\right) \stackrel{SO(4)}{\longrightarrow}
\left(
\begin{array}{c}
\fet \pi' \\ \sigma '
\end{array}
\right) = \left[ \fet 1_{\rm 4 \times 4} + \sum_{i=1}^3 \theta_i^V V_i + \sum_{i=1}^3 \theta_i^A A_i \right]
\left(
\begin{array}{c}
\fet \pi \\ \sigma
\end{array}
\right)\,,
\end{equation}
where
\begin{equation}
\sum_{i=1}^3 \theta_i^V V_i = \left(
\begin{array}{cccc}
0 & -\theta^V_3 & \theta^V_2 & 0 \\
\theta^V_3 & 0 & -\theta_1^V & 0 \\
-\theta^V_2 & \theta_1^V & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right)\,, \quad \quad \quad
\sum_{i=1}^3 \theta_i^A A_i = \left(
\begin{array}{cccc}
0 & 0 & 0 & \theta^A_1 \\
0 & 0 & 0 & \theta^A_2 \\
0 & 0 & 0 & \theta^A_3 \\
-\theta^A_1 & -\theta_2^A & -\theta_3^A & 0
\end{array}
\right)\,.
\end{equation}
Notice that the set of rotations generated by $V_i$ forms a subgroup of $SO(4)$, namely
the group of three-dimensional rotations $SO(3) \subset SO(4)$ which is locally isomorphic
to $SU(2)$.
The four real quantities $( \fet \pi , \, \sigma )$ define the smallest nontrivial chiral
multiplet and represent the field content of the well-known linear sigma model. To switch from
the above linear realization (i.e.~representation) of $SO(4)$ to the nonlinear one, we
observe that, in fact, only three of the four components of $( \fet \pi , \, \sigma )$ are
independent with respect to four-dimensional rotations. These three independent components
correspond to coordinates on a four-dimensional sphere since $\fet \pi$ and $\sigma$
are subject to the constraint
\begin{equation}
\label{constraint}
\fet \pi^2 + \sigma^2 = F^2\,,
\end{equation}
where $F$ is a constant of dimension mass. Making use of this equation to eliminate $\sigma$ in Eq.~(\ref{linear})
we end up with the following transformation properties of $\fet \pi$ under $SO(4)$:
\begin{eqnarray}
\fet \pi &\stackrel{\theta^V}{\longrightarrow}& \fet \pi ' = \fet \pi + \fet \theta^V \times \fet \pi\,, \nonumber \\
\fet \pi &\stackrel{\theta^A}{\longrightarrow}& \fet \pi ' = \fet \pi + \fet \theta^A
\sqrt{ F^2 - \fet \pi^2}\,,
\end{eqnarray}
where $\fet \theta^{V,A} \equiv \{ \theta^{V,A}_i \}$ with $i = 1,2,3$.
The nonlinear terms (in $\fet \pi$) on the right-hand side of the
second equation give rise to the nonlinear realization of $SO(4)$.
This is exactly what we wanted to achieve: the chiral group $SU(2)_L \times SU(2)_R \simeq SO(4)$
is realized nonlinearly on the triplet of pions which, however, transform linearly under isospin
$SU(2)_V \simeq SO(3)$ rotations parametrized through the angles $\{ \fet \theta_V \}$.
As a last remark note that the four-dimensional rotations of $( \fet \pi , \, \sigma )$
can be conveniently written using the $2 \times 2 $ matrix notation by introducing the unitary
matrix\footnote{For $U$ to be unitary, $\sigma$ and $\fet \pi$ have to fulfill Eq.~(\ref{constraint}).}
\begin{equation}
\label{matrU}
U = \frac{1}{F} \left( \sigma \fet 1_{\rm 2\times 2} + i \fet \pi \cdot \fet \tau \right)\,,
\end{equation}
and demanding the transformation properties of $U$ under chiral rotations to be:
\begin{equation}
\label{trafoU}
U \longrightarrow U' = L U R^\dagger\,.
\end{equation}
Here, $L$ and $R$ are $SU(2)_L \times SU(2)_R$ matrices defined in Eq.~(\ref{trafo_quarks}).
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: verify that infinitesimal transformations of
$( \fet \pi , \, \sigma )$ induced by Eq.~(\ref{trafoU})
with $\fet \theta^V =(\fet \theta_R + \fet \theta_L )/2$ and $\fet \theta^A =(\fet \theta_R - \fet \theta_L )/2$
have indeed the same form as the ones given in Eq.~(\ref{linear}).
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
Clearly, the transition to the nonlinear realization is achieved by
\begin{equation}
U = \frac{1}{F} \left( \sigma \, \fet 1_{\rm 2 \times 2} + i \fet \pi \cdot \fet \tau \right) \; \longrightarrow
\; U = \frac{1}{F} \left( \sqrt{F^2 - \fet \pi^2} \, \fet 1_{\rm 2 \times 2} + i \fet \pi \cdot \fet \tau \right) \,,
\end{equation}
leaving pions as the only remaining degrees of freedom.
Notice that the ground state of the theory is characterized by a vanishing vacuum expectation value
of $\fet \pi$ and corresponds to a particular point on the considered four-dimensional sphere (one of the two
crossing points between the sphere and the $\sigma$-axis). In accordance with the spontaneous breaking
of chiral symmetry, it is not $SO(4)$- but only $SO(3) \simeq SU(2)_V$-invariant.
It is now a simple exercise to construct the most general chiral-invariant Lagrangian for
pions in terms of the matrix $U$. The building blocks are given by $U$, $U^\dagger$ and derivatives
of these quantities. Notice that since I consider here only global chiral rotations, i.e.~$L$ and $R$
do not depend on space-time, the quantities like e.g.~$\partial_\mu \partial_\nu U$ transform in
the same way as $U$ itself, i.e.~according to Eq.~(\ref{trafoU}).
Chiral-invariant terms in the effective Lagrangian can be constructed by taking a trace over products
of $U$, $U^\dagger$ and their derivatives. Lorentz invariance implies that the number of derivatives
must be even, so that the effective Lagrangian can be written as
\begin{equation}
\mathcal{L}_\pi = \mathcal{L}_\pi^{(2)} + \mathcal{L}_\pi^{(4)} + \ldots\,.
\end{equation}
Notice that $\mathcal{L}_\pi^{(0)}$ is simply a constant since $U U^\dagger = \fet 1_{2\times 2}$.
The lowest-order Lagrangian involves just a single term
\begin{equation}
\label{lagr1}
\mathcal{L}_\pi^{(2)} = \frac{F^2}{4} \langle \partial_\mu U \partial^\mu U^\dagger \rangle \,,
\end{equation}
where $\langle \ldots \rangle$ denotes the trace in the flavor space.
Terms involving $\partial_\mu \partial^\mu U$ or $\partial_\mu \partial^\mu U^\dagger$ are
not independent and can be brought to the form of Eq.~(\ref{lagr1}) by using partial integration.
The constant $F^2/4$
ensures that the Lagrangian has a proper dimension (both $F$ and $\pi$ have a dimension
of mass) and is chosen in such a way that Eq.~(\ref{lagr1}) matches the usual
free Lagrangian for a massless scalar field when written in terms of pions:
\begin{equation}
\label{lagr2}
\mathcal{L}_\pi^{(2)} = \frac{1}{2} \partial_\mu \fet \pi \cdot \partial^\mu \fet \pi + \frac{1}{2 F^2}
\left( \partial_\mu \fet \pi \cdot \fet \pi \right)^2 + \mathcal{O} ( \fet \pi^6 )\,.
\end{equation}
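As a cross-check of the algebra leading to Eq.~(\ref{lagr2}), the expansion can also be automated.
The following SymPy sketch is purely illustrative (the variable names and the restriction to a single
Lorentz component of the derivative are my own choices and are not part of the text); it expands the
Lagrangian of Eq.~(\ref{lagr1}) for the square-root parametrization of $U$:
\begin{verbatim}
# Illustrative SymPy sketch (not from the original text): expand the lowest-order
# Lagrangian F^2/4 <dU dU^+> for U = (sigma + i pi.tau)/F with sigma = sqrt(F^2 - pi^2).
# A single Lorentz component of the derivative is kept, which is enough to check
# the field structure of Eq. (lagr2).
import sympy as sp

F   = sp.symbols('F', positive=True)
pi  = sp.Matrix(sp.symbols('pi1 pi2 pi3', real=True))     # pion fields
dpi = sp.Matrix(sp.symbols('dpi1 dpi2 dpi3', real=True))  # their derivatives

tau = [sp.Matrix([[0, 1], [1, 0]]),
       sp.Matrix([[0, -sp.I], [sp.I, 0]]),
       sp.Matrix([[1, 0], [0, -1]])]                       # Pauli matrices

sigma  = sp.sqrt(F**2 - (pi.T * pi)[0])
dsigma = -(pi.T * dpi)[0] / sigma                           # chain rule for d(sigma)

pion  = sum((pi[i]  * tau[i] for i in range(3)), sp.zeros(2, 2))
dpion = sum((dpi[i] * tau[i] for i in range(3)), sp.zeros(2, 2))
dU    = (dsigma * sp.eye(2) + sp.I * dpion) / F
dUdg  = (dsigma * sp.eye(2) - sp.I * dpion) / F             # (dU)^dagger

L2 = sp.simplify(sp.Rational(1, 4) * F**2 * (dU * dUdg).trace())
print(L2)   # should reduce to dpi.dpi/2 + (pi.dpi)^2/(2*(F^2 - pi.pi))
\end{verbatim}
Expanding the second term of the printed result in powers of $\fet \pi^2/F^2$ reproduces the
$\frac{1}{2 F^2} \left( \partial_\mu \fet \pi \cdot \fet \pi \right)^2$ interaction quoted in Eq.~(\ref{lagr2}).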
The a-priori unknown constants that accompany the terms in the effective Lagrangian are
commonly called the low-energy constants (LECs), cf.~the multipoles in Eq.~(\ref{multipole}).
The LEC $F$ can be identified with the
pion decay constant in our idealized world (with quark masses being set to zero) \cite{Gasser:1983yg}. In the real
world, it is measured to be $F_\pi = 92.4$ MeV. Notice further that the pions are massless in
the idealized world. Higher-order Lagrangians $\mathcal{L}_\pi^{(4)}$,
$\mathcal{L}_\pi^{(6)}$, $\ldots$, can be constructed
along the same lines by using partial integration and equations of motion to eliminate
the redundant terms.
Let us now pause to summarize what has been achieved so far. We have explicitly constructed
a particular nonlinear realization of $SU(2)_L \times SU(2)_R$ in terms of
pion fields which, as desired, form a representation of $SU(2)_V \subset SU(2)_L \times SU(2)_R$
and learned how to write down the most general possible effective Lagrangian. The constructed
nonlinear realization of the chiral group is, however, not unique. Different realizations emerge
by choosing different parametrizations of the matrix $U$ in Eq.~(\ref{trafoU}) in terms of pion fields such as,
for example, the exponential parametrization $U = \exp (i \fet \pi \cdot \fet \tau/F )$.
Generally, only the first three terms in the expansion of $U$ in powers of the pion field are
fixed by unitarity,
\begin{equation}
\label{Uexplicit}
U( \fet \pi ) = \fet 1_{2\times 2} + i\frac{\fet \tau \cdot \fet \pi}{F} - \frac{\fet \pi^2}{2 F^2} - i \alpha
\frac{\fet \pi^2 \fet \tau \cdot \fet \pi}{F^3} + (8 \alpha - 1) \frac{\fet \pi^4}{8 F^4} + \mathcal{O} (\fet \pi^5)\,.
\end{equation}
Here, $\alpha$ is an arbitrary constant which reflects the freedom in parametrizing the matrix $U$.
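For orientation, expanding the exponential parametrization mentioned above and using
$(\fet \tau \cdot \fet \pi)^2 = \fet \pi^2 \, \fet 1_{2 \times 2}$ gives
\begin{displaymath}
U = \exp \left( i \fet \pi \cdot \fet \tau / F \right) = \fet 1_{2\times 2} + i\frac{\fet \tau \cdot \fet \pi}{F}
- \frac{\fet \pi^2}{2 F^2} - \frac{i}{6} \, \frac{\fet \pi^2 \, \fet \tau \cdot \fet \pi}{F^3}
+ \frac{\fet \pi^4}{24 F^4} + \mathcal{O} (\fet \pi^5)\,,
\end{displaymath}
which corresponds to the choice $\alpha = 1/6$ in Eq.~(\ref{Uexplicit}) and indeed satisfies $(8 \alpha - 1)/8 = 1/24$.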
This non-uniqueness raises the concern that observables
one intends to compute in the EFT might depend on the chosen parametrization, which would be a disaster.
Fortunately, this is not the case. As shown by Coleman, Callan, Wess and Zumino \cite{Coleman:1969sm,Callan:1969sn},
all realizations of the chiral
group are equivalent to each other modulo nonlinear field redefinitions
\begin{equation}
\pi_i \to \pi_i ' = \pi_i \, F \left[ \fet \pi \right] \quad \mbox{with} \quad
F \left[ \fet 0 \right] = 1\,.
\end{equation}
According to Haag's theorem \cite{Haag:1958vt}, such nonlinear field redefinitions do not affect
S-matrix elements.
So far, I considered the chiral limit corresponding to the idealized world with the masses of the
up- and down-quarks being set to zero. This is fine as a first approximation but, of course, one would like to
systematically improve on it by taking into account corrections
due to nonvanishing quark masses. For that, we have to include in the
effective Lagrangian all possible terms which break chiral symmetry in exactly the same way as does the
quark mass term in $\mathcal{L}_{QCD}$. Consider, for example, the
quark mass term with $m_u = m_d = m_q \neq 0$
which breaks chiral but preserves isospin symmetry. Recalling the geometrical interpretation
with the four-dimensional rotation group and coordinates $(\fet \pi , \, \sigma)$, the quark mass term can be viewed
as a vector that points along the $(\fet 0 , \, \sigma )$-direction. Its effects can be systematically taken
into account by including in the effective Lagrangian not only $SO(4)$-scalars but also the corresponding
components of all possible $SO(4)$-tensors and multiplying the resulting terms by the appropriate powers of
$m_q$, see \cite{VanKolck:1993ee} for more details and the explicit construction along these lines.
A simpler (but equivalent) method makes use of the following trick. Consider the massless QCD Lagrangian
in the presence of an external hermitian scalar field $s$ interacting with the quarks
via the term $- \bar q s q$. The resulting Lagrangian is chiral invariant provided the
scalar source $s$ transforms under chiral rotations according to:
\begin{equation}
\label{fieldS}
s \to s' =L s R^\dagger = R s L^\dagger \,,
\end{equation}
where the second equality follows from the hermiticity of $s$. To recover QCD from the new theory,
the external field needs to be set to the value $s = \mathcal{M}$. To account for the explicit chiral symmetry
breaking, we first write down the effective Lagrangian for the new theory by listing all possible
chiral invariant terms constructed from $s$ and (derivatives of) $U$ and $U^\dagger$ and then set
$s = \mathcal{M}$. Since the quark masses are treated as a small perturbation, the
leading symmetry-breaking terms should contain a minimal possible number of derivatives and
just one insertion of $s$. Given that $U \to U^\dagger$ under parity transformation, there exists only
one symmetry-breaking term without derivatives:
\begin{equation}
\label{4.55}
\mathcal{L}_{\rm SB} = \frac{F^2 B}{2} \langle s U + s U^\dagger \rangle \bigg|_{s=\mathcal{M}}
= F^2 B ( m_u + m_d) - \frac{B}{2} (m_u + m_d) \fet{\pi}^2 + \mathcal{O}
(\fet{\pi}^4) \;,
\end{equation}
where $B$ is a LEC. The first term is a constant and does not contribute to the $S$-matrix.
The second one gives rise to the pion mass term $-(1/2) M^2 \fet{\pi}^2$
with $M^2 = (m_u + m_d ) B$. Note that to leading order in $m_{u,d}$, one
has equal masses for all pions $\pi^+$, $\pi^-$ and $\pi^0$. Further, the LEC
$B$ can be shown to be related to the quark condensate according to
$\langle 0 | \bar{u} u | 0 \rangle= \langle 0 | \bar{d} d | 0 \rangle=
- F_\pi^2 B (1 + \mathcal{O} (\mathcal{M}))$ \cite{Gasser:1983yg}. Modulo corrections
of higher order in the quark masses, the
experimentally measured pion mass $M_\pi$ coincides with $M$: $M_\pi^2 = M^2 + \mathcal{O}(m_q^2)$.
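To get a feeling for the numbers, one may invert these leading-order relations. The following sketch
is a rough illustration only; the input values are assumed here (taken as representative of the quantities
quoted in this section) rather than obtained from a fit, and all higher-order corrections in the quark masses
are ignored:
\begin{verbatim}
# Rough numerical illustration (inputs assumed): leading-order estimate of the
# LEC B and of the quark condensate from M_pi^2 = (m_u + m_d) B and
# <0|qbar q|0> ~ -F_pi^2 B, i.e. essentially the Gell-Mann-Oakes-Renner relation.
Mpi  = 139.57   # MeV
Fpi  = 92.4     # MeV
mu_q = 2.4      # MeV, up-quark mass (midpoint of the quoted MS-bar range)
md_q = 4.75     # MeV, down-quark mass (midpoint of the quoted MS-bar range)

B = Mpi**2 / (mu_q + md_q)
condensate = -Fpi**2 * B
print(f"B ~ {B / 1000.0:.2f} GeV")
print(f"<qbar q> ~ -({(-condensate) ** (1.0 / 3.0):.0f} MeV)^3")
\end{verbatim}
This yields $B$ of the order of a few GeV and a condensate of roughly $-(290$ MeV$)^3$,
i.e.~the familiar order of magnitude $-(250 \ldots 300$ MeV$)^3$.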
How important are chiral-symmetry breaking terms as compared to chiral-invariant ones?
When constructing the most general chiral-invariant effective Lagrangian, various terms
were classified according to the number of derivatives.
When calculating observables, the derivatives acting on pion fields
generate powers of external momenta which are assumed to be low. The EFT considered so far is
typically applied to processes characterized by pion momenta of the order of the pion
mass.\footnote{For an example of EFT in a different kinematical regime with nonrelativistic
mesons see \cite{Colangelo:2006va}.} It is, therefore, natural to count the pion mass in the effective Lagrangian
on the same footing as a derivative. We thus end up with the following
lowest-order Lagrangian:
\begin{equation}
\label{lagrpifin}
\mathcal{L}_\pi^{(2)} = \frac{F^2}{4} \langle \partial_\mu U \partial^\mu U^\dagger + 2 B
(\mathcal{M} U + \mathcal{M} U^\dagger )\rangle \,.
\end{equation}
For the sake of completeness, the next-higher order Lagrangian reads \cite{Gasser:1983yg}:
\begin{eqnarray}
\mathcal{L}_\pi^{(4)} &=& \frac{l_1}{4} \langle \partial_\mu U \partial^\mu U^\dagger \rangle^2 +
\frac{l_2}{4} \langle \partial_\mu U \partial_\nu U^\dagger \rangle \langle \partial^\mu U \partial^\nu U^\dagger \rangle
+ \frac{l_3}{16} \langle 2 B \mathcal{M} (U + U^\dagger ) \rangle^2 + \ldots \nonumber \\
&-& \frac{l_7}{16} \langle 2 B \mathcal{M} (U - U^\dagger ) \rangle^2\,,
\end{eqnarray}
where the ellipses refer to terms that involve external sources and the $l_i$ are the corresponding LECs.
\subsection{Power counting}
Having constructed the most general effective Lagrangian for pions in harmony with the chiral
symmetry of QCD, we now need to figure out how to compute observables. At first sight, the effective
Lagrangian seems to be of little practical value due to the infinite number of unknown LECs.
Even worse, all interaction terms entering $\mathcal{L}_\pi$ are non-renormalizable in the usual
sense\footnote{This implies that the structure of local ultraviolet divergences generated by loop diagrams
with vertices from $\mathcal{L}_\pi^{(2)}$ is different from $\mathcal{L}_\pi^{(2)}$.}
contrary to field theories such as e.g.~QED and QCD. What at first sight appears to cause a problem,
namely non-renormalizability of the theory, in fact, turns out to be a crucial feature for the whole approach
to be useful. As demonstrated in the seminal paper by Weinberg \cite{Weinberg:1978kz}, the effective Lagrangian $\mathcal{L}_\pi$
can be used to compute low-energy observables (such as e.g.~scattering amplitudes) in a systematically improvable
way via an expansion in powers of $Q/\Lambda_\chi$, where $Q$ represents the soft scale associated with external
momenta or the pion mass $M_\pi$ while $\Lambda_\chi$, the so-called chiral-symmetry-breaking scale, is the
hard scale that drives the LECs in $\mathcal{L}_\pi$. This expansion is referred to as the chiral expansion,
and the whole approach carries the name of chiral perturbation theory.
Consider an arbitrary multi-pion scattering process
with all initial and final pion momenta of the order of $M_\pi$. In order to decide on the importance
of a particular Feynman diagram, we have to determine the power of the soft scale associated with it.
For that we first need to clarify an important issue related to the counting of virtual momenta
which are being integrated over in the loop integrals. What scale do we associate with such virtual momenta?
When calculating Feynman diagrams in ChPT, one generally encounters two kinds of loop integrals.
First, there are cases in which the integrand dies out fast enough when the loop momenta go to infinity so
that the corresponding integrals are well-defined. Since the hard scale only enters the LECs
and thus factorizes out, the integrands involve only soft scales (external momenta and the pion mass)
and the loop momenta. Given that the integration is carried out over the whole
range of momenta, the resulting mass dimension of the integral is obviously driven by
the soft scales. Thus, in this case we can safely count all virtual momenta as the soft scale.
The second kind of integrals involves ultraviolet divergences and requires regularization
and renormalization. Choosing renormalization conditions in a suitable way, one can ensure that
virtual momenta are (effectively) of the order of the soft scale. This is achieved automatically
if one uses a mass-independent regularization such as e.g.~dimensional regularization (DR). Consider, for example,
the integral
\begin{equation}
I = \int \frac{d^4 l}{(2 \pi)^4} \frac{i}{l^2 - M^2 + i \epsilon}\,,
\end{equation}
that enters the pion self energy due to the tadpole diagram shown in Fig.~\ref{fig1}.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.5\textwidth]{pions_self_energy.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig1} Tadpole contribution from $\mathcal{L}_\pi^{(2)}$ (left) and tree
contribution from $\mathcal{L}_\pi^{(4)}$ (right) to the pion self energy.
}
\vspace{0.2cm}
\end{figure*}
Evaluating this quadratically divergent integral in dimensional regularization one obtains
\begin{equation}
I \to I^{\rm reg} = \mu^{4 - d} \int \frac{d^d l}{(2 \pi)^d} \frac{i}{l^2 - M^2} =
\frac{M^2}{16 \pi^2} \, \ln \left( \frac{M^2}{\mu^2} \right)
+ 2 M^2 L ( \mu )
+ \mathcal{O} (d - 4) \,,
\end{equation}
where $\mu$ is the scale introduced by dimensional
regularization and the quantity $L (\mu ) $ is given by
\begin{equation}
\label{lmu}
L (\mu ) = \frac{\mu ^{d-4}}{16 \pi^2} \left\{ \frac{1}{d-4} - \frac{1}{2} (\ln (4 \pi ) + \Gamma ' (1) + 1 ) \right\}\,,
\mbox{\hskip 1 true cm}
\Gamma ' (1) = - \gamma_E = -0.577215\ldots \,,
\end{equation}
with $\gamma_E$ the Euler constant.
The second term on the right-hand side of the above expression diverges
in the limit $d \to 4$ but can be absorbed into an appropriate redefinition of the LECs in $\mathcal{L}_\pi^{(4)}$
(renormalization). Notice that, as desired, the mass dimension of the finite term is driven by the
soft scale $M$. I further emphasize that the scale $\mu$ introduced by dimensional
regularization has to be chosen of the order $\mu \sim M_\pi$ in order to prevent the
appearance of large logarithms in DR expressions. Last but not least, note that one can, in principle, use
different regularization methods such as e.g.~cutoff regularization provided they
respect chiral symmetry.\footnote{When cutoff regularization is used, special care is required regarding
the treatment of non-covariant pieces in the pion propagator, see \cite{Gerstein:1971fm}.}
Contrary to dimensionally-regularized expressions, cutoff-regularized integrals
do not scale properly i.e.~their mass dimension is not generated exclusively by the
soft scales. The renormalized expressions emerging after absorbing the positive powers and
logarithms of the cutoff into an appropriate redefinition of the LECs do, however,
feature the expected scaling behavior.
I am now in a position to discuss the chiral power counting, i.e.~the expression
that determines the power $\nu$ of the expansion parameter $Q/\Lambda_\chi$ for a given Feynman
diagram.
This can be achieved by carefully counting the powers of small momenta associated with
derivatives entering the vertices in $\mathcal{L}_\pi$, pion propagators,
integrations over the loop momenta and the $\delta$-functions.
Using certain topological identities, one obtains the following expression for the
chiral dimension of a connected Feynman diagram:
\begin{equation}
\label{pow_orig0}
\nu = 2 + 2 L + \sum_i V_i \Delta_i \,, \quad \quad
\Delta_i = d_i - 2\,,
\end{equation}
where $L$ refers to the number of loops, $V_i$ denotes the number of vertices of type $i$, and $d_i$ is the
number of derivatives or $M_\pi$-insertions at a vertex of type $i$.
This result was first obtained by Weinberg in \cite{Weinberg:1978kz}.
Notice that in order for perturbation theory to work, $\mathcal{L}_\pi$ must contain no
interactions with $\Delta_i < 0$ since, otherwise, adding new vertices would lower the chiral dimension
$\nu$. This feature is guaranteed by the spontaneously broken chiral symmetry of QCD which
ensures that only non-renormalizable interactions with at least two derivatives
or powers of the pion mass appear in $\mathcal{L}_\pi$. In particular, chiral symmetry
forbids the renormalizable derivative-less interaction of the type $\fet \pi^4$.
Consider now pion-pion scattering as an illustrative example. Eq.~(\ref{pow_orig0})
tells us that the leading contribution to the scattering amplitude is generated
by tree diagrams ($L=0$) constructed from the lowest-order vertices with $\Delta_i =0$
i.e.~the ones from $\mathcal{L}_\pi^{(2)}$,
see Fig.~\ref{fig2}. The amplitude scales as $Q^2$.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.9\textwidth]{pipi.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig2} Diagrams contributing to pion-pion scattering at
leading and next-to-leading order in ChPT. Solid dots and filled rectangles represent vertices
from $\mathcal{L}_\pi^{(2)}$ and $\mathcal{L}_\pi^{(4)}$, respectively.
}
\vspace{0.2cm}
\end{figure*}
The corrections result from one-loop graphs involving all vertices from $\mathcal{L}_\pi^{(2)}$
as well as tree graphs with a single insertion from $\mathcal{L}_\pi^{(4)}$, see Fig.~\ref{fig2}.
They appear at order $Q^4$ and are suppressed by two powers of momenta or one power of the quark masses compared to the
leading-order contribution. It is easy to verify that all diagrams in the bottom line of this figure
scale, indeed, as $Q^4$. For example, for the first diagram, four powers of momenta arise from
the vertices and another four powers of momenta emerge from the loop integration. One should further take
into account four powers of momenta generated in the denominator by the pion propagators. Thus, the total
power of the soft scale is indeed four. All ultraviolet divergences entering the loop integrals are local
and absorbable into redefinition of the LECs in
$\mathcal{L}_\pi^{(4)}$ (when using dimensional regularization),
as it represents the most general, approximately chiral invariant,
local interaction of Goldstone bosons at order $Q^4$.
The divergent parts of the LECs $l_i$ have been worked out in \cite{Gasser:1983yg} using the heat-kernel method.
The finite parts of the $l_i$'s are not fixed by chiral symmetry and have to be determined
from the data or lattice QCD calculations.
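The bookkeeping of Eq.~(\ref{pow_orig0}) is simple enough to be put into a few lines of code.
The following Python sketch is illustrative only (the function name and the way the vertices are
specified are my own conventions); it reproduces the counting used for the pion-pion diagrams above:
\begin{verbatim}
# Minimal illustration of Eq. (pow_orig0): the chiral dimension of a connected
# diagram from its number of loops and the list of d_i (derivatives or powers
# of M_pi) of its vertices.
def chiral_dimension(loops, vertex_d):
    return 2 + 2 * loops + sum(d - 2 for d in vertex_d)

print(chiral_dimension(0, [2]))     # LO pi-pi tree, one vertex from L^(2): nu = 2
print(chiral_dimension(1, [2, 2]))  # one-loop graph with two L^(2) vertices: nu = 4
print(chiral_dimension(0, [4]))     # tree graph with one L^(4) insertion:   nu = 4
\end{verbatim}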
In the Goldstone boson sector, even a number of two-loop
calculations (i.e.~at order $Q^6$) have already been performed, see Ref.~\cite{Bijnens:2006zp} for a review article.
In particular, an impressive theoretical prediction has been made for
the isoscalar S-wave $\pi \pi$ scattering length $a_0^0$ by Colangelo et al.~\cite{Colangelo:2000jc} who
combined the two-loop calculation \cite{Bijnens:1997vq} with dispersion relations to predict
$a_0^0 = 0.220 \pm 0.005$.
To compare, the leading-order calculation by Weinberg yielded $a_0^0 = 0.16$ \cite{Weinberg:1966kf} while
the next-to-leading value obtained by Gasser and Leutwyler is $a_0^0 = 0.20$ \cite{Gasser:1983yg}.
The results of the recent E865 experiment at Brookhaven \cite{Pislak:2003sv}
and the NA48/2 experiment
at CERN \cite{Batley:2007zz} combined with the older measurement by the Geneva-Saclay collaboration
beautifully confirmed the prediction of the two-loop analysis of
Ref.~\cite{Colangelo:2000jc} yielding the value
$a_0^0 = 0.217 \pm 0.008\mbox{ (exp)} \pm 0.006\mbox{ (th)}$ \cite{Colangelo:2008sm}.
This combined result accounts for isospin breaking corrections, see Ref.~\cite{Colangelo:2008sm}
for more details.
The procedure outlined above can, in principle, be extended to arbitrarily high orders in the
low-energy expansion. Clearly, the accuracy of the calculations depends crucially on the value
of the hard scale $\Lambda_\chi$ which sets the (maximal) radius of convergence of the
chiral expansion. The $\rho$-meson is the first meson of the non-Goldstone type
and shows up as a resonance in p-wave $\pi \pi$ scattering. Such resonances represent truly non-perturbative
phenomena that cannot be described in standard ChPT\footnote{For an extension of ChPT to the resonance
region, the so-called unitarized ChPT, see \cite{Pelaez:2003ip} and references therein.}. Consequently, their appearance
signals the breakdown of the chiral expansion. This leads to the estimate
$\Lambda_\chi \sim M_\rho \simeq 770$ MeV. A related observation that naturally matches
this estimate was made by Manohar and Georgi, who pointed out that $\Lambda_\chi$
cannot be larger than $4 \pi F_\pi \simeq 1200$ MeV since this number sets the scale
that controls the running of the renormalized LECs when shifting the renormalization point.
Last but not least, I would like to summarize and underline the special role and importance
of the chiral symmetry for the whole approach.
First of all, it implies severe constraints on the interactions in the effective
Lagrangian and relates the strengths of various multi-pion vertices. For example, the leading-order Lagrangian
$\mathcal{L}_\pi^{(2)}$ in Eq.~(\ref{lagrpifin}) gives rise to infinitely many vertices, when
expanded in powers of pion fields, whose strengths are determined by just two (!) LECs $F$ and $B$.
$\mathcal{L}_\pi^{(2)}$ allows one to compute the leading contribution to scattering amplitudes for multi-pion processes
and to relate the strengths of the corresponding matrix elements, thus featuring a remarkable predictive
power. Moreover, as we saw through the explicit construction, the spontaneously
broken chiral symmetry of QCD prevents the appearance of derivative-less interactions between pions
in the effective Lagrangian. The only derivative-less interactions in $\mathcal{L}_\pi$ are due
to explicit chiral symmetry breaking in $\mathcal{L}_{QCD}$ and are suppressed by powers of the
light quark masses. When calculating S-matrix elements, the derivatives entering the vertices generate powers
of external momenta. Consequently, the interaction between pions becomes weak at vanishingly low
energies and would even completely disappear if chiral symmetry were exact. This turns out to be
a general feature of Goldstone bosons and is not restricted to the $SU(2)_L \times SU(2)_R$ group.
This allows one to compute low-energy hadronic observables in a systematic way via the chiral expansion,
i.e.~the dual expansion in powers of momenta and quark masses about the kinematical point
corresponding to the free theory (assuming that the actual quark masses in the real world
are low enough for such an
expansion to converge).
\subsection{Inclusion of nucleons}
So far we only discussed interactions between Goldstone bosons.
We now extend these considerations to include nucleons. More precisely, we are interested
in describing reactions involving pions with external momenta of the order of $M_\pi$
and (essentially) non-relativistic nucleons whose three-momenta
are of the order of $M_\pi$. Similarly to the triplet
of pion fields, the isospin doublet of the nucleon fields should transform
nonlinearly under the chiral $SU(2)_L \times SU(2)_R$ but linearly under the vector subgroup
$SU(2)_V$. The unitary matrix $U$ introduced in Eq.~(\ref{matrU}) is less
useful when constructing the Lagrangian involving the nucleons. It is more convenient to
introduce its square root $u$, $U = u^2$. The transformation properties of $u$ under chiral
rotations can be read off from Eq.~(\ref{trafoU}):
\begin{equation}
\label{defu}
u \to u' = \sqrt{L U R^\dagger} \equiv L u h^{-1} = h u R^\dagger \,,
\end{equation}
where I have introduced the unitary matrix $h = h (L, R, U)$ given by
$h = \sqrt{L U R^\dagger}^{-1} L \sqrt{U}$ which is sometimes referred to as a compensator field.
The last equality in Eq.~(\ref{defu})
follows from $U' = u' u' = Luh^{-1} u'= L u u R^\dagger$. Notice that
since pions transform linearly under isospin rotations corresponding to $L = R = V$ with
$U \to U' = V U V^\dagger$ and, accordingly, $u \to u' = V u V^\dagger$, the compensator
field in this particular case becomes $U$-independent and coincides with $V$.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: calculate the explicit form of the compensator field $h(L, R, \fet \pi)$
for infinitesimal chiral transformations using
Eq.~(\ref{Uexplicit}) and keeping only terms that are at most
linear in the pion fields. Verify that $h$ indeed reduces to the isospin transformation for $L = R = V$.
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
It can be shown that $\{ U, \, N \}$ define a nonlinear realization of the chiral group
if one demands that
\begin{equation}
N \to N' = h N\,.
\end{equation}
I do not give here the proof of this statement and refer the interested reader
to Refs.~\cite{Coleman:1969sm,Callan:1969sn}. Moreover, this nonlinear realization
obviously fulfills the desired feature that pions and nucleons transform linearly under
isospin rotations. Similarly to the purely Goldstone boson case, one can show that
all other possibilities to introduce the nucleon fields are identical with the above realization
modulo nonlinear field redefinitions. The most general chiral invariant Lagrangian
for pions and nucleons can be constructed from \emph{covariantly} transforming building
blocks, i.e.~$O_i \to O_i ' = h O_i h^{-1}$, by writing down all possible terms of
the form $\bar N O_1 \ldots O_n N$. The covariant (first) derivative of the pion field
is given by
\begin{equation}
u_\mu \equiv i u^\dagger (\partial_\mu U) u^\dagger = -
\frac{\fet \tau \cdot \partial_\mu \fet \pi}{F} + \mathcal{O} (\fet \pi^3) \to u_\mu ' = h u_\mu h^{-1}\,,
\end{equation}
and is sometimes referred to as chiral vielbein. The derivative of the nucleon field,
$\partial_\mu N$, does not transform covariantly, i.e.~$\partial_\mu N \to (\partial_\mu N)' \neq
h \partial_\mu N$ since the compensator field $h$ does, in general, depend on space-time
(through its dependence on $U$). The covariant derivative of the nucleon field $D_\mu N$,
$D_\mu N \to (D_\mu N)' = h D_\mu N$, is given by
\begin{equation}
D_\mu N \equiv (\partial_\mu + \Gamma_\mu ) N\,, \quad
\mbox{with} \quad
\Gamma_\mu \equiv \frac{1}{2} \left( u^\dagger \partial_\mu u + u \partial_\mu u^\dagger \right) =
\frac{i}{4 F^2} \fet \tau \cdot \fet \pi \times \partial_\mu \fet \pi + \mathcal{O} (\fet \pi^4)
\,.
\end{equation}
The so-called connection $\Gamma_\mu$ can be used to construct higher covariant derivatives
of the pion field, for example:
\begin{equation}
u_{\mu \nu} \equiv \partial_\mu u_\nu + [ \Gamma_\mu , \, u_\nu ]\,.
\end{equation}
To first order in
the derivatives, the most general pion-nucleon
Lagrangian takes the form \cite{Gasser:1987rb}
\begin{equation}
\label{LeffN}
\mathcal{L}_{\pi N}^{(1)} = \bar N \left( i \gamma^\mu D_\mu - m + \frac{g_A}{2} \gamma^\mu \gamma_5 u_\mu \right) N\,,
\end{equation}
where $m$ and $g_A$ are the bare nucleon mass and the axial-vector coupling constant and
the superscript of $\mathcal{L}_{\pi N}$ denotes the power of the soft scale $Q$.
Contrary to the pion mass, the nucleon mass
does not vanish in the chiral limit and introduces an additional hard scale in the problem.
Consequently, terms proportional to $D_0$ and $m$ in Eq.~(\ref{LeffN})
are individually large.
It can, however, be shown that $(i \gamma^\mu D_\mu - m )N \sim \mathcal{O} (Q )$ \cite{Krause:1990xc}.
The appearance of the additional hard scale associated with the nucleon mass invalidates the
power counting for dimensionally regularized expressions since the contributions from
loop integrals involving nucleon propagators are not automatically suppressed.
To see this, consider the correction to the nucleon mass $m_N$ due to the pion loop shown in Fig.~\ref{fig_nucl}.
Assuming that the nucleon and pion propagators scale as $1/Q$ and $1/Q^2$, respectively, and taking into account
$Q^4$ from the loop integration and $Q^2$ from the derivatives entering the $g_A$-vertices, the pion loop
contribution to the nucleon self energy $\Sigma ( p )$ is expected to be of the order $\sim Q^3$.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.3\textwidth]{nucleon_self_energy.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig_nucl} Leading pion loop contribution to the nucleon self energy.
Solid line represents the nucleon.
}
\vspace{0.2cm}
\end{figure*}
Consequently, the corresponding nucleon mass shift $\delta m_N = \Sigma (m_N)$
is expected to be $\propto M_\pi^3$ (since no other soft scale is left).
Explicit calculation, however, shows that the resulting nucleon mass shift does not vanish in the
chiral limit \cite{Gasser:1987rb}:
\begin{equation}
\label{mNrelativ}
\delta m_N \big|_{\rm loop , \, rel} \stackrel{\mathcal{M} \to 0}{=}
- \frac{3 g_A^2 m^3}{F^2} \left( L(\mu ) + \frac{1}{32 \pi^2} \ln \frac{m^2}{\mu^2} \right) + \mathcal{O} (d-4 )\,,
\end{equation}
where
the quantity $L(\mu )$ is defined in Eq.~(\ref{lmu}).
The result in Eq.~(\ref{mNrelativ}) implies that the nucleon mass receives a contribution
which is formally of the order $\sim m \, (m/4 \pi F )^2$ and is not suppressed compared to $m$.
The bare nucleon mass $m$ that enters the lowest-order Lagrangian $\mathcal{L}_{\pi N}^{(1)}$
gets renormalized. This is in contrast
to the purely mesonic sector where loop contributions are always suppressed by powers of the soft
scale and the parameters $F$ and $B$ in the lowest-order Lagrangian $\mathcal{L}_\pi^{(2)}$ remain unchanged
by higher-order corrections (if mass-independent regularization is used). I emphasize, however,
that even though DR expressions do not automatically obey the dimensional power counting with nucleons
being treated relativistically, the proper scaling in agreement with naive dimensional analysis
can be restored via appropriately chosen renormalization
conditions \cite{Fuchs:2003qc}. Stated differently, one can (and should in order for the EFT to be useful)
choose renormalization conditions in such a way
that all momenta flowing through diagrams are effectively of the order of $Q$.
Another, simpler way to ensure the proper power counting exploits
the so-called heavy-baryon formalism \cite{Jenkins:1990jv,Bernard:1992qa} which is closely related to the
nonrelativistic expansion due to Foldy and Wouthuysen \cite{Foldy:1950aa} and is also widely used in
heavy-quark effective field theories. The idea is to decompose
the nucleon four-momentum $p^\mu$ according to
\begin{equation}
\label{HBmomentum}
p_\mu = m v_\mu + k_\mu \,,
\end{equation}
with $v_\mu$ the four-velocity of the nucleon satisfying $v^2 = 1$ and $k_\mu$ its small residual momentum,
$v \cdot k \ll m$. One can thus decompose the nucleon field $N$ into the velocity eigenstates
\begin{equation}
N_v = e^{i m v \cdot x} P_v^+ N\,, \mbox{\hskip 1.5 true cm}
h_v = e^{i m v \cdot x} P_v^- N\,,
\end{equation}
where $P_v^\pm = (1 \pm \gamma_\mu v^\mu )/2$ denote the corresponding projection operators.
In the nucleon rest-frame with
$v_\mu = (1, 0, 0, 0 )$, the quantities $N_v$ and $h_v$ coincide with the
familiar large and small components of the
free positive-energy Dirac field (modulo the modified time dependence).
One, therefore, usually refers to $N_v$ and $h_v$ as the large and small
components of $N$. The relativistic Lagrangian $\mathcal{L}_{\pi N}^{(1)}$ in Eq.~(\ref{LeffN})
can be expressed in terms of $N_v$ and $h_v$ as:
\begin{equation}
\mathcal{L}_{\pi N}^{(1)} = \bar N_v \mathcal{A} N_v + \bar h_v \mathcal{B} N_v + \bar N_v \gamma_0 \mathcal{B}^\dagger
\gamma_0 h_v - \bar h_v \mathcal{C} h_v \,,
\end{equation}
where
\begin{equation}
\mathcal{A} = i (v \cdot D ) + g_A (S \cdot u )\,,\quad
\mathcal{B} = - \gamma_5 \left[ 2 i (S \cdot D) + \frac{g_A}{2} (v \cdot u ) \right]\,, \quad
\mathcal{C} = 2 m + i (v \cdot D ) + g_A (S \cdot u )\,,
\end{equation}
and $S_\mu = \frac{i}{2} \gamma_5 \sigma_{\mu \nu} v^\nu$ is the covariant spin operator of the nucleon.
One can now use the equations of motion for the large and small component fields to
completely eliminate $h_v$ from the Lagrangian. Utilizing the more elegant path integral formulation \cite{Mannel:1991mc},
the heavy degrees of freedom can be integrated out performing the Gaussian integration over the
(appropriately shifted) variables $h_v$, $\bar h_v$. This leads to the effective Lagrangian of the form \cite{Bernard:1992qa}
\begin{equation}
\label{Lfin}
\mathcal{L}_{\pi N} = \bar N_v \left[ \mathcal{A} + (\gamma_0 \mathcal{B}^\dagger \gamma_0 )
\mathcal{C}^{-1} \mathcal{B} \right] N_v
= \bar N_v \left[ i (v \cdot D ) + g_A (S \cdot u ) \right] N_v + \mathcal{O} \left(\frac{1}{m} \right)\,.
\end{equation}
Notice that the (large) nucleon mass term has disappeared from the Lagrangian,
and the dependence on $m$ in $\mathcal{L}_{\pi N}^{\rm eff}$
resides entirely in new vertices suppressed by powers of $1/m$. The heavy-baryon propagator
of the nucleon is simply $1/(v \cdot k + i \epsilon )$ and can be obtained from the $1/m$ expansion
of the Dirac propagator using Eq.~(\ref{HBmomentum}) and assuming $v \cdot k \ll m$:
\begin{equation}
\label{Dirac}
\frac{p \hspace{-7.0pt}/ \, + m}{p^2 - m^2 + i \epsilon} = \frac{\Lambda_+}{v \cdot k + i \epsilon} +
\mathcal{O} \left( m^{-1} \right)\,,
\end{equation}
where $\Lambda_+ = (p \hspace{-7.0pt}/ \, + m)/(2m)$ is a projection operator on the states of positive energy.
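To make this expansion explicit, one may insert the decomposition (\ref{HBmomentum}) and use $v^2 = 1$,
which gives $p^2 - m^2 = 2 m \, v \cdot k + k^2$ and therefore
\begin{displaymath}
\frac{p \hspace{-7.0pt}/ \, + m}{p^2 - m^2 + i \epsilon} = \frac{2 m \, \Lambda_+}{2 m \, v \cdot k + k^2 + i \epsilon}
= \frac{\Lambda_+}{v \cdot k + i \epsilon} \left[ 1 - \frac{k^2}{2 m \, v \cdot k} + \ldots \right] ,
\end{displaymath}
so that the corrections to the static propagator are indeed suppressed by powers of $1/m$.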
The advantage of the heavy-baryon formulation (HBChPT) compared to
the relativistic one can be illustrated using the previous example of
the leading one-loop correction to the nucleon mass
\begin{equation}
\label{mNHB}
\delta m_N \big|_{\rm loop, \, HB} = - \frac{3 g_A^2 M_\pi^3}{32 \pi F^2} \,.
\end{equation}
Contrary to the relativistic ChPT result in Eq.~(\ref{mNrelativ}), the loop correction in HBChPT
is finite (in DR) and vanishes in the chiral limit. The parameters in the lowest-order
Lagrangian do not get renormalized due to higher-order corrections which are suppressed by powers of $Q/\Lambda_\chi$.
Notice further that Eq.~(\ref{mNHB}) represents the leading contribution to the nucleon mass which
is nonanalytic in quark masses. It agrees with the result obtained by Gasser et al.~based on the relativistic
Lagrangian in Eq.~(\ref{LeffN}) \cite{Gasser:1987rb}.
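Numerically, the shift in Eq.~(\ref{mNHB}) is sizeable. A quick estimate (the input values below are assumed
here for illustration, with $F \to F_\pi$ at this order) gives:
\begin{verbatim}
# Hedged numerical estimate (inputs assumed, not quoted from the text) of the
# leading nonanalytic pion-loop shift of the nucleon mass, Eq. (mNHB).
import math

g_A, M_pi, F_pi = 1.27, 139.57, 92.4   # M_pi and F_pi in MeV
delta_m = -3.0 * g_A**2 * M_pi**3 / (32.0 * math.pi * F_pi**2)
print(f"delta m_N ~ {delta_m:.1f} MeV")   # roughly -15 MeV
\end{verbatim}
i.e.~a contribution of roughly $-15$ MeV, below $2 \%$ of the physical nucleon mass.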
In general, the power $\nu$ of a soft scale $Q$ for connected contributions to
the scattering amplitude can be read off from the extension of Eq.~(\ref{pow_orig0}) to the single-nucleon
sector which has the form:
\begin{equation}
\label{pow1N}
\nu = 1 + 2 L + \sum_i V_i \Delta_i\,,
\quad \quad \mbox{with}
\quad \quad \Delta_i = -2 + \frac{1}{2} n_i + d_i\,,
\end{equation}
with $n_i$ being the number of nucleon field operators at a vertex $i$ with the chiral dimension $\Delta_i$.
Notice that no closed fermion loops appear in the heavy-baryon approach, so that exactly one nucleon line
connecting the initial and final states runs through all diagrams in the single-baryon sector.
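As a simple consistency check, Eq.~(\ref{pow1N}) applied to the self-energy diagram of Fig.~\ref{fig_nucl}
gives $\nu = 1 + 2 \cdot 1 + 0 = 3$, since the two lowest-order $\pi N$ vertices have $n_i = 2$, $d_i = 1$
and thus $\Delta_i = 0$. This matches the $M_\pi^3$-behavior of the heavy-baryon mass shift in Eq.~(\ref{mNHB}).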
The heavy-baryon formulation outlined above can be straightforwardly extended to higher
orders in the chiral expansion. At lowest orders in the derivative expansion,
the effective Lagrangian $\mathcal{L}^{\Delta_i}$
for pions and nucleons takes the form \cite{Fettes:1998ud}:
\begin{eqnarray}
\label{lagr0}
\mathcal{L}_{\pi N}^{(0)} &=& \bar{N} \left[
i \, v\cdot D + g_A \, u \cdot S \right] N\,,\nonumber \\
\mathcal{L}^{(1)}_{\pi N} & = & \bar{N} \left[ c_1 \, \langle \chi_+ \rangle + c_2 \, (v \cdot
u)^2 + c_3 \, u \cdot u
+ c_4 \, [ S^\mu, S^\nu ] u_\mu u_\nu + c_5 \, \hat \chi_+ \right]
N \,, \nonumber \\
\mathcal{L}_{\pi N}^{(2)} &=& \bar N \left[ \frac{1}{2 m} (v \cdot D)^2
- \frac{1}{ 2 m}
D \cdot D + d_{16} S \cdot u \langle \chi_+ \rangle
+ i d_{18} S^\mu [ D_\mu , \, \chi_-] + \ldots
\right] N \,, \nonumber \\
\mathcal{L}_{\pi NN}^{(0)} &=& {} -\frac{1}{2} C_S ( \bar N N) ( \bar N N ) +
2 C_T ( \bar N S N )
\cdot ( \bar N S N ) \,, \nonumber \\
{\cal L}^{(1)}_{\pi NN} & = & \frac{D}{2} (\bar{N} N) (\bar{N} S \cdot u N)\,, \nonumber \\
\mathcal{L}_{\pi NN}^{(2)} &=& {} - \tilde C_1 \left[ ( \bar N D N) \cdot ( \bar N D N)
+ ( ( D \bar N) N) \cdot ((D \bar N) N) \right]\nonumber \\
&-& 2 (\tilde C_1 + \tilde C_2 ) ( \bar N D N) \cdot ( (D \bar N ) N)
- \tilde C_2 ( \bar N N) \cdot \left[ (D^2 \bar N ) N + \bar N D^2 N \right] + \ldots
\,, \nonumber \\
\mathcal{L}_{\pi NNN}^{(1)} & = &{} - \frac{E}{2} (\bar{N} N) (\bar{N} \fet \tau N)
\cdot (\bar{N} \fet \tau N)\,.
\end{eqnarray}
Here, the ellipses refer to terms which do not contribute to the nuclear
forces up to next-to-next-to-leading order (N$^2$LO) except for
$\mathcal{L}_{\pi NN}^{(2)}$ where I have shown only a few terms in order to keep
the presentation compact. Further, here and in what follows I omit the subscript $v$ of the
nucleon field operators.
The quantity $\chi_+ = u^\dagger \chi u^\dagger + u
\chi^\dagger u$ with $\chi = 2 B \mathcal{M}$ involves the explicit chiral
symmetry breaking due to the finite light quark masses, and $\hat O \equiv O - \langle O \rangle /2$ denotes the traceless part of an operator $O$.
Finally, $c_i$, $d_i$, $C_i$, $\tilde C_i$, $D$ and $E$ denote the corresponding LECs.
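For orientation, the chiral dimensions in Eq.~(\ref{lagr0}) follow directly from
$\Delta_i = -2 + n_i/2 + d_i$, with $d_i$ counting derivatives and $M_\pi$-insertions as in Eq.~(\ref{pow1N}):
the lowest-order $\pi N$ vertices ($n_i = 2$, one derivative) and the leading contact terms $C_{S,T}$
($n_i = 4$, no derivatives) carry $\Delta_i = 0$, while the $c_i$-vertices ($n_i = 2$, two derivatives or one
power of the quark mass), the $D$-term ($n_i = 4$, one derivative) and the $E$-term ($n_i = 6$, no derivatives)
all carry $\Delta_i = 1$.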
The elementary introduction to ChPT presented here aims at providing the
main conceptual ideas of this framework and is neither complete nor comprehensive.
Excellent lecture notes on the discussed and related subjects
\cite{Leutwyler:1994fi,Meissner:1997ws,Ecker:1998ai,Pich:1998xt,Gasser:2003cg,Kubis:2007iy}
are highly recommended for
further reading. Very comprehensive, textbook-like
lecture notes can be found in Ref.~\cite{Scherer:2002tk}. Current frontiers and challenges in these
fields are addressed in recent review articles \cite{Bernard:1995dp,Bernard:2007zu}, see also Ref.~\cite{CD09}.
\section{EFT for two nucleons}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\label{sec4}
\subsection{ChPT and nucleon-nucleon scattering}
\label{sec:nuclearEFT}
As outlined in the previous section, ChPT can be straightforwardly extended to the single-nucleon sector
(apart from the complication related to the treatment of the nucleon mass).
A generalization to processes involving two and more nucleons is much more difficult.
Contrary to the interaction between Goldstone bosons, nucleons do interact with each other
in the limit of vanishingly small momenta and quark masses. In particular, chiral symmetry does not
forbid derivative-less few-nucleon contact operators in the effective Lagrangian,
see Eq.~(\ref{lagr0}). In fact, the interaction between the nucleons at low energy is even strong
enough to bind them together. Shallow bound states such as the deuteron, triton etc.~represent
non-perturbative phenomena that cannot be described in perturbation theory.
On the other hand, just following the naive dimensional analysis as we did in the previous section,
the power counting can be straightforwardly generalized to connected Feynman diagrams involving $N$ nucleons
leading to
\begin{equation}
\label{powNN}
\nu = 2 - N + 2 L + \sum_i V_i \Delta_i \,.
\end{equation}
This implies the usual suppression for loop diagrams and
thus suggests that the interaction is weak. This conclusion is certainly not correct.
So, what goes wrong? The naive dimensional analysis yields a wrong
result due to the appearance of infrared divergences (in HBChPT)
in diagrams which contain purely nucleonic intermediate states \cite{Weinberg:1990rz,Weinberg:1991um}.
Consider the two-pion- ($2\pi$-) exchange box Feynman diagram shown in Fig.~\ref{fig4aa} (the diagram on the left-hand side).
In the nucleon rest frame with $v_\mu = (1, 0, 0, 0 )$, the four-momenta of the incoming nucleons
are $(\vec k^2/(2 m) + \mathcal{O} (m^{-3}), \, \vec k )$ and
$( \vec k^2/(2 m) + \mathcal{O} (m^{-3}), \,- \vec k )$. In the infrared regime with $\vec k =0$,
the contribution of the box diagram takes the form
\begin{equation}
\int \frac{d^4 l }{(2 \pi)^4} \frac{P (l)}{( l^0 + i \epsilon ) ( - l^0 + i \epsilon ) ( l^2 - M_\pi^2 + i \epsilon )^2}\,,
\end{equation}
where $l$ is the loop momentum and
$P(l)$ is a polynomial whose explicit form is determined by the pion-nucleon vertex.
The integral over $l^0$ possesses the so-called pinch singularity due to the poles at $l^0 = \pm i \epsilon$.
Notice that such pinch singularities only show up in the case of at least two nucleons since
for a single nucleon the contour of integration can be distorted to avoid the singularity.
The singularities that appear in the box diagram are not ``real'' but an artefact of the heavy-baryon
approximation for the nucleon propagators (static nucleons) that is not valid for such diagrams.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: verify this statement by
using the standard Dirac propagators for the nucleon field, see Eq.~(\ref{Dirac}),
and making the nonrelativistic expansion \underline{after} carrying out the integration over $l^0$.
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
An alternative and, perhaps, more instructive way to explore the origin of the infrared enhancement is
by using the so-called ``old-fashioned'' time-ordered perturbation theory instead of the covariant one.
In time-ordered perturbation theory, the $T$-matrix is given by
\begin{equation}
\label{old-fashioned}
T_{\alpha \beta} = (H_I)_{\alpha \beta} + \sum_a \frac{(H_I)_{\alpha a} (H_I)_{a \beta}}{E_\beta - E_a + i \epsilon}
+ \sum_{a b} \frac{(H_I)_{\alpha a} (H_I)_{ab} (H_I)_{b \beta}}{(E_\beta - E_a + i \epsilon)
(E_\beta - E_{b} + i \epsilon)} + \ldots\,,
\end{equation}
where $H_I$ is the interaction Hamiltonian corresponding to the effective Lagrangian for pions and nucleons.
This expression should be familiar from Quantum Mechanics. Its derivation and application to
quantum field theory can be found e.g.~in \cite{Schweber:1966aa}.
Here, I use Latin letters for intermediate states,
which, in general, may contain any number of pions, in order to distinguish them from purely nucleonic states
denoted by Greek letters. I remind the reader that no nucleon-antinucleon pairs can be created or destroyed
if nucleons are treated nonrelativistically. Consequently, all states contain the same number of nucleons.
It is useful to represent various contributions to the scattering amplitude in terms of time-ordered diagrams.
For example, the Feynman box diagram for NN scattering via $2\pi$-exchange can be expressed
as a sum of six time-ordered graphs, see Fig.~\ref{fig4aa},
which correspond to the following term in Eq.~(\ref{old-fashioned}):
\begin{equation}
\label{TPEold-fashioned}
\sum_{abc} \frac{(H_{\pi NN})_{\alpha a} (H_{\pi NN})_{ab}
(H_{\pi NN})_{bc} (H_{\pi NN})_{c \beta}}{(E_\beta - E_a + i \epsilon)
(E_\beta - E_b + i \epsilon) (E_\beta - E_c + i \epsilon)} \,,
\end{equation}
where $H_{\pi NN}$ denotes the $\pi NN$ vertex. Actually, this expression can be obtained
by carrying out the $l^0$-integration in the corresponding Feynman diagram (using Dirac propagators for the nucleons).
It is easy to see that the contributions of diagrams (d-g) are
\emph{enhanced} due to the presence of the small (of the order $Q^2/m$) energy denominator associated with the
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.85\textwidth]{fig20.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig4aa} Two-pion exchange: Feynman diagram (a) and the corresponding time-ordered graphs (b-g).
Solid (dashed) lines correspond to nucleons (pions). }
\vspace{0.2cm}
\end{figure*}
purely nucleonic intermediate state $| b \rangle $ which in the center-of-mass system (CMS) takes the form:
\begin{equation}
\frac{1}{E_\beta - E_b + i \epsilon} = \frac{1}{\vec p_\beta^{\, 2} /m - \vec p_b^{\, 2} /m + i \epsilon}\,.
\end{equation}
Notice that the energy denominators corresponding to the $\pi NN$ states
$| a \rangle$ and $| c \rangle$ contain the pion energy $\omega_k \equiv \sqrt{\vec k \, ^2 + M_\pi^2}$ and
are of the order $M_\pi \sim Q$ in agreement with the dimensional analysis.
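The size of the resulting enhancement can be read off directly by comparing the two types of denominators,
\begin{displaymath}
\frac{E_\beta - E_a}{E_\beta - E_b} \sim \frac{\omega_k}{\vec p^{\; 2}/m} \sim \frac{Q}{Q^2/m} = \frac{m}{Q} \,,
\end{displaymath}
so that each purely nucleonic intermediate state yields an enhancement factor of order $m/Q$ relative to the
estimate based on naive dimensional analysis.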
According to Weinberg, the failure of perturbation theory in the few-nucleon sector is caused
by the enhanced contributions of reducible diagrams, i.e.~those ones which contain purely
nucleonic intermediate states. It should, however, be emphasized that the
infrared enhancement is not sufficient to justify the need for a non-perturbative
resummation of the amplitude if one counts $m = \mathcal{O} (\Lambda_\chi )$. According to Eq.~(\ref{powNN})
and taking into account the infrared enhancement $\sim m/Q$ due to the purely nucleonic intermediate states,
loop contributions are still suppressed by $\sim Q m /\Lambda_\chi^2 \sim Q /\Lambda_\chi$ for $m \sim \Lambda_\chi$.
To overcome this conceptual difficulty, Weinberg proposed to treat the nucleon mass as
a separate hard scale according to the rule \cite{Weinberg:1990rz,Weinberg:1991um}:
\begin{equation}
\label{counting_m}
m \sim \frac{\Lambda_\chi^2}{Q} \gg \Lambda_\chi\,.
\end{equation}
The resulting power counting
is referred to as the Weinberg power counting.
I will also discuss some alternative scenarios.
The infrared enhancement of the few-nucleon diagrams can be naturally taken into account
by re-arranging the expansion in Eq.~(\ref{old-fashioned}) and casting it into
the form of the Lippmann-Schwinger (LS) equation
\begin{equation}
\label{LSeqO}
T_{\alpha \beta}
= (V_{\rm eff})_{\alpha \beta} + \sum_\gamma \frac{(V_{\rm eff})_{\alpha \gamma}
T_{\gamma \beta}}{E_\beta - E_\gamma + i \epsilon}\,,
\end{equation}
with the effective potential $(V_{\rm eff})_{\alpha \beta}$ defined as a sum of all
possible irreducible diagrams (i.e. the ones
which do not contain purely nucleonic intermediate states):
\begin{equation}
\label{ep}
(V_{\rm eff})_{\alpha \beta} = (H_I)_{\alpha \beta} + \sum_{a} \frac{(H_I)_{\alpha a} (H_I)_{a \beta}}
{E_\beta - E_{a} + i \epsilon}
+ \sum_{a b} \frac{(H_I)_{\alpha a} (H_I)_{ a b}
(H_I)_{ b \beta}}{(E_\beta - E_{ a} + i \epsilon)
(E_\beta - E_{ b} + i \epsilon)} + \ldots\,.
\end{equation}
Here, the states $| a \rangle$, $| b \rangle$ contain at least one pion.
The effective potential in Eq.~(\ref{ep}) does not contain small energy denominators and
can be worked out within the low-momentum expansion
following the usual procedure of ChPT.
After the potential is obtained at a given order in the chiral expansion, few-nucleon
observables can be computed by solving the LS equation (\ref{LSeqO}),
which leads to a nonperturbative resummation of the contributions resulting from reducible diagrams.
The resulting two-step approach will be referred to as ChEFT in order to distinguish it
from ChPT in the Goldstone boson and single-nucleon sectors.
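To illustrate what the nonperturbative step amounts to in practice, the sketch below solves an S-wave
LS equation of the schematic form $T = V + V \, G_0 \, T$ on a momentum grid by matrix inversion.
Everything in it, from the rank-one toy potential to the parameter values and normalization conventions,
is assumed purely for illustration and has nothing to do with the chiral potentials discussed here;
the separable form is chosen because the exact resummed amplitude is then known in closed form and
provides a check.
\begin{verbatim}
# Schematic sketch (all conventions and values assumed): solve T = V + V G0 T by
# matrix inversion on a momentum grid, for a rank-1 separable toy potential
# V(p',p) = lam * g(p') g(p), and compare with the exact resummed result.
import numpy as np

m   = 938.92     # nucleon mass in MeV, so that E_kin = q^2/m in the CMS
lam = -0.0025    # toy coupling strength
Lam = 500.0      # Gaussian regulator in MeV
E   = -5.0       # CMS energy in MeV, below threshold: no pole on the grid

g = lambda p: np.exp(-(p / Lam) ** 2)

# Gauss-Legendre nodes mapped to (0, infinity) via p = c * tan(pi/4 * (1 + x))
x, w = np.polynomial.legendre.leggauss(80)
c  = 300.0
p  = c * np.tan(np.pi / 4.0 * (1.0 + x))
dp = c * np.pi / 4.0 * w / np.cos(np.pi / 4.0 * (1.0 + x)) ** 2

V  = lam * np.outer(g(p), g(p))          # V(p', p)
G0 = dp * p**2 / (E - p**2 / m)          # quadrature weights * free propagator

# Discretized LS equation: T = V + V diag(G0) T  ->  T = (1 - V diag(G0))^{-1} V
T_matrix = np.linalg.solve(np.eye(len(p)) - V * G0, V)

# Exact resummation for the separable potential (geometric series summed up)
I = np.sum(dp * p**2 * g(p)**2 / (E - p**2 / m))
T_exact = lam * np.outer(g(p), g(p)) / (1.0 - lam * I)

print(np.max(np.abs(T_matrix - T_exact)))   # agreement at machine precision
\end{verbatim}
For scattering energies above threshold the pole of $G_0$ has to be treated by the usual
principal-value subtraction, but the structure of the calculation stays the same.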
\subsection{Analytic properties of the non-relativistic scattering amplitude}
\label{sec:analyt}
Before discussing various scenarios of organizing EFT for two nucleons, it is useful
to recall general constraints imposed on the partial wave scattering amplitude
by analyticity.
Consider two non-relativistic nucleons interacting via a potential
$V$. The corresponding $S$-matrix for an uncoupled channel with the orbital angular momentum $l$
is parametrized in terms of a single phase shift $\delta_l$ and can be written in terms of $T$-matrix as
\begin{equation}
\label{Tnorm}
S_l = e^{2 i \delta_l (k) } = 1 - i \left( \frac{k m}{8 \pi^2} \right) T_l (k)\,,
\end{equation}
with $k$ denoting the CMS scattering momentum.
The $T$-matrix can then be
expressed in terms of the so-called effective range function $F_l (k^2) \equiv k^{2l+1} \cot \delta_l (k)$ via
\begin{equation}
\label{tmat}
T_l (k) = -\frac{16 \pi^2}{m} \frac{k^{2l}}{F_l (k) - i k^{2l+1}} \,.
\end{equation}
In the complex energy plane, the scattering amplitude and thus also the $T$-matrix
possess a so-called unitarity cut, a kinematic singularity due to two-body unitarity.
The unitarity cut starts from the branch point at the threshold ($E=0$) and goes to
positive infinity. The dynamic singularities are associated with the interaction mechanism
and are located on the negative real axis.
For example, in the case of a Yukawa potential $\sim \exp (-M r )/r$ corresponding to an exchange of
a meson of mass $M$, the amplitude has a left-hand cut starting at $k^2 = -M^2/4$.
Bound and virtual states reside as poles on the negative real energy axis ($k = i | k |$ and $k = -i | k |$ for
bound- and virtual-state poles, respectively), while resonances show up as poles at complex energies.
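As a simple numerical illustration, the position of a shallow bound-state pole can be estimated from the first
two terms of the effective range expansion introduced below: demanding that the denominator of
Eq.~(\ref{tmat}) vanish at $k = i\kappa$ gives $r\kappa^2/2 - \kappa + 1/a = 0$. The minimal Python sketch
below evaluates this for the deuteron channel; the ${}^3S_1$ scattering length is the value quoted later in
Eq.~(\ref{scattl}), while the effective range of $1.75$~fm and the nucleon mass are assumed inputs not
specified in these notes.
\begin{verbatim}
import numpy as np

hbarc = 197.327      # MeV fm
mN    = 938.92       # nucleon mass in MeV (assumed value)
a, r  = 5.42, 1.75   # 3S1 scattering length [fm] and an assumed effective range [fm]

# bound-state pole at k = i*kappa:  r*kappa^2/2 - kappa + 1/a = 0  (root closest to threshold)
kappa = (1.0 - np.sqrt(1.0 - 2.0 * r / a)) / r       # in fm^-1
E_B   = (kappa * hbarc)**2 / mN                      # binding energy, using E = k^2/m
print(f"kappa = {kappa:.3f} fm^-1   E_B = {E_B:.2f} MeV")
\end{verbatim}
With these inputs the binding energy comes out close to the observed deuteron value of about $2.2$~MeV.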
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.31\textwidth]{T.pdf}
\hskip 0.035\textwidth
\includegraphics[width=0.31\textwidth]{ere.pdf}
\hskip 0.035\textwidth
\includegraphics[width=0.31\textwidth]{mere.pdf}
}
\vspace{-0.2cm}
\caption{\label{analyticity} Some singularities of the partial wave $T$-matrix (left panel), effective range function $F_l (E)$ (middle panel)
and the modified effective range function $F_l^M (E)$ (right panel). The shaded areas show the (maximal) range of applicability of the ERE
and MERE.
}
\vspace{0.2cm}
\end{figure*}
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: verify the appearance of the left-hand cut in the scattering amplitude
for the one-pion exchange potential
\begin{displaymath}
V(\vec p \, ', \, \vec p \, ) \propto \frac{\vec \sigma_1 \cdot \vec q \, \vec \sigma_2 \cdot \vec q}{\vec q\, ^2 + M_\pi^2}\,,
\quad \mbox{ with } \quad
\vec q \equiv \vec p \, ' - \vec p\,,
\end{displaymath}
using the first Born approximation. } \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
Contrary to the scattering amplitude, the effective range function does not possess the kinematic
unitarity cut and can be shown to be a
real meromorphic function of $k^2$ near the origin for non-singular
potentials of a finite range \cite{Blatt:49,Bethe:49}. It can, therefore, be
Taylor-expanded about the origin leading to the well-known effective range expansion (ERE)
\begin{equation}
\label{ere}
F_l (k^2 ) = - \frac{1}{a} + \frac{1}{2}r k^2 + v_2 k^4 + v_3 k^6 +
\ldots\,,
\end{equation}
with $a$, $r$ and $v_i$ being the scattering length, effective range and
the so-called shape parameters. Generally, the maximal radius of convergence of the ERE
is limited by the lowest-lying left-hand dynamic singularity associated with the
potential. For Yukawa-type potentials with the range $r \sim M^{-1}$,
the (maximal) radius of convergence of the ERE is given
by $k^2 < M^2/4$. For the (strong) nucleon-nucleon interaction with the one-pion- ($1\pi$-) exchange potential
constituting the longest-range contribution, the ERE is expected to
converge for energies up to $ E_{\rm lab} \sim M_\pi^2/(2 m_N) =
10.5$ MeV.
Notice that apart from the singularities associated with the structure of the potential,
$F_l (k^2)$ may also contain discrete poles whose positions are determined by the strength
of the interaction. The appearance of such poles near the origin would spoil
the convergence of the ERE.
The framework of ERE can be generalized to the case in which the potential is
given by a sum of a long-range ($r_l \sim
m_l^{-1}$) and short-range ($r_s \sim m_s^{-1} \ll m_l^{-1}$) potentials $V_L$ and $V_S$,
respectively. Following van Haeringen and Kok \cite{vanHaeringen:1981pb}, one
can define the \emph{modified} effective range function $F_l^M$ via
\begin{equation}
\label{mere}
F_l^M (k^2) \equiv M_l^L (k) + \frac{k^{2l+1}}{|f_l^L (k)|} \cot [\delta_l
(k) - \delta_l^L (k)]\,,
\end{equation}
where the
Jost function $f_l^L (k)$ is defined according to $f_l^L (k) \equiv f_l^L (k, r) \big|_{r = 0}$
with $f_l^L (k, r)$ being the Jost solution of the Schr\"odinger equation corresponding to the potential $V_L$,
i.e.~the particular solution that fulfills
\begin{equation}
\lim_{r \to \infty} e^{- i k r} f_l^L (k, \, r) = 1\,.
\end{equation}
Further, $\delta_l^L (k)$ denotes the phase shift associated with the potential $V_L$,
and the quantity $M_l^L (k)$ can be computed from $f_l^L (k, r)$ as follows:
\begin{equation}
M_l^L (k) = \left( - \frac{i k}{2} \right)^l \frac{1}{l!} \, \lim_{r \to 0} \left[\frac{d^{2l+1}}{dr^{2l+1}}
\, r^l\frac{f_l^L (k, \, r) }{f_l^L (k)} \right] \,.
\end{equation}
I denote here with the superscript ``L'' all quantities that can be computed solely from the
long-range part of the potential. The modified effective range function $F_l^M (k^2)$
defined in this way does not contain the left-hand singularity
associated with the long-range potential and
reduces, per construction, to the ordinary effective range function
$F_l (k^2)$ for $V_L=0$.
It is a real meromorphic function of $k^2$ in a much larger region, whose size is now set by $r_s^{-1} \sim m_s$, as compared
to $F_l (k^2)$.\footnote{Note that the existence of $M_l^L (k)$ implies
certain constraints on
the small-$r$ behavior of $V_L(r)$.}
If the long-range interaction is due to a Coulomb potential, $V_L(r) = \alpha/r$, the
Jost solution and, consequently, the function $M_l^L (k)$
can be calculated analytically. For example, for $l=0$ and the repulsive Coulomb potential,
the modified effective range function takes the following well-known form:
\begin{equation}
F_C (k^2) = C_0^2 (\eta ) \, k \, \cot[\delta
(k) - \delta^C (k)] + 2 k \, \eta \, h (\eta )\,,
\end{equation}
where the Coulomb phase shift is $\delta^C \equiv \arg \, \Gamma (1 + i \eta )$
and the quantity $\eta$ is given by
\begin{equation}
\eta = \frac{m}{2 k} \alpha \,.
\end{equation}
Further, the functions $C_0^2 (\eta )$ (the Sommerfeld factor) and $h (\eta )$ read
\begin{equation}
C_0^2 (\eta ) = \frac{2 \pi \eta }{e^{2 \pi \eta } - 1} \,, \quad \quad \mbox{and} \quad \quad
h (\eta ) = {\rm Re} \Big[ \Psi ( i \eta ) \Big] - \ln (\eta ) \,.
\end{equation}
Here, $\Psi (z) \equiv \Gamma ' (z)/\Gamma (z)$ denotes the digamma function.
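These functions are elementary to evaluate numerically. The following sketch, included purely for
illustration and assuming the proton mass value shown, computes $\eta$, $C_0^2(\eta)$ and $h(\eta)$ for a few
CMS momenta; mpmath is used for the digamma function of an imaginary argument.
\begin{verbatim}
import numpy as np
from mpmath import digamma

alpha = 1.0 / 137.036      # fine-structure constant
m     = 938.27             # proton mass in MeV (assumed value)

def eta(k):                # Coulomb parameter for two protons, k in MeV
    return alpha * m / (2.0 * k)

def C0sq(e):               # Sommerfeld factor C_0^2(eta)
    return 2.0 * np.pi * e / np.expm1(2.0 * np.pi * e)

def h(e):                  # h(eta) = Re[Psi(i eta)] - ln(eta)
    return float(digamma(1j * e).real) - np.log(e)

for k in (5.0, 20.0, 80.0):
    e = eta(k)
    print(f"k = {k:5.1f} MeV   eta = {e:.3f}   C0^2 = {C0sq(e):.3f}   h(eta) = {h(e):.3f}")
\end{verbatim}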
For more details on the analytic properties of the scattering amplitude and related topics
I refer the reader to the review article \cite{Badalian:1981xj}.
After these preparations, we are now in the position to discuss the implications of the
long-range interaction on the energy dependence of the phase shift.
It is natural to assume that the coefficients in the ERE and MERE
(except for the scattering length)
are driven by the scales $m_l$ and $m_s$ associated with the lowest left-hand singularities,
see \cite{Steele:1998un} for a related discussion. The knowledge of the long-range
interaction $V_L$ allows one to compute the quantities $f_l^L (k)$, $M_l^L (k)$ and $\delta_l^L (k)$
entering the right-hand side of Eq.~(\ref{mere})
and thus to express $\delta_l (k)$ and the ordinary effective range function $F_l(k^2)$
in terms of the modified one, $F_l^M (k^2)$.
The MERE for $F_l^M
(k^2)$ then yields an expansion of the subthreshold parameters entering Eq.~(\ref{ere})
in powers of $m_l/m_s$. In particular, using the first few terms in the MERE
as input allows one to make predictions for \emph{all} coefficients in the ERE.
The appearance of correlations between the
subthreshold parameters in the above-mentioned sense, which I will
refer to as low-energy theorems (LETs), is the only signature of the long-range interaction
at low energy (in the two-nucleon system). The LETs allow one to test whether the long-range interactions
are incorporated properly in nuclear chiral EFT and thus
provide an important consistency check.
\subsection{EFT for two nucleons at very low energy}
\label{sec_piless}
Before discussing chiral EFT for two nucleons, let us consider, as a warm-up exercise,
a simpler EFT for very low energies with $Q \ll M_\pi$.
Then, no pions need to be taken into account explicitly,
and the only relevant degrees of freedom are the nucleons themselves. The corresponding
EFT with the hard scale $\Lambda \sim M_\pi$
is usually referred to as pionless EFT. The most general effective Lagrangian
consistent with Galilean invariance, baryon number conservation and the isospin symmetry
takes in the absence of external sources the following form:
\begin{equation}
\label{Lagr_nopi}
\mathcal{L} = N^\dagger \left( i \partial_0 + \frac{\vec \nabla ^2}{2 m} \right) N
- \frac{1}{2} C_S \, ( N^\dagger N ) ( N^\dagger N ) - \frac{1}{2} C_T \,
( N^\dagger \vec \sigma N ) \cdot ( N^\dagger \vec \sigma N ) + \ldots\,,
\end{equation}
where $C_{S,T}$ are LECs and the ellipses denote operators with derivatives.
Isospin-breaking and relativistic corrections to Eq.~(\ref{Lagr_nopi}) can be
included perturbatively.
What can be expected from the pionless EFT as compared to the ERE?
In the absence of external sources and restricting ourselves to the two-nucleon system,
both approaches provide an expansion of NN low-energy observables
in powers of $k/M_\pi$, have the same validity range and incorporate the same physical
principles. Pionless EFT can, therefore, not be expected to do any better than
ERE. Our goal will be thus to design the EFT in such a way that it matches
the ERE for the scattering amplitude
\begin{equation}
\label{tmat1}
T = - \frac{16 \pi^2}{m} \, \frac{1}{\left(- \frac{1}{a} + \frac{1}{2} r_0 k^2
+ v_2 k^4 + v_3 k^6 + \ldots \right) - i k}\,.
\end{equation}
Here, I restrict myself to S-waves only.
While the coefficients in the effective range expansion are, in general, driven by the range of the potential
and thus expected to scale with the appropriate powers of $M_\pi$,
the scattering length can, in principle, take any value.
In particular, it diverges in the presence of a bound or virtual state at threshold.
It is, therefore, useful to distinguish between a natural case with $|a | \sim M_\pi^{-1}$
and an unnatural case with $|a | \gg M_\pi^{-1}$. In the natural case,
the $T$-matrix in Eq.~(\ref{tmat1}) can be expanded in powers of $k$ as:
\begin{equation}
\label{Tnatural}
T = T^{(0)} + T^{(1)} + T^{(2)} + \ldots = \frac{16 \pi^2 a}{m} \left[ 1 - i a k + \left(\frac{a r_0}{2} -
a^2 \right) k^2 + \ldots \right] \,,
\end{equation}
where the superscripts of $T$ denote the power of the soft scale $Q$.
A natural value of the scattering length implies the absence of bound and virtual states
close to threshold.
The $T$-matrix can then be evaluated perturbatively in the EFT provided one uses an appropriate
renormalization scheme (i.e.~the one that does not introduce an additional large scale). When solving the
LS equation with point-like contact interactions,
one encounters divergent loop integrals of the kind
\begin{equation}
I_n = -{m \over (2\pi )^3}\int {d^3l}\, l^{n-3} \,,\; \mbox{
with } \; n=1,3,5,\ldots\,, \quad \quad
I(k) = {m\over (2\pi )^3}\int d^3l \, \frac{1}{k^2-l^2+i\eta} \,.
\end{equation}
The integrals can be evaluated using a cutoff regularization:
\begin{eqnarray}
I_n &\to & I_n^\Lambda = -{m \over (2\pi )^3}\int {d^3l}\, l^{n-3}\,\theta \left(\Lambda-l
\right) = -\frac{m\,\Lambda^n}{2n\pi^2} \,, \nonumber \\
I (k) &\to & I^\Lambda(k) = {m\over (2\pi )^3}\int {d^3l \,\theta \left(\Lambda-l
\right)\over
k^2-l^2+i\eta} = I_1^\Lambda -\frac{i\,m\, k}{4\pi} -
\frac{m k}{4\pi^2} \,\ln \frac{\Lambda-k}{\Lambda+k}
\,, \label{int_def}
\end{eqnarray}
where the last equation is valid for $k < \Lambda$.
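The closed-form expressions in Eq.~(\ref{int_def}) are easy to check against direct numerical integration.
The sketch below is purely illustrative (the nucleon mass and the chosen values of $\Lambda$ and $k$ are
arbitrary) and uses SciPy's Cauchy principal-value quadrature for the real part of $I^\Lambda(k)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m, Lam, k = 938.92, 500.0, 100.0         # MeV (illustrative values)

# I_n^Lambda = -(m/(2 pi^2)) int_0^Lambda dl l^(n-1)   vs.   -m Lambda^n / (2 n pi^2)
for n in (1, 3, 5):
    numeric = -m / (2.0 * np.pi**2) * quad(lambda l: l**(n - 1), 0.0, Lam)[0]
    closed  = -m * Lam**n / (2.0 * n * np.pi**2)
    print(n, numeric, closed)

# Re I^Lambda(k): principal value of (m/(2 pi^2)) int_0^Lambda dl l^2/(k^2 - l^2),
# rewritten as -int dl [l^2/(l+k)]/(l-k) so that quad's 'cauchy' weight applies
pv      = -quad(lambda l: l**2 / (l + k), 0.0, Lam, weight='cauchy', wvar=k)[0]
numeric = m / (2.0 * np.pi**2) * pv
closed  = -m * Lam / (2.0 * np.pi**2) - m * k / (4.0 * np.pi**2) * np.log((Lam - k) / (Lam + k))
print(numeric, closed)
\end{verbatim}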
To renormalize the scattering amplitude, I divide loop integrals into the divergent and
finite parts and take the limit $\Lambda \to \infty$\footnote{While extremely convenient
in the case under consideration, taking the limit $\Lambda \to \infty$ is, strictly speaking,
not necessary in an EFT. It is sufficient to ensure that the error from keeping the cutoff
finite is within the accuracy of the EFT expansion. In the considered case, taking
$\Lambda \sim M_\pi$ would do an equally good job in describing the scattering amplitude.
}:
\begin{eqnarray}
I_n & \equiv & \lim_{\Lambda \to \infty} I_n^\Lambda = \lim_{\Lambda \to
\infty} \left(I_n^\Lambda + \frac{m \mu_n^n}{2 n \pi^2}\right)
-\frac{m \mu_n^n}{2 n \pi^2} \equiv \Delta_n(\mu_n)+I_n^R(\mu_n)\,, \nonumber\\
I(k) & \equiv & \lim_{\Lambda \to \infty} I^\Lambda (k) = \lim_{\Lambda \to \infty}
\left(I_1^\Lambda + \frac{m \mu}{2 \pi^2}\right)
+\left[-\frac{m \mu}{2 \pi^2} -\frac{i\,m\, k}{4\pi} \right] \equiv
\Delta(\mu)+I^R(\mu,k)\,. \label{splitting2}
\end{eqnarray}
Here, $\Delta_n(\mu_n)$ and $\Delta(\mu)$ denote the divergent parts
of the loop integrals while $I_n^R(\mu_n)$ and $I^R(\mu,k)$ are finite.
The procedure is analogous to the standard treatment of divergences arising from pion loops
in ChPT, see section \ref{sec3}. The splitting of
loop integrals in Eq.~(\ref{splitting2}) is not unique. The freedom in the
choice of renormalization
conditions is parameterized by $\mu$ and $\mu_n$. The divergent parts
$\Delta_n(\mu_n)$ and $\Delta(\mu)$ are to be canceled by contributions of
counterterms. The renormalized expression for the amplitude, therefore, emerges from
dropping the divergent parts in Eq.~(\ref{splitting2}) and replacing the
bare LECs by the renormalized ones $C_i \to C_i^r (\{ \mu ,\,\mu_n \} )$.
The proper choice of renormalization conditions requires choosing
$\mu, \, \mu_n \sim Q \ll M_\pi$. Dimensional regularization with the minimal subtraction
can be viewed as a special case corresponding to $\mu = \mu_n =0$. Another special case
is given by DR with the power divergence subtraction (PDS)
\cite{Kaplan:1998tg,Kaplan:1998we}. In this scheme, the power law divergences,
which are normally discarded in DR, are
explicitly accounted for by subtracting from dimensionally regulated loop
integrals not only $1/(d-4)$-poles but also the $1/(d-3)$-poles.
Its formulation used in Refs.~\cite{Kaplan:1998tg,Kaplan:1998we}
corresponds to the choice $\mu_n = 0$ and $\mu \to \mu \pi/2$ in Eq.~(\ref{splitting2}).
The dimensional analysis for the renormalized scattering amplitude
implies that the leading and subleading terms $T^{(0)}$
and $T^{(1)}$ are given by the tree- and one-loop graphs constructed with the lowest-order vertices
from Eq.~(\ref{Lagr_nopi}), see Fig.~\ref{natural}.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.5\textwidth]{natural.pdf}
}
\vspace{0.2cm}
\caption[fig4aa]{\label{natural} Leading, subleading and sub-subleading contributions to the S-wave $T$--matrix
in the case of a natural scattering length. Solid dots (filled rectangles) refer to contact vertices without
(with two) derivatives. Lines represent the nucleon propagators.
}
\vspace{0.2cm}
\end{figure*}
$T^{(2)}$ receives a contribution from the two-loop graph
with the lowest-order
vertices and the tree graph with a subleading vertex \cite{Kaplan:1998tg,Kaplan:1998we}.
Higher-order corrections can be evaluated straightforwardly.
Matching the resulting $T$-matrix to the ERE in Eq.~(\ref{Tnatural})
order by order in the low-momentum expansion allows one to
fix the LECs $C_i^r$. At next-to-next-to-leading order (N$^2$LO), for example, one finds:
\begin{equation}
\label{Cnatural}
C_0^r = \frac{4 \pi a}{m} \Big[ 1 + \mathcal{O} (a \mu ) \Big] \,,
\mbox{\hskip 2 true cm}
C_2^r = \frac{2 \pi a^2}{m} \,r_0\,,
\end{equation}
where the LECs $C_0$ and $C_2$ are defined via the tree-level $T$-matrix:
$T_{\rm tree} = 4 \pi (C_0 + C_2 \, k^2 + \ldots )$. The LEC $C_0$ is related to $C_{S,T}$ in Eq.~(\ref{Lagr_nopi})
as $C_0 = C_S - 3 C_T$.
Here and in the remaining part of this section, the expressions are given in DR with PDS.
For the physically interesting case of neutron-proton scattering,
the two S-wave scattering lengths appear to be large:
\begin{equation}
\label{scattl}
a_{^1S_0} = -23.714 \mbox{ fm} \sim - 16.6 \, M_\pi^{-1}\,,
\mbox{\hskip 2 true cm}
a_{^3S_1} = 5.42 \mbox{ fm} \sim 3.8 \, M_\pi^{-1}\,.
\end{equation}
Instead of using the low-momentum representation in Eq.~(\ref{Tnatural}), which is valid only for $k < 1/|a|$,
it is advantageous to expand the $T$-matrix in powers of $k$ keeping
$a k \sim 1$ \cite{Kaplan:1998tg,Kaplan:1998we}:
\begin{eqnarray}
\label{Tunnatural}
T &=& T^{(-1)} + T^{(0)} + T^{(1)} + \ldots \\
&=& \frac{16 \pi^2}{m} \, \frac{1}{(a^{-1} + i k)} \,
\left[ 1 + \frac{r_0}{2 (a^{-1} + i k)} k^2 + \left(
\frac{r_0^2}{4 (a^{-1} + i k)^2} + \frac{v_2}{(a^{-1} + i k)} \right) k^4 + \ldots \right]\,.
\nonumber
\end{eqnarray}
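The quality of this rearranged expansion can be checked numerically by comparing its partial sums with the
full amplitude of Eq.~(\ref{tmat1}). In the sketch below (illustrative only) the scattering length is the
$^1S_0$ value of Eq.~(\ref{scattl}) in units of $M_\pi = 1$, while $r_0$ and $v_2$ are assumed numbers of
natural size; the overall factor $16\pi^2/m$ drops out of the relative errors.
\begin{verbatim}
import numpy as np

a, r0, v2 = -16.6, 1.9, 0.5           # in units of M_pi = 1; r0, v2 are assumed values

def T_full(k):                        # Eq. (tmat1) without the prefactor 16 pi^2/m
    return -1.0 / ((-1.0 / a + 0.5 * r0 * k**2 + v2 * k**4) - 1j * k)

def T_sum(k, order):                  # partial sums of Eq. (Tunnatural), same prefactor dropped
    d = 1.0 / a + 1j * k
    terms = [1.0 / d,
             0.5 * r0 * k**2 / d**2,
             0.25 * r0**2 * k**4 / d**3 + v2 * k**4 / d**2]
    return sum(terms[:order + 1])

for k in (0.05, 0.2, 0.5):            # CMS momenta in units of M_pi
    full = T_full(k)
    errs = [abs(T_sum(k, n) - full) / abs(full) for n in range(3)]
    print(f"k = {k:4.2f}   relative errors: " + "  ".join(f"{e:.1e}" for e in errs))
\end{verbatim}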
The EFT expansion can be cast into the form of Eq.~(\ref{Tunnatural}) if one assumes that the
LECs $C_i^r$ scale according to $C_{2n} \sim 1/Q^{n+1}$. The leading contribution $T^{(-1)}$
then results from summing up an infinite chain of bubble diagrams with the
lowest-order vertices, see Fig.~\ref{unnatural}. All diagrams constructed only from $C_0^r$ scale as $1/Q$.
For example, for the one-loop graph one has $Q^3$ from the integration, $1/Q^2$ from the
nucleon propagator and another $1/Q^2$ from the LECs $C_0^r$.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.9\textwidth]{unnatural.pdf}
}
\vspace{0.2cm}
\caption{\label{unnatural} Leading and subleading contributions to the S-wave $T$-matrix
in the case of an unnaturally large scattering length. For notation, see Fig.~\ref{natural}.
}
\vspace{0.2cm}
\end{figure*}
The corrections are given by perturbative insertions of higher-order interactions dressed
to all orders by the leading vertices. Matching the resulting $T$-matrix with the one in
Eq.~(\ref{Tunnatural}) one finds at NLO:
\begin{equation}
\label{matchingKSW}
C_0^r = \frac{4 \pi}{m} \; \frac{1}{a^{-1} - \mu } \,,
\mbox{\hskip 2 true cm}
C_2^r = \frac{4 \pi}{m} \; \frac{1}{(a^{-1} - \mu )^2} \frac{r_0}{2}\,.
\end{equation}
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: calculate the S-wave scattering amplitude up to NLO for the case of
unnaturally large scattering length and verify the expressions for the LECs
given in Eq.~(\ref{matchingKSW}).
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
The real power of pionless EFT comes into play when one goes beyond the two-nucleon system by
considering e.g.~low-energy reactions involving external electroweak sources and/or three- and
more nucleon systems. A discussion of these topics goes beyond the scope of these lectures.
I refer the interested reader to the recent review articles
\cite{Beane:2000fx,Bedaque:2002mn,Braaten:2004rn,Platter:2009pi}.
\subsection{Chiral EFT for two nucleons with perturbative pions}
\label{KSW}
We have seen in the previous section how the EFT without explicit pions can be organized to describe
strongly interacting nucleons at low energy. The limitation in energy of this approach, cf.~the discussion
in section \ref{sec:analyt}, appears to be
too strong for most nuclear physics applications. To
go to higher energies it is necessary to include pions as explicit degrees of freedom.
I have already outlined in section \ref{sec:nuclearEFT} one possible way to extend ChPT to the
few-nucleon sector following Weinberg's original proposal \cite{Weinberg:1990rz,Weinberg:1991um}.
In this approach, the nonperturbative dynamics is generated by iterating the lowest-order
two-nucleon potential $V^{(0)}_{2N}$ which subsumes irreducible (i.e.~non-iterative) contributions
from tree diagrams with the leading vertices (i.e.~$\Delta_i =0$), see Eq.~(\ref{powNN}).
The only possible contributions are due to derivative-less contact interactions and the static
$1\pi$-exchange, so that the resulting potential reads:
\begin{equation}
\label{VLO}
V_{\rm 2N}^{(0)} = -\frac{g_A^2}{4 F_\pi^2} \frac{\vec
\sigma_1 \cdot \vec q \, \vec \sigma_2 \cdot \vec q}{\vec q \, ^2 +M_\pi^2} \fet \tau_1
\cdot \fet \tau_2 + C_S + C_T
\vec \sigma_1 \cdot \vec \sigma_2\,.
\end{equation}
Here, $\vec \sigma_i$ ($\fet \tau_i$) are the Pauli spin (isospin) matrices of the nucleon $i$, $\vec q
= \vec p \, ' - \vec p$ is the nucleon momentum transfer and $\vec{p}$
($\vec{p}~'$) refers to initial (final) nucleon momenta in the CMS.
As pointed out in section \ref{sec:nuclearEFT}, the justification for resumming $V_{\rm 2N}^{(0)}$
to all orders in the LS equation is achieved in the Weinberg approach via a fine
tuning of the nucleon mass, see Eq.~(\ref{counting_m}). With this counting rule, it follows immediately
that every iteration of $V_{\rm 2N}^{(0)}$ in Eq.~(\ref{LSeqO}) generates a contribution of the order
$Q^0/\Lambda_\chi$.
On the other hand, in the pionless EFT with unnaturally large scattering length outlined in section
\ref{sec_piless}, the non-perturbative resummation of the amplitude was enforced by fine-tuning the
LECs accompanying the contact interactions while treating the nucleon mass on the same footing as
all other hard scales. While these two scenarios are essentially equivalent in the pionless case,
they lead to an important difference in organizing EFT with explicit pions.
The approach due to Kaplan, Savage and Wise (KSW) \cite{Kaplan:1998tg,Kaplan:1998we}
represents a straightforward generalization of the pionless EFT
to \emph{perturbatively} include diagrams with exchange of one or more pions.
The scaling of the contact interactions is assumed to be the same as
in pionless EFT (provided one uses DR with PDS or an equivalent scheme to regularize divergent
loop integrals). Notice that in contrast to pionless EFT, the pion mass is treated as a soft scale with
$Q \sim M_\pi \sim a^{-1}$. The only new ingredients in the calculation of the amplitude
up to next-to-leading order in the KSW expansion are given by the dressed $1\pi$-exchange
potential and derivative-less interaction $\propto M_\pi^2$, see Fig.~\ref{pionsKSW}.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.82\textwidth]{pionsKSW.pdf}
}
\vspace{0.2cm}
\caption{\label{pionsKSW} Leading and subleading contributions to the S-wave $T$-matrix
in the case of unnaturally large scattering length in the KSW approach with explicit pions.
Filled rectangle denotes contact interactions with two derivatives or a single insertion of $M_\pi^2$.
For remaining notation, see Fig.~\ref{natural}.
}
\vspace{0.2cm}
\end{figure*}
The $2\pi$-exchange is suppressed
and starts to contribute at N$^2$LO. At each order in the perturbative expansion, the amplitude is made
independent of the renormalization scale by an appropriate running of the LECs.\footnote{Strictly speaking,
an exact scale independence of the NLO amplitude in the KSW approach with explicit pions is achieved at the
cost of resumming a certain class of higher-order terms, see the discussion in Ref.~\cite{Epelbaum:2009sd}.}
Compact analytic expressions for the scattering amplitude represent another nice feature of the KSW approach.
As explained in section \ref{sec:analyt}, the appearance of a long-range interaction implies
strong constraints on the energy dependence of the amplitude and imposes certain correlations between the
coefficients in the ERE (LETs). EFT with explicit pions aims at a correct description of
non-analyticities in the scattering amplitude associated with exchange of pions which in this
framework represent truly long-range phenomena.
Thus, the correct treatment of the long-range interaction by including pions perturbatively can
be ultimately judged by testing the corresponding low-energy theorems. This idea was picked up
by Cohen and Hansen \cite{Cohen:1998jr,Cohen:1999ia}.
Fixing the LECs accompanying the contact interactions from the scattering length and effective range,
they obtained the following predictions for the first three S-wave shape parameters at NLO in the KSW
scheme \cite{Cohen:1998jr,Cohen:1999ia}:
\begin{eqnarray}
v_2 &=& \frac{g_A^2 m}{16 \pi F_\pi^2} \left( -\frac{16}{3 a^2 M_\pi^4} + \frac{32}{5 a M_\pi^3}
- \frac{2}{M_\pi^2} \right) \,, \nonumber \\
v_3 &=& \frac{g_A^2 m}{16 \pi F_\pi^2} \left( \frac{16}{a^2 M_\pi^6} - \frac{128}{7 a M_\pi^5}
+ \frac{16}{3 M_\pi^4} \right) \,, \nonumber \\
v_4 &=& \frac{g_A^2 m}{16 \pi F_\pi^2} \left( -\frac{256}{5 a^2 M_\pi^8} + \frac{512}{9 a M_\pi^7}
- \frac{16}{M_\pi^6} \right) \,.
\end{eqnarray}
Plugging in the numbers for the nucleon and pion masses, $g_A \simeq 1.27$, $F_\pi = 92.4$ MeV
and the corresponding scattering lengths, see Eq.~(\ref{scattl}), Cohen and Hansen obtained
the results quoted in Table~\ref{tab1}.\footnote{Slightly different values compared to the ones
quoted in this table are extracted from the Nijmegen PWA in Ref.~\cite{PavonValderrama:2005ku}.}
A clear failure for the LETs in both channels serves as an indication that the long-range physics
is not properly taken into account if pions are treated perturbatively. The convergence
of the KSW expansion was further tested in Ref.~\cite{Fleming:1999ee} where the amplitude is calculated
up to N$^2$LO. While the results for the $^1S_0$ and some other partial waves including spin-singlet channels
were found to be in reasonable agreement with the Nijmegen PWA, large corrections show up in spin-triplet
channels already at momenta of the order of $\sim 100$ MeV and lead to strong disagreements with the data.
The perturbative inclusion of the pion-exchange contributions does not allow one to increase the region of
validity of the EFT compared to the pionless theory,
see however, Ref.~\cite{Beane:2008bt} for a new formulation
which is claimed to yield a convergent expansion. The failure of the KSW approach in the spin-triplet channels
was attributed in \cite{Fleming:1999ee} to the iteration of the tensor part of the $1\pi$-exchange potential.
This appears to be in line with phenomenological successes of Weinberg's approach
which treats pion exchange contributions nonperturbatively. The most
advanced analyses of the NN system at next-to-next-to-next-to-leading order (N$^3$LO)
in Weinberg's power counting scheme demonstrate the ability to accurately
describe NN scattering data up to center-of-mass momenta at least of the order $\sim 2
M_\pi$ \cite{Entem:2003ft,Epelbaum:2004fk}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{|c|ccc|ccc|}
\hline
&&&&&& \\[-10pt]
& \multicolumn{3}{|c|}{$^1S_0$ partial wave} & \multicolumn{3}{|c|}{$^3S_1$ partial wave} \\
\hline
&&&&&& \\[-10pt]
& $v_2\; $ [fm$^3$] & $v_3\; $ [fm$^5$] & $v_4\; $ [fm$^7$] & $v_2\; $ [fm$^3$] & $v_3\; $ [fm$^5$] & $v_4\; $ [fm$^7$] \\
\hline
&&&&&& \\[-10pt]
LETs & $-3.3$ & $17.8$ & $-108$ & $-0.95$ & $4.6$ & $-25$ \\
Nijmegen PWA & $-0.48$ & $3.8$ & $-17$ & $0.4$ & $0.67$ & $-4$ \\
\hline
\end{tabular}
\caption{A comparison of the predicted S-wave shape parameters from Ref.~\cite{Cohen:1998jr} with coefficients
extracted from the Nijmegen PWA.}
\label{tab1}
\end{center}
\end{table}
\subsection{Towards including pions nonperturbatively: playing with toy models}
\label{toy}
While the power counting approach due to Weinberg allows for a nonperturbative
resummation of the $1\pi$-exchange potential, there is a price to pay.
Contrary to the KSW approach, the leading NN potential is non-renormalizable in the traditional sense,
i.e.~iterations of the LS equation generate divergent terms with structures
that are not included in the original potential.
Moreover, resummation of the $1\pi$-exchange potential in the LS equation can
only be carried out numerically. This prevents the use of regularization prescriptions
such as DR
which avoid the appearance of a hard scale and maintain the manifest power counting
for regularized loop contributions, and thus makes renormalization considerably more subtle. Most of the
available calculations employ various forms of cutoff regularization with the cutoff
being kept finite. The purpose of this section is to provide an in-depth discussion
on regularization and renormalization in this context
by considering a simple and exactly solvable quantum mechanical
model for two nucleons interacting via the long- and short-range forces. This may be
regarded as a toy model for chiral EFT in the two-nucleon sector. In this model, resummation
of the long-range interaction can be carried out analytically. This allows one to employ
and compare the subtractive renormalization, which maintains the manifest power counting, and the cutoff
formulation of the effective theory. I also explore the consequences of taking very
large values of the cutoff in this model. The presentation follows closely the one of
Ref.~\cite{Epelbaum:2009sd}.
\subsubsection{The model}
Consider two spin-less nucleons interacting
via the two-range separable potential
\begin{equation}
\label{Vunderlying}
V (p ',\, p) = v_{l} \, F_{l}(p')\, F_{l}(p)+
v_{s}\, F_{s}(p')\, F_{s}(p)\,, \quad
F_l (p) \equiv \frac{\sqrt{p^2 + m_s^2}}{p^2 + m_l^2}\,, \quad
F_s (p) \equiv \frac{1}{\sqrt{p^2 + m_s^2}}\,,
\end{equation}
where the masses $m_l$ and $m_s$ fulfill the condition $m_l \ll
m_s$. Further, the dimensionless quantities
$v_l$ and $v_s$ denote the strengths of the long- and short-range
interactions, respectively. The choice of the explicit form of $F_{l,s}(p)$
is motivated by the simplicity of calculations.
The potential in Eq.~(\ref{Vunderlying})
does not depend on the angle between $\vec p$ and $\vec p \, '$ and, therefore, only gives rise
to S-wave scattering. The projection onto the S-wave in this case is trivial and simply yields
the factor of $4 \pi$ from the integration over the angles.
For an interaction of a separable type, the off-shell
T-matrix and, consequently, also the coefficients in the ERE, see Eqs.~(\ref{tmat}) and (\ref{ere}),
can be calculated analytically by solving the corresponding LS equation
\begin{equation}
\label{LS}
T (p' ,\, p; \, k ) = V (p' ,\, p) + 4 \pi \int \frac{l^2 dl}{(2 \pi)^3} V
(p' ,\, l)
\frac{m}{k^2-l^2 + i \epsilon} T (l ,\, p; \, k )\,,
\end{equation}
where $m$ is the nucleon mass and $k$ corresponds to the on-shell momentum
which is related to the two-nucleon center-of-mass energy via $E_{\rm CMS} =
k^2/m$. Note that we have absorbed the factor $4 \pi$ into the
normalization of the T-matrix which is, therefore, different from the one in Eq.~(\ref{LSeq}).
In particular, the relation between the S- and T-matrices is given by
$S (p) = 1 - i p m T (p, \, p; \, p)/(2 \pi )$.
As explained in section \ref{sec:analyt}, the coefficients in the ERE generally scale
with the mass corresponding to the long-range interaction that gives rise to
the first left-hand singularities in the T-matrix. Thus, in the considered two-range model,
the coefficients in the ERE can be expanded in powers of $m_l/m_s$ leading to the ``chiral'' expansion:
\begin{eqnarray}
\label{EREexpanded}
a &=& \frac{1}{m_l} \bigg( \alpha_a^{(0)} + \alpha_a^{(1)} \frac{m_l}{m_s} +
\alpha_a^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,, \nonumber \\
r &=& \frac{1}{m_l} \bigg( \alpha_r^{(0)} + \alpha_r^{(1)} \frac{m_l}{m_s} +
\alpha_r^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,, \nonumber \\
v_i &=& \frac{1}{m_l^{2 i -1}} \bigg( \alpha_{v_i}^{(0)} + \alpha_{v_i}^{(1)}
\frac{m_l}{m_s} +
\alpha_{v_i}^{(2)} \frac{m_l^2}{m_s^2} + \ldots \bigg) \,,
\end{eqnarray}
where $\alpha_a^{(m)}$, $\alpha_r^{(m)}$ and $\alpha_{v_i}^{(m)}$ are
dimensionless constants whose values are determined by the form of the
interaction potential. I fine tune the strengths of the long- and short-range
interactions in our model in such a way that they generate scattering lengths of a natural
size. More
precisely, I require that the scattering length takes the value $a =
\alpha_l/m_l$
($a = \alpha_s/m_s$) with a dimensionless constant $| \alpha_l | \sim 1$ ($|
\alpha_s | \sim 1$)
when the short-range (long-range) interaction is switched off. This allows one to express the corresponding
strengths $v_l$ and $v_s$ in terms of $\alpha_l$ and $\alpha_s$ as follows:
\begin{equation}
\label{strengths}
v_l = -\frac{8 \pi m_l^3 \alpha _l}{m \left(\alpha _l m_s^2+m_l^2 \alpha _l-2
m_s^2\right)}\,,
\quad
v_s = -\frac{4 \pi m_s \alpha _s}{m \left(\alpha _s-1\right)}\,.
\end{equation}
One then finds the following expressions for the first three terms in the
``chiral'' expansion of the scattering length
\begin{equation}
\label{LET1}
\alpha_a^{(0)} = \alpha_l \,, \quad \quad
\alpha_a^{(1)} = (\alpha_l -1 )^2 \alpha_s \,, \quad \quad
\alpha_a^{(2)} = (\alpha_l -1 )^2 \alpha_l \alpha_s^2 \,,
\end{equation}
and effective range
\begin{eqnarray}
\label{LET2}
\alpha_r^{(0)} &=& \frac{3 \alpha_l - 4}{\alpha_l} \,, \nonumber \\
\alpha_r^{(1)} &=& \frac{2 \left(\alpha _l-1\right) \left(3 \alpha
_l-4\right) \alpha _s}{\alpha _l^2} \,, \nonumber \\
\alpha_r^{(2)} &=& \frac{\left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\left(5 \alpha _l-3\right) \alpha _s^2+\left(2-\alpha _l\right) \alpha
_l^2}{\alpha _l^3} \,.
\end{eqnarray}
Notice that in the model considered the leading terms in the $m_l/m_s$-expansion of the ERE
coefficients are completely fixed by the long-range interaction. The scenario
realized corresponds to a strong (at momenta $k \sim
m_l$) long-range interaction which needs to be treated non-perturbatively and
a weak short-range interaction which can be accounted for in perturbation theory.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: calculate the $T$-matrix by solving the LS equation (\ref{LS}) for the potential given
in Eq.~(\ref{Vunderlying}). Verify the ``chiral'' expansion for the scattering length and effective range.
Work out the first terms in the ``chiral'' expansion of the shape parameters $v_2$ and $v_3$.
The results can be found in Ref.~\cite{Epelbaum:2009sd}.
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
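A part of this exercise, namely the expansion of the scattering length, can also be verified with a few lines
of computer algebra by starting from the closed-form result for $a$ in the underlying model that will be
quoted in Eq.~(\ref{a_data}) below. The following sympy sketch (illustrative only) reproduces the
coefficients $\alpha_a^{(0,1,2)}$ of Eq.~(\ref{LET1}).
\begin{verbatim}
import sympy as sp

ms, al, als = sp.symbols('m_s alpha_l alpha_s', positive=True)
x  = sp.symbols('x', positive=True)          # expansion parameter x = m_l/m_s
ml = x * ms

# exact scattering length of the two-range separable model, Eq. (a_data)
a = (ml * (2*al - 1) * als - al * ms) / (ml * (ml * al * als - ms))

series = sp.series(sp.simplify(a * ml), x, 0, 3).removeO().expand()
coeffs = [series.coeff(x, n) for n in range(3)]
target = [al, (al - 1)**2 * als, (al - 1)**2 * al * als**2]       # Eq. (LET1)
print([sp.simplify(c - t) == 0 for c, t in zip(coeffs, target)])  # -> [True, True, True]
\end{verbatim}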
\subsubsection{KSW-like approach}
At momenta of the order $k \sim m_l \ll m_s$, the details of the short-range
interaction cannot be resolved. An EFT description emerges by keeping the
long-range interaction and replacing the short-range one by a series of
contact terms $V_{\rm short} (p', \, p ) = C_0 + C_2 (p^2 + {p'} ^2) +
\ldots$. Iterating such an effective potential in the LS equation leads to ultraviolet
divergences which need to be regularized and renormalized. The renormalization prescription
plays an important role in organizing the EFT expansion. I first consider the most
convenient and elegant formulation by employing the subtractive renormalization
which respects dimensional power counting at the level of diagrams. In this sense,
the considered formulation is conceptually similar to the KSW framework with perturbative
pions discussed above and will be referred to as the KSW-like
approach. The available soft
and hard scales in the problem are given by $Q = \{k, \, \mu, \, m_l \}$ and
$\lambda = \{m_s , \, m \}$, respectively. Here $\mu \sim m_l$ denotes the subtraction point.
(There is just a single linearly divergent integral at the order in the low-momentum expansion I will consider.)
The contributions to the amplitude up to N$^2$LO in the
$Q/\lambda$-expansion are visualized in Fig.~\ref{fig1_1} and can be easily verified
using naive dimensional analysis.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.99\textwidth]{perturbative.pdf}
\vskip 0.5 true cm
\caption{Leading, next-to-leading and next-to-next-to-leading order
contributions to the scattering amplitude in the KSW-like approach. The
solid lines denote nucleons while the dashed ones represent an insertion
of the lowest-order (i.e.~$\mathcal{O} (Q^{-1})$) long-range
interaction. Solid dots (dotted lines)
denote an insertion of the lowest-order contact interaction $\propto
C_0$ (subleading order-$\mathcal{O} (Q)$ contribution to the long-range interaction).
\label{fig1_1}
}
\end{center}
\end{figure}
In particular, the leading term arises at order $Q^{-1}$ and is
generated by the leading term in the $Q/\lambda$-expansion of the long-range
interaction
\begin{eqnarray}
\label{long}
V_{\rm long} (p', \, p ) &=& v_{l} \, F_{l}(p')\, F_{l}(p) \\
&\simeq&-
\frac{8 \pi m_l^3 \alpha _l}{m \left(\alpha _l-2\right) ( p^2 + m_l^2)( {p
'}^2 + m_l^2)}
\left[ 1 -\frac{\alpha _l m_l^2 }{\left(\alpha _l-2\right) m_s^2}
+ \frac{p^2}{2 m_s^2} + \frac{{p '}^2}{2 m_s^2} + \mathcal{O} \left(
\frac{Q^4}{\lambda^4}
\right)\right]\,, \nonumber
\end{eqnarray}
which scales as $Q^{-1}$ and, therefore,
needs to be summed up to an infinite order, see Fig.~\ref{fig1_1}.
Notice that the natural size of
the short-range effects in our model suggests the scaling of the short-range
interactions in agreement with the naive dimensional analysis, i.e.~$C_{2n}
\sim Q^0$.
This leads to the following expression for the on-energy-shell T-matrix:
\begin{equation}
T^{(-1)}=-\frac{8 \pi m_l^3 \alpha _l}{m \left(k-i m_l\right){}^2 \left[k^2
\left(\alpha _l-2\right)+2 i k m_l \left(\alpha _l-2\right)+2
m_l^2\right]} \,,
\end{equation}
from which one deduces
\begin{equation}
k \cot \delta = - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} + i k
= {} -\frac{m_l}{\alpha _l} + \frac{ \left(3
\alpha _l-4\right)}{2 m_l \alpha _l} k^2 +
\frac{\left(\alpha _l-2\right)}{2 m_l^3 \alpha _l} k^4 \,.
\end{equation}
Not surprisingly, one observes that the leading terms in the expansion of the ERE
coefficients in Eq.~(\ref{EREexpanded}) are correctly reproduced.
The first correction at order $Q^0$ is given by
the leading-order contact interaction dressed with the iterated leading
long-range interaction as visualized in
Fig.~\ref{fig1_1}. One finds
\begin{equation}
T^{(0)} = \frac{C_0 \left(k+i m_l\right){}^2 \left[k^2 \left(\alpha
_l-2\right)+2 m_l^2 \left(\alpha _l-1\right)\right]{}^2}{\left(k-i
m_l\right){}^2 \left[k^2 \left(\alpha
_l-2\right)+2 i k m_l \left(\alpha _l-2\right)+2 m_l^2\right]{}^2}.
\end{equation}
Notice that all integrals entering $T^{(-1)}$ and $T^{(0)}$ are finite.
The effective range function $k \cot \delta$ at NLO
can be computed via
\begin{equation}
k \cot \delta = - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} \bigg( 1 -
\frac{T^{(0)}}{T^{(-1)}} \bigg) + i k \,.
\end{equation}
The ``chiral'' expansion of the coefficients in the ERE results from
expanding the right-hand side in this
equation in powers of $k^2$ and, subsequently, in powers of $m_l$. The
LEC $C_0$ can be determined from matching to $\alpha_a^{(1)}$ in
Eq.~(\ref{LET1}) which yields
\begin{equation}
\label{C0_LO}
C_0 = \frac{4 \pi \alpha_s}{m m_s}\,.
\end{equation}
This leads to the following predictions for the effective range:
\begin{equation}
r=\frac{1}{m_l} \Bigg[\frac{3 \alpha_l-4}{\alpha _l}
+\frac{2 \left( \alpha_l-1\right) \left(3 \alpha _l-4\right)
\alpha _s}{\alpha _l^2 m_s} m_l \Bigg]\,.
\end{equation}
One observes that $\alpha_r^{(1)}$ is correctly reproduced at NLO. The same holds true
for the first shape parameters, see Ref.~\cite{Epelbaum:2009sd} for explicit expressions.
Moreover, using dimensional analysis it is easy to verify
that, in fact, $\alpha_{v_i}^{(1)}$ for
all $i$ \emph{must} be reproduced correctly at this order.
This is a manifestation of the LETs discussed in section \ref{sec:analyt}.
At N$^2$LO, the linearly divergent integral $I(k)$ occurs, see Eq.~(\ref{int_def}), which is treated
according to Eq.~(\ref{splitting2}). Renormalization is carried out by dropping its divergent part
$\Delta (\mu )$ and replacing the bare LEC $C_0$ by the renormalized one $C_0 (\mu)$ in the expression for
the amplitude. A straightforward calculation using MATHEMATICA yields:
\begin{eqnarray}
T^{(1)} &=& \frac{\left(k+i m_l\right){}^2}{4 \pi ^2 m m_s^2 \left(k-i
m_l\right){}^2 \left[k \alpha _l \left(k+2 i m_l\right)-2 \left(k+i
m_l\right){}^2\right]{}^2} \Bigg[
-32 \pi ^3 k^2 m_l^3 \left(\alpha _l-2\right) \alpha _l \nonumber \\
&& {}+ \left( C_0 (\mu)\right) ^2 m^2 m_s^2 \left[k^2 \left(\alpha
_l-2\right)+2 m_l^2
\left(\alpha _l-1\right)\right]{}^2 \\
&& {} \times \frac{
\alpha _l \left[k^2 (-2 \mu
-i
\pi k)+2 k (\pi k-2 i \mu ) m_l+2 \pi m_l^3\right]+2 (2 \mu +i
\pi k) \left(k+i m_l\right){}^2}{k \alpha _l \left(k+2 i
m_l\right)-2 \left(k+i m_l\right){}^2} \Bigg].
\nonumber
\end{eqnarray}
The LEC $C_0( \mu )$ can be written in terms of a perturbative expansion as
follows
\begin{equation}
C_0( \mu ) = C_0^{(0)} + C_0^{(1)}( \mu ) + \ldots \,,
\end{equation}
where the superscript refers to the power of the soft scale $Q$. The first term
does not depend on $\mu$ and equals $C_0$ in Eq.~(\ref{C0_LO}).
The $\mu$-dependence of $C_0^{(1)} (\mu )$ can be determined by solving the
renormalization group equation
\begin{equation}
\frac{d}{d \mu} \bigg[ T^{(-1)} + T^{(0)} + T^{(1)} \bigg] = 0\,.
\end{equation}
One also needs one additional input parameter, such as $\alpha_a^{(2)}$,
in order to fix the
integration constant. This leads to
\begin{equation}
\label{C0_NLO}
C_0^{(1)} (\mu ) = \frac{8 \mu \alpha _s^2}{m m_s^2}\,.
\end{equation}
It is then easy to verify that the scattering amplitude $T^{(-1)} + T^{(0)} +
T^{(1)}$ is $\mu$-independent up to terms of order $Q^2$. Further, the
effective range function is given at this order by
\begin{equation}
k \cot \delta = - \frac{4 \pi}{m} \frac{1}{T^{(-1)}} \Bigg[ 1 -
\frac{T^{(0)}}{T^{(-1)}} +
\left( \frac{T^{(0)}}{T^{(-1)}} \right)^2 -
\frac{T^{(1)}}{T^{(-1)}}
\Bigg] + i k \,,
\end{equation}
which can be used to predict the ``chiral'' expansion for the coefficients in the ERE.
Here I list only the result for the effective range which is sufficient for our purposes.
The expressions for $v_{2,3}$ can be found in \cite{Epelbaum:2009sd}.
\begin{eqnarray}
\label{LETKSW}
r &=&\frac{1}{m_l} \bigg[\frac{3\alpha _l -4}{\alpha _l}
+ \frac{2 \left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\alpha _s}{\alpha _l^2 m_s} m_l
+\frac{\left(\alpha _l-1\right) \left(3
\alpha _l-4\right) \left(5 \alpha _l-3\right) \alpha _s^2+\left(2-\alpha
_l\right) \alpha _l^2}{\alpha _l^3 m_s^2} m_l^2 \nonumber \\
&-& \frac{4 \mu m_l \left(\alpha _l-1\right) \left(3 \alpha _l-4\right)
\alpha _s^3 \left(\pi m_l \left(3-5 \alpha _l\right)+4 \mu \alpha
_l\right)}{\pi ^2 \alpha _l^3 m_s^3}
+ \mathcal{O} \left( Q^4 \right)\bigg] \,.
\end{eqnarray}
As expected, the first three terms in the ``chiral'' expansion of $r$ are correctly
reproduced at N$^2$LO being protected by the LETs. The same holds true for the
shape parameters $v_i$, see Ref.~\cite{Epelbaum:2009sd}.
The knowledge of $\alpha_{x_j}^{(i)}$ for one particular
$x_j$ is sufficient to predict $\alpha_{x_k}^{(i)}$ for all $k \neq j$.
\subsubsection{Weinberg-like approach with a finite cutoff}
An elegant EFT formulation like the one described above which respects the
manifest power counting at every stage of the calculation is not
available in the realistic case of nucleon-nucleon interaction.
Here, one lacks a regularization prescription for \emph{all} divergent integrals
resulting from iterations of the potential in the LS equation
which would keep regularization artefacts small without, at the same time,
introducing a new hard scale in the problem. Contrary to the considered model,
the $1\pi$-exchange potential is non-separable and cannot be analytically
resummed in the LS equation. In the context of chiral EFT for
few-nucleon systems, the divergent integrals are usually dealt with by
introducing an UV cutoff $\Lambda$, which needs to be taken of the order
$\Lambda \sim m_s$ or higher in order to keep regularization artefacts small.
Clearly, cutoff-regularized diagrams will not obey dimensional power
counting anymore. This, however, does not mean a breakdown of EFT since power
counting is only required for the \emph{renormalized} amplitude.
I now consider the
Weinberg-like formulation in which the effective potential, given by the
long-range interaction and a series of contact terms, is iterated in
the LS equation to all orders, see the work by Lepage \cite{Lepage:1997} for a
related discussion. This is visualized in Fig.~\ref{figW}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.75\textwidth]{nonperturbative.pdf}
\vskip 0.5 true cm
\caption{Effective potential and scattering amplitude in the Weinberg-like
approach. The dashed-dotted line refers to the full long-range
interaction. Solid dot and filled rectangle refer to the leading and
subleading contact interactions, respectively. For remaining notation
see Fig.~\ref{fig1_1}.
\label{figW}
}
\end{center}
\end{figure}
I carry out \emph{renormalization} by literally following
the steps outlined in Ref.~\cite{Lepage:1997} and summarized
in Ref.~\cite{Gasser:1982ap} in the following way: \emph{``The
theory is fully specified by the values of the bare constants ...
once a suitable regularization procedure is chosen. In principle,
the renormalization program is straightforward: one calculates
quantities of physical interest in terms of the bare parameters at
given, large value of (ultraviolet cutoff) $\Lambda$. Once a
sufficient number of physical quantities have been determined as
functions of the bare parameters one inverts the result and
expresses the bare parameters in terms of physical quantities,
always working at some given, large value of $\Lambda$. Finally,
one uses these expressions to eliminate the bare parameters in all
other quantities of physical interest. Renormalizability
guarantees that this operation at the same time also eliminates
the cutoff.''} Notice that by iterating the truncated expansion for the
effective potential in the LS equation one unavoidably generates
higher-order contributions without being able to absorb all arising
divergences into a redefinition of the LECs present in the considered truncated
potential. Thus, for the case at hand,
cutoff dependence in observables is expected to be eliminated only up to the
considered order in the EFT expansion.
I further emphasize that expressing the bare parameters
(i.e.~the LECs $C_i$) in terms of physical quantities is a non-trivial step as
the resulting nonlinear
equations for $\{C_i \}$ do not necessarily possess real solutions for all values
of $\Lambda$, especially when it is chosen to be considerably larger than $m_s$.
To be specific, consider the effective potential at subleading order
in the Weinberg-like approach as depicted in Fig.~\ref{figW}
\begin{equation}
V_{\rm eff} (p' ,\, p) = v_{l} \, F_{l}(p')\, F_{l}(p)
+ C_0 \,.
\end{equation}
The off-shell T-matrix $T (p',\, p; \, k)$ can be easily calculated by
solving the $2 \times 2$ matrix equation
\begin{equation}
\label{LSmatrix}
t (k) = v_{\rm eff} + v_{\rm eff} \, \mathcal{G}(k) \, t(k)
\end{equation}
where I have defined
\begin{equation}
V_{\rm eff} (p' ,\, p) =\gamma^T (p') \, v_{\rm eff} \, \gamma (p ), \quad
T (p' ,\, p, \, k) =\gamma^T (p') \, t (k) \, \gamma (p )\,,
\end{equation}
with
\begin{equation}
v_{\rm eff} \equiv \left( \begin{array}{cc} v_l & 0 \\ 0 & C_0 \end{array}
\right)\,, \quad
\gamma ( p) \equiv \left( \begin{array}{c} F_l (p) \\ 1 \end{array}
\right)\,, \quad
\mathcal{G}(k) \equiv \left( \begin{array}{cc} I_l(k) & I_{l1}^{\rm reg}(k) \\
I_{l1}^{\rm reg} (k)
& I_1^{\rm reg} (k) \end{array}
\right)\,.
\end{equation}
The integral $I_l(k)$ is given by
\begin{equation}
I_l (k) = 4 \pi m \int_0^\infty \frac{l^2 \, dl}{(2 \pi)^3}
\frac{l^2+ m_s^2}{[k^2 - l^2 + i \epsilon][l^2 + m_l^2]^2}
= \frac{m \left(-2 i k m_l+m_l^2+m_s^2\right)}{8 \pi m_l \left(k+i
m_l\right){}^2} \,,
\end{equation}
and is ultraviolet-finite. The divergent integrals $I_1 (k)$ and $ I_{l1}(k)$ are regularized by
means of a finite cutoff $\Lambda$:
\begin{eqnarray}
\label{otherintegrals}
I_1^{\rm reg} &\equiv& 4 \pi m \int_0^\Lambda \frac{l^2 dl}{(2 \pi)^3}
\frac{1}{k^2 - l^2 + i \epsilon} = - \frac{m \Lambda}{2 \pi^2} - i \frac{m k
}{4 \pi} + \mathcal{O} (\Lambda^{-1}) \,, \nonumber \\
I_{l1}^{\rm reg} &\equiv &
4 \pi m \int_0^\Lambda \frac{l^2 \, dl}{(2 \pi)^3}
\frac{\sqrt{l^2+ m_s^2}}{[k^2 - l^2 + i \epsilon][l^2 + m_l^2]}
= \frac{m}{2 \pi^2}
\bigg[ k \frac{\sqrt{k^2 + m_s^2}}{k^2 + m_l^2} \ln \bigg(\frac{k + \sqrt{k^2
+ m_s^2}}{m_s} \bigg) \nonumber \\
&-&
\frac{m_l m_s s}{2(k^2 + m_l^2)} +
\ln \left(\frac{m_s}{2 \Lambda }\right) - \frac{i \pi k
\sqrt{k^2+m_s^2}}{2 \left(k^2+m_l^2\right)} \bigg] + \mathcal{O}
(\Lambda^{-1})\,,
\end{eqnarray}
where $s \equiv \left( 2 \sqrt{m_s^2-m_l^2}/m_s \right)
\, {\rm arccot}\left(m_l/\sqrt{m_s^2-m_l^2}\right)$.
Neglecting, for the sake of simplicity, the finite cutoff artefacts
represented by the $\mathcal{O} (\Lambda^{-1})$-terms in
Eq.~(\ref{otherintegrals}) and performing straightforward calculations, one obtains for the
scattering length:
\begin{equation}
\label{aWeinb1}
a_\Lambda = \frac{\pi m_s \left\{C_0 m \left[2 \alpha _l \left(m_s \left(\Lambda
-s m_l\right)+2 m_l^2 \ln (m_s/2 \Lambda )
\right)+\pi m_l m_s\right]+4 \pi ^2 \alpha _l m_s\right\}}{m_l
\left\{2 \pi m_s^2 \left(C_0 m \Lambda +2 \pi ^2\right)-C_0 m m_l
\alpha _l \left[s m_s-2 m_l \ln (m_s/2 \Lambda)
\right]^2\right\}} \,.
\end{equation}
\emph{Renormalization} is
carried out by matching the above expression to the value of the scattering length
in the underlying model which is regarded as data,
\begin{equation}
\label{a_data}
a_{\rm underlying} = \frac{m_l \left(2 \alpha _l-1\right) \alpha _s-\alpha _l
m_s}{m_l \left(m_l \alpha _l \alpha _s-m_s\right)}\,,
\end{equation}
and expressing $C_0 (\Lambda ) $ in terms of $a_{\rm underlying}$.
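Since Eq.~(\ref{aWeinb1}) is linear in $C_0$, this matching can be carried out in closed form. The sketch
below is illustrative only: the nucleon mass and the chosen values of $m_{l,s}$, $\alpha_{l,s}$ and $\Lambda$
are assumed numbers, and the result is merely compared in magnitude with the KSW-like leading-order value of
Eq.~(\ref{C0_LO}).
\begin{verbatim}
import numpy as np

m       = 938.92                  # nucleon mass in MeV (assumed value)
ml, ms  = 200.0, 750.0            # long- and short-range masses in MeV (illustrative)
al, als = 0.5, 0.5                # alpha_l, alpha_s of natural size (illustrative)

s = 2.0 * np.sqrt(ms**2 - ml**2) / ms * np.arctan(np.sqrt(ms**2 - ml**2) / ml)
a_data = (ml * (2*al - 1) * als - al * ms) / (ml * (ml * al * als - ms))   # Eq. (a_data)

def C0_of_Lambda(Lam):
    """Solve a_Lambda(C0) = a_underlying for C0, with a_Lambda from Eq. (aWeinb1),
    which is linear in C0."""
    L = np.log(ms / (2.0 * Lam))
    X = 2.0 * al * (ms * (Lam - s * ml) + 2.0 * ml**2 * L) + np.pi * ml * ms
    Y = 2.0 * np.pi * ms**2 * Lam - ml * al * (s * ms - 2.0 * ml * L)**2
    return (4.0 * np.pi**3 * ms**2 * (a_data * ml - al)
            / (m * (np.pi * ms * X - a_data * ml * Y)))

print("KSW-like LO value, Eq. (C0_LO):", 4.0 * np.pi * als / (m * ms))
for Lam in (600.0, 750.0, 1000.0):
    print(f"Lambda = {Lam:6.0f} MeV   C0(Lambda) = {C0_of_Lambda(Lam):.3e} MeV^-2")
\end{verbatim}
For cutoffs of the order of $m_s$ the resulting $C_0(\Lambda)$ indeed comes out of natural size.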
A straightforward calculation yields the following
\emph{renormalized} expression for the effective range:
\begin{eqnarray}
\label{LETWeinberg}
r_\Lambda &=& \frac{1}{m_l} \bigg[\frac{3 \alpha_l - 4}{\alpha _l}
+\frac{2 \left(\alpha _l-1\right) \left(3 \alpha _l-4\right) \alpha _s}{\alpha
_l^2 m_s} m_l + \bigg( \frac{4 \left(\alpha _l-2\right) \alpha _s }{\pi
\alpha _l m_s^2} \left(\ln \frac{m_s}{2 \Lambda }+1\right) \nonumber \\
&+&
\frac{\left(\alpha _l-1\right) \left(3 \alpha _l-4\right) \left(5 \alpha
_l-3\right) \alpha _s^2+\left(2-\alpha _l\right) \alpha
_l^2}{\alpha _l^3 m_s^2} \bigg) m_l^2 + \mathcal{O} \left( m_l^3
\right)\bigg]\,.
\end{eqnarray}
In agreement with the LETs discussed above, one observes that the subleading terms in the
``chiral'' expansion of $r$ (and $v_i$, see \cite{Epelbaum:2009sd})
are correctly reproduced once $C_0$ is appropriately tuned. Notice that the smallness
of the subleading correction to $r_\Lambda$ due to the $C_0$-term in the effective potential
as compared to the leading contribution given by the first term on the right-hand side of
Eq.~(\ref{LETWeinberg}) is only guaranteed \emph{after} carrying out renormalization by
properly tuning $C_0 (\Lambda )$. The sub-subleading
and higher-order terms in the ``chiral'' expansion of $r$ and $v_i$ are not
reproduced correctly, as they are not protected by the LETs at the considered order.
Moreover, since the included LEC is insufficient to absorb all divergences
arising from iterations of the LS equation, nothing prevents the appearance of
positive powers or logarithms of the cutoff $\Lambda$ in the expressions for
$\alpha_{r}^{(\geq 2)}$. The results
in Eq.~(\ref{LETWeinberg}) show that this is indeed the case. The
dependence on $\Lambda$ occurs, however, only in contributions beyond the
accuracy of calculation and, obviously, does not affect the predictive power
of the EFT as long as the cutoff is chosen to be of the order of the
characteristic hard scale in the problem, $\Lambda \sim m_s$.
An important misconception that appears frequently in the literature is related
to the treatment of the cutoff by employing very large values of $\Lambda$ or even
taking the limit $\Lambda \to \infty$. While this is perfectly fine
in ChPT, where observables are calculated perturbatively and \emph{all} emerging UV divergences can
be absorbed by the corresponding counterterms at any fixed order in the chiral
expansion, this is not a valid procedure for the case at hand. Let us further elaborate
on this issue using the above example.
At first sight, the appearance of positive powers of $\Lambda$ and/or logarithmic terms in the
predicted ``chiral'' expansion of the subthreshold parameters, see
Eq.~(\ref{LETWeinberg}), may give a (wrong)
impression that no finite limit exists for $r_\Lambda $ and $(v_i)_\Lambda$
as $\Lambda \to \infty$. Actually, taking
the limit $\Lambda \to \infty$ does not commute with the Taylor expansion of
the ERE coefficients in
powers of $m_l$.
Substituting the value for $C_0 (\Lambda )$ resulting from matching Eq.~(\ref{aWeinb1}) to (\ref{a_data})
into the solution of the LS equation (\ref{LSmatrix}) and taking the limit $\Lambda
\to \infty$ yields the following finite, cutoff-independent result for the
inverse amplitude:
\begin{eqnarray}
\label{Tperatized}
(T_{\infty})^{-1} &=& i \frac{k m}{4 \pi} - \frac{m}{8 \pi m_l^3
\left(k^2+m_s^2\right)
\left(\alpha _l m_s+m_l \left(1-2 \alpha _l\right) \alpha _s\right)} \Big(
2 m_l^4 m_s^2 \left(m_s-m_l \alpha _l \alpha _s\right) \nonumber \\
&& {} + k^2 m_l^2 \left(\left(4-3 \alpha _l\right) m_s^3+m_l^2 \alpha _l
m_s+m_l \alpha _s \left(\left(2 \alpha _l-3\right) m_s^2+m_l^2 \left(1-2
\alpha
_l\right)\right)\right) \nonumber \\
&& {} + k^4 \left(-\alpha _l m_s \left(m_l^2+m_s^2\right)-m_l \alpha _s
\left(m_l^2 \left(1-2 \alpha _l\right)+m_s^2\right)+2 m_s^3\right) \Big)\,.
\end{eqnarray}
The corresponding infinite-cutoff prediction for the effective range has the form:
\begin{equation}
r_{\infty} = \frac{1}{m_l} \bigg[\frac{3 \alpha_l - 4}{\alpha _l}
+\frac{4 \left(\alpha _l-1\right){}^2 \alpha _s}{\alpha _l^2 m_s} m_l
+
\frac{\alpha _l^3 \left(8 \alpha _s^2-1\right)+\alpha _l^2
\left(2-20 \alpha _s^2\right)+16 \alpha _l \alpha _s^2-4 \alpha
_s^2}{\alpha _l^3 m_s^2} m_l^2
+ \ldots \bigg],
\end{equation}
where the ellipses refer to $\mathcal{O} \left( m_l^3 \right)$-terms.
One observes that the result after removing the cutoff fails to reproduce the
low-energy theorem by yielding a wrong value for $\alpha_{r}^{(1)}$. This also
holds true for the $\alpha_{v_i}^{(1)}$ \cite{Epelbaum:2009sd}. Notice that, by
construction, the scattering length is still correctly reproduced.
The breakdown of LETs in the Weinberg-like approach in the $\Lambda \to
\infty$ limit can be traced back to spurious $\Lambda$-dependent contributions
still appearing in renormalized expressions for observables,
see Eq.~(\ref{LETWeinberg}), which are irrelevant at the order
the calculations are performed in the regime $\Lambda
\sim m_s$ but become numerically dominant if $\Lambda \gg m_s$.
Due to non-renormalizability of the effective potential as discussed above,
such spurious terms do, in general,
involve logarithms and positive powers of $\Lambda$ which, as $\Lambda$
gets increased beyond the hard scale $m_s$, become, at some point, comparable
in size with lower-order terms in the ``chiral'' expansion.
For example, the appearance of terms linear in $\Lambda$ would suggest the breakdown
of LETs as the cutoff approaches the scale $\Lambda \sim m_s^2/m_l$.
The unavoidable appearance of ever higher power-law divergences when going to
higher orders in the EFT expansion implies that the cutoff should not be
increased beyond the pertinent hard scale in the Weinberg-like or Lepage-like
approach to NN scattering, leading to $\Lambda \sim m_s$ as the optimal
choice. It is furthermore instructive to compare the predictions for the effective range
in Eqs.~(\ref{LETKSW}) and (\ref{LETWeinberg}) corresponding to two different
renormalization schemes. One observes that taking $\Lambda \gg m_s$ in
Eq.~(\ref{LETWeinberg}) has an effect which is qualitatively similar to choosing $\mu \gg m_l$ in
Eq.~(\ref{LETKSW}) and corresponds to an improper choice of renormalization conditions in
the EFT framework.
\subsubsection{Toy model with local interactions: a numerical example}
Having clarified the important conceptual issues related to nonperturbative
renormalization in the context of chiral EFT for two nucleons, I now turn to the
last toy-model example and give some numerical results.
Consider two nucleons interacting via the local force given by a superposition of two Yukawa potentials
corresponding to the (static) exchange of the scalar light and heavy mesons of masses
$m_l$ and $m_s$, respectively:
\begin{equation}
\label{pot_r}
V (r) = \alpha_l \frac{e^{-m_l r}}{r} + \alpha_s \frac{e^{-m_s r}}{r}\,.
\end{equation}
Potentials of this type are sometimes referred to as Malfliet-Tjon potentials.
Motivated by the realistic case of the two-nucleon force, I choose
the meson masses to be $m_l = 200$ MeV and $m_s = 750$ MeV. Further, I adjust the dimensionless
strengths $\alpha_{l,s}$ in such a way that the potential features an S-wave bound state
(``deuteron'') with the binding energy $E_B = 2.2229$ MeV. A suitable combination is given by
$\alpha_l = -1.50$ and $\alpha_s = 10.81$.
With the parameters specified in this way, the potential is depicted in Fig.~\ref{fig_toy2}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{MTcoord.pdf}
\end{center}
\vskip -0.7 true cm
\caption{Toy-model potential in Eq.~(\ref{pot_r}).
The dashed (dashed-dotted) line depicts the short-range (long-range) part proportional to $\alpha_s$ ($\alpha_l$)
while the full potential is shown by the solid line. } \label{fig_toy2}
\end{figure}
The corresponding momentum-space potential can be easily obtained by Fourier transformation:
\begin{equation}
\label{pot_loc_mom}
V (\vec p \,', \, \vec p \, ) = \frac{4 \pi \alpha_l}{\vec q \, ^2 + m_l^2} + \frac{4 \pi \alpha_s}{\vec q \, ^2 + m_s^2}\,.
\end{equation}
Here, $\vec q = \vec p \, ' - \vec p$ denotes the momentum transfer.
I treat the nucleons in this example as identical, spin-less particles.
Thus, in the partial wave basis, the only nonvanishing matrix
elements $\langle l ', j, p' \, | V | l , j , p \rangle$ correspond to $l = l' = j$.
I only consider the S-wave here. Since there is no spin dependence, the matrix element
$V_0 (p', \, p) \equiv \langle 0, 0, p' \, | V | 0 , 0 , p \rangle$ can be obtained by simply integrating over
the angle $\theta$ between $\vec p \, '$ and $\vec p\,$:
\begin{eqnarray}
V_0 (p', \, p) &=& 2 \pi \int_{-1}^{+1} d (\cos \theta)\,
V (p', p, \theta ) \nonumber \\
&=& \alpha_l \frac{4\pi^2}{p' p} \ln \left( \frac{(p' + p)^2 + m_l^2}{(p' - p)^2 + m_l^2} \right)
+ \alpha_s \frac{4\pi^2}{p' p} \ln \left( \frac{(p' + p)^2 + m_s^2}{(p' - p)^2 + m_s^2} \right) \nonumber \\
&\equiv& V_0^l (p', \, p) + V_0^s (p', \, p) \,.
\end{eqnarray}
Contrary to the previously considered case of a separable interaction, the LS equation
\begin{equation}
T_0 (p ,\, p'; \, k ) = V_0 (p ,\, p') + \int \frac{l^2 dl}{(2 \pi )^3} V_0
(p ,\, l)
\frac{m}{k^2-l^2 + i \epsilon} T_0 (l ,\, p'; \, k )\,,
\end{equation}
cannot be solved analytically for the Malfliet-Tjon-type potentials. It can, however, be
solved numerically using the standard methods, see e.g.~\cite{Gloeckle:1983aa}. With the parameters
specified above, one obtains the phase shift which is shown by the solid line in the left panel
of Fig.~\ref{toy_results}. It is fairly similar to the neutron-proton $^3$S$_1$ phase shift,
cf.~the left panel of Fig.~\ref{phases_nijm}.
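For readers who wish to reproduce these curves, the LS equation can be solved by
straightforward matrix inversion after discretizing the momentum integral. The following
minimal Python sketch (the grid parameters, the nucleon mass $m = 938.9$ MeV and the
arctangent branch are illustrative choices rather than optimized settings) computes the
S-wave phase shift for the potential specified above by solving the principal-value
(R-matrix) version of the LS equation; the relation
$\tan \delta_0 = - m \, k \, R_0(k,k)/(16 \pi^2)$ corresponds to the normalization
of the LS equation used here.
\begin{verbatim}
import numpy as np

# all quantities in MeV (hbar = c = 1)
m_N = 938.9                      # nucleon mass in the propagator m/(k^2 - l^2)
m_l, m_s = 200.0, 750.0          # meson masses
a_l, a_s = -1.50, 10.81          # coupling strengths

def V0(pp, p):
    # S-wave projected Malfliet-Tjon potential V_0(p', p)
    def term(alpha, mass):
        return alpha*4*np.pi**2/(pp*p)*np.log(((pp + p)**2 + mass**2)
                                              /((pp - p)**2 + mass**2))
    return term(a_l, m_l) + term(a_s, m_s)

def phase_shift(E_lab, n=100, qmax=4000.0):
    # solve the partial-wave LS equation for the R-matrix (principal value)
    # on a Gauss-Legendre grid and return delta in degrees (modulo 180)
    k = np.sqrt(m_N*E_lab/2.0)                   # cms momentum, E_lab = 2k^2/m
    x, w = np.polynomial.legendre.leggauss(n)
    q, wq = 0.5*qmax*(x + 1.0), 0.5*qmax*w       # map nodes to (0, qmax)
    pts = np.append(q, k)                        # last grid point = on-shell momentum
    V = np.array([[V0(pi, pj) for pj in pts] for pi in pts])
    u = np.empty(n + 1)                          # quadrature + subtraction weights
    u[:n] = wq*q**2*m_N/(k**2 - q**2)/(2*np.pi)**3
    u[n] = (-np.sum(wq/(k**2 - q**2))
            + np.log((qmax + k)/(qmax - k))/(2.0*k))*k**2*m_N/(2*np.pi)**3
    R = np.linalg.solve(np.eye(n + 1) - V*u, V)  # (1 - V G_0) R = V
    return np.degrees(np.arctan(-m_N*k*R[n, n]/(16*np.pi**2)))
\end{verbatim}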
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{phase1.pdf}
\hfill
\includegraphics[width=0.49\textwidth]{err1.pdf}
\vskip 0.0 true cm
\caption{Left panel: phase shifts resulting from the original and effective potentials in Eqs.~(\ref{pot_loc_mom})
and (\ref{pot_eff_mom}), respectively. Right panel: Lepage plot showing the absolute error in phase shifts
at various orders in the low-momentum expansion versus lab.~energy.
} \label{toy_results}
\end{figure}
I now develop an effective potential that describes the same physics as the underlying
one at momenta of the order of $Q \sim m_l$. Up to N$^2$LO, it takes the form:\footnote{One can write
down another contact interaction with four derivatives whose matrix elements in the on-shell kinematics,
i.e. with $p = p'$, cannot be disentangled from the $C_4$-term.}
\begin{equation}
\label{pot_eff_mom}
V^{\rm eff}_0 (p', \, p) = V_0^l (p', \, p) + \left[ C_0 + C_2 (p'^2 + p^2) + C_4 p'^2 p^2 \right]
f_\Lambda (p', p)\,.
\end{equation}
The regulator function prevents the appearance of ultraviolet divergences in the LS equation
and is chosen of the form:
\begin{equation}
f_\Lambda (p', p) = \exp \left(- \frac{p'^2 + p^2}{\Lambda^2} \right) \,.
\end{equation}
I set the cutoff $\Lambda = 500$ MeV, solve the LS equation with the effective potential
$V^{\rm eff}_0 (p', \, p)$ and adjust the LECs to reproduce the coefficients in the ERE as follows:
\begin{itemize}
\item
LO: $C_2 = C_4 =0$, $C_0$ is tuned to reproduce $a$;
\item
NLO: $C_4 =0$, $C_{0,2}$ are tuned to reproduce $\{a, \, r \}$;
\item
N$^2$LO: $C_{0,2,4}$ are tuned to reproduce $\{a, \, r, \, v_2 \}$.
\end{itemize}
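In practice, this matching can be automated: one computes the phase shifts for a trial
set of LECs, extracts the ERE coefficients from $k \cot \delta$ at small momenta, and
iterates (e.g.~with a standard root finder) until the desired values of $a$, $r$ and $v_2$
are reproduced. A minimal sketch of the extraction step is given below (the momentum
grid and the fit range are illustrative choices); the callable \texttt{tan\_delta} can be
taken from an LS-equation solver of the type sketched in the previous section, applied
either to the underlying or to the effective potential.
\begin{verbatim}
import numpy as np

def ere_coefficients(tan_delta, kmax=40.0, npts=20):
    # fit k*cot(delta) = -1/a + (r/2) k^2 + v2 k^4 + ... at small momenta;
    # tan_delta(k) returns tan(delta) for the cms momentum k (in MeV)
    k = np.linspace(5.0, kmax, npts)
    kcot = k/np.array([tan_delta(ki) for ki in k])
    c = np.polynomial.polynomial.polyfit(k**2, kcot, 2)  # c0 + c1 k^2 + c2 k^4
    return -1.0/c[0], 2.0*c[1], c[2]                     # a, r, v2 (MeV units)
\end{verbatim}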
With the LECs being fixed as described above, the predictions for the S-wave phase shift at various
orders in the low-momentum expansion are summarized in the left panel of Fig.~\ref{toy_results}.
The so-called Lepage plot in the right panel of this figure shows absolute errors
in the phase shift, $\Delta \delta (E_{\rm lab} ) \equiv \delta_{\rm underlying} - \delta_{\rm eff} $,
versus energy. It is plotted in radians. One reads off from this plot that the laboratory energy, at
which the expansion breaks down, is of the order of $E_{\rm lab} = 2 k^2/m \sim 250$ MeV. This corresponds
to a momentum scale of the order of $\tilde \Lambda \sim 350$ MeV, in good agreement with the
expected breakdown scale of the modified effective range expansion of the order of $m_s/2$, see section \ref{sec:analyt}.
Notice that the effective theory is, as desired, able to go beyond the ERE, whose range of
convergence is indicated by the vertical lines in Fig.~\ref{toy_results}. The ``deuteron'' binding energy
is found to be reproduced correctly with 5 significant digits at N$^2$LO:
\begin{equation}
E_B^{LO} + \delta E_B^{NLO} + \delta E_B^{N^2LO} = 2.1594 + 0.0638 - 0.0003 = 2.2229 \mbox{ MeV}\,.
\end{equation}
For further illustrative quantum mechanical examples and a discussion on renormalization in the context
of the Schr\"odinger equation, the reader is referred to the excellent lecture notes by Lepage
\cite{Lepage:1997}.
\section{Nuclear forces from chiral EFT}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\label{sec5}
In this section I outline and exemplify some methods which can be used to derive nuclear
forces from chiral EFT.
\subsection{Derivation of nuclear potentials from field theory}
The derivation of a potential from field theory is an extensively studied problem in nuclear physics.
Historically, the important conceptual achievements in this field were made in the 1950s
in the context of the so-called meson field theory. The problem can be formulated in
the following way: given a field theoretical Lagrangian for interacting mesons
and nucleons, how can one reduce the (infinite dimensional) equation of motion for
mesons and nucleons to an effective Schr\"odinger equation for nucleonic
degrees of freedom, which can be solved by standard methods?
It goes beyond the scope of this work to address the whole variety of
different techniques which have been developed to construct effective interactions,
see Ref.~\cite{Phillips:1959aa}
for a comprehensive review. I will now briefly outline a few methods which have been used in the
context of chiral EFT. Similar methods are frequently used in computational nuclear
physics in order to reduce a problem to a smaller model space which can be treated numerically.
I begin with the approach developed by Tamm \cite{Tamm:1945qv} and Dancoff \cite{Dancoff:1950ud} which in the following
will be referred to as the Tamm-Dancoff (TD) method. Consider the time-independent Schr\"odinger equation
\begin{equation}
\label{schroed1}
(H_0 + H_I) | \Psi \rangle = E | \Psi \rangle\,,
\end{equation}
where $|\Psi \rangle$ denotes an eigenstate of the Hamiltonian $H$
with the eigenvalue $E$.
One can divide the full Fock space into the nucleonic subspace $|\phi \rangle$
and the complementary one $|\psi \rangle$ and rewrite the Schr\"odinger equation
(\ref{schroed1}) as
\begin{equation}
\label{schroed2}
\left( \begin{array}{cc} \eta H \eta & \eta H \lambda \\
\lambda H \eta & \lambda H
\lambda \end{array} \right) \left( \begin{array}{c} | \phi \rangle \\
| \psi \rangle \end{array} \right)
= E \left( \begin{array}{c} | \phi \rangle \\
| \psi \rangle \end{array} \right)~,
\quad \,
\end{equation}
where I introduced the projection operators $\eta$ and
$\lambda$ such that $|\phi \rangle = \eta | \Psi \rangle$,
$| \psi \rangle = \lambda | \Psi \rangle$.
Expressing the state $| \psi \rangle$ from the second line
of the matrix equation (\ref{schroed2}) as
\begin{equation}
\label{5.3}
| \psi \rangle = \frac{1}{ E - \lambda H \lambda}\, \lambda H \eta \, | \phi \rangle~,
\end{equation}
and substituting this into
the first line, one obtains the Schr\"odinger-like equation for the projected
state $| \phi \rangle$:
\begin{equation}
\label{TDschroed}
\left( H_0 + V_{{\rm eff}}^{\rm TD} ( E ) \right) | \phi \rangle = E | \phi \rangle \,,
\end{equation}
with the effective potential $V_{\rm eff}^{\rm TD} (E)$ given by
\begin{equation}
\label{TDpot}
V_{\rm eff}^{\rm TD} (E)= \eta H_I \eta + \eta H_I \lambda
\frac{1}{E - \lambda H \lambda} \lambda H_I \eta \,\, .
\end{equation}
This definition of the effective potential corresponds exactly to the one given in section \ref{sec:nuclearEFT}
in the context of ``old-fashioned'' time-ordered perturbation theory.
To evaluate $V_{\rm eff}^{\rm TD} (E)$ one usually relies on perturbation theory.
For example, for the Yukawa theory with a single $\pi NN$ vertex $H_I = g H_1$, $V_{\rm eff}^{\rm TD} (E)$
up to the fourth order in the coupling constant $g$ is given by
\begin{equation}
\label{TDg4}
V_{\rm eff}^{\rm TD} (E) = - \eta ' \bigg[ g^2 H_1 \frac{\lambda^1}{H_0 - E} H_1 + g^4
H_1 \frac{\lambda^1}{H_0 - E} H_1 \frac{\lambda^2}{H_0 - E} H_1 \frac{\lambda^1}{H_0 - E} H_1 + \mathcal{O} (g^6) \bigg] \eta \,,
\end{equation}
where the superscripts of $\lambda$ refer to the number of mesons in the corresponding state.
It is important to realize that the effective potential $V_{{\rm eff}} ( E )$ in this scheme depends explicitly on the energy,
which makes it inconvenient for practical applications (especially for calculations beyond the two-nucleon system).
In addition, the projected nucleon states $| \phi \rangle$ have a different
normalization compared to the states $| \Psi \rangle$ we have started from
(which are assumed to span a complete and orthonormal set in the whole Fock space)
\begin{equation}
\langle \phi_i | \phi_j \rangle =
\langle \Psi_i | \Psi_j \rangle - \langle \psi_i | \psi_j \rangle=
\delta_{ij} - \langle \phi_i | H_I \lambda
\left( \frac{1}{E - \lambda H \lambda} \right)^2
\lambda H_I | \phi_j \rangle ~,
\end{equation}
since the components $\psi_i$ do not, in general, vanish.
The above-mentioned deficiencies are naturally avoided in the method of unitary transformation \cite{Okubo:1954aa,Fukuda:1954aa}.
In this approach, the decoupling of the $\eta$- and $\lambda$-subspaces of the Fock space is achieved via a
unitary transformation $U$
\begin{equation}
\label{decoupling}
\tilde H \equiv U^\dagger H U = \left( \begin{array}{cc} \eta \tilde H \eta & 0 \\ 0 &
\lambda \tilde H \lambda \end{array} \right)\,.
\end{equation}
Following Okubo \cite{Okubo:1954aa}, the unitary operator $U$ can be parametrized as
\begin{equation}
\label{5.9}
U = \left( \begin{array}{cc} \eta (1 + A^\dagger A )^{- 1/2} & -
A^\dagger ( 1 + A A^\dagger )^{- 1/2} \\
A ( 1 + A^\dagger A )^{- 1/2} &
\lambda (1 + A A^\dagger )^{- 1/2} \end{array} \right)~,
\end{equation}
with the operator $A= \lambda A \eta$. The operator $A$ has to satisfy
the decoupling equation
\begin{equation}
\label{5.10}
\lambda \left( H - \left[ A, \; H \right] - A H A \right) \eta = 0
\end{equation}
in order for the transformed Hamiltonian $\tilde H$ to be of block-diagonal form.
The effective $\eta$-space potential $\tilde V_{\rm eff}^{\rm UT}$ can be expressed in terms of the operator $A$ as:
\begin{equation}
\label{effpot}
\tilde{V}_{\rm eff}^{\rm UT} = \eta (\tilde H - H_0 ) = \eta \bigg[ (1 + A^\dagger A)^{-1/2} (H + A^\dagger H + H A + A^\dagger H A )
(1 + A^\dagger A)^{-1/2} - H_0 \bigg] \eta~.
\end{equation}
The solution of the decoupling equation and the calculation of the effective potential
according to Eq.~(\ref{effpot}) can be carried out perturbatively in the weak-coupling case.
For the previously considered case of the Yukawa theory, the decoupling equation can be solved
recursively by making the ansatz
\begin{equation}
A = \sum_{n=1}^\infty g^n A^{(n)}.
\end{equation}
The resulting effective potential $V_{\rm eff}^{\rm UT}$ takes the form:
\begin{eqnarray}
\label{UTg4}
V_{\rm eff}^{\rm UT} &=& - g^2 \, \eta ' \Bigg[ \frac{1}{2} H_1 \frac{\lambda^1}{H_0 - E_\eta} H_1 + \mbox{h.~c.} \Bigg] \eta
- g^4 \, \eta ' \Bigg[ \frac{1}{2} H_{1} \frac{\lambda^1}{(H_0 - E_{\eta})} H_{1} \, \frac{\lambda^2}{(H_0 - E_{\eta})}
\, H_{1} \frac{\lambda^1}{(H_0 - E_{\eta} )} H_{1} \nonumber \\
&& {} -
\frac{1}{2} H_{1} \frac{\lambda^1}{(H_0 - E_{\eta '})} H_{1} \, \tilde \eta
\, H_{1} \frac{\lambda^1}{(H_0 - E_{\tilde \eta} )( H_0 - E_{\eta '} )} H_{1} \nonumber \\
&& {}
+\frac{1}{8} H_{1} \frac{\lambda^1}{(H_0 - E_{\eta '})} H_{1} \, \tilde \eta
\, H_{1} \frac{\lambda^1}{(H_0 - E_{\tilde \eta} )( H_0 - E_{\eta} )} H_{1} \nonumber \\
&& {} - \frac{1}{8} H_{1} \frac{\lambda^1}{(H_0 - E_{\eta '}) ( H_0 - E_{\tilde \eta} )}
H_{1} \, \tilde \eta
\, H_{1} \frac{\lambda^1}{(H_0 - E_{\tilde \eta} )} H_{1} + \mbox{h.~c.}
\Bigg] \eta + \mathcal{O}(g^6)\,.
\end{eqnarray}
Here, $\eta$, $\eta '$ and $\tilde \eta$ denote projection operators onto the purely nucleonic states.
The different notation is used only to indicate which state the energies in the denominators correspond to.
In contrast to $V_{\rm eff}^{\rm TD}$, $V_{\rm eff}^{\rm UT}$ does not depend on
the energy $E$ which enters the Schr\"odinger equation.
Another difference to the Tamm-Dancoff method is given by the presence of terms
with the projection operator $\tilde \eta$ which give rise to purely nucleonic intermediate states.
These terms are responsible for the proper normalization of the few-nucleon states.
In spite of the presence of the purely nucleonic intermediate states, such terms are not
generated through the iteration of the dynamical equation and are truly irreducible.
Since \emph{all} energy denominators entering $V_{\rm eff}^{\rm UT}$ correspond to intermediate states
with at least one pion, there is no enhancement by large factors of $m/Q$ that occurs for reducible
contributions.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: \\
1. Calculate $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ by solving the decoupling equation for the considered case of
Yukawa theory and verify the expression for $V_{\rm eff}^{\rm UT}$ in Eq.~(\ref{UTg4}). \\
2. Consider the disconnected Feynman diagram in Fig.~\ref{figEx} and draw all possible time-ordered diagrams.
Using Eqs.~(\ref{TDg4}) and (\ref{UTg4}) show that, in contrast to the TD approach,
these diagrams do not contribute to the nucleon-nucleon potential in the method of unitary transformation.
Use the static approximation for the nucleons in order to simplify the calculations (i.e. set:
$E = E_\eta = E_{\eta '} = E_{\tilde \eta} =0$).
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.1\textwidth]{disconn.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{figEx} An example of a disconnected diagram that does not contribute to the NN
potential in the method of unitary transformation. }
\vspace{0.2cm}
\end{figure*}
The two methods of deriving effective nuclear potentials are quite general
and can, in principle, be applied to any field theoretical meson-nucleon Lagrangian.
In the weak-coupling case, the potential can be obtained straightforwardly via the expansion
in powers of the corresponding coupling constant(s).
For practical applications, it is helpful to use time-ordered diagrams to visualize
the contributions to the potential, see Fig.~\ref{fig4aa}.
In ``old-fashioned'' perturbation theory or, equivalently, the Tamm-Dancoff approach,
only irreducible diagrams contribute to the potential.
In the method of unitary transformation one can draw both irreducible and reducible graphs
whose meaning, however, differs from that of diagrams emerging in time-ordered perturbation theory.
The coefficients in front of various operators and the energy denominators can, in general,
not be guessed by looking at a given diagram and have to be determined by
solving the decoupling equation (\ref{5.10}) for the operator $A$ and using Eq.~(\ref{effpot}).
Application of the above methods to the effective chiral Lagrangian requires
the expansion in powers of the coupling constants to be replaced by the chiral expansion in
powers of $Q/\Lambda_\chi$. This issue will be dealt with in the next section.
\subsection{Method of unitary transformation}
To apply the method of unitary transformation to derive nuclear forces in chiral EFT
it is useful to rewrite the power counting discussed in section \ref{sec:nuclearEFT} into
a different form which is more suitable to carry out algebraic manipulations described
above.
We begin with Weinberg's original power counting expression for $N$-nucleon
diagrams involving $C$ separately connected pieces:
\begin{equation}
\label{pow_orig}
\nu = 4 - N + 2 (L - C) + \sum_i V_i \Delta_i \,, \quad \quad
\Delta_i = d_i + \frac{1}{2} n_i - 2\,.
\end{equation}
This expression is a generalization of Eq.~(\ref{powNN}) to the case $C > 1$.
Its derivation can be found in Ref.~\cite{Weinberg:1992yk}.
There is one subtlety here that needs to be addressed:
according to Eq.~(\ref{pow_orig}), the chiral dimension $\nu$ for a given
process depends on the total number of nucleons in the system. For example,
one-pion exchange in the two-nucleon system corresponds to $N=2$, $L=0$,
$C=1$ and $\sum_i V_i \Delta_i =0$ and, therefore, contributes at order $\nu
=0$. On the other hand, the same process in the presence of a third
(spectator) nucleon leads, according to Eq.~(\ref{pow_orig}), to $\nu = -3$
since $N=3$ and $C=2$. The origin of this seeming discrepancy is the
different normalization of the 2N and 3N states:
\begin{eqnarray}
&2N:& \quad \langle \vec p_1 \, \vec p_2 | \vec p_1 {}' \, \vec p_2 {}'
\rangle = \delta^3 (\vec p_1 {} ' - \vec p_1 \, ) \,
\delta^3 (\vec p_2 {}' - \vec p_2 \, ) \,,\nonumber \\
&3N:& \quad \langle \vec p_1 \, \vec p_2 \, \vec p_3 | \vec p_1 {}' \,
\vec p_2 {}' \, \vec p_3 {}' \rangle =
\delta^3 (\vec p_1 {} ' - \vec p_1 \, ) \, \delta^3 (\vec p_2 {}' - \vec p_2
\, ) \,\delta^3 (\vec p_3 {}' - \vec p_3 \, ) \,.
\end{eqnarray}
It can be circumvented by assigning a chiral dimension to the transition
operator rather than to its matrix elements in the $N$-nucleon space.
Adding the factor $3N-6$ to the right-hand side of Eq.~(\ref{pow_orig}) in
order to account for the normalization
of the $N$-nucleon states and to ensure that the LO contribution to the nuclear
force appears at order $\nu = 0$, we obtain
\begin{equation}
\label{pow_mod}
\nu = -2 + 2 N + 2 (L - C) + \sum_i V_i \Delta_i \,.
\end{equation}
This expression provides a natural qualitative explanation of
the observed hierarchy of nuclear forces $V_{\rm 2N} \gg
V_{\rm 3N} \gg V_{\rm 4N} \ldots $ with
\begin{eqnarray}
V_{\rm 2N} &=& V_{\rm 2N}^{(0)} + V_{\rm 2N}^{(2)} + V_{\rm 2N}^{(3)} + V_{\rm 2N}^{(4)} + \ldots \,, \nonumber \\
V_{\rm 3N} &=& V_{\rm 3N}^{(3)} + V_{\rm 3N}^{(4)} + \ldots \,, \nonumber \\
V_{\rm 4N} &=& V_{\rm 4N}^{(4)} + \ldots \,,
\end{eqnarray}
as shown in Fig.~\ref{hierarchy}.
\begin{figure}[t]
\includegraphics[width=0.99\textwidth]{forces.pdf}
\caption{Diagrams that give rise to nuclear forces in ChEFT based on Weinberg's power
counting. Solid and dashed lines denote nucleons
and pions, respectively. Solid dots, filled circles, filled squares
and crossed squares refer to vertices with $\Delta_i =0, \, 1, \, 2$ and $4$, respectively.}
\label{hierarchy}
\end{figure}
The form of power counting in Eq.~(\ref{pow_mod}) is still of little use for our purpose since the resulting
chiral dimension is given in terms of topological quantities such as $N$, $C$ and $L$, which is not
appropriate for algebraic approaches such as the method of unitary transformation.
Using certain topological identities, see \cite{Epelbaum:2007us}, Eq.~(\ref{pow_mod}) can be
rewritten in a more suitable form:
\begin{equation}
\label{pow_fin}
\nu = -2 + \sum V_i \kappa_i \,, \quad \quad \kappa_i = d_i + \frac{3}{2} n_i + p_i - 4\,.
\end{equation}
The quantity $\kappa_i$ which enters this expression is nothing but the
canonical field dimension of a vertex of type $i$ (up to the additional
constant $-4$) and gives the inverse mass dimension of the corresponding
coupling constant. In fact, this result can be obtained immediately by counting
inverse powers of the hard scale $\Lambda_\chi$ rather than powers of the soft scale $Q$
(which is, of course, completely equivalent). Indeed, since the only way
for the hard scale to be generated is through the physics behind the LECs,
the power $\nu$ is just the negative of the overall mass dimension of all LECs. The
additional factor $-2$ in Eq.~(\ref{pow_fin}) is a convention
to ensure that the contributions to the nuclear force start at $\nu = 0$.
I encourage the reader to
verify the equivalence of Eqs.~(\ref{pow_fin}) and (\ref{pow_mod})
for specific diagrams. One immediately reads off from Eq.~(\ref{pow_fin}) that
in order for perturbation theory to work, the effective Lagrangian must contain no renormalizable
and super-renormalizable interactions with $\kappa_i =0$ and $\kappa_i < 0$, respectively,
since otherwise adding new vertices would not increase or even lower the chiral dimension
$\nu$. This feature is guaranteed by the spontaneously broken chiral symmetry of QCD which
ensures that only non-renormalizable interactions enter the effective Lagrangian.
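As a concrete check of the equivalence of Eqs.~(\ref{pow_fin}) and (\ref{pow_mod}), consider
the static one-pion exchange between two nucleons. It involves two $\pi NN$ vertices with
$d_i = 1$, $n_i = 2$ and $p_i = 1$, i.e.~$\Delta_i = 0$ and $\kappa_i = 1$.
Eq.~(\ref{pow_mod}) with $N = 2$, $L = 0$ and $C = 1$ gives $\nu = -2 + 4 - 2 + 0 = 0$, while
Eq.~(\ref{pow_fin}) yields $\nu = -2 + 2 \times 1 = 0$. Similarly, for the leading
$2\pi$-exchange box diagram with four such vertices and one loop, Eq.~(\ref{pow_mod}) gives
$\nu = -2 + 4 + 2 (1 - 1) = 2$ and Eq.~(\ref{pow_fin}) gives $\nu = -2 + 4 \times 1 = 2$, and
for the lowest-order four-nucleon contact interaction ($d_i = 0$, $n_i = 4$, $p_i = 0$,
$\kappa_i = 2$) both expressions give $\nu = 0$.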
While Eq.~(\ref{pow_fin}) does not say much about the topology and is,
therefore, not particularly useful to
deal with diagrams, it is very convenient for algebraical calculations. In fact,
it formally reduces the chiral expansion to the expansion in powers of the coupling constant,
whose role is now played by the ratio $Q/\Lambda_\chi$. Applying the canonical transformation to the chiral
Lagrangian and writing the resulting Hamiltonian in the form
\begin{equation}
\label{n11}
H_I = \sum_{\kappa = 1}^{\infty} H^{(\kappa)}\,,
\end{equation}
the operator $A$ can be calculated by solving Eq.~(\ref{decoupling}) recursively,
\begin{eqnarray}
\label{n13}
A &=& \sum_{\alpha = 1}^\infty A^{(\alpha )}\,, \\
A^{( \alpha )} &=& \frac{1}{E_\eta - E_\lambda} \lambda \bigg[ H^{(\alpha )} + \sum_{i =
1}^{\alpha -1} H^{(i)} A^{(\alpha -i)} - \sum_{i=1}^{\alpha -1} A^{(\alpha -i)} H^{(i)}
- \sum_{i = 1}^{\alpha -2} \; \sum_{j =1}^{\alpha - i - 1} A^{(i)} H^{(j)} A^{(\alpha -i-j)}
\bigg] \eta\,. \nonumber
\end{eqnarray}
The expressions for the unitary operator and the effective potential then
follow immediately by substituting Eqs.~(\ref{n11}) and (\ref{n13}) into
Eq.~(\ref{effpot}).
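For example, the first two terms of this expansion follow immediately from Eq.~(\ref{n13}):
\begin{equation}
A^{(1)} = \frac{1}{E_\eta - E_\lambda} \, \lambda \, H^{(1)} \, \eta \,, \quad \quad
A^{(2)} = \frac{1}{E_\eta - E_\lambda} \, \lambda \left[ H^{(2)} + H^{(1)} A^{(1)} - A^{(1)} H^{(1)} \right] \eta \,,
\end{equation}
while the term trilinear in $A$ starts to contribute at $\alpha = 3$.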
\subsection{The $1\pi$- and the leading $2\pi$-exchange potentials}
I now illustrate how the above ideas can be applied in practice. I begin with the simple
case of the $1\pi$-exchange potential at leading order, i.e.~$\nu =0$.
The only relevant contribution to the interaction Hamilton density is given by
\begin{equation}
\label{vertex_ga}
\mathcal{H}^{(1)} = \frac{g_A}{2 F_\pi} N^\dagger \vec \sigma \cdot ( \vec \nabla \fet \pi \cdot \fet \tau ) N\,,
\end{equation}
where the superscript of $\mathcal{H}$ gives the canonical dimension $\kappa_i$ defined in Eq.~(\ref{pow_fin}).
The relevant operator that contributes to the effective Hamiltonian after performing the unitary transformation
is given by the first two terms in Eq.~(\ref{UTg4}):
\begin{equation}
\label{tempX1}
V_{\rm eff}^{\rm UT} = - \eta H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \eta\,,
\end{equation}
where $\omega$ denotes the pion free energy and
I made use of the static approximation as appropriate at LO.\footnote{Corrections to the static $1\pi$-exchange
potential are suppressed by $Q^2/m^2$.} Notice that $V_{\rm eff}^{\rm UT}$ in the above equation
also gives rise to a one-body operator that contributes to the nucleon mass shift, see graph (a) in
Fig.~\ref{1pi}.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.5\textwidth]{1pi.pdf}
}
\vspace{0.2cm}
\caption{\label{1pi} Diagrams that correspond to the operator in Eq.~(\ref{tempX1}).
Graph (a) yields a single-body contribution to the nuclear Hamilton operator while diagrams (b)
and (c) give rise to the $1\pi$-exchange NN potential.
}
\vspace{0.2cm}
\end{figure*}
To compute the expression for the $1\pi$-exchange potential I first express
the pion and nucleon fields in the interaction picture in terms of the creation and destruction operators:
\begin{eqnarray}
\label{2quant}
\pi_i ( x ) &=& \int \frac{d^3 k}{( 2 \pi )^{3 / 2}} \frac{1}{\sqrt{2 \omega_k}}
\left[ e^{- i k \cdot x} a_i ( \vec k \, ) + e^{i k \cdot x} a_i^\dagger ( \vec k \, ) \right] \,, \nonumber \\
N ( x) &=& \sum_{t s} \int \frac{d^3 p}{( 2 \pi )^{3 / 2}} e^{- i p \cdot x } \upsilon ( s )
\epsilon ( t ) b_t ( \vec p, \,\, s ) \;,
\end{eqnarray}
where $\omega_k =\sqrt {{\vec k}^2 + m_\pi^2}$ and $\upsilon$ ($\epsilon$) denotes a Pauli spinor (isospinor).
$a_i ( \vec k \, )$ and $a_i^\dagger ( \vec k \, )$ denote a destruction and creation operator of
a pion with isospin $i$.
Further, $b_t ( \vec p, \,\, s )$ ($b_t^\dagger ( \vec p, \,\, s )$) is the
destruction (creation) operator of a non-relativistic nucleon (i.e.~$p_0 = \vec p\, ^2/(2m)$)
with the spin and isospin quantum numbers $s$
and $t$ and momentum $\vec{p}$. The creation and destruction operators of the pion (nucleon)
field satisfy the usual commutation (anti-commutation) relations. The $1\pi$-exchange potential can be
calculated by substituting the expressions in Eq.~(\ref{2quant}) for pion and nucleon
fields into Eq.~(\ref{tempX1}) and evaluating the matrix element
\begin{equation}
\label{matr_el}
\langle \alpha_1 ' \alpha_2 ' | - \eta H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \eta \, | \alpha_1 \alpha_2 \rangle
\equiv \frac{1}{(2 \pi)^3} \delta^3 (\vec P \, ' - \vec P \, ) V_{2N}^{1\pi} \,.
\end{equation}
Here, $\vec P$ ($\vec P \,'$) denotes the total momentum of the nucleons before (after)
the interaction takes place. Further, $\alpha_i$ and $\alpha_i '$ denote collectively
the initial and final quantum numbers of the nucleon $i$ (momentum, spin and isospin).
To keep the expressions for the potential compact, they are commonly given in the operator
form with respect to the spin and isospin quantum numbers using the corresponding Pauli matrices
$\vec \sigma_i$ and $\fet \tau_i$ of a nucleon $i$. A straightforward
calculation yields the final form of the $1\pi$-exchange potential:
\begin{equation}
\label{1pi_res}
V_{2N}^{1\pi} = -\frac{g_A^2}{4 F_\pi^2} \frac{\vec
\sigma_1 \cdot \vec q \, \vec \sigma_2 \cdot \vec q}{\vec q \, ^2 +M_\pi^2} \fet \tau_1
\cdot \fet \tau_2\,.
\end{equation}
Clearly, this familiar result for the static $1\pi$-exchange potential can be obtained in a much
simpler way by evaluating the corresponding Feynman diagram since it does not generate reducible topologies.
One-loop corrections to the static $1\pi$-exchange potential and renormalization within the method of unitary
transformation are discussed in detail in Ref.~\cite{Epelbaum:2002gb}.
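For orientation, I also quote the familiar coordinate-space form of this potential, which
follows from Eq.~(\ref{1pi_res}) by Fourier transformation (a standard result, stated here
without derivation):
\begin{equation}
V_{2N}^{1\pi} (\vec r \, ) = \frac{g_A^2 M_\pi^3}{48 \pi F_\pi^2}\, \fet \tau_1 \cdot \fet \tau_2
\left[ \vec \sigma_1 \cdot \vec \sigma_2 + S_{12} (\hat r) \left( 1 + \frac{3}{x} + \frac{3}{x^2} \right) \right]
\frac{e^{-x}}{x}
- \frac{g_A^2}{12 F_\pi^2}\, \vec \sigma_1 \cdot \vec \sigma_2 \, \fet \tau_1 \cdot \fet \tau_2 \, \delta^3 (\vec r \, ) \,,
\end{equation}
where $x \equiv M_\pi r$ and $S_{12} (\hat r) = 3 \, \vec \sigma_1 \cdot \hat r \, \vec \sigma_2 \cdot \hat r
- \vec \sigma_1 \cdot \vec \sigma_2$ denotes the tensor operator.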
Notice further that when calculating the matrix element in Eq.~(\ref{matr_el}), I discarded
the contributions corresponding to graph (a) in Fig.~\ref{1pi} with one of the nucleons being a spectator
and the contributions from diagrams (b) and (c) with the nucleon labels $\alpha_1 '$ and $\alpha_2 '$ being
interchanged. The latter emerge automatically when the potential is inserted into the corresponding
dynamical equation due to the antisymmetric nature of the two-nucleon wave function.
As a final example, I discuss the leading $2\pi$-exchange potential arising
at order $\nu = 2$ from the box and crossed box diagrams
(the last two diagrams in the second row of Fig.~\ref{hierarchy}). Again, the only vertex we need
is given in Eq.~(\ref{vertex_ga}). The relevant operators that contribute to the effective nuclear Hamiltonian
after performing the unitary transformation are listed in Eq.~(\ref{UTg4}):
\begin{eqnarray}
\label{tempX2}
V_{\rm eff}^{\rm UT} &=& {} - \eta H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \frac{\lambda^2}{\omega_1 + \omega_2}
H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \eta + \frac{1}{2} \eta H^{(1)} \frac{\lambda^1}{\omega^2} H^{(1)} \eta
H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \eta \nonumber \\
&+& \frac{1}{2} \eta H^{(1)} \frac{\lambda^1}{\omega} H^{(1)} \eta
H^{(1)} \frac{\lambda^1}{\omega^2} H^{(1)} \eta\,.
\end{eqnarray}
The contribution to the $2\pi$-exchange potential results from evaluating the
matrix element $\langle \alpha_1 ' \alpha_2 ' | V_{\rm eff}^{\rm UT} | \alpha_1 \alpha_2 \rangle$
which can be computed along the same lines as above. Calculations of that kind can be optimized by
using a diagrammatic approach and formulating a sort of ``Feynman'' rules. The building blocks
are given by vertices and energy denominators that play the role of propagators in Feynman diagrams.
Consider, for example, time-ordered box diagrams (b)-(g) in Fig.~\ref{fig4aa}. All these graphs
have an identical sequence of non-commuting vertices generating exactly the same isospin-spin-momentum
structure in the resulting potential. Thus, the energy denominators for different diagrams arising from
the operators in Eq.~(\ref{tempX2}) can be added together yielding the result
\begin{equation}
2 \frac{\omega_1^2 + \omega_1 \omega_2 + \omega_2^2}{\omega_1^2 \omega_2^2 (\omega_1 + \omega_2)}\,.
\end{equation}
The same result but with an opposite sign is obtained for the sum of the energy denominators
for the crossed-box diagrams.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: show that the operators in Eq.~(\ref{tempX2}) do not give rise to the
$2\pi$-exchange three-nucleon force. What would be the result for the
three-nucleon force if one would employ time-ordered perturbation theory
(in the static approximation) instead of the method of unitary transformation?
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
The vertex in Eq.~(\ref{vertex_ga}) gives rise to the ``Feynman'' rule
\begin{equation}
i \frac{g_A}{2 F_\pi} \tau_i^a \vec \sigma_i \cdot \vec q \frac{1}{\sqrt{2 \omega_q}}\,,
\end{equation}
for an incoming (outgoing) pion with momentum $\vec q$ ($- \vec q\, $) and the isospin quantum number $a$.
Here, $i$ is the nucleon label. Putting everything together, we end up with the contribution
from the box diagram of the form
\begin{eqnarray}
V_{2N}^{2\pi, \, \rm box} (\vec q \, ) &=& (2 \pi)^3 \left( \frac{g_A}{2 F_\pi } \right)^4 \fet \tau_1 \cdot \fet \tau_2
\, \fet \tau_1 \cdot \fet \tau_2 \int \frac{d^3 l_1}{(2 \pi)^3} \frac{d^3 l_2}{(2 \pi)^3} \,
\vec \sigma_1 \cdot \vec l_1 \, \vec \sigma_1 \cdot \vec l_2 \,
\vec \sigma_2 \cdot \vec l_1 \, \vec \sigma_2 \cdot \vec l_2 \nonumber \\
&\times& \frac{1}{2 \omega_{l_1}} \frac{1}{2 \omega_{l_2}} \,
2 \frac{\omega_{l_1}^2 + \omega_{l_1} \omega_{l_2} + \omega_{l_2}^2}{\omega_{l_1}^2 \omega_{l_2}^2
(\omega_{l_1} + \omega_{l_2})}\,
\delta (\vec l_1 + \vec l_2 - \vec q \, ) \,,
\end{eqnarray}
where the factor $ (2 \pi)^3$ in front of the integral
is due to the normalization of the potential, see Eq.~(\ref{matr_el}).
The contribution of the crossed-box diagrams can be written as
\begin{eqnarray}
V_{2N}^{2\pi, \, \rm cr.-box} (\vec q \, ) &=& - (2 \pi)^3 \left( \frac{g_A}{2 F_\pi } \right)^4 \sum_a \tau_1^a
\, \fet \tau_1 \cdot \fet \tau_2 \, \tau_2^a \int \frac{d^3 l_1}{(2 \pi)^3} \frac{d^3 l_2}{(2 \pi)^3} \,
\vec \sigma_1 \cdot \vec l_1 \, \vec \sigma_1 \cdot \vec l_2 \,
\vec \sigma_2 \cdot \vec l_2 \, \vec \sigma_2 \cdot \vec l_1 \nonumber \\
&\times& \frac{1}{2 \omega_{l_1}} \frac{1}{2 \omega_{l_2}} \,
2 \frac{\omega_{l_1}^2 + \omega_{l_1} \omega_{l_2} + \omega_{l_2}^2}{\omega_{l_1}^2 \omega_{l_2}^2
(\omega_{l_1} + \omega_{l_2})}\,
\delta (\vec l_1 + \vec l_2 - \vec q \, ) \,.
\end{eqnarray}
Adding the two expressions together and performing straightforward simplifications one
obtains the total contribution to the leading $2\pi$-exchange proportional to $g_A^4$:
\begin{equation}
\label{2pi_prom}
V_{2N}^{2\pi, \, \rm total} (\vec q \, ) = - \frac{g_A^4}{32 F_\pi^4 }
\int \frac{d^3 l}{(2 \pi)^3} \left[ \fet \tau_1 \cdot \fet \tau_2 \left( \vec l\, ^2 - \vec q\, ^2 \right)^2
+ 6 \, \vec \sigma_1 \cdot \vec q \times \vec l \, \vec \sigma_2 \cdot \vec q \times \vec l \, \right]
\frac{\omega_{+}^2 + \omega_{+} \omega_{-} + \omega_{-}^2}{\omega_{+}^3 \omega_{-}^3
(\omega_{+} + \omega_{-})}\,,
\end{equation}
with $\omega_\pm \equiv \sqrt{(\vec q \pm \vec l)^2 + 4 M_\pi^2}$. The integrals appearing in the
above expressions are ultraviolet divergent and need to be regularized. This can be achieved using standard
methods such as e.g.~dimensional regularization. Cutoff regularization can be applied equally well.
In the infinite-cutoff limit, $\Lambda \to \infty$, the regularized integrals can be decomposed into
a \emph{finite} non-polynomial part (with respect to the momentum transfer $\vec q\,$) and polynomial
in momenta terms that may diverge as $\Lambda$ goes to infinity. Such a decomposition follows from
the local nature of the ultraviolet divergences and implies the uniqueness of the non-polynomial part
(in the limit $\Lambda \to \infty$). This makes perfect sense from the physics point of view since
the nonpolynomial part of the potential controls its long-range behavior which
should not depend on the details of regularization at short distances. For the non-polynomial parts
of the relevant integrals one obtains:
\begin{eqnarray}
I_1 \equiv \int \frac{d^3 l}{(2 \pi)^3} \frac{\vec l\, ^2}{\omega_+ \omega_- (\omega_+ + \omega_-)}
&=& \frac{1}{6 \pi^2} \left( 4 M_\pi^2 + q^2 \right) L (q) + \ldots \,, \nonumber \\
I_2 \equiv \int \frac{d^3 l}{(2 \pi)^3} \frac{\vec l \, ^4 + \vec q \, ^4}{\omega_+ \omega_- (\omega_+ + \omega_-)}
&=& -\frac{1}{60 \pi^2} \frac{512 M_\pi^6 + 384 M_\pi^4 q^2 + 156 M_\pi^2 q^4 +23 q^6}{4 M_\pi^2 + q^2} \,
L (q) + \ldots \,,\nonumber \\
I_3 \equiv \int \frac{d^3 l}{(2 \pi)^3} \frac{(\vec q \cdot \vec l\, )^2}{\omega_+ \omega_- (\omega_+ + \omega_-)}
&=& + \ldots \,,
\end{eqnarray}
where $q \equiv | \vec q \, |$ and the ellipses refer to terms polynomial in $q$.
Note that $I_3$ does not give rise to any non-polynomial terms. Further,
I have introduced the loop function $L (q)$ defined as:
\begin{equation}
\label{Lq}
L(q) = \frac{1}{q}\sqrt{4 M_\pi^2 + q^2}\,
\ln\frac{\sqrt{4 M_\pi^2 + q^2}+q}{2M_\pi}~.
\end{equation}
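For orientation, note that $L(q)$ is analytic for small momentum transfers,
$L(q) = 1 + q^2/(12 M_\pi^2) - q^4/(120 M_\pi^4) + \mathcal{O} (q^6)$ for $q \ll 2 M_\pi$,
while it grows only logarithmically, $L(q) \simeq \ln ( q / M_\pi )$, for $q \gg M_\pi$.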
Using the identity
\begin{equation}
\label{nnn3}
\frac{\omega_+^2 + \omega_+ \omega_- + \omega_-^2}{\omega_+^3 \omega_-^3 (\omega_+ + \omega_-)}
= - \frac{1}{2}\, \frac{\partial}{\partial (M_\pi^2) } \,\frac{1}{\omega_+ \omega_- (\omega_+ + \omega_-)} \,,
\end{equation}
one can express all integrals entering Eq.~(\ref{2pi_prom}) in terms of $I_{1,2,3}$ as follows
\begin{eqnarray}
\label{nnn4}
\int \, \frac{d^3 l}{(2 \pi)^3} \, \frac{\omega_+^2 + \omega_+ \omega_- + \omega_-^2}
{\omega_+^3 \omega_-^3 (\omega_+ + \omega_-)} \left( l^2 - q^2 \right)^2 &=&
- \frac{1}{2} \,\frac{\partial}{\partial (M_\pi^2) } \,\left( I_2 - 2 q^2 I_1 \right) \,, \\
\int \, \frac{d^3 l}{(2 \pi)^3} \, \frac{\omega_+^2 + \omega_+ \omega_- + \omega_-^2}
{\omega_+^3 \omega_-^3 (\omega_+ + \omega_-)} l_i l_j &=& \frac{1}{4} \frac{\partial}{\partial (M_\pi^2) }
\left\{ \left( - I_1 + \frac{1}{q^2} I_3 \right) \delta_{ij} + \left( \frac{1}{q^2} I_1 -
\frac{3}{q^4} I_3 \right) q_i q_j \right\} \,. \nonumber
\end{eqnarray}
The final result for the $2\pi$-exchange potential $\propto g_A^4$ then takes the form:
\begin{eqnarray}
\label{pot_res}
V_{2N}^{2\pi, \, \rm total} (\vec q \, ) &=&{}
- \frac{g_A^4}{384 \pi^2 F_\pi^4}\, \fet{\tau}_1 \cdot \fet{\tau}_2\,
\left( 20 M_\pi^2 + 23 q^2 + \frac{48 M_\pi^4}{4 M_\pi^2 + q^2} \right)
L(q) \nonumber \\
&&{} - \frac{3 g_A^4}{64 \pi^2 F_\pi^4} \left(
\vec{\sigma}_1 \cdot\vec{q}\,\vec{\sigma}_2\cdot\vec{q} - q^2 \,
\vec{\sigma}_1 \cdot\vec{\sigma}_2 \right) L(q)
+ \ldots \,.
\end{eqnarray}
The polynomial in momenta, divergent (in the limit $\Lambda \to \infty$) terms
have the form of contact interactions that are anyway present in the potential at a given order
and can be simply absorbed into an appropriate redefinition of the LECs $C_i$.
\begin{minipage}{\textwidth}
\vskip 0 true cm
\rule{\textwidth}{.2pt}
{\it
Exercise: verify the result for the non-polynomial part of $V_{2N}^{2\pi, \, \rm total}$
using dimensional regularization. Use the equality
\begin{equation}
\frac{1}{\omega_+ \omega_- (\omega_+ + \omega_-)} = \frac{2}{\pi} \int_0^\infty d\beta
\frac{1}{\omega_-^2 + \beta^2}\frac{1}{\omega_+^2 + \beta^2}\,,
\end{equation}
to get rid of the square roots in the integrand.
The resulting integrals can be dealt with in the usual way by introducing the corresponding Feynman parameters.
} \\
\vskip -0.8 true cm
\rule{\textwidth}{.2pt}
\end{minipage}
\medskip
In coordinate space, contact interactions have the form of the delta function
at the origin, $\delta ( \vec r \, )$, and derivatives thereof. In contrast,
the nonpolynomial pieces give rise to the potential at finite distances. To see this
let us take a closer look at the obtained expression for the $2\pi$-exchange potential.
First, it should be emphasized that the Fourier transformation of the nonpolynomial
terms alone is ill defined since they grow as $q$ goes to infinity. The potential
$V_{2N}^{2\pi, \, \rm total} (\vec r \, )$ at a finite distance, $r \neq 0$,
can be obtained from $V_{2N}^{2\pi, \, \rm total} (\vec q \, )$ via
\begin{equation}
V_{2N}^{2\pi, \, \rm total} (\vec r \, ) = \lim_{\Lambda \to \infty} \int \frac{d^3 q}{(2 \pi )^3 }\,
e^{-i \vec q \cdot \vec r} \,
V_{2N}^{2\pi, \, \rm total} (\vec q \, ) \, F \left( \frac{q}{\Lambda} \right) \,,
\end{equation}
where $F \left( q/\Lambda \right)$ is an appropriately chosen regulator function
such as e.g.~$F = \exp ( -q^2/\Lambda^2 )$. Alternatively, one can use a (twice-subtracted) dispersive representation by
expressing the potential $V_{2N}^{2\pi, \, \rm total} (\vec q \, )$ in terms of a continuous superposition
of Yukawa functions. For example, the central part of the potential in Eq.~(\ref{pot_res}) can be written as
\cite{Kaiser:1997mw,Epelbaum:2003gr}
\begin{equation}
V_{2N}^{2\pi, \, \rm central} (q ) = \frac{2 q^4}{\pi} \int_{2 M_\pi}^\infty
d\mu \frac{1}{\mu^3} \frac{\rho (\mu )}{\mu^2 + q^2}\,,
\end{equation}
where the spectral function $\rho (\mu )$ is given by
\begin{eqnarray}
\rho ( \mu ) &=& {\rm Im } \left[ V_{2N}^{2\pi, \, \rm central} ( 0^+ - i \mu ) \right]\nonumber \\
&=& {}
\frac{g_A^4}{768 \pi F_\pi^4}\,
\left( 20 M_\pi^2 - 23 \mu^2 + \frac{48 M_\pi^4}{4 M_\pi^2 - \mu^2} \right)
\frac{\sqrt{\mu^2- 4 M_\pi^2}}{\mu}\,\fet \tau_1 \cdot \fet \tau_2\,.
\end{eqnarray}
The Fourier transformation can be easily carried out in this spectral representation by first
integrating over $\vec q$ and then over the spectrum $\mu$. This leads to the central potential
of the form
\begin{equation}
V_{2N}^{2\pi, \, \rm central} ( r ) = -\frac{g_A^4 M_\pi}{128 \pi^3 F_\pi^4 r^4} \, \fet \tau_1 \cdot \fet \tau_2 \,
\left[
(23 + 12 x^2 ) K_1 ( 2 x) + x (23 + 4 x^2 ) K_0 ( 2 x) \right]\,,
\end{equation}
where $K_i$ denote the modified Bessel functions and $x\equiv M_\pi r$.
At large distances,
the potential behaves as $\exp (-2 M_\pi r )/r^{3/2}$. The expressions for the remaining components
of the $2\pi$-exchange potential up to the chiral order $Q^3$, both in momentum and coordinate space,
can be found in Ref.~\cite{Kaiser:1997mw}. The order-$Q^4$ contributions are given in
Ref.~\cite{Kaiser:2001pc}.
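The equivalence of the two representations can also be checked numerically. For $r \neq 0$,
the dispersive representation implies $V_{2N}^{2\pi, \, \rm central} (r) =
\frac{1}{2 \pi^2 r} \int_{2 M_\pi}^{\infty} d \mu \, \mu \, e^{- \mu r} \, \rho (\mu )$
(contact terms being irrelevant at finite distances), which can be compared with the
closed Bessel-function form given above. A minimal Python sketch (the numerical values of
$g_A$, $F_\pi$ and $M_\pi$ are illustrative, and $\fet \tau_1 \cdot \fet \tau_2$ is set to one) reads:
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

gA, Fpi, Mpi = 1.267, 92.4, 139.0       # illustrative values (MeV)
hbarc = 197.327                         # MeV fm

def V_closed(r):
    # closed form in terms of modified Bessel functions, x = Mpi*r
    x = Mpi*r
    return (-gA**4*Mpi/(128*np.pi**3*Fpi**4*r**4)
            *((23 + 12*x**2)*kn(1, 2*x) + x*(23 + 4*x**2)*kn(0, 2*x)))

def V_spectral(r):
    # 1/(2 pi^2 r) * int dmu mu e^{-mu r} rho(mu); the substitution
    # mu = sqrt(4 Mpi^2 + s^2) removes the integrable threshold singularity
    C = gA**4/(768*np.pi*Fpi**4)
    def f(s):
        mu = np.sqrt(4*Mpi**2 + s**2)
        return C*np.exp(-mu*r)*((20*Mpi**2 - 23*mu**2)*s**2 - 48*Mpi**4)/mu
    return quad(f, 0.0, np.inf)[0]/(2*np.pi**2*r)

for r_fm in (1.0, 1.5, 2.0, 3.0):
    r = r_fm/hbarc                      # convert fm to MeV^(-1)
    print(r_fm, V_closed(r), V_spectral(r))
\end{verbatim}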
The expressions for
pion exchange potentials derived in chiral EFT at large distances are controlled by low values
of $\mu$ for which the chiral expansion is expected to converge. At shorter distances, the
large-$\mu$ components in the spectrum start to contribute which cannot be computed reliably in
chiral EFT. This is visualized in Fig.~\ref{fig:poten}.
An extended discussion on the resulting theoretical uncertainty can be found
in Ref.~\cite{Epelbaum:2003gr}.
It is instructive to compare the toy models considered in section \ref{toy}
with the nucleon-nucleon potential derived in chiral EFT whose structure is symbolically
illustrated in Fig.~\ref{fig:poten}.
\begin{figure*}
\vspace{0.3cm}
\centerline{
\includegraphics[width=0.7\textwidth]{poten_artist1.pdf}
}
\vspace{-0.2cm}
\caption[fig4aa]{\label{fig:poten} A schematic picture of the two-nucleon potential derived in chiral EFT
in a given partial wave.
}
\vspace{0.2cm}
\end{figure*}
The main conceptual difference is due to the lack of an exact (regular) expression for the long-range
force in the realistic case of nucleon-nucleon interaction. Rather, it is represented in terms of the
chiral expansion of the pion-exchange potential, which is valid at large distances and becomes singular
as $r \to 0$. This has raised debates on the relative importance of the long- and short-range
components in the potential and the most efficient way to organize the expansion for low-energy
observables. There is little consensus on this issue in the literature (yet).
The chiral $2\pi$-exchange potential is clearly the most interesting new ingredient of the two-nucleon force from the
chiral EFT point of view: it is the next-longest-range contribution after the well established
$1\pi$-exchange potential whose form is strongly constrained due to the chiral symmetry of QCD.
Notice that three-pion exchange is already considerably less important for low-energy nuclear dynamics.
Evidence for the chiral $2\pi$-exchange potential up to N$^2$LO has been found in the
Nijmegen PWA \cite{Rentmeester:1999vw}, see Ref.~\cite{Birse:2003nz} for a similar investigation.
In their analysis, the Nijmegen group utilized the long-range interaction
above some distance $b$ as input in order
to constrain the behavior of high partial waves. The missing intermediate and short-range components
are simulated by suitably chosen energy-dependent boundary conditions. The number of
parameters entering the boundary conditions needed to achieve a perfect description
of the data may thus be viewed as a measure of the physics that is missing in the assumed
long-range force. As demonstrated in Ref.~\cite{Rentmeester:1999vw}, adding the two-pion exchange potential
derived at N$^2$LO in chiral EFT to the $1\pi$-exchange potential and the appropriate
electromagnetic interactions allowed for a considerable reduction of parameters (from 31 to 23
for $b = 1.4$ fm in the case of proton-proton scattering) with even a slightly better
resulting $\chi^2$.
This is a big success of chiral EFT in the two-nucleon sector.
\section{Summary}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\label{sec6}
In these lectures, I have outlined the foundations of chiral effective field theory
and the application of this theoretical framework to the nuclear force problem.
The method allows for a systematic derivation of nuclear forces with a direct connection
to QCD via its symmetries. These lecture notes are mainly focused on the conceptual aspects and
do not cover applications of the novel chiral potentials to the few-nucleon problem
and various related topics such as e.g.~isospin breaking effects, few-baryon systems with
strangeness, electroweak and pionic probes in the nuclear environment, nuclear parity violation
and chiral extrapolations of few-baryon observables. For a discussion on these and other
topics as well as for a detailed description of the structure of the two-, three- and four-nucleon
forces in chiral EFT the reader is referred to recent review articles \cite{Epelbaum:2005pn,Epelbaum:2008ga},
see also \cite{Bedaque:2002mn}. There are many frontiers where future
work is required. These include a better understanding of
the power counting in the few-nucleon sector,
the consistent inclusion of electroweak currents, and the development
of chiral EFT with explicit $\Delta$(1232) degrees of freedom.
\section*{Acknowledgments}
It is a great pleasure to thank Fa{\"i}{\c c}al AZAIEZ and other organizers of the
International Joliot-Curie School 2009 for the superb organization and the pleasant
atmosphere at the meeting. I also thank all my collaborators for sharing their insights into
the topics discussed here. Special thanks are due to Hermann Krebs and Ulf-G.~Mei{\ss}ner
for a careful reading of the manuscript and their helpful comments.
This work was supported by funds provided from the Helmholtz Association
to the young investigator group ``Few-Nucleon Systems in
Chiral Effective Field Theory'' (grant VH-NG-222), by the DFG (SFB/TR 16 ``Subnuclear Structure
of Matter''),
and by the EU Integrated Infrastructure Initiative Hadron
Physics Project under contract number RII3-CT-2004-506078.
\setlength{\bibsep}{0.2em}
\bibliographystyle{h-physrev3}
\section{Introduction}
In \cite{bernstein}, Bernstein introduced the Bernstein polynomials. Since
that time, many authors have studied these polynomials and other related
subjects (see cf. \cite{acikgoz}-\cite{wu}, and also the references
cited in each of these earlier works). The Bernstein polynomials can be
defined in many different ways, and many applications of these
polynomials have recently been investigated. These polynomials have
been used not only for the approximation of functions in various areas of
Mathematics but also in other fields, such as smoothing in statistics,
numerical analysis and the construction of B\'ezier curves, which have many interesting
applications in computer graphics (see cf. \cite{bernstein}, \cite{Guan},
\cite{Oruc}, \cite{SOstrovskaAM}, \cite{Ost}, \cite{Nowak}, \cite{Phillips},
\cite{wu}, and also the references cited in each of these earlier works).
The ($q$-) Bernstein polynomials have been investigated and studied by many
authors without a \textit{generating function}. So far, we have not found any
generating function for the ($q$-) Bernstein polynomials in the literature.
Therefore, we will consider the following question:

\textit{How can one construct a \textbf{generating function} of the (}$q$\textit{-) Bernstein type polynomials}?
The aim of this paper is to answer this question and to construct a
generating function of the ($q$-) Bernstein type polynomials, which is given
in Section 3. By using this generating function, we not only give a recurrence
relation and the derivative of the ($q$-) Bernstein type polynomials but also
find relations between these polynomials, the higher-order Bernoulli polynomials, the second kind
Stirling numbers and the Hermite polynomials. In Section 5, by applying the
Mellin transformation to the generating function of the ($q$-) Bernstein
type polynomials, we define an interpolation function, which interpolates the ($%
q$-) Bernstein type polynomials at negative integers.
\section{Preliminary results related to the classical Bernstein,
higher-order Bernoulli and Hermite polynomials, and the second kind Stirling
numbers}
The Bernstein polynomials play a crucial role in approximation theory and
other branches of Mathematics and Physics. Thus, in this section we give the
definition and some properties of these polynomials.
Let $f$ be a function on $\left[ 0,1\right] $. The classical Bernstein
polynomials of degree $n$ are defined by%
\begin{equation}
\mathbb{B}_{n}f(x)=\sum_{j=0}^{n}f\left( \frac{j}{n}\right) B_{j,n}(x),\text{
}0\leq x\leq 1, \label{be1}
\end{equation}%
where $\mathbb{B}_{n}f$\ is called the Bernstein operator and%
\begin{equation}
B_{j,n}(x)=\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) x^{j}(1-x)^{n-j}, \label{be2}
\end{equation}%
$j=0$, $1$,$\cdots $,$n$ are called the Bernstein basis\ polynomials (or the
Bernstein polynomials of degree $n$). There are $n+1$ $n$th degree Bernstein
polynomials. For mathematical convenience, we set $B_{j,n}(x)=0$ if $j<0$ or
$j>n$ cf. (\cite{bernstein}, \cite{Guan}, \cite{Joy}).
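For example, for $n=2$ the Bernstein basis polynomials are $B_{0,2}(x)=(1-x)^{2}$, $%
B_{1,2}(x)=2x(1-x)$ and $B_{2,2}(x)=x^{2}$. Note that, by the binomial theorem, $%
\sum_{j=0}^{n}B_{j,n}(x)=1$ for every $n$.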
If $f:\left[ 0,1\right] \rightarrow \mathbb{C}$ is a continuous function,
the sequence of Bernstein\ polynomials $\mathbb{B}_{n}f(x)$ converges
uniformly to $f$ on $\left[ 0,1\right] $ cf. \cite{kowalski}.
A recursive definition of the $k$th $n$th-degree Bernstein polynomial can be
written as%
\begin{equation*}
B_{k,n}(x)=(1-x)B_{k,n-1}(x)+xB_{k-1,n-1}(x).
\end{equation*}%
For proof of the above relation see \cite{Joy}.
For $0\leq k\leq n$, the derivatives of the $n$th degree Bernstein polynomials
are polynomials of degree $n-1$:%
\begin{equation}
\frac{d}{dt}B_{k,n}(t)=n\left( B_{k-1,n-1}(t)-B_{k,n-1}(t)\right) ,
\label{be3}
\end{equation}%
cf. (\cite{bernstein}, \cite{Guan}, \cite{Joy}). On the other hand, in
Section 3, using our new generating function, we give another proof of (%
\ref{be3}).
Observe that the Bernstein polynomial of degree $n$, $\mathbb{B}_{n}f$, uses
only the sampled values of $f$ at $t_{nj}=\frac{j}{n}$, $j=0$, $1$,$\cdots $,%
$n$. For $j=0$, $1$,$\cdots $,$n$,%
\begin{equation*}
\beta _{j,n}(x)\equiv (n+1)B_{j,n}(x),\text{ }0\leq x\leq 1,
\end{equation*}%
is the density function of beta distribution $beta(j+1,n+1-j)$.
Let $y_{n}(x)$ be a binomial $b(n,x)$ random variable. Then%
\begin{equation*}
E\left\{ y_{n}(x)\right\} =nx,
\end{equation*}%
and%
\begin{equation*}
var\left\{ y_{n}(x)\right\} =E\left\{ y_{n}(x)-nx\right\} ^{2}=nx(1-x),
\end{equation*}%
and%
\begin{equation*}
\mathbb{B}_{n}f(x)=E\left[ f\left\{ \frac{y_{n}(x)}{n}\right\} \right] ,
\end{equation*}%
cf. \cite{Guan}.
The classical higher-order Bernoulli polynomials $\mathcal{B}_{n}^{(v)}(z)$
are defined by means of the following generating function%
\begin{equation}
F^{(v)}(z,t)=e^{tz}\left( \frac{t}{e^{t}-1}\right) ^{v}=\sum_{n=0}^{\infty }%
\mathcal{B}_{n}^{(v)}(z)\frac{t^{n}}{n!}\text{.} \label{I1}
\end{equation}%
The higher-order Bernoulli polynomials play an important role in the finite
differences and in (analytic) number theory. So, the coefficients in all the
usual central-difference formulae for interpolation, numerical
differentiation and integration, and differences in terms of derivatives can
be expressed in terms of these polynomials cf. (\cite{acikgoz}, \cite%
{Norlund}, \cite{LopezTemme}, \cite{SimsekKurt}). These polynomials are
related to the many branches of Mathematics. By substituting $v=1$ into the
above, we have%
\begin{equation*}
F(t,x)=\frac{te^{tx}}{e^{t}-1}=\sum_{n=0}^{\infty }B_{n}(x)\frac{t^{n}}{n!},
\end{equation*}%
where $B_{n}(x)$ denotes the usual Bernoulli polynomials cf. \cite{simsekJmaa}.
The usual second kind Stirling numbers with parameters $(n,k)$, denoted by $%
S(n,k)$, count the number of partitions of the set $\left\{ 1,2,\cdots
,n\right\} $ into $k$ non-empty subsets. For any $t$, it is well known that the
second kind Stirling numbers are defined by means of the generating function
cf. (\cite{cakic Milovanovic}, \cite{pinter}, \cite{SimsekARXIV})%
\begin{equation}
F_{S}(t,k)=\frac{(-1)^{k}}{k!}(1-e^{t})^{k}=\sum_{n=0}^{\infty }S(n,k)\frac{%
t^{n}}{n!}. \label{I2}
\end{equation}%
These numbers play an important role in many branches of Mathematics, for
example in combinatorics, number theory and discrete probability distributions,
where they are used for finding higher order moments. In \cite{Joarder}, Joarder and Mahmood
demonstrated the application of Stirling numbers of the second kind in
calculating moments of some discrete distributions, which are binomial
distribution, geometric distribution and negative binomial distribution.
The Hermite polynomials are defined by the following generating function:
For $z$, $t\in \mathbb{C}$,%
\begin{equation}
e^{2zt-t^{2}}=\sum_{n=0}^{\infty }H_{n}(z)\frac{t^{n}}{n!}, \label{I3}
\end{equation}%
which gives the Cauchy-type integral%
\begin{equation*}
H_{n}(z)=\frac{n!}{2\pi i}\int_{\mathcal{C}}e^{2zt-t^{2}}\frac{dt}{t^{n+1}},
\end{equation*}%
where $\mathcal{C}$ is a circle around the origin and the integration is in
positive direction cf. \cite{LopezTemme}. The Hermite polynomials play a
crucial role in certain limits of the classical orthogonal polynomials.
These polynomials are related to the higher-order Bernoulli polynomials,
Gegenbauer polynomials, Laguerre polynomials, the Tricomi-Carlitz
polynomials and Buchholz polynomials, cf. \cite{LopezTemme}. These
polynomials also play a crucial role not only in Mathematics but also in
Physics and in other sciences. In Section 4 we give a relation between the
Hermite polynomials and the ($q$-) Bernstein type polynomials.
\section{Generating Function of the Bernstein type polynomials}
Let $\left\{ B_{k,n}(x)\right\} _{0\leq k\leq n}$ be a sequence of Bernstein
polynomials. The aim of this section is to construct a generating function of
the sequence $\left\{ B_{k,n}(x)\right\} _{0\leq k\leq n}$. It is well known
that most generating functions are obtained from recurrence formulae.
However, we do not use the recurrence formula of the Bernstein polynomials
to construct their generating function.
We now give the following notation:%
\begin{equation*}
\lbrack x]=[x:q]=\left\{
\begin{array}{c}
\frac{1-q^{x}}{1-q}\text{, }q\neq 1 \\
\\
x\text{, }q=1.%
\end{array}%
\right.
\end{equation*}
If $q\in \mathbb{C}$, we assume that $\mid q\mid <1$.
We define%
\begin{eqnarray}
F_{k,q}(t,x) &=&(-1)^{k}t^{k}\exp \left( \left[ 1-x\right] t\right)
\label{F1} \\
&&\times \sum_{m,l=0}\left(
\begin{array}{c}
k+l-1 \\
l%
\end{array}%
\right) \frac{q^{l}S(m,k)\left( x\log q\right) ^{m}}{m!}, \notag
\end{eqnarray}%
where $\left\vert q\right\vert <1$, $\exp (x)=e^{x}$\ and $S(m,k)$ denotes
the second kind Stirling numbers and%
\begin{equation*}
\sum_{m,l=0}f(m)g(l)=\sum_{m=0}^{\infty }f(m)\sum_{l=0}^{\infty }g(l).
\end{equation*}%
By (\ref{F1}), we define the following new generating function of the
polynomials $Y_{n}(k;x;q)$ by%
\begin{equation}
F_{k,q}(t,x)=\sum_{n=k}^{\infty }Y_{n}(k;x;q)\frac{t^{n}}{n!}, \label{F2}
\end{equation}%
where $t\in \mathbb{C}$.
Observe that if $q\rightarrow 1$ in (\ref{F2}), we have%
\begin{equation*}
Y_{n}(k;x;q)\rightarrow B_{k,n}(x),
\end{equation*}%
hence%
\begin{equation*}
F_{k}(t,x)=\sum_{n=k}^{\infty }B_{k,n}(x)\frac{t^{n}}{n!}.
\end{equation*}
From (\ref{F2}), we obtain the following theorem.
\begin{theorem}
Let $n$ be a positive integer with $k\leq n$. Then we have%
\begin{eqnarray}
Y_{n}(k;x;q) &=&\left(
\begin{array}{c}
n \\
k%
\end{array}%
\right) \frac{(-1)^{k}k!}{(1-q)^{n-k}} \label{F2a} \\
&&\times \sum_{m,l=0}\sum_{j=0}^{n-k}\left(
\begin{array}{c}
k+l-1 \\
l%
\end{array}%
\right) \left(
\begin{array}{c}
n-k \\
j%
\end{array}%
\right) \frac{(-1)^{j}q^{l+j(1-x)}S(m,k)\left( x\log q\right) ^{m}}{m!}.
\notag
\end{eqnarray}
\end{theorem}
By using (\ref{F1}) and (\ref{F2}), we obtain%
\begin{eqnarray}
F_{k,q}(t,x) &=&\frac{\left( \left[ x\right] t\right) ^{k}}{k!}\exp (\left[
1-x\right] t) \label{be33} \\
&=&\sum_{n=k}^{\infty }Y_{n}(k;x;q)\frac{t^{n}}{n!}. \notag
\end{eqnarray}%
The generating function $F_{k,q}(t,x)$ depends on the integer parameter $k$,
the real variable $x$ and the complex variables $q$ and $t$. Therefore the
properties of this function are closely related to these variables and this
parameter. By using this function, we give many properties of the ($q$-)
Bernstein type polynomials and the other well-known special numbers and
polynomials. By applying Mellin transformation to this function, in Section
5, we construct interpolation function of the ($q$-) Bernstein type
polynomials.
By the \textit{umbral calculus} convention, from (\ref{be33}) we obtain%
\begin{equation}
\frac{\left( \left[ x\right] t\right) ^{k}}{k!}\exp (\left[ 1-x\right]
t)=\exp \left( Y(k;x;q)t\right) . \label{be3Yn}
\end{equation}%
By using the above, we obtain all recurrence formulae of $Y_{n}(k;x;q)$\ as
follows:%
\begin{equation*}
\frac{\left( \left[ x\right] t\right) ^{k}}{k!}=\sum_{n=0}^{\infty }\left(
Y(k;x;q)-\left[ 1-x\right] \right) ^{n}\frac{t^{n}}{n!},
\end{equation*}%
where each occurrence of $Y^{n}(k;x;q)$ is replaced by $Y_{n}(k;x;q)$ (symbolically, $%
Y^{n}(k;x;q)\rightarrow Y_{n}(k;x;q)$).
By (\ref{be3Yn}),%
\begin{equation*}
\left[ u+v\right] =\left[ u\right] +q^{u}\left[ v\right]
\end{equation*}%
and%
\begin{equation*}
\left[ -u\right] =-q^{-u}\left[ u\right] ,
\end{equation*}%
we obtain the following corollary:
\begin{corollary}
\label{corl-1} Let $n$ be a positive integer with $k\leq n$. Then we have%
\begin{equation*}
Y_{n+k}(k;x;q)=\left(
\begin{array}{c}
n+k \\
k%
\end{array}%
\right) \sum_{j=0}^{n}\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) (-1)^{j}q^{j(1-x)}\left[ x\right] ^{j+k}.
\end{equation*}
\end{corollary}
\begin{remark}
By Corollary \ref{corl-1}, for all $k$ with $0\leq k\leq n$, we see that%
\begin{equation*}
Y_{n+k}(k;x;q)=\left(
\begin{array}{c}
n+k \\
k%
\end{array}%
\right) \sum_{j=0}^{n}\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) (-1)^{j}q^{j(1-x)}\left[ x\right] ^{j+k},
\end{equation*}%
or%
\begin{equation*}
Y_{n+k}(k;x;q)=\left(
\begin{array}{c}
n+k \\
k%
\end{array}%
\right) \left[ x\right] ^{k}\left[ 1-x\right] ^{n}.
\end{equation*}%
The polynomials $Y_{n+k}(k;x;q)$ are the so-called $q$-\textbf{Bernstein-type
polynomials}. It is easily seen that%
\begin{equation*}
\lim_{q\rightarrow 1}Y_{n+k}(k;x;q)=B_{k,n+k}(x)=\left(
\begin{array}{c}
n+k \\
k%
\end{array}%
\right) x^{k}\left( 1-x\right) ^{n},
\end{equation*}%
which gives us (\ref{be2}).
\end{remark}
By using the derivative operator%
\begin{equation*}
\frac{d}{dx}\left( \lim_{q\rightarrow 1}Y_{n+k}(k;x;q)\right)
\end{equation*}%
in (\ref{F1}), we obtain%
\begin{eqnarray*}
&&\sum_{n=k}^{\infty }\frac{d}{dx}\left( Y_{n}(k;x;1)\right) \frac{t^{n}}{n!}
\\
&=&\sum_{n=k}^{\infty }nY_{n-1}(k-1;x;1)\frac{t^{n}}{n!}-\sum_{n=k}^{\infty
}nY_{n-1}(k;x;1)\frac{t^{n}}{n!}.
\end{eqnarray*}%
Consequently, we have%
\begin{equation*}
\frac{d}{dx}\left( Y_{n}(k;x;1)\right) =nY_{n-1}(k-1;x;1)-nY_{n-1}(k;x;1),
\end{equation*}%
or%
\begin{equation*}
\frac{d}{dx}\left( B_{k,n}(x)\right) =nB_{k-1,n-1}(x)-nB_{k,n-1}(x).
\end{equation*}
Observe that, by using our generating function, we obtain a different proof of (%
\ref{be3}).
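As a quick illustration of this recurrence (a concrete case added here; it is not part of the original argument), take $k=1$ and $n=2$:%
\begin{equation*}
\frac{d}{dx}B_{1,2}(x)=\frac{d}{dx}\left( 2x(1-x)\right) =2-4x=2\left(
(1-x)-x\right) =2\left( B_{0,1}(x)-B_{1,1}(x)\right) .
\end{equation*}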
Let $f$ be a function on $\left[ 0,1\right] $. The ($q$-) Bernstein type
polynomial of degree $n$ is defined by%
\begin{equation*}
\mathbb{Y}_{n}f(x)=\sum_{j=0}^{n}f\left( \frac{\left[ j\right] }{\left[ n%
\right] }\right) Y_{n}(j;x;q),
\end{equation*}%
where $0\leq x\leq 1$. $\mathbb{Y}_{n}$ is called the ($q$-) Bernstein type
operator and $Y_{n}(j;x;q)$, $j=0,\cdots ,n$, defined in (\ref{F2a}), are
called the ($q$-) Bernstein type (basis) polynomials.
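As a purely illustrative aside (added here; it is not contained in the original text), the operator $\mathbb{Y}_{n}$ can be evaluated numerically from the closed form of $Y_{m}(j;x;q)$ given in the Remark above. The following short Python sketch uses our own (hypothetical) function names.
\begin{verbatim}
from math import comb

def q_bracket(t, q):
    # [t] = (1 - q**t) / (1 - q); in the limit q -> 1 this equals t.
    return t if q == 1 else (1 - q**t) / (1 - q)

def Y(m, j, x, q):
    # Closed form of the (q-)Bernstein type basis polynomial (cf. the Remark).
    return comb(m, j) * q_bracket(x, q)**j * q_bracket(1 - x, q)**(m - j)

def q_bernstein_operator(f, n, x, q):
    # \mathbb{Y}_n f(x) = sum_{j=0}^n f([j]/[n]) Y_n(j; x; q)
    return sum(f(q_bracket(j, q) / q_bracket(n, q)) * Y(n, j, x, q)
               for j in range(n + 1))

# For q = 1 the operator reduces to the classical Bernstein operator,
# which reproduces affine functions: the first value printed is
# 2*0.3 + 1 = 1.6 (up to floating point).
print(q_bernstein_operator(lambda t: 2 * t + 1, n=10, x=0.3, q=1))
print(q_bernstein_operator(lambda t: 2 * t + 1, n=10, x=0.3, q=0.9))
\end{verbatim}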
\section{New identities on Bernstein type polynomials, Hermite polynomials
and\ first kind Stirling numbers}
\begin{theorem}
Let $n$ be a positive integer with $k\leq n$. Then we have%
\begin{equation*}
Y_{n}(k;x;q)=\left[ x\right] ^{k}\sum_{j=0}^{n}\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) \mathcal{B}_{j}^{(k)}\left( \left[ 1-x\right] \right) S(n-j,k),
\end{equation*}%
where $\mathcal{B}_{j}^{(k)}(x)$ and $S(n,k)$ denote the classical
higher-order Bernoulli polynomials and the second kind Stirling numbers,
respectively.
\end{theorem}
\begin{proof}
By using (\ref{I1}), (\ref{I2}) and (\ref{F2}), we obtain%
\begin{eqnarray*}
&&\sum_{n=k}^{\infty }Y_{n}(k;x;q)\frac{t^{n}}{n!} \\
&=&\left[ x\right] ^{k}\sum_{n=0}^{\infty }S(n,k)\frac{t^{n}}{n!}%
\sum_{j=0}^{\infty }\mathcal{B}_{j}^{(k)}\left( \left[ 1-x\right] \right)
\frac{t^{j}}{j!}.
\end{eqnarray*}%
By using Cauchy product in the above, we have%
\begin{eqnarray*}
&&\sum_{n=k}^{\infty }Y_{n}(k;x;q)\frac{t^{n}}{n!} \\
&=&\left[ x\right] ^{k}\sum_{n=0}^{\infty }\sum_{j=0}^{n}\mathcal{B}%
_{j}^{(k)}\left( \left[ 1-x\right] \right) S(n-j,k)\frac{t^{n}}{j!\left(
n-j\right) !}.
\end{eqnarray*}%
From the above, we have%
\begin{eqnarray}
&&\sum_{n=k}^{\infty }Y_{n}(k;x;q)\frac{t^{n}}{n!} \label{b1} \\
&=&\left[ x\right] ^{k}\sum_{n=0}^{k-1}\sum_{j=0}^{n}\mathcal{B}%
_{j}^{(k)}\left( \left[ 1-x\right] \right) S(n-j,k)\frac{t^{n}}{j!\left(
n-j\right) !} \notag \\
&&+\left[ x\right] ^{k}\sum_{n=k}^{\infty }\sum_{j=0}^{n}\mathcal{B}%
_{j}^{(k)}\left( \left[ 1-x\right] \right) S(n-j,k)\frac{t^{n}}{j!\left(
n-j\right) !}. \notag
\end{eqnarray}%
By comparing the coefficients of $t^{n}$ on both sides of the above
equation, we arrive at the desired result.
\end{proof}
\begin{remark}
In \cite{H. W. Gould}, Gould gave a different relation between the Bernstein
polynomials, the generalized Bernoulli polynomials and the second kind Stirling
numbers. Oruc and Tuncer \cite{Oruc} gave a relation between the $q$-Bernstein
polynomials and the second kind $q$-Stirling numbers. In \cite{Nowak}, Nowak
studied approximation properties of generalized $q$-Bernstein
polynomials and also obtained Stancu operators or Phillips polynomials.
\end{remark}
From (\ref{b1}), we get the following corollary:
\begin{corollary}
Let $n$ be a positive integer with $k\leq n$. Then we have%
\begin{equation*}
\left[ x\right] ^{k}\sum_{n=0}^{k-1}\sum_{j=0}^{n}\frac{\mathcal{B}%
_{j}^{(k)}\left( \left[ 1-x\right] \right) S(n-j,k)}{j!\left( n-j\right) !}%
=0.
\end{equation*}
\end{corollary}
\begin{theorem}
Let $n$ be a positive integer with $k\leq n$. Then we have%
\begin{equation*}
H_{n}(1-y)=\frac{k!}{y^{k}}\sum_{n=0}^{\infty }Y_{n+k}(k;y;q)\frac{2^{n}}{%
\left( n+k\right) !}.
\end{equation*}
\end{theorem}
\begin{proof}
By (\ref{I3}), we have%
\begin{equation*}
e^{2zt}=\sum_{n=0}^{\infty }\frac{t^{2n}}{n!}\sum_{n=0}^{\infty }H_{n}(z)%
\frac{t^{n}}{n!}.
\end{equation*}%
By Cauchy product in the above, we obtain%
\begin{equation}
e^{2zt}=\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) H_{j}(z)\right) \frac{t^{2n-j}}{n!}. \label{b2}
\end{equation}%
By substituting $z=1-y$ into (\ref{b2}), we have%
\begin{equation*}
\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left(
\begin{array}{c}
n \\
j%
\end{array}%
\right) H_{j}(1-y)\right) \frac{t^{2n-j}}{n!}=\frac{k!}{y^{k}}%
\sum_{n=0}^{\infty }\left( 2^{n}Y_{n+k}(k;y;q)\right) \frac{t^{n}}{\left(
n+k\right) !}.
\end{equation*}%
By comparing the coefficients of $t^{n}$ on both sides of the above
equation, we arrive at the desired result.
\end{proof}
\section{Interpolation Function of the ($q$-) Bernstein type polynomials}
The classical Bernoulli numbers are interpolated by the Riemann zeta function,
which has a profound effect on number theory and complex analysis. Thus, we
construct an interpolation function of the ($q$-) Bernstein type polynomials.
For $z\in \mathbb{C}$ and $x\neq 1$, by applying the Mellin transformation
to (\ref{F1}), we get%
\begin{equation*}
S_{q}(z,k;x)=\frac{1}{\Gamma (z)}\int_{0}^{\infty }t^{z-k-1}F_{k,q}(-t,x)dt.
\end{equation*}%
By using the above equation, we define the interpolation function of the
polynomials $Y_{n}(k;x;q)$ as follows:
\begin{definition}
Let $z\in \mathbb{C}$ and $x\neq 1$. We define%
\begin{equation}
S_{q}(z,k;x)=\left( 1-q\right) ^{z-k}\sum_{m,l=0}\left(
\begin{array}{c}
z+l-1 \\
l%
\end{array}%
\right) \frac{q^{l(1-x)}S(m,k)\left( x\log q\right) ^{m}}{m!}. \label{F1A}
\end{equation}
\end{definition}
By using (\ref{F1A}), we obtain%
\begin{equation*}
S_{q}(z,k;x)=\frac{(-1)^{k}}{k!}\left[ x\right] ^{k}\left[ 1-x\right] ^{-z},
\end{equation*}%
where $z\in \mathbb{C}$ and $x\neq 1$.
By (\ref{F1A}), we have $S_{q}(z,k;x)\rightarrow S(z,k;x)$ as $q\rightarrow
1 $. Thus we have%
\begin{equation*}
S(z,k;x)=\frac{(-1)^{k}}{k!}x^{k}\left( 1-x\right) ^{-z}.
\end{equation*}%
By substituting $x=1$ into the above, we have%
\begin{equation*}
S(z,k;1)=\infty .
\end{equation*}%
We now evaluate the $m$th $z$-derivatives of $S(z,k;x)$\ as follows:%
\begin{equation}
\frac{\partial ^{m}}{\partial z^{m}}S(z,k;x)=\log ^{m}\left( \frac{1}{1-x}%
\right) S(z,k;x), \label{F1b}
\end{equation}%
where $x\neq 1$.
By substituting $z=-n$ into (\ref{F1A}), we obtain%
\begin{equation*}
S_{q}(-n,k;x)=\frac{1}{\left( 1-q\right) ^{n}}\sum_{m,l=0}\left(
\begin{array}{c}
-n+l-1 \\
l%
\end{array}%
\right) \frac{q^{l(1-x)}S(m,k)\left( x\log q\right) ^{m}}{m!}.
\end{equation*}%
By substituting (\ref{F2a}) into the above, we arrive at the following
theorem, which relates the polynomials $Y_{n+k}(k;x;q)$ and the function $%
S_{q}(z,k;x)$.
\begin{theorem}
Let $n$ be a positive integer with $k\leq n$ and $0<x<1$. Then we have%
\begin{equation*}
S_{q}(-n,k;x)=\frac{(-1)^{k}n!}{(n+k)!}Y_{n+k}(k;x;q).
\end{equation*}
\end{theorem}
\begin{remark}
\begin{eqnarray*}
\lim_{q\rightarrow 1}S_{q}(-n,k;x) &=&S(-n,k;x) \\
&=&\frac{(-1)^{k}n!}{(n+k)!}x^{k}\left( 1-x\right) ^{n} \\
&=&\frac{(-1)^{k}n!}{(n+k)!}B_{k,n+k}(x).
\end{eqnarray*}%
Therefore, for $0<x<1$, the function%
\begin{equation*}
S(z,k;x)=\frac{(-1)^{k}}{k!}x^{k}\left( 1-x\right) ^{-z}
\end{equation*}%
interpolates the classical Bernstein polynomials of degree $n$ at negative
integers.
\end{remark}
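For completeness, this limit is consistent with the closed form of $S(z,k;x)$ given above; the one-line check (added here) reads%
\begin{equation*}
\frac{(-1)^{k}n!}{(n+k)!}B_{k,n+k}(x)=\frac{(-1)^{k}n!}{(n+k)!}\left(
\begin{array}{c}
n+k \\
k%
\end{array}%
\right) x^{k}\left( 1-x\right) ^{n}=\frac{(-1)^{k}}{k!}x^{k}\left(
1-x\right) ^{n}=S(-n,k;x).
\end{equation*}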
By substituting $z=-n$ into (\ref{F1b}), we obtain the following corollary.
\begin{corollary}
Let $n$ be a positive integer with $k\leq n$ and $0<x<1$. Then we have%
\begin{equation*}
\frac{\partial ^{m}}{\partial z^{m}}S(-n,k;x)=\frac{(-1)^{k}n!}{(n+k)!}%
B_{k,n+k}(x)\log ^{m}\left( \frac{1}{1-x}\right) .
\end{equation*}
\end{corollary}
\section{Further remarks and observation}
The Bernstein polynomials have important applications in many
branches of mathematics and the other sciences, for instance approximation
theory, probability theory, statistics, number theory, the solution of
differential equations, numerical analysis, the construction of B\'{e}zier curves,
$q$-analysis, operator theory, and applications in computer graphics. Thus we
look for applications of our new functions and of the ($q$-) Bernstein type
polynomials. According to Oruc and Tuncer \cite{Oruc}, the $q$-Bernstein
polynomials share the well-known shape-preserving properties of the
classical Bernstein polynomials. When the function $f$ is convex then%
\begin{equation*}
\beta _{n-1}(f,x)\geq \beta _{n}(f,x)\text{ for }n>1\text{ and }0<q\leq 1,
\end{equation*}%
where%
\begin{equation*}
\beta _{n}(f,x)=\sum_{r=0}^{n}f_{r}\left[
\begin{array}{c}
n \\
r%
\end{array}%
\right] x^{r}\prod_{s=0}^{n-r-1}\left( 1-q^{s}x\right)
\end{equation*}%
and%
\begin{equation*}
\left[
\begin{array}{c}
n \\
r%
\end{array}%
\right] =\frac{\left[ n\right] \cdots \left[ n-r+1\right] }{\left[ r\right] !%
}.
\end{equation*}%
As a consequence of this, one can show that the approximation to a convex
function by the $q$-Bernstein polynomials is one-sided, with $\beta_{n}f\geq f$
for all $n$. Moreover, $\beta _{n}f$ behaves in a very nice way when one
varies the parameter $q$. In \cite{acikgoz}, the authors gave some
applications in approximation theory related to the Bernoulli and Euler
polynomials.
We conclude this section with the following questions:

1) \textit{How can one demonstrate approximation by the (}$q$\textit{-)
Bernstein type polynomials }$Y_{n+k}(k;x;q)$\textit{?}

2) \textit{Is it possible to define uniform expansions of the (}$q$\textit{-)
Bernstein type polynomials }$Y_{n+k}(k;x;q)$\textit{?}

3) \textit{Is it possible to give applications of the (}$q$\textit{-)
Bernstein type polynomials }$Y_{n+k}(k;x;q)$\textit{\ in calculating moments
of some distributions in statistics?}

4) \textit{How can one give relations between the (}$q$\textit{-) Bernstein
type polynomials }$Y_{n+k}(k;x;q)$\textit{\ and the Milnor algebras?}
\begin{acknowledgement}
The first author is supported by the research fund of Akdeniz University.
\end{acknowledgement}
\subsection{Motivation}
The \emph{low-rank matrix recovery problem}
\cite{recht_guaranteed_2007,
candes_exact_2009,candes_power_2009,
singer_uniqueness_2009,
keshavan_matrix_2009,
wright_robust_2009,
gross_quantum_2009,
recht_simpler_2009,
gross_recovering_2009,
candes_robust_2009}
is:
Reconstruct a low-rank matrix $\rho$
from $m$ randomly selected matrix elements. The more general
version introduced in
\cite{gross_recovering_2009} reads:
Reconstruct $\rho$
from $m$ randomly selected expansion
coefficients with respect to any fixed matrix basis.
Let us consider what seems to be the most mundane aspect of the
problem: the way in which the $m$ coefficients are ``randomly
selected''. Assume we are dealing with an $n\times n$ matrix $\rho$.
The statement of the matrix recovery problem calls for us to sample
$m$ of the $n^2$ coefficients characterizing $\rho$ \emph{without
replacing}. This yields a random subset $\Omega$ consisting of $m$ of
the $n^2$ coefficients, from which the matrix $\rho$ is then to be
recovered.
Due to the requirement that the drawn coefficients be distinct, the
$m$ samples are not independent. Their dependency turns out to impede
the technical analysis of the recovery algorithms. In order to avoid this
complication, most authors chose to first
analyze a variant where the revealed coefficients are drawn
independently and then, in a second step, relate the modified question
to the original one. Two such proxies for sampling without replacement
have been discussed:
\emph{1. The Bernoulli model}
\cite{candes_exact_2009,candes_power_2009,
candes_robust_2009}. Here, each of the $n^2$ coefficients is assumed
to be known with probability $\frac{m}{n^2}$. Thus the number of
revealed coefficients is itself a random variable (with expectation
value $m$). The minor draw-back of this approach is that, with finite
probability, significantly more than $m$ coefficients will be
uncovered. These possible violations of the rules of the original
problem have to be factored in, when the success probability of the
algorithm is computed.
\emph{2. The i.i.d.\ approach}
\cite{gross_quantum_2009,gross_recovering_2009,recht_simpler_2009}.
The known coefficients are obtained by sampling $m$ times \emph{with}
replacement. The draw-back here is that, with fairly high probability,
some coefficients will be selected more than once. To understand why
this is undesirable, we need to recall some technical definitions from
\cite{gross_recovering_2009}.
Let $A_1,\dots, A_m$ be random variables taking values in $[1,n^2]$.
For now, assume the $A_i$'s are distributed uniformly and
independently. Let $\{w_a\}_{a=1}^{n^2}$ be an orthonormal Hermitian
basis in the space of $n\times n$-matrices.
A central object in the analysis is the \emph{sampling operator},
defined as
\begin{equation}\label{eqn:r}
\mathcal{R}: \rho \mapsto
\frac{n^2}{m} \sum_{i=1}^m \tr(\rho w_{A_i}) \,w_{A_i}.
\end{equation}
If the $A_i$ are all distinct, then $\frac{m}{n^2}\mathcal{R}$ is a
projection operator. If, on the other hand, some basis elements occur
more than once, the spectrum of the sampling operator will be more
complicated. More importantly, the operator norm $\|\frac{m}{n^2}\mathcal{R}\|$
may become fairly large. The latter effect is undesirable, as the
logarithm of the operator norm appears as a multiplicative constant in
the final bound on the number of coefficients which need to be known
in order for the reconstruction process to be successful.
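To make this effect concrete, the following small numerical sketch (added for illustration; it is not part of the original argument, and the code is ours) computes $\|\frac{m}{n^2}\mathcal{R}\|$ for sampling with and without replacing. For any orthonormal basis, this norm equals the largest multiplicity among the drawn indices $A_i$, so it equals $1$ exactly when all indices are distinct.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def scaled_sampling_operator_norm(n, m, with_replacement):
    # In the vec-representation, (m/n^2) R = sum_i |w_{A_i}><w_{A_i}|,
    # so for an orthonormal basis {w_a} its eigenvalues are the
    # multiplicities of the distinct indices among A_1, ..., A_m.
    idx = rng.choice(n * n, size=m, replace=with_replacement)
    return np.bincount(idx, minlength=n * n).max()

n, m = 20, 200
print(scaled_sampling_operator_norm(n, m, with_replacement=False))  # 1 (projection)
print(scaled_sampling_operator_norm(n, m, with_replacement=True))   # typically > 1
\end{verbatim}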
There seem to be three ways to cope with this problem. First, use the
worst-case estimate $\|\frac{m}{n^2}\mathcal{R}\|\leq m$ (done in Section~II.C
of \cite{gross_recovering_2009}). Second, use the fact that the
operator norm is very likely to be of order $O(\log n)$ (suggested at
the end of Section~II.C in \cite{gross_recovering_2009} and
implemented in later versions of
\cite{recht_simpler_2009}). Third, prove that the arguments in
\cite{gross_recovering_2009} remain valid when the $A_i$'s are chosen
without replacement. Supplying such a proof is the purpose of the
present note.
Following earlier work \cite{candes_exact_2009,candes_power_2009},
Ref.~\cite{gross_recovering_2009} reduces the
analysis of the matrix recovery problem to the problem of controlling
the operator norm of various linear functions of $\mathcal{R}$ (c.f.\ Lemma~4
and Lemma~6 of \cite{gross_recovering_2009}). This, in turn, is done
by employing a large-deviation bound for the sum of independent
matrix-valued random variables, which was derived in
\cite{ahlswede_strong_2002}. Below, we point out that in some
situations this bound remains valid when the random variables are not
independent, but represent sampling without replacing.
\subsection{Statement}
Let $C$ be a finite set. For $1\leq m\leq |C|$, let $X_i$ be a random
variable taking values in $C$ with uniform probability. We assume that
all the $X_i$ are independent, so that ${\bf X}=\langle X_1, \dots,
X_m\rangle$ is a $C^m$-valued random vector modeling sampling
\emph{with} replacement from $C$. Likewise, let ${\bf Y}=\langle Y_1,
\dots, Y_m\rangle$ be a random vector of $C$'s sampled uniformly
\emph{without} replacement.
We are mainly interested in the case where $C$ is a finite set of
Hermitian matrices with some additional properties: we assume that the
set is centered, $\mathbbm{E}[X_i]=0$, and that there are constants
$c,\sigma_0\in\mathbbm{R}$ bounding the operator norm, $\|X_i\|\leq c$, and the
variance, $\|\mathbbm{E}[X_i^2]\|\leq\sigma_0^2$, of the random variables. Then:
\begin{theorem}[Operator-Bernstein inequality]\label{thm:bernstein}
With the definitions above, let
$S_{\bf X}=\sum_{i=1}^m X_i$ and
$S_{\bf Y}=\sum_{i=1}^m Y_i$.
Let $V=m \sigma_0^2$. Then for both $S=S_{\bf X}$ and $S=S_{\bf Y}$
it holds that
\begin{equation}\label{eqn:bernstein}
\Pr\big[\|S\|>t\big]
\leq 2 n \exp\left(-\frac{t^2}{4V}\right),
\end{equation}
for $t \leq 2 V/c$, and
\begin{equation}\label{eqn:bernstein2}
\Pr\big[\|S\|>t\big]
\leq 2 n \exp\left(-\frac{t}{2c}\right),
\end{equation}
for larger values of $t$.
\end{theorem}
The version involving $S_{\bf X}$ has been proved in
\cite{gross_recovering_2009} as a minor variation of the
operator-Chernoff bound from \cite{ahlswede_strong_2002}. In the
proof, the failure probability is bounded from above in terms of the
``operator moment-generating function''
\begin{equation*}
M_{\bf X}(\lambda)=\mathbbm{E}[\tr\exp(\lambda S_{\bf X})].
\end{equation*}
To establish the more general statement, it would be sufficient to
show that $M_{\bf Y}\leq M_{\bf X}$. In fact, this relation is
well-known to hold for real-valued random variables. One popular way
of proving it involves the notion of \emph{negative association}
\cite{joag-dev_negative_1983,dubhashi_concentration_2009}. Indeed, the
author of \cite{gross_recovering_2009} tried to generalize this
concept to the case of matrix-valued random variables, but failed to
overcome its apparent dependency on the \emph{total} order
of the
real numbers. However, he overlooked a much older and more
elementary argument given in \cite{hoeffding_probability_1963}, which
only relies on certain convexity properties and applies without change
to the matrix-valued case (see below).
\subsection{Implications}
As a consequence of Theorem~\ref{thm:bernstein}, the analysis in
Section~II.C of \cite{gross_recovering_2009} can be simplified and
improved, by setting the constant $C$ equal to one. The remark at the
end of that section applies. In particular, in the rest of that paper,
one may assume that $\|\Delta_T\|_2 < n^{1/2}\|\Delta_T^\bot\|_2$.
Thus, the conditions on the certificate $Y$ in Section~II.E may be
relaxed to $\|\mathcal{P}_T Y - \operatorname{sgn} \rho\|_2 \leq
\frac{1}{2 n^{1/2}}$. This implies that $l$, the number of iterations
of the ``golfing scheme'', may be reduced to $l=\lceil \log_2(2 n^{1/2}
\sqrt r) \rceil $. The estimates on $|\Omega|$ in Theorems~1, 2, and 3
therefore all improve by a factor of $\frac{\log_2 n^2}{\log_2
n^{1/2}} = 4$.
In \cite{recht_simpler_2009}, Proposition~3.3 becomes superfluous. The
final bounds improve accordingly.
The consequences are more pronounced for an upcoming detailed analysis
\cite{becker__2009} of noise resilience (in the spirit of
\cite{candes_matrix_2009}) of quantum mechanical applications.
The present note makes no statements about approaches which either
rely on the Bernoulli model, or use the non-commutative Kintchine
inequality instead of the operator Chernoff bound
\cite{candes_exact_2009,candes_power_2009,
candes_robust_2009}.
Finally, note that the ``golfing scheme'' employed in
\cite{gross_quantum_2009,gross_recovering_2009} demands that $l$
independent batches of coefficients be sampled. As a consequence of
Theorem~\ref{thm:bernstein}, every single batch may be assumed to be
drawn without replacement. However, for technical reasons, it is still
necessary that the batches remain independent. This does not
constitute a problem. Indeed, let $\Omega$ be the set of distinct
coefficients used by the golfing scheme. It is shown that, with high
probability, there exists a ``dual certificate'' in the space spanned
by the basis elements corresponding to the coefficients in $\Omega$.
Since $\Omega$ is just a random subset of cardinality $|\Omega|\leq
m$, the probability that there is a dual certificate in the space
spanned by $m$ distinct random basis elements (obtained from sampling
without replacing) can only be higher. A very similar argument has
recently been given in \cite{candes_robust_2009}, where the golfing
scheme has been modified to work with the Bernoulli model.
\subsection{Proof}
In this section, we repeat an argument from
\cite{hoeffding_probability_1963} which implies that for all
$\lambda\in\mathbbm{R}$ the inequality $M_{\bf Y}(\lambda)\leq M_{\bf
X}(\lambda)$ holds. We emphasize that the proof of
\cite{hoeffding_probability_1963} does not need to be modified in
order to apply to matrix-valued random variables. However, the
version given below makes some steps explicit which were omitted in
the original paper.
For now, let $C$ be any finite set; let ${\bf X}, {\bf Y}$ be as
above.
The central observation is that one can generate the distribution of
${\bf X}$ by first sampling ${\bf y}=\langle y_1, \dots, y_m\rangle$
without replacement, and then drawing the $\langle x_1, \dots, x_m
\rangle$ from $\{y_1, \dots, y_m\}$ in a certain (unfortunately not
completely trivial) way.
To make that second step precise, we introduce a random partial
function ${\bf Z}$ from $C^m$ to $C^m$. The domain of ${\bf Z}$ is the set
of vectors ${\bf y}\in C^m$ with pairwise different components ($y_i
\neq y_j$). Given such a vector ${\bf y}$, we sequentially assign
values to the components $Z_1, \dots, Z_m$ of ${\bf Z(y)}$ by sampling
from $\{y_1, \dots, y_m\}$ according to the following recipe. At the
$k$th step, let $D_k$ be the subset of $\{y_1, \dots, y_m\}$ of values
which have already been drawn in a previous step. To get $Z_k$:
\begin{enumerate}
\item with probability $\frac{|D_k|}{|C|}$ take a random element from
$D_k$, and
\item
with probability $1-\frac{|D_k|}{|C|}$ take a random element from
the $\{y_1, \dots, y_m\}$ not contained in $D_k$.
\end{enumerate}
(Here, by a ``random'' element, we mean one sampled uniformly at
random from the indicated set). Then
\begin{lemma}\label{lem:emulate}
With the definitions above, ${\bf X}$ and ${\bf Z(Y)}$ are
identically distributed.
What is more, if $C$ is a subset of a vector space, then
\begin{equation}\label{eqn:symmetric}
\mathbbm{E}_{\bf Z}
\Big[\sum_{i=1}^m Z_i({\bf Y})\Big]
=\sum_{i=1}^m Y_i.
\end{equation}
\end{lemma}
\begin{proof}
Choose $k\in\{1,\dots,m\}$, let ${\bf x}\in C^m$. We compute the
conditional probability
\begin{equation*}
\operatorname{Pr}\big[
Z_k({\bf Y})=x_k \,|\,
Z_1({\bf Y})=x_1, \dots, Z_{k-1}({\bf Y})=x_{k-1}
\big].
\end{equation*}
If there is a $j<k$ such that $x_k = x_j$, then, according to
the first rule above, the probability is
\begin{equation*}
\frac{|D_k|}{|C|} \frac1{|D_k|} = \frac{1}{|C|}.
\end{equation*}
Otherwise, by the second rule, the probability reads
\begin{equation*}
\left(1-\frac{|D_k|}{|C|}\right) \frac1{|C|-|D_k|} = \frac1{|C|}
\end{equation*}
as well. Iterating:
\begin{eqnarray*}
&&
\Pr[Z_1({\bf Y})=x_1, \dots, Z_m({\bf Y})=x_m] \\
&=&
\Pr[Z_1({\bf Y})=x_1, \dots, Z_{m-1}({\bf Y})=x_{m-1}]
\frac1{|C|} \\
&=&
\Pr[Z_1({\bf Y})=x_1, \dots, Z_{m-2}({\bf Y})=x_{m-2}]
\frac1{|C|^2} \\
&=& \dots = \frac1{|C|^m}.
\end{eqnarray*}
This proves the first claim.
We turn to the second statement. The left hand side of
(\ref{eqn:symmetric}) is manifestly a linear combination of the
random variables $Y_i$. From the definition of ${\bf Z}$, it is also
invariant under any permutation $Y_i \mapsto Y_{\pi(i)}$. As a
linear and symmetric function, it is of the form
$K \sum_{i=1}^m Y_i$ for some constant $K$. To compute $K$, we use the
fact that the $Y_i$ are identically distributed, so that
\begin{eqnarray*}
\mathbbm{E}_{\bf Y} \Big[\mathbbm{E}_{\bf Z}
\Big[\sum_{i=1}^m Z_i({\bf Y})\Big]\Big]
&=& m\,\mathbbm{E}[Y_1], \\
\mathbbm{E}_{\bf Y}\Big[K \sum_{i=1}^m Y_i\Big]&=&K m\,\mathbbm{E}[Y_1].
\end{eqnarray*}
Thus $K=1$ and we are done.
\end{proof}
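The two-step construction of ${\bf Z(Y)}$ is also easy to simulate. The following short sketch (ours, for illustration only) implements the two rules above and checks empirically that ${\bf Z(Y)}$ is uniformly distributed on $C^m$, in accordance with the first claim of the lemma.
\begin{verbatim}
import random
from collections import Counter

random.seed(1)

def sample_Z_of_Y(C, m):
    # Step 1: draw Y = (y_1, ..., y_m) uniformly without replacement.
    y = random.sample(C, m)
    # Step 2: draw Z_1, ..., Z_m from {y_1, ..., y_m} using the two rules.
    drawn, z = [], []
    for _ in range(m):
        if random.random() < len(drawn) / len(C):
            z.append(random.choice(drawn))                         # rule 1
        else:
            new = random.choice([v for v in y if v not in drawn])  # rule 2
            drawn.append(new)
            z.append(new)
    return tuple(z)

C, m, trials = list(range(4)), 2, 100000
counts = Counter(sample_Z_of_Y(C, m) for _ in range(trials))
for outcome in sorted(counts):
    print(outcome, round(counts[outcome] / trials, 4))  # each close to 1/16
\end{verbatim}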
Now let $f$ be a convex function on the convex hull of $C$. Using
Jensen's inequality and
Lemma~\ref{lem:emulate},
\begin{eqnarray*}
\mathbbm{E}_{\bf X} \Big[ f\big(\sum_{i=1}^m X_i\big) \Big]
&=&
\mathbbm{E}_{\bf Y} \mathbbm{E}_{\bf Z}
\Big[ f\big(\sum_{i=1}^m Z_i({\bf Y})\big) \Big] \\
&\geq&
\mathbbm{E}_{\bf Y}
\Big[ f\big(
\mathbbm{E}_{\bf Z}\big[
\sum_{i=1}^m Z_i({\bf Y})
\big]
\big) \Big] \\
&=&
\mathbbm{E}_{\bf Y}
\Big[ f\big(
\sum_{i=1}^m Y_i
\big) \Big].
\end{eqnarray*}
Finally, specialize to the case where $C$ is a finite set of Hermitian
matrices. Since the function $c \mapsto \tr\exp(\lambda c)$ is convex
on the set of Hermitian matrices for all $\lambda\in\mathbbm{R}$, any upper
bound on moment generating functions derived for matrix-valued
sampling with replacing is also valid for sampling without replacing.
\bibliographystyle{IEEEtran}
\section{Introduction}
When transporting data through a wireless mobile ad-hoc network, the
Delay/Disruption-Tolerant Network (DTN)~\cite{dtn_fall_sigcomm}
paradigm uses node mobility as an advantage while compromising on
message delivery delays~\cite{GrossglauserTse2002}. Message forwarding
decisions are made on a \textit{per-encounter} basis, for example by
using utility functions based on aggregating statistics on node
meeting probabilities~\cite{lindgren03,daly07,LER}. At any given
time, a node's vision of the network topology is limited to its
current neighbor. It does not have complete or even local knowledge of
the actual network topology as in the conventional Mobile Ad-hoc
Network (MANET) routing schemes. While this makes perfect sense in
extremely sparse networks~\cite{Burgess:2006,crawdad}, there are
situations where a highly mobile network is dense and sufficiently
well connected to provide end-to-end connectivity between a
significant subset of its nodes.
These nodes may even form small islands of stability. Using MANET
principles within such islands can bring great improvements. Indeed,
it considerably increases each node's information of its local
topology, thus leading to better forwarding decisions. When high
mobility rates and more generally high link instabilities reduce route
life-times and threaten network-wide end-to-end connectivity, a MANET
routing protocol can still succeed locally even if it fails globally.
In this paper we propose HYMAD, a Hybrid DTN-MANET routing protocol.
HYMAD combines techniques from both traditional ad-hoc routing and DTN
approaches. HYMAD periodically scans for network topology changes and
builds temporary disjoint groups of connected nodes. Intra-group
delivery is performed by a conventional ad-hoc routing protocol and
inter-group delivery by a DTN protocol.
HYMAD constantly adapts to the dynamics of the wireless ad-hoc network
using only topological information. As in traditional ad-hoc routing,
no extra information on geographical location or social community
membership is required. It does not rely on a priori knowledge of
connectivity patterns or inter-meeting times. This makes HYMAD
amenable to implementation in a DTN stack or ad-hoc routing
protocol~\cite{JOTT06}. In a dense network, HYMAD can function
similarly to a traditional MANET protocol. In the other extreme case
of very sparse connectivity, each node is a group on its own and HYMAD behaves like
a classical DTN routing protocol. In any other intermediate case its
hybrid nature takes over.
We implemented the HYMAD hybrid approach with a self-stabilizing group
service~\cite{r_operators,DKP08} and the multi-copy Spray-and-Wait
protocol as the DTN routing scheme~\cite{spyro_sw}. We evaluated the
scheme by performing simulation runs on the Rollernet data
set~\cite{tournoux08_rollernet}, an example of a highly dynamic ad-hoc
network, and show that it brings substantial performance improvements
over \textit{pure} Spray-and-Wait.
In the next section, we further describe how our hybrid approach
positions itself compared to existing DTN and MANET approaches. In
section~\ref{hymad}, we describe the HYMAD routing protocol
principles. We explain how nodes can agree on forming disjoint groups
and how such groups rather than individual nodes can be used as the
basis for DTN routing. We then evaluate the scheme on a real data set,
the Rollernet experiment, in section~\ref{results}.
Finally we conclude our work in
section~\ref{conclusion}.
\section{Routing in a mobile wireless network}
\label{comparison}
\begin{figure}
\centering
\scalebox{0.8}{\includegraphics{mob_density}}
\caption{Mobility vs Density: when different paradigms apply}
\label{mod_density}
\end{figure}
Mobile wireless ad-hoc networks were first studied under the
assumptions of moderate node mobility and sufficient density to ensure
end-to-end connectivity. Both conditions are necessary for traditional
MANET approaches, be they proactive
or reactive.
Let us characterize the various occurrences of mobile wireless networks
along the two main parameters of node density and node mobility. In
Fig.~\ref{mod_density}, which maps the different routing approaches on
the bi-dimensional mobile wireless network space, traditional MANET
routing appears in the top left corner.
When the density of nodes diminishes end-to-end connectivity can
disappear. In such sparse networks nodes have very few, if any,
neighbors within their transmission ranges. The topology eventually
splits into several non-communicating connected components. This is
typically the realm of Delay Tolerant Networking which one can further
subdivide in two~\cite{BorrelAmmar07}: the Assisted DTNs (A-DTN), in
case of low mobility of nodes, or Unassisted DTNs (U-DTN) where
mobility is high. The latter corresponds to traditional DTN scenarios.
Routing in A-DTNs typically involves special mobile
nodes, known as message ferries or data mules, which relay the messages
between the separate connected components
~\cite{ZA-ACMMOBIHOC2004,Shah2003}. The packet-switching method of
MANETs is replaced with a store-and-forward approach.
When the mobility in sparse networks increases, mobile nodes begin to
meet others. This is the traditional DTN scenario, where nodes forward
one or more copies of a given message until it reaches its
destination. There are many strategies for optimizing the forwarding
decision. The most straightforward approaches, such as Epidemic or
Spray-and-Wait~\cite{spyro_sw} do not require nodes to acquire
information on the others' positions, movements or trajectories. More
elaborate schemes involve a utility function where each node collects
direct and indirect knowledge of other nodes' meeting
probabilities. They require a certain learning period to aggregate
statistics before making good forwarding decisions. For example,
Lindgren et al.~\cite{lindgren03} use past encounters to predict the
probability of meeting a node again while Daly et al.~\cite{daly07}
use local estimates of betweeness and similarity.
In dense networks, conventional MANET protocols start to break down
under high mobility even if the network is almost always fully
connected. Indeed the sheer instability of the links would result in a
deluge of topology updates in the proactive case and \textit{route
error} and new \textit{route requests} messages in the reactive
case. DTN protocols, on the other hand, can handle high mobility
regardless of the density of the network. However, by narrowly focusing
on per-encounter events, they ignore a lot of available
information. For example, simply asking nodes to regularly broadcast
a list of their neighbors would give each node a picture of its
two-hop neighborhood even under high mobility. Repeat this once and
every node knows its three-hop neighborhood. A node may therefore
have a topology ``knowledge horizon'' which determines how far into
the real topology a node can ``see''. The more extreme the mobility,
the shorter the ``horizon''.
The Hybrid DTN-MANET approach that we advocate in this paper aims at
filling the gap for efficient routing in highly connected and highly
mobile networks, which have so far, to the best of our knowledge,
received little attention. Hybrid DTN-MANET routing, like the HYMAD
protocol that we describe below, combines the resilience of DTNs with
the greater knowledge of local network topology provided by a MANET
protocol. It adapts naturally to the dynamics of the network and its
applicability spans a large spectrum of the mobile wireless network
space.
\section{The HYMAD protocol}
\label{hymad}
\subsection{Overview}
The core idea in HYMAD is to use whole groups of nodes instead of
individual nodes as the focus of a DTN protocol. The analogy is
detailed as follows:
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{m{2.5cm}m{5.5cm}}
DTN & HYMAD \\
\hline
Node & Group of nodes \\
A node has message $m$ & One node in the group has message $m$ and all other nodes in the group know that. \\
Two nodes meet & Two disjoint groups become connected. \\
\hline
\end{tabular}
\end{center}
Each node $u$ regularly broadcasts a list detailing, for each group
member $v$ including itself, the following elements:
\begin{enumerate}
\item The minimal number of hops from $u$ to $v$.
\item A list of the messages held by $v$.
\item A bit indicating if $v$ is a \textit{border node}
(i.e. in contact with other groups).
\end{enumerate}
The first two elements are necessary for the inter-group routing
protocol. The second one in particular allows a group to agree on what
messages it carries and which node (hereafter called the
message's custodian) specifically holds it. The last one enables the use of
intra-group distance vector routing. As in traditional distance vector
algorithms, the number of iterative broadcasts necessary for all
members of a group to agree on this information is equal to the
diameter of the group.
HYMAD then uses a DTN protocol to transfer messages between
groups. The approach is generic and many existing DTN protocols could be employed.
In this paper, we use Spray-and-wait~\cite{spyro_sw} to forward messages
between disjoint groups. As in Spray-and-Wait, the source of a message
will create a certain number of copies of it. In HYMAD however, this
source node is part of a group and copies of the message will be
distributed among the adjacent groups instead of simply the nodes that
the source encounters. If a group has more than one copy, it will, in
turn, distribute extra copies to its other adjacent groups. If a group
has just one copy it will wait until encountering the destination's
group to transfer it. Once inside the destination's group, the
intra-group routing protocol delivers the message to the destination.
\subsection{Intra-group routing}
\label{intra_group}
\begin{figure}
\centering
\scalebox{0.8}{\includegraphics{group_spread}}
\caption{Self-stabilizing groups: convergence in two iterations}
\label{group_spread}
\end{figure}
In HYMAD, the intra-group routing is handled by a simple distance
vector algorithm.
The nodes are dynamically grouped with a distributed network partitioning algorithm.
In our implementation, we chose to consider
\emph{diameter-constrained} groups. A group will accept new members as
long as its diameter is less than a maximum diameter parameter
($D_{max}$). If a group's diameter expands due to internal link
failure, then some members are excluded to satisfy the diameter
constraint. Ducourthial et al.~\cite{DKP08} propose a
self-stabilizing, asynchronous distributed algorithm that achieves
this using an \mbox{$r-operator$} on a slightly modified distance
vector. This algorithm converges in $O(D_{max})$ iterations. The proof
of self-stabilization using asynchronous message passing can be found
in~\cite{r_operators}.
The main ideas behind group creation and modification are illustrated
in Fig.~\ref{group_spread} for a maximum diameter $D_{max}=2$. In the
first iteration, node $a$ begins by broadcasting the distance vector
$(a:0)$. Nodes $b$,$c$ and $d$ decide they want to join the group and
broadcast $(b:0,a:1)$, $(c:0,a:1)$ and $(d:0,a:1)$ respectively. After
receiving the broadcast from $d$, node $e$ also decides that it wants
to join the group and broadcasts $(e:0,d:1,a:2)$ (or
$(e:0,d:1,c:1,a:2)$ if $c$ spoke before $d$). In the second iteration,
$a$ now broadcasts $(a:0,b:1,c:1,d:1)$, $d$ realizes that the distance
between $b$ and $e$ is greater than $D_{max}$ and therefore chooses to
exclude $e$ from the group and broadcasts $(d:0,a:1,c:1,b:2)$. Finally
$e$ understands that it is not part of the group. After two
iterations, the group has stabilized on $a,b,c,d$. Now lets suppose
that at a later date the link between $a$ and $c$ goes down. Node $c$
now only receives the broadcasted distance vector $(d:0,a:1,b:2)$ from
$d$. It then understands that it is no longer part of the group. As is
obvious from this example, a given topology can result in very
different groups depending on the order in which the nodes speak.
In this paper, we used this algorithm in a proactive fashion where
each node periodically runs the algorithm and broadcasts its
distance vector. Group composition therefore changes in reaction to
topology changes rather than routing needs.
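To fix ideas, the following toy sketch (ours; the actual algorithm of \cite{DKP08} is distributed, asynchronous and self-stabilizing) mimics the example of Fig.~\ref{group_spread}: starting from node $a$, neighbors are accepted greedily as long as every pairwise hop-distance inside the group remains at most $D_{max}=2$, so that $e$ is kept out of the group.
\begin{verbatim}
import itertools
from collections import deque

def hop_distance(members, adj, src, dst):
    # BFS shortest-path length between src and dst, using only edges
    # whose endpoints both belong to `members`.
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in adj[node]:
            if nxt in members and nxt not in seen:
                if nxt == dst:
                    return d + 1
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def grow_group(seed, adj, d_max):
    # Greedy, centralized caricature of a diameter-constrained group.
    group, frontier = {seed}, deque([seed])
    while frontier:
        node = frontier.popleft()
        for cand in adj[node]:
            if cand in group:
                continue
            trial = group | {cand}
            if all(hop_distance(trial, adj, u, v) <= d_max
                   for u, v in itertools.combinations(trial, 2)):
                group.add(cand)
                frontier.append(cand)
    return group

# Topology read off the description of the group-formation figure
# (our reconstruction of the example).
adj = {"a": {"b", "c", "d"}, "b": {"a"}, "c": {"a", "d", "e"},
       "d": {"a", "c", "e"}, "e": {"c", "d"}}
print(sorted(grow_group("a", adj, d_max=2)))  # ['a', 'b', 'c', 'd']; e excluded
\end{verbatim}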
\subsection{Inter-group routing}
\label{inter_group}
\textit{Border nodes} take care of most of the inter-group DTN
routing. Indeed, the periodic broadcast protocol described in
\ref{intra_group} puts them in the unique position of knowing both the
composition of two adjacent groups as well as the messages they
hold. \textit{Border nodes} may request the custodian of a message to
transfer one or more copies to it.
When a \textit{border node} learns that its group has acquired copies
of a message that a neighboring group does not possess, it has the
following choices:
\begin{itemize}
\item If the message's destination is in the neighboring group,
request the message from its custodian and pass it on.
\item If its group has more than one copy of the message, request
$min\left(1,\left\lfloor \frac{n_c}{n_b} \right\rfloor \right)$ copies from its
custodian and pass them on. ($n_c$ is the number of copies and
$n_b$ the current number of border nodes in the group). The idea is
to fairly spread a group's copies among its adjacent groups.
\item Otherwise do nothing
\end{itemize}
Conversely, when a \textit{border node} receives copies of a new
message from an adjacent group it can either:
\begin{itemize}
\item If the destination is in its group, forward the message to
it using the inter-group routing protocol.
\item Otherwise, randomly select a group member to be the custodian
for the copies. This is done to spread the burden over members of a
group.
\end{itemize}
With this in place, when a node wants to send a message, it simply
adds it to its own list of messages. Through the intra-routing protocol,
in $O(D_{max})$ time, the group's \textit{border nodes} will become
aware of the new message and request copies to forward it on to the
adjacent groups.
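For readability, the two sets of rules above can be condensed into the following illustrative snippet (ours; the function and variable names are hypothetical and the copy-count rule is transcribed literally from the text).
\begin{verbatim}
import random

def copies_to_request(n_copies_in_group, n_border_nodes,
                      dest_in_neighbor_group):
    # Decision of a border node facing an adjacent group that does not
    # yet hold the message.
    if dest_in_neighbor_group:
        return 1                  # hand the message to the destination's group
    if n_copies_in_group > 1:     # spread the group's copies
        return min(1, n_copies_in_group // n_border_nodes)
    return 0                      # single copy: wait for the destination's group

def handle_received_copies(dest_in_my_group, group_members, rng):
    # Decision of a border node that has just received copies of a new message.
    if dest_in_my_group:
        return "forward to the destination via intra-group routing"
    custodian = rng.choice(sorted(group_members))  # spread the storage burden
    return "store the copies at custodian " + custodian

print(copies_to_request(n_copies_in_group=4, n_border_nodes=2,
                        dest_in_neighbor_group=False))
print(handle_received_copies(dest_in_my_group=False,
                             group_members={"u", "v", "w"},
                             rng=random.Random(0)))
\end{verbatim}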
\subsection{Discussion}
An internal link failure may cause a group to split into several
separate sub-groups due to its diameter increasing. In such a
situation, each sub-group only has a fraction of the messages of the
original group. Fortunately this is not really a problem. Firstly, the
intra-group protocol detailed in~\ref{intra_group} ensures that nodes
will update their message lists accordingly when removing nodes from
their group member lists, thereby preventing a sub-group from advertising
messages it does not have or any other such incoherences. Secondly,
certain subgroups may still be connected to each other. If either
sub-group has more than one copy of some messages, these will be
copied over the other sub-group. In any case HYMAD recovers gracefully
from group splits.
Choosing a diameter parameter for the group self-stabilization
algorithm involves a trade-off. On the one hand, increasing it will
expand each node's individual ``knowledge horizon'' of the actual
network topology. Fewer copies will cover a larger portion of the
network, which will naturally lead to faster delivery. On the other
hand, this comes at the cost of increasing the convergence time and
overhead of the group service. Ideally, the convergence speed should
be considerably faster than the speed of topology changes. In a sense,
extreme mobility may fundamentally limit a node's possible knowledge
of the network's topology. The increased overhead results from each
node regularly broadcasting a list of all messages in its
group. Larger groups mechanically lead to longer control messages. If
one is willing to incur the extra cost, the diameter can be set to
encompass the entire network. In such a situation, HYMAD resembles a
resilient MANET routing protocol using store-and-forward for message
transfers. Furthermore, in many mobile wireless scenarios, there are
underlying social dynamics at work which can sometimes drive nodes to
gather into loose communities. $D_{max}$ should be chosen so as to
allow the expected number of members per social group to neatly fit
into one self-stabilizing group.
\section{Results on Rollernet data}
\label{results}
\subsection{Methodology}
\begin{figure}[t]
\centering
\subfloat[Average node degree \label{avg_node_degree}]{\scalebox{0.55}{\includegraphics{avg_node_degree}}} \\
\subfloat[Number of connected components (ccs) \label{num_ccs}]{\scalebox{0.55}{\includegraphics{num_ccs}}} \\
\caption{The accordion effect in Rollernet}
\label{accordeon}
\end{figure}
\begin{figure}[t]
\centering
\scalebox{0.55}{\includegraphics{group_stability}}
\caption{Topology changes: average number of links within a given group
that appear or fail vs. the number of nodes that leave or join that
group}
\label{group_stab}
\end{figure}
We evaluate HYMAD's performance on
Rollernet~\cite{tournoux08_rollernet}, a highly connected and
extremely mobile connectivity trace. The Rollernet experiment involved
equipping 62 participants of the regular Sunday afternoon rollerblading
tour through Paris with contact loggers (Intel iMotes). In order to
witness different behavior profiles, the 62 Bluetooth loggers were
distributed among groups of friends, members of rollerblading
associations and staff operators. In particular, one member of the
staff was instructed to remain at the rear of the tour at all times while another
stayed in front for the entire duration of the experiment. This allows
us to get a rough sense of the relative geographic position of the
participants by looking at the connectivity graph. A snapshot of the
connectivity graph can be seen in Fig.~\ref{diffusion} and an
animation is available online~\cite{rollernet_youtube}.
The Rollernet trace is ideal for evaluating HYMAD. Indeed it exhibits
the following characteristics:
\begin{itemize}
\item \emph{High density:} Contrary to many DTN traces, Rollernet is
\emph{not} sparse. A look at Fig.~\ref{avg_node_degree}, shows that
the average node degree of the connectivity graph oscillates between
2.9 and 7.8. The average for the whole tour is 4.8.
\item \emph{High mobility:} Everyone eventually meets everyone
else. On average, each of the 62 nodes meets 56 others during the
course of the tour. Additionally the topology evolves extremely
quickly. The average lifetime of a given link is 26 seconds. The
average lifetime of a shortest path between two nodes is 15.5
seconds. Considering that the sampling period is 15 seconds, it
follows that links are highly unstable and valid routes transient.
\item \emph{Accordion Effect:} This is an interesting consequence of
the rollerblading context. The tour alternates between acceleration
and deceleration phases in which the network topology respectively
expands, leading to several separate connected component, and
contracts, leading to a single connected
component. Fig.~\ref{num_ccs} shows that the number of connected
components varies between 1 and 7 (17 if counting isolated
nodes). In fact, Figures \ref{avg_node_degree} and \ref{num_ccs} have
roughly alternating phases.
\end{itemize}
We compare HYMAD to both Epidemic and regular Spray-and-Wait. Epidemic
provides an upper bound on achievable performance in terms of both
delay and delivery ratio while Spray-and-Wait provides a DTN
state-of-the-art comparison. We slightly adapted Spray-and-Wait to the
more connected context of Rollernet. A node no longer splits half of
its copies with the other nodes it meets, but instead splits its
copies equally among itself and its neighbors.
We chose to use $D_{max}=2$ for all our results because it ensures a
very fast convergence rate, keeps the overhead reasonable and seemed
to accurately reflect the size of separate connected components (small
groups of friends for example), particularly during the accelerating
phases. We also tested greater values of $D_{max}$, which yield, at
the cost of greater overhead, a small but noticeable improvement in
the delivery ratio.
The sampling period of the Rollernet traces is 15 seconds. We did not
try to extrapolate the events (link failures, new contact
opportunities, etc..) in the time between multiples of 15 seconds. We
also assume that 15 seconds is enough for a message to traverse any
connected component in Rollernet. Therefore, all our results on delays
when simulating protocols on top over the Rollernet traces will be in
multiples of 15 seconds.
\subsection{Performance}
\begin{figure}[t]
\centering
\subfloat[5 copies \label{cfs5}]{\scalebox{0.50}{\includegraphics{routing_d2_nc5}}} \\
\subfloat[20 copies \label{cfs20}]{\scalebox{0.50}{\includegraphics{routing_d2_nc20}}}
\caption{Comparison of delivery probabilities}
\label{cfs}
\end{figure}
Extremely high link instability could mean one of two things. Either
nodes only briefly stay in the vicinity of one another, or nodes
may remain geographically close but the link fails for other
reasons, such as briefly moving out of transmission range or excessive
contention.
point of view, how many of its group members changed (number of new
members + number of excluded members) and how many links between
members of its group changed (either by appearing or
disappearing). The averages for all the nodes are shown in
Fig.~\ref{group_stab}. The composition of a given group appears much
more stable than the links among its members. This supports the idea
that small communities like groups of friends tend to stick together
during the tour and that link failures do not necessarily mean that
two nodes have clearly moved away from each other. Furthermore, the
rate of change of group composition, unlike most other metrics, seems
to smooth the accordion effect. This suggests that these groups are
indeed a good support for our hybrid approach.
To evaluate the performance of HYMAD we replayed the 3000 seconds of
the trace. Every 15 seconds, during the first 2000 seconds, we
randomly selected 60 pairs of nodes which were instructed to send a
message to each other using Epidemic, HYMAD and Spray-and-Wait. This
averages results over both the connected and disconnected phases of
Rollernet. Figures \ref{cfs5} and \ref{cfs20} were obtained using the
aggregate data from 10 runs of this scenario with respectively 5 and
20 maximum number of copies for HYMAD and Spray-and-Wait. They compare
the cumulative distribution function of the delivery probability for
the three protocols. A few observations can be made:
\begin{itemize}
\item HYMAD clearly outperforms Spray-and-Wait in terms of delay and
quickly achieves comparable performance with Epidemic.
\item With a low number of copies, HYMAD also outperforms
Spray-and-Wait in terms of delivery ratio for reasons explained
hereafter.
\item Predictably, performance increases with the number of
copies. The maximum number of groups (including singletons) obtained
at a given time is 29. Therefore, HYMAD with 20 copies will spray
practically the entire network and therefore quickly and reliably
reach the destination if in the same connected component as the
source.
\end{itemize}
Spray-and-Wait's simple forwarding scheme performs very well
\emph{under the assumption of independent and identically distributed
node mobility}~\cite{spyro_sw}. However this is absolutely
\emph{not} the case in Rollernet where groups of friends tend to stick
together. It is also usually \emph{not} the case in many real-world
situations where underlying social dynamics are often at work.
This can have an impact on performance. For example, when using just 5
copies, Spray-and-Wait simply fails to deliver about 5\% of messages
even after waiting for more than 15 minutes. Using 10 copies, the
average delay with Spray-and-Wait (133 seconds) is nearly three times
that of HYMAD (48 seconds). To further illustrate this point,
Fig~\ref{diffusion} compares the propagation of 10 copies after 15
seconds for HYMAD (Fig.~\ref{swg15}) and Spray-and-wait
(Fig.~\ref{sw15}). The rightmost node is the head of the rollerblading
tour. The bold lines represent intra-group links while the dashed gray
lines represent inter-group links. The nodes holding at least one copy
are represented by a diamond. In HYMAD's case, the destination is a
diamond meaning that our hybrid approach has delivered its message
within 15 seconds. On the other hand, the regular Spray-and-Wait
protocol distributed copies mainly within its own local group. These
nodes remain close to each other, thus increasing the delay. In this
particular case (Fig.~\ref{sw15}), it will take 525 seconds for a node
with a copy to meet the destination.
\begin{figure*}[t]
\centering
\includegraphics{legend} \\
\subfloat[HYMAD: success within 15 seconds. \label{swg15}]{\includegraphics{swg_15s}} \quad
\subfloat[Spray-and-Wait: the copies stagnate around the source. It will take a total of 525 seconds to hit the destination. \label{sw15}]{\includegraphics{sw_15s}}
\caption{Regular vs Hybrid Spray and Wait routing in Rollernet. (Partial view of the topology at t=15s)}
\label{diffusion}
\end{figure*}
\section{Conclusion and further work}
\label{conclusion}
In this paper we identified a new class of dense and highly mobile
networks not well addressed by conventional DTN or MANET approaches. We
proposed a new hybrid approach, HYMAD, that uses nodes' knowledge of
their local group topology to improve the performance of a simple DTN
protocol. In our case we used diameter-constrained groups along with
distance vector for intra-group routing and Spray-and-Wait for
inter-group routing. Simulations of our implementation in a dense and
highly mobile network show significant performance improvements over
regular Spray-and-Wait. Further work includes more comprehensive testing
against other real and synthetic mobility scenarios.
HYMAD is an example of a larger class of hybrid DTN-MANET routing
protocols which can handle a very wide spectrum of networks that
overlaps with those usually handled by either DTN or MANET. We
believe that the first results that we obtained are encouraging for
further research in this direction. In particular, other more
elaborate DTN/MANET protocol pairs could conceivably be used for intra
and inter-group routing and would be worth exploring.
\section*{Acknowledgments}
This work has been partially supported by the RNRT project Airnet under contract
01205
\small
\bibliographystyle{latex8}
\section{Preliminaries}
\setcounter{equation}{0} Finding new topological invariants of differentiable manifolds is still an open problem for geometers. The cohomology groups are such invariants. Finsler manifolds are interesting models for some physical phenomena, so their properties are also worth investigating, \cite{Bej}, \cite{Mir}. The cohomology groups of manifolds, sometimes related to foliations on them, have been studied in the last decades, \cite{T}, \cite{V}, \cite{Bull}. Our present work intends to develop the study of Finsler manifolds and of the foliated structures of the tangent bundle of such a manifold.
For the beginning, we present two foliations on the slit tangent manifold $TM^0$ of a $n$-dimensional Finsler manifold $(M,F)$, following \cite{Bej}. In this paper the indices take the values $i,j,i_1,j_1,...$ $=\overline{1,n}$ and $a,b,a_1,b_1,...$$=\overline{1,n-1}$.
Let $(M,F)$ be an $n$-dimensional Finsler manifold and $G$ the Sasaki-Finsler metric on its slit tangent manifold $TM^0$. The vertical bundle $VTM^0$ of $TM^0$ is the tangent (structural) bundle of the vertical foliation $F_V$ determined by the fibers of $\pi :TM^0\rightarrow M$. If $(x^i,y^i)_{i=\overline{1,n}}$ are local coordinates on $TM^0$, then $VTM^0$ is locally spanned by $\{\frac{\partial}{\partial y^i}\}_i$. A canonical transversal (also called horizontal) distribution is constructed in \cite{Bej} as follows. We denote by $(g^{ij}(x,y))_{i,j}$ the inverse matrix of $g=(g_{ij}(x,y))_{i,j}$, where
\begin{eqnarray}
g_{ij}(x,y)=\frac{1}{2}\frac{\partial^2 F^2}{\partial y^i \partial y^j}(x,y),
\end{eqnarray}
and $F$ is the fundamental function of the Finsler manifold. Obviously, we have the equalities $\frac{\partial g_{ij}}{\partial y^k}$$=\frac{\partial g_{ik}}{\partial y^j}$$=\frac{\partial g_{jk}}{\partial y^i}$.
Then locally define the functions
\[G^i=\frac{1}{4}g^{ik}\left(\frac{\partial^2 F^2}{\partial y^k \partial x^h}y^h-\frac{\partial F^2}{\partial x^k}\right),\quad G^j_i=\frac{\partial G^j}{\partial y^i}.
\]
There exists on $TM^0$ an $n$-dimensional distribution $HTM^0$ locally spanned by the vector fields
\begin{eqnarray}
\frac{\delta }{\delta x^i}=\frac{\partial }{\partial x^i}-G_i^j\frac{\partial }{\partial y^j},\quad (\forall)i=\overline{1,n}.
\end{eqnarray}
The Riemannian metric $G$ on $TM^0$ satisfies
\begin{eqnarray}
G(\frac{\delta}{\delta x^i},\frac{\delta}{\delta x^j})=G(\frac{\partial}{\partial y^i},\frac{\partial}{\partial y^j})=g_{ij},\quad G(\frac{\delta}{\delta x^i},\frac{\partial}{\partial y^j})=0,\quad (\forall)i,j.
\end{eqnarray}
The local basis $\{\frac{\delta}{\delta x^i},\frac{\partial}{\partial y^i}\}_i$ is called adapted to the vertical foliation $F_V$ and we have the decomposition
\begin{eqnarray}
TTM^0=HTM^0\oplus VTM^0.
\end{eqnarray}
Now, let $Z$ be the globally defined vertical Liouville vector field on $TM^0$,
\begin{eqnarray}
Z=y^i\frac{\partial}{\partial y^i},
\end{eqnarray}
and $L$ the space of line fields spanned by $Z$. We call this space \textit{the Liouville distribution} on $TM^0$. The complementary orthogonal distributions to $L$ in $VTM^0$ and $TTM^0$ are denoted by $L'$ and $L^\perp$, respectively. It is proved, \cite{Bej}, that both distributions $L'$ and $L^\perp$ are integrable and we also have the decomposition
\begin{eqnarray}
VTM^0=L'\oplus L.
\end{eqnarray}
Moreover, we have, \cite{Bej}:
\begin{proposition}
a) The foliation determined by the distribution $L^\perp$ is just the foliation determined by the level hypersurfaces of the fundamental function $F$ of the Finsler manifold.
b) For every fixed point $x_0\in M$, the leaves of the Liouville foliation $F_{L'}$ determined by the distribution $L'$ on $T_{x_0}M$ are just the $c$-indicatrices of $(M,F)$:
\begin{eqnarray}
I_{x_0}M(c):\quad F(x_0,y)=c,\quad (\forall)y\in T_{x_0}M.
\end{eqnarray}
c) The foliation $F_{L'}$ is a subfoliation of the vertical foliation.
\end{proposition}
As we already saw, the vertical bundle is locally spanned by $\{\frac{\partial}{\partial y^i}\}_{i=\overline{1,n}}$ and it admits decomposition (1.6). In the following we give another basis on $VTM^0$, adapted to $F_{L'}$.
There are some useful facts which follow from the homogeneity of the fundamental function of the Finsler manifold $(M,F)$. By the Euler theorem on positively homogeneous functions we have, \cite{Bej},
\begin{eqnarray}
F^2(x,y)=y^iy^jg_{ij}(x,y),\quad \frac{\partial F}{\partial y^k}=\frac{1}{F}y^ig_{ki}, \quad y^i\frac{\partial g_{ij}}{\partial y^k}=0,\quad \forall k=\overline{1,n}.
\end{eqnarray}
Hence it results
\begin{eqnarray}
G(Z,Z)=F^2.
\end{eqnarray}
We consider the following vertical vector fields:
\begin{eqnarray}
X_k=\frac{\partial}{\partial y^k}-t_kZ,\quad k=\overline{1,n},
\end{eqnarray}
where the functions $t_i$ are defined by the conditions
\begin{eqnarray}
G(X_k,Z)=0, \forall k=\overline{1,n}.
\end{eqnarray}
The above conditions become
\[G(\frac{\partial}{\partial y^k}, y^i\frac{\partial}{\partial y^i})-t_kG(Z,Z)=0,
\]
so, taking into account also (1.3) and (1.9), we obtain the local expression of functions $t_k$ in a local chart $(U,(x^i,y^i))$:
\begin{eqnarray}
t_k=\frac{1}{F^2}y^ig_{ki}=\frac{1}{F}\frac{\partial F}{\partial y^k},\quad \forall k=\overline{1,n}.
\end{eqnarray}
If $(\tilde{U},(\tilde{x}^{i_1},\tilde{y}^{i_1}))$ is another local chart on $TM^0$ with $U \cap \tilde{U} \neq \emptyset$, then on $U \cap \tilde{U}$ we have:
\[\tilde{t}_{k_1}=\frac{1}{F^2}\tilde{y}^{i_1}\tilde{g}_{i_1k_1}=\frac{1}{F^2}\frac{\partial \tilde{x}^{i_1}}{\partial x^i}y^i \frac{\partial x^k}{\partial \tilde{x}^{k_1}}\frac{\partial x^i}{\partial \tilde{x}^{i_1}}g_{ki}=\frac{\partial x^k}{\partial \tilde{x}^{k_1}}t_k,
\]
so we obtain the following transformation rule for the vector fields (1.10):
\begin{eqnarray}
\tilde{X}_{i_1}=\frac{\partial x^k}{\partial \tilde{x}^{i_1}}X_k,\quad \forall i_1=\overline{1,n}.
\end{eqnarray}
By a straightforward computation, using (1.8), we obtain:
\begin{proposition}
\quad The functions $\{t_k\}_{k=\overline{1,n}}$ defined by (1.12) satisfy:
\begin{eqnarray}
\it{a}) \quad y^it_i=1;\quad y^i X_i=0;
\end{eqnarray}
\begin{eqnarray}
\it{b})\quad \frac{\partial t_l}{\partial y^k}=-2t_kt_l+\frac{1}{F^2} g_{kl},\quad Zt_k=-t_k,\quad \forall k,l=\overline{1,n};
\end{eqnarray}
\begin{eqnarray}
\it{c})\quad y^j\frac{\partial t_j}{\partial y^i}=-t_i,\quad \forall i=\overline{1,n},\quad y^i(Zt_i)=-1,\quad y^i(ZX_i)=0;
\end{eqnarray}
\end{proposition}
\begin{proposition}
There are the relations:
\begin{eqnarray}
[X_i,X_j]=t_iX_j-t_jX_i,
\end{eqnarray}
\begin{eqnarray}
[X_i,Z]=X_i,
\end{eqnarray}
for all $i,j=\overline{1,n}$.
\end{proposition}
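For instance, the second bracket can be verified directly from (1.10), using $[\frac{\partial}{\partial y^i},Z]=\frac{\partial}{\partial y^i}$ and the relation $Zt_i=-t_i$ from Proposition 1.2:
\[ [X_i,Z]=\Big[\frac{\partial}{\partial y^i},Z\Big]-[t_iZ,Z]=\frac{\partial}{\partial y^i}+(Zt_i)Z=\frac{\partial}{\partial y^i}-t_iZ=X_i.
\]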
By the conditions (1.11), $\{X_1,...,X_n\}$ are $n$ vector fields orthogonal to $Z$, so they belong to the $(n-1)$-dimensional distribution $L'$. It follows that they are linearly dependent and, from (1.14),
\begin{eqnarray}
X_n=-\frac{1}{y^n}y^aX_a,
\end{eqnarray}
provided we work in a local chart where the coordinate $y^n$ is nonzero.
We also proved the following result in \cite{Manea}:
\begin{proposition}
The system of vector fields $\{X_1,X_2,...,X_{n-1},Z\}$ of vertical vector fields is a locally adapted basis to the Liouville foliation $F_{L'}$, on $VTM^0$.
\end{proposition}
Complete proofs of the propositions in this section are given in \cite{Manea}.
More precisely, let $(\tilde{U},(\tilde{x}^{i_1},\tilde{y}^{i_1}))$, $(U,(x^i,y^i))$ be two local charts whose domains overlap, and on which $\tilde{y}^k$ and $y^n$, respectively, are nonzero (in every local chart on $TM^0$ there is at least one nonzero coordinate function $y^i$). The adapted basis in $\tilde{U}$ is $\{\tilde{X}_1,\tilde{X}_2,...,\tilde{X}_{k-1}, \tilde{X}_{k+1},...,\tilde{X}_n,Z\}$. In $U \cap \tilde{U}$ we have (1.13) and (1.19), hence
\[\tilde {X}_{i_1}=\sum_{i=1}^{n-1}(\frac{\partial x^i}{\partial \tilde{x}^{i_1}}-\frac{y^i}{y^n}\frac{\partial x^n}{\partial \tilde{x}^{i_1}})X_i;\quad X_j=\sum_{j_1=1, j_1 \neq k}^n(\frac{\partial \tilde{x}^{j_1}}{\partial x^j}-\frac{\tilde{y}^{j_1}}{\tilde{y}^k}\frac{\partial \tilde{y}^k}{\partial x^j})\tilde{X}_{j_1},
\]
for all $i_1=\overline{1,n}$, $i_1\neq k$, $j=\overline{1,n-1}$. One can see that the above relations also imply
\[ \frac{\partial x^i}{\partial \tilde{x}^{k}}-\frac{y^i}{y^n}\frac{\partial x^n}{\partial \tilde{x}^{k}}=-\sum_{i_1=1,i_1 \neq k}^{n}\frac{\tilde{y}^{i_1}}{\tilde{y}^k}(\frac{\partial x^i}{\partial \tilde{x}^{i_1}}-\frac{y^i}{y^n}\frac{\partial x^n}{\partial \tilde{x}^{i_1}}).
\]
By a straightforward calculation we find that the determinant of the change-of-basis matrix from $\{X_1,X_2,...,X_{n-1},Z\}$ to $\{\tilde{X}_1,\tilde{X}_2,...,\tilde{X}_{k-1}, \tilde{X}_{k+1},...,\tilde{X}_n,Z\}$ on $L'$ is equal to
\[ (-1)^{n+k}\frac{\tilde{y}^k}{y^n} det\left(\frac{\partial x^i}{\partial \tilde{x}^j}\right)_{i,j=\overline{1,n}}.
\]
\section{New types of vertical forms with respect to Liouville foliation on $TM^0$}
\setcounter{equation}{0}
Now, let $\{\delta y^i=dy^i+G_j^idx^j\}_{i=\overline{1,n}}$ be the dual basis of $\{\frac{\partial}{\partial y^i}\}_{i=\overline{1,n}}$ on $VTM^0$. We also consider the space $\Omega^0(TM^0)$ of differentiable functions on $TM^0$, the module $\Omega^{0,q}(TM^0)$ of vertical $q$-forms and the foliated derivative $d_{01}$ with respect to the vertical foliation on $TM^0$.
\begin{proposition}
The vertical $1$-form $\omega_0=t_i\delta y^i$ is globally defined and
\begin{eqnarray}
\omega_0(Z)=1,\quad \omega_0(X_a)=0,\quad \omega_0=d_{01}(\ln F),
\end{eqnarray}
for all $a=\overline{1,n-1}$, $X_a$ given by (1.10) and $F$ the fundamental function of the Finsler manifold.
\end{proposition}
\textit{Proof:} In $\tilde{U}\cap U$ we have
\[\tilde{\omega}_0=\tilde{t}_{i_1}\delta \tilde{y}^{i_1}=\frac{\partial x^i}{\partial \tilde{x}^{i_1}}t_i\frac{\partial \tilde{x}^{i_1}}{\partial x^j}\delta y^j =t_i\delta y^i =\omega_0.
\]
We also have $\delta y^i(Z)=y^i$, for all $i=\overline{1,n}$, and taking into account the first relation (1.14), \[\omega_0(Z)=1,\quad \omega_0(X_a)=t_i \delta y^i(\frac{\partial }{\partial y^a}-t_aZ)=t_i\delta^i_a-t_at_iy^i=0,\]
where $\delta^i_a$ is the Kronecker symbol. By relation (1.19) it also follows that $\omega_0(X_n)=0$. Finally, locally we have
\[d_{01}(\ln F)=\frac{\partial (\ln F)}{\partial y^i}\delta y^i=\frac{1}{F}\frac{\partial F}{\partial y^i}\delta y^i=\omega_0,
\]where we used relation (1.12).
The equality $\omega_0=d_{01}(\ln F)$ shows that $\omega_0$ is a $d_{01}$-exact vertical $1$-form and that the Liouville distribution $L'$ is defined by the equation $\omega_0=0$.
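In particular, since $d_{01}^2=0$, the vertical $1$-form $\omega_0$ is also $d_{01}$-closed,
\[d_{01}\omega_0=d_{01}d_{01}(\ln F)=0,
\]
a fact used below in the proof of Proposition 2.7.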
\begin{definition}
A vertical $q$-form $\omega \in \Omega ^{0,q}(TM^0)$ is called a vertical $(s,t)$-form or a $(0,s,t)$-form iff for vertical vector fields $Y_1,Y_2,...,Y_q$, $\omega(Y_1,...,Y_q)\neq 0$ only if $s$ arguments are in $L'$ and $t$ arguments are in $L$.
\end{definition}
Since $L$ is a line distribution, we can talk only about $(0,s,t)$-forms with $t\in \{0,1\}$. We denote the space of $(0,s,t)$-forms by $\Omega^{0,s,t}(TM^0)$. By the above definition, we have the equivalence
\begin{eqnarray}
\omega \in \Omega^{0,q-1,1}(TM^0)\quad \Longleftrightarrow \quad \omega(Y_1,...,Y_q)= 0,\quad (\forall)Y_1,...,Y_q \in \{X_1,...,X_{n-1}\},
\end{eqnarray}
where $\{X_i\}_{i=\overline{1,n-1}}$ is the local basis in $L'$ from proposition 1.4.
\begin{proposition}
Let $\omega$ be a nonzero vertical $q$-form. The following assertions are true:
a) $\omega \in \Omega ^{0,q,0}(TM^0)$ iff $i_Z\omega =0$, where $i_Z$ is the interior product with the vertical Liouville vector field $Z$.
b) The vertical $(q-1)$-form $i_Z\omega$ is a $(0,q-1,0)$-form.
c) $\omega \in \Omega ^{0,q-1,1}(TM^0)$ implies $i_Z\omega \neq 0$.
d) If there is a $(0,q-1,0)$-form $\alpha$ such that $\omega=\omega_0\wedge \alpha$, then $\omega \in \Omega ^{0,q-1,1}(TM^0)$.
\end{proposition}
\textit{Proof:} a) Let $\omega \in \Omega ^{0,q,0}(TM^0)$, hence $\omega(Y_1,...,Y_q)\neq 0$ only if all the arguments are in $L'$. So, $i_Z\omega $ is a $(q-1)$-form and $i_Z\omega(Y_1,...,Y_{q-1})=\omega (Z,Y_1,...,Y_{q-1})=0$, for all vertical vector fields $Y_1,...,Y_{q-1}$. That means $i_Z\omega=0$. Conversely, if $\omega$ is a $(0,q)$-form such that $i_Z\omega=0$, then $\omega(Y_1,...,Y_q)= 0$ whenever some argument $Y_i$, $i\in \{1,..., q\}$, equals $Z$. Hence $\omega$ can be nonzero only on $L'$, and by definition it is a $(0,q,0)$-form.
b) We have $i_Zi_Z\omega=0$, from the definition of a form on a manifold, and taking into account a), it results that $i_Z\omega$ is a $(0,q-1,0)$-form.
c) If $\omega$ is a nonzero $(0,q-1,1)$-form, then $\omega(Y_1,...,Y_q)\neq 0$ only if exactly one of the arguments is from the line distribution $L=\mathrm{span}\{Z\}$. Then $i_Z\omega (Y_1,...,Y_{q-1})\neq 0$ for some vertical vector fields $Y_1,...,Y_{q-1}\in L'$.
d) Let $\alpha$ be a form like in hypothesis, and $Y_1,...,Y_q$, $q$ arbitrary vertical vector fields.
\[ \omega(Y_1,...,Y_q)=(\omega_0 \wedge \alpha)(Y_1,...,Y_q)=\sum_{\sigma \in S_q}\epsilon(\sigma) \omega_0(Y_{\sigma (1)})\alpha (Y_{\sigma (2)},...,Y_{\sigma (q)}).
\]
But $\omega_0$ vanishes on $L'$, so if all the arguments $Y_1,...,Y_q$ are in $L'$, then every term of the above sum vanishes. Taking into account relation (2.2), we have $\omega \in \Omega^{0,q-1,1}(TM^0)$.
\begin{proposition}
For every vertical $q$-form $\omega$ there are $\omega_1\in \Omega^{0,q,0}(TM^0)$ and $\omega_2\in \Omega^{0,q-1,1}(TM^0)$ such that $\omega=\omega_1+\omega_2$, uniquely.
\end{proposition}
\textit{Proof:} Let $\omega$ be a nonzero vertical $q$-form. If $i_Z\omega =0$, then $\omega \in \Omega^{0,q,0}(TM^0)$ from Proposition 2.2, so $\omega =\omega +0$.
If $i_Z\omega \neq 0$, then let $\omega_2$ be the vertical $q$-form $\omega_0\wedge i_Z\omega$. By Proposition 2.2 d), it results $\omega_2$ is a $(0,q-1,1)$-form. Moreover, putting $\omega_1=\omega-\omega_2$,
\begin{eqnarray}
i_Z\omega_1=i_Z\omega-i_Z(\omega_0\wedge i_Z\omega)=i_Z\omega-\omega_0(Z)i_Z\omega=0,
\end{eqnarray}
where we used relation (2.1). So $\omega_1$ is a $(0,q,0)$-form and $\omega_1$, $\omega_2$ are uniquely determined by $\omega$. Obviously $\omega=\omega_1+\omega_2$.
We remark that only the zero $q$-form can be both a $(0,q,0)$-form and a $(0,q-1,1)$-form at the same time. Proposition 2.3 proves the decomposition
\begin{eqnarray}
\Omega^{0,q}(TM^0) = \Omega^{0,q,0}(TM^0) \oplus \Omega^{0,q-1,1}(TM^0).
\end{eqnarray}
A consequence of Propositions 2.2 and 2.3 is:
\begin{proposition}
Let $\omega$ be a $(0,q)$-form. We have the equivalence:
\begin{eqnarray}
\omega \in \Omega^{0,q-1,1}(TM^0)\quad \Longleftrightarrow \quad (\exists)\alpha \in \Omega^{0,q-1,0}(TM^0),
\omega=\omega_0\wedge \alpha.
\end{eqnarray}
\end{proposition}
Taking into account the characterization given in Proposition 2.2a) and relation (2.5), one can see that:
\begin{proposition}
We have the following facts:
a) If $\omega\in \Omega^{0,q,0}(TM^0)$ and $\theta \in \Omega^{0,r,0}(TM^0)$, then $\omega\wedge \theta \in \Omega^{0,q+r,0}(TM^0)$.
b) If $\omega\in \Omega^{0,q,1}(TM^0)$ and $\theta \in \Omega^{0,r,0}(TM^0)$, then $\omega\wedge \theta \in \Omega^{0,q+r,1}(TM^0)$.
c) If $\omega\in \Omega^{0,q,1}(TM^0)$ and $\theta \in \Omega^{0,r,1}(TM^0)$, then $\omega\wedge \theta =0$.
\end{proposition}
\begin{example}
a) $\omega_0$ is a $(0,0,1)$-form because the constant function $1$ on $TM^0$ is a $(0,0,0)$-form and $\omega_0=\omega_0\cdot 1$.
b) $\theta_i=\delta y^i-y^i\omega_0$ is a $(0,1,0)$-form, for each $i=\overline{1,n}$. Indeed,
\[\theta_i(Z)=\delta y^i(Z)-\omega_0(Z)y^i=0,
\]so $i_Z\theta_i=0$. We remark that the vertical $1$-forms $\{\theta_i\}_{i=\overline{1,n}}$ are linearly dependent, since $\sum t_i\theta_i=0$ (a one-line verification is given after this example).
c) $i_Z(\theta_i\wedge \theta_j)(Y)=\theta_i(Z)\theta_j(Y)-\theta_j(Z)\theta_i(Y)=0$, for every vertical vector field $Y$, hence $\theta_i\wedge \theta_j \in \Omega^{0,2,0}(TM^0)$.
\end{example}
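The linear dependence relation in b) follows in one line from the definition of $\omega_0$ and relation (1.14):
\[t_i\theta_i=t_i\delta y^i-(t_iy^i)\omega_0=\omega_0-\omega_0=0.
\]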
\begin{proposition}
The foliated derivative $d_{01}:\Omega^{0,q}(TM^0)\rightarrow\Omega^{0,q+1}(TM^0)$ has the following property: for every $(0,q-1,1)$-form $\omega$, $d_{01}\omega$ is a $(0,q,1)$-form.
\end{proposition}
\textit{Proof:} Let $\omega$ be a $(0,q-1,1)$-form. From relation (2.5), there is a $(0,q-1,0)$-form $\alpha$ such that $\omega=\omega_0\wedge \alpha$. The proof of Proposition 2.3 also shows that $\alpha=i_Z\omega$. Taking into account that $\omega_0$ is a $d_{01}$-exact form, it follows
\[d_{01}\omega=d_{01}(\omega_0\wedge \alpha)=-\omega_0\wedge d_{01}\alpha=-\omega_0\wedge \beta_1-\omega_0\wedge\beta_2,
\]where $\beta_1$ and $\beta_2$ are the $(0,q,0)$- and $(0,q-1,1)$-components, respectively, of the $(0,q)$-form $d_{01}\alpha$. But $\beta_2=\omega_0\wedge \theta$ for some $\theta$, by relation (2.5), so $\omega_0\wedge\beta_2=0$ and we have $d_{01}\omega=-\omega_0\wedge \beta_1$; it follows that $d_{01}\omega \in \Omega^{0,q,1}(TM^0)$. We can write
\begin{eqnarray}
d_{01}(\Omega^{0,q-1,1}(TM^0)) \subset \Omega^{0,q,1}(TM^0).
\end{eqnarray}
Let us consider $\xi_1$, $\xi_2$, the projections of the module $\Omega^{0,q}(TM^0)$ onto its direct summands from relation (2.4):
\begin{eqnarray}
\xi_1:\Omega^{0,q}(TM^0)\rightarrow \Omega^{0,q,0}(TM^0),\quad \xi_1(\omega)=\omega-\omega_0\wedge i_Z\omega,\quad (\forall)\omega\in \Omega^{0,q}(TM^0),
\end{eqnarray}
\begin{eqnarray}
\xi_2:\Omega^{0,q}(TM^0)\rightarrow \Omega^{0,q-1,1}(TM^0),\quad \xi_2(\omega)=\omega_0\wedge i_Z\omega,\quad (\forall)\omega\in \Omega^{0,q}(TM^0).
\end{eqnarray}
\begin{remark}
For an arbitrary $(0,q)$-form $\omega$, $d_{01}\omega=d_{01}(\xi_1(\omega))+d_{01}(\xi_2(\omega))$. Relation (2.6) shows that $d_{01}(\xi_2(\omega))$ is a $(0,q,1)$-form, hence $\xi_1(d_{01}(\xi_2(\omega)))=0$. It follows that
\begin{eqnarray}
\xi_1(d_{01}\omega)=\xi_1(d_{01}(\xi_1(\omega))),\quad \xi_2(d_{01}\omega)=\xi_2(d_{01}(\xi_1(\omega)))+d_{01}(\xi_2(\omega)).
\end{eqnarray}
The above relations prove that $d_{01}(\Omega^{0,q,0}(TM^0)) \subset \Omega^{0,q+1,0}(TM^0)\oplus \Omega^{0,q,1}(TM^0)$.
\end{remark}
Let us define the following operators:
\begin{eqnarray}
d':\Omega^{0,q,0}(TM^0) \rightarrow \Omega^{0,q+1,0}(TM^0),\quad d'(\omega)=\xi_1(d_{01}\omega),
\end{eqnarray}
\begin{eqnarray}
d":\Omega^{0,q,0}(TM^0)) \rightarrow \Omega^{0,q,1}(TM^0),\quad d"(\omega)=\xi_2(d_{01}\omega),
\end{eqnarray}
so we have
\begin{eqnarray}
d_{01}|_{\Omega^{0,q,0}(TM^0)}=d'+d".
\end{eqnarray}
\begin{proposition}
The operator $d'$ defined in relation (2.10) satisfies the relations:
a) $d'(\omega\wedge\theta)=d'\omega \wedge\theta +(-1)^q\omega\wedge d'\theta$, $(\forall)$$\omega\in \Omega^{0,q,0}(TM^0)$ and $\theta \in \Omega^{0,r,0}(TM^0)$.
b) $d'^2=0$.
\end{proposition}
\textit{Proof:}a) Let $\omega\in \Omega^{0,q,0}(TM^0)$ and $\theta \in \Omega^{0,r,0}(TM^0)$. It is known that $d_{01}(\omega\wedge\theta)=d_{01}\omega\wedge\theta +(-1)^q\omega\wedge d_{01}\theta$, and from relation (2.12), it follows
\[d'(\omega\wedge\theta)+d"(\omega\wedge\theta)=d'\omega\wedge\theta+d"\omega\wedge\theta +(-1)^q\omega\wedge d'\theta+(-1)^q\omega\wedge d"\theta.
\]
Considering the $(0,q+1,0)$-component in the both members, we have the desired result.
b) Let $\omega$ be a $(0,q,0)$-form. The definition (2.10) of the operator $d'$ says that $d'\omega=d_{01}\omega-\omega_0\wedge i_Zd_{01}\omega$. Hence we have
\[d'^2\omega=d_{01}(d'\omega)-\omega_0\wedge i_Zd_{01}(d'\omega)=\omega_0\wedge d_{01}i_Zd_{01}\omega-\omega_0\wedge i_Z\left(\omega_0\wedge d_{01}i_Zd_{01}\omega\right),
\]
where we used relations $d_{01}^2=0$, $d_{01}\omega_0=0$. Computing the last member in the above equalities, we obtain $d'^2=0$.
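For completeness, the last member can be evaluated using $i_Z(\omega_0\wedge\beta)=\omega_0(Z)\beta-\omega_0\wedge i_Z\beta=\beta-\omega_0\wedge i_Z\beta$ and $\omega_0\wedge\omega_0=0$:
\[-\omega_0\wedge i_Z\left(\omega_0\wedge d_{01}i_Zd_{01}\omega\right)=-\omega_0\wedge d_{01}i_Zd_{01}\omega+\omega_0\wedge\omega_0\wedge i_Zd_{01}i_Zd_{01}\omega=-\omega_0\wedge d_{01}i_Zd_{01}\omega,
\]
which cancels the first term, so indeed $d'^2\omega=0$.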
\begin{remark}
From relation (2.11) one can see that the operator $d"$ cannot be composed with itself, but the equality $d_{01}\circ d"+d"\circ d'=0$ can be proved.
\end{remark}
\begin{example}
a) For a $(0,1)$-form $\omega$, we have $\xi_1(\omega)=\omega-\omega(Z)\omega_0$, and $\xi_2(\omega)=\omega(Z)\omega_0$.
b) Let $f\in \Omega^0(TM^0)$, and $d_{01}f$ its foliated derivative, locally given by $d_{01}f=\frac{\partial f}{\partial y^i}\delta y^i$. Locally we have
\[d"f=\xi_2(d_{01}f)=(d_{01}f)(Z)\omega_0=Z(f)\omega_0,
\]
\[d'f=d_{01}f-Z(f)\omega_0=\frac{\partial f}{\partial y^i}\delta y^i-y^i\frac{\partial f}{\partial y^i}\omega_0=\frac{\partial f}{\partial y^i}\theta_i,
\]
where $\theta_i$ are the $(0,1,0)$-forms defined in Example 2.1. Moreover, taking into account relation (1.10) and the fact that $\sum_{i=1}^n t_i\theta_i=0$, it follows that locally
\begin{eqnarray}
d'f=(X_if)\theta_i.
\end{eqnarray}
We have $d'y^j=(X_iy^j)\theta_i=\delta_i^j\theta_i-t_iZ(y^j)\theta_i=\theta_j-(t_i\theta_i)y^j=\theta_j$, so the $(0,1,0)$-forms $\theta_i$ are exactly the $d'$-derivatives of the local coordinates $y^i$, for all $i=\overline{1,n}$.
c) The $(0,2,0)$-forms $d'y^i\wedge d'y^j$ are $d'$-closed forms, for all $i,j=\overline{1,n}$.
\end{example}
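The $d'$-closedness in c) is immediate from Proposition 2.7: by a) and b),
\[d'\left(d'y^i\wedge d'y^j\right)=d'(d'y^i)\wedge d'y^j-d'y^i\wedge d'(d'y^j)=0.
\]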
Let us consider an arbitrary $(0,1)$-form on $TM^0$. It is locally given in $U$ by $\omega=\sum a_i\delta y^i$, with $a_i\in \Omega^0(U)$ such that, in $U\cap \tilde{U}$,
\begin{eqnarray}
\tilde{a}_{i_1}=\frac{\partial x^i}{\partial \tilde{x}^{i_1}}a_i.
\end{eqnarray}
From Proposition 2.2, $\omega$ is a $(0,1,0)$-form iff $i_Z\omega=0$, which is locally equivalent to $\sum a_i y^i=0$.
Then, locally we have
\[\omega=\sum a_i \delta y^i=\sum a_i(d'y^i+y^i\omega_0)=\sum a_id'y^i+(\sum a_i y^i)\omega_0=\sum a_i d'y^i.
\]
Conversely, the expression locally given by $\sum a_id'y^i$, with functions $a_i$ satisfying (2.14) is a $(0,1,0)$-form because $d'y^i(Z)=0$, for all $i=\overline{1,n}$.
\section{The d'-cohomology }
\setcounter{equation}{0}
\begin{definition}
A function $f\in \Omega^0(TM^0)$ is called vertical Liouville basic if $d'f=0$. We denote by $\Sigma^0(TM^0)$ the space of such functions.
\end{definition}
The above definition and the Proposition 2.7 b) prove that the sequence
\begin{eqnarray}
O\rightarrow \Sigma^0(TM^0)\stackrel{i}{\rightarrow} \Omega^0(TM^0)\stackrel{d'}{\rightarrow}\Omega^{0,1,0}(TM^0) \stackrel{d'}{\rightarrow}\Omega^{0,2,0}(TM^0)\stackrel{d'}{\rightarrow}...\stackrel{d'}{\rightarrow},
\end{eqnarray}
is a semiexact one.
\begin{definition}
We say that a $(0,q,0)$-form $\omega$ is $d'$-closed if $d'\omega=0$. We say that it is $d'$-exact if $\omega=d'\theta$ for some $\theta\in \Omega^{0,q-1,0}(TM^0)$. We denote by $Z^{0,q,0}(TM^0)$, $B^{0,q,0}(TM^0)$ the spaces of the $d'$-closed and $d'$-exact $(0,q,0)$-forms, respectively.
\end{definition}
Taking into account that $d'^2=0$, we have the inclusion
\[B^{0,q,0}(TM^0)\subset Z^{0,q,0}(TM^0).
\]
We call the \textit{d'-cohomology group of $TM^0$} the quotient group
\begin{eqnarray}
H_{d'}^{0,q,0}(TM^0)=\frac{Z^{0,q,0}(TM^0)}{B^{0,q,0}(TM^0)}.
\end{eqnarray}
This group is the de Rham group of the sequence (3.1).
\begin{theorem}
The operator $d'$ satisfies a Poincar\'e-type lemma: let $\omega \in \Omega^{0,q,0}(TM^0)$ be a $d'$-closed form. For every domain $U$ there is a $(0,q-1,0)$-form $\theta$ on $U$ such that locally $\omega=d'\theta$.
\end{theorem}
\textit{Proof:} Let $\omega\in \Omega^{0,q,0}(TM^0)$ such that $d'\omega=0$. Then
\[d_{01}\omega=d'\omega+d"\omega=d"\omega=\omega_0\wedge i_Zd_{01}\omega,
\]so
\[d_{01}\omega \equiv 0 \pmod{\omega_0}.
\]Hence $\omega$ is $d_{01}$-closed modulo $\omega_0$. Since the foliated derivative with respect to the vertical foliation satisfies a Poincar\'e-type lemma, in every domain $U$ there is a vertical $(q-1)$-form $\theta$ such that
\[\omega=d_{01}\theta +\lambda \wedge \omega_0, \quad \lambda\in \Omega^{0,q-1}(TM^0).
\]
Following proposition 2.3, $\theta=\omega_0\wedge i_Z\theta+\theta_1$, with $\theta_1=\xi_1(\theta)\in \Omega^{0,q-1,0}(TM^0)$. We obtain
\[\omega=-\omega_0\wedge d_{01}i_Z\theta+d_{01}\theta_1+\lambda \wedge \omega_0.
\]
Here $\omega$ is a $(0,q,0)$-form, $\omega_0\wedge (d_{01}i_Z\theta+\lambda)$ is a $(0,q-1,1)$-form and
\[d_{01}\theta_1=d'\theta_1+d"\theta_1\in \Omega^{0,q,0}(U)\oplus \Omega^{0,q-1,1}(U).
\]
It follows that $\omega |_U=d'\theta_1$, q.e.d.
Taking into account that $\Omega^0$ and the sheaves of germs of $(0,q,0)$-forms are fine (see \cite{Mir1}, P.6.2, p.269), a consequence of Theorem 3.1 is the following:
\begin{proposition}
The sequence of sheaves
\begin{eqnarray}
O\rightarrow \Sigma^0\stackrel{i}{\rightarrow} \Omega^0\stackrel{d'}{\rightarrow}\Omega^{0,1,0} \stackrel{d'}{\rightarrow}\Omega^{0,2,0}\stackrel{d'}{\rightarrow}...\stackrel{d'}{\rightarrow},
\end{eqnarray}
is a fine resolution of the sheaf $\Sigma^0$ of germs of vertical Liouville basic functions.
\end{proposition}
Now, by a well-known theorem of algebraic topology (see \cite{Mir1} T.3.6, p.205), we have the main result of this paper, a de Rham type theorem for the $d'$-cohomology:
\begin{theorem}
The $q$-dimensional \v{C}ech cohomology group of $TM^0$ with coefficients in $\Sigma^0$ is isomorphic to $H^{0,q,0}_{d'}(TM^0)$.
\end{theorem}
\textbf{Acknowledgment}: This work was supported by Contract with Sinoptix
No. 8441/2009.
\section{Introduction}
In a recent article \cite{Vout}, we investigated Thue's Fundamentaltheorem
\cite{Thue}, showing when it can be used and how to use it in these cases.
Using the notation of Theorems~1 and 2 of \cite{Vout}, we also showed that the
case when $[\bK(\beta_{1}):\bK]=1$ is equivalent to the ``usual'' hypergeometric
method (see Corollary~1 of \cite{Vout}), where, here and in what follows, $\bK$
is either $\bQ$ or an imaginary quadratic field.
We also considered the case of $[\bK(\beta_{1}):\bK]=2$ in \cite{Vout}.
The approximants $P_{r}(x)$ and $Q_{r}(x)$ that we defined in Lemma~3.3 of
\cite{Vout} have a particularly nice form: an algebraic number plus or minus
its algebraic conjugate. This raises the intriguing question of why.
We address that question here and show that the form of $P_{r}(x)$ and $Q_{r}(x)$
arises from the fact that Thue's Fundamentaltheorem is a special case of the
application to hypergeometric polynomials of a simple observation regarding
diophantine approximations.
We present this observation here along with a generalisation and extension of
Thue's Fundamentaltheorem. In the notation of \cite{Vout}, we are now able to
consider more general expressions in place of $W(x)$ (see also Remark~\ref{rem:w})
as well as more general expressions for the denominator of $\cA(x)$. There are
also smaller improvements such as the consideration of powers $m/n$ rather than
just $1/n$, simplification of the numerator of $\cA(x)$,\ldots
The cost of these improvements is merely in the constant $c$ that appears in
our results below. The irrationality measure, $\kappa$, itself remains unchanged.
\section{Notation}
For positive integers $m$ and $n$ with $0<m<n$, $(m,n)=1$ and a non-negative
integer $r$, we put
\begin{displaymath}
X_{m,n,r}(x) = {} _{2}F_{1}(-r,-r-m/n;1-m/n;x),
\end{displaymath}
where $_{2}F_{1}$ denotes the classical hypergeometric function.
We use $X_{m,n,r}^{*}$ to denote the homogeneous polynomials derived from these
polynomials, so that
\begin{displaymath}
X_{m,n,r}^{*}(x,y) = y^{r} X_{m,n,r}(x/y).
\end{displaymath}
We let $D_{n,r}$ denote the smallest positive integer such that $D_{n,r} X_{m,n,r}(x)$
has rational integer coefficients.
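For instance, a routine computation from the hypergeometric series gives
\begin{displaymath}
X_{1,2,1}(x)=1+3x, \qquad X_{1,2,2}(x)=1+10x+5x^{2},
\end{displaymath}
so that, in these cases, $D_{2,1}=D_{2,2}=1$.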
For a positive integer $d$, we define $N_{d,n,r}$ to be the greatest common
divisor of the numerators of the coefficients of $X_{m,n,r}(1-dx)$.
We will use $v_{p}(x)$ to denote the largest power of a prime $p$ which divides
into the rational number $x$. With this notation, for positive integers $d$ and
$n$, we put
\begin{equation}
\label{eq:Ndn}
\cN_{d,n} =\prod_{p|n} p^{\min(v_{p}(d), v_{p}(n)+1/(p-1))}.
\end{equation}
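For example, from (\ref{eq:Ndn}) we find
\begin{displaymath}
\cN_{12,6}=2^{\min(2,\,2)}3^{\min(1,\,3/2)}=12, \qquad
\cN_{9,3}=3^{\min(2,\,3/2)}=3^{3/2};
\end{displaymath}
in particular, $\cN_{d,n}$ need not be a rational integer.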
For any complex number $w$, we can write $w=|w|e^{i \varphi}$, where $|w| \geq 0$
and $-\pi < \varphi \leq \pi$ (with $\varphi=0$, if $w=0$). With such a
representation, unless otherwise stated, $w^{m/n}$ will signify
$\left( |w|^{1/n} \right)^{m} e^{im \varphi/n}$ for positive integers $m$ and
$n$, where $|w|^{1/n}$ is the unique non-negative $n$-th root of $|w|$.
Lastly, following the function name in PARI, we define $\core(n)$ to be the
unique squarefree divisor, $n_{1}$, of $n$ such that $n/n_{1}$ is a perfect square.
\section{Results}
\begin{prop}
\label{prop:1}
Let $\bK$ be either $\bQ$ or an imaginary quadratic field. Let $s \geq 2$ be a
positive integer and $\bL$ be a number field with $[\bL:\bK]=s$.
Let $\theta_{1}=1$, $\theta_{2}$, \ldots, $\theta_{s} \in \bC$ be linearly-independent
over $\bK$ and let $\sigma_{1}=\id$, \ldots, $\sigma_{s}$ be the $s$
embeddings of $\bL$ into $\bC$ that fix $\bK$.
Suppose that there exist real numbers $k_{0},l_{0} > 0$ and $E,Q > 1$ such that
for all non-negative integers $r$, there are algebraic integers $p_{r} \in \bL$
with $\max_{1 \leq i \leq s} |\sigma_{i}(p_{r})|<k_{0}Q^{r}$.
Let $\beta$ and $\gamma$ be algebraic integers in $\bL$.
\noindent
{\rm (i)} Assume that
$\sum_{1 \leq i,j \leq s} \left\{ \sigma_{i}(\beta)\sigma_{j}(\gamma)-\sigma_{j}(\beta)\sigma_{i}(\gamma) \right\}
\sigma_{i}(p_{r})\sigma_{j}(p_{r+1}) \neq 0$ and
$\max_{2 \leq i \leq s} \left| p_{r}\theta_{i}-\sigma_{i}(p_{r}) \right| < l_{0}E^{-r}$.
Put
$$
\alpha = \frac{\sum_{i=1}^{s} \sigma_{i}(\beta)\theta_{i}}{\sum_{i=1}^{s} \sigma_{i}(\gamma)\theta_{i}}.
$$
For any algebraic integers $p$ and $q$ in $\bK$ with $q \neq 0$, we have
$$
\left| \alpha - \frac{p}{q} \right| > \frac{1}{c |q|^{\kappa+1}},
$$
where
$$
c = 2 \left( \sum_{i=1}^{s} \left| \sigma_{i}(\gamma) \right| \right) k_{0}Q
\max \left\{ E, 2 \left( \sum_{i=2}^{s} \left| \sigma_{i}(\beta) - \alpha \sigma_{i}(\gamma) \right| \right)
l_{0}E \right\}^{\kappa}
\mbox{ and } \kappa = \frac{\log Q}{\log E}.
$$
\noindent
{\rm (ii)} For $s=2$, assume that $\beta/\gamma, p_{r}/p_{r+1} \not\in \bK$,
and either $\left| p_{r}\theta_{2}-\sigma_{2}(p_{r}) \right| < l_{0}E^{-r}$
or
$\left| -p_{r}\theta_{2}-\sigma_{2}(p_{r}) \right| < l_{0}E^{-r}$. Put
$$
\alpha = \frac{\sigma_{2}(\beta) \theta_{2} \pm \beta}
{\sigma_{2}(\gamma) \theta_{2} \pm \gamma},
$$
where the operation in the numerator matches the operation in the denominator.
If $\bK=\bQ$, then let $\tau=1$, else let $\tau$ be an algebraic integer in
$\bK$ such that $\bL=\bK(\sqrt{\tau})$.
For any algebraic integers $p$ and $q$ in $\bK$ with $q \neq 0$, we have
$$
\left| \alpha - \frac{p}{q} \right| > \frac{1}{c |q|^{\kappa+1}},
$$
where
$$
c = 2 |\sqrt{\tau}| \left( |\gamma| +|\sigma_{2}(\gamma)| \right) k_{0}Q
\max \left\{ E, 2|\sqrt{\tau}| |\sigma_{2}(\beta)-\alpha \sigma_{2}(\gamma)| l_{0}E \right\}^{\kappa}
\mbox{ and } \kappa = \frac{\log Q}{\log E}.
$$
\end{prop}
We will use part~(ii) of this Proposition to prove the following theorems.
\begin{thm}
\label{thm:general-hypg}
\footnote{Note that our Theorems and Corollary here correct a small error in
Theorems~2.1, 2.4 and Corollary~2.7 of \cite{Vout}, where $\max(1,\ldots$ in
the expressions for $c$ should read $\max(E,\ldots$.}
Let $\bK$ be either $\bQ$ or an imaginary quadratic field. Let $\bL$ be a
number field with $[\bL:\bK]=2$ and let $\sigma$ be the non-trivial element of
$\Gal(\bL/\bK)$. If $\bK=\bQ$, then let $\tau=1$, else let $\tau$ be an
algebraic integer in $\bK$ such that $\bL=\bK(\sqrt{\tau})$. Let $\beta$,
$\gamma$, $\eta$ be algebraic integers in $\bL$.
Let $g$ be an algebraic number such that $\eta/g$ and $\sigma(\eta)/g$ are
algebraic integers $($not necessarily in $\bL)$. For each non-negative integer
$r$, let $h_{r}$ be a non-zero algebraic integer with $h_{r}/g^{r} \in \bK$
and $|h_{r}| \leq h$ for some fixed positive real number $h$. Let $d$ be the
largest positive rational integer such that $(\sigma(\eta)-\eta)/(dg)$
is an algebraic integer and let $\cC_{n}$ and $\cD_{n}$ be positive real
numbers such that
\begin{equation}
\label{eq:num-denom-bnd-a}
\max \left( 1, \frac{\Gamma(1-m/n) \, r!}{\Gamma(r+1-m/n)},
\frac{n\Gamma(r+1+m/n)}{m \Gamma(m/n)r!} \right)
\frac{D_{n,r}}{N_{d,n,r}} < \cC_{n} \left( \frac{\cD_{n}}{\cN_{d,n}} \right)^{r}
\end{equation}
holds for all non-negative integers $r$.
Put
\begin{eqnarray*}
\alpha & = & \frac{\beta(\eta/\sigma(\eta))^{m/n} \pm \sigma(\beta)}
{\gamma(\eta/\sigma(\eta))^{m/n} \pm \sigma(\gamma)}, \\
E & = & \left\{ \frac{\cD_{n}}{|g|\cN_{d,n}} \min \left( \left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right|^{2},
\left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|^{2} \right) \right\}^{-1}, \\
Q & = & \frac{\cD_{n}}{|g|\cN_{d,n}} \max \left( \left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right|^{2},
\left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|^{2} \right), \\
\kappa & = & \frac{\log Q}{\log E} \mbox{ and } \\
c & = & 4h |\sqrt{\tau}| \left( |\gamma| + |\sigma(\gamma)| \right) \cC_{n}Q \\
& & \times \max \left\{ E, 5h|\sqrt{\tau}|\left| 1- (\eta/\sigma(\eta))^{m/n} \right|
|\beta-\alpha \gamma| \cC_{n} E \right\}^{\kappa},
\end{eqnarray*}
where the operation in the numerator of the definition of $\alpha$ matches the
operation in its denominator.
If $E > 1$ and either $0 < \eta/\sigma(\eta) < 1$ or $|\eta/\sigma(\eta)|=1$ with
$\eta/\sigma(\eta) \neq -1$, then
\begin{equation}
\label{eq:gen-result}
\left| \alpha - p/q \right| > \frac{1}{c |q|^{\kappa+1}}
\end{equation}
for all algebraic integers $p$ and $q$ in $\bK$ with $q \neq 0$.
\end{thm}
\begin{rem}
\label{rem:w}
Observe that in our definition of $\alpha$, we take the $n$-th root of
$\eta/\sigma(\eta)$. However, this is more general than it may first
appear. It can be applied to any quantity $\mu \eta/\sigma(\eta)$ where
$\mu \in \bL$ and $\mu=\nu/\sigma(\nu)$ for some $\nu \in \bL$.
For example, although in Thue's Fundamentaltheorem we take the $n$-th root of
$-\eta/\sigma(\eta)$, it, and its generalisations, still follows from our
results. Suppose $\bL=\bK(\sqrt{\tau})$ and put $\eta'=\sqrt{\tau}\eta$, then
$-\eta/\sigma(\eta)=\eta'/\sigma(\eta')$,
so we can express $-\eta/\sigma(\eta)$ in the form here (i.e., take $\mu=-1$
and $\nu=\sqrt{\tau}$ in the above notation). There appears to be an
extra factor of $\sqrt{\tau}$ that will arise in our expressions for $E$ and
$Q$, but these are in fact cancelled out since $g$ also increases by a factor
of $\sqrt{\tau}$, so $\kappa$ is unaffected.
Similarly, if $\bK \neq \bQ(i)$ and $\bL=\bK(i)$, then
$i\eta/\sigma(\eta)=\eta'/\sigma(\eta')$, where $\eta'=(1+i)\eta$.
Also, if $\bK \neq \bQ(\sqrt{-3})$ and $\bL=\bK(\sqrt{-3})$, then
$\zeta_{3}\eta/\sigma(\eta)=\eta'/\sigma(\eta')$, where $\eta'=(1-\sqrt{-3})\eta/2$,
and $\zeta_{6}\eta/\sigma(\eta)=\eta'/\sigma(\eta')$, where $\eta'=(3+\sqrt{-3})\eta$.
As for the other roots of unity of degree at most $4$ over $\bQ$, it can be
shown, via algebraic manipulation, that this is not possible for $\zeta_{8}$
and $\zeta_{12}$. And since $\bQ(\zeta_{5})$ contains no subfields besides
$\bQ$ and $\bQ(\sqrt{5})$, we cannot consider $\zeta_{5}\eta/\sigma(\eta)$.
\end{rem}
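For instance, the first of the identities discussed in Remark~\ref{rem:w} is immediate: since $\sigma(\sqrt{\tau})=-\sqrt{\tau}$, taking $\eta'=\sqrt{\tau}\eta$ gives
\begin{displaymath}
\frac{\eta'}{\sigma(\eta')}=\frac{\sqrt{\tau}\,\eta}{-\sqrt{\tau}\,\sigma(\eta)}=-\frac{\eta}{\sigma(\eta)},
\end{displaymath}
and the remaining identities can be checked in the same way.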
\begin{rem}
From Lemma~7.4 of \cite{Vout}, the inequality (\ref{eq:num-denom-bnd-a}) holds
for $\cC_{n}$ and $\cD_{n}$ as in \cite{Vout} and hence it does not impose any
constraint.
\end{rem}
\begin{thm}
\label{thm:general-hypg-unitdisk}
Let $\bK$ be an imaginary quadratic field and $\alpha, \beta, \gamma, \eta$,
$\tau$, $\sigma$, $d, g$, $h$, $n$, $\cC_{n}$, $\cD_{n}, \cN_{d,n}$ be as in
Theorem~$\ref{thm:general-hypg}$.
Put
\begin{eqnarray*}
E & = & \frac{4|g|\cN_{d,n}}{\cD_{n}} \frac{\left( |\eta| - |\sigma(\eta) - \eta | \right)}{|\sigma(\eta) - \eta |^{2}}, \\
Q & = & \frac{2\cD_{n}}{|g|\cN_{d,n}} \left( \left| \eta \right| + \left| \sigma(\eta) \right| \right), \\
\kappa & = & \frac{\log Q}{\log E} \mbox{ and } \\
c & = & 4h |\sqrt{\tau}|\left( |\gamma| + |\sigma(\gamma)| \right) \cC_{n}Q \\
& & \times \max \left\{ E, 2h|\sqrt{\tau}|\left| 1- (\eta/\sigma(\eta))^{m/n} \right|
|\beta - \alpha \gamma| \cC_{n} E \right\}^{\kappa}.
\end{eqnarray*}
If $E>1$ and $\max \left( |1-\eta/\sigma(\eta)|, |1-\sigma(\eta)/(\eta)| \right)<1$,
then
\begin{equation}
\label{eq:gen-unitdisk-result}
\left| \alpha - p/q \right| > \frac{1}{c |q|^{\kappa+1}}
\end{equation}
for all algebraic integers $p$ and $q$ in $\bK$ with $q \neq 0$.
\end{thm}
\begin{rem}
The condition that $\bK$ be an imaginary quadratic field is no restriction since
the case of $\bK=\bQ$ is completely covered by Theorem~\ref{thm:general-hypg}.
\end{rem}
We now present a corollary of Theorem~\ref{thm:general-hypg} when $\bK=\bQ$.
\begin{cor}
\label{cor:cor-1}
Let $\bK=\bQ$ and $\alpha, \beta, \gamma, \eta$, $\sigma$, $n$, $\cC_{n}$,
$\cD_{n}, \cN_{d,n}$ be as in Theorem~$\ref{thm:general-hypg}$. Suppose that
$\eta=(u_{1} + u_{2} \sqrt{t})/2$ where $t, u_{1}, u_{2} \in \bZ$ and $t \neq 0$.
Put
\allowdisplaybreaks
\begin{eqnarray*}
g_{1} & = & \gcd \left( u_{1}, u_{2} \right), \\
g_{2} & = & \gcd(u_{1}/g_{1}, t), \\
g_{3} & = & \left\{ \begin{array}{ll}
1 & \mbox{if $t \equiv 1 \bmod 4$ and $(u_{1}-u_{2})/g_{1} \equiv 0 \bmod 2$}, \\
2 & \mbox{if $t \equiv 3 \bmod 4$ and $(u_{1}-u_{2})/g_{1} \equiv 0 \bmod 2$},\\
4 & \mbox{otherwise,}
\end{array}
\right. \\
g_{4} & = & \gcd \left( \core(tg_{2}g_{3}), \frac{n}{\gcd((u_{2}/g_{1})\sqrt{tg_{3}/g_{2}/\core(tg_{2}g_{3})}, n)} \right), \\
g_{5} & = & \left\{ \begin{array}{ll}
2 & \mbox{if $2|n$ and $v_{2} \left( u_{2}^{2}tg_{3}/(g_{1}^{2}g_{2}) \right) = v_{2} \left( 2n^{2} \right)$}, \\
1 & \mbox{otherwise}
\end{array}
\right.
\hspace{3.0mm} \mbox{and} \\
g & = & \frac{g_{1}\sqrt{g_{2}}}{\sqrt{g_{3}g_{4}g_{5}}},\\
E & = & \frac{|g|\cN_{d,n}}{\cD_{n}\min \left( \left| u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t} \right| \right)}, \\
Q & = & \frac{\cD_{n}\max \left( \left| u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t} \right| \right)}{|g|\cN_{d,n}}, \\
\kappa & = & \frac{\log Q}{\log E} \mbox{ and } \\
c & = & 4 \sqrt{|2t|} \left( |\gamma| + |\sigma(\gamma)| \right) \cC_{n} Q \\
& & \times \left( \max \left( E, 5 \sqrt{|2t|} \left| 1- (\eta/\sigma(\eta))^{m/n} \right|
|\beta-\alpha\gamma| \cC_{n}E \right) \right)^{\kappa},
\end{eqnarray*}
where $d$ is the largest positive rational integer such that $u_{2}\sqrt{t}/(dg)$
is an algebraic integer.
If $E > 1$ and either $0 < \eta/\sigma(\eta) < 1$ or $|\eta/\sigma(\eta)|=1$
with $\eta/\sigma(\eta) \neq -1$, then
\begin{equation}
\label{eq:cor-1-result}
\left| \alpha - p/q \right| > \frac{1}{c |q|^{\kappa+1}}
\end{equation}
for all rational integers $p$ and $q$ with $q \neq 0$.
\end{cor}
\begin{rem}
\label{rem:g}
The factors, $g_{i}$, used to construct $g$ each arise in natural and distinct
ways. $g_{1}$ through $g_{3}$ provide ways to remove common factors from $\eta$
and $\sigma(\eta)$. $g_{4}$ and $g_{5}$ arise from the interplay of $d$ and $g$:
under some circumstances (captured by $g_{4}$ and $g_{5}$), decreasing $g$ can
increase $d$ and hence $\cN_{d,n}$ by more to provide a net benefit.
\end{rem}
\begin{rem}
Using the same argument as in the proof of this Corollary, we can also improve
Corollary~2.7 of \cite{Vout}, replacing $g_{4}$ there by
$$
\gcd \left( \core(g_{2}g_{3}), \frac{n}{\gcd((u_{1}/g_{1})\sqrt{g_{2}/g_{3}/\core(g_{2}g_{3})}, n)} \right)
$$
and adding an appropriate version of the $g_{5}$ above by setting $g_{5}=2$ if
$2|n$ and
$v_{2} \left( u_{1}g_{3}/(g_{1}^{2}g_{2}) \right) = v_{2} \left( 2n^{2} \right)$
and setting $g_{5}=1$ otherwise, since the definition of $d$ in Corollary~2.7
of \cite{Vout} uses $u_{1}/(dg)$ rather than $u_{2}\sqrt{t}/(dg)$ as here.
This improved version of Corollary~2.7 of \cite{Vout} will yield the same
results as in the Corollary here together with Remark~\ref{rem:w}.
\end{rem}
\section{Preliminary Lemmas}
The next lemma contains the relationship that allows the hypergeometic method to
provide good sequences of rational approximations.
\begin{lem}
\label{lem:relation}
For any positive integers $m$ and $n$ with $(m,n)=1$, any non-negative integer
$r$ and for any complex number $z$ that is not a negative number and not zero,
\begin{equation}
\label{eq:approx}
z^{m/n} z^{r} X_{m,n,r}(z^{-1}) - X_{m,n,r}(z) = (z-1)^{2r+1} R_{m,n,r}(z),
\end{equation}
where
$$
(z-1)^{2r+1} R_{m,n,r}(z) = \frac{\Gamma(r+1+m/n)}{r!\Gamma(m/n)}
\int_{1}^{z} (1-t)^{r}(t-z)^{r}t^{m/n-r-1}dt.
$$
\end{lem}
\begin{rem}
Note that the expression $(z-1)^{2r+1} R_{m,n,r}(z)$ here is the same as the
$R_{m,n,r}(z)$ defined in Lemma~7.1 of \cite{Vout}.
\end{rem}
\begin{proof}
This is shown in the case of $m=1$ in the proof of Lemma~2.3 of \cite{CV}.
The proof for arbitrary $m$ is identical.
\end{proof}
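As a simple consistency check, when $r=0$ we have $X_{m,n,0}=1$ and the right-hand side of (\ref{eq:approx}) equals
\begin{displaymath}
\frac{\Gamma(1+m/n)}{\Gamma(m/n)}\int_{1}^{z}t^{m/n-1}\,dt
=\frac{m}{n}\cdot\frac{n}{m}\left(z^{m/n}-1\right)=z^{m/n}-1,
\end{displaymath}
in agreement with the left-hand side $z^{m/n}X_{m,n,0}(z^{-1})-X_{m,n,0}(z)=z^{m/n}-1$.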
\begin{lem}
\label{lem:approx}
Let $\theta \in \bC$ and let $\bK$ be either $\bQ$ or an imaginary quadratic field.
Suppose that there exist real numbers $k_{0},l_{0} > 0$ and $E,Q > 1$ such that
for all non-negative integers $r$, there are algebraic integers $p_{r}$ and $q_{r}$
in $\bK$ with $|q_{r}|<k_{0}Q^{r}$ and $|q_{r}\theta-p_{r}| \leq l_{0}E^{-r}$
satisfying $p_{r}q_{r+1} \neq p_{r+1}q_{r}$. Then for any algebraic integers $p$
and $q$ in $\bK$ with $q \neq 0$, we have
$$
\left| \theta - \frac{p}{q} \right| > \frac{1}{c |q|^{\kappa+1}},
\mbox{ where } c=2k_{0}Q \left( \max ( 1, 2l_{0}) E \right)^{\kappa}
\mbox{ and } \kappa = \frac{\log Q}{\log E}.
$$
Moreover, if $p/q \neq p_{i}/q_{i}$ for any non-negative integer $i$, then we
can put $c=2k_{0} \left( \max ( 1, 2l_{0}) E \right)^{\kappa}$.
\end{lem}
\begin{proof}
This follows from Lemma~6.1 of \cite{Vout}.
There we proved a similar result for $|q| \geq 1/(2l_{0})$ and
$c=2k_{0}Q(2l_{0}E)^{\kappa}$. Here we merely observe that if we replace $l_{0}$
with $\max ( 0.5, l_{0})$, then all the hypotheses of the Lemma still hold.
Moreover, $1/(2\max ( 0.5, l_{0})) \leq 1$, so the result holds for all non-zero
algebraic integers $q \in \bK$.
The last statement in the Lemma follows since the $Q$ which appears in the
expression for $c$ in the statement of Lemma~6.1 of \cite{Vout} arises only
from consideration of the case $p/q=p_{i}/q_{i}$ for some positive integer $i$.
\end{proof}
\section{Proof of Proposition~\ref{prop:1}}
Assume that we have a sequence of $p_{r}$'s satisfying the hypotheses of
Proposition~\ref{prop:1}.
\noindent
{\bf (i)} Suppose we have $p_{r}\theta_{i}-\sigma_{i}(p_{r})=\delta_{i,r}$ for
each $i=1$, \ldots, $s$. Then we can write
$$
\alpha = \frac{\sum_{i=1}^{s} \sigma_{i}(\beta)(\delta_{i,r}+\sigma_{i}(p_{r}))}
{\sum_{i=1}^{s} \sigma_{i}(\gamma)(\delta_{i,r}+\sigma_{i}(p_{r}))}.
$$
and hence
$$
\alpha\sum_{i=1}^{s} \sigma_{i}(\gamma p_{r})
- \sum_{i=1}^{s} \sigma_{i}(\beta p_{r})
= \sum_{i=2}^{s} \left( \sigma_{i}(\beta) - \alpha\sigma_{i}(\gamma) \right)\delta_{i,r},
$$
since $\delta_{1,r}=0$.
Put $p_{r}'=\sum_{i=1}^{s} \sigma_{i}(\beta p_{r})$ and
$q_{r}'=\sum_{i=1}^{s} \sigma_{i}(\gamma p_{r})$. Note that both
$p_{r}'$ and $q_{r}'$ are algebraic integers in $\bK$.
Observe that
$$
\left| \alpha q_{r}' - p_{r}' \right|
< l_{0} \left( \sum_{i=2}^{s} \left| \sigma_{i}(\beta)-\alpha \sigma_{i}(\gamma) \right| \right)
E^{-r}
$$
and
$$
\left| q_{r}' \right|
\leq k_{0} \left( \sum_{i=1}^{s} \left|\sigma_{i}(\gamma) \right| \right)
Q^{r}.
$$
Since
$$
p_{r}'q_{r+1}'-p_{r+1}'q_{r}'
=
\sum_{1 \leq i,j \leq s} \left\{ \sigma_{i}(\beta)\sigma_{j}(\gamma)-\sigma_{j}(\beta)\sigma_{i}(\gamma) \right\}
\sigma_{i}(p_{r})\sigma_{j}(p_{r+1})
\neq 0
$$
by our assumption in the statement of the Proposition, we can apply
Lemma~\ref{lem:approx} with $p_{r}'$ and $q_{r}'$ instead of $p_{r}$ and $q_{r}$,
respectively, to complete the proof in this case.
\noindent
{\bf (ii)} Suppose we have $\zeta_{2}p_{r}\theta_{2}-\sigma_{2}(p_{r})
=\delta_{2,r}$ for some square root of $1$, $\zeta_{2}$, fixed for a given
value of $r$. As above, we can write
$$
\alpha \left\{ \sigma_{2} \left( \gamma p_{r} \right) \pm \zeta_{2}\gamma p_{r} \right\}
- \left\{ \sigma_{2} \left( \beta p_{r} \right) \pm \zeta_{2}\beta p_{r} \right\}
= \delta_{2,r} \left( \sigma_{2} \left(\beta\right) - \alpha \sigma_{2} \left( \gamma \right) \right).
$$
We break the proof into two cases depending on the value of $\zeta_{2}$.
\vspace{1.0mm}
\noindent
\underline{Case 1:} $\pm \zeta_{2}=1$
This case is identical to part~(i) with $s=2$.
Note that in this case ($s=2$), the condition in part~(i) reduces to
$$
\left( \sigma_{2}(\beta)\gamma - \beta\sigma_{2}(\gamma) \right)
\left( \sigma_{2}(p_{r})p_{r+1} - p_{r}\sigma_{2}(p_{r+1}) \right) \neq 0.
$$
This is true under the conditions we have stipulated here, namely
$\beta/\gamma \not\in \bK$ and $p_{r}/p_{r+1} \not\in \bK$
(since the fixed field of $\sigma_{2}$ is $\bK$).
Also since $|\tau| \geq 1$, our definition of $c$ is valid.
\vspace{1.0mm}
\noindent
\underline{Case 2:} $\pm \zeta_{2}=-1$
We break this case into two subcases.
\vspace{1.0mm}
\noindent
\underline{Case 2(i):} $\pm \zeta_{2}=-1$ and $\bK=\bQ$
If $\bK=\bQ$, then we can write $\beta p_{r}=(a+b\sqrt{t})/2$ for some choice
of rational integers $a$, $b$ and $t$ with $t \neq 0$. Hence
$\beta p_{r} - \sigma_{2} \left( \beta p_{r} \right)=b\sqrt{t}$ and
$(\beta p_{r} - \sigma_{2} \left( \beta p_{r} \right))/\sqrt{t} \in \bZ$.
Similarly,
$(\gamma p_{r} - \sigma_{2} \left( \gamma p_{r} \right))/\sqrt{t} \in \bZ$.
In this case, we put
$q_{r}' = \left( \gamma p_{r} - \sigma_{2} \left( \gamma p_{r} \right) \right)/\sqrt{t}$
and $p_{r}' = \left( \beta p_{r} - \sigma_{2} \left( \beta p_{r} \right) \right)/\sqrt{t}$
and observe that
$$
\left| \alpha q_{r}' - p_{r}' \right|
< \frac{l_{0} |\sigma_{2}(\beta)-\alpha \sigma_{2}(\gamma)|}{|\sqrt{t}|} E^{-r}
\leq l_{0} |\sqrt{\tau}| |\sigma_{2}(\beta)-\alpha \sigma_{2}(\gamma)| E^{-r}
$$
and
$$
\left| q_{r}' \right|
\leq \frac{k_{0} \left( |\gamma| + |\sigma_{2}(\gamma)| \right)}{|\sqrt{t}|} Q^{r}
\leq k_{0} |\sqrt{\tau}| \left( |\gamma| + |\sigma_{2}(\gamma)| \right) Q^{r},
$$
since $|t| \geq 1$.
\vspace{1.0mm}
\noindent
\underline{Case 2(ii):} $\pm \zeta_{2}=-1$ and $\bK$ is an imaginary quadratic field
If $\bK$ is an imaginary quadratic field, then $\beta p_{r}=a+b\sqrt{\tau}$ for
some $a, b \in \bK$ and where $\tau$ is as in the statement of the Proposition.
Hence $\beta p_{r} - \sigma_{2} \left( \beta p_{r} \right)=2b\sqrt{\tau}$ is an
algebraic integer and
$\left( \beta p_{r} - \sigma_{2} \left( \beta p_{r} \right) \right) \sqrt{\tau}$
is an algebraic integer in $\bK$. Similarly,
$(\gamma p_{r} - \sigma_{2} \left( \gamma p_{r} \right))\sqrt{\tau}$ is an
algebraic integer in $\bK$.
In this case, we put
$q_{r}' = \left( \gamma p_{r} - \sigma_{2} \left( \gamma p_{r} \right) \right)\sqrt{\tau}$
and $p_{r}' = \left( \beta p_{r} - \sigma_{2} \left( \beta p_{r} \right) \right)\sqrt{\tau}$
and observe that
$$
\left| \alpha q_{r}' - p_{r}' \right|
< l_{0} |\sqrt{\tau}| |\sigma_{2}(\beta)-\alpha \sigma_{2}(\gamma)| E^{-r}
$$
and
$$
\left| q_{r}' \right|
\leq k_{0} |\sqrt{\tau}| \left( |\gamma| + |\sigma_{2}(\gamma)| \right) Q^{r}.
$$
Note that in both these subcases, we obtain the same upper bound for
$\left| \alpha q_{r}' - p_{r}' \right|$ and for $\left| q_{r}' \right|$.
Here
$$
p_{r}'q_{r+1}'-p_{r+1}'q_{r}'
= \tau \left( \beta\sigma_{2}(\gamma) - \sigma_{2}(\beta)\gamma \right)
\left( \sigma_{2}(p_{r})p_{r+1} - p_{r}\sigma_{2}(p_{r+1}) \right),
$$
which we saw in Case~1 can only be zero if $\beta/\gamma \in \bK$ or
$p_{r}/p_{r+1} \in \bK$.
Therefore, we can apply Lemma~\ref{lem:approx} to find that
$\kappa = \log (Q)/\log (E)$ and
$$
c = 2k_{0} |\sqrt{\tau}| \left( |\gamma| + |\sigma_{2}(\gamma)| \right) Q
\max \left\{ E, 2l_{0} |\sqrt{\tau}| |\sigma_{2}(\beta)-\alpha \sigma_{2}(\gamma)| E \right\}^{\kappa},
$$
concluding the proof of Case~2 and the Proposition.
\section{Proof of Theorem~\ref{thm:general-hypg}}
\subsection{Construction of approximations}
We construct the approximations under more general conditions. The point is not
to generalise for its own sake, but to illustrate the requirements and limitations
of our method of proof.
Let $\zeta_{k}$ be a $k$-th root of unity for some $k$. We apply
Lemma~\ref{lem:relation} with $z=\zeta_{k}\eta/\sigma(\eta)$. Multiplying both
sides of (\ref{eq:approx}) by $\sigma(\eta)^{r}$, we obtain
\begin{eqnarray*}
& & \left( \zeta_{k} \eta/\sigma(\eta) \right)^{m/n}
\left( \zeta_{k} \eta \right)^{r} X_{m,n,r} \left( \sigma(\eta)/(\zeta_{k}\eta) \right)
- \sigma(\eta)^{r} X_{m,n,r} \left( \zeta_{k}\eta/\sigma(\eta) \right) \\
& = & \sigma(\eta)^{r} \left( \zeta_{k}\eta/\sigma(\eta)-1 \right)^{2r+1}
R_{m,n,r} \left( \zeta_{k}\eta/\sigma(\eta) \right),
\end{eqnarray*}
which we can rewrite as
\begin{eqnarray*}
& & \left( \zeta_{k}\eta/\sigma(\eta) \right)^{m/n}
X_{m,n,r}^{*} \left( \sigma(\eta),\zeta_{k}\eta \right)
- X_{m,n,r}^{*} \left( \zeta_{k}\eta, \sigma(\eta) \right) \\
& = & \sigma(\eta)^{r} \left( \zeta_{k}\eta/\sigma(\eta)-1 \right)^{2r+1}
R_{m,n,r} \left( \zeta_{k}\eta/\sigma(\eta) \right).
\end{eqnarray*}
Observe that
\begin{eqnarray*}
X_{m,n,r}^{*} \left( \zeta_{k}\eta, \sigma(\eta) \right)
& = & g^{r} X_{m,n,r}^{*} \left( \frac{\zeta_{k}\eta}{g}, \frac{\sigma(\eta)}{g} \right) \\
& = & \left( g \frac{\sigma(\eta)}{g} \right)^{r}
X_{m,n,r} \left( 1- d \frac{ (\sigma(\eta)-\zeta_{k}\eta)/g}{d\sigma(\eta)/g} \right).
\end{eqnarray*}
From Lemma~7.4(a) of \cite{Vout},
$$
\frac{D_{n,r}}{N_{d,n,r}} X_{m,n,r} \left( 1- d \frac{ (\sigma(\eta)-\zeta_{k}\eta)/g}{d\sigma(\eta)/g} \right)
\in \bZ \left[ \frac{ (\sigma(\eta)-\zeta_{k}\eta)/g}{d\sigma(\eta)/g} \right],
$$
and, as a consequence,
$$
\left( \frac{\sigma(\eta)}{g} \right)^{r}
\frac{D_{n,r}}{N_{d,n,r}} X_{m,n,r} \left( 1- d \frac{ (\sigma(\eta)-\zeta_{k}\eta)/g}{d\sigma(\eta)/g} \right)
$$
is an algebraic integer, since $(\sigma(\eta)-\zeta_{k}\eta)/(gd)$ is an algebraic
integer by the definition of $d$ in the statement of the Theorem. Hence
$$
p_{r} = \frac{h_{r}D_{n,r}}{g^{r} N_{d,n,r}}X_{m,n,r}^{*} \left( \zeta_{k}\eta, \sigma(\eta) \right)
$$
is an algebraic integer in $\bL$.
Similarly,
$$
q_{r} = \frac{h_{r}D_{n,r}}{g^{r} N_{d,n,r}}X_{m,n,r}^{*} \left( \sigma(\eta), \zeta_{k}\eta \right)
$$
is an algebraic integer in $\bL$.
Now we want $p_{r}$ and $q_{r}$, or at least numbers obtained from them, to be
algebraic conjugates. For this purpose, we must suppose that
$1/\zeta_{k}=\sigma(\zeta_{k})$ (note that this implies that $\zeta_{k} \in \bL$).
With this condition, and since $\sigma^{2}(\cdot)$ is the identity map, we have
\begin{eqnarray*}
\left( \zeta_{k} \right)^{r} \sigma \left( X_{m,n,r}^{*} \left( \zeta_{k} \eta, \sigma(\eta) \right) \right)
& = & \left( \zeta_{k} \right)^{r} \sigma \left( \sigma(\eta)^{r} X_{m,n,r} \left( \zeta_{k}\eta/\sigma(\eta) \right) \right) \\
& = & \left( \zeta_{k}\eta \right)^{r} X_{m,n,r} \left( \sigma \left( \zeta_{k}\eta/\sigma(\eta) \right) \right) \\
& = & \left( \zeta_{k}\eta \right)^{r} X_{m,n,r} \left( \sigma(\eta)/(\zeta_{k}\eta) \right) \\
& = & X_{m,n,r}^{*} \left( \sigma(\eta), \zeta_{k}\eta \right).
\end{eqnarray*}
Hence, $q_{r}=\zeta_{k}^{r}\sigma(p_{r})$ and so $q_{r}$ and $\sigma(\zeta_{k})^{r}p_{r}$
are algebraic conjugates over $\bK$. Letting $k_{1}=k/(2,k)$, we have $p_{k_{1}r}$
and $\pm q_{k_{1}r}$ are algebraic conjugates for $k=1$, $2$, $3$, $4$ and $6$,
so we could put $p_{r}'=p_{k_{1}r}$ and $q_{r}'=q_{k_{1}r}$.
However here we restrict our attention to $k=1$ and observe that in this case
$p_{r}$ and $q_{r}$ are algebraic conjugates.
\subsection{Estimates}
From Lemmas~7.3(a) and 7.4(c) of \cite{Vout}, we have
\begin{eqnarray*}
\left| q_{r} \right|
& \leq & \frac{2h}{|g|^{r}}
\frac{D_{n,r}}{N_{d,n,r}} \frac{\Gamma(1-m/n) r!}{\Gamma(r+1-m/n)}
\max \left( \left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|,
\left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right| \right)^{2r} \\
& \leq & 2h\cC_{n} \left( \frac{\cD_{n}}{|g|\cN_{d,n}} \right)^{r}
\max \left( \left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|,
\left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right| \right)^{2r}.
\end{eqnarray*}
From Lemma~7.2(a) of \cite{Vout},
\begin{eqnarray*}
& & \left| \left( \sigma(\eta) \right)^{r} (\eta/\sigma(\eta)-1)^{2r+1} R_{m,n,r}(\eta/\sigma(\eta)) \right| \\
& \leq & 2.38 \left| 1- (\eta/\sigma(\eta))^{m/n} \right| \frac{n\Gamma(r+1+m/n)}{m\Gamma(m/n)r!} \\
& & \times \min \left( \left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|, \left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right| \right)^{2r}.
\end{eqnarray*}
Hence
\begin{eqnarray*}
& & \left| q_{r} (\eta/\sigma(\eta))^{m/n} - p_{r} \right| \\
& \leq & 2.38h \frac{D_{n,r}}{|g|^{r}N_{d,n,r}} \left| 1- (\eta/\sigma(\eta))^{m/n} \right|
\frac{n\Gamma(r+1+m/n)}{m\Gamma(m/n)r!} \\
& & \times \min \left( \left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|,
\left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right| \right)^{2r} \\
& \leq & \frac{2.38h}{|g|^{r}} \left| 1- (\eta/\sigma(\eta))^{m/n} \right|
\cC_{n} \left( \frac{\cD_{n}}{\cN_{d,n}} \right)^{r} \\
& & \times \min \left( \left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|,
\left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right| \right)^{2r}.
\end{eqnarray*}
Therefore, in the notation of Proposition~\ref{prop:1}, we have
\begin{eqnarray*}
k_{0} & = & 2h \cC_{n}, \\
l_{0} & = & 2.38h \left| 1- (\eta/\sigma(\eta))^{m/n} \right| \cC_{n}, \\
E & = & \left\{ \frac{\cD_{n}}{|g|\cN_{d,n}} \min \left( \left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right|^{2},
\left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|^{2} \right) \right\}^{-1}, \\
Q & = & \frac{\cD_{n}}{|g|\cN_{d,n}} \max \left( \left| \sqrt{\eta}-\sqrt{\sigma(\eta)} \right|^{2},
\left| \sqrt{\eta}+\sqrt{\sigma(\eta)} \right|^{2} \right).
\end{eqnarray*}
From Proposition~\ref{prop:1}, the expression for $\kappa$ in the Theorem
follows immediately, while, upon noting that our $\beta$, $\gamma$,
$\sigma(\beta)$ and $\sigma(\gamma)$ here are $\sigma_{2}(\beta)$,
$\sigma_{2}(\gamma)$, $\beta$ and $\gamma$ respectively in the notation of that
Proposition,
\begin{eqnarray*}
c & = & 2 |\sqrt{\tau}|\left( |\gamma| + |\sigma(\gamma)| \right) k_{0}Q
\max \left\{ E, 2 |\sqrt{\tau}| \left( |\beta-\alpha \gamma| \right) l_{0}E \right\}^{\kappa} \\
& < & 4h |\sqrt{\tau}| \left( |\gamma| + |\sigma(\gamma)| \right) \cC_{n}Q \\
& & \times \max \left\{ E, 5h |\sqrt{\tau}| \left| 1- (\eta/\sigma(\eta))^{m/n} \right|
|\beta-\alpha \gamma| \cC_{n} E \right\}^{\kappa}.
\end{eqnarray*}
\section{Proof of Theorem~\ref{thm:general-hypg-unitdisk}}
The proof of Theorem~\ref{thm:general-hypg-unitdisk} is the same as that of
Theorem~\ref{thm:general-hypg}, except that we use the upper bounds from
parts~(b) of Lemmas~7.2 and 7.3 of \cite{Vout}, rather than parts~(a).
Thus, we find that
\begin{eqnarray*}
k_{0} & = & 2h \cC_{n}, \\
l_{0} & = & h\left| 1- (\eta/\sigma(\eta))^{m/n} \right| \cC_{n}, \\
E & = & \frac{4|g|\cN_{d,n}}{\cD_{n}} \frac{\left( |\eta| - |\sigma(\eta) - \eta | \right)}{|\sigma(\eta) - \eta |^{2}}, \\
Q & = & \frac{2\cD_{n}}{|g|\cN_{d,n}} \left( \left| \eta \right| + \left| \sigma(\eta) \right| \right).
\end{eqnarray*}
So, from Proposition~\ref{prop:1}, $\kappa$ is as in the statement of the
Theorem and, again noting the change of notation mentioned at the end of the
proof of Theorem~\ref{thm:general-hypg},
\begin{eqnarray*}
c & = & 2 |\sqrt{\tau}| \left( |\gamma| + |\sigma(\gamma)| \right) k_{0}Q
\max \left\{ E, 2 |\sqrt{\tau}| |\beta-\alpha \gamma| l_{0}E \right\}^{\kappa} \\
& = & 4h |\sqrt{\tau}| \left( |\gamma| + |\sigma(\gamma)| \right) \cC_{n}Q \\
& & \times \max \left\{ E, 2h |\sqrt{\tau}| \left| 1- (\eta/\sigma(\eta))^{m/n} \right|
|\beta-\alpha \gamma| \cC_{n} E \right\}^{\kappa}.
\end{eqnarray*}
\section{Proof of Corollary~\ref{cor:cor-1}}
This Corollary follows from a direct application of Theorem~\ref{thm:general-hypg}.
We can write
\begin{equation}
\label{eq:ei}
\left( \sqrt{\eta} \pm \sqrt{\sigma(\eta)} \right)^{2}
= \eta + \sigma(\eta) \pm 2 \sqrt{\eta \sigma(\eta)}.
\end{equation}
The right-hand side of (\ref{eq:ei}) is
$u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t}$ and $\sigma(\eta)-\eta
=-u_{2}\sqrt{t}$. Hence $d$ is as defined in the Corollary.
The analysis of $g_{1}$, $g_{2}$ and $g_{3}$ is identical to that in Section~11 of
\cite{Vout}.
As stated in the remark after Corollary~\ref{cor:cor-1}, $g_{4}$ and $g_{5}$
arise from the interplay of $d$ and $g$. Suppose that $d_{1}$ is the largest
positive rational integer such that $u_{2}\sqrt{t}/(d_{1}g_{1}\sqrt{g_{2}/g_{3}})$
is an algebraic integer. If there are multiplicative factors of the form
$\sqrt{d_{2}}$ in $u_{2}\sqrt{t}/(d_{1}g_{1}\sqrt{g_{2}/g_{3}})$, then by
multiplying $\eta$, and hence $u_{2}\sqrt{t}$, by $\sqrt{d_{2}}$, we can
increase $d_{1}$ by a factor of $d_{2}$. Under some circumstances, this
increases $\cN_{d,n}$ by a factor of $d_{2}$ while increasing
$u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t}$ only by a factor of $\sqrt{d_{2}}$ for
a net reduction in the size of $\kappa$. We demonstrate here how $g_{4}$ and
$g_{5}$ capture these circumstances.
Consider the integer $u_{2}^{2}tg_{3}/(g_{1}^{2}g_{2})$ and let $d_{1}^{2}$ be
its largest square divisor. Suppose that $p$ is a prime divisor of their quotient.
That is, $p$ is a prime divisor of $\core(u_{2}^{2}tg_{3}/(g_{1}^{2}g_{2}))
=\core(tg_{3}/g_{2})=\core(tg_{2}g_{3})$. Note that
$$
d_{1}=\sqrt{u_{2}^{2}tg_{3}/(g_{1}^{2}g_{2})/\core(tg_{2}g_{3})}
=(u_{2}/g_{1})\sqrt{tg_{3}/g_{2}/\core(tg_{2}g_{3})}.
$$
First, if $p \nmid n$, then $\cN_{pd_{1},n}=\cN_{d_{1},n}$ from the definition
of $\cN_{d,n}$ in (\ref{eq:Ndn}) and there is no benefit.
Second, if $p|n$ and $p \nmid (n/\gcd(d_{1},n))$, then $\cN_{pd_{1},n}$ is at
most $\cN_{d_{1},n}p^{1/(p-1)}$ (again, from (\ref{eq:Ndn})). That is we gain
at most a factor of $p^{1/(p-1)}$, while increasing the size of
$u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t}$ by a factor of $\sqrt{p}$ and hence
obtain no benefit for $p>2$.
Third, if $p|n$ and $p|(n/\gcd(d_{1},n))$, then we gain a factor of $p$, while
we increase the size of $u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t}$ by a factor of
$\sqrt{p}$. The product of all such $p$ equals
$$
\gcd \left( \core(tg_{2}g_{3}), \frac{n}{\gcd((u_{2}/g_{1})\sqrt{tg_{3}/g_{2}/\core(tg_{2}g_{3})}, n)} \right),
$$
which is our $g_{4}$.
This covers all possible cases except $2|n$ and $2 \nmid (n/\gcd(d_{1},n))$.
If the power of $2$ dividing $d$ equals the power of $2$ dividing $n$, both are
positive and $2|\core(tg_{2}g_{3})$, then we increase $\cN_{d_{1},n}$ by a factor
of $2$, while we increase the size of $u_{1} \pm \sqrt{u_{1}^{2}-u_{2}^{2}t}$
by a factor of $\sqrt{2}$. Since
$u_{2}^{2}tg_{3}/(g_{1}^{2}g_{2})=d_{1}^{2}\core(tg_{2}g_{3})$, this condition
is equivalent to our condition in the definition of $g_{5}$.
Lastly, we must consider $h_{r}$ and $h$.
Since $g^{2} \in \bQ$, we can take $h_{r}=1$ for $r$ even. Since
$(g_{3}g_{4}g_{5}/g_{2})\core(g_{2}g_{3}g_{4}g_{5})$ is a perfect square, we
can take $h_{r}=\sqrt{\core(g_{2}g_{3}g_{4}g_{5})}$ for $r$ odd. Observe that
$g_{4}g_{5}|(2tg_{3}/g_{2})$, $g_{2}|t$ and $g_{3}|4$. Hence
$h_{r} \leq \sqrt{|2t|}$ for $r$ odd.
\subsection*{Acknowledgements}
The author is very grateful to the referee for their very attentive reading of
this paper and the corrections and improvements that they suggested. Their
diligence has led to a much better paper.
\section{Introduction}
A classic result in the theory of integrable systems \cite{airault1977,choodnovsky1977} states that the soliton dynamics of the Korteweg-de Vries equation is governed by an ($A$-type) Calogero-Moser (CM) system. This relation between two of the best-known integrable systems is but one instance of a \textit{soliton-CM correspondence}, whereby integrable PDEs are linked to many-body systems of CM type. For many such PDEs, including the Korteweg-de Vries \cite{airault1977,choodnovsky1977}, nonlinear Schr\"{o}dinger \cite{hone1997}, Benjamin-Ono \cite{chen1979}, and intermediate long wave \cite[Chapter~3]{ablowitz1981} equations, this is accomplished by making an ansatz for the solution with time-dependent poles in the complex plane and showing that the locations of these poles evolve according to a (complexified) CM system. As such CM systems are exactly-solvable \cite{olshanetsky1983}, this process provides classes of exact analytic solutions to the PDEs.
A complementary approach is to construct integrable systems with infinite degrees of freedom by taking continuum limits of CM systems. The long-range character of the interactions in the CM system corresponds to nonlocal terms in the continuum description, resulting in partial integro-differential equations of Benjamin-Ono type \cite[Chapter~4]{ablowitz1991}. This concept was pioneered by Abanov, Bettelheim, and Wiegmann \cite{abanov2009}, who showed that the continuum dynamics of the rational CM system is described by Euler hydrodynamic equations that are equivalent to an integro-differential variant of the nonlinear Schr\"{o}dinger equation \cite{matsuno2002}. Recent studies have applied this idea to CM-type systems with spin degrees of freedom, first introduced by Gibbons and Hermsen \cite{gibbons1984}; see also \cite{wojciechowski1985}. The half-wave maps (HWM) equation was derived in \cite{zhou2015} and \cite{lenzmann2018,lenzmann2020} as a continuum limit of a classical Haldane-Shastry spin chain \cite{haldane1988,shastry1988}, a limiting case of the trigonometric spin CM system \cite{polychronakos1993}. Lax integrability and an infinite number of conservation laws were established for the HWM equation in \cite{gerard2018} and multi-soliton solutions were constructed in \cite{berntson2020,matsuno2022}. Moreover, the HWM equation admits a family of periodic solutions governed by a trigonometric spin CM system \cite{berntson2020}. Thus, the HWM equation is linked in two distinct ways to the trigonometric spin CM system. In the present paper, we show that this twofold relation can be lifted to the elliptic setting.
The non-chiral intermediate Heisenberg ferromagnet (ncIHF) equation is a generalization of the HWM equation related to the elliptic spin CM system. Together with Langmann, we introduced the ncIHF equation in \cite{berntsonklabbers2021} as a continuum limit of a classical Inozemtsev spin chain \cite{inozemtsev1990}; the latter is simultaneously an elliptic generalization of the Haldane-Shastry spin chain and a limiting case of the elliptic spin CM system. It is important to note that the ncIHF equation comes in two related variants: (i) an equation with periodic boundary conditions and (ii) an equation posed on the real line, which may be obtained as an infinite-period limit of the first. In this paper, we study the former, which we call the \textit{periodic ncIHF equation}. Basic integrability results for the ncIHF equation on the real line, where the analysis is technically simpler, have already been obtained in \cite{berntsonklabbers2021}: the (aperiodic) ncIHF equation admits a Lax pair, an infinite number of conservation laws, and multi-soliton solutions governed by the hyperbolic spin CM system. One major result of this paper is that the periodic ncIHF equation admits a family of solutions, analogous to the multi-solitons of the (aperiodic) ncIHF equation, governed by the elliptic spin CM system. As the elliptic spin CM system is exactly-solvable \cite{krichever1995spin}, this gives a new class of exact analytic solutions to the periodic ncIHF equation.
The periodic ncIHF equation describes the time evolution of two coupled spin densities propagating on the circle of circumference $2\ell>0$; these spin densities are represented by functions\footnote{In this paper, we consider generally complex solutions of the periodic ncIHF equation. See Remark~\ref{rem:complex} for a discussion of this strategy.} $\mathbf{u},\mathbf{v}: \mathbb{R}\times \mathbb{R}\to {\mathbb C}^3$ of $(x,t)\in {\mathbb R}\times {\mathbb R}$ satisfying $\mathbf{u}(x+2\ell,t)=\mathbf{u}(x,t)$, $\mathbf{v}(x+2\ell,t)=\mathbf{v}(x,t)$, and $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$ for some constant $\rho\in{\mathbb C}$. The periodic ncIHF equation reads
\begin{equation}
\label{eq:ncIHF}
\begin{split}
\mathbf{u}_t=&+\mathbf{u}\wedge T\mathbf{u}_x - \mathbf{u}\wedge \tilde{T}\mathbf{v}_x, \\
\mathbf{v}_t=& - \mathbf{v}\wedge T\mathbf{v}_x+\mathbf{v}\wedge \tilde{T}\mathbf{u}_x,
\end{split}
\end{equation}
where $T$ and $\tilde{T}$ are integral operators which act componentwise on three-vectors and are defined by
\begin{equation}
\label{eq:TTe}
\begin{split}
(Tf)(x)\coloneqq &\;\frac{1}{\pi}\Xint-_{-\ell}^{\ell}\zeta_1(x'-x;\ell,{\rm i}\delta)f(x')\,\mathrm{d}x', \\
(\tilde{T} f)(x)\coloneqq &\;\frac{1}{\pi}\int_{-\ell}^{\ell} \zeta_1(x'-x+{\rm i}\delta;\ell,{\rm i}\delta) f(x')\, \mathrm{d}x',
\end{split}
\end{equation}
where the dashed integral indicates a principal value prescription and
\begin{equation}\label{eq:zeta1}
\zeta_1(z;\ell,{\rm i}\delta)\coloneqq \zeta(z;\ell,{\rm i}\delta)-\frac{\zeta(\ell; \ell,{\rm i}\delta)}{\ell}z \quad (z\in{\mathbb C}),
\end{equation}
with $\zeta(z;\ell,{\rm i}\delta)$ the Weierstrass $\zeta$-function with half-periods $\ell$ and ${\rm i}\delta$ ($\delta>0$). The function $\zeta_1(z;\ell,{\rm i}\delta)$ is $2\ell$-periodic and satisfies
\begin{equation}\label{eq:zetalimits}
\lim_{\ell\to\infty} \zeta_1(z;\ell,{\rm i}\delta)=\frac{\pi}{2\delta}\coth\bigg(\frac{\pi}{2\delta}z\bigg),\qquad \lim_{\ell\to\infty} \zeta_1(z+{\rm i}\delta;\ell,{\rm i}\delta)=\frac{\pi}{2\delta}\tanh\bigg(\frac{\pi}{2\delta}z\bigg).
\end{equation}
The ncIHF equation on the real line \cite{berntsonklabbers2021} is obtained in the $\ell\to\infty$ limit; it is given by \eqref{eq:ncIHF} with the $\ell \to \infty$ limit of the operators \eqref{eq:TTe} obtained using \eqref{eq:zetalimits}. We refer to \cite{berntsonlangmann2021} for further details on the relationship between the periodic and real-line versions of \eqref{eq:TTe}. Similarly, in the limit $\delta\to\infty$, it can be shown that \eqref{eq:ncIHF} reduces to two decoupled HWM equations related by a parity transformation $(\mathbf{u},\mathbf{v})\to (-P\mathbf{v},-P\mathbf{u})$, where $(Pf)(x,t)\coloneqq f(-x,t)$. More generally, the non-chirality of the ncIHF equation refers to the invariance of \eqref{eq:ncIHF} under this same parity transformation \cite{berntsonklabbers2021}.
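For orientation, we sketch how this decoupling arises, using the standard trigonometric degeneration of the Weierstrass functions: as $\delta\to\infty$,
\begin{equation*}
\zeta_1(z;\ell,{\rm i}\delta)\to \frac{\pi}{2\ell}\cot\bigg(\frac{\pi}{2\ell}z\bigg),
\end{equation*}
so that $T$ reduces to a multiple of the Hilbert transform on the circle of circumference $2\ell$, while $\zeta_1(z+{\rm i}\delta;\ell,{\rm i}\delta)$ tends to a $z$-independent constant, so that $\tilde{T}\mathbf{u}_x$ and $\tilde{T}\mathbf{v}_x$ vanish in the limit for $2\ell$-periodic $\mathbf{u}$ and $\mathbf{v}$.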
The periodic ncIHF equation generalizes known integrable systems (the HWM and real-line ncIHF equations) and originates from another (the elliptic spin CM system), but a Lax pair for it has not yet been established. While we regard the construction of such a Lax pair as an interesting question for future work, the class of exact solutions presented in this paper provides evidence for the integrability of the periodic ncIHF equation. The construction of these solutions is more involved than that of the analogous solutions of the ncIHF equation on the real line \cite{berntsonklabbers2021}, due to the presence of elliptic functions in both \eqref{eq:TTe} and the spin-pole ansatz \eqref{eq:ansatz} given below. More specifically, the spin-pole ansatz requires a dynamical background vector, and the resulting spin-pole dynamics must satisfy additional constraints compared to the real-line case. To overcome these complications and link the spin-pole dynamics to an elliptic spin CM system, we prove a new B\"{a}cklund transformation for the latter. This B\"{a}cklund transformation is a key result of this paper; we believe it is also of independent interest because it differs strikingly from the known B\"{a}cklund transformations in the degenerate cases \cite{gibbons1983,berntsonlangmannlenells2021,berntsonklabbers2020,berntsonklabbers2021}: a new degree of freedom, corresponding to the background vector in the spin-pole ansatz, is required to mediate the transformation between two solutions of spin CM systems.
In the remainder of this section, we focus on stating and describing our two main results, a B\"{a}cklund transformation for the elliptic spin CM system and a class of exact solutions of the periodic ncIHF equation, and describe the organization of the paper. Before proceeding, we introduce notation used in this section and throughout the paper.
\subsection{Notation}
We use the shorthand notation $\sideset{}{'}\sum_{k\neq j}^N$ for sums $\sideset{}{'}\sum_{k=1,k\neq j}^N$, etc. The components of a three-vector $\mathbf{s}\in{\mathbb C}^3$ are denoted by $(s^1,s^2,s^3)$ and the dot and cross products of two vectors $\mathbf{s}, \mathbf{t}\in {\mathbb C}^3$ are defined as
$\mathbf{s}\cdot\mathbf{t}=\sum_{a=1}^3 s^at^a$ and $\mathbf{s}\wedge\mathbf{t}=(s^2t^3-s^3t^2,s^3t^1-s^1t^3,s^1t^2-s^2t^1)$, respectively. The set of real vectors $\mathbf{s} \in {\mathbb R}^3$ satisfying $\mathbf{s}\cdot\mathbf{s}=1$, i.e., the two-sphere, is denoted by $S^2$. We write the zero vector as $\mathbf{0}=(0,0,0)$.
Dots above a variable indicate differentiation with respect to time while primes indicate differentiation with respect to the argument of a function. Complex conjugation and matrix transposition are denoted by $*$ and $\top$, respectively.
\subsection{B\"{a}cklund transformation for an elliptic spin CM system}
(Complexified) spin CM systems describe the time evolution of a system of $N\in{\mathbb Z}_{\geq 1}$ particles with internal degrees of freedom moving in the complex plane. We consider the case where the internal degrees of freedom can be represented by complex three-vectors, which is a special case of more general systems introduced by Gibbons and Hermsen \cite{gibbons1984} and Wojciechowski \cite{wojciechowski1985}; see \cite{berntsonklabbers2020} for the precise relation. Each particle is represented by a position $a_j=a_j(t)\in{\mathbb C}$ and a spin vector $\mathbf{s}_j=\mathbf{s}_j(t)\in {\mathbb C}^3$. We define the elliptic spin CM system to be the following system of equations,
\begin{subequations}\label{eq:sCM1}
\begin{align}
\ddot{a}_j=&\; -2\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\cdot\mathbf{s}_k \wp_2'(a_j-a_k) \quad (j=1,\ldots,N), \label{eq:sCMa} \\
\dot{\mathbf{s}}_j=&\; -2\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\wedge\mathbf{s}_k \wp_2(a_j-a_k) \quad (j=1,\ldots,N), \label{eq:sCMs}
\end{align}
\end{subequations}
where $\wp_2(z)$ is, up to an additive constant, the Weierstrass $\wp$-function with half-periods $\ell$ and ${\rm i}\delta$,
\begin{equation}\label{eq:wp2}
\wp_2(z;\ell,{\rm i}\delta)\coloneqq \wp(z;\ell,{\rm i}\delta)+\frac{\zeta({\rm i}\delta;\ell,{\rm i}\delta)}{{\rm i}\delta} \quad (z\in{\mathbb C}).
\end{equation}
\begin{remark}\label{rem:equivalent}
Our definition of the elliptic spin CM system \eqref{eq:sCM1} differs from others in the literature, e.g. \cite{krichever1995spin,berntsonlangmannlenells2021}. More specifically, we use the potential $\wp_2(z)$ in place of either $\wp(z)$ or $\wp_1(z)\coloneqq \wp(z)+\zeta(\ell)/\ell$, which differ from $\wp_2(z)$ by additive constants. However, by multiplying each $\mathbf{s}_j$ in \eqref{eq:sCM1} by an appropriate time-dependent complex rotation $\mathsf{R}=\mathsf{R}(t)\in \mathrm{SO}(3;{\mathbb C})$, the potential $\wp_2(z)$ can be shifted to $\wp_2(z)+c$ for any constant $c\in{\mathbb C}$. A proof of this claim can be found in Appendix~\ref{app:rotation}.
\end{remark}
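Heuristically, the shift can be understood as follows: replacing $\wp_2(z)$ by $\wp_2(z)+c$ in \eqref{eq:sCMs} adds the term
\begin{equation*}
-2c\, \mathbf{s}_j\wedge\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_k=2c\,\Bigg(\sideset{}{'}\sum_{k=1}^N\mathbf{s}_k\Bigg)\wedge\mathbf{s}_j,
\end{equation*}
which generates a uniform (complex) rotation of all spins about the total spin $\sideset{}{'}\sum_{k=1}^N\mathbf{s}_k$; the latter is conserved (cf.\ Lemma~\ref{lem:totalspin}), and both $\mathbf{s}_j\cdot\mathbf{s}_k$ and \eqref{eq:sCMa} are unaffected by such a rotation.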
Elliptic spin CM systems are known to be exactly-solvable \cite{krichever1995spin}, which gives, in principle, exact analytic solutions of the periodic ncIHF equation via our main result, Theorem~\ref{thm:main}, presented below. From our current perspective, the most important property of \eqref{eq:sCM1} is the existence of a Bäcklund transformation relating certain distinct solutions of the elliptic spin CM system; later we will employ this B\"{a}cklund transformation to link the periodic ncIHF equation to the elliptic spin CM system. A Bäcklund transformation valid for the rational, trigonometric, and hyperbolic Gibbons-Hermsen spin CM systems \cite{gibbons1984} was presented in \cite{gibbons1983}; see \cite{berntsonlangmannlenells2021} for a detailed proof. We now describe a Bäcklund transformation for the elliptic spin CM system \eqref{eq:sCM1} subjected to certain constraints which arise in our analysis of the periodic ncIHF equation \eqref{eq:ncIHF}.
Consider a second elliptic spin CM system for $M\in{\mathbb Z}_{\geq 1}$ particles described by positions $b_j=b_j(t)$ and spin vectors $\mathbf{t}_j=\mathbf{t}_j(t)\in{\mathbb C}^3$; the equations of motion read
\begin{subequations}\label{eq:sCM2}
\begin{align}
\ddot{b}_j=&\; -2\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_j\cdot\mathbf{t}_k \wp_2'(b_j-b_k) \quad (j=1,\ldots,M), \label{eq:sCMb} \\
\dot{\mathbf{t}}_j=&\; -2\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_j\wedge\mathbf{t}_k \wp_2(b_j-b_k) \quad (j=1,\ldots,M). \label{eq:sCMt}
\end{align}
\end{subequations}
Under appropriate circumstances, solutions of \eqref{eq:sCM1} and \eqref{eq:sCM2} may be related via a system of first-order differential equations involving also a vector $\boldsymbol{\phi}=\boldsymbol{\phi}(t)\in{\mathbb C}^3$,
\begin{equation}
\begin{split}\label{eq:ajdot}
\mathbf{s}_j\dot{a}_j=&\; -\mathbf{s}_j\wedge \Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^N\mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^M \mathbf{t}_k \zeta_2(a_j-b_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,N), \\
\mathbf{t}_j\dot{b}_j=&\; +\mathbf{t}_j\wedge \Bigg({\rm i}\boldsymbol{\phi}+\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_k\zeta_2(b_j-b_k)-\sideset{}{'}\sum_{k=1}^N \mathbf{s}_k \zeta_2(b_j-a_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,M)
\end{split}
\end{equation}
and
\begin{equation}\label{eq:phidot}
\mspace{4mu}\dot{\mspace{-4mu}\bphi} =\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\wedge\mathbf{s}_k f_2'(a_j-a_k)-\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^M\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_j\wedge\mathbf{t}_k f_2'(b_j-b_k),
\end{equation}
where
\begin{equation}\label{eq:zeta2}
\zeta_2(z;\ell,{\rm i}\delta)\coloneqq \zeta(z;\ell,{\rm i}\delta)-\frac{\zeta({\rm i}\delta;\ell,{\rm i}\delta)}{{\rm i}\delta}z \quad (z\in {\mathbb C})
\end{equation}
(note that $\wp_2(z)=-\zeta_2'(z)$) and
\begin{equation}\label{eq:f2}
f_2(z;\ell,{\rm i}\delta)\coloneqq \zeta_2(z;\ell,{\rm i}\delta)^2-\wp_2(z;\ell,{\rm i}\delta) \quad (z\in{\mathbb C}).
\end{equation}
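For later use we note that, since $\wp_2(z)=-\zeta_2'(z)$, differentiating \eqref{eq:f2} gives
\begin{equation*}
f_2'(z)=-2\zeta_2(z)\wp_2(z)-\wp_2'(z),
\end{equation*}
a relation that reappears as \eqref{eq:EllipticId1} in Section~\ref{sec:conserved}.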
The precise statement is as follows.
\begin{theorem}\label{thm:backlund}
Let $N,M\in {\mathbb Z}_{\geq 1}$ and $T>0$. Suppose that $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ is a solution of the first-order system \eqref{eq:sCMs}, \eqref{eq:sCMt}, \eqref{eq:ajdot}--\eqref{eq:phidot} on the interval $[0,T)$ and that the following constraints hold at $t=0$,
\begin{align}
&\mathbf{s}_j^2=0 \quad (j=1,\ldots,N),\qquad \mathbf{t}_j^2=0 \quad (j=1,\ldots,M), \label{eq:constraint1}\\
\begin{split}
&\mathbf{s}_j\cdot \Bigg({\rm i} \boldsymbol{\phi}- \sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_k\zeta_2(a_j-a_k) + \sideset{}{'}\sum_{k=1}^M \mathbf{t}_k \zeta_2(a_j-b_k+{\rm i}\delta) \Bigg)=0 \quad (j=1,\ldots,N), \label{eq:constraint2} \\
&\mathbf{t}_j\cdot \Bigg({\rm i} \boldsymbol{\phi}+ \sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_k\zeta_2(b_j-b_k) - \sideset{}{'}\sum_{k=1}^N \mathbf{s}_k \zeta_2(b_j-a_k+{\rm i}\delta) \Bigg)=0 \quad (j=1,\ldots,M),
\end{split} \\
&\sideset{}{'}\sum_{j=1}^N \mathbf{s}_j-\sideset{}{'}\sum_{j=1}^M \mathbf{t}_j=\boldsymbol{0}. \label{eq:constraint3}
\end{align}
Then, the second-order equations \eqref{eq:sCMa} and \eqref{eq:sCMb} hold on $[0,T)$.
\end{theorem}
A proof of Theorem~\ref{thm:backlund} is given in Section~\ref{sec:Backlund}. Each of the constraints \eqref{eq:constraint1}-\eqref{eq:constraint3} corresponds to a conserved quantity; if the constraints are satisfied at $t=0$, they also hold at future times when the first-order equations \eqref{eq:phidot}, \eqref{eq:ajdot}, \eqref{eq:sCMs}, and \eqref{eq:sCMt} are satisfied; this fact is proven for \eqref{eq:constraint1} and \eqref{eq:constraint2} in Proposition~\ref{prop:conserved} and for \eqref{eq:constraint3} in Lemma~\ref{lem:totalspin}.
\begin{remark}
The terms ${\rm i}\delta$ appearing in arguments of functions in \eqref{eq:ajdot} and \eqref{eq:constraint2} can be removed by the transformation
\begin{equation}\label{eq:transformation}
a_j\to a_j-{\rm i}\delta/2 \quad (j=1,\ldots,N),\qquad b_j\to b_j+{\rm i}\delta/2 \quad (j=1,\ldots,M),
\end{equation}
using the fact that $\zeta_2(z)$ is $2{\rm i}\delta$-periodic \eqref{eq:imperiod}. The transformation \eqref{eq:transformation} leaves \eqref{eq:sCM1}, \eqref{eq:sCM2}, \eqref{eq:phidot}, \eqref{eq:constraint1}, and \eqref{eq:constraint3} unchanged. We will use Theorem~\ref{thm:backlund} in the proof of Theorem~\ref{thm:main} below; for this application, it is convenient to have the terms ${\rm i}\delta$ in place.
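For instance, after the transformation \eqref{eq:transformation} the first equation in \eqref{eq:ajdot} takes the form
\begin{equation*}
\mathbf{s}_j\dot{a}_j= -\mathbf{s}_j\wedge \Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^N\mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^M \mathbf{t}_k \zeta_2(a_j-b_k)\Bigg) \quad (j=1,\ldots,N),
\end{equation*}
and similarly for the second equation in \eqref{eq:ajdot} and for \eqref{eq:constraint2}.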
\end{remark}
\subsection{A class of elliptic solutions of the periodic ncIHF equation}
We construct solutions of the periodic ncIHF equation with dynamics governed by a pair of elliptic spin CM systems, which are related to each other through the B\"{a}cklund transformation of Theorem~\ref{thm:backlund}. More specifically, we make the ansatz for solutions of the periodic ncIHF equation,
\begin{align}\label{eq:ansatz}
\left(\begin{array}{c} \mathbf{u}(x,t) \\ \mathbf{v}(x,t) \end{array}\right)= \boldsymbol{\phi}(t)\left(\begin{array}{c} 1 \\ 1 \end{array}\right) &\; +{\rm i} \sideset{}{'}\sum_{j=1}^N \mathbf{s}_j(t) \left(\begin{array}{c} \zeta_2(x-a_j(t)+{\rm i}\delta/2) \\ \zeta_2(x-a_j(t)-{\rm i}\delta/2) \end{array}\right) \nonumber\\
&\; -{\rm i} \sideset{}{'}\sum_{j=1}^M \mathbf{t}_j(t) \left(\begin{array}{c} \zeta_2(x-b_j(t)-{\rm i}\delta/2) \\ \zeta_2(x-b_j(t)+{\rm i}\delta/2) \end{array}\right),
\end{align}
where $\boldsymbol{\phi}(t), \mathbf{s}_j(t),\mathbf{t}_j(t)\in {\mathbb C}^3$ and $a_j(t),b_j(t)\in {\mathbb C}$ and show that these parameters must satisfy the assumptions of Theorem~\ref{thm:backlund}. In this case, the ansatz \eqref{eq:ansatz} will satisfy $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$, for some constant $\rho\in {\mathbb C}$, provided certain constraints on the initial values of the parameters are fulfilled. Theorem~\ref{thm:backlund} and standard results concerning the existence and uniqueness of solutions to systems of ODEs allow us to formulate our result as a relation between (i) certain solutions of the elliptic spin CM systems \eqref{eq:sCM1}, \eqref{eq:sCM2} and background dynamics \eqref{eq:phidot} and (ii) a class of solutions of the periodic ncIHF equation satisfying $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$. The precise statement is now given.
\begin{theorem}\label{thm:main}
For $N,M\in {\mathbb Z}_{\geq 1}$ and $T>0$, let $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ be a solution of the system of equations \eqref{eq:sCM1}, \eqref{eq:sCM2}, and \eqref{eq:phidot} on the interval $[0,T)$ with initial conditions that satisfy \eqref{eq:ajdot}, \eqref{eq:constraint1}-\eqref{eq:constraint3}, and
\begin{align}\label{eq:constraint4}
\boldsymbol{\phi}^2=&\; \rho^2+\frac12 \sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_{j}\cdot \mathbf{s}_{k} f_2(a_{j}-a_{k})+\frac12\sideset{}{'}\sum_{j=1}^M\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_{j}\cdot \mathbf{t}_{k} f_2(b_{j}-b_{k}) \nonumber\\
&\; -\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k=1}^M \mathbf{s}_{j}\cdot \mathbf{t}_{k} f_2(a_{j}-b_{k}+{\rm i}\delta)
\end{align}
for some constant $\rho\in {\mathbb C}$ at $t=0$. Moreover, suppose that the conditions
\begin{equation}\label{eq:imaj}
\frac{\delta}{2}<\mathrm{Im} \,a_j(t)<\frac{3\delta}{2} \quad (j=1,\ldots,N),\qquad -\frac{3\delta}{2}<\mathrm{Im} \,b_j(t)< -\frac{\delta}{2} \quad (j=1,\ldots,M),
\end{equation}
\begin{equation}\label{eq:ajakbjbk}
a_j(t)\neq a_k(t) \quad (1\leq j<k\leq N),\qquad b_j(t)\neq b_k(t) \quad (1\leq j<k\leq M),
\end{equation}
and
\begin{equation}\label{eq:sjtj}
\mathbf{s}_j\neq \boldsymbol{0} \quad (j=1,\ldots,N),\qquad \mathbf{t}_j\neq \boldsymbol{0} \quad (j=1,\ldots,M)
\end{equation}
hold for $t\in [0,T)$. Then, for all $t\in [0,T)$ such that the functions $\mathbf{u}(x,t)$ and $\mathbf{v}(x,t)$ in \eqref{eq:ansatz} are differentiable with respect to $x$ and $t$ for all $x\in [-\ell,\ell)$, \eqref{eq:ansatz} provides an exact solution of the periodic ncIHF equation \eqref{eq:ncIHF} satisfying $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$.
\end{theorem}
\begin{remark}
It is not obvious that the ansatz \eqref{eq:ansatz} is $2\ell$-periodic. The function $\zeta_2(z)$ is $2\ell$-quasi-periodic \eqref{eq:realperiod} and hence $\mathbf{u}(x+2\ell)-\mathbf{u}(x)$ and $\mathbf{v}(x+2\ell)-\mathbf{v}(x)$ are proportional to $\sideset{}{'}\sum_{j=1}^N \mathbf{s}_j-\sideset{}{'}\sum_{j=1}^M \mathbf{t}_j$, i.e., the left hand side of \eqref{eq:constraint3}. We later show that \eqref{eq:constraint3} corresponds to a conserved quantity of the elliptic spin CM system: if it is satisfied at $t=0$, as required in Theorem~\ref{thm:main}, then it holds for $t\in [0,T)$, see Lemma~\ref{lem:totalspin}. The constraint \eqref{eq:constraint3} is also required for $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$, see Proposition~\ref{prop:constraints}.
\end{remark}
\begin{remark}\label{rem:complex}
We emphasize that the solutions in Theorem~\ref{thm:main} are generically complex-valued, i.e., $\mathbf{u}(x,t),\mathbf{v}(x,t)\in {\mathbb C}^3$ and satisfy $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$ for some constant $\rho\in {\mathbb C}$. Real-valued solutions of unit length are described by the consistent reduction $M=N$, $\rho=1$, $\boldsymbol{\phi}^*=\boldsymbol{\phi}$, $b_j=a_j^*$, and $\mathbf{t}_j=\mathbf{s}_j^*$ of the theorem, which is given as Corollary~\ref{cor:main} in Section~\ref{sec:explicit}, where examples of such solutions are presented. We have chosen our approach because (i) the proofs in the generic, complex case are no more difficult than in the real case and (ii) at least one interesting class of solutions, considered in Section~\ref{subsec:tw}, is necessarily complex: in the case $N=M=1$ of Theorem~\ref{thm:main}, which contains one-soliton, traveling wave solutions in the analogous real-line case \cite[Section~6.1]{berntsonklabbers2021}, there is no solution of the constraints \eqref{eq:constraint1} and \eqref{eq:constraint3} satisfying $\mathbf{s}_1^*=\mathbf{t}_1$; all solutions obtained under these conditions from Theorem~\ref{thm:main} are complex.
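Indeed, for $N=M=1$ the constraint \eqref{eq:constraint3} forces $\mathbf{t}_1=\mathbf{s}_1$, so $\mathbf{t}_1=\mathbf{s}_1^*$ would make $\mathbf{s}_1$ real-valued; the constraint \eqref{eq:constraint1} then gives $\mathbf{s}_1^2=0$ and hence $\mathbf{s}_1=\boldsymbol{0}$, which is excluded by \eqref{eq:sjtj}.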
\end{remark}
\subsection{Plan of the paper}
We prove Theorem~\ref{thm:main} by establishing a sequence of intermediate results including Theorem~\ref{thm:backlund}. In Section~\ref{sec:solitons}, we derive constraints on the parameters in \eqref{eq:ansatz} and we show that the parameters satisfy the first-order system of ODEs of Theorem~\ref{thm:backlund}. We show that this system of ODEs preserves the constraints \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} in Section~\ref{sec:conserved}. In Section~\ref{sec:Backlund}, we prove the Bäcklund transformation, Theorem~\ref{thm:backlund}, in the course of proving Theorem~\ref{thm:main}. Examples of solutions of the ncIHF equation from Theorem~\ref{thm:main} are constructed in Section~\ref{sec:explicit}. Appendix~\ref{app:elliptic} contains identities for the special functions used in the paper. Appendix~\ref{app:rotation} contains a formal statement and proof of the claim in Remark~\ref{rem:equivalent}.
\section{Constraints and first-order dynamics}\label{sec:solitons}
We derive conditions under which the ansatz \eqref{eq:ansatz} satisfies (i) $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$ and (ii) solves \eqref{eq:ncIHF}. The first requirement yields a number of nonlinear constraints on the parameters appearing in \eqref{eq:ansatz}, which are obtained in Section~\ref{subsec:constraints}. In Section~\ref{subsec:firstorder}, we show that when the ansatz \eqref{eq:ansatz} is subjected to one of these constraints and inserted into \eqref{eq:ncIHF}, the latter is reduced to a system of first-order ODEs.
To prove results in this section, we employ certain notation developed in \cite{berntsonklabbers2021}. Given ${\mathbb C}$-valued functions $F_j,G_j$, $j=1,2$, we form two-vectors and define the following product,
\begin{equation}
\left(\begin{array}{c} F_1 \\ F_2 \end{array}\right)\circ \left(\begin{array}{c} G_1 \\ G_2 \end{array}\right)\coloneqq \left(\begin{array}{c} F_1G_1 \\ F_2G_2\end{array}\right).
\end{equation}
Similarly, we can combine pairs of three-vectors $\mathbf{a}_j,\mathbf{b}_j\in {\mathbb C}^3$, $j=1,2$, and define analogs of the dot and wedge products,
\begin{equation}\label{eq:dotwedgecirc}
\left(\begin{array}{c} \mathbf{a}_1 \\ \mathbf{a}_2 \end{array}\right)\dotcirc\left(\begin{array}{c} \mathbf{b}_1 \\ \mathbf{b}_2 \end{array}\right)=\left(\begin{array}{c} \mathbf{a}_1\cdot\mathbf{b}_1 \\ \mathbf{a}_2\cdot\mathbf{b}_2 \end{array}\right),\qquad \left(\begin{array}{c} \mathbf{a}_1 \\ \mathbf{a}_2 \end{array}\right)\wedgecirc\left(\begin{array}{c} \mathbf{b}_1 \\ \mathbf{b}_2 \end{array}\right)=\left(\begin{array}{c} \mathbf{a}_1\wedge\mathbf{b}_1 \\ \mathbf{a}_2\wedge\mathbf{b}_2 \end{array}\right).
\end{equation}
By defining
\begin{equation}
\mathbf{U}(x,t)=\left(\begin{array}{c} \mathbf{u}(x,t) \\ \mathbf{v}(x,t) \end{array}\right)
\end{equation}
and using \eqref{eq:dotwedgecirc} we may write the periodic ncIHF equation \eqref{eq:ncIHF} as
\begin{equation}\label{eq:ncIHF2}
\mathbf{U}_t=\mathbf{U}\wedgecirc \mathcal{T}\mathbf{U}_x,
\end{equation}
where
\begin{equation}\label{eq:cT}
\mathcal{T}=\left(\begin{array}{cc} T & -\tilde{T} \\ \tilde{T} & -T \end{array}\right), \qquad \mathcal{T}: \left(\begin{array}{c} \mathbf{u}_x \\ \mathbf{v}_x\end{array}\right)\mapsto \left(\begin{array}{c} T\mathbf{u}_x-\tilde{T}\mathbf{v}_x \\ \tilde{T}\mathbf{u}_ x-T\mathbf{v}_x \end{array}\right)
\end{equation}
with $T$ and $\tilde{T}$ as defined in \eqref{eq:TTe}.
It is also useful to write the ansatz \eqref{eq:ansatz} using this two-vector notation. We define
\begin{equation}\label{eq:EA}
E\coloneqq \left(\begin{array}{c} 1 \\ 1\end{array}\right),\qquad A_{\pm}(z)\coloneqq \left(\begin{array}{c} \zeta_2(z\pm{\rm i}\delta/2) \\ \zeta_2(z\mp{\rm i}\delta/2) \end{array}\right) \quad (z\in {\mathbb C}),
\end{equation}
so that \eqref{eq:ansatz} can be written as
\begin{equation}\label{eq:ansatzSH}
\mathbf{U}(x,t)=\boldsymbol{\phi}(t) E+{\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j \mathbf{s}_j(t) A_{r_j}(x-a_j(t)),
\end{equation}
using also the shorthand notation
\begin{equation}\label{eq:shorthand}
(a_j,\mathbf{s}_j,r_j)\coloneqq \begin{cases}
(a_j,\mathbf{s}_j,+) & j=1,\ldots,N, \\
(b_j,\mathbf{t}_j,-) & j=N+1,\ldots,{\mathcal N},
\end{cases} \qquad {\mathcal N}=N+M.
\end{equation}
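For instance, in the simplest case $N=M=1$, the ansatz \eqref{eq:ansatzSH} reads
\begin{equation*}
\mathbf{U}(x,t)=\boldsymbol{\phi}(t) E+{\rm i}\,\mathbf{s}_1(t) A_{+}(x-a_1(t))-{\rm i}\,\mathbf{t}_1(t) A_{-}(x-b_1(t)),
\end{equation*}
in agreement with \eqref{eq:ansatz}.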
\subsection{Constraints} \label{subsec:constraints}
The following proposition establishes the conditions required for the functions in ansatz \eqref{eq:ansatz} to have constant length.
\begin{proposition}\label{prop:constraints}
The functions $\mathbf{u}(x,t)$ and $\mathbf{v}(x,t)$ in \eqref{eq:ansatz} satisfy $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$ if and only if the parameters $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ satisfy the conditions \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4}.
\end{proposition}
\begin{proof}
Using \eqref{eq:ansatzSH} and \eqref{eq:dotwedgecirc}, we compute
\begin{align}\label{eq:UdotU1}
\mathbf{U}\dotcirc\mathbf{U}= &\; \boldsymbol{\phi}^2 E+2{\rm i} \boldsymbol{\phi}\cdot \sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\mathbf{s}_jA_{r_j}(x-a_j)-\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k=1}^{{\mathcal N}} r_j r_k \mathbf{s}_j\cdot\mathbf{s}_k A_{r_j}(x-a_j)\circ A_{r_k}(x-a_k).
\end{align}
To proceed, we need the identities
\begin{equation}\label{eq:Aj2Id}
A_{r_j}(x-a_j)\circ A_{r_j}(x-a_j)=-A_{r_j}'(x-a_j)+F_{r_j}(x-a_j)
\end{equation}
and
\begin{align}\label{eq:AjAkId}
A_{r_j}(x-a_j)\circ A_{r_k}(x-a_k)=&\; \zeta_2(\tilde{a}_j-\tilde{a}_k)\big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big) \nonumber \\
&\; +\frac12 \big(F_{r_j}(x-a_j)+F_{r_k}(x-a_k)\big)+\frac12 f_2(\tilde{a}_j-\tilde{a}_k)E +\frac{3\zeta({\rm i} \delta)}{2\delta}E,
\end{align}
where
\begin{equation}\label{eq:at}
\tilde{a}_j\coloneqq a_j-{\rm i} r_j\delta/2 \quad (j=1,\ldots,{\mathcal N})
\end{equation}
and
\begin{equation}\label{eq:F}
F_{\pm}(z)\coloneqq \left(\begin{array}{c} f_2(z\pm{\rm i}\delta/2) \\ f_2(z\mp{\rm i}\delta/2) \end{array}\right) \quad (z\in{\mathbb C}).
\end{equation}
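In particular, componentwise \eqref{eq:Aj2Id} amounts to the relation $\zeta_2(w)^2=\wp_2(w)+f_2(w)$, i.e., the definition \eqref{eq:f2}, evaluated at $w=x-a_j\pm{\rm i}\delta/2$.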
The identities \eqref{eq:Aj2Id} and \eqref{eq:AjAkId} follow from the elliptic identities \eqref{eq:IdV} and \eqref{eq:Idmain}, respectively, together with the definitions of $E$, $A_{\pm}(z)$ \eqref{eq:EA} and $F_{\pm}(z)$ \eqref{eq:F}. We evaluate the double sum in \eqref{eq:UdotU1} using \eqref{eq:Aj2Id} for $j=k$ and \eqref{eq:AjAkId} for $j\neq k$:
\begin{align}\label{eq:UdotU2}
\mathbf{U}\dotcirc\mathbf{U}=&\; \boldsymbol{\phi}^2 E +2{\rm i} \boldsymbol{\phi}\cdot \sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\mathbf{s}_j A_{r_j}(x-a_j)+\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j^2 A_{r_j}'(x-a_j)-\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j^2 F_{r_j}(x-a_j) \nonumber\\
&\; -\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k \mathbf{s}_j\cdot\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k) \big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big) \nonumber\\
&\; -\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot\mathbf{s}_k \big(F_{r_j}(x-a_j)+F_{r_k}(x-a_k)+f_2(\tilde{a}_j-\tilde{a}_k)E \big) \nonumber \\
&\;
-\frac{3\zeta({\rm i} \delta)}{2\delta}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k\mathbf{s}_j\cdot\mathbf{s}_k E.
\end{align}
Next, we recall that the functions $\zeta_2(z)$ and $f_2(z)$ appearing in $A_{\pm}(z)$ and $F_{\pm}(z)$ are odd and even, respectively \eqref{eq:parity}. Using this symmetry to rewrite the double sums in the second and third lines of \eqref{eq:UdotU2} and collecting terms, we find
\begin{align}\label{eq:UdotU3}
\mathbf{U}\dotcirc\mathbf{U}=&\; \boldsymbol{\phi}^2 E+2{\rm i} \boldsymbol{\phi}\cdot \sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\mathbf{s}_j A_{r_j}(x-a_j)+\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j^2 A_{r_j}'(x-a_j)-\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j^2 F_{r_j}(x-a_j) \nonumber\\
&\; -2\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k \mathbf{s}_j\cdot\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k)A_{r_j}(x-a_j) - \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot\mathbf{s}_k F_{r_j}(x-a_j) \nonumber \\
&\; -\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot\mathbf{s}_k f_2(\tilde{a}_j-\tilde{a}_k) E -\frac{3\zeta({\rm i} \delta)}{2\delta}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k\mathbf{s}_j\cdot\mathbf{s}_k E \nonumber\\
=&\; \Bigg( \boldsymbol{\phi}^2-\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot\mathbf{s}_k f_2(\tilde{a}_j-\tilde{a}_k)-\frac{3\zeta({\rm i} \delta)}{2\delta}\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k\mathbf{s}_j\cdot\mathbf{s}_k\Bigg) E \nonumber\\
&\; +2\sideset{}{'}\sum_{j=1}^{{\mathcal N}}r_j\mathbf{s}_j \cdot \Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k) \Bigg) A_{r_j}(x-a_j) \nonumber \\
&\; +\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j^2 A_{r_j}'(x-a_j)-\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\Bigg(\mathbf{s}_j^2+\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot \mathbf{s}_k\Bigg) F_{r_j}(x-a_j).
\end{align}
We set \eqref{eq:UdotU3} equal to $\rho^2E$ and note that $E$, $\{A_{r_j}(x-a_j)\}_{j=1}^{{\mathcal N}}$, $\{A_{r_j}'(x-a_j)\}_{j=1}^{{\mathcal N}}$, and $\{F_{r_j}(x-a_j)\}_{j=1}^{{\mathcal N}}$ are linearly independent as a consequence of \eqref{eq:imaj}--\eqref{eq:ajakbjbk}. The terms proportional to $A'_{r_j}(x-a_j)$ and $A_{r_j}(x-a_j)$ give the conditions
\begin{equation}
\mathbf{s}_j^2=0 \quad (j=1,\ldots,{\mathcal N}), \label{eq:constraint1SH}
\end{equation}
and
\begin{equation}
\mathbf{s}_j\cdot\Bigg(\boldsymbol{\phi}+{\rm i}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k \mathbf{s}_k\zeta_2(\tilde{a}_j-\tilde{a}_k)\Bigg)=0 \quad (j=1,\ldots,{\mathcal N}), \label{eq:constraint2SH}
\end{equation}
respectively. By inserting \eqref{eq:constraint1SH} into the sum in $F_{r_j}(x-a_j)$ in \eqref{eq:UdotU3}, we obtain
\begin{align}\label{eq:constraint3SH}
\sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\mathbf{s}_j=\boldsymbol{0},
\end{align}
and by inserting \eqref{eq:constraint3SH} into the sum proportional to $E$ in \eqref{eq:UdotU3}, we obtain
\begin{equation}\label{eq:constraint4SH}
\boldsymbol{\phi}^2-\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k \mathbf{s}_j\cdot\mathbf{s}_k f_2(\tilde{a}_j-\tilde{a}_k)=\rho^2.
\end{equation}
The constraints \eqref{eq:constraint1SH}--\eqref{eq:constraint4SH} are seen to be equivalent to \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} after recalling the notation \eqref{eq:shorthand} and \eqref{eq:at}.
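(For the mixed terms, one uses that $\tilde{a}_j-\tilde{a}_k=a_j-b_{k-N}-{\rm i}\delta$ when $j\leq N<k\leq{\mathcal N}$, together with the $2{\rm i}\delta$-periodicity of $\zeta_2(z)$ and $f_2(z)$ \eqref{eq:imperiod}, which allows the arguments $a_j-b_k-{\rm i}\delta$ to be replaced by $a_j-b_k+{\rm i}\delta$.)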
\end{proof}
\subsection{First-order dynamics}\label{subsec:firstorder}
The following proposition describes conditions, in the form of a system of ODEs, under which the ansatz \eqref{eq:ansatz} solves the periodic ncIHF equation \eqref{eq:ncIHF} (without the requirement that $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$).
\begin{proposition}\label{prop:firstorder}
Suppose that $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ is a solution of the system of equations \eqref{eq:sCMs}, \eqref{eq:sCMt}, and \eqref{eq:ajdot}--\eqref{eq:phidot} on an interval $[0,T)$
with initial conditions that satisfy \eqref{eq:constraint3} at $t=0$. Moreover, suppose that \eqref{eq:imaj} and \eqref{eq:ajakbjbk} hold for all $t\in [0,T)$. Then, for $t\in[0,T)$ such that the functions $\mathbf{u}(x,t),\mathbf{v}(x,t)$ in \eqref{eq:ansatz} are differentiable with respect to $x$ and $t$ for all $x\in [-\ell,\ell)$, \eqref{eq:ansatz} provides an exact solution of the periodic ncIHF equation \eqref{eq:ncIHF}.
\end{proposition}
\begin{proof}
We compute both terms in the periodic ncIHF equation in the form \eqref{eq:ncIHF2}, again making use of the form \eqref{eq:ansatzSH} of the pole ansatz \eqref{eq:ansatz} and the shorthand notation \eqref{eq:shorthand} and \eqref{eq:at}. First, we have
\begin{align}\label{eq:Ut}
\mathbf{U}_t=&\; \mspace{4mu}\dot{\mspace{-4mu}\bphi} E +{\rm i} \sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\big(\dot{\mathbf{s}}_j A_{r_j}(x-a_j) -\mathbf{s}_j\dot{a}_j A_{r_j}'(x-a_j) \big),
\end{align}
using that $\dot{\tilde{a}}_j=\dot{a}_j$. We compute the remaining term in \eqref{eq:ncIHF2} in several steps. To compute $\mathcal{T} \mathbf{U}_x$, we use the fact that the $A_{r_j}'(x-a_j)$ are eigenfunctions of $\mathcal{T}$ when \eqref{eq:imaj} holds,
\begin{equation}\label{eq:cTA}
(\mathcal{T} A_{r_j}'(\cdot-a_j))(x)=-{\rm i} r_j A_{r_j}'(x-a_j) \quad (j=1,\ldots,{\mathcal N}).
\end{equation}
The identity \eqref{eq:cTA} is established by verifying that the functions $\wp_2(z-a_j)$ appearing in $A_{\pm}'(z)$ \eqref{eq:EA} satisfy the conditions of the following result proved in \cite[Appendix A]{berntson2020}:\footnote{Note that the definition of $\mathcal{T}$ in this paper differs from that in \cite{berntson2020} by a similarity transformation, $\mathcal{T}\to D\mathcal{T} D^{-1}$ with $D\coloneqq \mathrm{diag}(1,-1)$; we have modified the statement of the result in \cite{berntson2020} accordingly to meet our present needs.} \textit{for a $2\ell$-periodic function $g(z)$ analytic in a strip $-d<\mathrm{Im} \,z<d$ with $d>\delta/2$ and satisfying $\int_{-\ell}^{\ell}g(x)\,\mathrm{d}x=0$, the functions $G_{\pm}(x)\coloneqq (g(x\pm{\rm i}\delta/2),g(x\mp{\rm i}\delta/2))^{\top}$ are eigenfunctions of the operator $\mathcal{T}$ with eigenvalues $\mp {\rm i}$}.
Differentiating \eqref{eq:ansatzSH} with respect to $x$ gives
\begin{equation}\label{eq:Ux}
\mathbf{U}_x={\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j \mathbf{s}_j A_{r_j}'(x-a_j)
\end{equation}
and hence \eqref{eq:cTA} implies
\begin{equation}\label{eq:TUx}
\mathcal{T} \mathbf{U}_x={\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j\mathbf{s}_j\mathcal{T} A_{r_j}'(x-a_j)=\sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j A_{r_j}'(x-a_j).
\end{equation}
We compute $\mathbf{U}\wedgecirc\mathcal{T}\mathbf{U}_x$ by combining \eqref{eq:ansatzSH} with \eqref{eq:TUx}:
\begin{align}\label{eq:UwedgeTUx1}
\mathbf{U} \wedgecirc \mathcal{T}\mathbf{U}_x=&\; \boldsymbol{\phi} E \wedgecirc \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j A_{r_j}'(x-a_j)+{\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}} r_j \mathbf{s}_j A_{r_j}(x-a_j) \wedgecirc \sideset{}{'}\sum_{k=1}^{{\mathcal N}} \mathbf{s}_k A_{r_k}'(x-a_k) \nonumber\\
=&\; \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \boldsymbol{\phi}\wedge\mathbf{s}_j A_{r_j}'(x-a_j) +{\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j\mathbf{s}_j\wedge\mathbf{s}_k A_{r_j}(x-a_j)\circ A_{r_k}'(x-a_k).
\end{align}
Differentiating \eqref{eq:AjAkId} with respect to $\tilde{a}_k$ gives
\begin{align}\label{eq:ArjArk}
A_{r_j}(x-a_j)\circ A_{r_k}'(x-a_k)=&\; -\wp_2(\tilde{a}_j-\tilde{a}_k)\big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big)-\zeta_2(\tilde{a}_j-\tilde{a}_k)A_{r_k}'(x-a_k) \nonumber \\
&\;+\frac12 F_{r_k}'(x-a_k) +\frac12 f_2'(\tilde{a}_j-\tilde{a}_k)E;
\end{align}
inserting this identity into \eqref{eq:UwedgeTUx1}, we have
\begin{align}\label{eq:UwedgeTUx2}
\mathbf{U}\wedgecirc\mathcal{T}\mathbf{U}_x=&\; \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \boldsymbol{\phi}\wedge\mathbf{s}_j A_{r_j}'(x-a_j) -{\rm i} \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)\big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big)\nonumber \\
&\; -{\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k)A_{r_k}'(x-a_k)+\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k F_{r_k}'(x-a_k) \nonumber \\
&\; +\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k f_{2}'(\tilde{a}_j-\tilde{a}_k)E.
\end{align}
Next, since $\wedge$ is antisymmetric and $\wp_2(z)$ is an even function \eqref{eq:parity}, we can rewrite the double sum in the first line of \eqref{eq:UwedgeTUx2} according to
\begin{equation}
\begin{aligned}
&\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)\big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big) \\
&= \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(r_j+r_k)\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)\big(A_{r_j}(x-a_j)-A_{r_k}(x-a_k)\big) \\
&= \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(r_j+r_k)\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)A_{r_j}(x-a_j).
\end{aligned}
\end{equation}
Hence, inserting this and swapping some indices $j\leftrightarrow k$ (using the antisymmetry of $\wedge$ and the fact that $\zeta_2(z)$ is an odd function \eqref{eq:parity}) in \eqref{eq:UwedgeTUx2}, we obtain
\begin{align}\label{eq:UwedgeTUx3}
\mathbf{U}\wedgecirc\mathcal{T}\mathbf{U}_x=&\; \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \boldsymbol{\phi}\wedge\mathbf{s}_j A_{r_j}'(x-a_j) -{\rm i} \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j+r_k)\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)A_{r_j}(x-a_j) \nonumber \\
&\; -{\rm i}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_k\mathbf{s}_j\wedge\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k)A_{r_j}(x-a_j)-\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_k\mathbf{s}_j\wedge\mathbf{s}_k F_{r_j}'(x-a_j) \nonumber \\
&\; +\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k f_{2}'(\tilde{a}_j-\tilde{a}_k)E,
\end{align}
which may be rearranged to
\begin{align}\label{eq:UwedgeTUx4}
\mathbf{U}\wedgecirc\mathcal{T}\mathbf{U}_x=&\; \frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j\mathbf{s}_j\wedge\mathbf{s}_k f_{2}'(\tilde{a}_j-\tilde{a}_k)E -{\rm i} \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j+r_k)\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)A_{r_j}(x-a_j) \nonumber \\
&\; - \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \mathbf{s}_j \wedge \Bigg(\boldsymbol{\phi} + {\rm i} \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k \mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k)\Bigg) A_{r_j}'(x-a_j) -\frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\mathbf{s}_j\wedge \Bigg(\sideset{}{'}\sum_{k=1}^{{\mathcal N}}r_k\mathbf{s}_k\Bigg) F_{r_j}'(x-a_j),
\end{align}
where we have used $\mathbf{s}_k\wedge\mathbf{s}_k=\mathbf{0}$ to rewrite the final sum.
Inserting \eqref{eq:Ut} and \eqref{eq:UwedgeTUx4} into \eqref{eq:ncIHF2} and using the linear independence of $E$, $\{A_{r_j}(x-a_j)\}_{j=1}^{{\mathcal N}}$, $\{A_{r_j}'(x-a_j)\}_{j=1}^{{\mathcal N}}$, and $\{F_{r_j}'(x-a_j)\}_{j=1}^{{\mathcal N}}$ as a consequence of \eqref{eq:imaj}--\eqref{eq:ajakbjbk}, we obtain the equations of motion
\begin{align}
\mspace{4mu}\dot{\mspace{-4mu}\bphi} =&\; \frac{{\rm i}}{4}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j+r_k)\mathbf{s}_j\wedge\mathbf{s}_k f_2'(\tilde{a}_j-\tilde{a}_k), \label{eq:phidotSH}\\
\dot{\mathbf{s}}_j=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k)\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k) \quad (j=1,\ldots,{\mathcal N}), \label{eq:sjdotSH}\\
\dot{a}_j \mathbf{s}_j=&\; -r_j \mathbf{s}_j \wedge \Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k \mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k)\Bigg) \quad (j=1,\ldots,{\mathcal N}) \label{eq:ajdotSH}
\end{align}
together with the requirement that the coefficients of $F_{r_j}'(x-a_j)$ vanish, which is guaranteed by the constraint \eqref{eq:constraint3SH}. Equations \eqref{eq:phidotSH}--\eqref{eq:ajdotSH} are equivalent to \eqref{eq:phidot}, \eqref{eq:sCMs} and \eqref{eq:sCMt}, and \eqref{eq:ajdot}, respectively, via the notation \eqref{eq:shorthand} and \eqref{eq:at}. To prove the proposition, it remains to show that if \eqref{eq:constraint3SH}, equivalent to \eqref{eq:constraint3}, is satisfied at $t=0$, it holds on $[0,T)$ when the variables $\{a_j,\mathbf{s}_j\}_{j=1}^{{\mathcal N}}$ evolve according to \eqref{eq:phidotSH}--\eqref{eq:ajdotSH}. We prove the following stronger result.
\begin{lemma}\label{lem:totalspin}
The total spins
\begin{equation}\label{eq:totalspin}
\mathbf{S}\coloneqq \sideset{}{'}\sum_{j=1}^N \mathbf{s}_j, \qquad \mathbf{T} \coloneqq \sideset{}{'}\sum_{j=1}^M \mathbf{t}_j
\end{equation}
are conserved by the equations of motion \eqref{eq:sCMs} and \eqref{eq:sCMt}.
\end{lemma}
\begin{proof}
We differentiate $\mathbf{S}$ in \eqref{eq:totalspin} with respect to $t$ and insert \eqref{eq:sCMs} to find
\begin{align*}
\mathbf{S}_t=\sideset{}{'}\sum_{j=1}^{N} \dot{\mathbf{s}}_j=-2\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\wedge \mathbf{s}_k \wp_2(a_j-a_k).
\end{align*}
The sum vanishes because $\wp_2(z)$ is an even function \eqref{eq:parity} and hence the summand is antisymmetric under the interchange of $j$ and $k$. The proof for $\mathbf{T}$ is similar.
\end{proof}
Because $\mathbf{S}$ and $\mathbf{T}$ are conserved quantities, so is their difference and hence, \eqref{eq:constraint3} holds on $[0,T)$ if it is satisfied at $t=0$.
\end{proof}
\section{Conserved quantities}\label{sec:conserved}
This section is devoted to proving that the constraints in Proposition~\ref{prop:constraints} correspond to conserved quantities of the ODE system of Proposition~\ref{prop:firstorder}, i.e., if the constraints are satisfied at $t=0$, as required by Theorem~\ref{thm:main}, they hold at all future times. We note that the constancy of the total spins appearing in the constraint \eqref{eq:constraint3} was already proved in Lemma~\ref{lem:totalspin}.
We prove the following.
\begin{proposition}\label{prop:conserved}
Under the assumptions of Proposition~\ref{prop:firstorder}, the following quantities are conserved:
\begin{align}
&P_j\coloneqq \mathbf{s}_j^2 \quad (j=1,\ldots,N),\qquad P_{N+j}\coloneqq \mathbf{t}_j^2 \quad (j=1,\ldots,M), \label{eq:Pj}\\
\begin{split}\label{eq:Qj}
&Q_j\coloneqq\mathbf{s}_j\cdot\Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^{N}\mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^M \mathbf{t}_k\zeta_2(a_j-b_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,N),\\
&Q_{N+j}\coloneqq \mathbf{t}_j\cdot\Bigg({\rm i}\boldsymbol{\phi}+\sideset{}{'}\sum_{k\neq j}^{M} \mathbf{t}_k\zeta_2(b_j-b_k)-\sideset{}{'}\sum_{k=1}^N \mathbf{s}_k\zeta_2(b_j-a_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,M),
\end{split} \\
&R\coloneqq \boldsymbol{\phi}^2-\frac12\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\cdot\mathbf{s}_k f_2(a_j-a_k)-\frac12\sideset{}{'}\sum_{j=1}^M\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_j\cdot\mathbf{t}_k f_2(b_j-b_k) \nonumber \\
&\phantom{R\coloneqq} +\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k=1}^M \mathbf{s}_j\cdot\mathbf{t}_k f_2(a_j-b_k+{\rm i}\delta). \label{eq:Rj}
\end{align}
\end{proposition}
\subsection{Proof of Proposition~\ref{prop:conserved}}
We prove the proposition in three parts corresponding to the quantities \eqref{eq:Pj}, \eqref{eq:Qj}, and \eqref{eq:Rj}.
\subsubsection{Conservation of $P_j$}
Using the notation \eqref{eq:shorthand}, we write \eqref{eq:Pj} as
\begin{equation}\label{eq:Pj2}
P_j=\mathbf{s}_j^2 \quad (j=1,\ldots,{\mathcal N}).
\end{equation}
Differentiating this with respect to time and inserting \eqref{eq:sjdotSH}, we have
\begin{align}
\dot{P}_{j}= \mathbf{s}_j\cdot\dot{\mathbf{s}}_j =&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(1+r_jr_k) \mathbf{s}_j\cdot (\mathbf{s}_j\wedge\mathbf{s}_k)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(1+r_jr_k) \mathbf{s}_k\cdot (\mathbf{s}_j\wedge\mathbf{s}_j)\wp_2(\tilde{a}_j-\tilde{a}_k)=0,
\end{align}
where we have used the invariance of the vector triple product under cyclic permutations,
\begin{equation}\label{eq:cyclic}
\mathbf{x}\cdot(\mathbf{y}\wedge\mathbf{z})=\mathbf{z}\cdot(\mathbf{x}\wedge\mathbf{y})=\mathbf{y}\cdot(\mathbf{z}\wedge\mathbf{x}).
\end{equation}
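In particular, \eqref{eq:cyclic} implies that $\mathbf{x}\cdot(\mathbf{x}\wedge\mathbf{y})=0$ for all $\mathbf{x},\mathbf{y}\in{\mathbb C}^3$, a fact used repeatedly below.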
\subsubsection{Conservation of $Q_j$}
We write \eqref{eq:Qj} as
\begin{equation}\label{eq:Yj2}
Q_j=\mathbf{s}_j\cdot\mathbf{b}_j \quad (j=1,\ldots,{\mathcal N}),
\end{equation}
where
\begin{equation}\label{eq:bj}
\mathbf{b}_j\coloneqq {\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k\mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k) \quad (j=1,\ldots,{\mathcal N}),
\end{equation}
with the shorthand notation \eqref{eq:shorthand} and \eqref{eq:at}.
Differentiating \eqref{eq:Yj2} with respect to time and inserting \eqref{eq:sjdotSH}--\eqref{eq:ajdotSH}, we find
\begin{align}\label{eq:Qjdot1}
\dot{Q}_{j}=&\; \dot{\mathbf{s}}_j\cdot\mathbf{b}_j+\mathbf{s}_j\cdot \!\!\dot{\,\,\bb}_j \nonumber \\
=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k)(\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{b}_j \wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\;+\mathbf{s}_j\cdot\Bigg({\rm i} \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k \dot{\mathbf{s}}_k\zeta_2(\tilde{a}_j-\tilde{a}_k)+\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k \mathbf{s}_k (\dot{a}_j-\dot{a}_k)\wp_2(\tilde{a}_j-\tilde{a}_k)\Bigg) \nonumber \\
=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k)(\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{b}_j \wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\;+{\rm i} \mathbf{s}_j\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_k(1+r_kr_l) \mathbf{s}_j\cdot (\mathbf{s}_k\wedge\mathbf{s}_l) \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k(-r_j (\mathbf{s}_j\wedge\mathbf{b}_j)\cdot{\mathbf{s}_k}+r_k(\mathbf{s}_k\wedge\mathbf{b}_k)\cdot\mathbf{s}_j) \wp_2(\tilde{a}_j-\tilde{a}_k) .
\end{align}
We use \eqref{eq:cyclic} again to reorder triple products,
\begin{align}\label{eq:Qjdot2}
\dot{Q}_j =&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \big((1+r_jr_k) (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{b}_j+r_jr_k (\mathbf{s}_j\wedge\mathbf{b}_j)\cdot\mathbf{s}_k-(\mathbf{s}_k\wedge\mathbf{b}_k)\cdot\mathbf{s}_j\big)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber\\
&\; +{\rm i} \mathbf{s}_j\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot(\mathbf{b}_j-\mathbf{b}_k)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; +{\rm i}\mathbf{s}_j\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l).
\end{align}
To proceed, we rewrite the quantity $(\mathbf{b}_j-\mathbf{b}_k)\wp_2(\tilde{a}_j-\tilde{a}_k)$ in a convenient way. By the definition of $\mathbf{b}_j$ \eqref{eq:bj},
\begin{align}\label{eq:bjbk1}
(\mathbf{b}_j-\mathbf{b}_k)\wp_2(\tilde{a}_j-\tilde{a}_k)=&\; -\Bigg(\sideset{}{'}\sum_{l\neq j}^{{\mathcal N}} r_l \mathbf{s}_l \zeta_2(\tilde{a}_j-\tilde{a}_l)- \sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_l \mathbf{s}_l \zeta_2(\tilde{a}_k-\tilde{a}_l)\Bigg)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; -(r_k\mathbf{s}_k+r_j\mathbf{s}_j)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; -\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l \big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big)\wp_2(\tilde{a}_j-\tilde{a}_k),
\end{align}
where we have used the fact that $\zeta_2(z)$ is an odd function \eqref{eq:parity} in the second step. To proceed, we use the identities
\begin{equation}\label{eq:EllipticId1}
\zeta_2(z)\wp_2(z)=-\frac12\big(\wp_2'(z)+f_2'(z)\big)
\end{equation}
and
\begin{align}\label{eq:EllipticId2}
\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big)\wp_2(\tilde{a}_j-\tilde{a}_k)=&\; -\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; -\frac12\big(f_2'(\tilde{a}_j-\tilde{a}_k)-f_2'(\tilde{a}_k-\tilde{a}_l)\big),
\end{align}
The first identity \eqref{eq:EllipticId1} is obtained by differentiating \eqref{eq:IdV} with respect to $z$ and the second identity \eqref{eq:EllipticId2} is obtained by differentiating \eqref{eq:Idmain} with respect to $a$ and setting $z=\tilde{a}_j$, $a=\tilde{a}_k$, and $b=\tilde{a}_l$.
Inserting \eqref{eq:EllipticId1} and \eqref{eq:EllipticId2} into \eqref{eq:bjbk1} and simplifying gives
\begin{align}\label{eq:bjbk2}
& (\mathbf{b}_j-\mathbf{b}_k)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&= -\frac12(r_k\mathbf{s}_k+r_j\mathbf{s}_j)\wp_2'(\tilde{a}_j-\tilde{a}_k) -\frac12(r_k\mathbf{s}_k+r_j\mathbf{s}_j)f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
& \phantom{=\;}-\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l \big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
& \phantom{=\;}-\frac12\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l f_2'(\tilde{a}_j-\tilde{a}_k)-\frac12\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l\mathbf{s}_l f_2'(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&= -\frac12(r_k\mathbf{s}_k+r_j\mathbf{s}_j)\wp_2'(\tilde{a}_j-\tilde{a}_k)-\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l \big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
& \phantom{=\;}-\frac12\sideset{}{'}\sum_{l=1}^{{\mathcal N}} r_l \mathbf{s}_l f_2'(\tilde{a}_j-\tilde{a}_k)-\frac12\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l f_2'(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&= -\frac12(r_k\mathbf{s}_k+r_j\mathbf{s}_j)\wp_2'(\tilde{a}_j-\tilde{a}_k) -\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l \big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
& \phantom{=\;}-\frac12\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l\mathbf{s}_l f_2'(\tilde{a}_k-\tilde{a}_l),
\end{align}
where we have used Lemma~\ref{lem:totalspin} in the final step to replace $\sideset{}{'}\sum_{l=1}^{{\mathcal N}}r_l\mathbf{s}_l$ by $\boldsymbol{0}$.
Inserting \eqref{eq:bjbk2} into \eqref{eq:Qjdot2} gives
\begin{align}\label{eq:Qjdot3}
\dot{Q}_j=&\; \frac12 \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_k\mathbf{s}_k+r_j\mathbf{s}_j)\cdot (\mathbf{s}_j\wedge\mathbf{s}_k)\wp_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; +\frac12\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; +{\rm i}\mathbf{s}_j\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l).
\end{align}
The sum in the first line of \eqref{eq:Qjdot3} vanishes as a consequence of \eqref{eq:cyclic}. The double sum in the second line of \eqref{eq:Qjdot3} may be symmetrized,
\begin{multline}\label{eq:Qjdotsum1}
\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l \mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
=\frac12\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l),
\end{multline}
(using \eqref{eq:cyclic}, the antisymmetry of $\wedge$, and the fact that $\wp_2(z)$ is an even function \eqref{eq:parity}) and the final sum in \eqref{eq:Qjdot3} may be rewritten as
\begin{multline}\label{eq:Qjdotsum2}
\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
= \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(r_k+r_j)\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_j)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j) +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
=\frac12\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (r_k+r_l) \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l),
\end{multline}
where we have used $\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_j)=0$ and, similarly as in \eqref{eq:Qjdotsum1}, symmetrized the double sum in the final step. Hence, using \eqref{eq:cyclic}, we see that \eqref{eq:Qjdotsum1} and \eqref{eq:Qjdotsum2} are equal, leading to cancellation in \eqref{eq:Qjdot3}. We are left with
\begin{align}\label{eq:Qjdot4}
\dot{Q}_{j}=&\; \frac12\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_l\mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_k-\tilde{a}_l)+{\rm i} \mathbf{s}_j\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} \nonumber\\
=&\; \frac14\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (r_k+r_l)\mathbf{s}_l\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_k-\tilde{a}_l)-\frac14\sideset{}{'}\sum_{k=1}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_k+r_l)\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)f_2'(\tilde{a}_k-\tilde{a}_l),
\end{align}
where we have symmetrized the double sum (using \eqref{eq:cyclic}, the antisymmetry of $\wedge$, and the fact that $f_2'(z)$ is an odd function \eqref{eq:parity}) and inserted \eqref{eq:phidot} in the second step. Noting that all terms proportional to $\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)$ with $j=k$ and $j=l$ are zero in the second double sum in \eqref{eq:Qjdot4} and using \eqref{eq:cyclic}, we see that $\dot{Q}_j=0$.
\subsubsection{Conservation of $R$}
We write
\begin{equation}\label{eq:Rsum}
R=R^{(1)}+R^{(2)},
\end{equation}
where
\begin{equation}\label{eq:R1R2}
R^{(1)}\coloneqq \boldsymbol{\phi}^2,\qquad R^{(2)}\coloneqq -\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\cdot\mathbf{s}_k f_2(\tilde{a}_j-\tilde{a}_k).
\end{equation}
By differentiating $R^{(1)}$ with respect to $t$ and inserting \eqref{eq:phidotSH} and
\begin{equation}
\boldsymbol{\phi}=-{\rm i} \mathbf{b}_j-{\rm i}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_k \mathbf{s}_k \zeta_2(\tilde{a}_j-\tilde{a}_k) \quad (j=1,\ldots,{\mathcal N}),
\end{equation}
which follows from \eqref{eq:bj}, we compute
\begin{align}\label{eq:R1dot}
\dot{R}^{(1)}= 2\boldsymbol{\phi}\cdot \mspace{4mu}\dot{\mspace{-4mu}\bphi} =&\; \frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j+r_k)\boldsymbol{\phi}\cdot(\mathbf{s}_j\wedge\mathbf{s}_k )f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; \frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j\Bigg(-{\rm i} \mathbf{b}_k-{\rm i} \sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_l\mathbf{s}_l \zeta_2(\tilde{a}_k-\tilde{a}_l)\Bigg)\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; + \frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k\Bigg(-{\rm i} \mathbf{b}_j-{\rm i} \sideset{}{'}\sum_{l\neq j}^{{\mathcal N}} r_l\mathbf{s}_l \zeta_2(\tilde{a}_j-\tilde{a}_l)\Bigg)\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; \frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j\mathbf{b}_k+r_k\mathbf{b}_j)\cdot(\mathbf{s}_j\wedge\mathbf{s}_k)f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber\\
&\; +\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} \mathbf{s}_j \cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\big(r_jr_l\zeta_2(\tilde{a}_k-\tilde{a}_l)+r_kr_l\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)f_2'(\tilde{a}_j-\tilde{a}_k),
\end{align}
where we have used \eqref{eq:cyclic} in the last step.
By differentiating $R^{(2)}$ with respect to $t$, using $\dot{\tilde{a}}_j=\dot{a}_j$ for $j=1,\ldots,{\mathcal N}$, and inserting \eqref{eq:sjdotSH} and \eqref{eq:ajdotSH} in the form
\begin{equation}
\dot{a}_j\mathbf{s}_j=-r_j\mathbf{s}_j\wedge\mathbf{b}_j \quad (j=1,\ldots,{\mathcal N}),
\end{equation}
we compute
\begin{align}\label{eq:R2dot}
\dot{R}^{(2)}=&\; -\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\big((\dot{\mathbf{s}}_j\cdot\mathbf{s}_k+\mathbf{s}_j\cdot\dot{\mathbf{s}}_k)f_2(\tilde{a}_j-\tilde{a}_k)+\mathbf{s}_j\cdot\mathbf{s}_k(\dot{a}_j-\dot{a}_k)f_2'(\tilde{a}_j-\tilde{a}_k)\big) \nonumber \\
=&\; \frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \Bigg\{ \sideset{}{'}\sum_{l\neq j}^{{\mathcal N}} (r_jr_k+r_kr_l) (\mathbf{s}_j\wedge\mathbf{s}_l)\cdot\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_l) \nonumber \\
&\;\phantom{\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \Bigg(}+\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} (r_jr_k+r_jr_l)\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\wp_2(\tilde{a}_k-\tilde{a}_l)\Bigg\} f_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; +\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}} \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \big(r_k(\mathbf{s}_j\wedge\mathbf{b}_j)\cdot\mathbf{s}_k-r_j\mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{b}_k)\big) f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; - \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{s}_l \big((r_jr_k+r_kr_l)\wp_2(\tilde{a}_j-\tilde{a}_l)-(r_jr_k+r_jr_l)\wp_2(\tilde{a}_k-\tilde{a}_l)\big)f_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; -\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot(r_k\mathbf{b}_j+r_j\mathbf{b}_k)f_2'(\tilde{a}_j-\tilde{a}_k),
\end{align}
where we have again used \eqref{eq:cyclic} in the last step. Hence, differentiating \eqref{eq:Rsum} with respect to $t$ and inserting \eqref{eq:R1dot} and \eqref{eq:R2dot}, we see that the terms in $\mathbf{b}_j$, $\mathbf{b}_k$ cancel and we are left with
\begin{align}\label{eq:Rdot1}
\dot{R}=\dot{R}^{(1)}+\dot{R}^{(2)}=&\; \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} \mathbf{s}_j \cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\big(r_jr_l\zeta_2(\tilde{a}_k-\tilde{a}_l)+r_kr_l\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; - \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{s}_l \big\{(r_jr_k+r_kr_l)\wp_2(\tilde{a}_j-\tilde{a}_l) \nonumber \\
&\; \phantom{- \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} (\mathbf{s}_j\wedge\mathbf{s}_k)\cdot\mathbf{s}_l \big(}-(r_jr_k+r_jr_l)\wp_2(\tilde{a}_k-\tilde{a}_l)\big\}f_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
=&\; \frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} \mathbf{s}_j \cdot(\mathbf{s}_k\wedge\mathbf{s}_l) \big\{-r_jr_l\partial_{\tilde{a}_k}\big( \zeta_2(\tilde{a}_k-\tilde{a}_l)f_2(\tilde{a}_j-\tilde{a}_k) \big) \nonumber \\
&\; \phantom{\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} \mathbf{s}_j \cdot(\mathbf{s}_k\wedge\mathbf{s}_l) \big\{} +r_kr_l \partial_{\tilde{a}_j}\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)f_2(\tilde{a}_j-\tilde{a}_k)\big) \nonumber \\
&\; \phantom{\frac12\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} \mathbf{s}_j \cdot(\mathbf{s}_k\wedge\mathbf{s}_l) \big\{} -r_jr_k\partial_{\tilde{a}_l}\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big)f_2(\tilde{a}_j-\tilde{a}_k)\big\}.
\end{align}
By permuting indices $k\leftrightarrow l$ and $j\leftrightarrow l$ in the first and second terms in the summand, respectively, we find that
\begin{equation}\label{eq:Rdot2}
\dot{R}= -\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_jr_k \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\partial_{\tilde{a}_l} g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l),
\end{equation}
where
\begin{align}\label{eq:g}
g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)\coloneqq &\; \zeta_2(\tilde{a}_k-\tilde{a}_l)f_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_j-\tilde{a}_l)f_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; +\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big)f_2(\tilde{a}_j-\tilde{a}_k).
\end{align}
The function $g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$ is a meromorphic function of $\tilde{a}_l$ with no poles in the parallelogram $\Pi$ defined by vertices at $(0,0)$, $(2\ell,0)$, $(0,-2{\rm i}\delta)$, and $(2\ell,-2{\rm i}\delta)$. It follows from \eqref{eq:realperiod}--\eqref{eq:imperiod} that $g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l+2{\rm i}\delta)=g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$ and
\begin{equation}\label{eq:gperiod}
g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l+2\ell)=g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)-\frac{\pi}{\delta}\big(f_2(\tilde{a}_j-\tilde{a}_l)-f_2(\tilde{a}_k-\tilde{a}_l)\big)+\bigg(\frac{\pi}{\delta}\bigg)^2\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big).
\end{equation}
By adding certain terms to $g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$, we obtain a function $\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$ which is doubly-periodic with respect to $\tilde{a}_l$ and has no poles for $\tilde{a}_l\in \Pi$ and thus, by Liouville's theorem, is a constant function of $\tilde{a}_l$. Let
\begin{equation}\label{eq:gt}
\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)\coloneqq g(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)-h(\tilde{a}_j-\tilde{a}_l)+h(\tilde{a}_k-\tilde{a}_l)
\end{equation}
where
\begin{equation}\label{eq:h}
h(z)\coloneqq \zeta_2(z)\big(f_2(z)-f_2(0)\big)-\frac23\bigg(\zeta_2(z)^3+3\frac{\zeta({\rm i}\delta)}{{\rm i}\delta}\zeta_2(z)+\frac12\wp_2'(z)\bigg).
\end{equation}
The function $h(z)$ is seen to be regular at $z=0$ using the Laurent series
\begin{equation}\label{eq:laurent}
\zeta_2(z)=\frac{1}{z}-\frac{\zeta({\rm i}\delta)}{{\rm i}\delta} z+O(z^3),\qquad \wp_2(z)=\frac{1}{z^2}-\frac{\zeta({\rm i}\delta)}{{\rm i}\delta}+O(z^2)
\end{equation}
as $z\to 0$; the series in \eqref{eq:laurent} follow from those for $\zeta(z)$ and $\wp(z)$ \cite[Chapter~23.9]{DLMF} and the definitions of $\zeta_2(z)$ \eqref{eq:zeta2} and $\wp_2(z)$ \eqref{eq:wp2}. Hence the functions $h(\tilde{a}_j-\tilde{a}_l)$ and $h(\tilde{a}_k-\tilde{a}_l)$ are regular for $\tilde{a}_l\in\Pi$. Moreover, $h(z)$ is $2{\rm i}\delta$-periodic and satisfies the identity
\begin{align}\label{eq:hperiod}
h(\tilde{a}_j-\tilde{a}_l-2\ell)-h(\tilde{a}_k-\tilde{a}_l-2\ell)=&\; h(\tilde{a}_j-\tilde{a}_k)-h(\tilde{a}_k-\tilde{a}_l)-\frac{\pi}{\delta}\big(f_2(\tilde{a}_j-\tilde{a}_l)-f_2(\tilde{a}_k-\tilde{a}_l)\big) \nonumber \\
&\;+\bigg(\frac{\pi}{\delta}\bigg)^2\big(\zeta_2(\tilde{a}_j-\tilde{a}_l)-\zeta_2(\tilde{a}_k-\tilde{a}_l)\big),
\end{align}
by \eqref{eq:h} with \eqref{eq:realperiod}--\eqref{eq:imperiod}.
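To make the regularity of $h(z)$ at $z=0$ explicit (a short check using only \eqref{eq:laurent} and the oddness of $f_2'(z)$ \eqref{eq:parity}, which gives $f_2(z)-f_2(0)=O(z^2)$), write $\eta\coloneqq \zeta({\rm i}\delta)/({\rm i}\delta)$; then \eqref{eq:laurent} yields $\zeta_2(z)^3=z^{-3}-3\eta z^{-1}+O(z)$ and $\tfrac12\wp_2'(z)=-z^{-3}+O(z)$, so that
\begin{equation*}
\zeta_2(z)^3+3\eta\,\zeta_2(z)+\frac12\wp_2'(z)=O(z),\qquad \zeta_2(z)\big(f_2(z)-f_2(0)\big)=O(z),
\end{equation*}
and hence $h(z)=O(z)$ as $z\to 0$.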
The function $\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$ \eqref{eq:gt} is thus analytic in $\tilde{a}_l$ when $\tilde{a}_l\in\Pi$. Using \eqref{eq:gperiod} and \eqref{eq:hperiod}, it follows that $\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l+2\ell)=\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l+2{\rm i}\delta)=\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$. Hence $\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)$ is constant with respect to $\tilde{a}_l$.
Inserting \eqref{eq:gt} into \eqref{eq:Rdot2} gives
\begin{align}
\dot{R}=&\; -\frac12 \sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_jr_k \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\partial_{\tilde{a}_l}\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l) \nonumber \\
&\; -\frac{1}{2}\sideset{}{'}\sum_{j=1}^{{\mathcal N}}\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_jr_k \mathbf{s}_j\cdot(\mathbf{s}_k\wedge\mathbf{s}_l)\partial_{\tilde{a}_l}\big(h(\tilde{a}_j-\tilde{a}_l)-h(\tilde{a}_k-\tilde{a}_l) \big).
\end{align}
The sum in the first line vanishes because $\partial_{\tilde{a}_l}\tilde{g}(\tilde{a}_j,\tilde{a}_k,\tilde{a}_l)=0$. The second sum vanishes by \eqref{eq:constraint3SH} and \eqref{eq:constraint1SH}. We conclude that $\dot{R}=0$.
\section{B\"{a}cklund transformation}\label{sec:Backlund}
We prove the Bäcklund transformation between the elliptic spin CM systems \eqref{eq:sCM1} and \eqref{eq:sCM2} stated in Theorem~\ref{thm:backlund} in Section~\ref{subsec:Backlundproof}. Building on this result and using the results of Sections~\ref{sec:solitons} and \ref{sec:conserved}, we prove Theorem~\ref{thm:main} in Section~\ref{subsec:mainproof}.
\subsection{Proof of Theorem~\ref{thm:backlund}}\label{subsec:Backlundproof}
This proof consists in deriving the second-order equations
\begin{equation}\label{eq:ajddotSH}
\ddot{a}_j=-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}(1+r_jr_k) \mathbf{s}_j\cdot\mathbf{s}_k \wp_2'(a_j-a_k) \quad (j=1,\ldots,{\mathcal N}),
\end{equation}
equivalent to \eqref{eq:sCMa}, \eqref{eq:sCMb}, via the notation \eqref{eq:shorthand}, as a consequence of the first-order equations \eqref{eq:phidot}, \eqref{eq:sCMs}, \eqref{eq:sCMt}, and \eqref{eq:ajdot} in the form \eqref{eq:phidotSH}, \eqref{eq:sjdotSH}, and \eqref{eq:ajdotSH}. We use that the constraints \eqref{eq:constraint1}--\eqref{eq:constraint3} hold on $[0,T)$ by Proposition~\ref{prop:conserved} and Lemma~\ref{lem:totalspin}.
Recalling the definition of $\mathbf{b}_j$ \eqref{eq:bj}, \eqref{eq:ajdotSH} can be written as
\begin{equation}\label{eq:ajdotSH2}
\dot{a}_j \mathbf{s}_j= - r_j \mathbf{s}_j \wedge \mathbf{b}_j \quad (j=1,\ldots,{\mathcal N}).
\end{equation}
Differentiating \eqref{eq:ajdotSH2} with respect to $t$ and rearranging gives
\begin{equation}\label{eq:ajddot1}
\ddot{a}_j \mathbf{s}_j = -\dot{a}_j\dot{\mathbf{s}}_j - r_j \dot{\mathbf{s}}_j\wedge\mathbf{b}_j-r_j\mathbf{s}_j\wedge \!\!\dot{\,\,\bb}_j.
\end{equation}
We compute the terms on the right hand side of \eqref{eq:ajddot1}. Using \eqref{eq:sjdotSH} and then \eqref{eq:ajdotSH},
\begin{align}\label{eq:ajddot2}
-\dot{a}_j\dot{\mathbf{s}}_j-r_j \dot{\mathbf{s}}_j\wedge\mathbf{b}_j= &\; \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) \dot{a}_j \mathbf{s}_j \wedge\mathbf{s}_k \wp_2(a_j-a_k) + \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j (1+r_j r_k) (\mathbf{s}_j\wedge\mathbf{s}_k)\wedge\mathbf{b}_j \wp_2(a_j-a_k) \nonumber \\
=&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j(1+r_jr_k)(\mathbf{s}_j\wedge\mathbf{b}_j)\wedge\mathbf{s}_k \wp_2(a_j-a_k) \nonumber \\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}r_j (1+ r_jr_k) (\mathbf{s}_j\wedge\mathbf{s}_k)\wedge\mathbf{b}_j \wp_2(a_j-a_k).
\end{align}
To simplify, we use $r_j^2=1$, the standard vector identities
\begin{equation}\label{eq:VecId}
(\mathbf{x}\wedge\mathbf{y})\wedge\mathbf{z}=-(\mathbf{y}\cdot\mathbf{z})\mathbf{x}+(\mathbf{x}\cdot\mathbf{z})\mathbf{y},\qquad \mathbf{x}\wedge(\mathbf{y}\wedge\mathbf{z})=(\mathbf{x}\cdot\mathbf{z})\mathbf{y}-(\mathbf{x}\cdot\mathbf{y})\mathbf{z},
\end{equation}
and \eqref{eq:constraint2} in the form $\mathbf{b}_j\cdot\mathbf{s}_j=0$. Hence,
\begin{equation}\label{eq:ajddot4}
-\dot{a}_j\dot{\mathbf{s}}_j-r_j \dot{\mathbf{s}}_j\wedge\mathbf{b}_j =- \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (r_j+r_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_j \wp_2(a_j-a_k).
\end{equation}
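As an aside, the vector identities \eqref{eq:VecId}, which are used repeatedly below, can be spot-checked numerically; the following short Python snippet (an illustration only, not part of the paper) verifies both identities for randomly chosen vectors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))   # three random vectors in R^3

# (x ^ y) ^ z = -(y.z) x + (x.z) y
lhs1 = np.cross(np.cross(x, y), z)
rhs1 = -np.dot(y, z) * x + np.dot(x, z) * y
# x ^ (y ^ z) = (x.z) y - (x.y) z
lhs2 = np.cross(x, np.cross(y, z))
rhs2 = np.dot(x, z) * y - np.dot(x, y) * z

print(np.allclose(lhs1, rhs1), np.allclose(lhs2, rhs2))   # True True
\end{verbatim}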
To compute the remaining term in \eqref{eq:ajddot1}, we first differentiate \eqref{eq:bj} with respect to $t$ to find
\begin{equation}\label{eq:bjdot1}
\!\!\dot{\,\,\bb}_j= {\rm i} \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k\dot{\mathbf{s}}_k\zeta_2(\tilde{a}_j-\tilde{a}_k)+\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_k\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)(\dot{a}_j-\dot{a}_k),
\end{equation}
where we have used that $\dot{\tilde{a}}_j=\dot{a}_j$. Taking the cross product with $-r_j\mathbf{s}_j$ and using \eqref{eq:sjdotSH} gives
\begin{align}\label{eq:bjdot2}
-r_j\mathbf{s}_j\wedge \!\!\dot{\,\,\bb}_j=&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j r_k\mathbf{s}_j\wedge\dot{\mathbf{s}}_k\zeta_2(\tilde{a}_j-\tilde{a}_k)-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\mathbf{s}_j\wedge\mathbf{s}_k \wp_2(\tilde{a}_j-\tilde{a}_k)(\dot{a}_j-\dot{a}_k) \nonumber \\
=&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_j (r_k+r_l)\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_jr_k\big((\dot{a}_j\mathbf{s}_j)\wedge\mathbf{s}_k-\mathbf{s}_j\wedge(\dot{a}_k\mathbf{s}_k)\big) \wp_2(\tilde{a}_j-\tilde{a}_k).
\end{align}
The double sum in \eqref{eq:bjdot2} can be rewritten as
\begin{multline}\label{eq:firstsum1}
\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_j (r_k+r_l)\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
= \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} r_j(r_k+r_j) \mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_j)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j) +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
\begin{aligned}= &\; \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) \mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_j)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_j-\tilde{a}_k)\\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l),
\end{aligned}
\end{multline}
using $r_j^2=1$ and the parity properties of $\wp_2(z)$ and $\zeta_2(z)$ \eqref{eq:parity} in the second step. Then, the second identity in \eqref{eq:VecId} and the constraint \eqref{eq:constraint1} yield
\begin{multline}\label{eq:firstsum2}
\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_j (r_k+r_l)\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l)\zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_l) \\
\begin{aligned}= &\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j)\\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l \big)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l).
\end{aligned}
\end{multline}
We simplify the remaining sum in \eqref{eq:bjdot2} using \eqref{eq:ajdotSH}, $r_j^2=r_k^2=1$, and \eqref{eq:VecId}:
\begin{align}\label{eq:secondsum}
r_jr_k\big((\dot{a}_j\mathbf{s}_j)\wedge\mathbf{s}_k-\mathbf{s}_j\wedge(\dot{a}_k\mathbf{s}_k)\big)=&\; -r_k(\mathbf{s}_j\wedge\mathbf{b}_j)\wedge\mathbf{s}_k+r_j\mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{b}_k) \nonumber \\
=&\; r_k(\mathbf{b}_j\cdot\mathbf{s}_k)\mathbf{s}_j-r_k(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_j +r_j(\mathbf{s}_j\cdot\mathbf{b}_k)\mathbf{s}_k-r_j(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_k.
\end{align}
By using \eqref{eq:firstsum2} and \eqref{eq:secondsum} in \eqref{eq:bjdot2}, we arrive at
\begin{align}\label{eq:bjdot3}
-r_j\mathbf{s}_j\wedge \!\!\dot{\,\,\bb}_j=&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l \big)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \big(r_k(\mathbf{b}_j\cdot\mathbf{s}_k)\mathbf{s}_j-r_k(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_j +r_j(\mathbf{s}_j\cdot\mathbf{b}_k)\mathbf{s}_k-r_j(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_k \big) \wp_2(\tilde{a}_j-\tilde{a}_k).
\end{align}
Then inserting \eqref{eq:ajddot4} and \eqref{eq:bjdot3} into \eqref{eq:ajddot1}, we get
\begin{align}\label{eq:ajddot5}
\ddot{a}_j \mathbf{s}_j =&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l \big)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \big(r_k(\mathbf{b}_j\cdot\mathbf{s}_k)\mathbf{s}_j+r_j(\mathbf{s}_j\cdot\mathbf{b}_k)\mathbf{s}_k+r_j(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_j-r_j(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{b}_k \big) \wp_2(\tilde{a}_j-\tilde{a}_k)\nonumber \\
=&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_k-\tilde{a}_j) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j (r_k+r_l)\big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l \big)\big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \big(r_k((\mathbf{b}_j-\mathbf{b}_k)\cdot\mathbf{s}_k)\mathbf{s}_j-r_j(\mathbf{s}_j\cdot(\mathbf{b}_j-\mathbf{b}_k))\mathbf{s}_k+r_j(\mathbf{s}_j\cdot\mathbf{s}_k)(\mathbf{b}_j-\mathbf{b}_k) \big) \wp_2(\tilde{a}_j-\tilde{a}_k),
\end{align}
using \eqref{eq:constraint2} in the form $\mathbf{s}_j\cdot\mathbf{b}_j=\mathbf{s}_k\cdot\mathbf{b}_k=0$ in the second step.
The remainder of the proof consists of using known expressions for $(\mathbf{b}_j-\mathbf{b}_k)\wp_2(\tilde{a}_j-\tilde{a}_k)$ \eqref{eq:bjbk2} and $ \mspace{4mu}\dot{\mspace{-4mu}\bphi} $ \eqref{eq:phidotSH} and elliptic function identities to show that \eqref{eq:ajddot5} becomes \eqref{eq:ajddotSH}.
We insert \eqref{eq:bjbk2} (which was derived under the assumption \eqref{eq:constraint3}) into \eqref{eq:ajddot5} to obtain, after combining some terms,
\begin{align}\label{eq:ajddot6}
\ddot{a}_j\mathbf{s}_j =&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \zeta_2(\tilde{a}_j-\tilde{a}_k)\wp_2(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; -\frac12 \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \wp_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; +\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_k r_l (\mathbf{s}_l\cdot\mathbf{s}_k)\mathbf{s}_j \big(\zeta_2(\tilde{a}_j-\tilde{a}_k)-\zeta_2(\tilde{a}_j-\tilde{a}_l)\big)\wp_2(\tilde{a}_k-\tilde{a}_l) \nonumber \\
&\; +\frac12 \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} \sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}}r_k r_l(\mathbf{s}_l \cdot\mathbf{s}_k)\mathbf{s}_j f_2'(\tilde{a}_k-\tilde{a}_l) -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_jr_l \big((\mathbf{s}_j\cdot \mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l \big) f_2'(\tilde{a}_k-\tilde{a}_l).
\end{align}
Since $\wp_2(z)$ is an even function \eqref{eq:parity}, the double sum in the third line of \eqref{eq:ajddot6} is antisymmetric under the interchange of $k$ and $l$ and hence vanishes. The first double sum in the fourth line of \eqref{eq:ajddot6} similarly vanishes by symmetry, because $f_2'(z)$ is an odd function \eqref{eq:parity}. We again use the identity \eqref{eq:EllipticId1}, leading to, after some rearrangement,
\begin{align}\label{eq:ajddot7}
\ddot{a}_j\mathbf{s}_j =&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \wp_2'(\tilde{a}_j-\tilde{a}_k) -\frac12 \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k)(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j f_2'(\tilde{a}_j-\tilde{a}_k) \nonumber \\
&\; -\frac12 \sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j r_l \big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l\big)f_2'(\tilde{a}_k-\tilde{a}_l).
\end{align}
The second identity in \eqref{eq:VecId} and
\begin{multline}
\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq j,k}^{{\mathcal N}} r_j r_l \big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l\big)f_2'(\tilde{a}_k-\tilde{a}_l) \\
=\sideset{}{'}\sum_{k=1}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_j r_l \big((\mathbf{s}_j\cdot\mathbf{s}_l)\mathbf{s}_k-(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_l\big)f_2'(\tilde{a}_k-\tilde{a}_l)-\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k)(\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j f_2'(\tilde{a}_j-\tilde{a}_k)
\end{multline}
lead to
\begin{align}\label{eq:ajddot8}
\ddot{a}_j\mathbf{s}_j =&\; -{\rm i} r_j \mathbf{s}_j\wedge \mspace{4mu}\dot{\mspace{-4mu}\bphi} -\sideset{}{'}\sum_{k\neq j}^{{\mathcal N}} (1+r_jr_k) (\mathbf{s}_j\cdot\mathbf{s}_k)\mathbf{s}_j \wp_2'(\tilde{a}_j-\tilde{a}_k) -\frac12 \sideset{}{'}\sum_{k=1}^{{\mathcal N}}\sideset{}{'}\sum_{l\neq k}^{{\mathcal N}} r_j r_l \mathbf{s}_j\wedge(\mathbf{s}_k\wedge\mathbf{s}_l) f_2'(\tilde{a}_k-\tilde{a}_l).
\end{align}
Symmetrizing the double sum (using the antisymmetry of $\wedge$ and the fact that $f_2'(z)$ is an odd function \eqref{eq:parity}) and inserting \eqref{eq:phidotSH} gives the result \eqref{eq:ajddotSH} after recalling that $\tilde{a}_j-\tilde{a}_k=a_j-a_k$ for $r_j=r_k$.
\subsection{Proof of Theorem~\ref{thm:main}}\label{subsec:mainproof}
We first show that the assumptions of the theorem imply those of Proposition~\ref{prop:firstorder}.
By assumption, $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ in the statement of the theorem constitute a solution of the following initial value problem (IVP) for some choice of initial conditions.
\begin{IVP}\label{IVP1}
Find $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ such that
\begin{itemize}
\item \eqref{eq:sCM1}, \eqref{eq:sCM2}, and \eqref{eq:phidot} hold on a subset of $[0,T)$ containing $t=0$
\item the initial conditions
\begin{equation}\label{eq:picarddata1}
\begin{split}
&a_{j}(0)=a_{j,0},\qquad \mathbf{s}_j(0)=\mathbf{s}_{j,0} \quad (j=1,\ldots,N), \\
&b_j(0)=b_{j,0},\qquad \mathbf{t}_j(0)=\mathbf{t}_{j,0} \quad (j=1,\ldots,M),
\end{split}
\end{equation}
and
\begin{equation}\label{eq:picarddata2}
\dot{a}_j(0)=\dot{a}_{j,0} \quad (j=1,\ldots,N), \qquad \dot{b}_j(0)=\dot{b}_{j,0} \quad (j=1,\ldots,M)
\end{equation}
satisfy \eqref{eq:ajdot} at $t=0$
\end{itemize}
\end{IVP}
Because \eqref{eq:imaj}--\eqref{eq:ajakbjbk} hold, the functions of the form $F\big(\{a_j,\mathbf{s}_j\}_{j=1}^N, \{b_j,\mathbf{t}_j\}_{j=1}^M \big)$ defining the ODEs \eqref{eq:sCM1}, \eqref{eq:sCM2}, and \eqref{eq:phidot} are locally Lipschitz near the solution. Standard uniqueness results on ODEs, for instance \cite[Theorem~4.18]{logemann2014}, thus guarantee that the solution is the unique solution of IVP~\ref{IVP1} on any interval $[0,T')\subseteq [0,T)$.
We will relate our known solution of IVP~\ref{IVP1} to a solution of the first-order equations \eqref{eq:sCMs}, \eqref{eq:sCMt}, \eqref{eq:phidot}, and \eqref{eq:ajdot} in Theorem~\ref{thm:backlund}.
Note that when $\mathbf{s}_j\neq \boldsymbol{0}$ and $\mathbf{t}_j\neq \boldsymbol{0}$, \eqref{eq:ajdot} may be written as
\begin{equation}
\begin{split}\label{eq:ajdotIVP}
\dot{a}_j=&\; -\frac{\mathbf{s}_j}{\mathbf{s}_j\cdot\mathbf{s}_j^*}\wedge \Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^N\mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^M \mathbf{t}_k \zeta_2(a_j-b_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,N), \\
\dot{b}_j=&\; \frac{\mathbf{t}_j}{\mathbf{t}_j\cdot\mathbf{t}_j^*}\wedge \Bigg({\rm i}\boldsymbol{\phi}+\sideset{}{'}\sum_{k\neq j}^M \mathbf{t}_k\zeta_2(b_j-b_k)-\sideset{}{'}\sum_{k=1}^N \mathbf{s}_k \zeta_2(b_j-a_k+{\rm i}\delta)\Bigg) \quad (j=1,\ldots,M).
\end{split}
\end{equation}
We consider the following IVP.
\begin{IVP}\label{IVP2}
Find $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ such that
\begin{itemize}
\item \eqref{eq:sCMs}, \eqref{eq:sCMt}, \eqref{eq:phidot}, and \eqref{eq:ajdotIVP} hold on a subset of $[0,T)$ containing $t=0$
\item the initial conditions \eqref{eq:picarddata1} hold
\end{itemize}
\end{IVP}
By standard arguments (see, for instance, \cite[Chapters~4.2--4.3]{logemann2014}), IVP~\ref{IVP2} has a unique local solution which may be extended to a unique solution on a maximal interval $[0,T')\subseteq [0,T)$ where (i) the functions of the form $F\big(\{a_j,\mathbf{s}_j\}_{j=1}^N, \{b_j,\mathbf{t}_j\}_{j=1}^M \big)$ defining the ODEs \eqref{eq:sCMs}, \eqref{eq:sCMt}, \eqref{eq:phidot}, and \eqref{eq:ajdotIVP} are locally Lipschitz; this is guaranteed when the conditions
\begin{equation}\label{eq:ajbk}
a_j-b_k+{\rm i}\delta \neq 0 \bmod \Lambda \quad (j=1,\ldots,N,k=1,\ldots,M),
\end{equation}
where
\begin{equation}\label{eq:Lambda}
\Lambda\coloneqq \{2n\ell+2m{\rm i}\delta: n,m\in {\mathbb Z}\},
\end{equation}
\eqref{eq:ajakbjbk} ($\mathrm{mod}\,\Lambda$), and \eqref{eq:sjtj} hold and (ii) each function $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ remains finite. In the case where $T'<T$, one of the conditions (i), (ii) must be violated as $t\to T'$.
We denote the maximal solution of IVP~\ref{IVP2} by $\hat{\boldsymbol{\phi}}$, $\{\hat{a}_j,\hat{\mathbf{s}}_j\}_{j=1}^N$, and $\{\hat{b}_j,\hat{\mathbf{t}}_j\}_{j=1}^M$. Theorem~\ref{thm:backlund} shows that this solution of IVP~\ref{IVP2} is also a solution of IVP~\ref{IVP1} on $[0,T')$.
Suppose $T'<T$. We now consider two cases.
In the first case, suppose that each of the quantities $|\hat{a}_j-\hat{b}_k+2n\ell +(2m+1){\rm i}\delta|$, $\hat{\mathbf{s}}_j\cdot\hat{\mathbf{s}}_j^*$, and $\mspace{-1.5mu}\hat{\mspace{1.5mu} \mathbf{t}}_k\cdot\mspace{-1.5mu}\hat{\mspace{1.5mu} \mathbf{t}}_k^*$ (where $j=1,\ldots,N$, $k=1,\ldots,M$, $n,m\in{\mathbb Z}$, and $|\cdot|$ is the modulus) is bounded from below by some $\epsilon>0$ on $[0,T')$. By Theorem~\ref{thm:backlund}, this gives a solution of IVP~\ref{IVP1} on $[0,T')$ that either violates \eqref{eq:ajakbjbk} or becomes unbounded as $t\to T'$, i.e., a maximal solution of IVP~\ref{IVP1} on a proper subinterval of $[0,T)$. This contradicts the fact that the known solution of IVP~\ref{IVP1} exists on all of $[0,T)$.
In the second case, suppose that either \eqref{eq:ajbk} is violated or at least one of the quantities $\hat{\mathbf{s}}_j\cdot\hat{\mathbf{s}}_j^*$ and $\mspace{-1.5mu}\hat{\mspace{1.5mu} \mathbf{t}}_k\cdot\mspace{-1.5mu}\hat{\mspace{1.5mu} \mathbf{t}}_k^*$ tends to $0$ as $t\to T'$. It follows that either \eqref{eq:imaj} or \eqref{eq:sjtj} fails to hold in the limit $t\to T'$. By Theorem~\ref{thm:backlund}, we have a solution of IVP~\ref{IVP1} on $[0,T')$ such that \eqref{eq:imaj} or \eqref{eq:sjtj} is violated as $t\to T'$.
Because the known solution of IVP~\ref{IVP1} is unique and satisfies \eqref{eq:imaj} and \eqref{eq:sjtj} on $[0,T)$, this is a contradiction.
We conclude that $T'=T$ and so IVP~\ref{IVP2} admits a unique maximal solution on $[0,T)$. By Theorem~\ref{thm:backlund} and the uniqueness of the known solution $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ to IVP~\ref{IVP1}, we see that this known solution solves IVP~\ref{IVP2} on $[0,T)$. It follows that the assumptions of Proposition~\ref{prop:firstorder} are satisfied.
We now use Propositions~\ref{prop:constraints}, \ref{prop:firstorder}, and \ref{prop:conserved} to show that under the conditions of the theorem, the ansatz \eqref{eq:ansatz} solves the periodic ncIHF equation \eqref{eq:ncIHF} and each component of this solution has constant length $\rho$.
By Proposition~\ref{prop:firstorder}, \eqref{eq:ansatz} with the solution $\boldsymbol{\phi}$, $\{a_j,\mathbf{s}_j\}_{j=1}^N$, and $\{b_j,\mathbf{t}_j\}_{j=1}^M$ of IVP~\ref{IVP1} solves the periodic ncIHF equation at all $t\in [0,T)$ where the derivatives of $\mathbf{u}$ and $\mathbf{v}$ with respect to $x$ and $t$ exist. It remains to show that this solution satisfies $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=\rho^2$ for all $x\in [-\ell,\ell)$ and $t\in [0,T)$. By Proposition~\ref{prop:constraints}, this holds provided \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} are satisfied. Each of the constraints \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} is satisfied at $t=0$ by assumption. By Lemma~\ref{lem:totalspin} and Proposition~\ref{prop:conserved}, these constraints hold on $[0,T)$ and hence the theorem follows.
\section{Examples of solutions}\label{sec:explicit}
We construct examples of solutions of the periodic ncIHF equation \eqref{eq:ncIHF} using Theorem~\ref{thm:main}. Analogs of one-soliton traveling wave solutions known for the ncIHF equation on the real line \cite{berntsonklabbers2020} are given in Section~\ref{subsec:tw}. We use an elliptic parameterization of $S^2$ to construct a class of real initial data for the periodic ncIHF equation satisfying the constraints of Theorem~\ref{thm:main} in Section~\ref{subsec:parameterization}. The results of Section~\ref{subsec:parameterization} are used to obtain a breather-type solution of the periodic ncIHF equation in Section~\ref{subsec:breather}.
Sections~\ref{subsec:parameterization}--\ref{subsec:breather} are concerned with real-valued solutions
\begin{align}\label{eq:ansatzreal}
\left(\begin{array}{c} \mathbf{u}(x,t) \\ \mathbf{v}(x,t) \end{array}\right)= \boldsymbol{\phi}(t)\left(\begin{array}{c} 1 \\ 1 \end{array}\right) &\; +{\rm i} \sideset{}{'}\sum_{j=1}^N \mathbf{s}_j(t)\left(\begin{array}{c} \zeta_2(x-a_j(t)+{\rm i}\delta/2) \\ \zeta_2(x-a_j(t)-{\rm i}\delta/2) \end{array}\right) \nonumber \\
&\; -{\rm i} \sideset{}{'}\sum_{j=1}^N \mathbf{s}_j^*(t) \left(\begin{array}{c} \zeta_2(x-a_j^*(t)-{\rm i}\delta/2) \\ \zeta_2(x-a_j^*(t)+{\rm i}\delta/2) \end{array}\right),
\end{align}
of the periodic ncIHF equation satisfying $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=1$. Such solutions are characterized by the following consistent reduction of Theorem~\ref{thm:main}, where
\begin{equation}\label{eq:reduction}
N=M, \qquad \rho=1,\qquad \boldsymbol{\phi}^*=\boldsymbol{\phi}, \qquad b_j=a_j^*, \qquad \mathbf{t}_j=\mathbf{s}_j^* \quad (j=1,\ldots,N).
\end{equation}
\begin{corollary}\label{cor:main}
For $N\in {\mathbb Z}_{\geq 1}$ and $T>0$, let $\boldsymbol{\phi}$ and $\{a_j,\mathbf{s}_j\}_{j=1}^N$ be a solution of the equations \eqref{eq:sCM1} and
\begin{equation}\label{eq:phidotreal}
\mspace{4mu}\dot{\mspace{-4mu}\bphi} =\frac{{\rm i}}{2} \sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\wedge\mathbf{s}_k f_2'(a_j-a_k)- \frac{{\rm i}}{2}\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j^*\wedge\mathbf{s}_k^* f_2'(a_j^*-a_k^*)
\end{equation}
on the interval $[0,T)$ with initial conditions that satisfy the following equations at $t=0$,
\begin{align}
&\mathbf{s}_j^2=0, \\
&\mathbf{s}_j\cdot\Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^{N} \mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^N \mathbf{s}_k^* \zeta_2(a_j-a_k^*+{\rm i}\delta)\Bigg)=0, \\
&\mathbf{s}_j \dot{a}_j = -\mathbf{s}_j\wedge\Bigg({\rm i}\boldsymbol{\phi}-\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_k\zeta_2(a_j-a_k)+\sideset{}{'}\sum_{k=1}^N \mathbf{s}_k^* \zeta_2(a_j-a_k^*+{\rm i}\delta)\Bigg) \label{eq:ajdotreal},
\end{align}
for $j=1,\ldots,N$ and
\begin{equation}
\boldsymbol{\phi}^2=1+\frac12 \sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j\cdot\mathbf{s}_k f_2(a_j-a_k)+\frac12\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k\neq j}^N \mathbf{s}_j^*\cdot\mathbf{s}_k^* f_2(a_j^*-a_k^*)-\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k=1}^N \mathbf{s}_j\cdot\mathbf{s}_k^*f_2(a_j-a_k^*+{\rm i}\delta).
\end{equation}
Moreover, suppose that the conditions
\begin{equation}
\frac{\delta}{2}< \mathrm{Im} \,a_j(t)<\frac{3\delta}{2} \quad (j=1,\ldots,N),\qquad a_j(t)\neq a_k(t)\quad (1\leq j<k\leq N)
\end{equation}
hold for $t\in [0,T)$. Then, for all $t\in [0,T)$ such that the functions $\mathbf{u}(x,t)$ and $\mathbf{v}(x,t)$ in \eqref{eq:ansatzreal} are differentiable with respect to $x$ and $t$ for all $x\in [-\ell,\ell)$, \eqref{eq:ansatzreal} provides an exact solution to the periodic ncIHF equation \eqref{eq:ncIHF} satisfying $\mathbf{u}(x,t)^2=\mathbf{v}(x,t)^2=1$.
\end{corollary}
\subsection{Solutions that are sums of traveling waves}\label{subsec:tw}
As mentioned in Remark~\ref{rem:complex}, Theorem~\ref{thm:main} does not include real traveling wave solutions when $N=M=1$. In fact, as we will show, the class of solutions with $N=M=1$ does not contain any traveling waves but instead consists of solutions that are the sums of two traveling waves moving in opposite directions.
When $N=M=1$, the constraint \eqref{eq:constraint3} implies that $\mathbf{s}_1=\mathbf{t}_1$. Consequently, the constraints \eqref{eq:constraint1}, \eqref{eq:constraint2}, and \eqref{eq:constraint4} at $t=0$ are reduced to
\begin{equation}\label{eq:twconstraints}
\mathbf{s}_{1,0}^2=0,\qquad \mathbf{s}_{1,0}\cdot \boldsymbol{\phi}_0=0, \qquad \boldsymbol{\phi}_0^2=\rho^2,
\end{equation}
respectively. The general solution of the first constraint in \eqref{eq:twconstraints} is \cite[Lemma~B.1]{berntsonklabbers2020}
\begin{equation}\label{eq:sgensol}
\mathbf{s}_{1,0}=s_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2),
\end{equation}
where $s_{1,0}\in {\mathbb C}$ and $\mathbf{n}_1,\mathbf{n}_2\in S^2$ are such that $\mathbf{n}_1\cdot\mathbf{n}_2=0$. By expanding $\boldsymbol{\phi}_0$ in the basis $\{\mathbf{n}_1,\mathbf{n}_2,\mathbf{n}_1\wedge\mathbf{n}_2\}$ of ${\mathbb C}^3$, $\boldsymbol{\phi}_0=\phi_{0,1}\mathbf{n}_1+\phi_{0,2}\mathbf{n}_2+\phi_{0,12}\mathbf{n}_{1}\wedge\mathbf{n}_{2}$, we see that the general solution of the second and third constraints in \eqref{eq:twconstraints} with \eqref{eq:sgensol} is
\begin{equation}\label{eq:phi0}
\boldsymbol{\phi}_0=\phi_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2)+\rho \mathbf{n}_1\wedge\mathbf{n}_2,
\end{equation}
with $\phi_{1,0}\in{\mathbb C}$ arbitrary. Moreover, the equations of motion \eqref{eq:phidot}, \eqref{eq:sCM1}, and \eqref{eq:sCM2} reduce to
\begin{equation}\label{eq:tweom}
\mspace{4mu}\dot{\mspace{-4mu}\bphi} =\boldsymbol{0}, \qquad \dot{\mathbf{s}}_1=\boldsymbol{0}, \qquad \ddot{a}_1=0, \qquad \ddot{b}_1=0,
\end{equation}
which may be integrated to $\boldsymbol{\phi}=\boldsymbol{\phi}_0$, $\mathbf{s}_1=\mathbf{s}_{1,0}$, $a_1=a_{1,0}+\dot{a}_{1,0} t$, $b_1=b_{1,0}+\dot{b}_{1,0} t$. Imposing \eqref{eq:ajdot} and using $\mathbf{s}_{1,0}\wedge({\rm i}\boldsymbol{\phi}_0)=-\rho\mathbf{s}_{1,0}$, which follows from \eqref{eq:VecId} with \eqref{eq:sgensol}--\eqref{eq:phi0}, gives $\dot{a}_{1,0}=\rho$ and $\dot{b}_{1,0}=-\rho$. We have arrived at the following class of exact solutions of the periodic ncIHF equation \eqref{eq:ncIHF},
\begin{align}\label{eq:tw1}
\left(\begin{array}{c} \mathbf{u}(x,t) \\ \mathbf{v}(x,t) \end{array}\right)=&\; \big(\phi_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2)+\rho \mathbf{n}_1\wedge\mathbf{n}_2\big)\left(\begin{array}{c} 1 \\ 1 \end{array}\right) \nonumber \\
&\; +{\rm i}\big(s_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2)\big)\left(\begin{array}{c} \zeta_2(x-a_{1,0}-\rho t+{\rm i}\delta/2)-\zeta_2(x-b_{1,0}+\rho t-{\rm i}\delta/2) \\ \zeta_2(x-a_{1,0}-\rho t-{\rm i}\delta/2)-\zeta_2(x-b_{1,0}+\rho t+{\rm i}\delta/2) \end{array}\right),
\end{align}
where $\phi_{1,0},s_{1,0},\rho\in {\mathbb C}$ and $\mathbf{n}_1,\mathbf{n}_2\in S^2$ are arbitrary and $a_{1,0}$ and $b_{1,0}$ satisfy \eqref{eq:ajakbjbk} (at $t=0$). We note that in the case $\rho\notin {\mathbb R}$, the condition \eqref{eq:imaj} will be violated in finite time, after which Theorem~\ref{thm:main} does not guarantee that \eqref{eq:tw1} provides a solution.
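The algebra leading from \eqref{eq:twconstraints} to \eqref{eq:sgensol}--\eqref{eq:phi0} and to the identity $\mathbf{s}_{1,0}\wedge({\rm i}\boldsymbol{\phi}_0)=-\rho\mathbf{s}_{1,0}$ can be verified numerically. The following Python sketch (with numpy; the values of $s_{1,0}$, $\phi_{1,0}$, $\rho$ are sample choices) checks all four relations for a concrete orthonormal pair $\mathbf{n}_1,\mathbf{n}_2$.
\begin{verbatim}
import numpy as np

n1, n2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # n1.n2 = 0
s0, phi10, rho = 0.7 - 0.2j, 0.4 + 0.9j, 1.3                    # sample values

s   = s0 * (n1 + 1j * n2)                              # eq. (sgensol)
phi = phi10 * (n1 + 1j * n2) + rho * np.cross(n1, n2)  # eq. (phi0)

print(np.isclose(np.dot(s, s), 0))                     # s_{1,0}^2 = 0
print(np.isclose(np.dot(s, phi), 0))                   # s_{1,0}.phi_0 = 0
print(np.isclose(np.dot(phi, phi), rho**2))            # phi_0^2 = rho^2
print(np.allclose(np.cross(s, 1j * phi), -rho * s))    # s ^ (i phi_0) = -rho s
\end{verbatim}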
The argument above can be generalized to give the following solutions in the case $N=M$ ($N\geq 1$),
\begin{align}\label{eq:tw2}
\left(\begin{array}{c} \mathbf{u}(x,t) \\ \mathbf{v}(x,t) \end{array}\right)=&\; \big(\phi_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2)+\rho \mathbf{n}_1\wedge\mathbf{n}_2\big)\left(\begin{array}{c} 1 \\ 1 \end{array}\right) \nonumber \\
&\; +{\rm i}\big(s_{1,0}(\mathbf{n}_1+{\rm i}\mathbf{n}_2)\big)\sideset{}{'}\sum_{j=1}^N \left(\begin{array}{c} \zeta_2(x-a_{j,0}-\rho t+{\rm i}\delta/2)-\zeta_2(x-b_{j,0}+\rho t-{\rm i}\delta/2) \\ \zeta_2(x-a_{j,0}-\rho t-{\rm i}\delta/2)-\zeta_2(x-b_{j,0}+\rho t+{\rm i}\delta/2) \end{array}\right),
\end{align}
where $\phi_{1,0},s_{1,0},\rho\in {\mathbb C}$ and $\mathbf{n}_1,\mathbf{n}_2\in S^2$ are arbitrary and the $a_{j,0}$ and $b_{j,0}$ ($j=1,\ldots,N$) satisfy \eqref{eq:ajakbjbk} (at $t=0$). Similar remarks as above, concerning the finite-time existence of the solution \eqref{eq:tw2} with $\rho\notin {\mathbb R}$, apply.
\begin{remark}
The absence of nontrivial traveling wave solutions for the periodic ncIHF equation in Theorem~\ref{thm:main} is surprising in view of the rich structure of analogous solutions, obtainable via pole ansatz \cite{berntsonklabbers2020}, for the half wave maps equation \cite{zhou2015,lenzmann2018}. We regard the classification of traveling wave solutions of the periodic ncIHF equation as an interesting open problem.
\end{remark}
\subsection{Initial data from an elliptic parameterization of the two-sphere}\label{subsec:parameterization}
One way to find initial data satisfying \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} is by considering the following parameterization of the two-sphere, defined by a map from ${\mathbb R}^2$ to $S^2$,
\begin{equation}\label{eq:R2toS2}
(x_1,x_2)\mapsto\big( \mathrm{sn}(x_1 | m) \mathrm{cn}(x_2 |m), \mathrm{sn}(x_1|m)\mathrm{sn}(x_2|m), \mathrm{cn}(x_1|m) \big),
\end{equation}
where $\mathrm{sn}(\cdot|m)$ and $\mathrm{cn}(\cdot|m)$ are the Jacobi sine and cosine functions with elliptic modulus $m$. The $S^2$-valuedness of \eqref{eq:R2toS2} can be shown using the identity \eqref{eq:sn2cn2}. Requisite details on the functions $\mathrm{sn}(z|m)$ and $\mathrm{cn}(z|m)$ and the elliptic integrals\footnote{In this context only, the prime in $K'=K'(m)$ does not indicate differentiation with respect to the argument; see \eqref{eq:Kp} for the definition of this function.} $K=K(m)$ and $K'=K'(m)$, which determine the periods of Jacobi elliptic functions, can be found in Appendix~\ref{app:elliptic}. The functions $\mathrm{sn}(z|m)$ and $\mathrm{cn}(z|m)$ are elliptic functions of $z$ with half-periods $(2K,{\rm i} K')$ and $(2K,K+{\rm i} K')$, respectively. Both functions have simple poles at
\begin{equation}\label{eq:xijk}
\xi_{jk} \coloneqq 2j K + (2k+1){\rm i} K' \quad (j,k\in {\mathbb Z})
\end{equation}
with corresponding residues
\begin{equation}\label{eq:residues}
\underset{z=\xi_{jk}}{\mathrm{Res}}\mathrm{sn}(z|m)=\frac{(-1)^j}{\sqrt{m}},\qquad \underset{z=\xi_{jk}}{\mathrm{Res}}\mathrm{cn}(z|m)=-\frac{{\rm i}(-1)^{j+k}}{\sqrt{m}}.
\end{equation}
Below, in Proposition~\ref{prop:jacobi}, we show that a specialization of the map \eqref{eq:R2toS2},
\begin{equation}\label{eq:r}
\mathbf{r}(x)\coloneqq \big(\mathrm{sn}(px|m)\mathrm{cn}(q(x-x_0)|m),\mathrm{sn}(px|m)\mathrm{sn}(q(x-x_0)|m),\mathrm{cn}(px|m)\big),
\end{equation}
for positive integers $p,q$ and real $x_0$, can be used to construct real initial data satisfying the conditions of Theorem~\ref{thm:main}, where the parameters $\boldsymbol{\phi}_0$, $\mathbf{s}_{j,0}$, and $a_{j,0}$ satisfy the constraints \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} with $N=M$, $\mathbf{t}_j=\mathbf{s}_j^*$, and $b_j=a_j^*$ at $t=0$. Note that the primitive periods of the function $\mathbf{r}(x)$ in \eqref{eq:r} are $4K(m)$ and $4{\rm i} K'(m)$ and thus we set $\ell=2K(m)$, ${\rm i}\delta=2{\rm i} K'(m)$ as the half-periods of the $\zeta_2$-function in \eqref{eq:ansatzreal}.
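Before stating the proposition, we note that the $S^2$-valuedness of \eqref{eq:r} for real $x$ is easy to confirm numerically. The following Python sketch uses scipy's Jacobi elliptic functions (\texttt{scipy.special.ellipj}, which takes the parameter $m$, matching the convention $\mathrm{sn}(\cdot|m)$ used here); the values of $p$, $q$, $m$, and $x_0$ are sample choices.
\begin{verbatim}
import numpy as np
from scipy.special import ellipj

p, q, m, x0 = 2, 1, 0.5, 0.3          # sample parameters
x = np.linspace(0.0, 5.0, 11)         # real sample points

sn1, cn1, _, _ = ellipj(p * x, m)
sn2, cn2, _, _ = ellipj(q * (x - x0), m)
r = np.stack([sn1 * cn2, sn1 * sn2, cn1])

print(np.allclose((r**2).sum(axis=0), 1.0))   # True: r(x) lies on S^2
\end{verbatim}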
\begin{proposition}\label{prop:jacobi}
Let $m \in (0,1)$, $p,q \in {\mathbb Z}_{\geq 1}$, and $x_0\in (0,4K(m))$ be such that the sets
\begin{equation}\label{eq:A1}
\mathcal{A}_1\coloneqq \big\{\alpha_{jk}^{(1)}:0\leq j\leq 2p-1,0\leq k\leq p-1\big\},\qquad \alpha_{jk}^{(1)}\coloneqq \frac{\xi_{jk}}{p}
\end{equation}
and
\begin{equation}\label{eq:A2}
\mathcal{A}_2\coloneqq \big\{\alpha_{jk}^{(2)}:0\leq j\leq 2q-1,0\leq k\leq q-1\big\},\qquad \alpha_{jk}^{(2)}\coloneqq \frac{\xi_{jk}}{q}+x_0
\end{equation}
are disjoint. Then, \eqref{eq:ansatzreal} with $N=2(p^2+q^2)$, $\ell=2K(m)$, ${\rm i}\delta=2{\rm i} K'(m)$,
\begin{equation}\label{eq:ajsj1}
\begin{split}
a_{jp+k+1,0}=&\; \alpha_{jk}^{(1)}+\frac{{\rm i}\delta}{2}, \\
\mathbf{s}_{jp+k+1,0}=&\; \frac{-{\rm i} (-1)^j}{ p \sqrt{m}}\left(\begin{array}{c}
\mathrm{cn}\big(q\big(\alpha_{jk}^{(1)}-x_0\big)\big) \\
\mathrm{sn}\big(q\big(\alpha_{jk}^{(1)}-x_0\big)\big) \\
-{\rm i} (-1)^{k}
\end{array}\right)
\end{split} \quad (0\leq j\leq 2p-1,0\leq k\leq p-1),
\end{equation}
\begin{equation}\label{eq:ajsj2}
\begin{split}
a_{2p^2+j q+k+1,0}=&\; \alpha_{jk}^{(2)}+\frac{{\rm i}\delta}{2}, \\
\mathbf{s}_{2p^2+j q+k+1,0}=&\; \frac{-{\rm i} (-1)^{j}}{ q \sqrt{m}}\left(\begin{array}{c}
-{\rm i}(-1)^k\mathrm{sn}\big( p \alpha_{jk}^{(2)} \big) \\
\mathrm{sn}\big( p \alpha_{jk}^{(2)}\big) \\
0
\end{array}\right)
\end{split} \quad (0\leq j\leq 2q-1,0\leq k\leq q-1),
\end{equation}
and
\begin{align}\label{eq:phi02}
\boldsymbol{\phi}_0= \left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right)+\frac{1}{\sqrt{m}} &\left(\sideset{}{'}\sum_{j=0}^{2p-1}\sideset{}{'}\sum_{k=0}^{p-1} \frac{(-1)^j}{p}\zeta_2\big(\alpha_{jk}^{(1)}\big)\left(\begin{array}{c}
\mathrm{cn}\big(q\big(\alpha_{jk}^{(1)}-x_0\big)\big) \\
\mathrm{sn}\big(q\big(\alpha_{jk}^{(1)}-x_0\big)\big) \\
-{\rm i}(-1)^k
\end{array}\right)\right. \nonumber \\
& \left.+\sideset{}{'}\sum_{j=0}^{2q-1}\sideset{}{'}\sum_{k=0}^{q-1} \frac{(-1)^j}{q}\zeta_2\big(\alpha_{jk}^{(2)}\big)\left(\begin{array}{c}
-{\rm i}(-1)^k\mathrm{sn}\big(p\alpha_{jk}^{(2)}\big) \\
\mathrm{sn}\big(p\alpha_{jk}^{(2)}\big) \\
0
\end{array}\right)+\mathrm{c.c.}\right)
\end{align}
(where $\mathrm{c.c.}$ denotes the complex conjugate of the terms within the parentheses) provides initial data for the periodic ncIHF equation satisfying the conditions of Theorem~\ref{thm:main}.
\end{proposition}
\begin{remark}
The sets $\mathcal{A}_1$ and $\mathcal{A}_2$ contain the poles of the functions $\mathrm{sn}(px|m)$ and $\mathrm{cn}(q(x-x_0)|m)$ (equivalently $\mathrm{sn}(q(x-x_0)|m)$, see \eqref{eq:xijk}), respectively for $x$ in $[0,2\ell)\times {\rm i}[0,\delta)$. The corresponding full sets of poles within the period parallelogram $[0,2\ell)\times {\rm i}[-\delta,\delta)$ are $\mathcal{A}_1\cup \mathcal{A}_1^*$ and $\mathcal{A}_2\cup \mathcal{A}_2^*$, respectively. In view of \eqref{eq:xijk}, it is natural to label the elements of $\mathcal{A}_1$ by non-negative integers $j,k$ satisfying $0\leq j\leq 2p-1$, $0\leq k\leq p-1$ and similarly for $\mathcal{A}_2$, see \eqref{eq:A1}-\eqref{eq:A2}. On the other hand, the poles $a_j$ in \eqref{eq:ansatzreal} are labelled by a single index $j\in \{1,\ldots,N=2(p^2+q^2)\}$. To bridge this gap, the subscripts in \eqref{eq:ajsj1}--\eqref{eq:ajsj2} define a bijection between the underlying index sets of (i) the $\alpha_{jk}^{(1)}$, $\alpha_{jk}^{(2)}$ in \eqref{eq:A1}--\eqref{eq:A2} and (ii) the $a_j$ in \eqref{eq:ansatzreal}.
\end{remark}
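The labelling in \eqref{eq:ajsj1}--\eqref{eq:ajsj2} can be checked to be a bijection onto $\{1,\ldots,2(p^2+q^2)\}$ with a few lines of Python (shown here for one sample choice of $p$ and $q$):
\begin{verbatim}
p, q = 3, 2                                   # sample values
idx  = [j*p + k + 1 for j in range(2*p) for k in range(p)]
idx += [2*p**2 + j*q + k + 1 for j in range(2*q) for k in range(q)]
N = 2*(p**2 + q**2)
print(sorted(idx) == list(range(1, N + 1)))   # True: the labelling is a bijection
\end{verbatim}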
\subsubsection{Proof of Proposition~\ref{prop:jacobi}}
We begin by writing the function $\mathbf{r}(x)$ in \eqref{eq:r} in terms of the function $\zeta_2(z)$ \eqref{eq:zeta2}.
\begin{lemma}\label{lem:jacobi}
The components $\big(r^1(x),r^2(x),r^3(x)\big)$ of the function $\mathbf{r}(x)$ in \eqref{eq:r} can be decomposed in terms of the function $\zeta_2(z;\ell,{\rm i}\delta)$ with half-periods $\ell=2K(m)$ and ${\rm i}\delta = 2{\rm i} K'(m)$ as follows,
\begin{align}
\label{eq:jacobi_decomposition}
r^1(x)=&\; \frac{1}{\sqrt{m}}\sideset{}{'}\sum_{j=0}^{2p-1} \sideset{}{'}\sum_{k=0}^{p-1} \frac{(-1)^{j}}{p} \mathrm{cn} \big(q\big(\alpha_{jk}^{(1)} -x_0\big)\big)\big( \zeta_2\big(x-\alpha_{jk}^{(1)} \big) +\zeta_2\big(\alpha_{jk}^{(1)}\big) \big) \nonumber \\
&\;- \frac{{\rm i}}{\sqrt{m}} \sideset{}{'}\sum_{j=0}^{2q-1} \sideset{}{'}\sum_{k=0}^{q-1} \frac{(-1)^{j+k}}{q} \mathrm{sn}\big(p \alpha_{jk}^{(2)}\big) \big( \zeta_2\big(x-\alpha_{jk}^{(2)} \big)+\zeta_2\big( \alpha_{jk}^{(2)} \big)\big) + \mathrm{c.c.}, \nonumber \\
r^2(x) =&\; \frac{1}{\sqrt{m}}\sideset{}{'}\sum_{j=0}^{2p-1} \sideset{}{'}\sum_{k=0}^{p-1} \frac{(-1)^{j}}{p} \mathrm{sn}\big(q\big(\alpha_{jk}^{(1)} -x_0\big)\big)\big( \zeta_2\big(x-\alpha_{jk}^{(1)} \big) +\zeta_2\big(\alpha_{jk}^{(1)}\big) \big) \nonumber \\
&\;+ \frac{1}{\sqrt{m}}\sideset{}{'}\sum_{j=0}^{2q-1} \sideset{}{'}\sum_{k=0}^{q-1} \frac{(-1)^{j}}{q} \mathrm{sn}\big(p \alpha_{jk}^{(2)} \big) \big( \zeta_2\big(x-\alpha_{jk}^{(2)}\big) +\zeta_2\big(\alpha_{jk}^{(2)}\big) \big) + \mathrm{c.c.}, \nonumber \\
r^3(x)-1=&\; -\frac{{\rm i}}{\sqrt{m}} \sideset{}{'}\sum_{j=0}^{2p-1} \sideset{}{'}\sum_{k=0}^{p-1}\frac{(-1)^{j+k}}{p} \big(\zeta_2\big(x-\alpha_{jk}^{(1)}\big)+\zeta_2\big(\alpha_{jk}^{(1)}\big)\big)+ \mathrm{c.c.},
\end{align}
where $\mathrm{c.c.}$ denotes the complex conjugate of the written terms.
\end{lemma}
\begin{proof}
We consider the function $\mathbf{r}(z)$ for $z\in \Pi\coloneqq [0,2\ell)\times {\rm i}[-\delta,\delta)$, a (primitive) period parallelogram. The function $\mathbf{r}(z)$ has a pole at each element of $\mathcal{A}_1\cup \mathcal{A}_2\cup \mathcal{A}_1^*\cup \mathcal{A}_2^*$, with $\mathcal{A}_1$ and $\mathcal{A}_2$ defined in \eqref{eq:A1}--\eqref{eq:A2}. It follows from \eqref{eq:xijk}, \eqref{eq:residues}, and the definition of $\mathbf{r}(x)$ \eqref{eq:r} that
\begin{equation}\label{eq:residuesymmetry}
\underset{z=\big(\alpha_{jk}^{(1)}\big)^*}{\mathrm{Res}} \mathbf{r}(z)=\bigg(\underset{z=\alpha_{jk}^{(1)}}{\mathrm{Res}} \mathbf{r}(z)\bigg)^*,\qquad \underset{z=\big(\alpha_{jk}^{(2)}\big)^*}{\mathrm{Res}} \mathbf{r}(z)=\bigg(\underset{z=\alpha_{jk}^{(2)}}{\mathrm{Res}} \mathbf{r}(z)\bigg)^*.
\end{equation}
We will use this symmetry to obtain \eqref{eq:jacobi_decomposition}. Let
\begin{equation}\label{eq:g2}
g(z;\alpha)\coloneqq \zeta_2(z-\alpha)+\zeta_2(\alpha),
\end{equation}
a $2{\rm i}\delta$-periodic meromorphic function of $z$ with simple poles at $z=\alpha \bmod \Lambda$, with $\Lambda$ as in \eqref{eq:Lambda}. Additionally, $g(0;\alpha)=0$ when $\alpha\neq 0\bmod\Lambda$; this follows from \eqref{eq:g2} and the fact that $\zeta_2(z)$ is an odd function \eqref{eq:parity}.
We claim that
\begin{equation}\label{eq:ransatz}
\mathbf{r}(x)-\mathbf{r}(0)=\sideset{}{'}\sum_{j=0}^{2p-1}\sideset{}{'}\sum_{k=0}^{p-1} \bigg(\underset{z=\alpha_{jk}^{(1)}}{\mathrm{Res}} \mathbf{r}(z)\bigg) g\big(x;\alpha_{jk}^{(1)}\big)+\sideset{}{'}\sum_{j=0}^{2q-1}\sideset{}{'}\sum_{k=0}^{q-1} \bigg(\underset{z=\alpha_{jk}^{(2)}}{\mathrm{Res}} \mathbf{r}(z)\bigg) g\big(x;\alpha_{jk}^{(2)}\big)+\mathrm{c.c.}
\end{equation}
To see this, we note that, by \eqref{eq:residuesymmetry}, the right-hand-side of \eqref{eq:ransatz} has the same poles and residues as $\mathbf{r}(z)-\mathbf{r}(0)$ within $\Pi$. Moreover, the right-hand-side of \eqref{eq:ransatz} is elliptic: while $2{\rm i}\delta$-periodicity follows from that of $g(z;\alpha)$ via \eqref{eq:imperiod}, $2\ell$-periodicity is a consequence of the identities $g(z+2\ell;\alpha)=g(z;\alpha)+\pi/\delta$, which follows from \eqref{eq:realperiod}, and
\begin{equation}
\sideset{}{'}\sum_{j=0}^{2p-1}\sideset{}{'}\sum_{k=0}^{p-1} \underset{z=\alpha_{jk}^{(1)}}{\mathrm{Res}} \mathbf{r}(z) +\sideset{}{'}\sum_{j=0}^{2q-1}\sideset{}{'}\sum_{k=0}^{q-1} \underset{z=\alpha_{jk}^{(2)}}{\mathrm{Res}} \mathbf{r}(z) +\mathrm{c.c.}=\boldsymbol{0},
\end{equation}
which holds using \eqref{eq:residuesymmetry} and the fact that the sum of residues within $\Pi$ of the elliptic function $\mathbf{r}(z)$ vanishes. Because both sides of \eqref{eq:ransatz} evaluate to $\boldsymbol{0}$ at $x=0$, by Liouville's theorem, \eqref{eq:ransatz} holds.
The result \eqref{eq:jacobi_decomposition} follows from \eqref{eq:ransatz} after inserting $\mathbf{r}(0)=(0,0,1)$ (because $\mathrm{sn}(0|m)=0$ and $\mathrm{cn}(0|m)=1$) and computing the residues using \eqref{eq:r} and \eqref{eq:residues}.
\end{proof}
We set $\mathbf{u}_0(x)=\mathbf{r}(x)$. By comparing \eqref{eq:ansatzreal} with \eqref{eq:jacobi_decomposition}, we obtain \eqref{eq:ajsj1}--\eqref{eq:ajsj2}, and \eqref{eq:phi02}. Because $\mathbf{r}(x)^2=1$ by construction, we have $\mathbf{u}_0(x)^2=1$. From \eqref{eq:UdotU3}, it is clear that $\mathbf{v}_0(x)^2=1$ (with $\mathbf{v}_0(x)$ given by \eqref{eq:ansatzreal} with \eqref{eq:ajsj1}--\eqref{eq:ajsj2}, and \eqref{eq:phi02}) if and only if $\mathbf{u}_0(x)^2=1$. We now apply Proposition~\ref{prop:constraints} directly in the special case $N=M$, $\mathbf{t}_j=\mathbf{s}_j^*$, $b_j=a_j^*$, and $\rho=1$. Because $\mathbf{u}_0(x)^2=\mathbf{v}_0(x)^2=1$, we have that the constraints \eqref{eq:constraint1}-\eqref{eq:constraint3} and \eqref{eq:constraint4} are satisfied by \eqref{eq:ajsj1}--\eqref{eq:ajsj2}, and \eqref{eq:phi02}.
\subsubsection{Numerical implementation}\label{subsubsec:numerics}
In the source file of our submission, we have included a Mathematica notebook to visualize solutions of the periodic ncIHF equation with initial data in the form \eqref{eq:r}. Using Proposition~\ref{prop:jacobi}, such data may be transformed into the form \eqref{eq:ansatzreal} (a special case of \eqref{eq:ansatz}) to which Theorem~\ref{thm:main} applies. For chosen $p$, $q$, $m$, and $x_0$, our Mathematica notebook performs the transformation of Proposition~\ref{prop:jacobi} and uses the resulting parameters $a_{j,0}$, $\mathbf{s}_{j,0}$, and $\boldsymbol{\phi}_0$ as initial conditions for the reduction \eqref{eq:reduction} of the ODE system in Theorem~\ref{thm:main}. By numerically solving these ODEs, we obtain numerical solutions of the periodic ncIHF equation in the form \eqref{eq:ansatzreal}. Visualizations of a particular solution obtained using this method are presented in Section~\ref{subsec:breather}.
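For readers who prefer Python, the time stepping itself reduces to a standard ODE integration; the sketch below (not part of our submission, which uses Mathematica) shows one way to call scipy's solver on a complex-valued system. The right-hand side implementing \eqref{eq:sCM1} and \eqref{eq:phidotreal} -- which requires the functions $\zeta_2$, $\wp_2'$, and $f_2'$ -- is assumed to be supplied by the user and is replaced here by a trivial stand-in to demonstrate the call signature.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def evolve(rhs, y0, T):
    # rhs(t, y) must return dy/dt for the packed complex state vector y
    # (e.g. the concatenation of a_j, s_j, and phi); solve_ivp handles
    # complex-valued states when y0 has a complex dtype.
    return solve_ivp(rhs, (0.0, T), np.asarray(y0, dtype=complex),
                     rtol=1e-10, atol=1e-12, dense_output=True)

# call-signature demo with a trivial stand-in right-hand side:
sol = evolve(lambda t, y: 1j * y, [1.0, 2.0], 1.0)
print(sol.y[:, -1])   # approximately exp(1j) * y0
\end{verbatim}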
\subsection{A breather solution}\label{subsec:breather}
We study a particular instance of the solution of the periodic ncIHF equation with initial data constructed using Proposition~\ref{prop:jacobi}. This solution exhibits energy oscillations reminiscent of well-known breather solutions of the nonlinear Schr\"{o}dinger \cite{akhmediev1986} and sine-Gordon equations \cite{ablowitz1973}. To be more specific, we will present numerical evidence of a solution of the ncIHF equation where the energy density is time-periodic but the solution itself is not. An explicit formula for the energy density of a solution \eqref{eq:ansatzreal} of the ncIHF equation is presented in Section~\ref{sec:energydensity}.
To avoid misunderstanding, we emphasize that the results presented in this subsection are primarily numerical: a particular exact solution of the constraints \eqref{eq:constraint1}--\eqref{eq:constraint3} and \eqref{eq:constraint4} given in Section~\ref{subsec:parameterization} provides admissible initial data for Theorem~\ref{thm:main}; we numerically solve the equations of motion of the spin CM system \eqref{eq:sCM1} and background dynamics \eqref{eq:phidot} to evolve the solution \eqref{eq:ansatz} in time, using the method described in Section~\ref{subsubsec:numerics}.
We set $p=q=1$ and $x_0=K$ in \eqref{eq:r} to obtain the following map from ${\mathbb R}$ to $S^2$.
\begin{equation}\label{eq:rbreather}
\mathbf{r}(x)\coloneqq \big(\mathrm{sn}(x|m)\mathrm{cn}(x-K|m),\mathrm{sn}(x|m)\mathrm{sn}(x-K|m),\mathrm{cn}(x|m)\big).
\end{equation}
We set $m=1/2$, yielding $\ell=\delta=2K(1/2) \approx 3.708$. Using Proposition~\ref{prop:jacobi}, \eqref{eq:rbreather} can be written as \eqref{eq:ansatzreal} with $N=4$ and
\begin{align}\label{eq:breatherdata}
&a_{1,0}=2{\rm i} K(1/2), \qquad a_{2,0}=(2+2{\rm i}) K(1/2), \qquad a_{3,0}=(1+2{\rm i}) K(1/2),\qquad a_{4,0}=(3+2{\rm i}) K(1/2), \nonumber \\
&\mathbf{s}_{1,0}=\big(\sqrt{2},2{\rm i},-\sqrt{2}\big),\qquad \mathbf{s}_{2,0}=\big(\sqrt{2},2{\rm i},-\sqrt{2}\big),\qquad \mathbf{s}_{3,0}=(-2,-2{\rm i},0),\qquad \mathbf{s}_{4,0}=(-2,-2{\rm i},0), \nonumber \\
&\boldsymbol{\phi}_0\approx (0,1.694,0).
\end{align}
In accordance with Corollary~\ref{cor:main}, we solve \eqref{eq:sCM1} and \eqref{eq:phidotreal} subject to the initial conditions \eqref{eq:breatherdata} and with initial velocities computed from \eqref{eq:ajdotreal} (at $t=0$). The resulting dynamics for the poles $a_j$ are time-periodic with period $T\approx 11.83$. A visualization of the dynamics of the poles is shown in Fig.~\ref{fig:breather_poles}.
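The numerical values in \eqref{eq:breatherdata} are straightforward to reproduce; for instance, the following Python lines (an illustration, independent of our Mathematica notebook) confirm $\ell=\delta=2K(1/2)\approx 3.708$ and the constraint $\mathbf{s}_{j,0}^2=0$ for the spins listed above.
\begin{verbatim}
import numpy as np
from scipy.special import ellipk

K = ellipk(0.5)                 # complete elliptic integral, parameter m = 1/2
print(2 * K)                    # ~3.708, i.e. ell = delta = 2K(1/2)

s1 = np.array([np.sqrt(2), 2j, -np.sqrt(2)])   # = s_{1,0} = s_{2,0}
s3 = np.array([-2.0, -2j, 0.0])                # = s_{3,0} = s_{4,0}
print(np.isclose(np.dot(s1, s1), 0), np.isclose(np.dot(s3, s3), 0))  # True True
\end{verbatim}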
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9]
\def\a{2.25};
\def\b{2.5};
\def\d{4.5};
\node at (\d,0)
{
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=0.65]{plots/poles2.eps}
};
\node at (-3.3,-\a+0.07) {\scriptsize{$0$}};
\node at (-1.6,-\a) {\scriptsize{$T/4$}};
\node at (0,-\a) {\scriptsize{$T/2$}};
\node at (1.6,-\a) {\scriptsize{$3T/4$}};
\node at (3.25,-\a) {\scriptsize{$T$}};
\node at (3.4,-1.65) {$t$};
\node at (-3.05,2.2) {\small $\mathrm{Im}\,a$};
\node at (-3.6,1) {\scriptsize $3\delta/2$};
\node at (-3.55,-1) {\scriptsize $\delta/2$};
\end{tikzpicture}
};
\node at (-\d,0)
{
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=0.65]{plots/poles1.eps}
};
\node at (-3.25,-\a+0.07) {\scriptsize{$0$}};
\node at (-1.6,-\a) {\scriptsize{$L/4$}};
\node at (0,-\a) {\scriptsize{$L/2$}};
\node at (1.6,-\a) {\scriptsize{$3L/4$}};
\node at (3.25,-\a) {\scriptsize{$L$}};
\node at (3.4,-1.65) {\small $\mathrm{Re}\,a$};
\node at (-3,2.2) {\small $\mathrm{Im}\,a$};
\node at (-3.6,1) { \scriptsize $3\delta/2$};
\node at (-3.5,-1) {\scriptsize $\delta/2$};
\end{tikzpicture}
};
\end{tikzpicture}
\caption{Time evolution of the breather solution with initial data \eqref{eq:breatherdata} I: evolution of the poles. The left plot shows the location of the four poles $a_1$ (blue), $a_2$ (purple), $a_3$ (yellow), and $a_4$ (green) at $t=0$. In addition, the colored shadow indicates the path the poles trace as time evolves, showing that $a_1$ and $a_2$ are stationary, whereas $a_3$ and $a_4$ oscillate vertically. The right plot shows the imaginary part of the two moving poles during a full period from $t=0$ to $t=T \approx 11.83$.
}
\label{fig:breather_poles}
\end{figure}
However, the dynamics of the spins $\mathbf{s}_j$ and of the background vector $\boldsymbol{\phi}$ are not time-periodic, and correspondingly, the solution \eqref{eq:ansatzreal} of the ncIHF equation is not time-periodic. This solution is shown in Fig.~\ref{fig:breather}. We observe that at times $t=T/4+n T/2$ ($n\in {\mathbb Z}_{\geq 0}$), the solution has two points of non-differentiability. Corollary~\ref{cor:main} does not apply at these times; it guarantees a solution of the ncIHF equation only on time intervals excluding the set $\{T/4+n T/2:n\in {\mathbb Z}_{\geq 0}\}$.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.9]
\def\a{3.4};
\def\b{2.5};
\def\d{2.1};
\node at (0,\d) {\includegraphics[scale=0.47]{plots/spheres1.eps}};
\node at (0,-\d) {\includegraphics[scale=0.47]{plots/spheres2.eps}};
\node at (-0.05,-\d-\a) {\includegraphics[scale=1.3]{plots/legend.eps}};
\node at (0,-\d-\a+0.7) {$x $};
\node at (-5.07,-\d-\a-0.55) {\footnotesize $0$};
\node at (-2.57,-\d-\a-0.55) {\footnotesize $L/4$};
\node at (0,-\d-\a-0.55) {\footnotesize $L/2$};
\node at (2.4,-\d-\a-0.55) {\footnotesize $3L/4$};
\node at (5,-\d-\a-0.55) {\footnotesize $L$};
\draw[gray] (-8,\d+\b) -- (8,\d+\b);
\draw[gray] (-8,-\d-\b+0.2) -- (8,-\d-\b+0.2);
\foreach \x in {0,4,8,12,16}
{
\draw[gray] (-8+\x,-\d-\b+0.2) -- (-8+\x,\d+\b);
};
\node at (-6,\d+\b-0.3) {\scriptsize $t=0$};
\node at (-6+4,\d+\b-0.3) {\scriptsize $t=T/4-1/2$};
\node at (-6+8,\d+\b-0.3) {\scriptsize $t=T/4$};
\node at (-6+12,\d+\b-0.3) {\scriptsize $t=T/4+1/2$};
\node at (-6,-\d+\b-0.5) {\scriptsize $t=3T/4-1/2$};
\node at (-6+4,-\d+\b-0.5) {\scriptsize $t=3T/4$};
\node at (-6+8,-\d+\b-0.5) {\scriptsize $t=3T/4+1/2$};
\node at (-6+12,-\d+\b-0.5) {\scriptsize $t=T$};
\node at (-7.7,-\d-\b+0.6) {
\tdplotsetmaincoords{75}{90-41}
\begin{tikzpicture}[tdplot_main_coords,font=\sffamily,scale=0.4]
\draw[-latex] (0,0,0) -- (1,0,0) node[yshift=0pt, xshift=2.3pt] {\tiny $x$};
\draw[-latex] (0,0,0) -- (0,1,0) node[yshift=2pt, xshift=0.5pt] {\tiny $y$};
\draw[-latex] (0,0,0) -- (0,0,1) node[yshift=2.0pt, xshift=0pt] {\tiny $z$};
\end{tikzpicture}
};
\end{tikzpicture}
\caption{Time evolution of the breather solution with initial data \eqref{eq:breatherdata} II: spatial dependence of $\mathbf{u}(x,t)$ at eight instances of time $t$, expressed in terms of the ``period'' $T\approx 11.83$, with colors indicating the position $x$ according to the legend at the bottom. Note that at $t=T/4$ and $t=3T/4$, when $\mathbf{u}(x,t)$ is not differentiable at two points, $\mathbf{u}(x,t)$ traces its image exactly twice as $x$ goes from $0$ to $L$. The plots only show one such tracing. By comparing $t=0$ and $t=T$ one sees that the image of $\mathbf{u}$ is not periodic in time, in contrast to the pole and energy evolution (see Figs.~\ref{fig:breather_poles} and \ref{fig:breather_energy}). The time evolution of $\mathbf{v}(x,t)$ is the reflection of $\mathbf{u}$ in the $xz$-plane. The orientation of all plots is the same and indicated by the coordinate system in the bottom left corner.
}
\label{fig:breather}
\end{figure}
The energy density associated with this solution oscillates periodically in time, see Fig.~\ref{fig:breather_energy}. An explicit formula for the energy density is presented below in Section~\ref{sec:energydensity}.
\begin{remark}
We expect that, by using the methods in \cite{krichever1995spin}, one could verify that the solution of \eqref{eq:sCM1} with the initial conditions \eqref{eq:breatherdata} and initial velocities satisfying \eqref{eq:ajdotreal} at $t=0$ exists on $[0,\infty)$ and that the poles $a_j$ are time-periodic. Then, Corollary~\ref{cor:main} would guarantee a solution of the periodic ncIHF equation on $[0,\infty)\setminus \{T/4+nT/2:n\in {\mathbb Z}_{\geq 0}\}$. We further expect that for a suitable notion of weak solutions of the periodic ncIHF equation, the ansatz \eqref{eq:ansatz} with the elliptic spin CM solution described above would solve the periodic ncIHF equation on $[0,\infty)$. These investigations are outside of the scope of the present paper.
\end{remark}
\subsubsection{Energy densities}\label{sec:energydensity}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.86]
\def\a{3};
\def\b{0.6};
\def\d{1.4};
\def\s{0.6};
\def\sc{0.46};
\def\scc{0.36};
\def\sccc{1.06};
\def\h{0.93};
\node at (-7,0) {
\includegraphics[scale=\sc]{plots/energy1.eps}
};
\node at (-7,0) {
\begin{tikzpicture}[scale=\sccc]
\node at (1.85,-0.6) {\scalebox{0.6}{$x$}};
\node at (-1.48,1.5) {\small $\mathbf{\epsilon}$};
\node at (-1.55,-0.78) {\scalebox{\s}{$0$}};
\node at (-1.55,0.12) {\scalebox{\s}{$2$}};
\node at (-1.55,0.92) {\scalebox{\s}{$4$}};
\node at (-0.65,-0.85) {\scalebox{\s}{$L/4$}};
\node at (0.15,-0.85) {\scalebox{\s}{$L/2$}};
\node at (0.95,-0.85) {\scalebox{\s}{$3L/4$}};
\node at (1.8,-0.83) {\scalebox{\s}{$L$}};
\end{tikzpicture}
};
\node at (-2.4,0) {
\includegraphics[scale=\sc]{plots/energy2.eps}
};
\node at (-2.4,0) {
\begin{tikzpicture}[scale=\sccc]
\node at (1.85,-0.6) {\scalebox{0.6}{$x$}};
\node at (-1.48,1.5) {\small $\mathbf{\epsilon}$};
\node at (-1.55,-0.78) {\scalebox{\s}{$0$}};
\node at (-1.55,0.12) {\scalebox{\s}{$2$}};
\node at (-1.55,0.92) {\scalebox{\s}{$4$}};
\node at (-0.65,-0.85) {\scalebox{\s}{$L/4$}};
\node at (0.15,-0.85) {\scalebox{\s}{$L/2$}};
\node at (0.95,-0.85) {\scalebox{\s}{$3L/4$}};
\node at (1.8,-0.83) {\scalebox{\s}{$L$}};
\end{tikzpicture}
};
\node at (2.4,0) {
\includegraphics[scale=\sc]{plots/energy3.eps}
};
\node at (2.4,0) {
\begin{tikzpicture}[scale=\sccc]
\node at (1.85,-0.6) {\scalebox{0.6}{$x$}};
\node at (-1.48,1.5) {\small $\mathbf{\epsilon}$};
\node at (-1.55,-0.78) {\scalebox{\s}{$0$}};
\node at (-1.55,0.12) {\scalebox{\s}{$2$}};
\node at (-1.55,0.92) {\scalebox{\s}{$4$}};
\node at (-0.65,-0.85) {\scalebox{\s}{$L/4$}};
\node at (0.15,-0.85) {\scalebox{\s}{$L/2$}};
\node at (0.95,-0.85) {\scalebox{\s}{$3L/4$}};
\node at (1.8,-0.83) {\scalebox{\s}{$L$}};
\end{tikzpicture}
};
\node at (7,0) {
\includegraphics[scale=\sc]{plots/energy4.eps}
};
\node at (7,0) {
\begin{tikzpicture}[scale=\sccc]
\node at (1.85,-0.6) {\scalebox{0.6}{$x$}};
\node at (-1.48,1.5) {\small $\mathbf{\epsilon}$};
\node at (-1.55,-0.78) {\scalebox{\s}{$0$}};
\node at (-1.55,0.12) {\scalebox{\s}{$2$}};
\node at (-1.55,0.92) {\scalebox{\s}{$4$}};
\node at (-0.65,-0.85) {\scalebox{\s}{$L/4$}};
\node at (0.15,-0.85) {\scalebox{\s}{$L/2$}};
\node at (0.95,-0.85) {\scalebox{\s}{$3L/4$}};
\node at (1.8,-0.83) {\scalebox{\s}{$L$}};
\end{tikzpicture}
};
\node at (-7,\d+\b-0.3) {\scriptsize $t=0$};
\node at (-2.4,\d+\b-0.3) {\scriptsize $t=T/4$};
\node at (2.4,\d+\b-0.3) {\scriptsize $t=T/2$};
\node at (7,\d+\b-0.3) {\scriptsize $t=3T/4$};
\end{tikzpicture}
\caption{Time evolution of the breather solution with initial data \eqref{eq:breatherdata} III: energy density at four instances of time $t$.
At each time $t$, the total energy density $\epsilon(x,t)=\epsilon_{\mathbf{u}}(x,t)+\epsilon_{\mathbf{v}}(x,t)$ and the individual energy densities $\epsilon_{\mathbf{u}}(x,t)$ (red) and $\epsilon_{\mathbf{v}}(x,t)$ (blue) \eqref{eq:energy_uv} of the $\mathbf{u}$- and the $\mathbf{v}$-channels are shown. The plots illustrate that the total energy density $\epsilon(x,t)$ is periodic with period $T/2\approx 5.916$, but the $\mathbf{u}$- and $\mathbf{v}$-channel energy densities are periodic with period $T\approx 11.83$ only.
At $t=T$ the energy densities are exactly the same as at $t=0$. }
\label{fig:breather_energy}
\end{figure}
It was shown in \cite[Appendix~A]{berntsonklabbers2021} that a Hamiltonian for the periodic ncIHF equation is given by
\begin{equation}\label{eq:hamiltonian}
\mathcal{H}= \int_{-\ell}^{\ell} (\epsilon_{\mathbf{u}}+\epsilon_{\mathbf{v}})\,\mathrm{d}x,\qquad \left(\begin{array}{c} \epsilon_{\mathbf{u}} \\ -\epsilon_{\mathbf{v}} \end{array}\right)\coloneqq -\frac12 \mathbf{U}\dotcirc \mathcal{T}\mathbf{U}_x = -\frac12 \left(\begin{array}{c} \mathbf{u}\cdot (T\mathbf{u}_x-\tilde{T}\mathbf{v}_x) \\ -\mathbf{v}\cdot (T\mathbf{v}_x-\tilde{T}\mathbf{u}_x) \end{array}\right),
\end{equation}
where the functions $\epsilon_\mathbf{u}$ and $\epsilon_\mathbf{v}$ can be interpreted as the energy densities associated with the $\mathbf{u}$ and $\mathbf{v}$ fields, respectively. By inserting \eqref{eq:ansatzreal} into \eqref{eq:hamiltonian} and using \eqref{eq:cTA}, \eqref{eq:ArjArk}, and \eqref{eq:constraint2}--\eqref{eq:constraint3}, a calculation similar to that in \cite[Section~5.3]{berntsonklabbers2021} gives the following result.
\begin{proposition}
The energy densities \eqref{eq:hamiltonian} associated with a real $N$-soliton solution \eqref{eq:ansatzreal} of the periodic ncIHF equation \eqref{eq:ncIHF} are given by
\begin{equation}
\label{eq:energy_uv}
\begin{split}
\epsilon_{\mathbf{u}}=&\; -2\,\mathrm{Im} \Bigg(\sideset{}{'}\sum_{j=1}^N \sideset{}{'}\sum_{k=1}^N\mathbf{s}_j\cdot\mathbf{s}_k^*\bigg(\wp_2(a_j-a_k^*+{\rm i}\delta)\zeta_2(x-a_j+{\rm i}\delta/2)+\frac12 f_2'(a_j-a_k^*+{\rm i}\delta) \bigg)\Bigg), \\
\epsilon_{\mathbf{v}}=&\; +2\,\mathrm{Im} \Bigg(\sideset{}{'}\sum_{j=1}^N\sideset{}{'}\sum_{k=1}^N \mathbf{s}_j\cdot\mathbf{s}_k^*\bigg(\wp_2(a_j-a_k^*+{\rm i}\delta)\zeta_2(x-a_j-{\rm i}\delta/2)+\frac12 f_2'(a_j-a_k^*+{\rm i}\delta)\bigg)\Bigg).
\end{split}
\end{equation}
\end{proposition}
\renewcommand{\thefootnote}{\alph{footnote})}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\setcounter{footnote}{0}
\title{On the Hill's Spherical Vortex in Fluid and Plasma, its Generalization, and Stability}
\author{ \renewcommand{\thefootnote}{\alph{footnote})} Jason M. Keller\footnotemark[1], ~~Alexei
F. Cheviakov\footnotemark[2] \\ {\small
\emph{Department of Mathematics and Statistics, University of
Saskatchewan, Saskatoon, S7N 5E6 Canada}}
}
\addtolength{\topmargin}{-1.3 in} \addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in} \setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.5 in} \setlength{\parindent}{8pt}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary} {\theoremstyle{definition}
\newtheorem{remark}{Remark}
{\theoremstyle{definition} \newtheorem{example}{Example}}
{\theoremstyle{definition} \newtheorem{definition}{Definition}}
\def \setcounter{example}{0} \setcounter{definition}{0 {\setcounter{example}{0} \setcounter{definition}{0}
\setcounter{theorem}{0} \setcounter{remark}{0}
\setcounter{lemma}{0}\setcounter{corollary}{0}\setcounter{equation}{0}}
\renewcommand{\theexample}{\arabic{section}.\arabic{example}}
\renewcommand{\thedefinition}{\arabic{section}.\arabic{definition}}
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}}
\renewcommand{\theremark}{\arabic{section}.\arabic{remark}}
\renewcommand{\thelemma}{\arabic{section}.\arabic{lemma}}
\renewcommand{\thefigure}{\arabic{figure}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\renewcommand{\thepage}{\arabic{page}}
\renewcommand{\thesection}{\arabic{section}}
\renewcommand{\thesubsection}{\arabic{section}.\arabic{subsection}}
\renewcommand{\thesubsubsection}{\arabic{section}.\arabic{subsection}.\arabic{subsubsection}}
\newcommand{\raisebox{0ex}[2.5ex][0ex]{\null}}{\raisebox{0ex}[2.5ex][0ex]{\null}}
\renewcommand{\arraystretch}{1.2}
\def 2ex{2ex}
\def 0.7ex{0.7ex}
\newcommand{|\Omega|}{|\Omega|}
\newcommand{\sum_{i=1}^{N}\sum_{j=i+1}^{N}}{\sum_{i=1}^{N}\sum_{j=i+1}^{N}}
\newcommand{x}{x}
\def\hbox{\rm const}{\hbox{\rm const}}
\def\hbox{\rm diag}{\hbox{\rm diag}}
\def\hbox{\rm mod}{\hbox{\rm mod}}
\def\mathop{\hbox{\rm rank}}{\mathop{\hbox{\rm rank}}}
\def\mathop{\hbox{\rm Image}}{\mathop{\hbox{\rm Image}}}
\def\mathop{\hbox{\rm mes}}{\mathop{\hbox{\rm mes}}}
\def\mathop{\hbox{\rm grad}}{\mathop{\hbox{\rm grad}}}
\def\mathop{\hbox{\rm sign}}{\mathop{\hbox{\rm sign}}}
\def\mathop{\hbox{\rm ind}}{\mathop{\hbox{\rm ind}}}
\def\mathop{\hbox{\rm id}}{\mathop{\hbox{\rm id}}}
\def\mathop{\hbox{\rm d}}{\mathop{\hbox{\rm d}}}
\def\hbox{ad}{\hbox{ad}}
\def\mathop{\hbox{\rm ch}}{\mathop{\hbox{\rm ch}}}
\def\mathop{\hbox{\rm th}}{\mathop{\hbox{\rm th}}}
\def\mathop{\hbox{\rm cth}}{\mathop{\hbox{\rm cth}}}
\def\mathop{\hbox{\rm sh}}{\mathop{\hbox{\rm sh}}}
\def\mathop{\hbox{\rm max}}{\mathop{\hbox{\rm max}}}
\def\mathop{\hbox{\rm ind}}{\mathop{\hbox{\rm ind}}}
\def\mathop{\hbox{\rm Tr}}{\mathop{\hbox{\rm Tr}}}
\def\mathop{\hbox{\rm div}}{\mathop{\hbox{\rm div}}}
\def\mathop{\hbox{\rm curl}}{\mathop{\hbox{\rm curl}}}
\def\vec#1{{\boldsymbol{\rm #1}}}
\def\tens#1{{\mathbb {#1}}}
\def\abs#1{|\vec{#1}|}
\def\lrcorner{\lrcorner}
\def{\mathcal F}{{\mathcal F}}
\def{\mathrm i}{{\mathrm i}}
\newcommand{{\displaystyle \varepsilon}}{{\displaystyle \varepsilon}}
\def\PLET#1#2#3#4{{\bf #1}^{(#2)}\{#3\,; #4\} }
\def\PDEsr#1#2#3#4{{\bf #1^\mathrm{#2}}\{#3\,; #4\} }
\def\PDEsi#1#2#3#4{{\bf #1^\mathit{#2}}\{#3\,; #4\} }
\def\PDEssh#1{{\bf #1} }
\def\CMBPS#1#2{\mathbb{#1}_{#2} }
\def\dfo#1#2{\frac{\partial {#1}}{\partial {#2}}}
\def\dft#1#2{\frac{\partial ^2{#1}}{\partial {#2} ^2}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{array}{ll}{\begin{array}{ll}}
\def\end{array}{\end{array}}
\def\displaystyle{\displaystyle}
\begin{document}
\maketitle
\begin{abstract}
In 1894 M.J.M. Hill published an article describing a spherical vortex moving through a stationary fluid. Using cylindrical coordinates and assuming the azimuthal velocity component to be zero, Hill found a simple solution that describes this flow. A similar modern problem in the MHD framework was put forth in 1987 by A. A. Bobnev and in 1995 by R. Kaiser and D. Lortz, who applied the setup to model ball lightning. We present a much simpler derivation of Hill's spherical vortex using the Bragg-Hawthorne equation. In particular, by using the moving frame of reference, the Euler equations reduce to the equations of equilibrium flow, which are equivalent to the static equilibrium MHD equations up to relabelling. A new generalized version of Hill's spherical vortex with a nonzero azimuthal component is derived. A physical solution to the static equilibrium MHD equations is computed by looking at a separated solution to the Grad-Shafranov equation in spherical coordinates. Finally, the stability of Hill's spherical vortex is examined by performing an axisymmetric perturbation; it is shown that Hill's spherical vortex is linearly unstable with respect to certain kinds of small perturbations.
\end{abstract}
\section{Introduction}
In 1894 Micaiah John Muller Hill published an article describing a sphere moving symmetrically, with regard to an axis, through a stationary fluid. Using cylindrical coordinates and assuming that the azimuthal velocity component is zero, Hill was able to find a simple solution that describes this fluid flow. This solution and the method by which it was computed are available in \cite{hill1894vi}. A similar modern problem in the MHD framework was put forth in 1987 by A. A. Bobnev, who considered a spherical vortex moving in an ideally conducting fluid \cite{Bobnev}. In this work, several small mistakes were made. Interestingly enough, in 1995 R. Kaiser and D. Lortz again considered the problem of a spherical vortex in MHD equilibrium to model ball lightning \cite{kaiser1995ball}, essentially re-deriving the solution A. Bobnev found in \cite{Bobnev}. In what follows, a modern and much simpler derivation of Hill's spherical vortex using the Bragg-Hawthorne equation (which was first derived in 1898 by William Mitchinson Hicks and only gained popularity after being re-derived in 1950 by William Hawthorne and Stephen Bragg) will be shown, to emphasize the usefulness of the Bragg-Hawthorne equation for such problems. By using the moving frame of reference, the Euler equations reduce to the equations of equilibrium flow, which are equivalent to the static equilibrium MHD equations up to relabelling. Next, the spherical vortex in an ideally conducting fluid is computed using methods similar to those in \cite{Bobnev} and \cite{kaiser1995ball}. Using results from the previous two sections, a new generalized version of Hill's spherical vortex is put forth. After this, a physical solution to the static equilibrium MHD equations is computed by looking at a separated solution to the Grad-Shafranov equation in spherical coordinates, and lastly, the stability of Hill's spherical vortex is examined by performing an axisymmetric perturbation described in \cite{pozrikidis1986nonlinear}. A similar analysis for the new generalized Hill's spherical vortex is also attempted.
\section{Hill's spherical vortex: a modern derivation}
A sphere of radius $R$ moving along the $z$ axis through a stationary fluid can be modelled with the incompressible Euler equations. Starting with the equations of motion for an incompressible fluid,
\begin{subequations}\label{eq:Euler3}
\begin{equation}
\frac{\partial \vec{V}}{\partial t} + (\vec{V} \cdot \nabla) \vec{V} = -\frac{1}{\rho}\mathop{\hbox{\rm grad}} P,
\end{equation}
\begin{equation}
\mathop{\hbox{\rm div}} {\vec{V}} = 0,
\end{equation}
\end{subequations}
the well known result that the incompressible Euler equations are invariant under Galilean transformations motivates the following change of variables
\begin{equation}\label{eq:Gal}
\vec{V}(\vec{r},t) = \vec{\tilde{v}}(\vec{{{r}}} - Z(t)\vec{e}_z) + Z'(t)\vec{e}_z, \quad P(\vec{r},t) = \tilde{P}(\vec{r} - Z(t)\vec{e}_z).
\end{equation}
Here $Z(t)$ is an arbitrary function of time and $\tilde{\vec{v}}$, $\tilde{P}$ denote fluid parameters measured in the corresponding moving frame of reference.
\medskip
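For completeness, a brief check of this reduction (assuming uniform translation, $Z''(t) = 0$, which is consistent with the constant frame speed found below): substituting (\ref{eq:Gal}) into (\ref{eq:Euler3}) gives
\begin{equation}
\frac{\partial \vec{V}}{\partial t} + (\vec{V} \cdot \nabla) \vec{V}
= -Z'(t)\,(\vec{e}_z\cdot\nabla)\tilde{\vec{v}} + (\tilde{\vec{v}}\cdot\nabla)\tilde{\vec{v}} + Z'(t)\,(\vec{e}_z\cdot\nabla)\tilde{\vec{v}}
= (\tilde{\vec{v}}\cdot\nabla)\tilde{\vec{v}},
\end{equation}
with all derivatives of $\tilde{\vec{v}}$ evaluated at the shifted point $\vec{r} - Z(t)\vec{e}_z$, while $\mathop{\hbox{\rm grad}} P = \mathop{\hbox{\rm grad}} \tilde{P}$ and $\mathop{\hbox{\rm div}} \vec{V} = \mathop{\hbox{\rm div}} \tilde{\vec{v}} = 0$; hence the equations in the moving frame contain no explicit time dependence.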
Assuming that the frame of reference moves with the constant speed of the spherical vortex, so that the flow is steady in the moving frame, and that the density is constant, after omitting the tildes on the new variables the Euler equations can be written as
\begin{subequations}\label{eq:Euler_eq3}
\begin{equation}
\mathop{\hbox{\rm curl}} \vec{v} \times \vec{v} = \mathop{\hbox{\rm grad}} H,
\end{equation}
\begin{equation}
\mathop{\hbox{\rm div}} \vec{v} = 0,
\end{equation}
\end{subequations}
where
\begin{equation}
H = -\left(\frac{P}{\rho} + \frac{1}{2} |\vec{v}|^2\right)
\end{equation}
is a modified pressure term. In the rest of this section $H$ will simply be referred to as the pressure. Assuming that the motion is axially symmetric, it is natural to use cylindrical coordinates and take $\vec{v}$ and $H$ independent of $\phi$. In doing so, one can reduce (\ref{eq:Euler_eq3}) to the well known Bragg-Hawthorne equation
\begin{equation}\label{eq:B_Heq}
\frac{\partial^2 \psi}{\partial r^2} + \frac{\partial^2 \psi}{\partial z^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + F(\psi)F'(\psi) = -r^2 H'(\psi),
\end{equation}
where
\begin{equation}
\vec{v} = \frac{\psi_z}{r} \vec{e}_r + \frac{F(\psi)}{r}\vec{e}_\phi + \frac{-\psi_r}{r} \vec{e}_z,
\end{equation}
and $F$, $H$ are arbitrary functions of $\psi$, where $\psi$ is the Stokes stream function. Following Hill's assumption, who considered a two-component axially symmetric flow, the azimuthal component of the velocity is set to zero, giving the condition
\begin{equation}\label{eq:F_cond}
F(\psi) = 0.
\end{equation}
From this, the vorticity becomes
\begin{equation}
\vec{\omega} = -rH'(\psi)\vec{e}_{\phi}.
\end{equation}
Note that when the pressure is constant, $H = H_0$, one has $\vec{\omega} = 0$, which corresponds to an irrotational flow. The condition (\ref{eq:F_cond}) also gives a simplified Bragg-Hawthorne equation
\begin{equation}\label{eq:B_Heq_simp}
\frac{\partial^2 \psi}{\partial r^2} + \frac{\partial^2 \psi}{\partial z^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} = -r^2 H'(\psi),
\end{equation}
with
\begin{equation}\label{vel_comp}
\vec{v} = \frac{\psi_z}{r} \vec{e}_r + \frac{-\psi_r}{r} \vec{e}_z.
\end{equation}
The arbitrary function is chosen as the highest-degree polynomial in $\psi$ such that (\ref{eq:B_Heq_simp}) remains separable in spherical coordinates and the asymptotics of the pressure $H(\psi)$ behave properly. As far as separability of (\ref{eq:B_Heq_simp}) goes, $H(\psi)$ cannot be of higher degree than linear in $\psi$. In regards to the asymptotics, the pressure far away from the sphere must not change and needs to equal the ambient pressure $H_0$. This gives the best choice for $H(\psi)$ to be broken into two pieces that match at the boundary
\begin{equation}\label{eq:piecepress}
H(\psi) =
\begin{cases}
H_0 - 10\delta \psi, & \rho < R\\
H_0 . & \rho > R
\end{cases}
\end{equation}
Here the coefficient $10\delta$ is chosen in this way only to make the calculation cleaner. The problem is now decomposed into two pieces: the rotational flow inside of the sphere with pressure linear in $\psi$, and the irrotational flow outside of the sphere with constant pressure. \medskip
\begin{enumerate}
\item{Rotational flow inside the sphere}
\begin{subequations}
\begin{equation}
H(\psi) = H_0 - 10\delta \psi
\end{equation}
\begin{equation}\label{eq:B_Heq_lin_inside}
\frac{\partial^2 \psi}{\partial r^2} + \frac{\partial^2 \psi}{\partial z^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} = 10\delta r^2.
\end{equation}
\end{subequations}
\item{Irrotational flow outside the sphere}
\begin{subequations}
\begin{equation}
H(\tilde{\psi}) = H_0
\end{equation}
\begin{equation}\label{eq:B_Heq_lin_outside}
\frac{\partial^2 \tilde{\psi}}{\partial r^2} + \frac{\partial^2 \tilde{\psi}}{\partial z^2} - \frac{1}{r}\frac{\partial \tilde{\psi}}{\partial r} = 0.
\end{equation}
\end{subequations}
\end{enumerate}
Along with these two equations, there is the condition that both pieces must have matching pressure and velocity components at the boundary of the sphere ($r^2 + z^2 = R^2$). For matching pressure, this implies that for the inside solution, $\psi(r,z) = 0$ when $r^2 + z^2 = R^2$. It turns out that one can effectively seek solutions to (\ref{eq:B_Heq_lin_inside}) and (\ref{eq:B_Heq_lin_outside}) in spherical coordinates, in the separated form $\psi(\rho,\theta) = R(\rho) \Theta(\theta)$. Here standard spherical coordinates are related to cylindrical coordinates by $r = \rho \sin \theta$, $z = \rho \cos \theta$. Converting the above problem into spherical coordinates gives
\begin{enumerate}
\item{Rotational flow inside the sphere}
\begin{subequations}\label{eq:inside11}
\begin{equation}
H(\psi) = H_0 - 10\delta \psi
\end{equation}
\begin{equation}\label{eq:Hill_inside}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)\right] \psi = 10 \delta \rho^2 \sin^2{\theta}.
\end{equation}
\end{subequations}
\item{Irrotational flow outside the sphere}
\begin{subequations}\label{eq:inside22}
\begin{equation}
H(\tilde{\psi}) = H_0
\end{equation}
\begin{equation}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)\right] \psi = 0.
\end{equation}
\end{subequations}
\end{enumerate}
The velocity components inside and outside are given by
\begin{equation}
\vec{v}_{in} = \frac{1}{\rho^2\sin \theta}\frac{\partial \psi}{\partial \theta} \vec{e}_{\rho} - \frac{1}{\rho \sin \theta}\frac{\partial \psi}{\partial \rho} \vec{e}_{\theta},
\end{equation}
and
\begin{equation}
\vec{v}_{out} = \frac{1}{\rho^2\sin \theta}\frac{\partial \tilde{\psi}}{\partial \theta} \vec{e}_{\rho} - \frac{1}{\rho \sin \theta}\frac{\partial \tilde{\psi}}{\partial \rho} \vec{e}_{\theta}.
\end{equation}
respectively. Along with this, the matching conditions and the need for $\psi(\rho,\theta)$ to be regular at $\rho = 0$ give the following four boundary conditions \begin{equation}\label{eq:matching}
\psi(R,\theta) = 0, \quad |\psi(0,\theta)| < \infty, \quad \frac{\partial \psi}{\partial \theta} \bigg|_{\rho = R} = \frac{\partial \tilde{\psi}}{\partial \theta} \bigg|_{\rho = R}, \quad \frac{\partial \psi}{\partial \rho} \bigg|_{\rho = R} = \frac{\partial \tilde{\psi}}{\partial \rho} \bigg|_{\rho = R}.
\end{equation}
A general solution for the inhomogeneous inside equation (\ref{eq:Hill_inside}) is sought in the form of $\psi(\rho,\theta) = \psi(\rho,\theta)_{gen} + \psi(\rho,\theta)_{part}$ where $\psi(\rho,\theta)_{gen}$ is a general solution to the homogeneous version of (\ref{eq:Hill_inside}) given by
\begin{equation}\label{eq:hillhom}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)\right] \psi = 0.
\end{equation}
and $\psi(\rho,\theta)_{part}$ is a particular solution to (\ref{eq:Hill_inside}). A particular solution is found to be
\begin{equation}
\psi(\rho,\theta)_{part} = \delta\rho^4\sin^2\theta.
\end{equation}
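Indeed, $\partial^2_\rho\big(\delta\rho^4\sin^2\theta\big) = 12\delta\rho^2\sin^2\theta$, while the angular part of the operator gives $-2\delta\rho^2\sin^2\theta$, so the left-hand side of (\ref{eq:Hill_inside}) equals $10\delta\rho^2\sin^2\theta$, as required.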
The general solution to (\ref{eq:hillhom}) is obtained by separation of variables, $\psi(\rho,\theta) = R(\rho) \Theta(\theta)$. Upon substituting the separated form into (\ref{eq:hillhom}) one arrives at the two ODEs
\begin{equation}\label{eq:rho_ode1}
\rho^2 R'' - \mathcal{C}R = 0,
\end{equation}
\begin{equation}\label{eq:theta_ode1}
\big((-\csc \theta) \Theta'\big)' = \mathcal{C} (\csc \theta) \Theta,
\end{equation}
where $\mathcal{C}$ is a separation constant to be determined. Using the change of variables
\begin{equation}
t = \cos\theta, \quad \Theta(\theta) = T(t),
\end{equation}
the equation (\ref{eq:theta_ode1}) becomes
\begin{equation}\label{eq:Tequation1}
(1-t^2)T''(t) + \mathcal{C}T(t) = 0.
\end{equation}
This ODE can be related to the associated Legendre ODE with the transformation
\begin{equation}
T(t) = \sqrt{1 - t^2}P(t)
\end{equation}
leading to
\begin{equation}\label{eq:Legendre_m11}
(1 - t^2)P''(t) - 2tP'(t) + \left(\mathcal{C} - \frac{1}{1-t^2}\right)P(t) = 0.
\end{equation}
The equation (\ref{eq:Legendre_m11}) is related to the associated Legendre ODE \cite{kaiser1995ball}.
\begin{equation}\label{eq:Legendre1}
(1 - x^2)\tilde{P}''(x) - 2x\tilde{P}'(x) + \left(l(l+1) - \frac{m^2}{1 - x^2}\right)\tilde{P}(x) = 0
\end{equation}
Clearly (\ref{eq:Legendre_m11}) is the same as (\ref{eq:Legendre1}) when $m = 1$ and $\mathcal{C} = l(l+1)$. The equation (\ref{eq:Legendre1}) has nonsingular solutions on the interval $[-1,1]$ only when $l$ and $m$ are integer values \cite{arfken1999mathematical}. For $m = 1$, the associated Legendre polynomials have the form
\begin{equation}
P_l(x) = -\sqrt{1 - x^2}\frac{d}{dx}\mathcal{P}_l(x),
\end{equation}
where $\mathcal{P}_l$ refers to the lth order Legendre polynomial. One then arrives at the regular solutions to (\ref{eq:Tequation1})
\begin{equation}
T_l(t) = -(1 - t^2)\frac{d}{dt}\mathcal{P}_l(t),
\end{equation}
which can be written as
\begin{equation}
T_l(t) = (l+1)\mathcal{P}_{l+1}(t) - (l+1)t\mathcal{P}_l(t).
\end{equation}
This gives $\Theta(\theta)$ as
\begin{equation}\label{eq:thetasol}
\Theta_l(\theta) = (l+1)\mathcal{P}_{l+1}(\cos\theta) - (l+1)\cos\theta \ \mathcal{P}_l(\cos\theta).
\end{equation}
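For example, for $l=1$ this gives $\Theta_1(\theta) = 2\mathcal{P}_{2}(\cos\theta) - 2\cos\theta\,\mathcal{P}_1(\cos\theta) = (3\cos^2\theta - 1) - 2\cos^2\theta = -\sin^2\theta$, an observation that will be used below.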
The value $\mathcal{C} = l(l+1)$ can now be substituted into (\ref{eq:rho_ode1}) giving
\begin{equation}
\rho^2 R''(\rho) - l(l+1)R(\rho) = 0.
\end{equation}
This has the solution
\begin{equation}
R_l(\rho) = a_l\rho^{l+1} + b_l\rho^{-l}.
\end{equation}
As the solution is required to be regular at $\rho = 0$, $b_l$ will be set to zero. A separated solution to the homogeneous PDE (\ref{eq:hillhom}) is therefore
\begin{equation}\label{eq:psi_sep}
\psi_l(\rho,\theta) = a_l\rho^{l+1}\Theta_l(\theta),
\end{equation}
giving the solution for $\psi$ inside of the sphere as
\begin{equation}
\psi(\rho,\theta) = \delta\rho^4\sin^2\theta + \sum_{l = 0}^{\infty}a_l\rho^{l+1}\Theta_l(\theta).
\end{equation}
\medskip
The condition that the pressure must match at the boundary, which reduces to the condition $\psi(R,\theta) = 0$ as specified in (\ref{eq:matching}), gives
\begin{equation}\label{eq:cond}
\sum_{l = 0}^{\infty}a_l R^{l+1}\Theta_l(\theta) = -\delta R^4\sin^2\theta.
\end{equation}
The solutions $\Theta_l(\theta)$ form a complete orthogonal basis, as (\ref{eq:theta_ode1}) is a classical Sturm-Liouville second-order linear ODE with weight $w(\theta) = \csc \theta$. Using the observation that $\Theta_1(\theta) = -\sin^2\theta$, equation (\ref{eq:cond}) can be written as
\begin{equation}
\sum_{l = 0}^{\infty}a_l R^{l+1}\Theta_l(\theta) = \delta R^4\Theta_1(\theta).
\end{equation}
By multiplying the above equation by $\csc \theta\, \Theta_l(\theta)$ and integrating over $0 < \theta < \pi$ one arrives at
\begin{equation}
a_l R^{l+1} = \frac{\int_0^\pi \delta R^4 \csc \theta\, \Theta_1(\theta) \Theta_l(\theta)\, d\theta}{\int_0^\pi\csc \theta\, (\Theta_l (\theta))^2\, d\theta}.
\end{equation}
The right hand side is zero due to the orthogonality of $\Theta_l(\theta)$ for all $l$ except when $l = 1$. In this case one obtains the condition that
\begin{equation}
a_1 = \delta R^2.
\end{equation}
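Since $\Theta_1(\theta) = -\sin^2\theta$, the homogeneous part of the solution contributes $a_1\rho^{2}\Theta_1(\theta) = -\delta R^2\rho^2\sin^2\theta$.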
Therefore the solution inside the sphere can be written as
\begin{equation}
\psi(\rho,\theta) = \delta \rho^2\sin^2\theta(\rho^2 - R^2).
\end{equation}
For outside of the sphere, the solution is the general solution of the homogeneous equation (\ref{eq:hillhom}), given by
\begin{equation}
\tilde{\psi}(\rho,\theta) = \sum_{l = 0}^{\infty}\left(c_l\rho^{l+1} + \frac{d_l}{\rho^{l}}\right)\Theta_l(\theta).
\end{equation}
The fourth condition in (\ref{eq:matching}) gives the condition that
\begin{equation}\label{eq:cond2}
\sum_{l = 0}^{\infty}\left(c_l(l+1) R^{l} - \frac{l\, d_l}{R^{l+1}}\right)\Theta_l(\theta) = 2\delta R^3\sin^2 \theta.
\end{equation}
Using the orthogonality of $\Theta_l(\theta)$ as discussed before, only the $l = 1$ term survives. Lastly, the third condition gives
\begin{equation}
c_1R^{2} + \frac{d_1}{R} = 0,
\end{equation}
giving $c_1 = -d_1/R^3$. Substituting this back into (\ref{eq:cond2}) with $l = 1$ one arrives at the complete solution
\begin{equation}\label{eq:sol1}
\psi(\rho,\theta) =
\begin{cases}
\delta \rho^2\sin^2\theta(\rho^2 - R^2), & \rho < R\\
\frac{2}{3} \delta R^2\sin^2\theta \left(\frac{\rho^3 - R^3}{\rho}\right). & \rho > R
\end{cases}
\end{equation}
This can be written in cylindrical coordinates as
\begin{equation}
\psi(r,z) =
\begin{cases}
\delta\left((r^2z^2 + r^4) -R^2r^2\right), & r^2 + z^2 < R^2\\
\frac{2}{3}\delta R^2 r^2\left(1 - \frac{R^3}{(r^2 + z^2)^{3/2}}\right). & r^2 + z^2 > R^2
\end{cases}.
\end{equation}
The velocity components can be computed from (\ref{vel_comp}) to be
\begin{equation}\label{eq:vel1}
v_r =
\begin{cases}
2\delta rz, & r^2 + z^2 < R^2\\
\frac{2\delta R^5rz}{(r^2 + z^2)^{5/2}}, & r^2 + z^2 > R^2
\end{cases}
\end{equation}
\begin{equation}\label{eq:vel2}
v_z =
\begin{cases}
2\delta\left(R^2 - 2r^2 - z^2\right), & r^2 + z^2 < R^2\\
-\frac{4}{3}\delta R^2 - \frac{2\delta R^5}{3}\frac{(r^2 - 2z^2)}{(r^2 + z^2)^{5/2}}, & r^2 + z^2 > R^2
\end{cases}.
\end{equation}
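One can check that on the boundary $r^2 + z^2 = R^2$ the components (\ref{eq:vel1}) and (\ref{eq:vel2}) satisfy $r\,v_r + z\,v_z = 0$ from both sides of the sphere, so the velocity is tangent to the boundary and the sphere is a stream surface, consistent with $\psi = 0$ there.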
Moving back into the lab frame with the transformation given by (\ref{eq:Gal}) one arrives at
\begin{equation}
V_r =
\begin{cases}
2\delta r(z - Z(t)), & r^2 + (z-Z(t))^2 < R^2\\
\frac{2\delta R^5r(z - Z(t))}{(r^2 + (z - Z(t))^2)^{5/2}}, & r^2 + (z - Z(t))^2 > R^2
\end{cases}
\end{equation}
\begin{equation}
V_z =
\begin{cases}
Z'(t) + 2\delta\left(R^2 - 2r^2 - (z - Z(t))^2\right), & r^2 + (z - Z(t))^2 < R^2\\
Z'(t) - \frac{4}{3}\delta R^2 - \frac{2\delta R^5}{3}\frac{(r^2 - 2(z - Z(t))^2)}{(r^2 + (z - Z(t))^2)^{5/2}}. & r^2 + (z - Z(t))^2 > R^2
\end{cases}
\end{equation}
The pressure in the stationary frame of reference is given by
\begin{equation}\label{eq:hillpressure}
H(r,z) =
\begin{cases}
H_0 - 10\delta^2\, r^2\left((z-Z(t))^2 + r^2 - R^2\right), & r^2 + (z-Z(t))^2 < R^2\\
H_0. & r^2 + (z-Z(t))^2 > R^2
\end{cases}
\end{equation}
One additional boundary condition that can be considered is the behaviour of the velocity far away from the spherical vortex. In particular, if the fluid that the sphere is moving through is stationary, it is natural to demand $V_r, V_z \to 0$ as $r^2 + z^2 \to \infty$. The first limit is trivially satisfied,
\begin{equation}
\lim_{r^2 + z^2 \to \infty} V_r = 0,
\end{equation}
however, for the $z$ component of the velocity one gets
\begin{equation}
\lim_{r^2 + z^2 \to \infty} V_z = Z'(t) - \frac{4}{3}\delta R^2 = 0.
\end{equation}
This gives the additional condition that $Z'(t) = \frac{4}{3}\delta {R^2}$. This implies the interesting result that the translational velocity of the moving spherical vortex is constant, with a speed that is proportional to the square of the radius. In this case, the solution, depending on the freedom of $R$ and $\delta$, can be written completely in terms of
\begin{equation}
Z(t) = Z_0+\frac{4}{3}\delta {R^2}t
\end{equation}
as (taking $Z_0 = 0$)
\begin{equation}
V_r =
\begin{cases}
2\delta r\left(z - \frac{4}{3}\delta {R^2}t\right), & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 < R^2\\
\frac{2\delta R^5r\left(z-\frac{4}{3}\delta {R^2}t\right)}{(r^2 + (z-\frac{4}{3}\delta {R^2}t)^2)^{5/2}}, & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 > R^2
\end{cases}
\end{equation}
\begin{equation}
V_z =
\begin{cases}
\frac{4}{3}\delta {R^2} + 2\delta\left(R^2 - 2r^2 - (z-\frac{4}{3}\delta {R^2}t)^2\right), & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 < R^2\\
\frac{4}{3}\delta {R^2} - \frac{4}{3}\delta R^2 - \frac{2\delta R^5}{3}\frac{(r^2 - 2(z-\frac{4}{3}\delta {R^2}t)^2)}{(r^2 + (z-\frac{4}{3}\delta {R^2}t)^2)^{5/2}}. & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 > R^2
\end{cases}
\end{equation}
with the pressure profile in the stationary frame as
\begin{equation}\label{eq:hillpressure1}
H(r,z) =
\begin{cases}
H_0 - 10\delta^2\, r^2\left((z-\frac{4}{3}\delta {R^2}t)^2 + r^2 - R^2\right), & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 < R^2\\
H_0. & r^2 + (z-\frac{4}{3}\delta {R^2}t)^2 > R^2
\end{cases}
\end{equation}
Level curves of $H(r,z)$ can be seen in Figure \ref{fig:hills}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width = .7\textwidth]{Hillsvortexpressure.pdf}
\end{center}
\caption{\label{fig:Level_curves_of_xi}A cross-section of surfaces $H(\psi) = \hbox{\rm const}$ in the lab frame given by (\ref{eq:hillpressure1}). Here $R = 1$, $H_0 = 1$, $\delta = 1$ and $t = 0$. The black arrows correspond to the velocity vectors on a given surface. By the first equation of (\ref{eq:Euler_eq3}), both $\vec{v}$ and $\mathop{\hbox{\rm curl}} \vec{v}$ are tangent to this surface.}\label{fig:hills}
\end{figure}
\section{A stationary spherical MHD vortex}
A similar problem to Hill's spherical vortex is the concept of a spherical vortex moving through an ideally conducting fluid. With this problem, negligibly small fluid motion ($\vec{V} = 0$) is assumed which gives the starting point as the static equilibrium MHD equations
\begin{subequations}\label{eq:MHDst2}
\begin{equation}
\mathop{\hbox{\rm curl}} \vec{B} \times \vec{B}= \mu \mathop{\hbox{\rm grad}} P,
\end{equation}
\begin{equation}
\mathop{\hbox{\rm div}} \vec{B} = 0.
\end{equation}
\end{subequations}
Here, the main differences between Hill's spherical vortex and this stationary conducting spherical vortex are: the search for $\vec{v}$ inside and outside the sphere is replaced with the search for $\vec{B}$, and the azimuthal component of this magnetic field is \emph{not} assumed to be zero. Two assumptions of this conducting spherical vortex are: the pressure goes to a constant value, taken to be zero, at the boundary of the sphere (similar to Hill's spherical vortex), and every magnetic field component goes to zero at the boundary. The last condition regarding the magnetic field is chosen in this way because the asymptotic behaviour of the magnetic field must decay at least as quickly as a dipole moment, and it was shown in \cite{kaiser1995ball} that the only solution outside of the sphere consistent with the inside pressure and magnetic field that has the proper asymptotic behaviour is $\vec{B} = 0$.
\medskip
The spherical vortex is assumed to have inherent axial symmetry which allows the reduction of (\ref{eq:MHDst2}) to the Grad-Shafranov equation
\begin{equation}\label{eq:G-Seq}
\frac{\partial^2 \psi}{\partial r^2} + \frac{\partial^2 \psi}{\partial z^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + I(\psi)I'(\psi) = -r^2 P'(\psi).
\end{equation}
where the magnetic field components are given by
\begin{equation}\label{eq:B-comp}
\vec{B} = \frac{\psi_z}{r} \vec{e}_r + \frac{I(\psi)}{r}\vec{e}_\phi - \frac{\psi_r}{r} \vec{e}_z.
\end{equation}
Inside of the sphere, the pressure $P(\psi)$ and the arbitrary function related to the toroidal magnetic field $I(\psi)$ are taken to be linear (as any higher power series expansion of $P(\psi)$ and $I(\psi)$ makes (\ref{eq:G-Seq}) not separable in spherical coordinates). Therefore, these arbitrary functions are written as
\begin{equation}\label{eq:mhdvortfuncs}
P(\psi) = P_0 - \gamma \psi, \quad I(\psi) = \lambda \psi.
\end{equation}
The Grad-Shafranov equation now becomes a second-order linear inhomogeneous PDE. This equation is now converted to spherical coordinates,
\begin{equation}\label{eq:GS_spherical1}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right) + \lambda^2\right] \psi = \gamma \rho^2 \sin^2{\theta},
\end{equation}
where the magnetic field is given by
\begin{equation}\label{eq:magn}
\vec{B} = \frac{1}{\rho^2\sin \theta}\frac{\partial \psi}{\partial \theta} \vec{e}_{\rho} + \frac{I(\psi)}{\rho \sin \theta}\vec{e}_\phi - \frac{1}{\rho \sin \theta}\frac{\partial \psi}{\partial \rho} \vec{e}_{\theta}.
\end{equation}
Following a similar method to the previous section, $\psi(\rho,\theta) = \psi(\rho,\theta)_{gen} + \psi(\rho,\theta)_{part}$ where $\psi(\rho,\theta)_{gen}$ is a general solution to the homogeneous version of (\ref{eq:GS_spherical1}) given by
\begin{equation}\label{eq:Bobhom}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right) + \lambda^2\right] \psi = 0,
\end{equation}
A particular solution to (\ref{eq:GS_spherical1}) is found to be
\begin{equation}\label{eq:part}
\psi(\rho,\theta)_{part} = \frac{\gamma}{\lambda^2}\rho^2\sin^2\theta.
\end{equation}
The general solution to (\ref{eq:Bobhom}) is obtained by a separated solution $\psi(\rho,\theta) = R(\rho) \Theta(\theta)$. Upon substituting the separated form into (\ref{eq:Bobhom}) one arrives at the two ODEs
\begin{equation}\label{eq:rho_ode2}
\rho^2 R''(\rho) + (\lambda^2\rho^2 - \mathcal{C})R(\rho) = 0,
\end{equation}
\begin{equation}\label{eq:theta_ode2}
\big((-\csc \theta) \Theta'\big)' = \mathcal{C} (\csc \theta) \Theta.
\end{equation}
One can notice that (\ref{eq:theta_ode2}) is exactly the same as (\ref{eq:theta_ode1}) in the previous section. Therefore, due to the $\sin^2\theta$ dependence in (\ref{eq:part}) and the orthogonality of the $\Theta_l(\theta)$ given by (\ref{eq:thetasol}), one can conclude, in a similar fashion to the previous section, that the only value of $l$ compatible with the pressure $P$ going to the constant ambient pressure $P_0$ on the boundary is $l = 1$. This gives the following separated ansatz to use
\begin{equation}
\psi(\rho,\theta) = G(\rho)\rho^2\sin^2\theta.
\end{equation}
Upon substituting the above into equation (\ref{eq:GS_spherical1}), the second order linear ODE is obtained
\begin{equation}\label{eq:eig_ODE}
G''(\rho) + \frac{4}{\rho} G'(\rho) + G(\rho)\lambda^2 = \gamma.
\end{equation}
This second-order equation, (\ref{eq:eig_ODE}), along with the following three physical conditions, gives a well-posed eigenvalue problem \cite{Bobnev}.
\begin{enumerate}\label{eq:boundarycondit}
\item
To achieve finite energy inside the sphere $\lim_{\rho \to 0}|G(\rho)| < \infty$.
\item
The magnetic field components given by (\ref{eq:magn}) must vanish at the boundary for the proper asymptotic behaviour as discussed in \cite{Bobnev,kaiser1995ball}, $G'(R) = G(R) = 0$.
\item
The pressure must go to the constant ambient pressure $P_0$ at the boundary, $G(R) = 0$.
\end{enumerate}
A general solution to (\ref{eq:eig_ODE}) can be found to be
\begin{equation}
G(\rho) = C_1 \frac{\rho\lambda\sin(\rho\lambda) + \cos(\rho\lambda)}{\rho^3} + C_2 \frac{\rho\lambda\cos(\rho\lambda) - \sin(\rho\lambda)}{\rho^3} + \frac{\gamma}{\lambda^2}.
\end{equation}
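Equivalently, the $C_1$ and $C_2$ terms are proportional to $y_1(\rho\lambda)/\rho$ and $j_1(\rho\lambda)/\rho$, respectively, where $j_1$ and $y_1$ denote the spherical Bessel functions of the first and second kind; only the $j_1$ combination remains bounded as $\rho \to 0$.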
From the first condition above, $C_1 = 0$. The second condition gives a countable number of normalized eigenvalues $\lambda_n = \lambda R$ corresponding to the nth root of the following transcendental equation
\begin{equation}\label{eq:Bobtrans}
x^2\tan x - 3\tan x + 3x = 0.
\end{equation}
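As an illustrative numerical sketch (assuming NumPy and SciPy are available; this is not part of the derivation), the first few roots of (\ref{eq:Bobtrans}) can be located from the equivalent smooth form $(x^2-3)\sin x + 3x\cos x = 0$, obtained by multiplying (\ref{eq:Bobtrans}) by $\cos x$; its nonzero roots coincide with the zeros of the spherical Bessel function $j_2$:
\begin{verbatim}
# Roots of (x^2 - 3) sin x + 3 x cos x = 0, i.e. the normalized
# eigenvalues lambda_n; the trivial root x = 0 is excluded.
import numpy as np
from scipy.optimize import brentq

def f(x):
    return (x**2 - 3.0)*np.sin(x) + 3.0*x*np.cos(x)

# bracket sign changes on a grid and refine with Brent's method
xs = np.linspace(0.5, 30.0, 3000)
roots = [brentq(f, a, b) for a, b in zip(xs[:-1], xs[1:])
         if f(a)*f(b) < 0]
print(roots[:5])
\end{verbatim}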
Lastly, the third condition gives a value for $\gamma$ depending on the value of $\lambda_n$,
\begin{equation}\label{eq:Bobgamma}
\gamma_n = -C_2\lambda_n^2\frac{\lambda_n\cos\lambda_n - \sin \lambda_n}{R^5}.
\end{equation}
This gives the flux function inside of the sphere as
\begin{equation}\label{eq:Bobnev}
\psi(\rho,\theta) = \left(C_2\frac{\frac{\rho}{R}\lambda_n\cos(\frac{\rho}{R}\lambda_n) - \sin(\frac{\rho}{R}\lambda_n)}{\rho} + \frac{\rho^2 R^2\gamma_n}{\lambda_n^2}\right)\sin^2\theta.
\end{equation}
which can be written in terms of a first order spherical Bessel function of the first kind, $j_1$ as
\begin{equation}\label{eq:Bobnev1}
\psi(\rho,\theta) = \left(\tilde{C_2}\frac{\rho}{R} \lambda_n j_1\left(\frac{\rho}{R} \lambda_n\right) + \frac{\rho^2 R^2\gamma_n}{\lambda_n^2}\right)\sin^2\theta.
\end{equation}
Outside of the sphere $\rho > R$ all of the magnetic field components are zero and the pressure is equal to the ambient pressure $P_0$. An example of this solution for $n = 1$ has its pressure shown in Figure \ref{fig:Bobnev1}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width = .6\textwidth]{Bobnevpressure1.pdf}
\end{center}
\caption{Pressure profile of static spherical vortex in ideally conducting fluid given by $P(\psi_n) = P_0 - \gamma_n \psi_n$ where $\psi_n$ is given by (\ref{eq:Bobnev}) for $R = 1$, $n = 1$ and $C_2 = 1$. $\vec{B}$ is not shown on this plot as the non-zero $\phi$ component would make it point out of, or into the page.}\label{fig:Bobnev1}
\end{figure}
A few other solutions are shown for higher values of $n$. In Figure \ref{fig:Bobnev23} pressure profiles $P(\psi_n) = P_0 - \gamma_n\psi_n$ for $\psi_n$ given by (\ref{eq:Bobnev}) with $n = 2$ and $n = 3$ can be seen.
\begin{figure}[htb!]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Bobnevpressure2.pdf}
\caption{}\label{fig:Bobnev2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\hfill
\centering
\includegraphics[width=\textwidth]{Bobnevpressure3.pdf}
\caption{}\label{fig:Bobnev3}
\end{subfigure}
\caption{Pressure profile of static spherical vortex in ideally conducting fluid given by $P(\psi_n) = P_0 - \gamma_n \psi_n$ where $\psi_n$ is given by (\ref{eq:Bobnev}) for $C_2 = 1$, $R = 1$, $n = 2$ on the left, and $n = 3$ on the right.}\label{fig:Bobnev23}
\end{figure}
In Figure \ref{fig:Bobnev45} $n = 4$ and $n = 5$.
\begin{figure}[htb!]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Bobnevpressure4.pdf}
\caption{}\label{fig:Bobnev4}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\textwidth}
\hfill
\centering
\includegraphics[width=\textwidth]{Bobnevpressure5.pdf}
\caption{}\label{fig:Bobnev5}
\end{subfigure}
\caption{Pressure profile of static spherical vortex in ideally conducting fluid given by $P(\psi_n) = P_0 - \gamma_n \psi_n$ where $\psi_n$ is given by (\ref{eq:Bobnev}) for $C_2 = 1$, $R = 1$, $n = 4$ on the left, and $n = 5$ on the right.}\label{fig:Bobnev45}
\end{figure}
\section{A generalized version of Hill's spherical vortex}
In the previous section, the magnetic field outside of the spherical vortex had to vanish in order to have asymptotic behaviour that decays at least as fast as a dipole moment \cite{kaiser1995ball}. In contrast, the velocity asymptotics of Hill's spherical vortex outside of the sphere are acceptable from a fluid dynamics standpoint, so a generalized spherical vortex with a non-zero $V^{\phi}$ can be considered in a very similar way to the previous section.
\medskip
As in Section 2, using a moving frame of reference and assuming axial symmetry, the Euler equations reduce to the Bragg-Hawthorne equation. Starting from this equation in spherical coordinates,
\begin{equation}\label{eq:spherical_gs}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)\right] \psi + F(\psi)F'(\psi) = -H'(\psi)\rho^2 \sin^2{\theta},
\end{equation}
the arbitrary functions are again chosen as the highest-degree polynomials in $\psi$ such that (\ref{eq:spherical_gs}) remains separable and the asymptotics of the pressure $H(\psi)$ and of the toroidal velocity function $F(\psi)$ behave properly. As far as separability of (\ref{eq:spherical_gs}) goes, both functions cannot be of higher degree than linear in $\psi$. In regards to the asymptotics, the pressure far away from the sphere must not change and thus needs to equal the ambient pressure $H_0$; similarly, $F(\psi)$ must also not change far away from the sphere. However, $F(\psi) = F_0$ with a nonzero constant $F_0$ is not allowed, as it corresponds to a singular $V^{\phi}$. This gives the best option for the free functions as
\begin{equation}
H(\psi) =
\begin{cases}
H_0 - \gamma \psi, & \rho < R\\
H_0 , & \rho > R
\end{cases}
\end{equation}
\begin{equation}
F(\psi) =
\begin{cases}
\lambda\psi, & \rho < R\\
0. & \rho > R
\end{cases}
\end{equation}
This allows one to decompose the spherical Grad-Shafranov equation into two problems like before, one inside and one outside of the sphere, namely:
\begin{enumerate}
\item{Rotational flow inside the sphere $\rho < R$}
\begin{subequations}\label{eq:11}
\begin{equation}
H(\psi) = H_0 - \gamma \psi,
\end{equation}
\begin{equation}\label{eq:Hill_inside2}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right) + \lambda^2\right] \psi = \gamma \rho^2 \sin^2{\theta}.
\end{equation}
\end{subequations}
\item{Irrotational, force-free flow outside the sphere $\rho > R$}
\begin{subequations}\label{eq:22}
\begin{equation}
H(\tilde{\psi}) = H_0,
\end{equation}
\begin{equation}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)\right] \psi = 0.
\end{equation}
\end{subequations}
\end{enumerate}
The velocity components inside and outside are given by
\begin{equation}
\vec{v}_{in} = \frac{1}{\rho^2\sin \theta}\frac{\partial \psi}{\partial \theta} \vec{e}_{\rho} + \frac{F(\psi)}{\rho\sin\theta}\vec{e}_{\phi} - \frac{1}{\rho \sin \theta}\frac{\partial \psi}{\partial \rho} \vec{e}_{\theta},
\end{equation}
and
\begin{equation}
\vec{v}_{out} = \frac{1}{\rho^2\sin \theta}\frac{\partial \tilde{\psi}}{\partial \theta} \vec{e}_{\rho} - \frac{1}{\rho \sin \theta}\frac{\partial \tilde{\psi}}{\partial \rho} \vec{e}_{\theta}.
\end{equation}
respectively. Along with this, the matching of the pressure at the boundary, the need for $\psi(\rho,\theta)$ to be regular at $\rho = 0$, and the matching of the velocity at the boundary give, in that order, the following four boundary conditions, identical to those in Section 2:
\begin{equation}\label{eq:matching3}
\psi(R,\theta) = 0, \quad |\psi(0,\theta)| < \infty, \quad \frac{\partial \psi}{\partial \theta} \bigg|_{\rho = R} = \frac{\partial \tilde{\psi}}{\partial \theta} \bigg|_{\rho = R}, \quad \frac{\partial \psi}{\partial \rho} \bigg|_{\rho = R} = \frac{\partial \tilde{\psi}}{\partial \rho} \bigg|_{\rho = R}.
\end{equation}
From the last Section, a solution inside the sphere that is bounded at the origin is found to be
\begin{equation}\label{eq:inside2}
\psi(\rho,\theta) = \left(C\frac{\rho\lambda\cos(\rho\lambda) - \sin(\rho\lambda)}{\rho} + \frac{\gamma}{\lambda^2}\rho^2\right)\sin^2\theta,
\end{equation}
and from the first section, the solution outside of the sphere is given by
\begin{equation}
\tilde{\psi}(\rho,\theta) = \rho^2\sin^2\theta \left(A + \frac{B}{\rho^3}\right).
\end{equation}
\medskip
After applying the matching pressure boundary condition given by the first equation in (\ref{eq:matching3}) one obtains the transcendental equation between $\lambda$ and $\gamma$
\begin{equation}\label{eq:trans1}
C\lambda^3R\cos(R\lambda) - C\lambda^2\sin(R\lambda) + R^3\gamma = 0.
\end{equation}
\medskip
Using the third boundary condition in (\ref{eq:matching3}) one obtains
\begin{equation}\label{eq:cond5}
A = -\frac{B}{R^3}
\end{equation}
giving the outside solution as
\begin{equation}
\tilde{\psi}(\rho,\theta) = B\rho^2\sin^2\theta \left(\frac{1}{\rho^3} - \frac{1}{R^3}\right).
\end{equation}
\medskip
Lastly, the final boundary condition in (\ref{eq:matching3}) allows one to solve for $B$ in terms of the other constants, giving
\begin{equation}\label{eq:trans2}
B = \frac{CR\lambda^3\cos(R\lambda)+ C\lambda^4 R^2 \sin(R\lambda) - C\lambda^2\sin(R\lambda) - 2\gamma R^3}{3\lambda^2}.
\end{equation}
The three conditions on the constants given by (\ref{eq:trans1}), (\ref{eq:cond5}) and (\ref{eq:trans2}) give $\psi(\rho,\theta)$ in the whole space as
\begin{equation}\label{eq:general_hills}
\psi(\rho,\theta) =
\begin{cases}
\left(C\frac{\rho\lambda\cos(\rho\lambda) - \sin(\rho\lambda)}{\rho} + \frac{\gamma}{\lambda^2}\rho^2\right)\sin^2\theta, & \rho < R\\
\frac{CR\lambda^3\cos(R\lambda)+ C\lambda^4 R^2 \sin(R\lambda) - C\lambda^2\sin(R\lambda) - 2\gamma R^3}{3\lambda^2} \rho^2\sin^2\theta\left(\frac{1}{\rho^3} - \frac{1}{R^3}\right). & \rho > R
\end{cases}.
\end{equation}
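As an illustrative numerical sketch (assuming NumPy is available; the parameter values below are arbitrary), the matching across $\rho = R$ implied by (\ref{eq:trans1}) and (\ref{eq:trans2}) can be checked directly:
\begin{verbatim}
# Check that the inside and outside pieces of the generalized
# Hill vortex match at rho = R (arbitrary parameters).
import numpy as np

R, lam, C = 1.0, 2.0, 1.0
# gamma from the pressure-matching condition (eq:trans1)
gamma = -C*lam**2*(R*lam*np.cos(R*lam) - np.sin(R*lam))/R**3
# B from the velocity-matching condition (eq:trans2)
B = (C*R*lam**3*np.cos(R*lam) + C*lam**4*R**2*np.sin(R*lam)
     - C*lam**2*np.sin(R*lam) - 2*gamma*R**3)/(3*lam**2)

def f_in(rho):   # psi_inside / sin^2(theta)
    return C*(rho*lam*np.cos(rho*lam) - np.sin(rho*lam))/rho \
           + gamma*rho**2/lam**2

def f_out(rho):  # psi_outside / sin^2(theta)
    return B*rho**2*(1.0/rho**3 - 1.0/R**3)

h = 1.0e-6
print(f_in(R), f_out(R))                    # both vanish at the boundary
print((f_in(R+h) - f_in(R-h))/(2*h),        # radial derivatives agree
      (f_out(R+h) - f_out(R-h))/(2*h))
\end{verbatim}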
This solution (\ref{eq:general_hills}) of the spherical Grad-Shafranov equations (\ref{eq:11}) and (\ref{eq:22}) is a more general version of Hill's spherical vortex as:
\begin{itemize}
\item The $\phi$ component of the velocity is non-zero inside of the sphere, whereas Hill's original vortex solution had $V^{\phi} = 0$.
\item There is freedom of choice in three constants, $C$, ($\lambda$ or $\gamma$) and $R$, whereas Hill's original solution only has the freedom of choosing $R$ and one constant $\delta$.
\end{itemize}
The asymptotics of the velocity field outside of the sphere are suitable, since the outside solution has the same form as that of Hill's spherical vortex, given in cylindrical coordinates by (\ref{eq:vel1}) and (\ref{eq:vel2}), whose asymptotics are discussed in \cite{hill1894vi}. One interesting remark is that if the field outside of the sphere is required to vanish, which corresponds to setting the coefficient $B$ of the outside solution in (\ref{eq:general_hills}) to zero, then this problem reduces to the problem of the previous section, and the equations (\ref{eq:trans1}) and (\ref{eq:trans2}) reduce to the transcendental equations given by (\ref{eq:Bobtrans}) and (\ref{eq:Bobgamma}), as they should. This requirement is briefly discussed in \cite{kaiser1995ball}, where it is needed for the proper asymptotics of the magnetic field.
\section{Spherical separation of variables for Grad-Shafranov equation}
In Section 3, a separated solution in spherical coordinates of the Grad-Shafranov equation (\ref{eq:GS_spherical1}) was obtained so as to satisfy boundary conditions corresponding to a spherical vortex. During this, the behaviour of $\Theta_l(\theta)$ given by (\ref{eq:thetasol}) was restricted to $l = 1$ to satisfy the boundary conditions. In this section, a fully separated solution is considered in its own right.
\medskip
Following the first part of Section 3 up until (\ref{eq:theta_ode2}), consider the linear Grad-Shafranov equation in spherical coordinates,
\begin{equation}\label{eq:GS_spherical}
\left[ \frac{\partial^2}{\partial \rho^2} + \frac{\sin \theta}{\rho^2} \frac{ \partial}{\partial \theta} \left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right) + \lambda^2\right] \psi = \gamma \rho^2 \sin^2{\theta}.
\end{equation}
This corresponds to the free functions from Section 3, $I(\psi) = \lambda \psi$ and $P(\psi) = P_0 - \gamma \psi$. A solution in the form
$\psi(\rho,\theta) = \psi(\rho,\theta)_{gen} + \psi(\rho,\theta)_{part}$ is sought, with $\psi(\rho,\theta)_{part} = \frac{\gamma \rho^2 \sin^2\theta}{\lambda^2}$. A separated solution of the homogeneous version of (\ref{eq:GS_spherical}) is sought in the form $\psi(\rho,\theta) = R(\rho)\Theta(\theta)$.
\medskip
The homogeneous version of equation (\ref{eq:GS_spherical}) then reduces to the two ODEs
\begin{equation}\label{eq:rho_ode}
\rho^2 R''(\rho) + (\lambda^2\rho^2 - \mathcal{C})R(\rho) = 0,
\end{equation}
\begin{equation}\label{eq:theta_ode}
\Theta''(\theta) - \frac{\cos\theta}{\sin\theta}\Theta'(\theta) + \mathcal{C}\Theta(\theta) = 0,
\end{equation}
where $\mathcal{C}$ is a separation constant to be determined.
\medskip
As in Section 2, the separation constant is found to be $\mathcal{C} = l(l+1)$ for $l \in \mathbb{N}$, with a solution to (\ref{eq:theta_ode}) given by
\begin{equation}\label{eq:theta11}
\Theta_l(\theta) = (l+1)\mathcal{P}_{l+1}(\cos\theta) - (l+1)\cos\theta \ \mathcal{P}_l(\cos\theta).
\end{equation}
The value $\mathcal{C} = l(l+1)$ can now be substituted into (\ref{eq:rho_ode}) giving
\begin{equation}
\rho^2 R''(\rho) + (\lambda^2\rho^2 - l(l+1))R(\rho) = 0.
\end{equation}
This has a solution in terms of the Bessel function of the first kind
\begin{equation}
R_l(\rho) = \sqrt{\rho}\mathcal{J}\left(\frac{2l+1}{2},\rho\lambda\right).
\end{equation}
So a separated solution to the homogenous version of (\ref{eq:GS_spherical}) is given by
\begin{equation}\label{eq:Sep_sphere}
\psi_l(\rho,\theta) = \sqrt{\rho}\mathcal{J}\left(\frac{2l+1}{2},\rho\lambda\right)\Big((l+1)\mathcal{P}_{l+1}(\cos\theta) - (l+1)\cos\theta\mathcal{P}_l(\cos\theta)\Big).
\end{equation}
\medskip
As equation (\ref{eq:GS_spherical}) is linear, any linear combination of the separated solution (\ref{eq:Sep_sphere}) with the addition of the particular solution will also be a solution. This can be written in a general way as
\begin{equation}\label{eq:GeneralSphere}
\Psi(\rho,\theta) = \frac{\gamma \rho^2 \sin^2\theta}{\lambda^2} + \sum_{l=0}^n a_l \sqrt{\rho}\mathcal{J}\left(\frac{2l+1}{2},\rho\lambda\right)\Theta_l(\theta).
\end{equation}
where $\Theta_l(\theta)$ is given by (\ref{eq:theta11}). Clearly this solution is no longer related to the spherical vortex, but it is an MHD equilibrium solution which can be considered in its own right. A pressure profile $P = P_0 - \gamma \psi$ with $\psi$ given by (\ref{eq:GeneralSphere}) can be seen in Figure \ref{fig:general_spherical}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width = .7\textwidth]{General_bobnevpressure.pdf}
\end{center}
\caption{A cross-section of magnetic surfaces where the magnetic surfaces are shown by $P(\psi) = \hbox{\rm const}$ for $P = P_0 - \gamma \psi$ where $\psi$ is given by (\ref{eq:GeneralSphere}). Here $\gamma = 1$, $\lambda = 1$, $n = 5$, $a_l = 1$, $l = 1,2,3,4,5$. Any toroidal surface can be considered a truncated solution with the outer surface described by a current sheet.}\label{fig:general_spherical}
\end{figure}
\section{Stability considerations for the spherical vortex}
In this section, the stability of the spherical vortex solutions described in the previous sections will be analyzed. These include Hill's vortex solution from Section 2 given by (\ref{eq:sol1}), the MHD spherical vortex solution of Section 3 given by (\ref{eq:Bobnev}), and the generalized Hill's vortex from Section 4 given by (\ref{eq:general_hills}). In the first part, an axially-symmetric perturbation of Hill's spherical vortex on the sphere, following a method described in \cite{pozrikidis1986nonlinear}, is performed with the goal of observing modes that grow exponentially in time, from which the instability of the solution is concluded. In the next subsections, a similar perturbation is attempted but is shown to not be possible; a generalized perturbation is then performed with the goal of observing modes that grow exponentially in time.
\subsection{Axisymmetric perturbation of Hill's vortex}
The solution of Hill's spherical vortex at the surface of the sphere $\rho = R$ is considered. The starting point is the dynamic equation for $\psi$ found in Hill's paper \cite{hill1894vi},
\begin{equation}\label{eq:dynamic_psi}
\left( \frac{\partial}{\partial t} + \frac{1}{r}\frac{\partial \psi}{\partial z} \frac{\partial}{\partial r} - \frac{1}{r}\frac{\partial \psi}{\partial r} \frac{\partial}{\partial z}\right)\left[\frac{1}{r^2}\left(\frac{\partial^2 \psi}{\partial z^2} + \frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r}\right)\right] = 0.
\end{equation}
The inside solution given by (\ref{eq:sol1}) is perturbed using
\begin{equation}\label{eq:ppert}
\rho \mapsto \rho(1 + \epsilon h(\theta,t))
\end{equation}
giving
\begin{equation}\label{eq:pert1}
\psi(\rho,\theta) = \delta \rho^2(1 + \epsilon h(\theta,t))^2\sin^2\theta(\rho^2(1 + \epsilon h(\theta,t))^2 - R^2).
\end{equation}
This perturbed solution is now substituted into the spherical version of the dynamic $\psi$ equation (\ref{eq:dynamic_psi}). After this, the substitution $\rho = R$ is made and, discarding terms beyond first order in $\epsilon$, the following third-order PDE for $h(\theta,t)$ is obtained
\begin{equation}
2R\delta\sin\theta\frac{\partial^3 h}{\partial \theta^3} + \frac{\partial^3 h}{\partial t \partial \theta^2} + 6R\delta\cos\theta \frac{\partial^2 h}{\partial \theta^2} + 3\frac{\cos\theta}{\sin\theta} \frac{\partial^2 h}{\partial t \partial \theta} - \frac{40R\delta}{\sin\theta} \left(\cos^2\theta - \frac{17}{20}\right)\frac{\partial h}{\partial \theta} + 20\frac{\partial h}{\partial t} = 0.
\end{equation}
This linear homogeneous equation is separable: one can seek its solutions as $h(\theta,t) = \Theta(\theta)T(t)$ where $\Theta(\theta)$ and $T(t)$ satisfy
\begin{equation}\label{eq:Theta}
\frac{d^3 \Theta}{d \theta^3} = -3\left(\frac{\cos\theta}{\sin\theta} + \frac{\lambda}{6R\delta\sin\theta}\right)\frac{d^2\Theta}{d \theta^2} + \left(20\frac{\cos^2\theta}{\sin^2\theta} - 3\frac{\cos\theta}{2R\delta\sin^2\theta}\lambda - \frac{17}{\sin^2\theta}\right)\frac{d\Theta}{d\theta} - 10\frac{\lambda}{R\delta\sin\theta}\Theta,
\end{equation}
\begin{equation}
\frac{dT}{dt} = \lambda T.
\end{equation}
The $T$ equation above has the exponential solution $T(t) = Ae^{\lambda t}$. The $\Theta$ equation (\ref{eq:Theta}) can be converted into a simpler equation through the transformation $z = \cos\theta$, with $\Theta(\theta) = Z(z)$. Writing $K_2 = \lambda/(4R\delta)$, this gives
\begin{equation}\label{Degen_ode}
(1 - z^2)\frac{d^3 Z}{dz^3} - \left(2K_2 + 6z\right)\frac{d^2 Z}{dz^2} + 8\left(2 + \frac{K_2z}{1-z^2}\right)\frac{dZ}{dz} - \frac{40K_2}{1-z^2}Z = 0.
\end{equation}
Solutions to (\ref{Degen_ode}) can be expressed as linear combinations of the following functions, written in terms of hypergeometric functions:
\begin{subequations}\label{eq:Zss}
\begin{equation}\label{eq:Z1}
Z_1 = \mathcal{H}\left(\left[\frac{3}{4} + \frac{\sqrt{89}}{4} , \frac{3}{4} - \frac{\sqrt{89}}{4}\right],\frac{1}{2},z^2\right),
\end{equation}
\begin{equation}
Z_2 = z\mathcal{H}\left(\left[\frac{5}{4} + \frac{\sqrt{89}}{4} , \frac{5}{4} - \frac{\sqrt{89}}{4}\right],\frac{3}{2},z^2\right),
\end{equation}
\begin{equation}
Z_3 = -Z_1\int_{z_0}^{z} z(z + 1)^{1 + \frac{\lambda}{4R\delta}}(z - 1)^{1 -\frac{\lambda}{4R\delta}} Z_2 dz + Z_2\int_{z_0}^{z} (z + 1)^{1 + \frac{\lambda}{4R\delta}}(z - 1)^{1 -\frac{\lambda}{4R\delta}} Z_1 dz.
\end{equation}
\end{subequations}
Here $z_0$ is any constant such that $z_0 < z$. One should notice that neither the first nor the second solution in (\ref{eq:Zss}) depends on the separation constant $\lambda$. This is because the left-hand side of (\ref{Degen_ode}) can be written as
\begin{equation}
\left(\frac{d}{dz} - \frac{2K_2}{1-z^2}\right) \mathcal{G},
\end{equation}
where $\mathcal{G}$ is the left-hand side of the second-order equation
\begin{equation}\label{Degen_ode_aux}
\mathcal{G} \equiv \left(1-z^2\right)\frac{d^2 Z}{dz^2} - 4z\frac{d Z}{dz} + 20 Z = 0.
\end{equation}
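This factorization can be checked directly: differentiating $\mathcal{G}$ and subtracting $\tfrac{2K_2}{1-z^2}\mathcal{G}$ gives
\begin{equation}
\frac{d\mathcal{G}}{dz} - \frac{2K_2}{1-z^2}\,\mathcal{G} = (1 - z^2)\frac{d^3 Z}{dz^3} - \left(2K_2 + 6z\right)\frac{d^2 Z}{dz^2} + 8\left(2 + \frac{K_2z}{1-z^2}\right)\frac{dZ}{dz} - \frac{40K_2}{1-z^2}Z,
\end{equation}
which is precisely the left-hand side of (\ref{Degen_ode}); hence every solution of (\ref{Degen_ode_aux}) also solves (\ref{Degen_ode}), regardless of the value of $K_2$ and therefore of $\lambda$.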
Here (\ref{Degen_ode_aux}) has the general solution
\begin{equation}
Z = C_1 Z_1 + C_2 Z_2
\end{equation}
where $Z_1$ and $Z_2$ are given in (\ref{eq:Zss}).
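For completeness, the origin of the parameters in (\ref{eq:Z1}) can be seen by substituting $u = z^2$ into (\ref{Degen_ode_aux}), which yields the hypergeometric equation
\begin{equation}
u(1-u)\frac{d^2 Z}{du^2} + \left(\frac{1}{2} - \frac{5}{2}u\right)\frac{dZ}{du} + 5Z = 0,
\end{equation}
so that $a + b = \tfrac{3}{2}$, $ab = -5$ and $c = \tfrac{1}{2}$, giving $a, b = \tfrac{3}{4} \pm \tfrac{\sqrt{89}}{4}$ in agreement with $Z_1$; the odd companion solution $z\,\mathcal{H}\left(\left[a + \tfrac{1}{2}, b + \tfrac{1}{2}\right],\tfrac{3}{2},z^2\right)$ reproduces $Z_2$.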
As $\lambda$ does not appear in $Z_1$ and $Z_2$, there exist perturbations $h(\theta,t)$ which grow exponentially in time, since $\lambda$ can be positive. However, one must check that such growing $h(\theta,t)$ correspond to regular surfaces. One such $h(\theta,t)$ that gives regular surfaces uses $Z_1$ given by (\ref{eq:Z1}); this gives $h(\theta,t)$ as
\begin{equation}
h(\theta,t) = Ae^{\lambda t}\mathcal{H}\left(\left[\frac{3}{4} + \frac{\sqrt{89}}{4} , \frac{3}{4} - \frac{\sqrt{89}}{4}\right],\frac{1}{2},\cos^2\theta\right).
\end{equation}
This is now substituted into (\ref{eq:pert1}). After expanding out and converting back to cylindrical coordinates, one arrives at
\begin{equation}\label{eq:psipert2}
\psi(r,z,t) = -2\delta \left(Ae^{\lambda t}\epsilon^2(R^2 - 2r^2 - 2z^2)\mathcal{H}\left(\left[\frac{3}{4} + \frac{\sqrt{89}}{4} , \frac{3}{4} - \frac{\sqrt{89}}{4}\right],\frac{1}{2},\frac{r^2}{r^2 + z^2}\right) + \frac{R^2 - r^2 - z^2}{2}\right).
\end{equation}
The level set $\psi = 0$ corresponds to the boundary of the (perturbed) sphere. Several plots of the evolution of this surface are shown in Figure \ref{fig:goodpert}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width = .45\textwidth]{hillperturb2.pdf}
\end{center}
\caption{The evolution of the perturbed surface given by (\ref{eq:psipert2}) is shown for $\epsilon = 0.0001$, $\delta = 1$, $R = 1$, $A = 1$, $\lambda = 1$, at several different times $0<t<16$. These surfaces are regular.}\label{fig:goodpert}
\end{figure}
Despite the irregular appearance of this surface at some instants of time, the implicit derivative ${dz}/{dr}$ along the level set $\psi(r,z,t) = 0$ of (\ref{eq:psipert2}) can be shown to vanish at the apparently irregular points on the axis $r = 0$, so the surfaces remain regular there.
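This can be verified as follows: along the level set $\psi(r,z,t) = 0$,
\begin{equation}
\frac{dz}{dr} = -\frac{\partial \psi/\partial r}{\partial \psi/\partial z},
\end{equation}
and since $\psi$ in (\ref{eq:psipert2}) depends on $r$ only through $r^2$, the derivative $\partial\psi/\partial r$ is proportional to $r$ and vanishes on the axis; hence $dz/dr = 0$ at $r = 0$ wherever $\partial\psi/\partial z \neq 0$.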
\medskip
The above analysis leads to the conclusion that Hill's spherical vortex is in general not linearly stable with respect to the surface perturbations described by (\ref{eq:pert1}). The stability of Hill's vortex has previously been studied numerically in \cite{pozrikidis1986nonlinear}; however, no details such as mathematical formulas, the numerical method used, or initial/boundary conditions were presented there, and we were not able to reproduce the results of \cite{pozrikidis1986nonlinear}.
\subsection{An axisymmetric perturbation of generalized Hill's spherical vortex and MHD vortex}
The goal here is to use a similar axisymmetric perturbation, following the method in \cite{pozrikidis1986nonlinear}, to study the stability of the generalized Hill's spherical vortex solution (\ref{eq:general_hills}) and of the MHD spherical vortex in an ideally conducting fluid given by (\ref{eq:Bobnev}). The dynamic equation for $\psi$ (\ref{eq:dynamic_psi}), taken from \cite{hill1894vi} and valid for $V^{\phi} = 0$, was used above to study the time evolution of $\psi$ under the perturbation (\ref{eq:ppert}). For a similar analysis of the two other solutions, a dynamic equation for $\psi$ therefore needs to be derived from the time-dependent, axially symmetric Euler equations with $V^{\phi} = I(\psi)/r$ (which is the form of $V^{\phi}$ in both (\ref{eq:Bobnev}) and (\ref{eq:general_hills})).
\subsubsection{Deriving axially symmetric dynamic $\psi$ equation with non-zero $V^{\phi}$}
Starting with the dynamic Euler equations (\ref{eq:Euler3}) in cylindrical coordinates and imposing axial invariance, one arrives at the system
\begin{subequations}\label{eq:Axial_Euler}
\begin{equation}
V^r_t + rV^z(V^r_z -V^z_r) - V^\phi(rV^\phi)_r = rH_r,
\end{equation}
\begin{equation}\label{phi_equation}
V^{\phi}_t + V^z(rV^\phi)_z + V^r(rV^\phi)_r = 0,
\end{equation}
\begin{equation}
V^z_t + V^r(V^z_r - V^r_z) - V^\phi V^\phi_z = H_z,
\end{equation}
\begin{equation}
(rV^z)_z + (rV^r)_r = 0,
\end{equation}
\end{subequations}
where superscripts denote vector components and subscripts denote partial differentiation. The last equation, by the Poincar\'e lemma, implies the local existence of a potential $\psi(r,z,t)$ such that
\begin{equation}
V^r = \frac{\psi_z}{r}, \quad
V^z = -\frac{\psi_r}{r}.
\end{equation}
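With these expressions, the last equation of (\ref{eq:Axial_Euler}) is satisfied identically,
\begin{equation}
(rV^z)_z + (rV^r)_r = -\psi_{rz} + \psi_{zr} = 0.
\end{equation}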
\medskip
Upon substituting the above velocity components, together with the $\phi$ component of the velocity in the form $V^{\phi} = I(\psi)/r$ (as taken from both the MHD spherical vortex solution and the generalized Hill solution), into (\ref{phi_equation}), one obtains
\begin{equation}\label{phi_equation2}
I'(\psi)\frac{\partial \psi}{\partial t} = 0.
\end{equation}
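In detail, with $rV^{\phi} = I(\psi)$ one has $V^{\phi}_t = I'(\psi)\psi_t/r$, while the remaining terms of (\ref{phi_equation}) cancel:
\begin{equation}
V^z\,(rV^{\phi})_z + V^r\,(rV^{\phi})_r = -\frac{\psi_r}{r}\,I'(\psi)\,\psi_z + \frac{\psi_z}{r}\,I'(\psi)\,\psi_r = 0,
\end{equation}
so (\ref{phi_equation}) collapses to (\ref{phi_equation2}).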
This implies that for $V^{\phi} = I(\psi)/r$, either
\begin{enumerate}
\item
$\psi(r,z,t)$ is time independent, in which case (\ref{eq:Axial_Euler}) can be reduced to the Grad-Shafranov (Bragg-Hawthorne) equation (a common form of which is recalled below).
\item
$I(\psi)$ is constant with respect to $\psi$. For the case when $I(\psi) = 0$, equation (\ref{eq:dynamic_psi}) is recovered.
\end{enumerate}
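For reference, in a common convention (the precise signs depend on those adopted in (\ref{eq:Euler3})) the steady-state reduction mentioned in the first case takes the Bragg-Hawthorne (Grad-Shafranov) form
\begin{equation}
\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} = r^2 \frac{dH}{d\psi} - I(\psi)\frac{dI}{d\psi},
\end{equation}
with $H = H(\psi)$ the Bernoulli function and $I(\psi) = rV^{\phi}$.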
Therefore, either $\psi$ is time independent or $V^{\phi} = I(\psi)/r$ is not the correct form of $V^{\phi}$; consequently, no dynamic $\psi$ equation with $V^{\phi} = I(\psi)/r$ can exist. If $V^{\phi}$ is an arbitrary function of $r$, $z$ and $t$, dynamic equations for $\psi$ were derived, with the use of Poisson brackets, in \cite{bogoyavlenskij2010restricted}; however, these are of no use for studying the case $V^{\phi} = I(\psi)/r$. Therefore, the time evolution of $\psi$ cannot be described by a single equation, and a more general type of perturbation analysis needs to be considered.
\subsection{A general linear perturbation for the generalized Hill's spherical vortex}
In this section, no solutions to the general linear perturbation equations could be found; nevertheless, the methodology is presented to show how the perturbed linear systems can be derived.
\medskip
In order to study the stability of the solution given by (\ref{eq:general_hills}), a linear perturbation of the dependent variables is considered. The $r$ and $z$ components of $\vec{V}$ are expressed through the stream function $\psi$ as
\begin{equation}\label{vel_comp}
\vec{v} = \frac{\psi_z}{r}\, \vec{e}_r - \frac{\psi_r}{r}\, \vec{e}_z,
\end{equation}
so, with the remaining dependent variables being the pressure function $H$ and the $\phi$ component of the velocity (represented by $F = rV^{\phi}$), there are only three dependent variables instead of the usual four. These three quantities are perturbed as follows:
\begin{subequations}\label{eq:Euler_perturbation}
\begin{equation}
\psi(r,z,t) = \psi_0(r,z) + \epsilon\psi_1(r,z,t),
\end{equation}
\begin{equation}
F(r,z,t) = F_0 (\psi_0) + \epsilon F_1(r,z,t),
\end{equation}
\begin{equation}
H(r,z,t) = H_0(\psi_0) + \epsilon H_1(r,z,t),
\end{equation}
\end{subequations}
where $\psi_0$ is the static solution given by (\ref{eq:general_hills}), $F_0(\psi_0) = \lambda \psi_0$, and $H_0(\psi_0) = H_0 - \gamma \psi_0$. Substituting these into the axially invariant Euler equations and discarding terms of order $\epsilon^2$ and higher, one obtains a closed linear system for the three unknown functions $\psi_1(r,z,t)$, $F_1(r,z,t)$, and $H_1(r,z,t)$. The goal now is to determine whether any solutions of this linear system have time dependence that grows unbounded. The system, though linear, is very large and complex (so much so that it is not reproduced here), and no meaningful nontrivial solutions of this variable-coefficient linear system could be found.
\subsection{General perturbation for an MHD spherical vortex}
Similarly to the above, one can consider the perturbation of the MHD spherical vortex with the solution given by (\ref{eq:Bobnev}). The main difference from the previous section is that the magnetic field components are perturbed as well as the velocity field components. For the static equilibrium MHD equations, $\mathop{\hbox{\rm div}}{\vec{B}} = 0$ gives the conditions $B^r = \psi_z/r$ and $B^z = -\psi_r/r$, and similarly $\mathop{\hbox{\rm div}}{\vec{V}} = 0$ gives $V^r = \xi_z/r$ and $V^z = -\xi_r/r$. This gives five dependent variables $\psi(r,z,t)$, $\xi(r,z,t)$, $I(r,z,t)$, $F(r,z,t)$ and $P(r,z,t)$, instead of the usual six. These quantities are perturbed and written as
\begin{subequations}
\begin{equation}
\psi(r,z,t) = \psi_0(r,z) + \epsilon\psi_1(r,z,t),
\end{equation}
\begin{equation}
I(r,z,t) = I_0 (\psi_0) + \epsilon I_1(r,z,t),
\end{equation}
\begin{equation}
P(r,z,t) = P_0(\psi_0) + \epsilon P_1(r,z,t),
\end{equation}
\begin{equation}
\xi(r,z,t) = 0 + \epsilon\xi_1(r,z,t),
\end{equation}
\begin{equation}
F(r,z,t) = 0 + \epsilon F_1(r,z,t),
\end{equation}
\end{subequations}
where $\psi_0$ is given by (\ref{eq:Bobnev}). Here $B^{\phi} = I(r,z,t)/r$ and $V^{\phi} = F(r,z,t)/r$.
Substituting these into the axially symmetric MHD equations gives an overdetermined system of six equations for the five unknowns $\psi_1(r,z,t)$, $I_1(r,z,t)$, $P_1(r,z,t)$, $\xi_1(r,z,t)$, and $F_1(r,z,t)$. As above, no solutions of this variable-coefficient linear system could be found.
\subsubsection*{Acknowledgements}
The authors are grateful to NSERC of Canada for the financial support.
{\footnotesize